Reith Lectures: AI and why people should be scared

Rory Cellan-Jones
Technology correspondent
@BBCRoryCJ on Twitter

[Image caption: The Reith Lectures are in Newcastle, Manchester, Edinburgh and London]

Prof Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, is giving this year’s Reith Lectures.

His four lectures, Living With Artificial Intelligence, address the existential threat from machines more powerful than humans – and offer a way forward.

Last month, he spoke to then BBC News technology correspondent Rory Cellan-Jones about what to expect.

How have you shaped the lectures?

The first drafts that I sent them were much too pointy-headed, much too focused on the intellectual roots of AI and the various definitions of rationality and how they emerged over history and things like that.

So I readjusted – and we have one lecture that introduces AI and the future prospects both good and bad.

And then, we talk about weapons and we talk about jobs.

And then, the fourth one will be: “OK, here’s how we avoid losing control over AI systems in the future.”

Do you have a formula, a definition, for what artificial intelligence is?

Yes, it’s machines that perceive and act and hopefully choose actions that will achieve their objectives.

All these other things that you read about, like deep learning and so on, they’re all just special cases of that.
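As a rough sketch of that definition (illustrative Python only – the function names and the environment object here are invented for this example, not any real library’s API), an AI system is just a loop that perceives, chooses an action in pursuit of its objective, and acts:

```python
# Illustrative sketch of "machines that perceive and act": the generic loop
# that deep learning and the rest specialise. All names here are invented.

def run_agent(environment, choose_action, objective, steps=100):
    for _ in range(steps):
        percept = environment.observe()             # perceive the world
        action = choose_action(percept, objective)  # pick an action expected to further the objective
        environment.apply(action)                   # act on the world
```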

But could a dishwasher not fit into that definition?

[Image caption: Increasingly, home appliances have a degree of intelligence (Getty Images)]

Thermostats perceive and act and, in a sense, they have one little rule that says: “If the temperature is below this, turn on the heat.

“If the temperature is above this, turn off the heat.”

So that’s a trivial program and it’s a program that was completely written by a person, so there was no learning involved.
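The whole program fits in a few lines – a hand-coded sketch in illustrative Python, exactly as described, with no learning anywhere:

```python
# The thermostat's entire "program", written by a person - no learning involved.

def thermostat_rule(temperature, threshold):
    if temperature < threshold:
        return "heat_on"   # "If the temperature is below this, turn on the heat."
    return "heat_off"      # "If the temperature is above this, turn off the heat."

print(thermostat_rule(17.5, threshold=20.0))  # -> heat_on
```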

All the way up at the other end, you have self-driving cars, where the decision-making is much more complicated and where a lot of learning was involved in achieving that quality of decision-making.

But there’s no hard-and-fast line.

We can’t say anything below this doesn’t count as AI and anything above this does count.

And is it fair to say there have been great advances in the past decade in particular?

In object recognition, for example, which was one of the things we’ve been trying to do since the 1960s, we’ve gone from completely pathetic to superhuman, according to some measures.

And in machine translation, again we’ve gone from completely pathetic to really pretty good.

So what is the destination for AI?

[Image caption: Robots are increasingly being used as a teaching resource in schools – but will they one day build one? (Getty Images)]

If you look at what the founders of the field said, their goal was general-purpose AI – meaning not a program that’s really good at playing Go, or a program that’s really good at machine translation, but something that can do pretty much anything a human could do, and probably a lot more besides, because machines have huge bandwidth and memory advantages over humans.

Just say we need a new school.

The robots would show up.

The robot trucks, the construction robots and the construction management software would know how to build it, how to get permits, and how to talk to the school district and the principal to figure out the right design for the school, and so on and so forth – and a week later, you have a school.

And where are we in terms of that journey?

I’d say we’re a fair bit of the way.

Clearly, there are some major breakthroughs that still have to happen.

And I think the biggest one is around complex decision-making.

So if you think about the example of building a school – we start from the goal that we want a school, then all the conversations happen, then all the construction happens. How do humans do that?

Well, humans have an ability to think at multiple scales of abstraction.

So we might say: “OK, well the first thing we need to figure out is where we’re going to put it. And how big should it be?”

We don’t start by thinking about whether to move the left finger first or the right foot first – we focus on the high-level decisions that need to be made.
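One toy way to picture that multi-scale reasoning is a hand-written hierarchy of tasks expanded from the top down (an illustrative Python sketch – the decomposition here is invented, and a real system would have to derive it rather than read it from a table):

```python
# Toy illustration of thinking at multiple scales of abstraction.
# Tasks with no entry in the table are left abstract - we never descend
# to the level of individual finger or foot movements.

REFINEMENTS = {
    "build a school": ["choose site and size", "design the building",
                       "obtain permits", "construct"],
    "construct": ["lay foundation", "erect structure", "fit out interior"],
}

def expand(task, depth=0):
    """Print a task, then recursively expand it into finer-grained subtasks."""
    print("  " * depth + task)
    for subtask in REFINEMENTS.get(task, []):
        expand(subtask, depth + 1)

expand("build a school")
```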

You’ve painted a picture showing AI has made quite a lot of progress – but perhaps not as much as some think. Are we at a point, though, of extreme danger?

There are two arguments as to why we should pay attention.

One is that even though our algorithms right now are nowhere close to general human capabilities, when you have billions of them running they can still have a very big effect on the world.

The other reason to worry is that it’s entirely plausible – and most experts think very likely – that we will have general-purpose AI within either our lifetimes or in the lifetimes of our children.

I think if general-purpose AI is created in the current context of superpower rivalry – you know, whoever rules AI rules the world, that kind of mentality – then I think the outcomes could be the worst possible.

Your second lecture is about military use of AI and the dangers there. Why does that deserve a whole lecture?

[Image caption: The military is already experimenting with AI and robots on the battlefield (Getty Images)]

Because I think it’s really important and really urgent.

And the reason it’s urgent is because the weapons that we have been talking about for the last six or seven years are now starting to be manufactured and sold.

So in 2017, for example, we produced a movie called Slaughterbots about a small quadcopter about 3in [8cm] in diameter that carries an explosive charge and can kill people by getting close enough to them to blow up.

We showed this first at diplomatic meetings in Geneva and I remember the Russian ambassador basically sneering and sniffing and saying: “Well, you know, this is just science fiction, we don’t have to worry about these things for 25 or 30 years.”

I explained what my robotics colleagues had said, which is that no, they could put a weapon like this together in a few months with a few graduate students.

And the following month – just three weeks later – the Turkish manufacturer STM [Savunma Teknolojileri Mühendislik ve Ticaret AŞ] announced the Kargu drone, which is basically a slightly larger version of the Slaughterbot.

What are you hoping for in terms of the reaction to these lectures – that people will come away scared, inspired, determined to see a path forward with this technology?

All of the above. I think a little bit of fear is appropriate – not fear when you get up tomorrow morning and think your laptop is going to murder you, but fear about the longer-term future. I would say the same kind of fear we have about the climate – or, rather, that we should have about the climate.

I think some people just say: “Well, it looks like a nice day today,” and they don’t think about the longer timescale or the broader picture.

And I think a little bit of fear is necessary, because that’s what makes you act now rather than acting when it’s too late, which is, in fact, what we have done with the climate.

The Reith Lectures will be on BBC Radio 4, BBC World Service and BBC Sounds.
