What moral decisions should driverless cars make?
Iyad Rahwan

Today I’m going to talk
about technology and society.

The Department of Transport
estimated that last year

35,000 people died
from traffic crashes in the US alone.

Worldwide, 1.2 million people
die every year in traffic accidents.

If there was a way we could eliminate
90 percent of those accidents,

would you support it?

Of course you would.

This is what driverless car technology
promises to achieve

by eliminating the main
source of accidents –

human error.

Now picture yourself
in a driverless car in the year 2030,

sitting back and watching
this vintage TEDxCambridge video.

(Laughter)

All of a sudden,

the car experiences mechanical failure
and is unable to stop.

If the car continues,

it will crash into a bunch
of pedestrians crossing the street,

but the car may swerve,

hitting one bystander,

killing them to save the pedestrians.

What should the car do,
and who should decide?

What if instead the car
could swerve into a wall,

crashing and killing you, the passenger,

in order to save those pedestrians?

This scenario is inspired
by the trolley problem,

which was invented
by philosophers a few decades ago

to think about ethics.

Now, the way we think
about this problem matters.

We may for example
not think about it at all.

We may say this scenario is unrealistic,

incredibly unlikely, or just silly.

But I think this criticism
misses the point

because it takes
the scenario too literally.

Of course no accident
is going to look like this;

no accident has two or three options

where everybody dies somehow.

Instead, the car is going
to calculate something

like the probability of hitting
a certain group of people.

If you swerve in one direction
versus another,

you might slightly increase the risk
to passengers or other drivers

relative to pedestrians.

It’s going to be
a more complex calculation,

but it’s still going
to involve trade-offs,

and trade-offs often require ethics.
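
To make that kind of trade-off concrete, here is a minimal Python sketch of an expected-harm comparison between maneuvers. The option names, probabilities, and harm weights are illustrative assumptions, not anything a real vehicle actually computes.

```python
# A minimal sketch of an expected-harm comparison between maneuvers.
# All probabilities and harm weights are illustrative assumptions.

options = {
    "stay_course": {"pedestrians": 0.8, "passenger": 0.05, "bystander": 0.0},
    "swerve_left": {"pedestrians": 0.1, "passenger": 0.10, "bystander": 0.6},
}

# Assumed harm weight per group (roughly, how many people are exposed).
harm_weights = {"pedestrians": 3, "passenger": 1, "bystander": 1}

def expected_harm(risks):
    """Sum of probability-of-impact times assumed harm for each group."""
    return sum(p * harm_weights[group] for group, p in risks.items())

for name, risks in options.items():
    print(f"{name}: expected harm = {expected_harm(risks):.2f}")

# Picking the minimum is already an ethical commitment: it trades
# risk to one group against risk to another.
best = min(options, key=lambda name: expected_harm(options[name]))
print("lowest expected harm:", best)
```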

We might say then,
“Well, let’s not worry about this.

Let’s wait until technology
is fully ready and 100 percent safe.”

Suppose that we can indeed
eliminate 90 percent of those accidents,

or even 99 percent in the next 10 years.

What if eliminating
the last one percent of accidents

requires 50 more years of research?

Should we not adopt the technology?

At 1.2 million deaths a year,

that’s 60 million people
dead in car accidents

if we maintain the current rate.

So the point is,

waiting for full safety is also a choice,

and it also involves trade-offs.

People online on social media
have been coming up with all sorts of ways

to not think about this problem.

One person suggested
the car should just swerve somehow

in between the passengers –

(Laughter)

and the bystander.

Of course if that’s what the car can do,
that’s what the car should do.

We’re interested in scenarios
in which this is not possible.

And my personal favorite
was a suggestion by a blogger

to have an eject button in the car
that you press –

(Laughter)

just before the car self-destructs.

(Laughter)

So if we acknowledge that cars
will have to make trade-offs on the road,

how do we think about those trade-offs,

and how do we decide?

Well, maybe we should run a survey
to find out what society wants,

because ultimately,

regulations and the law
are a reflection of societal values.

So this is what we did.

With my collaborators,

Jean-François Bonnefon and Azim Shariff,

we ran a survey

in which we presented people
with these types of scenarios.

We gave them two options
inspired by two philosophers:

Jeremy Bentham and Immanuel Kant.

Bentham says the car
should follow utilitarian ethics:

it should take the action
that will minimize total harm –

even if that action will kill a bystander

and even if that action
will kill the passenger.

Immanuel Kant says the car
should follow duty-bound principles,

like “Thou shalt not kill.”

So you should not take an action
that explicitly harms a human being,

and you should let the car take its course

even if that’s going to harm more people.
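
As a rough illustration of how differently the two rules behave, here is a small Python sketch that applies a utilitarian rule and a duty-bound rule to the same invented maneuvers. The scenario data are assumptions made up for the example.

```python
# Two decision rules applied to the same invented scenario.
# "expected_deaths" and "actively_harms" are illustrative assumptions.

maneuvers = [
    {"name": "stay_course", "expected_deaths": 3, "actively_harms": False},
    {"name": "swerve_to_bystander", "expected_deaths": 1, "actively_harms": True},
    {"name": "swerve_to_wall", "expected_deaths": 1, "actively_harms": True},
]

def bentham(options):
    """Utilitarian rule: take the action that minimizes total expected harm."""
    return min(options, key=lambda m: m["expected_deaths"])

def kant(options):
    """Duty-bound rule: never choose an action that actively harms someone;
    otherwise let the car take its course."""
    permitted = [m for m in options if not m["actively_harms"]]
    return permitted[0] if permitted else None

print("Bentham picks:", bentham(maneuvers)["name"])  # swerve_to_bystander
print("Kant picks:   ", kant(maneuvers)["name"])     # stay_course
```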

What do you think?

Bentham or Kant?

Here’s what we found.

Most people sided with Bentham.

So it seems that people
want cars to be utilitarian,

minimize total harm,

and that’s what we should all do.

Problem solved.

But there is a little catch.

When we asked people
whether they would purchase such cars,

they said, “Absolutely not.”

(Laughter)

They would like to buy cars
that protect them at all costs,

but they want everybody else
to buy cars that minimize harm.

(Laughter)

We’ve seen this problem before.

It’s called a social dilemma.

And to understand the social dilemma,

we have to go a little bit
back in history.

In the 1800s,

English economist William Forster Lloyd
published a pamphlet

which describes the following scenario.

You have a group of farmers –

English farmers –

who are sharing a common land
for their sheep to graze.

Now, if each farmer
brings a certain number of sheep –

let’s say three sheep –

the land will be rejuvenated,

the farmers are happy,

the sheep are happy,

everything is good.

Now, if one farmer brings one extra sheep,

that farmer will do slightly better,
and no one else will be harmed.

But if every farmer made
that individually rational decision,

the land would be overrun
and depleted

to the detriment of all the farmers,

and of course,
to the detriment of the sheep.
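
The logic of the dilemma can be sketched in a few lines of Python: each extra sheep improves its owner's payoff, but once every farmer adds one, the shared land degrades and everyone is worse off. The capacity and payoff numbers are invented purely to illustrate the structure.

```python
# A toy model of Lloyd's commons: payoff per sheep falls once the land is overgrazed.
# Capacity and payoff numbers are invented to illustrate the structure.

NUM_FARMERS = 5
CAPACITY = 16  # sheep the land supports; beyond this the grass degrades quickly

def payoff_per_sheep(total_sheep):
    """Full value up to capacity; each extra sheep past it degrades the land."""
    if total_sheep <= CAPACITY:
        return 1.0
    return max(0.0, 1.0 - 0.1 * (total_sheep - CAPACITY))

def farmer_payoff(own_sheep, total_sheep):
    return own_sheep * payoff_per_sheep(total_sheep)

# Everyone grazes 3 sheep: 15 sheep in total, each farmer earns 3.0.
print("all cooperate:", farmer_payoff(3, 3 * NUM_FARMERS))      # 3.0

# One farmer adds a sheep: 16 sheep, still within capacity,
# so that farmer earns 4.0 and nobody else is harmed.
print("lone defector:", farmer_payoff(4, 3 * NUM_FARMERS + 1))  # 4.0

# But if every farmer makes the same individually rational choice,
# 20 sheep overgraze the land and everyone earns less than before.
print("all defect:   ", farmer_payoff(4, 4 * NUM_FARMERS))      # 2.4
```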

We see this problem in many places:

in the difficulty of managing overfishing,

or in reducing carbon emissions
to mitigate climate change.

When it comes to the regulation
of driverless cars,

the common land now
is basically public safety –

that’s the common good –

and the farmers are the passengers

or the car owners who are choosing
to ride in those cars.

And by making the individually
rational choice

of prioritizing their own safety,

they may collectively be
diminishing the common good,

which is minimizing total harm.

It’s called the tragedy of the commons,

traditionally,

but I think in the case
of driverless cars,

the problem may be
a little bit more insidious

because there is not necessarily
an individual human being

making those decisions.

So car manufacturers
may simply program cars

that will maximize safety
for their clients,

and those cars may learn
automatically on their own

that doing so requires slightly
increasing risk for pedestrians.

So to use the sheep metaphor,

it’s like we now have electric sheep
that have a mind of their own.

(Laughter)

And they may go and graze
even if the farmer doesn’t know it.

So this is what we may call
the tragedy of the algorithmic commons,

and it offers new types of challenges.

Typically, traditionally,

we solve these types
of social dilemmas using regulation,

so either governments
or communities get together,

and they decide collectively
what kind of outcome they want

and what sort of constraints
on individual behavior

they need to implement.

And then using monitoring and enforcement,

they can make sure
that the public good is preserved.

So why don’t we just,

as regulators,

require that all cars minimize harm?

After all, this is
what people say they want.

And more importantly,

I can be sure that as an individual,

if I buy a car that may
sacrifice me in a very rare case,

I’m not the only sucker doing that

while everybody else
enjoys unconditional protection.

In our survey, we did ask people
whether they would support regulation

and here’s what we found.

First of all, people
said no to regulation;

and second, they said,

“Well if you regulate cars to do this
and to minimize total harm,

I will not buy those cars.”

So ironically,

by regulating cars to minimize harm,

we may actually end up with more harm

because people may not
opt into the safer technology

even if it’s much safer
than human drivers.

I don’t have the final
answer to this riddle,

but I think as a starting point,

we need society to come together

to decide what trade-offs
we are comfortable with

and to come up with ways
in which we can enforce those trade-offs.

As a starting point,
my brilliant students,

Edmond Awad and Sohan Dsouza,

built the Moral Machine website,

which generates random scenarios for you –

basically a bunch
of random dilemmas in a sequence

where you have to choose what
the car should do in a given scenario.

And we vary the ages and even
the species of the different victims.
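
For a sense of what generating random dilemmas might look like under the hood, here is a hypothetical Python sketch. The character types, attributes, and choice format are assumptions made for illustration; this is not the actual Moral Machine code.

```python
import random

# A hypothetical generator of Moral Machine-style dilemmas.
# Character types and attributes are illustrative assumptions only.

SPECIES = ["human", "dog", "cat"]
AGES = ["child", "adult", "elderly"]

def random_group(max_size=4):
    """A random group of potential victims on one side of the dilemma."""
    return [
        {"species": random.choice(SPECIES), "age": random.choice(AGES)}
        for _ in range(random.randint(1, max_size))
    ]

def random_dilemma():
    """One dilemma: the car must choose which group bears the harm."""
    return {
        "stay_course_victims": random_group(),
        "swerve_victims": random_group(),
    }

# Present a sequence of dilemmas and record the respondent's choices.
responses = []
for _ in range(3):
    dilemma = random_dilemma()
    choice = random.choice(["stay_course", "swerve"])  # stand-in for a real answer
    responses.append({"dilemma": dilemma, "choice": choice})

print(responses[0])
```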

So far we’ve collected
over five million decisions

by over one million people worldwide

from the website.

And this is helping us
form an early picture

of what trade-offs
people are comfortable with

and what matters to them –

even across cultures.

But more importantly,

doing this exercise
is helping people recognize

the difficulty of making those choices

and that the regulators
are tasked with impossible choices.

And maybe this will help us as a society
understand the kinds of trade-offs

that will be implemented
ultimately in regulation.

And indeed, I was very happy to hear

that the first set of regulations

that came from
the Department of Transport –

announced last week –

included a 15-point checklist
for all carmakers to provide,

and number 14 was ethical consideration –

how are you going to deal with that?

We also have people
reflect on their own decisions

by giving them summaries
of what they chose.

I’ll give you one example –

I’m just going to warn you
that this is not your typical example,

your typical user.

These are the most sacrificed and the most
saved characters for this person.

(Laughter)

Some of you may agree with him,

or her, we don’t know.

But this person also seems to slightly
prefer passengers over pedestrians

in their choices

and is very happy to punish jaywalking.

(Laughter)

So let’s wrap up.

We started with the question –
let’s call it the ethical dilemma –

of what the car should do
in a specific scenario:

swerve or stay?

But then we realized
that the problem was a different one.

It was the problem of how to get
society to agree on and enforce

the trade-offs they’re comfortable with.

It’s a social dilemma.

In the 1940s, Isaac Asimov
wrote his famous laws of robotics –

the three laws of robotics.

A robot may not harm a human being,

a robot may not disobey a human being,

and a robot may not allow
itself to come to harm –

in this order of importance.

But after 40 years or so

and after so many stories
pushing these laws to the limit,

Asimov introduced the zeroth law

which takes precedence above all,

and it’s that a robot
may not harm humanity as a whole.

I don’t know what this means
in the context of driverless cars

or any specific situation,

and I don’t know how we can implement it,

but I think that by recognizing

that the regulation of driverless cars
is not only a technological problem

but also a societal cooperation problem,

I hope that we can at least begin
to ask the right questions.

Thank you.

(Applause)
