The ethical dilemma of self-driving cars
Patrick Lin

This is a thought experiment.

Let’s say at some point
in the not-so-distant future,

you’re barreling down the highway
in your self-driving car,

and you find yourself boxed in
on all sides by other cars.

Suddenly, a large, heavy object
falls off the truck in front of you.

Your car can’t stop in time
to avoid the collision,

so it needs to make a decision:

go straight and hit the object,

swerve left into an SUV,

or swerve right into a motorcycle.

Should it prioritize your safety
by hitting the motorcycle,

minimize danger to others by not swerving,

even if it means hitting the large object
and sacrificing your life,

or take the middle ground
by hitting the SUV,

which has a high passenger safety rating?

So what should the self-driving car do?

If we were driving that boxed-in car
in manual mode,

whichever way we’d react
would be understood as just that,

a reaction,

not a deliberate decision.

It would be an instinctual panicked move
with no forethought or malice.

But if a programmer were to instruct
the car to make the same move,

given conditions it may
sense in the future,

well, that looks more
like premeditated homicide.
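To make that contrast concrete, here is a minimal, purely hypothetical sketch, not taken from the talk, of what a pre-programmed crash response might look like; every function name, condition, and value in it is an assumption for illustration only.

```python
# A purely illustrative sketch (not from the talk): what it means for a crash
# response to be decided in advance by code rather than by a panicked driver.
# All names and conditions here are hypothetical.

def choose_maneuver(can_stop: bool, left_clear: bool, right_clear: bool) -> str:
    """Return a pre-programmed response to an unavoidable obstacle ahead."""
    if can_stop:
        return "brake"          # no dilemma if stopping is possible
    if left_clear:
        return "swerve_left"    # rule written months or years before any crash
    if right_clear:
        return "swerve_right"
    return "stay_in_lane"       # hit the obstacle rather than another vehicle

# The same move a human might make in panic is here an explicit, auditable choice.
print(choose_maneuver(can_stop=False, left_clear=False, right_clear=True))
```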

Now, to be fair,

self-driving cars are predicted
to dramatically reduce traffic accidents

and fatalities

by removing human error
from the driving equation.

Plus, there may be all sorts
of other benefits:

eased road congestion,

decreased harmful emissions,

and minimized unproductive
and stressful driving time.

But accidents can and will still happen,

and when they do,

their outcomes may be determined
months or years in advance

by programmers or policy makers.

And they’ll have
some difficult decisions to make.

It’s tempting to offer up general
decision-making principles,

like minimize harm,

but even that quickly leads
to morally murky decisions.

For example,

let’s say we have the same initial set up,

but now there’s a motorcyclist
wearing a helmet to your left

and another one without
a helmet to your right.

Which one should
your robot car crash into?

If you say the biker with the helmet
because she’s more likely to survive,

then aren’t you penalizing
the responsible motorist?

If, instead, you save the biker
without the helmet

because he’s acting irresponsibly,

then you’ve gone way beyond the initial
design principle about minimizing harm,

and the robot car is now
meting out street justice.

The ethical considerations
get more complicated here.

In both of our scenarios,

the underlying design is functioning
as a targeting algorithm of sorts.

In other words,

it’s systematically favoring
or discriminating

against a certain type
of object to crash into.

And the owners of the target vehicles

will suffer the negative consequences
of this algorithm

through no fault of their own.
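As another purely illustrative sketch, and again an assumption rather than anything the talk specifies, a naive “minimize harm” rule could be coded as a simple scoring function; the harm estimates below are invented, and the point is that whichever option scores lowest is systematically selected, which is the targeting behavior described above.

```python
# Hypothetical sketch of a "minimize harm" rule (illustrative only).
# Scoring candidate impacts by estimated harm quietly turns the rule into a
# targeting policy: whoever scores "safest to hit" is always the one chosen.

from dataclasses import dataclass

@dataclass
class Target:
    name: str
    expected_harm: float  # assumed estimate of injury severity, 0..1

def pick_crash_target(targets: list[Target]) -> Target:
    """Choose the option with the lowest estimated harm."""
    return min(targets, key=lambda t: t.expected_harm)

options = [
    Target("large object ahead", 0.9),    # worst outcome for the car's own passenger
    Target("SUV on the left", 0.5),
    Target("helmeted motorcyclist", 0.4), # "safest" by this metric, hence targeted
]
print(pick_crash_target(options).name)
```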

Our new technologies are opening up
many other novel ethical dilemmas.

For instance, if you had to
choose between

a car that would always save
as many lives as possible in an accident,

or one that would save you at any cost,

which would you buy?

What happens if the cars start analyzing
and factoring in

the passengers of the cars
and the particulars of their lives?

Could it be the case
that a random decision

is still better than a predetermined one
designed to minimize harm?

And who should be making
all of these decisions anyhow?

Programmers? Companies?
Governments?

Reality may not play out exactly
like our thought experiments,

but that’s not the point.

They’re designed to isolate
and stress test our intuitions on ethics,

just like science experiments do
for the physical world.

Spotting these moral hairpin turns now

will help us maneuver the unfamiliar road
of technology ethics,

and allow us to cruise confidently
and conscientiously

into our brave new future.
