Can we build AI without losing control over it?
Sam Harris

I’m going to talk
about a failure of intuition

that many of us suffer from.

It’s really a failure
to detect a certain kind of danger.

I’m going to describe a scenario

that I think is both terrifying

and likely to occur,

and that’s not a good combination,

as it turns out.

And yet rather than be scared,
most of you will feel

that what I’m talking about
is kind of cool.

I’m going to describe
how the gains we make

in artificial intelligence

could ultimately destroy us.

And in fact, I think it’s very difficult
to see how they won’t destroy us

or inspire us to destroy ourselves.

And yet if you’re anything like me,

you’ll find that it’s fun
to think about these things.

And that response is part of the problem.

OK? That response should worry you.

And if I were to convince you in this talk

that we were likely
to suffer a global famine,

either because of climate change
or some other catastrophe,

and that your grandchildren,
or their grandchildren,

are very likely to live like this,

you wouldn’t think,

“Interesting.

I like this TED Talk.”

Famine isn’t fun.

Death by science fiction,
on the other hand, is fun,

and one of the things that worries me most
about the development of AI at this point

is that we seem unable to marshal
an appropriate emotional response

to the dangers that lie ahead.

I am unable to marshal this response,
and I’m giving this talk.

It’s as though we stand before two doors.

Behind door number one,

we stop making progress
in building intelligent machines.

Our computer hardware and software
just stops getting better for some reason.

Now take a moment
to consider why this might happen.

I mean, given how valuable
intelligence and automation are,

we will continue to improve our technology
if we are at all able to.

What could stop us from doing this?

A full-scale nuclear war?

A global pandemic?

An asteroid impact?

Justin Bieber becoming
president of the United States?

(Laughter)

The point is, something would have to
destroy civilization as we know it.

You have to imagine
how bad it would have to be

to prevent us from making
improvements in our technology

permanently,

generation after generation.

Almost by definition,
this is the worst thing

that’s ever happened in human history.

So the only alternative,

and this is what lies
behind door number two,

is that we continue
to improve our intelligent machines

year after year after year.

At a certain point, we will build
machines that are smarter than we are,

and once we have machines
that are smarter than we are,

they will begin to improve themselves.

And then we risk what
the mathematician I. J. Good called

an “intelligence explosion,”

that the process could get away from us.

Now, this is often caricatured,
as I have here,

as a fear that armies of malicious robots

will attack us.

But that isn’t the most likely scenario.

It’s not that our machines
will become spontaneously malevolent.

The concern is really
that we will build machines

that are so much
more competent than we are

that the slightest divergence
between their goals and our own

could destroy us.

Just think about how we relate to ants.

We don’t hate them.

We don’t go out of our way to harm them.

In fact, sometimes
we take pains not to harm them.

We step over them on the sidewalk.

But whenever their presence

seriously conflicts with one of our goals,

let’s say when constructing
a building like this one,

we annihilate them without a qualm.

The concern is that we will
one day build machines

that, whether they’re conscious or not,

could treat us with similar disregard.

Now, I suspect this seems
far-fetched to many of you.

I bet there are those of you who doubt
that superintelligent AI is possible,

much less inevitable.

But then you must find something wrong
with one of the following assumptions.

And there are only three of them.

Intelligence is a matter of information
processing in physical systems.

Actually, this is a little bit more
than an assumption.

We have already built
narrow intelligence into our machines,

and many of these machines perform

at a level of superhuman
intelligence already.

And we know that mere matter

can give rise to what is called
“general intelligence,”

an ability to think flexibly
across multiple domains,

because our brains have managed it. Right?

I mean, there’s just atoms in here,

and as long as we continue
to build systems of atoms

that display more and more
intelligent behavior,

we will eventually,
unless we are interrupted,

we will eventually
build general intelligence

into our machines.

It’s crucial to realize
that the rate of progress doesn’t matter,

because any progress
is enough to get us into the end zone.

We don’t need Moore’s law to continue.
We don’t need exponential progress.

We just need to keep going.

The second assumption
is that we will keep going.

We will continue to improve
our intelligent machines.

And given the value of intelligence –

I mean, intelligence is either
the source of everything we value

or we need it to safeguard
everything we value.

It is our most valuable resource.

So we want to do this.

We have problems
that we desperately need to solve.

We want to cure diseases
like Alzheimer’s and cancer.

We want to understand economic systems.
We want to improve our climate science.

So we will do this, if we can.

The train is already out of the station,
and there’s no brake to pull.

Finally, we don’t stand
on a peak of intelligence,

or anywhere near it, likely.

And this really is the crucial insight.

This is what makes
our situation so precarious,

and this is what makes our intuitions
about risk so unreliable.

Now, just consider the smartest person
who has ever lived.

On almost everyone’s shortlist here
is John von Neumann.

I mean, the impression that von Neumann
made on the people around him,

and this included the greatest
mathematicians and physicists of his time,

is fairly well-documented.

If only half the stories
about him are half true,

there’s no question

he’s one of the smartest people
who has ever lived.

So consider the spectrum of intelligence.

Here we have John von Neumann.

And then we have you and me.

And then we have a chicken.

(Laughter)

Sorry, a chicken.

(Laughter)

There’s no reason for me to make this talk
more depressing than it needs to be.

(Laughter)

It seems overwhelmingly likely, however,
that the spectrum of intelligence

extends much further
than we currently conceive,

and if we build machines
that are more intelligent than we are,

they will very likely
explore this spectrum

in ways that we can’t imagine,

and exceed us in ways
that we can’t imagine.

And it’s important to recognize that
this is true by virtue of speed alone.

Right? So imagine if we just built
a superintelligent AI

that was no smarter
than your average team of researchers

at Stanford or MIT.

Well, electronic circuits
function about a million times faster

than biochemical ones,

so this machine should think
about a million times faster

than the minds that built it.

So you set it running for a week,

and it will perform 20,000 years
of human-level intellectual work,

week after week after week.
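
To spell out that back-of-envelope arithmetic, assuming a flat million-fold speed advantage (only a rough figure):

\[
1 \text{ week} \times 10^{6} \;\approx\; \frac{10^{6} \text{ weeks}}{52} \;\approx\; 19{,}000 \text{ years},
\]

which rounds to roughly 20,000 years of human-level work for every week of machine time.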

How could we even understand,
much less constrain,

a mind making this sort of progress?

The other thing that’s worrying, frankly,

is this: imagine the best-case scenario.

So imagine we hit upon a design
of superintelligent AI

that has no safety concerns.

We have the perfect design
the first time around.

It’s as though we’ve been handed an oracle

that behaves exactly as intended.

Well, this machine would be
the perfect labor-saving device.

It can design the machine
that can build the machine

that can do any physical work,

powered by sunlight,

more or less for the cost
of raw materials.

So we’re talking about
the end of human drudgery.

We’re also talking about the end
of most intellectual work.

So what would apes like ourselves
do in this circumstance?

Well, we’d be free to play Frisbee
and give each other massages.

Add some LSD and some
questionable wardrobe choices,

and the whole world
could be like Burning Man.

(Laughter)

Now, that might sound pretty good,

but ask yourself what would happen

under our current economic
and political order?

It seems likely that we would witness

a level of wealth inequality
and unemployment

that we have never seen before.

Absent a willingness
to immediately put this new wealth

to the service of all humanity,

a few trillionaires could grace
the covers of our business magazines

while the rest of the world
would be free to starve.

And what would the Russians
or the Chinese do

if they heard that some company
in Silicon Valley

was about to deploy a superintelligent AI?

This machine would be capable
of waging war,

whether terrestrial or cyber,

with unprecedented power.

This is a winner-take-all scenario.

To be six months ahead
of the competition here

is to be 500,000 years ahead,

at a minimum.
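
The same rough arithmetic, again assuming the million-fold figure, is behind that number:

\[
6 \text{ months} \times 10^{6} \;=\; 500{,}000 \text{ years}.
\]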

So it seems that even mere rumors
of this kind of breakthrough

could cause our species to go berserk.

Now, one of the most frightening things,

in my view, at this moment,

are the kinds of things
that AI researchers say

when they want to be reassuring.

And the most common reason
we’re told not to worry is time.

This is all a long way off,
don’t you know.

This is probably 50 or 100 years away.

One researcher has said,

“Worrying about AI safety

is like worrying
about overpopulation on Mars.”

This is the Silicon Valley version

of “don’t worry your
pretty little head about it.”

(Laughter)

No one seems to notice

that referencing the time horizon

is a total non sequitur.

If intelligence is just a matter
of information processing,

and we continue to improve our machines,

we will produce
some form of superintelligence.

And we have no idea
how long it will take us

to create the conditions
to do that safely.

Let me say that again.

We have no idea how long it will take us

to create the conditions
to do that safely.

And if you haven’t noticed,
50 years is not what it used to be.

This is 50 years in months.

This is how long we’ve had the iPhone.

This is how long “The Simpsons”
has been on television.

Fifty years is not that much time

to meet one of the greatest challenges
our species will ever face.

Once again, we seem to be failing
to have an appropriate emotional response

to what we have every reason
to believe is coming.

The computer scientist Stuart Russell
has a nice analogy here.

He said, imagine that we received
a message from an alien civilization,

which read:

“People of Earth,

we will arrive on your planet in 50 years.

Get ready.”

And now we’re just counting down
the months until the mothership lands?

We would feel a little
more urgency than we do.

Another reason we’re told not to worry

is that these machines
can’t help but share our values

because they will be literally
extensions of ourselves.

They’ll be grafted onto our brains,

and we’ll essentially
become their limbic systems.

Now take a moment to consider

that the safest
and only prudent path forward,

recommended,

is to implant this technology
directly into our brains.

Now, this may in fact be the safest
and only prudent path forward,

but usually one’s safety concerns
about a technology

have to be pretty much worked out
before you stick it inside your head.

(Laughter)

The deeper problem is that
building superintelligent AI on its own

seems likely to be easier

than building superintelligent AI

and having the completed neuroscience

that allows us to seamlessly
integrate our minds with it.

And given that the companies
and governments doing this work

are likely to perceive themselves
as being in a race against all others,

given that to win this race
is to win the world,

provided you don’t destroy it
in the next moment,

then it seems likely
that whatever is easier to do

will get done first.

Now, unfortunately,
I don’t have a solution to this problem,

apart from recommending
that more of us think about it.

I think we need something
like a Manhattan Project

on the topic of artificial intelligence.

Not to build it, because I think
we’ll inevitably do that,

but to understand
how to avoid an arms race

and to build it in a way
that is aligned with our interests.

When you’re talking
about superintelligent AI

that can make changes to itself,

it seems that we only have one chance
to get the initial conditions right,

and even then we will need to absorb

the economic and political
consequences of getting them right.

But the moment we admit

that information processing
is the source of intelligence,

that some appropriate computational system
is the basis of intelligence,

and we admit that we will improve
these systems continuously,

and we admit that the horizon
of cognition very likely far exceeds

what we currently know,

then we have to admit

that we are in the process
of building some sort of god.

Now would be a good time

to make sure it’s a god we can live with.

Thank you very much.

(Applause)
