How to get empowered, not overpowered, by AI
Max Tegmark

After 13.8 billion years
of cosmic history,

our universe has woken up

and become aware of itself.

From a small blue planet,

tiny, conscious parts of our universe
have begun gazing out into the cosmos

with telescopes,

discovering something humbling.

We’ve discovered that our universe
is vastly grander

than our ancestors imagined

and that life seems to be an almost
imperceptibly small perturbation

on an otherwise dead universe.

But we’ve also discovered
something inspiring,

which is that the technology
we’re developing has the potential

to help life flourish like never before,

not just for centuries
but for billions of years,

and not just on earth but throughout
much of this amazing cosmos.

I think of the earliest life as “Life 1.0”

because it was really dumb,

like bacteria, unable to learn
anything during its lifetime.

I think of us humans as “Life 2.0”
because we can learn,

which we, in nerdy geek speak,

might think of as installing
new software into our brains,

like languages and job skills.

“Life 3.0,” which can design not only
its software but also its hardware,

of course, doesn’t exist yet.

But perhaps our technology
has already made us “Life 2.1,”

with our artificial knees,
pacemakers and cochlear implants.

So let’s take a closer look
at our relationship with technology, OK?

As an example,

the Apollo 11 moon mission
was both successful and inspiring,

showing that when we humans
use technology wisely,

we can accomplish things
that our ancestors could only dream of.

But there’s an even more inspiring journey

propelled by something
more powerful than rocket engines,

where the passengers
aren’t just three astronauts

but all of humanity.

Let’s talk about our collective
journey into the future

with artificial intelligence.

My friend Jaan Tallinn likes to point out
that just as with rocketry,

it’s not enough to make
our technology powerful.

We also have to figure out,
if we’re going to be really ambitious,

how to steer it

and where we want to go with it.

So let’s talk about all three
for artificial intelligence:

the power, the steering
and the destination.

Let’s start with the power.

I define intelligence very inclusively –

simply as our ability
to accomplish complex goals,

because I want to include both
biological and artificial intelligence.

And I want to avoid
the silly carbon-chauvinism idea

that you can only be smart
if you’re made of meat.

It’s really amazing how the power
of AI has grown recently.

Just think about it.

Not long ago, robots couldn’t walk.

Now, they can do backflips.

Not long ago,

we didn’t have self-driving cars.

Now, we have self-flying rockets.

Not long ago,

AI couldn’t do face recognition.

Now, AI can generate fake faces

and simulate your face
saying stuff that you never said.

Not long ago,

AI couldn’t beat us at the game of Go.

Then, Google DeepMind’s AlphaZero AI
took 3,000 years of human Go games

and Go wisdom,

ignored it all and became the world’s best
player by just playing against itself.

And the most impressive feat here
wasn’t that it crushed human gamers,

but that it crushed human AI researchers

who had spent decades
handcrafting game-playing software.

And AlphaZero crushed human AI researchers
not just in Go but even at chess,

which we have been working on since 1950.

So all this amazing recent progress in AI
really raises the question:

How far will it go?

I like to think about this question

in terms of this abstract
landscape of tasks,

where the elevation represents
how hard it is for AI to do each task

at human level,

and the sea level represents
what AI can do today.

The sea level is rising
as AI improves,

so there’s a kind of global warming
going on here in the task landscape.

And the obvious takeaway
is to avoid careers at the waterfront –

(Laughter)

which will soon be
automated and disrupted.
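
A minimal sketch of the rising-sea-level metaphor, with made-up task names and difficulty numbers purely for illustration (none of this is from the talk): the task landscape as a mapping from task to “elevation,” and a threshold for what AI can do today; tasks at or below the threshold are the ones already underwater.

```python
# Toy sketch with arbitrary, illustrative numbers (not the speaker's data):
# each task gets an "elevation" (how hard it is for AI at human level),
# and a rising "sea level" marks what AI can already do.

TASK_ELEVATION = {
    "arithmetic": 0.05,
    "image recognition": 0.35,
    "driving": 0.60,
    "scientific research": 0.90,
    "AI research itself": 0.95,
}

def underwater(sea_level):
    """Tasks at or below the current sea level, i.e. roughly automatable today."""
    return sorted(task for task, height in TASK_ELEVATION.items()
                  if height <= sea_level)

for level in (0.3, 0.6, 1.0):
    print(f"sea level {level:.1f}: {underwater(level)}")
```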

But there’s a much
bigger question as well.

How high will the water end up rising?

Will it eventually rise
to flood everything,

matching human intelligence at all tasks?

This is the definition
of artificial general intelligence –

AGI,

which has been the holy grail
of AI research since its inception.

By this definition, people who say,

“Ah, there will always be jobs
that humans can do better than machines,”

are simply saying
that we’ll never get AGI.

Sure, we might still choose
to have some human jobs

or to give humans income
and purpose with our jobs,

but AGI will in any case
transform life as we know it

with humans no longer being
the most intelligent.

Now, if the water level does reach AGI,

then further AI progress will be driven
mainly not by humans but by AI,

which means that there’s a possibility

that further AI progress
could be way faster

than the typical human research
and development timescale of years,

raising the controversial possibility
of an intelligence explosion

where recursively self-improving AI

rapidly leaves human
intelligence far behind,

creating what’s known
as superintelligence.
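
A minimal back-of-the-envelope sketch, with arbitrary numbers chosen only for illustration, of why compounding self-improvement could outpace the year-scale cadence of human R&D: one curve adds a fixed amount per cycle, the other multiplies its capability each cycle.

```python
# Toy numbers only (my own illustration, not from the talk): compare steady,
# human-paced improvement with compounding self-improvement, where each
# generation of the system makes the next one better.

def human_paced(cycles, gain_per_cycle=0.05, start=1.0):
    """Capability grows by a fixed amount each R&D cycle."""
    capability = start
    for _ in range(cycles):
        capability += gain_per_cycle
    return capability

def self_improving(cycles, factor=1.5, start=1.0):
    """Capability is multiplied each cycle, so progress compounds."""
    capability = start
    for _ in range(cycles):
        capability *= factor
    return capability

for n in (5, 10, 20):
    print(f"after {n:2d} cycles: human-paced = {human_paced(n):.2f}, "
          f"self-improving = {self_improving(n):.2f}")
```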

Alright, reality check:

Are we going to get AGI any time soon?

Some famous AI researchers,
like Rodney Brooks,

think it won’t happen
for hundreds of years.

But others, like Google DeepMind
founder Demis Hassabis,

are more optimistic

and are working to try to make
it happen much sooner.

And recent surveys have shown
that most AI researchers

actually share Demis’s optimism,

expecting that we will
get AGI within decades,

so within the lifetime of many of us,

which raises the question – and then what?

What do we want the role of humans to be

if machines can do everything better
and cheaper than us?

The way I see it, we face a choice.

One option is to be complacent.

We can say, “Oh, let’s just build machines
that can do everything we can do

and not worry about the consequences.

Come on, if we build technology
that makes all humans obsolete,

what could possibly go wrong?”

(Laughter)

But I think that would be
embarrassingly lame.

I think we should be more ambitious –
in the spirit of TED.

Let’s envision a truly inspiring
high-tech future

and try to steer towards it.

This brings us to the second part
of our rocket metaphor: the steering.

We’re making AI more powerful,

but how can we steer towards a future

where AI helps humanity flourish
rather than flounder?

To help with this,

I cofounded the Future of Life Institute.

It’s a small nonprofit promoting
beneficial technology use,

and our goal is simply
for the future of life to exist

and to be as inspiring as possible.

You know, I love technology.

Technology is why today
is better than the Stone Age.

And I’m optimistic that we can create
a really inspiring high-tech future …

if – and this is a big if –

if we win the wisdom race –

the race between the growing
power of our technology

and the growing wisdom
with which we manage it.

But this is going to require
a change of strategy

because our old strategy
has been learning from mistakes.

We invented fire,

screwed up a bunch of times –

invented the fire extinguisher.

(Laughter)

We invented the car,
screwed up a bunch of times –

invented the traffic light,
the seat belt and the airbag,

but with more powerful technology
like nuclear weapons and AGI,

learning from mistakes
is a lousy strategy,

don’t you think?

(Laughter)

It’s much better to be proactive
rather than reactive;

plan ahead and get things
right the first time

because that might be
the only time we’ll get.

But it is funny because
sometimes people tell me,

“Max, shhh, don’t talk like that.

That’s Luddite scaremongering.”

But it’s not scaremongering.

It’s what we at MIT
call safety engineering.

Think about it:

before NASA launched
the Apollo 11 mission,

they systematically thought through
everything that could go wrong

when you put people
on top of explosive fuel tanks

and launch them somewhere
where no one could help them.

And there was a lot that could go wrong.

Was that scaremongering?

No.

That was precisely
the safety engineering

that ensured the success of the mission,

and that is precisely the strategy
I think we should take with AGI.

Think through what can go wrong
to make sure it goes right.

So in this spirit,
we’ve organized conferences,

bringing together leading
AI researchers and other thinkers

to discuss how to grow this wisdom
we need to keep AI beneficial.

Our last conference
was in Asilomar, California, last year

and produced this list of 23 principles

which have since been signed
by over 1,000 AI researchers

and key industry leaders,

and I want to tell you
about three of these principles.

One is that we should avoid an arms race
and lethal autonomous weapons.

The idea here is that any science
can be used for new ways of helping people

or new ways of harming people.

For example, biology and chemistry
are much more likely to be used

for new medicines or new cures
than for new ways of killing people,

because biologists
and chemists pushed hard –

and successfully –

for bans on biological
and chemical weapons.

And in the same spirit,

most AI researchers want to stigmatize
and ban lethal autonomous weapons.

Another Asilomar AI principle

is that we should mitigate
AI-fueled income inequality.

I think that if we can grow
the economic pie dramatically with AI

and we still can’t figure out
how to divide this pie

so that everyone is better off,

then shame on us.

(Applause)

Alright, now raise your hand
if your computer has ever crashed.

(Laughter)

Wow, that’s a lot of hands.

Well, then you’ll appreciate
this principle

that we should invest much more
in AI safety research,

because as we put AI in charge
of even more decisions and infrastructure,

we need to figure out how to transform
today’s buggy and hackable computers

into robust AI systems
that we can really trust,

because otherwise,

all this awesome new technology
can malfunction and harm us,

or get hacked and be turned against us.

And this AI safety work
has to include work on AI value alignment,

because the real threat
from AGI isn’t malice,

like in silly Hollywood movies,

but competence –

AGI accomplishing goals
that just aren’t aligned with ours.

For example, when we humans drove
the West African black rhino extinct,

we didn’t do it because we were a bunch
of evil rhinoceros haters, did we?

We did it because
we were smarter than them

and our goals weren’t aligned with theirs.

But AGI is by definition smarter than us,

so to make sure that we don’t put
ourselves in the position of those rhinos

if we create AGI,

we need to figure out how
to make machines understand our goals,

adopt our goals and retain our goals.

And whose goals should these be, anyway?

Which goals should they be?

This brings us to the third part
of our rocket metaphor: the destination.

We’re making AI more powerful,

trying to figure out how to steer it,

but where do we want to go with it?

This is the elephant in the room
that almost nobody talks about –

not even here at TED –

because we’re so fixated
on short-term AI challenges.

Look, our species is trying to build AGI,

motivated by curiosity and economics,

but what sort of future society
are we hoping for if we succeed?

We did an opinion poll on this recently,

and I was struck to see

that most people actually
want us to build superintelligence:

AI that’s vastly smarter
than us in all ways.

What there was the greatest agreement on
was that we should be ambitious

and help life spread into the cosmos,

but there was much less agreement
about who or what should be in charge.

And I was actually quite amused

to see that there are some people
who want it to be just machines.

(Laughter)

And there was total disagreement
about what the role of humans should be,

even at the most basic level,

so let’s take a closer look
at possible futures

that we might choose
to steer toward, alright?

So don’t get me wrong here.

I’m not talking about space travel,

merely about humanity’s
metaphorical journey into the future.

So one option that some
of my AI colleagues like

is to build superintelligence
and keep it under human control,

like an enslaved god,

disconnected from the internet

and used to create unimaginable
technology and wealth

for whoever controls it.

But Lord Acton warned us

that power corrupts,
and absolute power corrupts absolutely,

so you might worry that maybe
we humans just aren’t smart enough,

or wise enough rather,

to handle this much power.

Also, aside from any
moral qualms you might have

about enslaving superior minds,

you might worry that maybe
the superintelligence could outsmart us,

break out and take over.

But I also have colleagues
who are fine with AI taking over

and even causing human extinction,

as long as we feel the AIs
are our worthy descendants,

like our children.

But how would we know that the AIs
have adopted our best values

and aren’t just unconscious zombies
tricking us into anthropomorphizing them?

Also, shouldn’t those people
who don’t want human extinction

have a say in the matter, too?

Now, if you didn’t like either
of those two high-tech options,

it’s important to remember
that low-tech is suicide

from a cosmic perspective,

because if we don’t go far
beyond today’s technology,

the question isn’t whether humanity
is going to go extinct,

merely whether
we’re going to get taken out

by the next killer asteroid, supervolcano

or some other problem
that better technology could have solved.

So, how about having
our cake and eating it …

with AGI that’s not enslaved

but treats us well because its values
are aligned with ours?

This is the gist of what Eliezer Yudkowsky
has called “friendly AI,”

and if we can do this,
it could be awesome.

It could not only eliminate negative
experiences like disease, poverty,

crime and other suffering,

but it could also give us
the freedom to choose

from a fantastic new diversity
of positive experiences –

basically making us
the masters of our own destiny.

So in summary,

our situation with technology
is complicated,

but the big picture is rather simple.

Most AI researchers
expect AGI within decades,

and if we just bumble
into this unprepared,

it will probably be
the biggest mistake in human history –

let’s face it.

It could enable brutal,
global dictatorship

with unprecedented inequality,
surveillance and suffering,

and maybe even human extinction.

But if we steer carefully,

we could end up in a fantastic future
where everybody’s better off:

the poor are richer, the rich are richer,

everybody is healthy
and free to live out their dreams.

Now, hang on.

Do you folks want the future
that’s politically right or left?

Do you want the pious society
with strict moral rules,

or do you want a hedonistic free-for-all,

more like Burning Man 24/7?

Do you want beautiful beaches,
forests and lakes,

or would you prefer to rearrange
some of those atoms with computers,

enabling virtual experiences?

With friendly AI, we could simply
build all of these societies

and give people the freedom
to choose which one they want to live in

because we would no longer
be limited by our intelligence,

merely by the laws of physics.

So the resources and space
for this would be astronomical –

literally.

So here’s our choice.

We can either be complacent
about our future,

taking as an article of blind faith

that any new technology
is guaranteed to be beneficial,

and just repeat that to ourselves
as a mantra over and over and over again

as we drift like a rudderless ship
towards our own obsolescence.

Or we can be ambitious –

thinking hard about how
to steer our technology

and where we want to go with it

to create the age of amazement.

We’re all here to celebrate
the age of amazement,

and I feel that its essence should lie
in becoming not overpowered

but empowered by our technology.

Thank you.

(Applause)
