Don't fear superintelligent AI | Grady Booch

When I was a kid,
I was the quintessential nerd.

I think some of you were, too.

(Laughter)

And you, sir, who laughed the loudest,
you probably still are.

(Laughter)

I grew up in a small town
in the dusty plains of north Texas,

the son of a sheriff
who was the son of a pastor.

Getting into trouble was not an option.

And so I started reading
calculus books for fun.

(Laughter)

You did, too.

That led me to building a laser
and a computer and model rockets,

and that led me to making
rocket fuel in my bedroom.

Now, in scientific terms,

we call this a very bad idea.

(Laughter)

Around that same time,

Stanley Kubrick’s “2001: A Space Odyssey”
came to the theaters,

and my life was forever changed.

I loved everything about that movie,

especially the HAL 9000.

Now, HAL was a sentient computer

designed to guide the Discovery spacecraft

from the Earth to Jupiter.

HAL was also a flawed character,

for in the end he chose
to value the mission over human life.

Now, HAL was a fictional character,

but nonetheless he speaks to our fears,

our fears of being subjugated

by some unfeeling, artificial intelligence

who is indifferent to our humanity.

I believe that such fears are unfounded.

Indeed, we stand at a remarkable time

in human history,

where, driven by refusal to accept
the limits of our bodies and our minds,

we are building machines

of exquisite, beautiful
complexity and grace

that will extend the human experience

in ways beyond our imagining.

After a career that led me
from the Air Force Academy

to Space Command to now,

I became a systems engineer,

and recently I was drawn
into an engineering problem

associated with NASA’s mission to Mars.

Now, in space flights to the Moon,

we can rely upon
mission control in Houston

to watch over all aspects of a flight.

However, Mars is 200 times further away,

and as a result it takes
on average 13 minutes

for a signal to travel
from the Earth to Mars.

If there’s trouble,
there’s not enough time.
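The quoted delay follows from light-speed signal travel over the average Earth–Mars separation. A quick back-of-the-envelope check (the ~225 million km average is an assumed figure, since the actual distance swings between roughly 55 and 400 million km over the orbital cycle):

```python
# One-way light-time from Earth to Mars at an assumed average
# separation of ~225 million km. The real distance varies widely,
# which is why the talk says "on average" about 13 minutes.
SPEED_OF_LIGHT_KM_S = 299_792.458   # speed of light, km per second
AVG_DISTANCE_KM = 225_000_000       # assumed average Earth-Mars distance

delay_seconds = AVG_DISTANCE_KM / SPEED_OF_LIGHT_KM_S
delay_minutes = delay_seconds / 60

print(f"One-way signal delay: {delay_minutes:.1f} minutes")
```

At that distance the round trip for a question and an answer is about 25 minutes, which is why an onboard mission control makes engineering sense.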

And so a reasonable engineering solution

calls for us to put mission control

inside the walls of the Orion spacecraft.

Another fascinating idea
in the mission profile

places humanoid robots
on the surface of Mars

before the humans themselves arrive,

first to build facilities

and later to serve as collaborative
members of the science team.

Now, as I looked at this
from an engineering perspective,

it became very clear to me
that what I needed to architect

was a smart, collaborative,

socially intelligent
artificial intelligence.

In other words, I needed to build
something very much like a HAL

but without the homicidal tendencies.

(Laughter)

Let’s pause for a moment.

Is it really possible to build
an artificial intelligence like that?

Actually, it is.

In many ways,

this is a hard engineering problem

with elements of AI,

not some wet hair ball of an AI problem
that needs to be engineered.

To paraphrase Alan Turing,

I’m not interested
in building a sentient machine.

I’m not building a HAL.

All I’m after is a simple brain,

something that offers
the illusion of intelligence.

The art and the science of computing
have come a long way

since HAL was onscreen,

and I’d imagine if his inventor
Dr. Chandra were here today,

he’d have a whole lot of questions for us.

Is it really possible for us

to take a system of millions
upon millions of devices,

to read in their data streams,

to predict their failures
and act in advance?

Yes.

Can we build systems that converse
with humans in natural language?

Yes.

Can we build systems
that recognize objects, identify emotions,

emote themselves,
play games and even read lips?

Yes.

Can we build a system that sets goals,

that carries out plans against those goals
and learns along the way?

Yes.

Can we build systems
that have a theory of mind?

This we are learning to do.

Can we build systems that have
an ethical and moral foundation?

This we must learn how to do.

So let’s accept for a moment

that it’s possible to build
such an artificial intelligence

for this kind of mission and others.

The next question
you must ask yourself is,

should we fear it?

Now, every new technology

brings with it
some measure of trepidation.

When we first saw cars,

people lamented that we would see
the destruction of the family.

When we first saw telephones come in,

people were worried it would destroy
all civil conversation.

There was a time when we saw
the written word become pervasive;

people thought we would lose
our ability to memorize.

These things are all true to a degree,

but it’s also the case
that these technologies

brought to us things
that extended the human experience

in some profound ways.

So let’s take this a little further.

I do not fear the creation
of an AI like this,

because it will eventually
embody some of our values.

Consider this: building a cognitive system
is fundamentally different

than building a traditional
software-intensive system of the past.

We don’t program them. We teach them.

In order to teach a system
how to recognize flowers,

I show it thousands of flowers
of the kinds I like.

In order to teach a system
how to play a game –

Well, I would. You would, too.

I like flowers. Come on.

To teach a system
how to play a game like Go,

I’d have it play thousands of games of Go,

but in the process I also teach it

how to discern
a good game from a bad game.

If I want to create an artificially
intelligent legal assistant,

I will teach it some corpus of law

but at the same time I am fusing with it

the sense of mercy and justice
that is part of that law.

In scientific terms,
this is what we call ground truth,

and here’s the important point:

in producing these machines,

we are therefore teaching them
a sense of our values.

To that end, I trust
an artificial intelligence

as much as, if not more than,
a human who is well-trained.

But, you may ask,

what about a rogue agent,

some well-funded
nongovernmental organization?

I do not fear an artificial intelligence
in the hand of a lone wolf.

Clearly, we cannot protect ourselves
against all random acts of violence,

but the reality is such a system

requires substantial training
and subtle training

far beyond the resources of an individual.

And furthermore,

it’s far more than just injecting
an internet virus into the world,

where you push a button,
all of a sudden it’s in a million places

and laptops start blowing up
all over the place.

Now, these kinds of systems
are much larger undertakings,

and we’ll certainly see them coming.

Do I fear that such
an artificial intelligence

might threaten all of humanity?

If you look at movies
such as “The Matrix,” “Metropolis,”

“The Terminator,”
shows such as “Westworld,”

they all speak of this kind of fear.

Indeed, in the book “Superintelligence”
by the philosopher Nick Bostrom,

he picks up on this theme

and observes that a superintelligence
might not only be dangerous,

it could represent an existential threat
to all of humanity.

Dr. Bostrom’s basic argument

is that such systems will eventually

have such an insatiable
thirst for information

that they will perhaps learn how to learn

and eventually discover
that they may have goals

that are contrary to human needs.

Dr. Bostrom has a number of followers.

He is supported by people
such as Elon Musk and Stephen Hawking.

With all due respect

to these brilliant minds,

I believe that they
are fundamentally wrong.

Now, there are a lot of pieces
of Dr. Bostrom’s argument to unpack,

and I don’t have time to unpack them all,

but very briefly, consider this:

super knowing is very different
than super doing.

HAL was a threat to the Discovery crew

only insofar as HAL commanded
all aspects of the Discovery.

So it would have to be
with a superintelligence.

It would have to have dominion
over all of our world.

This is the stuff of Skynet
from the movie “The Terminator”

in which we had a superintelligence

that commanded human will,

that directed every device
that was in every corner of the world.

Practically speaking,

it ain’t gonna happen.

We are not building AIs
that control the weather,

that direct the tides,

that command us
capricious, chaotic humans.

And furthermore, if such
an artificial intelligence existed,

it would have to compete
with human economies,

and thereby compete for resources with us.

And in the end –

don’t tell Siri this –

we can always unplug them.

(Laughter)

We are on an incredible journey

of coevolution with our machines.

The humans we are today

are not the humans we will be then.

To worry now about the rise
of a superintelligence

is in many ways a dangerous distraction

because the rise of computing itself

brings to us a number
of human and societal issues

to which we must now attend.

How shall I best organize society

when the need for human labor diminishes?

How can I bring understanding
and education throughout the globe

and still respect our differences?

How might I extend and enhance human life
through cognitive healthcare?

How might I use computing

to help take us to the stars?

And that’s the exciting thing.

The opportunities to use computing

to advance the human experience

are within our reach,

here and now,

and we are just beginning.

Thank you very much.

(Applause)
