Regulating AI for the safety of humanity

Transcriber: Maurício Kakuei Tanaka
Reviewer: omar idmassaoud

“The development of full
artificial intelligence

could spell the end of the human race.”

The words of Stephen Hawking.

Do you want to live in a society
of constant AI surveillance

and invasive data collection?

A society where AI decides
whether you’re guilty of murder or not,

whilst also being able to create
ultra-realistic deepfakes,

planting you at a crime scene.

A society where AI
developed to kill cancer

decides that the best way to do so

is to exterminate any human
genetically prone to the disease.

I know I wouldn’t,

but this is what a society
without AI regulation

could look like.

Of course, these are
very far-fetched outcomes,

but far-fetched does not mean impossible.

And the possibility of an AI dystopia

is reason enough to consider AI regulation

so as to at least address

the more apparent
and immediate dangers of AI.

So first, what actually is AI?

Well, artificial intelligence is the theory
and development of computer systems

able to perform tasks normally
requiring human-level intelligence.

The thing that lets us
make things like this.

A plagiarism checker.

OK, maybe not the most popular
use of AI among students,

but what about this?

A smart home.

Or this?

A Mars rover collecting data
analyzed by AI.

Or this?

A self-driving car?

Seems pretty cool, right?

That’s what I thought.

And it was a self-driving car
that particularly caught my attention.

So last summer, I decided to build one.

I made a small robot version

so that it could autonomously
navigate through lanes

using nothing but a camera,

an ultrasonic sensor,

and a neural network I made.
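
To make this concrete, here is a minimal sketch of the kind of control loop such a robot might run. The talk doesn't specify the actual implementation, so every name here (read_camera, read_ultrasonic, steer) is a hypothetical stand-in for a hardware driver, and the network is reduced to a single linear layer.

```python
import numpy as np

def read_camera():
    # Hypothetical stand-in for a camera driver: a 64x64 grayscale frame.
    return np.random.rand(64, 64)

def read_ultrasonic():
    # Hypothetical stand-in for an ultrasonic sensor: distance in cm.
    return 100.0

def steer(angle):
    # Hypothetical stand-in for a motor command.
    print(f"steering angle: {angle:+.2f}")

def predict_steering(frame, weights):
    # The "neural network" reduced to a single linear layer:
    # flatten the frame and map it to one steering angle in (-1, 1).
    return float(np.tanh(frame.ravel() @ weights))

weights = np.zeros(64 * 64)  # a trained network would supply real weights

for _ in range(3):  # the main control loop, bounded here for demonstration
    if read_ultrasonic() < 20.0:
        steer(0.0)  # obstacle too close: hold the wheel straight
        continue
    steer(predict_steering(read_camera(), weights))
```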

And it was driving perfectly like this

until one day it just started
to consistently veer out of the lane.

I spent hours trying to find the bug.

And you know what it was?

A deleted bracket.

I’d accidentally deleted a bracket
when editing the code,

and that stopped
one function from running,

causing the entire system to fail.
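
The talk doesn't say which language the robot was written in, but assuming Python, here is one way a single deleted pair of brackets can silently stop a function from running, with no error message at all:

```python
def correct_lane_position():
    print("correcting lane position")

# Intended code: the function is called on every pass.
correct_lane_position()

# Buggy code after the brackets are deleted: a bare function name is a
# valid Python expression, so no error is raised; the function object is
# simply referenced and discarded, and the lane correction never runs.
correct_lane_position
```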

And this demonstrated to me,

on a small scale,

how one small bug can have
devastating consequences.

And then I started to think,

“Imagine if this happens
on a larger scale,

say, in a real self-driving car

or nuclear power plant.

Imagine how devastating that would be.”

Well, unfortunately,
you don’t have to imagine.

In 2016, a self-driving Tesla
mistook a white truck trailer

for the bright sky,

leading to the death
of the Tesla occupant.

And this made me think,

“We have regulation
in health care and education

and financial services,

but next to none in AI,

even though it’s such a large
and growing aspect of human life.”

We are all aware of the digital utopia
that AI can provide us with.

So surely we should introduce regulations

to ensure we reach this utopian situation

and avoid a dystopian one.

One suggestion is a compulsory
human-in-the-loop system,

where we put serious research efforts

into not only making AI
work well on its own,

but also making it collaborate effectively
with its human controllers.

This would effectively
give humans a kill switch

so that control can be
transferred back to humans

when a problem is expected.
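
As a rough illustration, such a kill switch can be sketched as a wrapper that checks for a human override on every control step. This is a minimal sketch with invented names; a real system would also need hardware interlocks, watchdogs, and verified hand-over procedures.

```python
import threading

class HumanInTheLoopController:
    """Hypothetical wrapper: the AI acts until a human requests control."""

    def __init__(self):
        self._human_override = threading.Event()

    def request_human_control(self):
        # The "kill switch": called by the operator when a problem is expected.
        self._human_override.set()

    def step(self, ai_action, human_action):
        # Every control step checks the override flag before acting.
        return human_action if self._human_override.is_set() else ai_action

controller = HumanInTheLoopController()
print(controller.step("ai: steer left", "human: brake"))  # -> ai: steer left
controller.request_human_control()
print(controller.step("ai: steer left", "human: brake"))  # -> human: brake
```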

But for those in search of a less
restrictive form of regulation,

a transparency-based approach
has been suggested

whereby firms must explain
how and why their AI makes its decisions,

essentially a compulsory
open-source system.

This would allow third parties
to review the AI systems

and spot any potential dangers
or biases before they occur.
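
To illustrate, here is a minimal sketch of that kind of transparency, assuming the simplest possible model, a linear one, whose every decision can be broken down into exact per-factor contributions for a reviewer to audit. The feature names and weights are invented for illustration.

```python
# Invented feature names and weights, purely for illustration.
FEATURES = ["speed", "distance_to_lane_edge", "obstacle_proximity"]
WEIGHTS = [0.2, -0.7, 0.9]
BIAS = 0.1

def predict_with_explanation(values):
    # For a linear model, each feature's contribution to the decision
    # is exact, so the "why" can be reported alongside the "what".
    contributions = {n: w * v for n, w, v in zip(FEATURES, WEIGHTS, values)}
    return BIAS + sum(contributions.values()), contributions

score, why = predict_with_explanation([1.0, 0.5, 0.2])
print(f"decision score: {score:+.2f}")
for name, contribution in why.items():
    print(f"  {name}: {contribution:+.2f}")  # auditable by a third party
```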

However, this could reduce competition
and incentive to innovate

as ideas can easily be copied.

And this demonstrates
just how difficult it is

to regulate AI in a way
which suits everyone

as we must ensure safety

whilst also ensuring

that regulation does not stifle
worthwhile advances in technology.

This would suggest

that the most effective way to regulate AI

would be to introduce AI-specific boards
into the government,

allowing AI experts to make regulations

rather than politicians.

The most important thing for us
is that we don’t settle

for a “one-size-fits-all”
regulatory approach

as the range of possible uses of AI
is far too diverse for that.

You wouldn’t use the same regulation
for a self-driving car

as for a smart fridge.

So our main goal

should be to learn more about
the risks of AI in different applications

to understand where regulation
is actually needed.

And an AI-specific government board
would be far more efficient at this

than politicians who are just
not familiar with AI.

And if people are fundamentally
against government intervention,

then a company-led, self-regulating system
must be established.

Trust is very hard
for technology firms to gain,

but also very easy for them to lose.

And since trust is such a vital
commodity for businesses,

it would be in their interest

to go above and beyond
the minimum legal standards

in order to gain
this valuable consumer trust.

Being seen to promote AI safety
offers an easy way to gain trust,

whilst actively opposing it

is a quick way to lose the trust
they worked so hard to gain.

It’s likely that regulation strategies
will differ around the world,

with some countries
taking the government-led approach

whilst others opt
for a company-led approach

or even a mix of the two.

And that is OK.

But the most dangerous thing we can do now

is to completely run away
from the idea of AI regulation.

Google CEO Sundar Pichai has said,

“There is no question in my mind

that artificial intelligence
needs to be regulated.”

Elon Musk has said that AI
is more dangerous than nukes.

When even the people
developing AI themselves

agree with the need for regulation,

it’s time to get down to the business

of how to regulate the rapidly changing
field of artificial intelligence.

Thank you.

(Applause)
