How we can build AI to help humans, not hurt us
Margaret Mitchell

I work on helping computers
communicate about the world around us.

There are a lot of ways to do this,

and I like to focus on helping computers

to talk about what they see
and understand.

Given a scene like this,

a modern computer-vision algorithm

can tell you that there’s a woman
and there’s a dog.

It can tell you that the woman is smiling.

It might even be able to tell you
that the dog is incredibly cute.

I work on this problem

thinking about how humans
understand and process the world.

The thoughts, memories and stories

that a scene like this
might evoke for humans.

All the interconnections
of related situations.

Maybe you’ve seen
a dog like this one before,

or you’ve spent time
running on a beach like this one,

and that further evokes thoughts
and memories of a past vacation,

past trips to the beach,

times spent running around
with other dogs.

One of my guiding principles
is that by helping computers to understand

what it’s like to have these experiences,

to understand what we share
and believe and feel,

then we’re in a great position
to start evolving computer technology

in a way that’s complementary
with our own experiences.

So, digging more deeply into this,

a few years ago I began working on helping
computers to generate human-like stories

from sequences of images.

So, one day,

I was working with my computer to ask it
what it thought about a trip to Australia.

It took a look at the pictures,
and it saw a koala.

It didn’t know what the koala was,

but it said it thought
it was an interesting-looking creature.

Then I shared with it a sequence of images
about a house burning down.

It took a look at the images and it said,

“This is an amazing view!
This is spectacular!”

It sent chills down my spine.

It saw a horrible, life-changing
and life-destroying event

and thought it was something positive.

I realized that it recognized
the contrast,

the reds, the yellows,

and thought it was something
worth remarking on positively.

And part of why it was doing this

was because most
of the images I had given it

were positive images.

That’s because people
tend to share positive images

when they talk about their experiences.

When was the last time
you saw a selfie at a funeral?

I realized that,
as I worked on improving AI

task by task, dataset by dataset,

that I was creating massive gaps,

holes and blind spots
in what it could understand.

And while doing so,

I was encoding all kinds of biases.

Biases that reflect a limited viewpoint,

limited to a single dataset –

biases that can reflect
human biases found in the data,

such as prejudice and stereotyping.

I thought back to the evolution
of the technology

that brought me to where I was that day –

how the first color images

were calibrated against
a white woman’s skin,

meaning that color photography
was biased against black faces.

And that same bias, that same blind spot

continued well into the ’90s.

And the same blind spot
continues even today

in how well we can recognize
different people’s faces

in facial recognition technology.

I thought about the state of the art
in research today,

where we tend to limit our thinking
to one dataset and one problem.

And that in doing so, we were creating
more blind spots and biases

that the AI could further amplify.

I realized then
that we had to think deeply

about how the technology we work on today
looks in five years, in 10 years.

Humans evolve slowly,
with time to correct for issues

in the interaction of humans
and their environment.

In contrast, artificial intelligence
is evolving at an incredibly fast rate.

And that means that it really matters

that we think about this
carefully right now –

that we reflect on our own blind spots,

our own biases,

and think about how that’s informing
the technology we’re creating

and discuss what the technology of today
will mean for tomorrow.

CEOs and scientists have weighed in
on what they think

the artificial intelligence technology
of the future will be.

Stephen Hawking warns that

“Artificial intelligence
could end mankind.”

Elon Musk warns
that it’s an existential risk

and one of the greatest risks
that we face as a civilization.

Bill Gates has made the point,

“I don’t understand
why people aren’t more concerned.”

But these views –

they’re part of the story.

The math, the models,

the basic building blocks
of artificial intelligence

are something that we can all access
and work with.

We have open-source tools
for machine learning and intelligence

that we can contribute to.

And beyond that,
we can share our experience.

We can share our experiences
with technology and how it concerns us

and how it excites us.

We can discuss what we love.

We can communicate with foresight

about the aspects of technology
that could be more beneficial

or could be more problematic over time.

If we all focus on opening up
the discussion on AI

with foresight towards the future,

this will help create a general
conversation and awareness

about what AI is now,

what it can become

and all the things that we need to do

in order to enable that outcome
that best suits us.

We already see and know this
in the technology that we use today.

We use smartphones
and digital assistants and Roombas.

Are they evil?

Maybe sometimes.

Are they beneficial?

Yes, they’re that, too.

And they’re not all the same.

And there you already see
a light shining on what the future holds.

The future continues on
from what we build and create right now.

We set into motion that domino effect

that carves out AI’s evolutionary path.

In our time right now,
we shape the AI of tomorrow.

Technology that immerses us
in augmented realities,

bringing to life past worlds.

Technology that helps people
to share their experiences

when they have difficulty communicating.

Technology built on understanding
the streaming visual worlds

used as technology for self-driving cars.

Technology built on understanding images
and generating language,

evolving into technology that helps people
who are visually impaired

be better able to access the visual world.

And we also see how technology
can lead to problems.

We have technology today

that analyzes physical
characteristics we’re born with –

such as the color of our skin
or the look of our face –

in order to determine whether or not
we might be criminals or terrorists.

We have technology
that crunches through our data,

even data relating
to our gender or our race,

in order to determine whether or not
we might get a loan.

All that we see now

is a snapshot in the evolution
of artificial intelligence.

Because where we are right now,

is within a moment of that evolution.

That means that what we do now
will affect what happens down the line

and in the future.

If we want AI to evolve
in a way that helps humans,

then we need to define
the goals and strategies

that enable that path now.

What I’d like to see is something
that fits well with humans,

with our culture and with the environment.

Technology that aids and assists
those of us with neurological conditions

or other disabilities

in order to make life
equally challenging for everyone.

Technology that works

regardless of your demographics
or the color of your skin.

And so today, what I focus on
is the technology for tomorrow

and for 10 years from now.

AI can turn out in many different ways.

But in this case,

it isn’t a self-driving car
without any destination.

This is the car that we are driving.

We choose when to speed up
and when to slow down.

We choose if we need to make a turn.

We choose what the AI
of the future will be.

There’s a vast playing field

of all the things that artificial
intelligence can become.

It will become many things.

And it’s up to us now

to figure out
what we need to put in place

to make sure the outcomes
of artificial intelligence

are the ones that will be
better for all of us.

Thank you.

(Applause)