How to keep human bias out of AI
Kriti Sharma

Translator: Ivana Korom
Reviewer: Joanna Pietrulewicz

How many decisions
have been made about you today,

or this week or this year,

by artificial intelligence?

I build AI for a living

so, full disclosure, I’m kind of a nerd.

And because I’m kind of a nerd,

whenever a new story comes out

about artificial intelligence
stealing all our jobs,

or robots getting citizenship
of an actual country,

I’m the person my friends
and followers message

freaking out about the future.

We see this everywhere.

This media panic that
our robot overlords are taking over.

We could blame Hollywood for that.

But in reality, that’s not the problem
we should be focusing on.

There is a more pressing danger,
a bigger risk with AI,

that we need to fix first.

So we are back to this question:

How many decisions
have been made about you today by AI?

And how many of these

were based on your gender,
your race or your background?

Algorithms are being used all the time

to make decisions about who we are
and what we want.

Some of the women in this room
will know what I’m talking about

if you’ve been made to sit through
those pregnancy test adverts on YouTube

like 1,000 times.

Or you’ve scrolled past adverts
of fertility clinics

on your Facebook feed.

Or in my case, Indian marriage bureaus.

(Laughter)

But AI isn’t just being used
to make decisions

about what products we want to buy

or which show we want to binge watch next.

I wonder how you’d feel about someone
who thought things like this:

“A black or Latino person

is less likely than a white person
to pay off their loan on time.”

“A person called John
makes a better programmer

than a person called Mary.”

“A black man is more likely to be
a repeat offender than a white man.”

You’re probably thinking,

“Wow, that sounds like a pretty sexist,
racist person,” right?

These are some real decisions
that AI has made very recently,

based on the biases
it has learned from us,

from the humans.

AI is being used to help decide
whether or not you get that job interview;

how much you pay for your car insurance;

how good your credit score is;

and even what rating you get
in your annual performance review.

But these decisions
are all being filtered through

its assumptions about our identity,
our race, our gender, our age.

How is that happening?

Now, imagine an AI is helping
a hiring manager

find the next tech leader in the company.

So far, the manager
has been hiring mostly men.

So the AI learns men are more likely
to be programmers than women.

And it’s a very short leap from there to:

men make better programmers than women.

We have reinforced
our own bias into the AI.

And now, it’s screening out
female candidates.

Hang on, if a human
hiring manager did that,

we’d be outraged, we wouldn’t allow it.

This kind of gender
discrimination is not OK.

And yet somehow,
AI has become above the law,

because a machine made the decision.

That’s not it.

We are also reinforcing our bias
in how we interact with AI.

How often do you use a voice assistant
like Siri, Alexa or even Cortana?

They all have two things in common:

one, they can never get my name right,

and second, they are all female.

They are designed to be
our obedient servants,

turning your lights on and off,
ordering your shopping.

You get male AIs too,
but they tend to be more high-powered,

like IBM Watson,
making business decisions,

Salesforce Einstein
or ROSS, the robot lawyer.

So poor robots, even they suffer
from sexism in the workplace.

(Laughter)

Think about how these two things combine

and affect a kid growing up
in today’s world around AI.

So they’re doing some research
for a school project

and they Google images of CEO.

The algorithm shows them
results of mostly men.

And now, they Google personal assistant.

As you can guess,
it shows them mostly females.

And then they want to put on some music,
and maybe order some food,

and now, they are barking orders
at an obedient female voice assistant.

Some of our brightest minds
are creating this technology today.

Technology that they could have created
in any way they wanted.

And yet, they have chosen to create it
in the style of a 1950s "Mad Men" secretary.

Yay!

But OK, don’t worry,

this is not going to end
with me telling you

that we are all heading towards
sexist, racist machines running the world.

The good news about AI
is that it is entirely within our control.

We get to teach the right values,
the right ethics to AI.

So there are three things we can do.

One, we can be aware of our own biases

and the bias in machines around us.

Two, we can make sure that diverse teams
are building this technology.

And three, we have to give it
diverse experiences to learn from.

I can talk about the first two
from personal experience.

When you work in technology

and you don’t look like
a Mark Zuckerberg or Elon Musk,

your life is a little bit difficult,
your ability gets questioned.

Here’s just one example.

Like most developers,
I often join online tech forums

and share my knowledge to help others.

And I’ve found,

when I log on as myself,
with my own photo, my own name,

I tend to get questions
or comments like this:

“What makes you think
you’re qualified to talk about AI?”

“What makes you think
you know about machine learning?”

So, as you do, I made a new profile,

and this time, instead of my own picture,
I chose a cat with a jet pack on it.

And I chose a name
that did not reveal my gender.

You can probably guess
where this is going, right?

So, this time, I didn’t get any of those
patronizing comments about my ability

and I was able to actually
get some work done.

And it sucks, guys.

I’ve been building robots since I was 15,

I have a few degrees in computer science,

and yet, I had to hide my gender

in order for my work
to be taken seriously.

So, what’s going on here?

Are men just better
at technology than women?

Another study found

that when women coders on one platform
hid their gender, like myself,

their code was accepted
four percent more often than men's.

So this is not about the talent.

This is about an elitism in AI

that says a programmer
needs to look like a certain person.

What we really need to do
to make AI better

is bring people
from all kinds of backgrounds.

We need people who can
write and tell stories

to help us create personalities of AI.

We need people who can solve problems.

We need people
who face different challenges

and we need people who can tell us
what are the real issues that need fixing

and help us find ways
that technology can actually fix it.

Because, when people
from diverse backgrounds come together,

when we build things in the right way,

the possibilities are limitless.

And that’s what I want to end
by talking to you about.

Fewer racist robots, fewer machines
that are going to take our jobs –

and more about what technology
can actually achieve.

So, yes, some of the energy
in the world of AI,

in the world of technology

is going to be about
what ads you see on your stream.

But a lot of it is going towards
making the world so much better.

Think about a pregnant woman
in the Democratic Republic of Congo,

who has to walk 17 hours
to her nearest rural prenatal clinic

to get a checkup.

What if she could get a diagnosis
on her phone, instead?

Or think about what AI could do

for those one in three women
in South Africa

who face domestic violence.

If it wasn’t safe to talk out loud,

they could get an AI service
to raise the alarm,

get financial and legal advice.

These are all real examples of projects
that people, including myself,

are working on right now, using AI.

So, I’m sure in the next couple of days
there will be yet another news story

about the existential risk,

robots taking over
and coming for your jobs.

(Laughter)

And when something like that happens,

I know I’ll get the same messages
worrying about the future.

But I feel incredibly positive
about this technology.

This is our chance to remake the world
into a much more equal place.

But to do that, we need to build it
the right way from the get go.

We need people of different genders,
races, sexualities and backgrounds.

We need women to be the makers

and not just the machines
who do the makers' bidding.

We need to think very carefully
what we teach machines,

what data we give them,

so they don’t just repeat
our own past mistakes.

So I hope I leave you
thinking about two things.

First, I hope you leave
thinking about bias today.

And that the next time
you scroll past an advert

that assumes you are interested
in fertility clinics

or online betting websites,

that you think and remember

that the same technology is assuming
that a black man will reoffend.

Or that a woman is more likely
to be a personal assistant than a CEO.

And I hope that reminds you
that we need to do something about it.

And second,

I hope you think about the fact

that you don’t need to look a certain way

or have a certain background
in engineering or technology

to create AI,

which is going to be
a phenomenal force for our future.

You don’t need to look
like a Mark Zuckerberg,

you can look like me.

And it is up to all of us in this room

to convince the governments
and the corporations

to build AI technology for everyone,

including the edge cases.

And for us all to get education

about this phenomenal
technology in the future.

Because if we do that,

then we’ve only just scratched the surface
of what we can achieve with AI.

Thank you.

(Applause)
