Machine intelligence makes human morals more important
Zeynep Tufekci

So, I started my first job
as a computer programmer

in my very first year of college –

basically, as a teenager.

Soon after I started working,

writing software in a company,

a manager who worked at the company
came down to where I was,

and he whispered to me,

“Can he tell if I’m lying?”

There was nobody else in the room.

“Can who tell if you’re lying?
And why are we whispering?”

The manager pointed
at the computer in the room.

“Can he tell if I’m lying?”

Well, that manager was having
an affair with the receptionist.

(Laughter)

And I was still a teenager.

So I whisper-shouted back to him,

“Yes, the computer can tell
if you’re lying.”

(Laughter)

Well, I laughed, but actually,
the laugh’s on me.

Nowadays, there are computational systems

that can suss out
emotional states and even lying

from processing human faces.

Advertisers and even governments
are very interested.

I had become a computer programmer

because I was one of those kids
crazy about math and science.

But somewhere along the line
I’d learned about nuclear weapons,

and I’d gotten really concerned
with the ethics of science.

I was troubled.

However, because of family circumstances,

I also needed to start working
as soon as possible.

So I thought to myself, hey,
let me pick a technical field

where I can get a job easily

and where I don’t have to deal
with any troublesome questions of ethics.

So I picked computers.

(Laughter)

Well, ha, ha, ha!
All the laughs are on me.

Nowadays, computer scientists
are building platforms

that control what a billion
people see every day.

They’re developing cars
that could decide who to run over.

They’re even building machines, weapons,

that might kill human beings in war.

It’s ethics all the way down.

Machine intelligence is here.

We’re now using computation
to make all sorts of decisions,

but also new kinds of decisions.

We’re asking questions to computation
that have no single right answers,

that are subjective

and open-ended and value-laden.

We’re asking questions like,

“Who should the company hire?”

“Which update from which friend
should you be shown?”

“Which convict is more
likely to reoffend?”

“Which news item or movie
should be recommended to people?”

Look, yes, we’ve been using
computers for a while,

but this is different.

This is a historical twist,

because we cannot anchor computation
for such subjective decisions

the way we can anchor computation
for flying airplanes, building bridges,

going to the moon.

Are airplanes safer?
Did the bridge sway and fall?

There, we have agreed-upon,
fairly clear benchmarks,

and we have laws of nature to guide us.

We have no such anchors and benchmarks

for decisions in messy human affairs.

To make things more complicated,
our software is getting more powerful,

but it’s also getting less
transparent and more complex.

Recently, in the past decade,

complex algorithms
have made great strides.

They can recognize human faces.

They can decipher handwriting.

They can detect credit card fraud

and block spam

and they can translate between languages.

They can detect tumors in medical imaging.

They can beat humans in chess and Go.

Much of this progress comes
from a method called “machine learning.”

Machine learning is different
from traditional programming,

where you give the computer
detailed, exact, painstaking instructions.

It’s more like you take the system
and you feed it lots of data,

including unstructured data,

like the kind we generate
in our digital lives.

And the system learns
by churning through this data.

And also, crucially,

these systems don’t operate
under a single-answer logic.

They don’t produce a simple answer;
it’s more probabilistic:

“This one is probably more like
what you’re looking for.”
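
To make that contrast concrete, here is a minimal sketch in Python, with a made-up spam-filtering task and invented toy data: a hand-written rule on one side, and a model that learns from labeled examples and answers with a probability on the other.

```python
# Traditional programming: we spell out the exact rule ourselves.
def is_spam_by_rule(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: we feed the system labeled examples and let it find
# the pattern; it answers with a probability, not a hard yes or no.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["free money now", "win free money", "meeting at noon", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam (toy data, invented for illustration)

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

new_message = ["claim your free prize"]
prob_spam = model.predict_proba(vectorizer.transform(new_message))[0][1]
print(f"Probably spam: {prob_spam:.2f}")  # "probably more like what you're looking for"
```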

Now, the upside is:
this method is really powerful.

The head of Google’s AI systems called it,

“the unreasonable effectiveness of data.”

The downside is,

we don’t really understand
what the system learned.

In fact, that’s its power.

This is less like giving
instructions to a computer;

it’s more like training
a puppy-machine-creature

we don’t really understand or control.

So this is our problem.

It’s a problem when this artificial
intelligence system gets things wrong.

It’s also a problem
when it gets things right,

because we don’t even know which is which
when it’s a subjective problem.

We don’t know what this thing is thinking.

So, consider a hiring algorithm –

a system that uses machine learning
to decide whom to hire.

Such a system would have been trained
on previous employees' data

and instructed to find and hire

people like the existing
high performers in the company.

Sounds good.
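
A rough, hypothetical sketch of what such a system might boil down to (toy data, invented feature names, not any real company’s pipeline): label past employees as high performers or not, fit a model on whatever features the records happen to contain, and score new applicants by how much they resemble the first group.

```python
# Hypothetical hiring model: invented data and feature names, for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

past = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 7, 4],
    "referrals":        [0, 1, 2, 0, 3, 1],
    "interview_score":  [6, 8, 9, 5, 9, 7],
    "high_performer":   [0, 1, 1, 0, 1, 1],  # the label the model is trained on
})

X = past.drop(columns=["high_performer"])
y = past["high_performer"]
model = LogisticRegression().fit(X, y)

# Score a new applicant by resemblance to "the existing high performers."
applicant = pd.DataFrame(
    {"years_experience": [4], "referrals": [1], "interview_score": [8]}
)
print("hire score:", model.predict_proba(applicant)[0, 1])
```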

I once attended a conference

that brought together
human resources managers and executives,

high-level people,

using such systems in hiring.

They were super excited.

They thought that this would make hiring
more objective, less biased,

and give women
and minorities a better shot

against biased human managers.

And look – human hiring is biased.

I know.

I mean, in one of my early jobs
as a programmer,

my immediate manager would sometimes
come down to where I was

really early in the morning
or really late in the afternoon,

and she’d say, “Zeynep,
let’s go to lunch!”

I’d be puzzled by the weird timing.

It’s 4pm. Lunch?

I was broke, so free lunch. I always went.

I later realized what was happening.

My immediate managers
had not confessed to their higher-ups

that the programmer they hired
for a serious job was a teen girl

who wore jeans and sneakers to work.

I was doing a good job,
I just looked wrong

and was the wrong age and gender.

So hiring in a gender- and race-blind way

certainly sounds good to me.

But with these systems,
it is more complicated, and here’s why:

Currently, computational systems
can infer all sorts of things about you

from your digital crumbs,

even if you have not
disclosed those things.

They can infer your sexual orientation,

your personality traits,

your political leanings.

They have predictive power
with high levels of accuracy.

Remember – for things
you haven’t even disclosed.

This is inference.

I have a friend who developed
such computational systems

to predict the likelihood
of clinical or postpartum depression

from social media data.

The results are impressive.

Her system can predict
the likelihood of depression

months before the onset of any symptoms –

months before.

No symptoms, there’s prediction.

She hopes it will be used
for early intervention. Great!

But now put this in the context of hiring.

So at this human resources
managers conference,

I approached a high-level manager
in a very large company,

and I said to her, “Look,
what if, unbeknownst to you,

your system is weeding out people
with high future likelihood of depression?

They’re not depressed now,
just maybe in the future, more likely.

What if it’s weeding out women
more likely to be pregnant

in the next year or two
but aren’t pregnant now?

What if it’s hiring aggressive people
because that’s your workplace culture?”

You can’t tell this by looking
at gender breakdowns.

Those may be balanced.

And since this is machine learning,
not traditional coding,

there is no variable there
labeled “higher risk of depression,”

“higher risk of pregnancy,”

“aggressive guy scale.”

Not only do you not know
what your system is selecting on,

you don’t even know
where to begin to look.

It’s a black box.

It has predictive power,
but you don’t understand it.
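
One way to see why there is nowhere to begin to look: open up a trained model and all you find is arrays of learned weights. Here is a minimal sketch on randomly generated toy data; nothing inside carries a human-readable label.

```python
# Minimal "black box" sketch: the trained model exposes only numeric weights,
# never a variable named "higher risk of depression." Toy data only.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))                    # 30 unnamed features
y = (X[:, 3] + 0.5 * X[:, 17] > 0).astype(int)    # some hidden pattern in the data

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)

for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {weights.shape}")
    print(weights[:3, :3])  # just anonymous numbers, with predictive power
```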

“What safeguards,” I asked, “do you have

to make sure that your black box
isn’t doing something shady?”

She looked at me as if I had
just stepped on 10 puppy tails.

(Laughter)

She stared at me and she said,

“I don’t want to hear
another word about this.”

And she turned around and walked away.

Mind you – she wasn’t rude.

It was clearly: what I don’t know
isn’t my problem, go away, death stare.

(Laughter)

Look, such a system
may even be less biased

than human managers in some ways.

And it could make monetary sense.

But it could also lead

to a steady but stealthy
shutting out of the job market

of people with higher risk of depression.

Is this the kind of society
we want to build,

without even knowing we’ve done this,

because we turned decision-making
over to machines we don’t totally understand?

Another problem is this:

these systems are often trained
on data generated by our actions,

human imprints.

Well, they could just be
reflecting our biases,

and these systems
could be picking up on our biases

and amplifying them

and showing them back to us,

while we’re telling ourselves,

“We’re just doing objective,
neutral computation.”

Researchers found that on Google,

women are less likely than men
to be shown job ads for high-paying jobs.

And searching for African-American names

is more likely to bring up ads
suggesting criminal history,

even when there is none.

Such hidden biases
and black-box algorithms,

which researchers sometimes uncover
and sometimes never even know about,

can have life-altering consequences.

In Wisconsin, a defendant
was sentenced to six years in prison

for evading the police.

You may not know this,

but algorithms are increasingly used
in parole and sentencing decisions.

His sentence was informed
by one such algorithmic risk score.

He wanted to know:
How is this score calculated?

It’s a commercial black box.

The company refused to let its algorithm
be challenged in open court.

But ProPublica, an investigative
nonprofit, audited that very algorithm

with what public data they could find,

and found that its outcomes were biased

and its predictive power
was dismal, barely better than chance,

and it was wrongly labeling
black defendants as future criminals

at twice the rate of white defendants.
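
An audit of that kind boils down to comparing error rates across groups. A stripped-down sketch of such a check, with invented numbers rather than ProPublica’s data, might look like this:

```python
# Sketch of a disparate-error-rate check: among people who did NOT reoffend,
# how often was each group labeled "high risk"? Numbers invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":      ["black"] * 6 + ["white"] * 6,
    "high_risk":  [1, 1, 1, 0, 1, 0,  0, 1, 0, 0, 1, 0],
    "reoffended": [0, 1, 0, 0, 0, 0,  0, 1, 0, 0, 0, 0],
})

did_not_reoffend = df[df["reoffended"] == 0]
false_positive_rate = did_not_reoffend.groupby("group")["high_risk"].mean()
print(false_positive_rate)  # share of non-reoffenders wrongly labeled high risk, per group
```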

So, consider this case:

This woman was late
picking up her godsister

from a school in Broward County, Florida,

so she was running down the street
with a friend of hers.

They spotted an unlocked kid’s bike
and a scooter on a porch

and foolishly jumped on it.

As they were speeding off,
a woman came out and said,

“Hey! That’s my kid’s bike!”

They dropped it, they walked away,
but they were arrested.

She was wrong, she was foolish,
but she was also just 18.

She had a couple of juvenile misdemeanors.

Meanwhile, that man had been arrested
for shoplifting in Home Depot –

85 dollars' worth of stuff,
a similar petty crime.

But he had two prior
armed robbery convictions.

But the algorithm scored her
as high risk, and not him.

Two years later, ProPublica found
that she had not reoffended.

It was just hard for her
to get a job with her record.

He, on the other hand, did reoffend

and is now serving an eight-year
prison term for a later crime.

Clearly, we need to audit our black boxes

and not let them have
this kind of unchecked power.

(Applause)

Audits are great and important,
but they don’t solve all our problems.

Take Facebook’s powerful
news feed algorithm –

you know, the one that ranks everything
and decides what to show you

from all the friends and pages you follow.

Should you be shown another baby picture?

(Laughter)

A sullen note from an acquaintance?

An important but difficult news item?

There’s no right answer.

Facebook optimizes
for engagement on the site:

likes, shares, comments.
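
Facebook’s actual formula is not public, but the core idea of ranking by expected engagement can be sketched hypothetically: score each post by the interactions it is likely to attract, then show the highest-scoring posts first. The posts and weights below are invented.

```python
# Hypothetical engagement-based ranking, not Facebook's actual algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    expected_likes: float
    expected_shares: float
    expected_comments: float

def engagement_score(post: Post) -> float:
    # Invented weights; the point is only that "likable" content wins.
    return post.expected_likes + 2.0 * post.expected_shares + 1.5 * post.expected_comments

feed = [
    Post("Baby picture!", expected_likes=120, expected_shares=5, expected_comments=30),
    Post("Difficult news from Ferguson", expected_likes=8, expected_shares=12, expected_comments=4),
]

for post in sorted(feed, key=engagement_score, reverse=True):
    print(round(engagement_score(post)), post.text)
```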

In August of 2014,

protests broke out in Ferguson, Missouri,

after the killing of an African-American
teenager by a white police officer,

under murky circumstances.

The news of the protests was all over

my algorithmically
unfiltered Twitter feed,

but nowhere on my Facebook.

Was it my Facebook friends?

I disabled Facebook’s algorithm,

which is hard because Facebook
keeps trying to bring you

back under the algorithm’s control,

and saw that my friends
were talking about it.

It’s just that the algorithm
wasn’t showing it to me.

I researched this and found
this was a widespread problem.

The story of Ferguson
wasn’t algorithm-friendly.

It’s not “likable.”

Who’s going to click on “like?”

It’s not even easy to comment on.

Without likes and comments,

the algorithm was likely showing it
to even fewer people,

so we didn’t get to see this.

Instead, that week,

Facebook’s algorithm highlighted this,

which is the ALS Ice Bucket Challenge.

Worthy cause; dump ice water,
donate to charity, fine.

But it was super algorithm-friendly.

The machine made this decision for us.

A very important
but difficult conversation

might have been smothered,

had Facebook been the only channel.

Now, finally, these systems
can also be wrong

in ways that don’t resemble human systems.

Do you guys remember Watson,
IBM’s machine-intelligence system

that wiped the floor
with human contestants on Jeopardy?

It was a great player.

But then, for Final Jeopardy,
Watson was asked this question:

“Its largest airport is named
for a World War II hero,

its second-largest
for a World War II battle.”

(Hums Final Jeopardy music)

Chicago.

The two humans got it right.

Watson, on the other hand,
answered “Toronto” –

for a US city category!

The impressive system also made an error

that a human would never make,
a second-grader wouldn’t make.

Our machine intelligence can fail

in ways that don’t fit
error patterns of humans,

in ways we won’t expect
and be prepared for.

It’d be lousy not to get a job
one is qualified for,

but it would triple suck
if it was because of stack overflow

in some subroutine.

(Laughter)

In May of 2010,

a flash crash on Wall Street
fueled by a feedback loop

in Wall Street’s “sell” algorithm

wiped a trillion dollars
of value in 36 minutes.
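
The mechanics of such a feedback loop can be caricatured in a few lines: selling pushes the price down, and the lower price triggers still more automated selling. This is a toy sketch with invented parameters, not a model of any real trading system.

```python
# Toy feedback loop: each price drop triggers more selling, which drops the price further.
price = 100.0
for minute in range(10):
    sell_pressure = 1.0 + 0.5 * max(0.0, 100.0 - price)  # falling prices trigger more selling
    price -= sell_pressure
    print(f"minute {minute}: price {price:.2f}")
```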

I don’t even want to think
what “error” means

in the context of lethal
autonomous weapons.

So yes, humans have always been biased.

Decision makers and gatekeepers,

in courts, in news, in war …

they make mistakes;
but that’s exactly my point.

We cannot escape
these difficult questions.

We cannot outsource
our responsibilities to machines.

(Applause)

Artificial intelligence does not give us
a “Get out of ethics free” card.

Data scientist Fred Benenson
calls this math-washing.

We need the opposite.

We need to cultivate algorithm suspicion,
scrutiny and investigation.

We need to make sure we have
algorithmic accountability,

auditing and meaningful transparency.

We need to accept
that bringing math and computation

to messy, value-laden human affairs

does not bring objectivity;

rather, the complexity of human affairs
invades the algorithms.

Yes, we can and we should use computation

to help us make better decisions.

But we have to own up
to our moral responsibility to judgment,

and use algorithms within that framework,

not as a means to abdicate
and outsource the responsibilities

we owe to one another, human to human.

Machine intelligence is here.

That means we must hold on ever tighter

to human values and human ethics.

Thank you.

(Applause)
