How deepfakes undermine truth and threaten democracy
Danielle Citron

[This talk contains mature content]

Rana Ayyub is a journalist in India

whose work has exposed
government corruption

and human rights violations.

And over the years,

she’s gotten used to vitriol
and controversy around her work.

But none of it could have prepared her
for what she faced in April 2018.

She was sitting in a café with a friend
when she first saw it:

a two-minute, 20-second video
of her engaged in a sex act.

And she couldn’t believe her eyes.

She had never made a sex video.

But unfortunately, thousands
upon thousands of people

would believe it was her.

I interviewed Ms. Ayyub
about three months ago,

in connection with my book
on sexual privacy.

I’m a law professor, lawyer
and civil rights advocate.

So it’s incredibly frustrating
knowing that right now,

law could do very little to help her.

And as we talked,

she explained that she should have seen
the fake sex video coming.

She said, “After all, sex is so often used
to demean and to shame women,

especially minority women,

and especially minority women
who dare to challenge powerful men,”

as she had in her work.

The fake sex video went viral in 48 hours.

All of her online accounts were flooded
with screenshots of the video,

with graphic rape and death threats

and with slurs about her Muslim faith.

Online posts suggested that
she was “available” for sex.

And she was doxed,

which means that her home address
and her cell phone number

were spread across the internet.

The video was shared
more than 40,000 times.

Now, when someone is targeted
with this kind of cybermob attack,

the harm is profound.

Rana Ayyub’s life was turned upside down.

For weeks, she could hardly eat or speak.

She stopped writing and closed
all of her social media accounts,

which is, you know, a tough thing to do
when you’re a journalist.

And she was afraid to go outside
her family’s home.

What if the posters
made good on their threats?

The UN Human Rights Council
confirmed that she wasn’t being crazy.

It issued a public statement saying
that they were worried about her safety.

What Rana Ayyub faced was a deepfake:

machine-learning technology

that manipulates or fabricates
audio and video recordings

to show people doing and saying things

that they never did or said.

Deepfakes appear authentic
and realistic, but they’re not;

they’re total falsehoods.

Although the technology
is still developing in its sophistication,

it is widely available.

Now, the most recent attention
to deepfakes arose,

as so many things do online,

with pornography.

In early 2018,

someone posted a tool on Reddit

to allow users to insert faces
into porn videos.

And what followed was a cascade
of fake porn videos

featuring people’s favorite
female celebrities.

And today, you can go on YouTube
and pull up countless tutorials

with step-by-step instructions

on how to make a deepfake
with a desktop application.

And soon we may even be able
to make them on our cell phones.

Now, it’s the interaction
of some of our most basic human frailties

and network tools

that can turn deepfakes into weapons.

So let me explain.

As human beings, we have
a visceral reaction to audio and video.

We believe they’re true,

on the notion that
of course you can believe

what your eyes and ears are telling you.

And it’s that mechanism

that might undermine our shared
sense of reality.

Although we believe deepfakes
to be true, they’re not.

And we’re attracted
to the salacious, the provocative.

We tend to believe
and to share information

that’s negative and novel.

And researchers have found that online
hoaxes spread 10 times faster

than accurate stories.

Now, we’re also drawn to information

that aligns with our viewpoints.

Psychologists call that tendency
“confirmation bias.”

And social media platforms
supercharge that tendency,

by allowing us to instantly
and widely share information

that accords with our viewpoints.

Now, deepfakes have the potential to cause
grave individual and societal harm.

So, imagine a deepfake

that shows American soldiers
in Afghanistan burning a Koran.

You can imagine that that deepfake
would provoke violence

against those soldiers.

And what if the very next day

there’s another deepfake that drops,

that shows a well-known imam
based in London

praising the attack on those soldiers?

We might see violence and civil unrest,

not only in Afghanistan
and the United Kingdom,

but across the globe.

And you might say to me,

“Come on, Danielle, that’s far-fetched.”

But it’s not.

We’ve seen falsehoods spread

on WhatsApp and other
online message services

lead to violence
against ethnic minorities.

And that was just text –

imagine if it were video.

Now, deepfakes have the potential
to corrode the trust that we have

in democratic institutions.

So, imagine the night before an election.

There’s a deepfake showing
one of the major party candidates

gravely sick.

The deepfake could tip the election

and shake our sense
that elections are legitimate.

Imagine if the night before
an initial public offering

of a major global bank,

there was a deepfake
showing the bank’s CEO

drunkenly spouting conspiracy theories.

The deepfake could tank the IPO,

and worse, shake our sense
that financial markets are stable.

So deepfakes can exploit and magnify
the deep distrust that we already have

in politicians, business leaders
and other influential leaders.

They find an audience
primed to believe them.

And the pursuit of truth
is on the line as well.

Technologists expect
that with advances in AI,

soon it may be difficult if not impossible

to tell the difference between
a real video and a fake one.

So how can the truth emerge
in a deepfake-ridden marketplace of ideas?

Will we just proceed along
the path of least resistance

and believe what we want to believe,

truth be damned?

And not only might we believe the fakery,

we might start disbelieving the truth.

We’ve already seen people invoke
the phenomenon of deepfakes

to cast doubt on real evidence
of their wrongdoing.

We’ve heard politicians say of audio
of their disturbing comments,

“Come on, that’s fake news.

You can’t believe what your eyes
and ears are telling you.”

And it’s that risk

that Professor Robert Chesney and I
call the “liar’s dividend”:

the risk that liars will invoke deepfakes

to escape accountability
for their wrongdoing.

So we’ve got our work cut out for us,
there’s no doubt about it.

And we’re going to need
a proactive solution

from tech companies, from lawmakers,

law enforcers and the media.

And we’re going to need
a healthy dose of societal resilience.

So right now, we’re engaged
in a very public conversation

about the responsibility
of tech companies.

And my advice to social media platforms

has been to change their terms of service
and community guidelines

to ban deepfakes that cause harm.

That determination,
that’s going to require human judgment,

and it’s expensive.

But we need human beings

to look at the content
and context of a deepfake

to figure out if it is
a harmful impersonation

or instead, if it’s valuable
satire, art or education.

So now, what about the law?

Law is our educator.

It teaches us about
what’s harmful and what’s wrong.

And it shapes behavior, deterring wrongdoing
by punishing perpetrators

and securing remedies for victims.

Right now, law is not up to
the challenge of deepfakes.

Across the globe,

we lack well-tailored laws

designed to tackle
digital impersonations

that invade sexual privacy,

that damage reputations

and that cause emotional distress.

What happened to Rana Ayyub
is increasingly commonplace.

Yet, when she went
to law enforcement in Delhi,

she was told nothing could be done.

And the sad truth is
that the same would be true

in the United States and in Europe.

So we have a legal vacuum
that needs to be filled.

My colleague Dr. Mary Anne Franks and I
are working with US lawmakers

to devise legislation that would ban
harmful digital impersonations

that are tantamount to identity theft.

And we’ve seen similar moves

in Iceland, the UK and Australia.

But of course, that’s just a small piece
of the regulatory puzzle.

Now, I know law is not a cure-all. Right?

It’s a blunt instrument.

And we’ve got to use it wisely.

It also has some practical impediments.

You can’t leverage law against people
you can’t identify and find.

And if a perpetrator lives
outside the country

where a victim lives,

then you may not be able to insist

that the perpetrator
come into local courts

to face justice.

And so we’re going to need
a coordinated international response.

Education has to be part
of our response as well.

Law enforcers are not
going to enforce laws

they don’t know about

or tackle problems
they don’t understand.

In my research on cyberstalking,

I found that law enforcement
lacked the training

to understand the laws available to them

and the problem of online abuse.

And so often they told victims,

“Just turn your computer off.
Ignore it. It’ll go away.”

And we saw that in Rana Ayyub’s case.

She was told, “Come on,
you’re making such a big deal about this.

It’s boys being boys.”

And so we need to pair new legislation
with efforts at training.

And education has to be aimed
at the media as well.

Journalists need educating
about the phenomenon of deepfakes

so they don’t amplify and spread them.

And this is the part
where we’re all involved.

Each and every one of us needs educating.

We click, we share, we like,
and we don’t even think about it.

We need to do better.

We need far better radar for fakery.

So as we’re working
through these solutions,

there’s going to be
a lot of suffering to go around.

Rana Ayyub is still wrestling
with the fallout.

She still doesn’t feel free
to express herself on- and offline.

And as she told me,

she still feels like there are thousands
of eyes on her naked body,

even though, intellectually,
she knows it wasn’t her body.

And she has frequent panic attacks,

especially when someone she doesn’t know
tries to take her picture.

“What if they’re going to make
another deepfake?” she thinks to herself.

And so for the sake of
individuals like Rana Ayyub

and the sake of our democracy,

we need to do something right now.

Thank you.

(Applause)
