Unmasking Misinformation

Transcriber: Eunice Tan
Reviewer: David DeRuwe

Wouldn’t it be great

if there were such a thing
as a misinformation mask?

A mask that would protect us
from being infected by false information

and from spreading it
to others unknowingly,

just like we do to protect
ourselves from COVID.

The analogy is not a stretch.

The first law of misinformation
is “we’re all vulnerable” -

we’re all vulnerable to believing
and spreading false information.

Unfortunately, unlike COVID,
there is no mask to filter information,

nor can we expect a vaccine.

Yes, we need to hold government
and tech companies accountable -

and that’s another story -

but we can’t rely solely on the efforts
of others to solve this problem.

And that means that we as individuals
need to play an active role

in reducing the spread
and impact of misinformation.

So the mask?

We can forget about it.

I’m a research scientist

at the University of Washington
Information School

and co-founder of the Center
for an Informed Public,

a university-wide effort
to study misinformation

and strengthen democratic discourse.

Let’s start with something
currently in the news:

the killing of George Floyd

and the anti-racism protests
that have swept the country.

This is an issue
that hits close to home too.

My daughter is Black,

and there’s been a lot of misinformation
targeting the Black community.

Now, sometimes people
are simply misinformed,

and this happens all the time

but especially at the outset
of an event like this or COVID,

when there’s a vacuum of good information,

creating a ripe environment
for rumors to spread.

I’ve had to gently correct
family and friends

who have come to me with false information
about Seattle, where I live.

Having these conversations
can be a bit uncomfortable,

but it’s tolerable,
and actually they’re important.

After all, we’re all vulnerable,

and that’s how I approach
these conversations.

On the other end of the spectrum
are conspiracies and extreme views.

There are stories that the killing
of George Floyd was staged

or that it was financed to trigger riots

or that he’s not even dead.

I don’t have anybody in my network
who believes these,

but common sense tells us
if you have an issue like this,

and one person’s a believer
of these conspiracies

and another one isn’t,

that conversation isn’t going to go well.

It’s between these two ends
of the spectrum

that we find the most problematic
and challenging forms of misinformation:

the half-truths, the plausible narratives,

the believable stories that can
and do change people’s minds,

sometimes leading them
down a path to extremism.

The individuals and organized groups
that are behind these stories

are spreading them
with the intent to deceive.

This is what we call
a disinformation campaign.

Disinformation is the intentional
spread of false information,

motivated by some political,
social, financial, or other agenda.

Let’s take a look at an example.

[ANTIFA America
@ANTIFA_US - ALERT]

This tweet purports to belong
to a national antifa organization.

The logo looks real,
the account name looks real,

the content very plausible.

If you took this at face value,

you’d think that this organization
was behind the protest

and working to bring chaos
to white communities.

In fact, this account was proven
to belong to a white nationalist group,

and using pretty straightforward tactics,

they were able to get this message
spread far and wide.

And who could blame someone
for believing that this was authentic?

After all, we’re all vulnerable.

So what can we do about it?

By now, nearly everyone knows
what misinformation is.

It’s headline news;
it’s in people’s social media feeds.

But few people understand
how misinformation works.

And the main point of my talk
is we all need to learn the basics.

We need to develop an understanding
of how individuals and organized groups

use technology and exploit social media
platforms to spread misinformation,

like the tweet I just showed.

We also need to develop some
better information behaviors ourselves.

One I like a lot is “slow down”:

Pause before sharing something online.

We’ll talk more about that in a moment.

Let’s look at some examples.

We’re going to start with the eyes:

Yes, they can deceive.

Here are two close-ups of faces.

Guess what?

Only one of them is real;
the other is computer-generated.

[whichfaceisreal.com]

These photos are from an online quiz
called “Which Face Is Real?”

created by my colleagues
Jevin West and Carl Bergstrom

to build awareness about
the power of artificial intelligence.

The technology to create these images
is now easily accessible.

One can generate thousands of these images

and use them to, for instance,
create fake social media accounts.

In case you were wondering,
the one on the left is computer-generated.

Let’s look at another,
more recent technology:

deepfakes.

Deepfakes are what you get

when you combine artificial intelligence
with machine learning

to create audio and video of people
saying and doing things they never did.

Here’s a clip:

It’s called “In Event of Moon Disaster.”

It was created by MIT this year
as an art and educational project.

[Project Name: In Event of Moon Disaster

Directors: Francesca Panetta,
Halsey Burgund

Production: MIT Center
for Advanced Virtuality]

(Video) Richard Nixon:
Good evening, my fellow Americans.

Fate has ordained that the men
who went to the moon to explore in peace

will stay on the moon to rest in peace.

For every human being who looks up
at the moon in the nights to come

will know that there is some corner
of another world that is forever mankind.

Good night.

Chris Coward: Obviously,
that speech never occurred.

In case you need a history refresher,
the moon landing was a success!

Until now, this form of media manipulation
has mostly targeted individuals

or been used for entertainment purposes.

However, we’ve also seen
the emergence of deepfakes

influencing politics and elections
in other countries.

Could deepfakes be a factor in America’s
upcoming presidential election?

It’s possible, and many experts think so.

In cooperation with Microsoft,

we just launched an online quiz
called “Spot Deepfakes”

to raise awareness about this technology.

I hope you’ll check it out.

Moving on, people not only
say and do things they never did,

they may not even be people.

Enter the world of social media bots.

Social media bots are accounts
that have been programmed

to generate messages, follow other users,
and retweet their messages.

It’s relatively easy, again, to create
thousands or even millions of bots

and unleash them onto the internet.
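To make that concrete, here is a minimal sketch
of what one such programmed account amounts to,

assuming the Tweepy library for the Twitter API;
the credentials, message text, and IDs below
are placeholders invented for illustration.

import tweepy

# Authenticate one bot account (placeholder credentials, not real keys).
client = tweepy.Client(
    consumer_key="KEY",
    consumer_secret="SECRET",
    access_token="TOKEN",
    access_token_secret="TOKEN_SECRET",
)

# The three behaviors described above: post, follow, retweet.
client.create_tweet(text="Automated message goes here.")  # generate a message
client.follow_user(target_user_id=123456789)              # follow another user
client.retweet(tweet_id=987654321)                         # amplify someone else's message

Run a script like this on a schedule,
across many accounts,

and you have the skeleton of a bot network.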

Let’s look at some examples.

Again, we’ll stick with the topic of race.

This is a retweet network graph
of the Black Lives Matter discourse

back in 2016.

It was created

by my colleague Kate Starbird
along with Emma Spiro and their students.

[Retweet Network Graph]

What it shows is two communities:

the pink - pro Black Lives Matter;
and green - anti Black Lives Matter.

As you can see, the conversation
was divided into two echo chambers,

with each community retweeting and sharing
messages of like-minded members.
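For anyone who wants to explore a graph like this,
here is a minimal sketch using the NetworkX library;

the accounts and retweet edges are invented
for illustration, not drawn from the actual study.

import networkx as nx
from networkx.algorithms import community

# A retweet edge u -> v means account u retweeted account v.
retweets = [
    ("user_a", "activist_1"), ("user_b", "activist_1"), ("user_a", "user_b"),
    ("user_c", "pundit_1"), ("user_d", "pundit_1"), ("user_c", "user_d"),
]
G = nx.DiGraph()
G.add_edges_from(retweets)

# Community detection on the undirected projection surfaces the echo chambers.
groups = community.greedy_modularity_communities(G.to_undirected())
for i, members in enumerate(groups):
    print(f"community {i}: {sorted(members)}")

With real data, coloring each detected community separately
is what produces the pink and green clusters in the graph.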

Now, let’s add the impact
of a disinformation campaign.

[Retweet Network Graph]

Here we have the same graph,
this time with orange dots and lines.

The orange represents
the Internet Research Agency,

Russia’s propaganda organization.

Specifically, the IRA created
false accounts and false messages

and successfully had their messages
retweeted by others in both communities.

Why did Russia
go to the effort to do this?

Remember:

Disinformation campaigns have a motive.

In this case, it was to get Trump elected

as a means to polarize our society
and weaken our democracy.

No matter what side
of the political spectrum you are on

or what beliefs you hold,

you should be angry
that there are those out there

who are trying to infiltrate
your communities

and manipulate your thoughts.

So why was Russia’s campaign so effective?

Why is any disinformation
campaign effective?

This is where we enter
the cognitive realm.

Disinformation is effective
because it exploits personal beliefs

to trigger psychological
and emotional responses,

such as to make you fearful or angry,

like the antifa tweet I showed earlier.

It’s effective because
it’s accomplished slowly over time,

through multiple encounters
across multiple platforms -

from Facebook to Twitter
and YouTube and back.

It’s the weaponization of information,

designed to undermine truth
and our trust in each other.

It’s a big problem,
and many people are working on it:

Tech companies and social media platforms
are working on detection technologies

to remove harmful misinformation.

Policy experts are working
on legal remedies,

mindful of our First Amendment rights.

Journalists are working
on how to tell these stories

without adding fuel to the fire.

And teachers and librarians

are retooling their approaches
to teaching information literacy.

All of these efforts are important,

and many organizations are working on it,
including our center.

But my message has been
“this is not enough,”

and we have to play a role as well.

First, we need to develop
greater situational awareness,

or information awareness if you will,

of how misinformation works.

That’s been the topic of this talk,
but I’ve only scratched the surface.

I hope people will continue
to educate themselves,

especially as new technologies
and tactics emerge, as they will.

Already we’ve witnessed Russia
deploying some new techniques,

targeting our upcoming election.

Second, we need to practice
better ways of navigating information.

The conventional approaches
that most of us grew up with,

like triangulating sources of information,

they’re not sufficient anymore,
as I hope my examples have made clear.

One approach that our center
is promoting is called “SIFT,”

developed by our partner Mike Caulfield
at Washington State University.

SIFT stands for “stop,”

“investigate” the source,

“find” better coverage,

and “trace” claims
to their original context.

These “moves,” as Mike calls them,
take 30 seconds or less to execute,

and they can make a huge difference.

Again, I’m very fond of “stop.”

And if there’s one thing
that people can do right away,

it’s to pause, take a look at the claim:

Does it pass the smell test?

In closing, misinformation
is a foundational problem.

When the World Health Organization

made one of its early
pronouncements about COVID,

they called it simultaneously
a pandemic and an infodemic,

and they were right.

In fact,

“infodemic” could be used to describe
almost every challenge we face today.

Misinformation also tears
at our social fabric

and our relationships
with family and friends.

No one I’ve spoken with,
whether they’re left, right, or center,

is satisfied with this situation.

It worries everybody.

And this perhaps is a hopeful sign,

but only if we all play our parts.

Thank you for listening.

[whichfaceisreal.com

spotdeepfakes.org

moondisaster.org

infodemic.blog

cip.uw.edu]
