How you can help transform the internet into a place of trust
Claire Wardle

No matter who you are or where you live,

I’m guessing that you have
at least one relative

who likes to forward those emails.

You know the ones I’m talking about –

the ones with dubious claims
or conspiracy videos.

And you’ve probably
already muted them on Facebook

for sharing social posts like this one.

It’s an image of a banana

with a strange red cross
running through the center.

And the text around it is warning people

not to eat fruits that look like this,

suggesting they’ve been
injected with blood

contaminated with the HIV virus.

And the social share message
above it simply says,

“Please forward to save lives.”

Now, fact-checkers have been debunking
this one for years,

but it’s one of those rumors
that just won’t die.

A zombie rumor.

And, of course, it’s entirely false.

It might be tempting to laugh
at an example like this, to say,

“Well, who would believe this, anyway?”

But the reason it’s a zombie rumor

is because it taps into people’s
deepest fears about their own safety

and that of the people they love.

And if you spend as much time
as I have looking at misinformation,

you know that this is just
one example of many

that taps into people’s deepest
fears and vulnerabilities.

Every day, across the world,
we see scores of new memes on Instagram

encouraging parents
not to vaccinate their children.

We see new videos on YouTube
explaining that climate change is a hoax.

And across all platforms, we see
endless posts designed to demonize others

on the basis of their race,
religion or sexuality.

Welcome to one of the central
challenges of our time.

How can we maintain an internet
with freedom of expression at the core,

while also ensuring that the content
that’s being disseminated

doesn’t cause irreparable harms
to our democracies, our communities

and to our physical and mental well-being?

Because we live in the information age,

yet the central currency
upon which we all depend – information –

is no longer deemed entirely trustworthy

and, at times, can appear
downright dangerous.

This is thanks in part to the runaway
growth of social sharing platforms

that allow us to scroll through,

where lies and facts sit side by side,

but with none of the traditional
signals of trustworthiness.

And goodness – our language around this
is horribly muddled.

People are still obsessed
with the phrase “fake news,”

despite the fact that
it’s extraordinarily unhelpful

and used to describe a number of things
that are actually very different:

lies, rumors, hoaxes,
conspiracies, propaganda.

And I really wish
we could stop using a phrase

that’s been co-opted by politicians
right around the world,

from the left and the right,

used as a weapon to attack
a free and independent press.

(Applause)

Because we need our professional
news media now more than ever.

And besides, most of this content
doesn’t even masquerade as news.

It’s memes, videos, social posts.

And most of it is not fake;
it’s misleading.

We tend to fixate on what’s true or false.

But the biggest concern is actually
the weaponization of context.

Because the most effective disinformation

has always been that
which has a kernel of truth to it.

Let’s take this example
from London, from March 2017,

a tweet that circulated widely

in the aftermath of a terrorist incident
on Westminster Bridge.

This is a genuine image, not fake.

The woman who appears in the photograph
was interviewed afterwards,

and she explained that
she was utterly traumatized.

She was on the phone to a loved one,

and she wasn’t looking
at the victim out of respect.

But it still was circulated widely
with this Islamophobic framing,

with multiple hashtags,
including: #BanIslam.

Now, if you worked at Twitter,
what would you do?

Would you take that down,
or would you leave it up?

My gut reaction, my emotional reaction,
is to take this down.

I hate the framing of this image.

But freedom of expression
is a human right,

and if we start taking down speech
that makes us feel uncomfortable,

we’re in trouble.

And this might look like a clear-cut case,

but, actually, most speech isn’t.

These lines are incredibly
difficult to draw.

What’s a well-meaning
decision by one person

is outright censorship to the next.

What we now know is that
this account, Texas Lone Star,

was part of a wider Russian
disinformation campaign,

one that has since been taken down.

Would that change your view?

It would mine,

because now it’s a case
of a coordinated campaign

to sow discord.

And for those of you who’d like to think

that artificial intelligence
will solve all of our problems,

I think we can agree
that we’re a long way away

from AI that’s able to make sense
of posts like this.

So I’d like to explain
three interlocking issues

that make this so complex

and then think about some ways
we can consider these challenges.

First, we just don’t have
a rational relationship to information,

we have an emotional one.

It’s just not true that more facts
will make everything OK,

because the algorithms that determine
what content we see,

well, they’re designed to reward
our emotional responses.

And when we’re fearful,

oversimplified narratives,
conspiratorial explanations

and language that demonizes others
are far more effective.

And besides, many of these companies,

their business model
is attached to attention,

which means these algorithms
will always be skewed towards emotion.

Second, most of the speech
I’m talking about here is legal.

It would be a different matter

if I was talking about
child sexual abuse imagery

or content that incites violence.

It can be perfectly legal
to post an outright lie.

But people keep talking about taking down
“problematic” or “harmful” content,

with no clear definition
of what they mean by that,

including Mark Zuckerberg,

who recently called for global
regulation to moderate speech.

And my concern is that
we’re seeing governments

right around the world

rolling out hasty policy decisions

that might actually trigger
much more serious consequences

when it comes to our speech.

And even if we could decide
which speech to leave up or take down,

we’ve never had so much speech.

Every second, millions
of pieces of content

are uploaded by people
right around the world

in different languages,

drawing on thousands
of different cultural contexts.

We’ve simply never had
effective mechanisms

to moderate speech at this scale,

whether powered by humans
or by technology.

And third, these companies –
Google, Twitter, Facebook, WhatsApp –

they’re part of a wider
information ecosystem.

We like to lay all the blame
at their feet, but the truth is,

the mass media and elected officials
can also play an equal role

in amplifying rumors and conspiracies
when they want to.

As can we, when we mindlessly forward
divisive or misleading content

without trying to check it first.

We’re adding to the pollution.

I know we’re all looking for an easy fix.

But there just isn’t one.

Any solution will have to be rolled out
at a massive scale, internet scale,

and yes, the platforms,
they’re used to operating at that level.

But can and should we allow them
to fix these problems?

They’re certainly trying.

But most of us would agree that, actually,
we don’t want global corporations

to be the guardians of truth
and fairness online.

And I also think the platforms
would agree with that.

And at the moment,
they’re marking their own homework.

They like to tell us

that the interventions
they’re rolling out are working,

but because they write
their own transparency reports,

there’s no way for us to independently
verify what’s actually happening.

(Applause)

And let’s also be clear
that most of the changes we see

only happen after journalists
undertake an investigation

and find evidence of bias

or content that breaks
their community guidelines.

So yes, these companies have to play
a really important role in this process,

but they can’t control it.

So what about governments?

Many people believe
that global regulation is our last hope

in terms of cleaning up
our information ecosystem.

But what I see are lawmakers
who are struggling to keep up to date

with the rapid changes in technology.

And worse, they’re working in the dark,

because they don’t have access to data

to understand what’s happening
on these platforms.

And anyway, which governments
would we trust to do this?

We need a global response,
not a national one.

So the missing link is us.

It’s those people who use
these technologies every day.

Can we design a new infrastructure
to support quality information?

Well, I believe we can,

and I’ve got a few ideas about
what we might be able to actually do.

So firstly, if we’re serious
about bringing the public into this,

can we take some inspiration
from Wikipedia?

They’ve shown us what’s possible.

Yes, it’s not perfect,

but they’ve demonstrated
that with the right structures,

with a global outlook
and lots and lots of transparency,

you can build something
that will earn the trust of most people.

Because we have to find a way
to tap into the collective wisdom

and experience of all users.

This is particularly the case
for women, people of color

and underrepresented groups.

Because guess what?

They are experts when it comes
to hate and disinformation,

because they have been the targets
of these campaigns for so long.

And over the years,
they’ve been raising flags,

and they haven’t been listened to.

This has got to change.

So could we build a Wikipedia for trust?

Could we find a way that users
can actually provide insights?

They could offer insights around
difficult content-moderation decisions.

They could provide feedback

when platforms decide
they want to roll out new changes.

Second, people’s experiences
with information are personalized.

My Facebook news feed
is very different to yours.

Your YouTube recommendations
are very different to mine.

That makes it impossible for us
to actually examine

what information people are seeing.

So could we imagine

developing some kind of centralized
open repository for anonymized data,

with privacy and ethical
safeguards built in?

Because imagine what we would learn

if we built out a global network
of concerned citizens

who wanted to donate
their social data to science.
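
To make the data-donation idea a little more concrete, here is a minimal sketch in Python of how a single donated post might be stripped of direct identifiers before it enters such a repository. The schema, the field names and the salted hash are my own assumptions for illustration, not a description of any existing system, and real anonymization would need far stronger guarantees than this.

import hashlib

def anonymize_donation(post, salt):
    # Strip direct identifiers from a donated social post before it
    # enters a shared research repository. Field names are hypothetical.
    donor_id = hashlib.sha256((salt + post["user_id"]).encode()).hexdigest()
    return {
        "donor": donor_id,              # one-way hash, so the donor stays pseudonymous
        "platform": post["platform"],   # e.g. which service the post came from
        "timestamp": post["timestamp"],
        "text": post["text"],           # in practice this would also be redacted
    }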

Because we actually know very little

about the long-term consequences
of hate and disinformation

on people’s attitudes and behaviors.

And of the research we do have,

most of it has been
carried out in the US,

despite the fact that
this is a global problem.

We need to work on that, too.

And third,

can we find a way to connect the dots?

No one sector, let alone nonprofit,
start-up or government,

is going to solve this.

But there are very smart people
right around the world

working on these challenges,

from newsrooms, civil society,
academia, activist groups.

And you can see some of them here.

Some are building out indicators
of content credibility.

Others are fact-checking,

so that false claims, videos and images
can be down-ranked by the platforms.

A nonprofit I helped
to found, First Draft,

is working with normally competitive
newsrooms around the world

to help them build out investigative,
collaborative programs.

And Danny Hillis, a software architect,

is designing a new system
called The Underlay,

which will be a record
of all public statements of fact

connected to their sources,

so that people and algorithms
can better judge what is credible.
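
The Underlay is Danny Hillis’s project, and I’m not describing its actual design here. Purely to illustrate the general idea, statements of fact linked to their sources so that their support can be weighed, here is a tiny Python sketch with invented names and example data.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Source:
    url: str
    publisher: str

@dataclass
class Statement:
    claim: str
    sources: List[Source] = field(default_factory=list)

    def independent_publishers(self) -> int:
        # A crude credibility signal: how many distinct publishers
        # are on record supporting this statement.
        return len({s.publisher for s in self.sources})

claim = Statement(
    claim="This banana image is a hoax",
    sources=[Source("https://example.org/fact-check-1", "Snopes"),
             Source("https://example.org/fact-check-2", "Full Fact")],
)
print(claim.independent_publishers())  # 2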

And educators around the world
are testing different techniques

for finding ways to make people
critical of the content they consume.

All of these efforts are wonderful,
but they’re working in silos,

and many of them are woefully underfunded.

There are also hundreds
of very smart people

working inside these companies,

but again, these efforts
can feel disjointed,

because they’re actually developing
different solutions to the same problems.

How can we find a way
to bring people together

in one physical location
for days or weeks at a time,

so they can actually tackle
these problems together

but from their different perspectives?

So can we do this?

Can we build out a coordinated,
ambitious response,

one that matches the scale
and the complexity of the problem?

I really think we can.

Together, let’s rebuild
our information commons.

Thank you.

(Applause)
