How you can help transform the internet into a place of trust

Claire Wardle

No matter who you are or where you live,

I’m guessing that you have
at least one relative

who likes to forward those emails.

You know the ones I’m talking about –

the ones with dubious claims
or conspiracy videos.

And you’ve probably
already muted them on Facebook

for sharing social posts like this one.

It’s an image of a banana

with a strange red cross
running through the center.

And the text around it is warning people

not to eat fruits that look like this,

suggesting they’ve been
injected with blood

contaminated with HIV.

And the social share message
above it simply says,

“Please forward to save lives.”

Now, fact-checkers have been debunking
this one for years,

but it’s one of those rumors
that just won’t die.

A zombie rumor.

And, of course, it’s entirely false.

It might be tempting to laugh
at an example like this, to say,

“Well, who would believe this, anyway?”

But the reason it’s a zombie rumor

is because it taps into people’s
deepest fears about their own safety

and that of the people they love.

And if you spend as much time
as I have looking at misinformation,

you know that this is just
one example of many

that taps into people’s deepest
fears and vulnerabilities.

Every day, across the world,
we see scores of new memes on Instagram

encouraging parents
not to vaccinate their children.

We see new videos on YouTube
explaining that climate change is a hoax.

And across all platforms, we see
endless posts designed to demonize others

on the basis of their race,
religion or sexuality.

Welcome to one of the central
challenges of our time.

How can we maintain an internet
with freedom of expression at the core,

while also ensuring that the content
that’s being disseminated

doesn’t cause irreparable harms
to our democracies, our communities

and to our physical and mental well-being?

Because we live in the information age,

yet the central currency
upon which we all depend – information –

is no longer deemed entirely trustworthy

and, at times, can appear
downright dangerous.

This is thanks in part to the runaway
growth of social sharing platforms

that let us scroll through feeds

where lies and facts sit side by side,

but with none of the traditional
signals of trustworthiness.

And goodness – our language around this
is horribly muddled.

People are still obsessed
with the phrase “fake news,”

despite the fact that
it’s extraordinarily unhelpful

and used to describe a number of things
that are actually very different:

lies, rumors, hoaxes,
conspiracies, propaganda.

And I really wish
we could stop using a phrase

that’s been co-opted by politicians
right around the world,

from the left and the right,

used as a weapon to attack
a free and independent press.

(Applause)

Because we need our professional
news media now more than ever.

And besides, most of this content
doesn’t even masquerade as news.

It’s memes, videos, social posts.

And most of it is not fake;
it’s misleading.

We tend to fixate on what’s true or false.

But the biggest concern is actually
the weaponization of context.

Because the most effective disinformation

has always been that
which has a kernel of truth to it.

Let’s take this example
from London, from March 2017,

a tweet that circulated widely

in the aftermath of a terrorist incident
on Westminster Bridge.

This is a genuine image, not fake.

The woman who appears in the photograph
was interviewed afterwards,

and she explained that
she was utterly traumatized.

She was on the phone to a loved one,

and she wasn’t looking
at the victim out of respect.

But it was still circulated widely
with this Islamophobic framing,

with multiple hashtags,
including: #BanIslam.

Now, if you worked at Twitter,
what would you do?

Would you take that down,
or would you leave it up?

My gut reaction, my emotional reaction,
is to take this down.

I hate the framing of this image.

But freedom of expression
is a human right,

and if we start taking down speech
that makes us feel uncomfortable,

we’re in trouble.

And this might look like a clear-cut case,

but, actually, most speech isn’t.

These lines are incredibly
difficult to draw.

What’s a well-meaning
decision by one person

is outright censorship to the next.

What we now know is that
this account, Texas Lone Star,

was part of a wider Russian
disinformation campaign,

one that has since been taken down.

Would that change your view?

It would mine,

because now it’s a case
of a coordinated campaign

to sow discord.

And for those of you who’d like to think

that artificial intelligence
will solve all of our problems,

I think we can agree
that we’re a long way away

from AI that’s able to make sense
of posts like this.

So I’d like to explain
three interlocking issues

that make this so complex

and then think about some ways
we can consider these challenges.

First, we just don’t have
a rational relationship to information;

we have an emotional one.

It’s just not true that more facts
will make everything OK,

because the algorithms that determine
what content we see,

well, they’re designed to reward
our emotional responses.

And when we’re fearful,

oversimplified narratives,
conspiratorial explanations

and language that demonizes others
are far more effective.

And besides, many of these companies

have business models tied to attention,

which means these algorithms
will always be skewed towards emotion.

Second, most of the speech
I’m talking about here is legal.

It would be a different matter

if I were talking about
child sexual abuse imagery

or content that incites violence.

It can be perfectly legal
to post an outright lie.

Yet people keep talking about taking down
“problematic” or “harmful” content

with no clear definition
of what they mean by that,

including Mark Zuckerberg,

who recently called for global
regulation to moderate speech.

And my concern is that
we’re seeing governments

right around the world

rolling out hasty policy decisions

that might actually trigger
much more serious consequences

when it comes to our speech.

And even if we could decide
which speech to keep up or take down,

we’ve never had so much speech.

Every second, millions
of pieces of content

are uploaded by people
right around the world

in different languages,

drawing on thousands
of different cultural contexts.

We’ve simply never had
effective mechanisms

to moderate speech at this scale,

whether powered by humans
or by technology.

And third, these companies –
Google, Twitter, Facebook, WhatsApp –

they’re part of a wider
information ecosystem.

We like to lay all the blame
at their feet, but the truth is,

the mass media and elected officials
can also play an equal role

in amplifying rumors and conspiracies
when they want to.

As can we, when we mindlessly forward
divisive or misleading content

without thinking.

We’re adding to the pollution.

I know we’re all looking for an easy fix.

But there just isn’t one.

Any solution will have to be rolled out
at a massive scale, internet scale,

and yes, the platforms,
they’re used to operating at that level.

But can and should we allow them
to fix these problems?

They’re certainly trying.

But most of us would agree that, actually,
we don’t want global corporations

to be the guardians of truth
and fairness online.

And I also think the platforms
would agree with that.

And at the moment,
they’re marking their own homework.

They like to tell us

that the interventions
they’re rolling out are working,

but because they write
their own transparency reports,

there’s no way for us to independently
verify what’s actually happening.

(Applause)

And let’s also be clear
that most of the changes we see

only happen after journalists
undertake an investigation

and find evidence of bias

or content that breaks
their community guidelines.

So yes, these companies have to play
a really important role in this process,

but they can’t control it.

So what about governments?

Many people believe
that global regulation is our last hope

for cleaning up
our information ecosystem.

But what I see are lawmakers
who are struggling to keep up to date

with the rapid changes in technology.

And worse, they’re working in the dark,

because they don’t have access to data

to understand what’s happening
on these platforms.

And anyway, which governments
would we trust to do this?

We need a global response,
not a national one.

So the missing link is us.

It’s those people who use
these technologies every day.

Can we design a new infrastructure
to support quality information?

Well, I believe we can,

and I’ve got a few ideas about
what we might be able to actually do.

So firstly, if we’re serious
about bringing the public into this,

can we take some inspiration
from Wikipedia?

They’ve shown us what’s possible.

Yes, it’s not perfect,

but they’ve demonstrated
that with the right structures,

with a global outlook
and lots and lots of transparency,

you can build something
that will earn the trust of most people.

Because we have to find a way
to tap into the collective wisdom

and experience of all users.

This is particularly the case
for women, people of color

and underrepresented groups.

Because guess what?

They are experts when it comes
to hate and disinformation,

because they have been the targets
of these campaigns for so long.

And over the years,
they’ve been raising flags,

and they haven’t been listened to.

This has got to change.

So could we build a Wikipedia for trust?

Could we find a way for users
to actually provide insights?

They could weigh in on difficult
content-moderation decisions.

They could provide feedback

when platforms decide
they want to roll out new changes.

Second, people’s experiences
of information are personalized.

My Facebook news feed
is very different to yours.

Your YouTube recommendations
are very different to mine.

That makes it impossible for us
to actually examine

what information people are seeing.

So could we imagine

developing some kind of centralized
open repository for anonymized data,

with privacy and ethical
concerns built in?

Because imagine what we would learn

if we built out a global network
of concerned citizens

who wanted to donate
their social data to science.

Because we actually know very little

about the long-term consequences
of hate and disinformation

on people’s attitudes and behaviors.

And of the research we do have,

most has been
carried out in the US,

despite the fact that
this is a global problem.

We need to work on that, too.

And third,

can we find a way to connect the dots?

No one sector, let alone any single
nonprofit, start-up or government,

is going to solve this.

But there are very smart people
right around the world

working on these challenges,

from newsrooms, civil society,
academia, activist groups.

And you can see some of them here.

Some are building out indicators
of content credibility.

Others are fact-checking,

so that false claims, videos and images
can be down-ranked by the platforms.

A nonprofit I helped
to found, First Draft,

is working with normally competitive
newsrooms around the world

to help them build out investigative,
collaborative programs.

And Danny Hillis, a software architect,

is designing a new system
called The Underlay,

which will be a record
of all public statements of fact

connected to their sources,

so that people and algorithms
can better judge what is credible.

And educators around the world
are testing different techniques

to help people think critically
about the content they consume.

All of these efforts are wonderful,
but they’re working in silos,

and many of them are woefully underfunded.

There are also hundreds
of very smart people

working inside these companies,

but again, these efforts
can feel disjointed,

because they’re actually developing
different solutions to the same problems.

How can we find a way
to bring people together

in one physical location
for days or weeks at a time,

so they can actually tackle
these problems together

but from their different perspectives?

So can we do this?

Can we build out a coordinated,
ambitious response,

one that matches the scale
and the complexity of the problem?

I really think we can.

Together, let’s rebuild
our information commons.

Thank you.

(Applause)