How technology can fight extremism and online harassment
Yasmin Green

My relationship with the internet
reminds me of the setup

to a clichéd horror movie.

You know, the blissfully happy family
moves into their perfect new home,

excited about their perfect future,

and it’s sunny outside
and the birds are chirping …

And then it gets dark.

And there are noises from the attic.

And we realize that that perfect
new house isn’t so perfect.

When I started working at Google in 2006,

Facebook was just a two-year-old,

and Twitter hadn’t yet been born.

And I was in absolute awe
of the internet and all of its promise

to make us closer

and smarter

and more free.

But as we were doing the inspiring work
of building search engines

and video-sharing sites
and social networks,

criminals, dictators and terrorists
were figuring out

how to use those same
platforms against us.

And we didn’t have
the foresight to stop them.

Over the last few years, geopolitical
forces have come online to wreak havoc.

And in response,

Google supported a few colleagues and me
to set up a new group called Jigsaw,

with a mandate to make people safer
from threats like violent extremism,

censorship, persecution –

threats that feel very personal to me
because I was born in Iran,

and I left in the aftermath
of a violent revolution.

But I’ve come to realize
that even if we had all of the resources

of all of the technology
companies in the world,

we’d still fail

if we overlooked one critical ingredient:

the human experiences of the victims
and perpetrators of those threats.

There are many challenges
I could talk to you about today.

I’m going to focus on just two.

The first is terrorism.

So in order to understand
the radicalization process,

we met with dozens of former members
of violent extremist groups.

One was a British schoolgirl,

who had been taken off of a plane
at London Heathrow

as she was trying to make her way
to Syria to join ISIS.

And she was 13 years old.

So I sat down with her and her father,
and I said, “Why?”

And she said,

“I was looking at pictures
of what life is like in Syria,

and I thought I was going to go
and live in the Islamic Disney World.”

That’s what she saw in ISIS.

She thought she’d meet and marry
a jihadi Brad Pitt

and go shopping in the mall all day
and live happily ever after.

ISIS understands what drives people,

and they carefully craft a message
for each audience.

Just look at how many languages

they translate their
marketing material into.

They make pamphlets,
radio shows and videos

in not just English and Arabic,

but German, Russian,
French, Turkish, Kurdish,

Hebrew,

Mandarin Chinese.

I’ve even seen an ISIS-produced
video in sign language.

Just think about that for a second:

ISIS took the time and made the effort

to ensure their message is reaching
the deaf and hard of hearing.

It’s actually not tech-savviness

that wins ISIS hearts and minds.

It’s their insight into the prejudices,
the vulnerabilities, the desires

of the people they’re trying to reach.

That’s why it’s not enough

for the online platforms
to focus on removing recruiting material.

If we want to have a shot
at building meaningful technology

that’s going to counter radicalization,

we have to start with the human
journey at its core.

So we went to Iraq

to speak to young men
who’d bought into ISIS’s promise

of heroism and righteousness,

who’d taken up arms to fight for them

and then who’d defected

after they witnessed
the brutality of ISIS’s rule.

And I’m sitting there in this makeshift
prison in the north of Iraq

with this 23-year-old who had actually
trained as a suicide bomber

before defecting.

And he says,

“I arrived in Syria full of hope,

and immediately, I had two
of my prized possessions confiscated:

my passport and my mobile phone.”

The symbols of his physical
and digital liberty

were taken away from him on arrival.

And then this is the way he described
that moment of loss to me.

He said,

“You know in ‘Tom and Jerry,’

when Jerry wants to escape,
and then Tom locks the door

and swallows the key

and you see it bulging out
of his throat as it travels down?”

And of course, I really could see
the image that he was describing,

and I really did connect with the feeling
that he was trying to convey,

which was one of doom,

when you know there’s no way out.

And I was wondering:

What, if anything,
could have changed his mind

the day that he left home?

So I asked,

“If you knew everything that you know now

about the suffering
and the corruption, the brutality –

that day you left home,

would you still have gone?”

And he said, “Yes.”

And I thought, “Holy crap, he said ‘Yes.’”

And then he said,

“At that point, I was so brainwashed,

I wasn’t taking in
any contradictory information.

I couldn’t have been swayed.”

“Well, what if you knew
everything that you know now

six months before the day that you left?”

“At that point, I think it probably
would have changed my mind.”

Radicalization isn’t
this yes-or-no choice.

It’s a process, during which
people have questions –

about ideology, religion,
the living conditions.

And they’re coming online for answers,

which is an opportunity to reach them.

And there are videos online
from people who have answers –

defectors, for example,
telling the story of their journey

into and out of violence;

stories like the one from that man
I met in the Iraqi prison.

There are locals who’ve uploaded
cell phone footage

of what life is really like
in the caliphate under ISIS’s rule.

There are clerics who are sharing
peaceful interpretations of Islam.

But you know what?

These people don’t generally have
the marketing prowess of ISIS.

They risk their lives to speak up
and confront terrorist propaganda,

and then they tragically
don’t reach the people

who most need to hear from them.

And we wanted to see
if technology could change that.

So in 2016, we partnered with Moonshot CVE

to pilot a new approach
to countering radicalization

called the “Redirect Method.”

It uses the power of online advertising

to bridge the gap between
those susceptible to ISIS’s messaging

and those credible voices
that are debunking that messaging.

And it works like this:

someone looking for extremist material –

say they search
for “How do I join ISIS?” –

will see an ad appear

that invites them to watch a YouTube video
of a cleric, of a defector –

someone who has an authentic answer.

And that targeting is based
not on a profile of who they are,

but on something
that’s directly relevant

to their query or question.
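
To make the mechanics concrete, here is a rough, purely illustrative sketch of that query-based targeting in Python. It is not Jigsaw’s or Moonshot CVE’s actual implementation – in practice the matching is handled by the ad platform’s keyword targeting – and the keywords and playlist URLs below are hypothetical placeholders.

```python
# Purely illustrative sketch of query-based redirect targeting.
# The real Redirect Method relies on ad-platform keyword targeting;
# these keywords and playlist URLs are hypothetical placeholders.

RISK_KEYWORDS = {
    "join isis": "https://youtube.com/playlist?list=DEFECTOR_STORIES",
    "life in the caliphate": "https://youtube.com/playlist?list=LOCAL_FOOTAGE",
}

def counter_narrative_ad(query: str) -> str | None:
    """Pick a counter-narrative playlist to advertise against this query,
    based only on the query itself, never on a profile of the searcher."""
    normalized = query.lower()
    for keyword, playlist in RISK_KEYWORDS.items():
        if keyword in normalized:
            return playlist
    return None  # unrelated queries get no ad

print(counter_narrative_ad("How do I join ISIS?"))
```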

During our eight-week pilot
in English and Arabic,

we reached over 300,000 people

who had expressed an interest in
or sympathy towards a jihadi group.

These people were now watching videos

that could prevent them
from making devastating choices.

And because violent extremism
isn’t confined to any one language,

religion or ideology,

the Redirect Method is now
being deployed globally

to protect people being courted online
by violent ideologues,

whether they’re Islamists,
white supremacists

or other violent extremists,

with the goal of giving them the chance
to hear from someone

on the other side of that journey;

to give them the chance to choose
a different path.

It turns out that often the bad guys
are good at exploiting the internet,

not because they’re some kind
of technological geniuses,

but because they understand
what makes people tick.

I want to give you a second example:

online harassment.

Online harassers also work
to figure out what will resonate

with another human being.

Not to recruit them like ISIS does,

but to cause them pain.

Imagine this:

you’re a woman,

you’re married,

you have a kid.

You post something on social media,

and in a reply,
you’re told that you’ll be raped,

that your son will be watching,

details of when and where.

In fact, your home address
is put online for everyone to see.

That feels like a pretty real threat.

Do you think you’d go home?

Do you think you’d continue doing
the thing that you were doing?

Would you continue doing that thing
that’s irritating your attacker?

Online abuse has been this perverse art

of figuring out what makes people angry,

what makes people afraid,

what makes people insecure,

and then pushing those pressure points
until they’re silenced.

When online harassment goes unchecked,

free speech is stifled.

And even the people
hosting the conversation

throw up their hands and call it quits,

closing their comment sections
and their forums altogether.

That means we’re actually
losing spaces online

to meet and exchange ideas.

And where online spaces remain,

we descend into echo chambers
with people who think just like us.

And that enables
the spread of disinformation;

that facilitates polarization.

What if technology instead
could enable empathy at scale?

This was the question
that motivated our partnership

with Google’s Counter Abuse team,

Wikipedia

and newspapers like the New York Times.

We wanted to see if we could build
machine-learning models

that could understand
the emotional impact of language.

Could we predict which comments
were likely to make someone else leave

the online conversation?

And that’s no mean feat.

It’s not trivial for AI
to be able to do something like that.

I mean, just consider
these two examples of messages

that could have been sent to me last week.

“Break a leg at TED!”

… and

“I’ll break your legs at TED.”

(Laughter)

You’re human,

so that difference
is obvious to you,

even though the words
are pretty much the same.

But for AI, it takes some training
to teach the models

to recognize that difference.

The beauty of building AI
that can tell the difference

is that AI can then scale to the size
of the online toxicity phenomenon,

and that was our goal in building
our technology called Perspective.
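
As a rough sketch of what using Perspective looks like from a developer’s side, here is a minimal example built against the publicly documented Perspective API (commentanalyzer.googleapis.com). The API key is a placeholder, and the exact scores returned will shift as the underlying models are retrained.

```python
# Minimal sketch: scoring comments with the publicly documented
# Perspective API. Requires your own API key (placeholder below).
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY score (0.0 to 1.0) for a comment."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(API_URL, params={"key": API_KEY}, json=body)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

# The two near-identical messages from above score very differently.
print(toxicity_score("Break a leg at TED!"))
print(toxicity_score("I'll break your legs at TED."))
```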

With the help of Perspective,

the New York Times, for example,

has increased spaces
online for conversation.

Before our collaboration,

they had comments enabled
on just 10 percent of their articles.

With the help of machine learning,

they have that number up to 30 percent.

So they’ve tripled it,

and we’re still just getting started.

But this is about way more than just
making moderators more efficient.

Right now I can see you,

and I can gauge how what I’m saying
is landing with you.

You don’t have that opportunity online.

Imagine if machine learning
could give commenters,

as they’re typing,

real-time feedback about how
their words might land,

just like facial expressions do
in a face-to-face conversation.
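
That “feedback while typing” idea could be sketched roughly like this. Here score_fn stands in for any toxicity model (such as the Perspective call above), and the threshold and messages are invented purely for illustration.

```python
# Hypothetical sketch of real-time feedback for commenters.
# score_fn is any toxicity estimator; threshold and messages are invented.
from typing import Callable

def compose_feedback(draft: str,
                     score_fn: Callable[[str], float],
                     threshold: float = 0.8) -> str:
    """Nudge the commenter before posting, based on a toxicity estimate."""
    score = score_fn(draft)
    if score >= threshold:
        return (f"This might come across as hurtful (estimated {score:.0%} "
                "likely to be read as toxic). Post anyway?")
    return "Looks good."

# Toy stand-in scorer so the sketch runs on its own.
toy_scorer = lambda text: 0.9 if "break your legs" in text.lower() else 0.05

print(compose_feedback("I'll break your legs at TED.", toy_scorer))
print(compose_feedback("Break a leg at TED!", toy_scorer))
```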

Machine learning isn’t perfect,

and it still makes plenty of mistakes.

But if we can build technology

that understands the emotional
impact of language,

we can build empathy.

That means that we can have
dialogue between people

with different politics,

different worldviews,

different values.

And we can reinvigorate the spaces online
that most of us have given up on.

When people use technology
to exploit and harm others,

they’re preying on our human fears
and vulnerabilities.

If we ever thought
that we could build an internet

insulated from the dark side of humanity,

we were wrong.

If we want to build technology today

that can overcome
the challenges we face,

we have to throw our entire selves
into understanding the issues

and into building solutions

that are as human as the problems
they aim to solve.

Let’s make that happen.

Thank you.

(Applause)