How technology can fight extremism and online harassment
Yasmin Green

My relationship with the internet
reminds me of the setup

to a clichéd horror movie.

You know, the blissfully happy family
moves into their perfect new home,

excited about their perfect future,

and it’s sunny outside
and the birds are chirping …

And then it gets dark.

And there are noises from the attic.

And we realize that that perfect
new house isn’t so perfect.

When I started working at Google in 2006,

Facebook was just a two-year-old,

and Twitter hadn’t yet been born.

And I was in absolute awe
of the internet and all of its promise

to make us closer

and smarter

and more free.

But as we were doing the inspiring work
of building search engines

and video-sharing sites
and social networks,

criminals, dictators and terrorists
were figuring out

how to use those same
platforms against us.

And we didn’t have
the foresight to stop them.

Over the last few years, geopolitical
forces have come online to wreak havoc.

And in response,

Google supported a few colleagues and me
to set up a new group called Jigsaw,

with a mandate to make people safer
from threats like violent extremism,

censorship, persecution –

threats that feel very personal to me
because I was born in Iran,

and I left in the aftermath
of a violent revolution.

But I’ve come to realize
that even if we had all of the resources

of all of the technology
companies in the world,

we’d still fail

if we overlooked one critical ingredient:

the human experiences of the victims
and perpetrators of those threats.

There are many challenges
I could talk to you about today.

I’m going to focus on just two.

The first is terrorism.

So in order to understand
the radicalization process,

we met with dozens of former members
of violent extremist groups.

One was a British schoolgirl,

who had been taken off of a plane
at London Heathrow

as she was trying to make her way
to Syria to join ISIS.

And she was 13 years old.

So I sat down with her and her father,
and I said, “Why?”

And she said,

“I was looking at pictures
of what life is like in Syria,

and I thought I was going to go
and live in the Islamic Disney World.”

That’s what she saw in ISIS.

She thought she’d meet and marry
a jihadi Brad Pitt

and go shopping in the mall all day
and live happily ever after.

ISIS understands what drives people,

and they carefully craft a message
for each audience.

Just look at how many languages

they translate their
marketing material into.

They make pamphlets,
radio shows and videos

in not just English and Arabic,

but German, Russian,
French, Turkish, Kurdish,

Hebrew,

Mandarin Chinese.

I’ve even seen an ISIS-produced
video in sign language.

Just think about that for a second:

ISIS took the time and made the effort

to ensure their message is reaching
the deaf and hard of hearing.

It’s actually not tech-savviness

that is the reason why
ISIS wins hearts and minds.

It’s their insight into the prejudices,
the vulnerabilities, the desires

of the people they’re trying to reach

that does that.

That’s why it’s not enough

for the online platforms
to focus on removing recruiting material.

If we want to have a shot
at building meaningful technology

that’s going to counter radicalization,

we have to start with the human
journey at its core.

So we went to Iraq

to speak to young men
who’d bought into ISIS’s promise

of heroism and righteousness,

who’d taken up arms to fight for them

and then who’d defected

after they witnessed
the brutality of ISIS’s rule.

And I’m sitting there in this makeshift
prison in the north of Iraq

with this 23-year-old who had actually
trained as a suicide bomber

before defecting.

And he says,

“I arrived in Syria full of hope,

and immediately, I had two
of my prized possessions confiscated:

my passport and my mobile phone.”

The symbols of his physical
and digital liberty

were taken away from him on arrival.

And then this is the way he described
that moment of loss to me.

He said,

“You know in ‘Tom and Jerry,’

when Jerry wants to escape,
and then Tom locks the door

and swallows the key

and you see it bulging out
of his throat as it travels down?”

And of course, I really could see
the image that he was describing,

and I really did connect with the feeling
that he was trying to convey,

which was one of doom,

when you know there’s no way out.

And I was wondering:

What, if anything,
could have changed his mind

the day that he left home?

So I asked,

“If you knew everything that you know now

about the suffering
and the corruption, the brutality –

that day you left home,

would you still have gone?”

And he said, “Yes.”

And I thought, “Holy crap, he said ‘Yes.'”

And then he said,

“At that point, I was so brainwashed,

I wasn’t taking in
any contradictory information.

I couldn’t have been swayed.”

“Well, what if you knew
everything that you know now

six months before the day that you left?”

“At that point, I think it probably
would have changed my mind.”

Radicalization isn’t
this yes-or-no choice.

It’s a process, during which
people have questions –

about ideology, religion,
the living conditions.

And they’re coming online for answers,

which is an opportunity to reach them.

And there are videos online
from people who have answers –

defectors, for example,
telling the story of their journey

into and out of violence;

stories like the one from that man
I met in the Iraqi prison.

There are locals who’ve uploaded
cell phone footage

of what life is really like
in the caliphate under ISIS’s rule.

There are clerics who are sharing
peaceful interpretations of Islam.

But you know what?

These people don’t generally have
the marketing prowess of ISIS.

They risk their lives to speak up
and confront terrorist propaganda,

and then they tragically
don’t reach the people

who most need to hear from them.

And we wanted to see
if technology could change that.

So in 2016, we partnered with Moonshot CVE

to pilot a new approach
to countering radicalization

called the “Redirect Method.”

It uses the power of online advertising

to bridge the gap between
those susceptible to ISIS’s messaging

and those credible voices
that are debunking that messaging.

And it works like this:

someone looking for extremist material –

say they search
for “How do I join ISIS?” –

will see an ad appear

that invites them to watch a YouTube video
of a cleric, of a defector –

someone who has an authentic answer.

And that targeting is based
not on a profile of who they are,

but on determining something
that’s directly relevant

to their query or question.
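
To make that targeting concrete, here is a minimal sketch of the idea in Python. The keyword patterns, playlist names, and helper function are hypothetical illustrations, not the actual campaign configuration used by Jigsaw or Moonshot CVE; the point is only that the match keys off the query itself, not a profile of the person.

```python
# Illustrative sketch of the Redirect Method's query-based targeting.
# All patterns and playlist names below are made up for demonstration.
from typing import Optional

# Search-query patterns mapped to counter-narrative playlists.
CAMPAIGNS = {
    "how do i join isis": "playlist_defector_testimonies",
    "life in the caliphate": "playlist_local_cellphone_footage",
    "is jihad my duty": "playlist_cleric_responses",
}


def match_campaign(query: str) -> Optional[str]:
    """Return a counter-narrative playlist if the query signals risk.

    Targeting is based on the query, not on who the searcher is.
    """
    normalized = query.lower().strip()
    for pattern, playlist in CAMPAIGNS.items():
        if pattern in normalized:
            return playlist
    return None


if __name__ == "__main__":
    for q in ["How do I join ISIS?", "best football scores today"]:
        print(q, "->", match_campaign(q) or "no ad shown")
```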

During our eight-week pilot
in English and Arabic,

we reached over 300,000 people

who had expressed an interest in
or sympathy towards a jihadi group.

These people were now watching videos

that could prevent them
from making devastating choices.

And because violent extremism
isn’t confined to any one language,

religion or ideology,

the Redirect Method is now
being deployed globally

to protect people being courted online
by violent ideologues,

whether they’re Islamists,
white supremacists

or other violent extremists,

with the goal of giving them the chance
to hear from someone

on the other side of that journey;

to give them the chance to choose
a different path.

It turns out that often the bad guys
are good at exploiting the internet,

not because they’re some kind
of technological geniuses,

but because they understand
what makes people tick.

I want to give you a second example:

online harassment.

Online harassers also work
to figure out what will resonate

with another human being.

Not to recruit them like ISIS does,

but to cause them pain.

Imagine this:

you’re a woman,

you’re married,

you have a kid.

You post something on social media,

and in a reply,
you’re told that you’ll be raped,

that your son will be watching,

details of when and where.

In fact, your home address
is put online for everyone to see.

That feels like a pretty real threat.

Do you think you’d go home?

Do you think you’d continue doing
the thing that you were doing?

Would you continue doing that thing
that’s irritating your attacker?

Online abuse has been this perverse art

of figuring out what makes people angry,

what makes people afraid,

what makes people insecure,

and then pushing those pressure points
until they’re silenced.

When online harassment goes unchecked,

free speech is stifled.

And even the people
hosting the conversation

throw up their arms and call it quits,

closing their comment sections
and their forums altogether.

That means we’re actually
losing spaces online

to meet and exchange ideas.

And where online spaces remain,

we descend into echo chambers
with people who think just like us.

But that enables
the spread of disinformation;

that facilitates polarization.

What if technology instead
could enable empathy at scale?

This was the question
that motivated our partnership

with Google’s Counter Abuse team,

Wikipedia

and newspapers like the New York Times.

We wanted to see if we could build
machine-learning models

that could understand
the emotional impact of language.

Could we predict which comments
were likely to make someone else leave

the online conversation?

And that’s no mean feat.

That’s no trivial accomplishment

for AI to be able to do
something like that.

I mean, just consider
these two examples of messages

that could have been sent to me last week.

“Break a leg at TED!”

… and

“I’ll break your legs at TED.”

(Laughter)

You are human,

that’s why that’s an obvious
difference to you,

even though the words
are pretty much the same.

But for AI, it takes some training
to teach the models

to recognize that difference.
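
As a toy illustration of that training step (not the Perspective model itself), the sketch below fits a tiny text classifier on a handful of made-up labeled comments, including the two messages above. The dataset, features, and scores are purely illustrative; a real system would train on far more data.

```python
# Toy example: teaching a model to tell a friendly phrase from a threat,
# even when the words are nearly the same. Dataset is invented for demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Break a leg at TED!",
    "Good luck with your talk, you'll be great",
    "I'll break your legs at TED.",
    "Nobody wants you here, get out",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = abusive

# Character n-grams help the model notice that "break a leg" and
# "break your legs" differ by more than one word.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(comments, labels)

for text in ["Break a leg tonight!", "I'll break your arm."]:
    prob = model.predict_proba([text])[0][1]
    print(f"{text!r} -> abuse probability {prob:.2f}")
```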

The beauty of building AI
that can tell the difference

is that AI can then scale to the size
of the online toxicity phenomenon,

and that was our goal in building
our technology called Perspective.

With the help of Perspective,

the New York Times, for example,

has increased spaces
online for conversation.

Before our collaboration,

they only had comments enabled
on just 10 percent of their articles.

With the help of machine learning,

they have that number up to 30 percent.

So they’ve tripled it,

and we’re still just getting started.

But this is about way more than just
making moderators more efficient.

Right now I can see you,

and I can gauge how what I’m saying
is landing with you.

You don’t have that opportunity online.

Imagine if machine learning
could give commenters,

as they’re typing,

real-time feedback about how
their words might land,

just like facial expressions do
in a face-to-face conversation.
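
A rough sketch of what that as-you-type nudge could look like, assuming access to the public Perspective API: the request shape below follows the published comments:analyze endpoint, but the API key, threshold, and helper functions are placeholders, and the exact fields should be checked against the current documentation.

```python
# Sketch: score a draft comment with the Perspective API and nudge the
# commenter before posting. API_KEY and the threshold are placeholders.
import requests

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)
API_KEY = "YOUR_API_KEY"  # placeholder


def toxicity_score(draft: str) -> float:
    """Return Perspective's estimated toxicity for a draft comment (0..1)."""
    body = {
        "comment": {"text": draft},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def feedback_while_typing(draft: str, threshold: float = 0.8) -> str:
    """Hypothetical helper: warn the commenter if the draft reads as hostile."""
    if toxicity_score(draft) >= threshold:
        return "This may come across as abusive to others. Post anyway?"
    return "Looks fine."
```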

Machine learning isn’t perfect,

and it still makes plenty of mistakes.

But if we can build technology

that understands the emotional
impact of language,

we can build empathy.

That means that we can have
dialogue between people

with different politics,

different worldviews,

different values.

And we can reinvigorate the spaces online
that most of us have given up on.

When people use technology
to exploit and harm others,

they’re preying on our human fears
and vulnerabilities.

If we ever thought
that we could build an internet

insulated from the dark side of humanity,

we were wrong.

If we want today to build technology

that can overcome
the challenges that we face,

we have to throw our entire selves
into understanding the issues

and into building solutions

that are as human as the problems
they aim to solve.

Let’s make that happen.

Thank you.

(Applause)
