How racial bias works and how to disrupt it Jennifer L. Eberhardt

Some years ago,

I was on an airplane with my son
who was just five years old at the time.

My son was so excited
about being on this airplane with Mommy.

He’s looking all around
and he’s checking things out

and he’s checking people out.

And he sees this man, and he says,

“Hey! That guy looks like Daddy!”

And I look at the man,

and he didn’t look anything
at all like my husband,

nothing at all.

And so then I start
looking around on the plane,

and I notice this man was
the only black guy on the plane.

And I thought,

“Alright.

I’m going to have to have
a little talk with my son

about how not all
black people look alike.”

My son, he lifts his head up,
and he says to me,

“I hope he doesn’t rob the plane.”

And I said, “What? What did you say?”

And he says, “Well, I hope that man
doesn’t rob the plane.”

And I said, “Well, why would you say that?

You know Daddy wouldn’t rob a plane.”

And he says, “Yeah, yeah,
yeah, well, I know.”

And I said, “Well,
why would you say that?”

And he looked at me
with this really sad face,

and he says,

“I don’t know why I said that.

I don’t know why I was thinking that.”

We are living with such severe
racial stratification

that even a five-year-old can tell us
what’s supposed to happen next,

even with no evildoer,

even with no explicit hatred.

This association between
blackness and crime

made its way into the mind
of my five-year-old.

It makes its way into all of our children,

into all of us.

Our minds are shaped by
the racial disparities

we see out in the world

and the narratives that help us
to make sense of the disparities we see:

“Those people are criminal.”

“Those people are violent.”

“Those people are to be feared.”

When my research team
brought people into our lab

and exposed them to faces,

we found that exposure to black faces
led them to see blurry images of guns

with greater clarity and speed.

Bias can not only control what we see,

but where we look.

We found that prompting people
to think of violent crime

can lead them to direct their eyes
onto a black face

and away from a white face.

Prompting police officers
to think of capturing and shooting

and arresting

leads their eyes to settle
on black faces, too.

Bias can infect every aspect
of our criminal justice system.

In a large data set
of death-eligible defendants,

we found that looking more black
more than doubled their chances

of receiving a death sentence –

at least when their victims were white.

This effect is significant,

even though we controlled
for the severity of the crime

and the defendant’s attractiveness.

And no matter what we controlled for,

we found that black
people were punished

in proportion to the blackness
of their physical features:

the more black,

the more death-worthy.

Bias can also influence
how teachers discipline students.

My colleagues and I have found
that teachers express a desire

to discipline a black
middle school student more harshly

than a white student

for the same repeated infractions.

In a recent study,

we’re finding that teachers
treat black students as a group

but white students as individuals.

If, for example,
one black student misbehaves

and then a different black student
misbehaves a few days later,

the teacher responds
to that second black student

as if he had misbehaved twice.

It’s as though the sins of one child

get piled onto the other.

We create categories
to make sense of the world,

to assert some control and coherence

to the stimuli that we’re constantly
being bombarded with.

Categorization and the bias that it seeds

allow our brains to make judgments
more quickly and efficiently,

and we do this by instinctively
relying on patterns

that seem predictable.

Yet, just as the categories we create
allow us to make quick decisions,

they also reinforce bias.

So the very things that help us
to see the world

also can blind us to it.

They render our choices effortless,

friction-free.

Yet they exact a heavy toll.

So what can we do?

We are all vulnerable to bias,

but we don’t act on bias all the time.

There are certain conditions
that can bring bias alive

and other conditions that can muffle it.

Let me give you an example.

Many people are familiar
with the tech company Nextdoor.

So, their whole purpose is to create
stronger, healthier, safer neighborhoods.

And so they offer this online space

where neighbors can gather
and share information.

Yet, Nextdoor soon found
that they had a problem

with racial profiling.

In the typical case,

people would look outside their window

and see a black man
in their otherwise white neighborhood

and make the snap judgment
that he was up to no good,

even when there was no evidence
of criminal wrongdoing.

In many ways, how we behave online

is a reflection of how
we behave in the world.

But what we don’t want to do
is create an easy-to-use system

that can amplify bias
and deepen racial disparities,

rather than dismantling them.

So the cofounder of Nextdoor
reached out to me and to others

to try to figure out what to do.

And they realized that
to curb racial profiling on the platform,

they were going to have to add friction;

that is, they were going
to have to slow people down.

So Nextdoor had a choice to make,

and against every impulse,

they decided to add friction.

And they did this by adding
a simple checklist.

There were three items on it.

First, they asked users to pause

and think, “What was this person doing
that made him suspicious?”

The category “black man”
is not grounds for suspicion.

Second, they asked users to describe
the person’s physical features,

not simply their race and gender.

Third, they realized that a lot of people

didn’t seem to know
what racial profiling was,

nor that they were engaging in it.

So Nextdoor provided them
with a definition

and told them that it was
strictly prohibited.

Most of you have seen
those signs in airports

and in metro stations,
“If you see something, say something.”

Nextdoor tried modifying this.

“If you see something suspicious,

say something specific.”

And using this strategy,
by simply slowing people down,

Nextdoor was able to curb
racial profiling by 75 percent.

Now, people often will say to me,

“You can’t add friction
in every situation, in every context,

and especially for people who make
split-second decisions all the time.”

But it turns out we can add friction

to more situations than we think.

Working with the Oakland Police Department

in California,

I and a number of my colleagues
were able to help the department

to reduce the number of stops they made

of people who were not
committing any serious crimes.

And we did this by pushing officers

to ask themselves a question
before each and every stop they made:

“Is this stop intelligence-led,

yes or no?”

In other words,

do I have prior information
to tie this particular person

to a specific crime?

By adding that question

to the form officers complete
during a stop,

they slow down, they pause,

they think, “Why am I considering
pulling this person over?”

In 2017, before we added that
intelligence-led question to the form,

officers made about 32,000 stops
across the city.

In that next year,
with the addition of this question,

that fell to 19,000 stops.

African-American stops alone
fell by 43 percent.

And stopping fewer black people
did not make the city any more dangerous.

In fact, the crime rate continued to fall,

and the city became safer for everybody.

So one solution can come from reducing
the number of unnecessary stops.

Another can come from improving
the quality of the stops

officers do make.

And technology can help us here.

We all know about George Floyd’s death,

because those who tried to come to his aid
held cell phone cameras

to record that horrific, fatal
encounter with the police.

But we have all sorts of technology
that we’re not putting to good use.

Police officers across the country

are now required to wear body cameras,

so we have recordings of not only
the most extreme and horrific encounters

but of everyday interactions.

With an interdisciplinary
team at Stanford,

we’ve begun to use
machine learning techniques

to analyze large numbers of encounters.

This is to better understand
what happens in routine traffic stops.

What we found was that

even when police officers
are behaving professionally,

they speak to black drivers
less respectfully than white drivers.

In fact, from the words
officers use alone,

we could predict whether they were talking
to a black driver or a white driver.

The problem is that the vast majority
of the footage from these cameras

is not used by police departments

to understand what’s
going on on the street

or to train officers.

And that’s a shame.

How does a routine stop
turn into a deadly encounter?

How did this happen
in George Floyd’s case?

How did it happen in others?

When my eldest son was 16 years old,

he discovered that
when white people look at him,

they feel fear.

Elevators are the worst, he said.

When those doors close,

people are trapped in this tiny space

with someone they have been taught
to associate with danger.

My son senses their discomfort,

and he smiles to put them at ease,

to calm their fears.

When he speaks,

their bodies relax.

They breathe easier.

They take pleasure in his cadence,

his diction, his word choice.

He sounds like one of them.

I used to think that my son
was a natural extrovert like his father.

But I realized at that moment,
in that conversation,

that his smile was not a sign
that he wanted to connect

with would-be strangers.

It was a talisman he used
to protect himself,

a survival skill he had honed
over thousands of elevator rides.

He was learning to accommodate the tension
that his skin color generated

and that put his own life at risk.

We know that the brain is wired for bias,

and one way to interrupt that bias
is to pause and to reflect

on the evidence of our assumptions.

So we need to ask ourselves:

What assumptions do we bring
when we step onto an elevator?

Or an airplane?

How do we make ourselves aware
of our own unconscious bias?

Who do those assumptions keep safe?

Who do they put at risk?

Until we ask these questions

and insist that our schools
and our courts and our police departments

and every institution do the same,

we will continue to allow bias

to blind us.

And if we do,

none of us are truly safe.

Thank you.