Why we have an emotional connection to robots
Kate Darling

Translator: Joseph Geni
Reviewer: Krystian Aparta

There was a day, about 10 years ago,

when I asked a friend to hold
a baby dinosaur robot upside down.

It was this toy called a Pleo
that I had ordered,

and I was really excited about it
because I’ve always loved robots.

And this one had really cool
technical features.

It had motors and touch sensors

and it had an infrared camera.

And one of the things it had
was a tilt sensor,

so it knew what direction it was facing.

And when you held it upside down,

it would start to cry.
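(For a rough sense of the mechanism she's describing: a minimal sketch in Python, assuming a hypothetical tilt-sensor API; this illustrates the tilt-to-cry behavior, and is not Pleo's actual firmware.)

```python
# A minimal sketch of the behavior described above, assuming a
# hypothetical tilt-sensor reading in degrees; not Pleo's firmware.

def is_upside_down(pitch_degrees: float) -> bool:
    # Treat anything pitched past ~150 degrees as "held upside down".
    return abs(pitch_degrees) > 150

def next_state(pitch_degrees: float, being_petted: bool, crying: bool) -> bool:
    """Return whether the toy should be crying on the next tick."""
    if is_upside_down(pitch_degrees):
        return True   # inverted: start (or keep) crying
    if being_petted:
        return False  # touch-sensor event: calm back down
    return crying     # otherwise keep the current state

# Example: held upside down, then set back down and petted.
state = next_state(170.0, False, False)  # True  (crying)
state = next_state(10.0, True, state)    # False (soothed)
```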

And I thought this was super cool,
so I was showing it off to my friend,

and I said, “Oh, hold it up by the tail.
See what it does.”

So we’re watching
the theatrics of this robot

struggle and cry out.

And after a few seconds,

it starts to bother me a little,

and I said, “OK, that’s enough now.

Let’s put him back down.”

And then I pet the robot
to make it stop crying.

And that was kind of
a weird experience for me.

For one thing, I wasn’t the most
maternal person at the time.

Although since then I’ve become
a mother, nine months ago,

and I’ve learned that babies also squirm
when you hold them upside down.

(Laughter)

But my response to this robot
was also interesting

because I knew exactly
how this machine worked,

and yet I still felt
compelled to be kind to it.

And that observation sparked a curiosity

that I’ve spent the past decade pursuing.

Why did I comfort this robot?

And one of the things I discovered
was that my treatment of this machine

was more than just an awkward moment
in my living room,

that in a world where we’re increasingly
integrating robots into our lives,

an instinct like that
might actually have consequences,

because the first thing that I discovered
is that it’s not just me.

In 2007, the Washington Post
reported that the United States military

was testing this robot
that defused land mines.

And the way it worked
was it was shaped like a stick insect

and it would walk
around a minefield on its legs,

and every time it stepped on a mine,
one of the legs would blow up,

and it would continue on the other legs
to blow up more mines.

And the colonel who was in charge
of this testing exercise

ends up calling it off,

because, he says, it’s too inhumane

to watch this damaged robot
drag itself along the minefield.

Now, what would cause
a hardened military officer

and someone like myself

to have this response to robots?

Well, of course, we’re primed
by science fiction and pop culture

to really want to personify these things,

but it goes a little bit deeper than that.

It turns out that we’re biologically
hardwired to project intent and life

onto any movement in our physical space
that seems autonomous to us.

So people will treat all sorts
of robots like they’re alive.

These bomb-disposal units get names.

They get medals of honor.

They’ve had funerals for them
with gun salutes.

And research shows that we do this
even with very simple household robots,

like the Roomba vacuum cleaner.

(Laughter)

It’s just a disc that roams
around your floor to clean it,

but just the fact it’s moving
around on its own

will cause people to name the Roomba

and feel bad for the Roomba
when it gets stuck under the couch.

(Laughter)

And we can design robots
specifically to evoke this response,

using eyes and faces or movements

that people automatically,
subconsciously associate

with states of mind.

And there’s an entire body of research
called human-robot interaction

that really shows how well this works.

So for example, researchers
at Stanford University found out

that it makes people really uncomfortable

when you ask them to touch
a robot’s private parts.

(Laughter)

So from this, and from many other studies,

we know that people
respond to the cues given to them

by these lifelike machines,

even if they know that they’re not real.

Now, we’re headed towards a world
where robots are everywhere.

Robotic technology is moving out
from behind factory walls.

It’s entering workplaces, households.

And as these machines that can sense
and make autonomous decisions and learn

enter into these shared spaces,

I think that maybe the best
analogy we have for this

is our relationship with animals.

Thousands of years ago,
we started to domesticate animals,

and we trained them for work
and weaponry and companionship.

And throughout history, we’ve treated
some animals like tools or like products,

and other animals,
we’ve treated with kindness

and we’ve given a place in society
as our companions.

I think it’s plausible we might start
to integrate robots in similar ways.

And sure, animals are alive.

Robots are not.

And I can tell you,
from working with roboticists,

that we’re pretty far away from developing
robots that can feel anything.

But we feel for them,

and that matters,

because if we’re trying to integrate
robots into these shared spaces,

we need to understand that people will
treat them differently than other devices,

and that in some cases,

for example, the case of a soldier
who becomes emotionally attached

to the robot that they work with,

that can be anything
from inefficient to dangerous.

But in other cases,
it can actually be useful

to foster this emotional
connection to robots.

We’re already seeing some great use cases,

for example, robots working
with autistic children

to engage them in ways
that we haven’t seen previously,

or robots working with teachers to engage
kids in learning with new results.

And it’s not just for kids.

Early studies show that robots
can help doctors and patients

in health care settings.

This is the PARO baby seal robot.

It’s used in nursing homes
and with dementia patients.

It’s been around for a while.

And I remember, years ago,
being at a party

and telling someone about this robot,

and her response was,

“Oh my gosh.

That’s horrible.

I can’t believe we’re giving people
robots instead of human care.”

And this is a really common response,

and I think it’s absolutely correct,

because that would be terrible.

But in this case,
it’s not what this robot replaces.

What this robot replaces is animal therapy

in contexts where
we can’t use real animals

but we can use robots,

because people will consistently treat
them more like an animal than a device.

Acknowledging this emotional
connection to robots

can also help us anticipate challenges

as these devices move into more intimate
areas of people’s lives.

For example, is it OK
if your child’s teddy bear robot

records private conversations?

Is it OK if your sex robot
has compelling in-app purchases?

(Laughter)

Because robots plus capitalism

equals questions around
consumer protection and privacy.

And those aren’t the only reasons

that our behavior around
these machines could matter.

A few years after that initial
experience I had

with this baby dinosaur robot,

I did a workshop
with my friend Hannes Gassert.

And we took five
of these baby dinosaur robots

and we gave them to five teams of people.

And we had them name them

and play with them and interact with them
for about an hour.

And then we unveiled
a hammer and a hatchet

and we told them to torture
and kill the robots.

(Laughter)

And this turned out to be
a little more dramatic

than we expected it to be,

because none of the participants
would even so much as strike

these baby dinosaur robots,

so we had to improvise a little,
and at some point, we said,

“OK, you can save your team’s robot
if you destroy another team’s robot.”

(Laughter)

And even that didn’t work.
They couldn’t do it.

So finally, we said,

“We’re going to destroy all of the robots

unless someone takes
a hatchet to one of them.”

And this guy stood up,
and he took the hatchet,

and the whole room winced
as he brought the hatchet down

on the robot’s neck,

and there was this half-joking,
half-serious moment of silence in the room

for this fallen robot.

(Laughter)

So that was a really
interesting experience.

Now, it wasn’t a controlled
study, obviously,

but it did lead to some
later research that I did at MIT

with Palash Nandy and Cynthia Breazeal,

where we had people come into the lab
and smash these HEXBUGs

that move around in a really
lifelike way, like insects.

So instead of choosing something cute
that people are drawn to,

we chose something more basic,

and what we found
was that high-empathy people

would hesitate more to hit the HEXBUGs.

Now this is just a little study,

but it’s part of a larger body of research

that is starting to indicate
that there may be a connection

between people’s tendencies for empathy

and their behavior around robots.

But my question for the coming era
of human-robot interaction

is not: “Do we empathize with robots?”

It’s: “Can robots change
people’s empathy?”

Is there reason to, for example,

prevent your child
from kicking a robotic dog,

not just out of respect for property,

but because the child might be
more likely to kick a real dog?

And again, it’s not just kids.

This is the violent video games question,
but it’s on a completely new level

because of this visceral physicality
that we respond more intensely to

than to images on a screen.

When we behave violently towards robots,

specifically robots
that are designed to mimic life,

is that a healthy outlet
for violent behavior

or is that training our cruelty muscles?

We don’t know …

But the answer to this question has
the potential to impact human behavior,

it has the potential
to impact social norms,

it has the potential to inspire rules
around what we can and can’t do

with certain robots,

similar to our animal cruelty laws.

Because even if robots can’t feel,

our behavior towards them
might matter for us.

And regardless of whether
we end up changing our rules,

robots might be able to help us
come to a new understanding of ourselves.

Most of what I’ve learned
over the past 10 years

has not been about technology at all.

It’s been about human psychology

and empathy and how we relate to others.

Because when a child is kind to a Roomba,

when a soldier tries to save
a robot on the battlefield,

or when a group of people refuses
to harm a robotic baby dinosaur,

those robots aren’t just motors
and gears and algorithms.

They’re reflections of our own humanity.

Thank you.

(Applause)
