How we can teach computers to make sense of our emotions
Raphael Arar

Translator: Ivana Korom
Reviewer: Joanna Pietrulewicz

I consider myself one part artist
and one part designer.

And I work at an artificial
intelligence research lab.

We’re trying to create technology

that you’ll want to interact with
in the far future.

Not just six months from now,
but try years and decades from now.

And we’re taking a moonshot

that we’ll want to be
interacting with computers

in deeply emotional ways.

So in order to do that,

the technology has to be
just as much human as it is artificial.

It has to get you.

You know, like that inside joke
that’ll have you and your best friend

on the floor, cracking up.

Or that look of disappointment
that you can just smell from miles away.

I view art as the gateway to help us
bridge this gap between human and machine:

to figure out what it means
to get each other

so that we can train AI to get us.

See, to me, art is a way
to put tangible experiences

to intangible ideas,
feelings and emotions.

And I think it’s one
of the most human things about us.

See, we’re a complicated
and complex bunch.

We have what feels like
an infinite range of emotions,

and to top it off, we’re all different.

We have different family backgrounds,

different experiences
and different psychologies.

And this is what makes life
really interesting.

But this is also what makes
working on intelligent technology

extremely difficult.

And right now, AI research, well,

it’s a bit lopsided on the tech side.

And that makes a lot of sense.

See, for every
qualitative thing about us –

you know, those parts of us that are
emotional, dynamic and subjective –

we have to convert it
to a quantitative metric:

something that can be represented
with facts, figures and computer code.

The issue is, there are
many qualitative things

that we just can’t put our finger on.

So, think about hearing
your favorite song for the first time.

What were you doing?

How did you feel?

Did you get goosebumps?

Or did you get fired up?

Hard to describe, right?

See, parts of us feel so simple,

but under the surface,
there’s really a ton of complexity.

And translating
that complexity to machines

is what makes them modern-day moonshots.

And I’m not convinced that we can
answer these deeper questions

with just ones and zeros alone.

So, in the lab, I’ve been creating art

as a way to help me
design better experiences

for bleeding-edge technology.

And it’s been serving as a catalyst

to beef up the more human ways
that computers can relate to us.

Through art, we’re tackling
some of the hardest questions,

like what does it really mean to feel?

Or how do we engage and know
how to be present with each other?

And how does intuition
affect the way that we interact?

So, take for example human emotion.

Right now, computers can make sense
of our most basic ones,

like joy, sadness,
anger, fear and disgust,

by converting those
characteristics to math.

But what about the more complex emotions?

You know, those emotions

that we have a hard time
describing to each other?

Like nostalgia.

So, to explore this, I created
a piece of art, an experience,

that asked people to share a memory,

and I teamed up with some data scientists

to figure out how to take
an emotion that’s so highly subjective

and convert it into something
mathematically precise.

So, we created what we call
a nostalgia score

and it’s the heart of this installation.

To do that, the installation
asks you to share a story,

the computer then analyzes it
for its simpler emotions,

it checks for your tendency
to use past-tense wording

and also looks for words
that we tend to associate with nostalgia,

like “home,” “childhood” and “the past.”

It then creates a nostalgia score

to indicate how nostalgic your story is.

And that score is the driving force
behind these light-based sculptures

that serve as physical embodiments
of your contribution.

And the higher the score,
the rosier the hue.

You know, like looking at the world
through rose-colored glasses.
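The scoring steps described above — checking for past-tense wording, looking for nostalgia-associated words, and mapping a higher score to a rosier hue — can be sketched in a few lines. This is purely a hypothetical illustration: the keyword list, the past-tense heuristic, the weights, and the `rosy_hue` mapping are all my assumptions, not the installation's actual model, and the simpler-emotion analysis is omitted.

```python
import re

# Words the talk says tend to be associated with nostalgia.
NOSTALGIA_WORDS = {"home", "childhood", "past"}
# Crude past-tense heuristic: words ending in "-ed" (an assumption).
PAST_TENSE = re.compile(r"\b\w+ed\b")

def nostalgia_score(story: str) -> float:
    """Return a 0..1 score estimating how nostalgic a story is."""
    words = re.findall(r"[a-z']+", story.lower())
    if not words:
        return 0.0
    keyword_hits = sum(w in NOSTALGIA_WORDS for w in words)
    past_hits = len(PAST_TENSE.findall(story.lower()))
    # Blend the two signals (invented weights) and clamp to [0, 1].
    score = (keyword_hits * 0.5 + past_hits * 0.1) / max(len(words) ** 0.5, 1)
    return min(score, 1.0)

def rosy_hue(score: float) -> tuple:
    """Map a higher score to a rosier (pinker) RGB color."""
    return (255, int(200 - 120 * score), int(210 - 90 * score))
```

A story like "I remembered my childhood home" would score higher than neutral small talk, and its sculpture would glow a deeper rose.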

So, when you see your score

and the physical representation of it,

sometimes you’d agree
and sometimes you wouldn’t.

It’s as if it really understood
how that experience made you feel.

But other times it gets tripped up

and has you thinking
it doesn’t understand you at all.

But the piece really serves to show

that if we have a hard time explaining
the emotions that we have to each other,

how can we teach a computer
to make sense of them?

So, even the more objective parts
about being human are hard to describe.

Like, conversation.

Have you ever really tried
to break down the steps?

So think about sitting
with your friend at a coffee shop

and just having small talk.

How do you know when to take a turn?

How do you know when to shift topics?

And how do you even know
what topics to discuss?

See, most of us
don’t really think about it,

because it’s almost second nature.

And when we get to know someone,
we learn more about what makes them tick,

and then we learn
what topics we can discuss.

But when it comes to teaching
AI systems how to interact with people,

we have to teach them
step by step what to do.

And right now, it feels clunky.

If you’ve ever tried to talk
with Alexa, Siri or Google Assistant,

you can tell that they
can still sound cold.

And have you ever gotten annoyed

when they didn’t understand
what you were saying

and you had to rephrase what you wanted
20 times just to play a song?

Alright, to the credit of the designers,
realistic communication is really hard.

And there’s a whole branch of sociology,

called conversation analysis,

that tries to make blueprints
for different types of conversation.

Types like customer service
or counseling, teaching and others.

I’ve been collaborating
with a conversation analyst at the lab

to try to help our AI systems
hold more human-sounding conversations.

This way, when you have an interaction
with a chatbot on your phone

or a voice-based system in the car,

it sounds a little more human
and less cold and disjointed.

So I created a piece of art

that tries to highlight
the robotic, clunky interaction

to help us understand, as designers,

why it doesn’t sound human yet
and, well, what we can do about it.

The piece is called Bot to Bot

and it puts one conversational
system against another

and then exposes it to the general public.

And what ends up happening
is that you get something

that tries to mimic human conversation,

but falls short.

Sometimes it works and sometimes
it gets into these, well,

loops of misunderstanding.
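A Bot to Bot-style setup — two conversational systems pitted against each other until they hit a loop of misunderstanding — can be mimicked with toy rule-based agents. This sketch is an assumption-laden stand-in: the rules, fallback phrases, and bot names are invented and bear no relation to the systems actually used in the piece.

```python
def make_bot(rules: dict, fallback: str):
    """Return a bot that maps known phrases to replies, else falls back."""
    def bot(utterance: str) -> str:
        return rules.get(utterance.lower().strip("?!. "), fallback)
    return bot

# Two hypothetical agents with tiny, mismatched vocabularies.
bot_a = make_bot({"hello": "How are you?", "fine": "Glad to hear it."},
                 "What do you mean?")
bot_b = make_bot({"how are you": "Fine.",
                  "what do you mean": "What do YOU mean?"},
                 "What do you mean?")

def converse(first: str, turns: int = 6) -> list:
    """Alternate utterances between the bots and log the exchange."""
    log, utterance, speakers = [first], first, [bot_b, bot_a]
    for i in range(turns):
        utterance = speakers[i % 2](utterance)
        log.append(utterance)
    return log
```

Run `converse("Hello")` and the exchange quickly degrades into the two bots trading "What do you mean?" back and forth — grammatical on every turn, yet going nowhere.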

So even though the machine-to-machine
conversation can make sense,

grammatically and colloquially,

it can still end up
feeling cold and robotic.

And despite checking all the boxes,
the dialogue lacks soul

and those one-off quirks
that make each of us who we are.

So while it might be grammatically correct

and uses all the right
hashtags and emojis,

it can end up sounding mechanical
and, well, a little creepy.

And we call this the uncanny valley.

You know, that creepiness factor of tech

where it’s close to human
but just slightly off.

And the piece is becoming

one way that we test
for the humanness of a conversation

and the parts that get
lost in translation.

So there are other things
that get lost in translation, too,

like human intuition.

Right now, computers
are gaining more autonomy.

They can take care of things for us,

like change the temperature
of our houses based on our preferences

and even help us drive on the freeway.

But there are things
that you and I do in person

that are really difficult
to translate to AI.

So think about the last time
that you saw an old classmate or coworker.

Did you give them a hug
or go in for a handshake?

You probably didn’t think twice

because you’ve had so many
built-up experiences

that had you do one or the other.

And as an artist, I feel
that access to one’s intuition,

your unconscious knowing,

is what helps us create amazing things.

Big ideas, from that abstract,
nonlinear place in our consciousness

that is the culmination
of all of our experiences.

And if we want computers to relate to us
and help amplify our creative abilities,

I feel that we’ll need to start thinking
about how to make computers be intuitive.

So I wanted to explore
how something like human intuition

could be directly translated
to artificial intelligence.

And I created a piece
that explores computer-based intuition

in a physical space.

The piece is called Wayfinding,

and it’s set up as a symbolic compass
that has four kinetic sculptures.

Each one represents a direction,

north, east, south and west.

And there are sensors set up
on the top of each sculpture

that capture how far away
you are from them.

And the data that gets collected

ends up changing the way
that sculptures move

and the direction of the compass.

The thing is, the piece doesn’t work
like the automatic door sensor

that just opens
when you walk in front of it.

See, your contribution is only a part
of its collection of lived experiences.

And all of those experiences
affect the way that it moves.

So when you walk in front of it,

it starts to use all of the data

that it’s captured
throughout its exhibition history –

or its intuition –

to mechanically respond to you
based on what it’s learned from others.
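The mechanism described above — each sculpture recording visitor distances across its exhibition history and tempering its response with everything it has "learned" — might be sketched like this. The class, the running-average baseline, and the intensity formula are all my assumptions for illustration, not the installation's real control logic.

```python
from statistics import mean

class KineticSculpture:
    """One of the four compass sculptures (north, east, south, west)."""

    def __init__(self, direction: str):
        self.direction = direction
        self.history = []   # distances seen across its exhibition history

    def sense(self, distance: float) -> float:
        """Record a visitor's distance and return a movement intensity
        that blends this reading with what it learned from past visitors."""
        self.history.append(distance)
        # The running average of every visitor so far stands in for
        # the piece's "intuition"; closer-than-usual visitors provoke
        # a stronger mechanical response.
        baseline = mean(self.history)
        return max(0.0, 1.0 - distance / max(baseline, 1e-6))
```

Unlike an automatic door sensor, the same distance can produce a different response on different days, because the baseline shifts with everyone who has ever stood in front of the piece.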

And what ends up happening
is that as participants

we start to learn the level
of detail that we need

in order to manage expectations

from both humans and machines.

We can almost see our intuition
being played out on the computer,

picturing all of that data
being processed in our mind’s eye.

My hope is that this type of art

will help us think differently
about intuition

and how to apply that to AI in the future.

So these are just a few examples
of how I’m using art to feed into my work

as a designer and researcher
of artificial intelligence.

And I see it as a crucial way
to move innovation forward.

Because right now, there are
a lot of extremes when it comes to AI.

Popular movies show it
as this destructive force

while commercials
are showing it as a savior

to solve some of the world’s
most complex problems.

But regardless of where you stand,

it’s hard to deny
that we’re living in a world

that’s becoming more
and more digital by the second.

Our lives revolve around our devices,
smart appliances and more.

And I don’t think
this will let up any time soon.

So, I’m trying to embed
more humanness from the start.

And I have a hunch that bringing art
into an AI research process

is a way to do just that.

Thank you.

(Applause)
