How we can teach computers to make sense of our emotions
Raphael Arar

Translator: Ivana Korom
Reviewer: Joanna Pietrulewicz

I consider myself one part artist
and one part designer.

And I work at an artificial
intelligence research lab.

We’re trying to create technology

that you’ll want to interact with
in the far future.

Not just six months from now,
but try years and decades from now.

And we’re taking a moonshot

that we’ll want to be
interacting with computers

in deeply emotional ways.

So in order to do that,

the technology has to be
just as much human as it is artificial.

It has to get you.

You know, like that inside joke
that’ll have you and your best friend

on the floor, cracking up.

Or that look of disappointment
that you can just smell from miles away.

I view art as the gateway to help us
bridge this gap between human and machine:

to figure out what it means
to get each other

so that we can train AI to get us.

See, to me, art is a way
to put tangible experiences

to intangible ideas,
feelings and emotions.

And I think it’s one
of the most human things about us.

See, we’re a complicated
and complex bunch.

We have what feels like
an infinite range of emotions,

and to top it off, we’re all different.

We have different family backgrounds,

different experiences
and different psychologies.

And this is what makes life
really interesting.

But this is also what makes
working on intelligent technology

extremely difficult.

And right now, AI research, well,

it’s a bit lopsided on the tech side.

And that makes a lot of sense.

See, for every
qualitative thing about us –

you know, those parts of us that are
emotional, dynamic and subjective –

we have to convert it
to a quantitative metric:

something that can be represented
with facts, figures and computer code.

The issue is, there are
many qualitative things

that we just can’t put our finger on.

So, think about hearing
your favorite song for the first time.

What were you doing?

How did you feel?

Did you get goosebumps?

Or did you get fired up?

Hard to describe, right?

See, parts of us feel so simple,

but under the surface,
there’s really a ton of complexity.

And translating
that complexity to machines

is what makes them modern-day moonshots.

And I’m not convinced that we can
answer these deeper questions

with ones and zeros alone.

So, in the lab, I’ve been creating art

as a way to help me
design better experiences

for bleeding-edge technology.

And it’s been serving as a catalyst

to beef up the more human ways
that computers can relate to us.

Through art, we’re tackling
some of the hardest questions,

like what does it really mean to feel?

Or how do we engage and know
how to be present with each other?

And how does intuition
affect the way that we interact?

So, take for example human emotion.

Right now, computers can make sense
of our most basic ones,

like joy, sadness,
anger, fear and disgust,

by converting those
characteristics to math.

But what about the more complex emotions?

You know, those emotions

that we have a hard time
describing to each other?

Like nostalgia.

So, to explore this, I created
a piece of art, an experience,

that asked people to share a memory,

and I teamed up with some data scientists

to figure out how to take
an emotion that’s so highly subjective

and convert it into something
mathematically precise.

So, we created what we call
a nostalgia score

and it’s the heart of this installation.

To do that, the installation
asks you to share a story.

The computer then analyzes it
for its simpler emotions,

checks for your tendency
to use past-tense wording

and also looks for words
that we tend to associate with nostalgia,

like “home,” “childhood” and “the past.”

It then creates a nostalgia score

to indicate how nostalgic your story is.

And that score is the driving force
behind these light-based sculptures

that serve as physical embodiments
of your contribution.

And the higher the score,
the rosier the hue.

You know, like looking at the world
through rose-colored glasses.
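
To make that concrete, here is a minimal Python sketch of how a score like this might be computed and mapped to a hue. The word list, the past-tense check, the weights and the color mapping are all illustrative assumptions, and the analysis of the simpler emotions is left out entirely; the installation’s actual model is its own.

# A minimal sketch of a "nostalgia score," using invented word lists,
# weights and hue mapping; the analysis of simpler emotions is omitted.
import re

NOSTALGIA_WORDS = {"home", "childhood", "past", "remember", "old"}
PAST_TENSE = re.compile(r"\b(?:\w+ed|was|were|had|went|felt)\b", re.IGNORECASE)

def nostalgia_score(story: str) -> float:
    """Combine past-tense usage and nostalgia keywords into a score in [0, 1]."""
    words = re.findall(r"[a-z']+", story.lower())
    if not words:
        return 0.0
    past_ratio = len(PAST_TENSE.findall(story)) / len(words)
    keyword_ratio = sum(w in NOSTALGIA_WORDS for w in words) / len(words)
    return min(1.0, 1.5 * past_ratio + 2.0 * keyword_ratio)  # arbitrary weights

def score_to_rgb(score: float) -> tuple:
    """The higher the score, the rosier the hue of the light sculpture."""
    return (255, int(220 - 120 * score), int(230 - 100 * score))

story = "When I was a child, our old home always felt warm."
score = nostalgia_score(story)
print(round(score, 2), score_to_rgb(score))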

So, when you see your score

and the physical representation of it,

sometimes you’d agree
and sometimes you wouldn’t.

When you agree, it’s as if it really
understood how that experience made you feel.

But other times it gets tripped up

and has you thinking
it doesn’t understand you at all.

But the piece really serves to show

that if we have a hard time explaining
the emotions that we have to each other,

how can we teach a computer
to make sense of them?

So, even the more objective parts
about being human are hard to describe.

Like, conversation.

Have you ever really tried
to break down the steps?

So think about sitting
with your friend at a coffee shop

and just having small talk.

How do you know when to take a turn?

How do you know when to shift topics?

And how do you even know
what topics to discuss?

See, most of us
don’t really think about it,

because it’s almost second nature.

And when we get to know someone,
we learn more about what makes them tick,

and then we learn
what topics we can discuss.

But when it comes to teaching
AI systems how to interact with people,

we have to teach them
step by step what to do.

And right now, it feels clunky.

If you’ve ever tried to talk
with Alexa, Siri or Google Assistant,

you can tell that it, or they,
can still sound cold.

And have you ever gotten annoyed

when they didn’t understand
what you were saying

and you had to rephrase what you wanted
20 times just to play a song?

Alright, to the credit of the designers,
realistic communication is really hard.

And there’s a whole branch of sociology,

called conversation analysis,

that tries to make blueprints
for different types of conversation.

Types like customer service,
counseling, teaching and others.

I’ve been collaborating
with a conversation analyst at the lab

to try to help our AI systems
hold more human-sounding conversations.

This way, when you have an interaction
with a chatbot on your phone

or a voice-based system in the car,

it sounds a little more human
and less cold and disjointed.

So I created a piece of art

that tries to highlight
the robotic, clunky interaction

to help us understand, as designers,

why it doesn’t sound human yet
and, well, what we can do about it.

The piece is called Bot to Bot

and it puts one conversational
system against another

and then exposes it to the general public.

And what ends up happening
is that you get something

that tries to mimic human conversation,

but falls short.

Sometimes it works and sometimes
it gets into these, well,

loops of misunderstanding.
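
As a toy illustration of that setup, and not the installation’s actual systems, imagine two copies of the same rule-based responder feeding each other’s output back as input; the rules below are invented for the example.

# Two copies of an invented rule-based responder talk to each other.
RULES = {
    "hello": "Hi! How are you today?",
    "how are you": "I'm fine. What do you mean by that?",
    "what do you mean": "Sorry, I didn't catch that. How are you today?",
}

def respond(message: str) -> str:
    """Return the first matching canned reply, or a fallback."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "Sorry, I didn't catch that. How are you today?"

message = "Hello there!"
for turn in range(6):
    speaker = "Bot A" if turn % 2 == 0 else "Bot B"
    message = respond(message)
    print(speaker + ": " + message)
# The exchange stays grammatical, but it quickly settles into a loop of
# misunderstanding rather than anything that feels human.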

So even though the machine-to-machine
conversation can make sense,

grammatically and colloquially,

it can still end up
feeling cold and robotic.

And despite checking all the boxes,
the dialogue lacks soul

and those one-off quirks
that make each of us who we are.

So while it might be grammatically correct

and use all the right
hashtags and emojis,

it can end up sounding mechanical
and, well, a little creepy.

And we call this the uncanny valley.

You know, that creepiness factor of tech

where it’s close to human
but just slightly off.

And the piece will become

one way that we test
for the humanness of a conversation

and the parts that get
lost in translation.

So there are other things
that get lost in translation, too,

like human intuition.

Right now, computers
are gaining more autonomy.

They can take care of things for us,

like change the temperature
of our houses based on our preferences

and even help us drive on the freeway.

But there are things
that you and I do in person

that are really difficult
to translate to AI.

So think about the last time
that you saw an old classmate or coworker.

Did you give them a hug
or go in for a handshake?

You probably didn’t think twice

because you’ve had so many
built-up experiences

that had you do one or the other.

And as an artist, I feel
that access to one’s intuition,

your unconscious knowing,

is what helps us create amazing things.

Big ideas, from that abstract,
nonlinear place in our consciousness

that is the culmination
of all of our experiences.

And if we want computers to relate to us
and help amplify our creative abilities,

I feel that we’ll need to start thinking
about how to make computers intuitive.

So I wanted to explore
how something like human intuition

could be directly translated
to artificial intelligence.

And I created a piece
that explores computer-based intuition

in a physical space.

The piece is called Wayfinding,

and it’s set up as a symbolic compass
that has four kinetic sculptures.

Each one represents a direction,

north, east, south and west.

And there are sensors set up
on the top of each sculpture

that capture how far away
you are from them.

And the data that gets collected

ends up changing the way
that the sculptures move

and the direction of the compass.

The thing is, the piece doesn’t work
like the automatic door sensor

that just opens
when you walk in front of it.

See, your contribution is only a part
of its collection of lived experiences.

And all of those experiences
affect the way that it moves.

So when you walk in front of it,

it starts to use all of the data

that it’s captured
throughout its exhibition history –

or its intuition –

to mechanically respond to you
based on what it’s learned from others.
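
Here is a rough sketch of that idea, assuming a hypothetical sculpture class and an invented response rule: every reading is kept for the life of the exhibition, and the response to the current visitor depends on everyone who came before.

# A hypothetical sculpture whose response depends on its whole exhibition
# history of distance readings, not just the current one.
class KineticSculpture:
    def __init__(self, direction: str):
        self.direction = direction
        self.history = []  # every visitor distance captured so far, in meters

    def sense(self, distance: float) -> str:
        """Record the new reading, then respond relative to all past visitors."""
        self.history.append(distance)
        typical = sum(self.history) / len(self.history)
        # Unlike an automatic door, the reaction compares this visitor
        # against everything the piece has "learned" from others.
        if distance < typical:
            return self.direction + ": lean toward the visitor"
        return self.direction + ": settle back"

north = KineticSculpture("north")
for reading in [3.0, 2.5, 4.0, 1.0]:  # distances gathered over the show's run
    print(north.sense(reading))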

And what ends up happening
is that as participants

we start to learn the level
of detail that we need

in order to manage expectations

from both humans and machines.

We can almost see our intuition
being played out on the computer,

picturing all of that data
being processed in our mind’s eye.

My hope is that this type of art

will help us think differently
about intuition

and how to apply that to AI in the future.

So these are just a few examples
of how I’m using art to feed into my work

as a designer and researcher
of artificial intelligence.

And I see it as a crucial way
to move innovation forward.

Because right now, there are
a lot of extremes when it comes to AI.

Popular movies show it
as this destructive force

while commercials
show it as a savior

to solve some of the world’s
most complex problems.

But regardless of where you stand,

it’s hard to deny
that we’re living in a world

that’s becoming more
and more digital by the second.

Our lives revolve around our devices,
smart appliances and more.

And I don’t think
this will let up any time soon.

So, I’m trying to embed
more humanness from the start.

And I have a hunch that bringing art
into an AI research process

is a way to do just that.

Thank you.

(Applause)