Can machines read your emotions? - Kostas Karpouzis

With every year, machines surpass humans
in more and more activities

we once thought only we were capable of.

Today’s computers can beat us
in complex board games,

transcribe speech in dozens of languages,

and instantly identify almost any object.

But the robots of tomorrow may go further

by learning to figure out
what we’re feeling.

And why does that matter?

Because if machines
and the people who run them

can accurately read our emotional states,

they may be able to assist us
or manipulate us

at unprecedented scales.

But before we get there,

how can something as complex as emotion
be converted into mere numbers,

the only language machines understand?

Essentially the same way our own brains
interpret emotions,

by learning how to spot them.

American psychologist Paul Ekman
identified certain universal emotions

whose visual cues are understood
the same way across cultures.

For example, an image of a smile
signals joy to modern urban dwellers

and aboriginal tribesmen alike.

And according to Ekman,

anger,

disgust,

fear,

joy,

sadness,

and surprise are equally recognizable.

As it turns out, computers are rapidly
getting better at image recognition

thanks to machine learning algorithms,
such as neural networks.

These consist of artificial nodes that
mimic our biological neurons

by forming connections
and exchanging information.

To train the network, sample inputs
pre-classified into different categories,

such as photos marked happy or sad,

are fed into the system.

The network then learns to classify
those samples

by adjusting the relative weights
assigned to particular features.

The more training data it’s given,

the better the algorithm becomes
at correctly identifying new images.

This is similar to our own brains,

which learn from previous experiences
to shape how new stimuli are processed.
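To make that training loop concrete, here is a minimal sketch in Python using PyTorch. It is not the system any particular lab uses: the eight-number "face features," the synthetic happy/sad data, and the tiny network layout are all illustrative assumptions, but the weight-adjustment process mirrors the description above.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Pretend each face is summarized by 8 numeric features
# (e.g. mouth curvature, eyebrow position). These are made up.
# Class 1 = "happy", class 0 = "sad".
happy = torch.randn(100, 8) + 1.0
sad = torch.randn(100, 8) - 1.0
inputs = torch.cat([happy, sad])
labels = torch.cat([torch.ones(100, dtype=torch.long),
                    torch.zeros(100, dtype=torch.long)])

# A small network of artificial nodes whose connection weights
# get adjusted as the pre-classified samples are fed in.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()  # nudge the weights toward better classifications

# After training, the network labels a new, unseen sample.
new_sample = torch.randn(1, 8) + 1.0
prediction = model(new_sample).argmax(dim=1).item()
print("happy" if prediction == 1 else "sad")
```

With more (and more varied) labeled samples, the same loop simply runs longer, which is why more training data tends to mean better classification of new images.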

Recognition algorithms aren’t just
limited to facial expressions.

Our emotions manifest in many ways.

There’s body language and vocal tone,

changes in heart rate, complexion,
and skin temperature,

or even word frequency and sentence
structure in our writing.
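As a toy illustration of that last cue, here is a minimal sketch, with made-up word lists, of how word frequency in a piece of writing could serve as a crude emotional signal. Real text-based emotion recognition relies on far richer features and learned models.

```python
from collections import Counter
import re

# Hypothetical word lists, for illustration only.
JOY_WORDS = {"happy", "great", "love", "wonderful"}
SAD_WORDS = {"sad", "alone", "tired", "miss"}

def emotion_word_counts(text: str) -> dict:
    """Count how often joy- and sadness-related words appear in the text."""
    words = Counter(re.findall(r"[a-z']+", text.lower()))
    return {
        "joy": sum(words[w] for w in JOY_WORDS),
        "sadness": sum(words[w] for w in SAD_WORDS),
    }

print(emotion_word_counts("I miss home and feel so alone, but the trip was great."))
# {'joy': 1, 'sadness': 2}
```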

You might think that training
neural networks to recognize these

would be a long and complicated task

until you realize just how much
data is out there,

and how quickly modern computers
can process it.

From social media posts,

uploaded photos and videos,

and phone recordings,

to heat-sensitive security cameras

and wearables that monitor
physiological signs,

the big question is not how to collect
enough data,

but what we’re going to do with it.

There are plenty of beneficial uses
for computerized emotion recognition.

Robots using algorithms to identify
facial expressions

can help children learn

or provide lonely people
with a sense of companionship.

Social media companies are considering
using algorithms

to help prevent suicides by flagging posts
that contain specific words or phrases.

And emotion recognition software can help
treat mental disorders

or even provide people with low-cost
automated psychotherapy.

Despite the potential benefits,

the prospect of a massive network
automatically scanning our photos,

communications,

and physiological signs
is also quite disturbing.

What are the implications for our privacy
when such impersonal systems

are used by corporations to exploit
our emotions through advertising?

And what becomes of our rights

if authorities think they can identify
the people likely to commit crimes

before they even make
a conscious decision to act?

Robots currently have a long way to go

in distinguishing emotional nuances,
like irony,

and degrees of emotion,
like just how happy or sad someone is.

Nonetheless, they may eventually be able
to accurately read our emotions

and respond to them.

Whether they can empathize with our fear
of unwanted intrusion, however,

that’s another story.