Can machines read your emotions?
Kostas Karpouzis

With every year, machines surpass humans
in more and more activities

we once thought only we were capable of.

Today’s computers can beat us
in complex board games,

transcribe speech in dozens of languages,

and instantly identify almost any object.

But the robots of tomorrow may go further

by learning to figure out
what we’re feeling.

And why does that matter?

Because if machines
and the people who run them

can accurately read our emotional states,

they may be able to assist us
or manipulate us

at unprecedented scales.

But before we get there,

how can something as complex as emotion
be converted into mere numbers,

the only language machines understand?

Essentially the same way our own brains
interpret emotions,

by learning how to spot them.

American psychologist Paul Ekman
identified certain universal emotions

whose visual cues are understood
the same way across cultures.

For example, an image of a smile
signals joy to modern urban dwellers

and aboriginal tribesmen alike.

And according to Ekman,

anger,

disgust,

fear,

joy,

sadness,

and surprise are equally recognizable.

As it turns out, computers are rapidly
getting better at image recognition

thanks to machine learning algorithms,
such as neural networks.

These consist of artificial nodes that
mimic our biological neurons

by forming connections
and exchanging information.

To train the network, sample inputs
pre-classified into different categories,

such as photos marked happy or sad,

are fed into the system.

The network then learns to classify
those samples

by adjusting the relative weights
assigned to particular features.

The more training data it’s given,

the better the algorithm becomes
at correctly identifying new images.

This is similar to our own brains,

which learn from previous experiences
to shape how new stimuli are processed.
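
To make that weight adjustment concrete,
here is a minimal sketch in Python of a single artificial node
learning to separate "happy" from "sad" samples.
Everything in it is invented for illustration:
the two features, the toy data, and the learning rate
are assumptions, not a real system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: each row is [mouth_curvature, eye_openness],
# and labels are 1 for photos marked "happy", 0 for "sad".
X = np.array([[0.9, 0.8], [0.8, 0.7], [0.7, 0.9],   # happy samples
              [0.1, 0.3], [0.2, 0.2], [0.3, 0.1]])  # sad samples
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

# A single artificial node: one weight per feature plus a bias.
w = rng.normal(size=2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training loop: repeatedly adjust the relative weights so the
# node's guesses move toward the pre-classified labels.
for _ in range(2000):
    pred = sigmoid(X @ w + b)          # current guesses in [0, 1]
    error = pred - y                   # how far off each guess is
    w -= 0.5 * (X.T @ error) / len(y)  # nudge the feature weights
    b -= 0.5 * error.mean()            # nudge the bias

# A new, unseen "face": broad smile, wide-open eyes.
print("P(happy):", sigmoid(np.array([0.85, 0.75]) @ w + b))
```

The more labeled rows you add to X,
the more reliably the learned weights
carry over to new faces,
which is the point made above.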

Recognition algorithms aren’t just
limited to facial expressions.

Our emotions manifest in many ways.

There’s body language and vocal tone,

changes in heart rate, complexion,
and skin temperature,

or even word frequency and sentence
structure in our writing.
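
As a toy sketch of the writing signal,
here is a word-frequency scorer in Python.
The cue-word lists and the scoring rule
are made up for illustration;
a real system would learn such cues from data.

```python
from collections import Counter
import re

# Invented cue-word lists; a real system would learn these.
POSITIVE = {"happy", "love", "great", "wonderful"}
NEGATIVE = {"sad", "hate", "awful", "terrible"}

def emotion_score(text):
    """Score writing from -1 (all sad cues) to +1 (all happy cues)."""
    counts = Counter(re.findall(r"[a-z']+", text.lower()))
    pos = sum(counts[w] for w in POSITIVE)
    neg = sum(counts[w] for w in NEGATIVE)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(emotion_score("I love this wonderful, happy day"))  #  1.0
print(emotion_score("What an awful, terrible day"))       # -1.0
```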

You might think that training
neural networks to recognize these

would be a long and complicated task

until you realize just how much
data is out there,

and how quickly modern computers
can process it.

From social media posts,

uploaded photos and videos,

and phone recordings,

to heat-sensitive security cameras

and wearables that monitor
physiological signs,

the big question is not how to collect
enough data,

but what we’re going to do with it.

There are plenty of beneficial uses
for computerized emotion recognition.

Robots using algorithms to identify
facial expressions

can help children learn

or provide lonely people
with a sense of companionship.

Social media companies are considering
using algorithms

to help prevent suicides by flagging posts
that contain specific words or phrases.
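
In its simplest form,
that kind of flagging is just phrase matching,
as in this hypothetical Python sketch;
the watch-list here is invented,
and a deployed system would weigh context carefully
and route flags to trained human reviewers.

```python
# Hypothetical watch-list of phrases that trigger a review.
CONCERNING_PHRASES = [
    "can't go on",
    "no way out",
    "nothing matters anymore",
]

def should_flag(post):
    """True if the post contains any listed phrase."""
    text = post.lower()
    return any(phrase in text for phrase in CONCERNING_PHRASES)

for post in ["Lovely weather today!",
             "Lately I feel like there's no way out."]:
    print(should_flag(post), "-", post)
```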

And emotion recognition software can help
treat mental disorders

or even provide people with low-cost
automated psychotherapy.

Despite the potential benefits,

the prospect of a massive network
automatically scanning our photos,

communications,

and physiological signs
is also quite disturbing.

What are the implications for our privacy
when such impersonal systems

are used by corporations to exploit
our emotions through advertising?

And what becomes of our rights

if authorities think they can identify
the people likely to commit crimes

before they even make
a conscious decision to act?

Robots currently have a long way to go

in distinguishing emotional nuances,
like irony,

and scales of emotion,
like just how happy or sad someone is.

Nonetheless, they may eventually be able
to accurately read our emotions

and respond to them.

Whether they can empathize with our fear
of unwanted intrusion, however,

that’s another story.
