This computer is learning to read your mind (DIY Neuroscience, a TED series)

Translator: Joseph Geni
Reviewer: Krystian Aparta

Greg Gage: Mind-reading.
You’ve seen this in sci-fi movies:

machines that can read our thoughts.

However, there are devices today

that can read the electrical
activity from our brains.

We call this the EEG.

Is there information
contained in these brainwaves?

And if so, could we train a computer
to read our thoughts?

My buddy Nathan
has been working to hack the EEG

to build a mind-reading machine.

[DIY Neuroscience]

So this is how the EEG works.

Inside your head is a brain,

and that brain is made
out of billions of neurons.

Each of those neurons sends
electrical messages to the others.

These small messages can combine
to make an electrical wave

that we can detect on a monitor.

Now traditionally, the EEG
can tell us large-scale things,

for example if you’re asleep
or if you’re alert.

But can it tell us anything else?

Can it actually read our thoughts?

We’re going to test this,

and we’re not going to start
with some complex thoughts.

We’re going to do something very simple.

Can we interpret what someone is seeing
using only their brainwaves?

Nathan’s going to begin by placing
electrodes on Christy’s head.

Nathan: My life is tangled.

(Laughter)

GG: And then he’s going to show her
a bunch of pictures

from four different categories.

Nathan: Face, house, scenery
and weird pictures.

GG: As we show Christy
hundreds of these images,

we are also capturing the electrical waves
onto Nathan’s computer.

We want to see if we can detect
any visual information about the photos

contained in the brainwaves,

so when we’re done,
we’re going to see if the EEG

can tell us what kind of picture
Christy is looking at,

and if it does, each category
should trigger a different brain signal.

OK, so we collected all the raw EEG data,

and this is what we got.

It all looks pretty messy,
so let’s arrange them by picture.

Now, still a bit too noisy
to see any differences,

but if we average the EEG
across all image types

by aligning them
to when the image first appeared,

we can remove this noise,

and pretty soon, we can see
some dominant patterns

emerge for each category.
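The averaging step works like this: align every trial to the moment its image appeared, then average across trials, so that background activity unrelated to the picture cancels out while the stimulus-locked response survives. Below is a minimal Python sketch of that idea; the array shapes and names are illustrative assumptions, with random numbers standing in for the real recordings.

```python
import numpy as np

# Hypothetical data: for each category, a stack of single-trial EEG epochs,
# each aligned so that sample 0 is the moment the image appeared.
rng = np.random.default_rng(0)
categories = ["face", "house", "scenery", "weird"]
epochs = {c: rng.normal(size=(100, 300)) for c in categories}  # 100 trials x 300 samples

# Averaging across trials: noise that is not locked to image onset cancels,
# and the evoked potential for each category remains.
evoked = {c: trials.mean(axis=0) for c, trials in epochs.items()}

for c in categories:
    print(c, evoked[c][:5])  # first few samples of each category's average waveform
```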

Now the signals all
still look pretty similar.

Let’s take a closer look.

About a hundred milliseconds
after the image comes on,

we see a positive bump in all four cases,

and we call this the P100,
and what we think that is

is what happens in your brain
when you recognize an object.

But damn, look at
that signal for the face.

It looks different than the others.

There’s a negative dip
about 170 milliseconds

after the image comes on.

What could be going on here?

Research shows that our brain
has a lot of neurons that are dedicated

to recognizing human faces,

so this N170 spike could be
all those neurons

firing at once in the same location,

and we can detect that in the EEG.

So there are two takeaways here.

One, our eyes can’t really detect
the differences in patterns

without averaging out the noise,

and two, even after removing the noise,

our eyes can only pick up
the signals associated with faces.

So this is where we turn
to machine learning.

Now, our eyes are not very good
at picking up patterns in noisy data,

but machine learning algorithms
are designed to do just that,

so could we take a lot of pictures
and a lot of data

and feed it in and train a computer

to be able to interpret
what Christy is looking at in real time?

We’re trying to code the information
that’s coming out of her EEG

in real time

and predict what it is
that her eyes are looking at.
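The talk doesn't say which algorithm the decoder uses, so the sketch below shows one generic version of the recipe: collect labeled, onset-aligned epochs, train an off-the-shelf classifier, and feed each new epoch through it as it arrives. The scikit-learn logistic regression, the synthetic arrays, and the label coding are all assumptions for illustration, not Nathan's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical training set: one flattened, onset-aligned epoch per image,
# plus a label saying which category was on screen for that trial.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 300))   # 400 trials x 300 EEG samples (stand-in data)
y = rng.integers(0, 4, size=400)  # 0=face, 1=house, 2=scenery, 3=weird

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# At "mind-reading" time, each newly recorded epoch is classified on the spot.
new_epoch = rng.normal(size=(1, 300))
print("prediction:", clf.predict(new_epoch))
```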

And if it works, what we should see

is every time that she gets
a picture of scenery,

it should say scenery,
scenery, scenery, scenery.

A face – face, face, face, face,

but it’s not quite working that way,
is what we’re discovering.

(Laughter)

OK.

Director: So what’s going on here?
GG: We need a new career, I think.

(Laughter)

OK, so that was a massive failure.

But we’re still curious:
How far could we push this technology?

And we looked back at what we did.

We noticed that the data was coming
into our computer very quickly,

without any timing
of when the images came on,

and that’s the equivalent
of reading a very long sentence

without spaces between the words.

It would be hard to read,

but once we add the spaces,
individual words appear

and it becomes a lot more understandable.

But what if we cheat a little bit?

By using a sensor, we can tell
the computer when the image first appears.

That way, the brainwave stops being
a continuous stream of information,

and instead becomes
individual packets of meaning.
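In other words, the sensor turns the continuous recording into onset-aligned epochs: find each rising edge on the trigger channel and cut out a fixed-length window of EEG starting there. A rough sketch of that segmentation, assuming a single EEG channel, a simple threshold-crossing trigger, and a made-up sample rate:

```python
import numpy as np

EPOCH_LEN = 250  # samples to keep after each onset (1 second at an assumed 250 Hz)

def epochs_from_stream(eeg, trigger, threshold=0.5):
    """Cut a continuous EEG stream into one epoch per image onset.

    eeg     : 1-D array, the continuous recording
    trigger : 1-D array of the same length from the onset sensor;
              it jumps above `threshold` whenever a new image appears
    """
    above = trigger > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    return np.stack([eeg[i:i + EPOCH_LEN]
                     for i in onsets if i + EPOCH_LEN <= len(eeg)])
```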

Also, we’re going
to cheat a little bit more,

by limiting the categories to two.

Let’s see if we can do
some real-time mind-reading.

In this new experiment,

we’re going to constrain it
a little bit more

so that we know the onset of the image

and we’re going to limit
the categories to “face” or “scenery.”

Nathan: Face. Correct.

Scenery. Correct.

GG: So right now,
every time the image comes on,

we’re taking a picture
of the onset of the image

and decoding the EEG.

It’s getting them correct.

Nathan: Yes. Face. Correct.

GG: So there is information
in the EEG signal, which is cool.

We just had to align it
to the onset of the image.

Nathan: Scenery. Correct.

Face. Yeah.

GG: This means there is some
information there,

so if we know at what time
the picture came on,

we can tell what type of picture it was,

possibly, at least on average,
by looking at these evoked potentials.

Nathan: Exactly.
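One plausible way to decode a single trial from those averaged evoked potentials is template matching: compare the new onset-aligned epoch against each category's stored average and report the best correlation. This is only a guess at the approach, reusing the hypothetical `evoked` dictionary from the averaging sketch above.

```python
import numpy as np

def decode_by_template(epoch, evoked):
    """Guess the picture category for one onset-aligned epoch by checking
    which category's average evoked potential it correlates with best."""
    scores = {c: np.corrcoef(epoch, template)[0, 1]
              for c, template in evoked.items()}
    return max(scores, key=scores.get)
```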

GG: If you had told me at the beginning
of this project this was possible,

I would have said no way.

I literally did not think
we could do this.

Did our mind-reading
experiment really work?

Yes, but we had to do a lot of cheating.

It turns out you can find
some interesting things in the EEG,

for example if you’re
looking at someone’s face,

but it does have a lot of limitations.

Perhaps machine learning
will make huge strides,

and one day we will be able to decode
what’s going on in our thoughts.

But for now, the next time a company says
that they can harness your brainwaves

to be able to control devices,

it is your right, it is your duty
to be skeptical.
