How AI could become an extension of your mind
Arnav Kapur

Ever since computers were invented,

we’ve been trying to make them
smarter and more powerful.

From the abacus, to room-sized machines,

to desktops, to computers in our pockets.

And we are now designing
artificial intelligence to automate tasks

that would require human intelligence.

If you look at the history of computing,

we’ve always treated computers
as external devices

that compute and act on our behalf.

What I want to do is weave computing,
AI and the internet into us.

As part of human cognition,

freeing us to interact
with the world around us.

Integrate human and machine intelligence

right inside our own bodies to augment us,
instead of diminishing us or replacing us.

Could we combine what people do best,
such as creative and intuitive thinking,

with what computers do best,

such as processing information
and perfectly memorizing stuff?

Could this whole be better
than the sum of its parts?

We have a device
that could make that possible.

It’s called AlterEgo,
and it’s a wearable device

that gives you the experience
of a conversational AI

that lives inside your head,

that you can talk to,
much like talking to yourself internally.

We have a new prototype
that we’re showing here,

for the first time at TED,
and here’s how it works.

Normally, when we speak,

the brain sends neurosignals
through the nerves

to your internal speech systems,

to activate them and your vocal cords
to produce speech.

One of the most complex
cognitive and motor tasks

that we do as human beings.

Now, imagine talking to yourself

without vocalizing,
without moving your mouth,

without moving your jaw,

but by simply articulating
those words internally.

Thereby very subtly engaging
your internal speech systems,

such as your tongue
and back of your palate.

When that happens,

the brain sends extremely weak signals
to these internal speech systems.

AlterEgo has sensors,

embedded in a thin plastic,
flexible and transparent device

that sits on your neck
just like a sticker.

These sensors pick up
on these internal signals

sourced deep within the mouth cavity,

right from the surface of the skin.

An AI program running in the background

then tries to figure out
what the user’s trying to say.

It then feeds back an answer to the user

by means of bone conduction,

audio conducted through the skull
into the user’s inner ear,

that the user hears,

overlaid on top of the user’s
natural hearing of the environment,

without blocking it.

The combination of all these parts,
the input, the output and the AI,

gives a net subjective experience
of an interface inside your head

that you can talk to,
much like talking to yourself.

Just to be very clear, the device
does not record or read your thoughts.

It records deliberate information
that you want to communicate

through deliberate engagement
of your internal speech systems.

People don’t want to be read,
they want to write.

Which is why we designed the system

to deliberately record
from the peripheral nervous system.

Which is why the control
in all situations resides with the user.

I want to stop here for a second
and show you a live demo.

What I’m going to do is,
I’m going to ask Eric a question.

And he’s going to search
for that information

without vocalizing, without typing,
without moving his fingers,

without moving his mouth.

Simply by internally asking that question.

The AI will then figure out the answer
and feed it back to Eric,

through audio, through the device.

While you see a laptop
in front of him, he’s not using it.

Everything lives on the device.

All he needs is that sticker device
to interface with the AI and the internet.

So, Eric, what’s the weather
in Vancouver like, right now?

What you see on the screen

are the words that Eric
is speaking to himself right now.

This is happening in real time.

Eric: It’s 50 degrees
and rainy here in Vancouver.

Arnav Kapur: What happened is
that the AI sent the answer

through audio, through
the device, back to Eric.

What could the implications
of something like this be?

Imagine perfectly memorizing things,

where you perfectly record information
that you silently speak,

and then hear it back later when you want to,

internally searching for information,

crunching numbers at speeds computers do,

silently texting other people.

Suddenly becoming multilingual,

so that you internally
speak in one language,

and hear the translation
in your head in another.

The potential could be far-reaching.

There are millions of people
around the world

who struggle with using natural speech.

People with conditions such as ALS,
or Lou Gehrig’s disease,

stroke and oral cancer,

amongst many other conditions.

For them, communicating is
a painstakingly slow and tiring process.

This is Doug.

Doug was diagnosed with ALS
about 12 years ago

and has since lost the ability to speak.

Today, he uses an on-screen keyboard

where he types in individual letters
using his head movements.

And it takes several minutes
to communicate a single sentence.

So we went to Doug and asked him

what the first words were
that he’d like to say using our system.

Perhaps a greeting, like,
“Hello, how are you?”

Or indicate that he needed
help with something.

What Doug said that he wanted
to use our system for

is to reboot the old system he had,
because that old system kept on crashing.

(Laughter)

We never could have predicted that.

I’m going to show you a short clip of Doug
using our system for the first time.

(Voice) Reboot computer.

AK: What you just saw there

was Doug communicating or speaking
in real time for the first time

since he lost the ability to speak.

There are millions of people

who might be able to communicate
in real time like Doug,

with other people, with their friends
and with their families.

My hope is to be able to help them
express their thoughts and ideas.

I believe computing, AI and the internet

will disappear into us
as extensions of our cognition,

instead of being external
entities or adversaries,

amplifying human ingenuity,

giving us unimaginable abilities
and unlocking our true potential.

And perhaps even freeing us
to becoming better at being human.

Thank you so much.

(Applause)

Shoham Arad: Come over here.

OK.

I want to ask you a couple of questions,
they’re going to clear the stage.

I feel like this is amazing,
it’s innovative,

it’s creepy, it’s terrifying.

Can you tell us what I think …

I think there are some
uncomfortable feelings around this.

Tell us, is this reading your thoughts,

will it in five years,

is there a weaponized version of this,
what does it look like?

AK: So our first design principle,
before we started working on this,

was to not render ethics
as an afterthought.

So we wanted to bake ethics
right into the design.

We flipped the design.

Instead of reading
from the brain directly,

we’re reading from
the voluntary nervous system

that you deliberately have to engage
to communicate with the device,

while still bringing the benefits
of a thinking or a thought device.

The best of both worlds in a way.

SA: OK, I think people are going to have
a lot more questions for you.

Also, you said that it’s a sticker.

So right now it sits just right here?

Is that the final iteration,

or what do you hope
the final design will look like?

AK: Our goal is for the technology
to disappear completely.

SA: What does that mean?

AK: If you’re wearing it,
I shouldn’t be able to see it.

You don’t want technology on your face,
you want it in the background,

to augment you in the background.

So we have a sticker version
that conforms to the skin,

that looks like the skin,

but we’re trying to make
an even smaller version

that would sit right here.

SA: OK.

I feel like if anyone has any questions
they want to ask Arnav,

he’ll be here all week.

OK, thank you so much, Arnav.

AK: Thanks, Shoham.
