How AI could become an extension of your mind
Arnav Kapur

Ever since computers were invented,

we’ve been trying to make them
smarter and more powerful.

From the abacus, to room-sized machines,

to desktops, to computers in our pockets.

And we are now designing
artificial intelligence to automate tasks

that would otherwise require
human intelligence.

If you look at the history of computing,

we’ve always treated computers
as external devices

that compute and act on our behalf.

What I want to do is weave
computing, AI and the internet into us.

As part of human cognition,

freeing us to interact
with the world around us.

And to integrate human and machine intelligence

right inside our own bodies, to augment us
instead of diminishing or replacing us.

Could we combine what people do best,
such as creative and intuitive thinking,

with what computers do best,

such as processing information
and perfectly memorizing stuff?

Could this whole be better
than the sum of its parts?

We have a device
that could make that possible.

It’s called AlterEgo,
and it’s a wearable device

that gives you the experience
of a conversational AI

that lives inside your head,

that you can talk to
just as you talk to yourself internally.

We have a new prototype
that we’re showing here,

for the first time at TED,
and here’s how it works.

Normally, when we speak,

the brain sends neurosignals
through the nerves

to your internal speech systems,

to activate them and your vocal cords
to produce speech.

It’s one of the most complex
cognitive and motor tasks

that we perform as human beings.

Now, imagine talking to yourself

without vocalizing,
without moving your mouth,

without moving your jaw,

but by simply articulating
those words internally.

This very subtly engages
your internal speech systems,

such as your tongue
and the back of your palate.

When that happens,

the brain sends extremely weak signals
to these internal speech systems.

AlterEgo has sensors

embedded in a thin, flexible,
transparent plastic device

that sits on your neck
just like a sticker.

These sensors pick up
these internal signals,

sourced deep within the mouth cavity,

right from the surface of the skin.

An AI program running in the background

then tries to figure out
what the user’s trying to say.

It then feeds back an answer to the user

by means of bone conduction:

audio conducted through the skull
into the user’s inner ear,

which the user hears

overlaid on top of their
natural hearing of the environment,

without blocking it.

The combination of all these parts,
the input, the output and the AI,

gives the net subjective experience
of an interface inside your head

that you can talk to
much as you talk to yourself.
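A minimal sketch of that sense-decode-respond loop, in Python. This is purely illustrative: every function, signal shape and sampling rate here is an assumption made for the sketch, not the actual AlterEgo implementation.

```python
# Purely illustrative sketch of the AlterEgo-style loop described above:
# neck-surface sensors -> silent-speech decoder -> assistant -> bone conduction.
# Every name, shape and rate below is an assumption, not the real system.

import numpy as np

SAMPLE_RATE_HZ = 250   # assumed sensor sampling rate
WINDOW_SECONDS = 2.0   # assumed length of one decoding window
NUM_CHANNELS = 7       # assumed number of neck-surface sensor channels


def read_sensor_window() -> np.ndarray:
    """Stand-in for one window of neuromuscular signals read at the skin
    surface; here it is just synthetic noise at a plausibly tiny amplitude."""
    n = int(SAMPLE_RATE_HZ * WINDOW_SECONDS)
    return np.random.randn(NUM_CHANNELS, n) * 1e-5  # the signals are extremely weak


def decode_internal_speech(window: np.ndarray) -> str:
    """Stand-in for the AI model that maps subvocal signals to words.
    A real decoder would be a trained sequence model; this one is canned."""
    features = np.abs(window).mean(axis=1)  # toy feature: mean rectified amplitude
    assert features.shape == (NUM_CHANNELS,)
    return "what's the weather in Vancouver right now"


def answer_query(query: str) -> str:
    """Stand-in for the assistant that acts on the decoded query."""
    return "It's 50 degrees and rainy here in Vancouver."  # the answer from the demo


def play_bone_conduction(text: str) -> None:
    """Stand-in for speech audio routed through a bone-conduction transducer,
    overlaid on natural hearing without blocking it."""
    print(f"[bone conduction] {text}")


if __name__ == "__main__":
    window = read_sensor_window()           # input: subtle internal-speech signals
    query = decode_internal_speech(window)  # AI: infer what the user is saying
    reply = answer_query(query)             # compute or fetch the answer
    play_bone_conduction(reply)             # output: audio only the wearer hears
```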

Just to be very clear, the device
does not record or read your thoughts.

It records only the information
you want to communicate,

through deliberate engagement
of your internal speech systems.

People don’t want to be read,
they want to write.

That’s why we designed the system

to record only deliberate signals
from the peripheral nervous system,

and why the control
in all situations resides with the user.

I want to stop here for a second
and show you a live demo.

What I’m going to do is,
I’m going to ask Eric a question.

And he’s going to search
for that information

without vocalizing, without typing,
without moving his fingers,

without moving his mouth.

Simply by internally asking that question.

The AI will then figure out the answer
and feed it back to Eric,

through audio, through the device.

While you see a laptop
in front of him, he’s not using it.

Everything lives on the device.

All he needs is that sticker device
to interface with the AI and the internet.

So, Eric, what’s the weather
in Vancouver like, right now?

What you see on the screen

are the words that Eric
is speaking to himself right now.

This is happening in real time.

Eric: It’s 50 degrees
and rainy here in Vancouver.

Arnav Kapur: What happened is
that the AI sent the answer

through audio, through
the device, back to Eric.
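In terms of the sketch above, the exchange you just saw is one full pass through that loop. A hypothetical trace, assuming the stand-in functions from the earlier sketch are defined:

```python
# Hypothetical trace of the live demo, reusing the stand-ins defined in the
# earlier sketch (run in the same session). The laptop only displays the
# decoder's live transcription; processing is assumed to stay on the device.

window = read_sensor_window()            # Eric silently asks his question
query = decode_internal_speech(window)   # transcription appears on screen
print(f"[screen] {query}")
reply = answer_query(query)              # resolved without the laptop
play_bone_conduction(reply)              # only Eric hears the answer
```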

What could the implications
of something like this be?

Imagine perfectly memorizing things:

recording information
that you silently speak

and then hearing it back later,
whenever you want;

internally searching for information;

crunching numbers at the speeds
computers do;

silently texting other people.

Suddenly becoming multilingual,

so that you internally
speak in one language,

and hear the translation
in your head in another.

The potential could be far-reaching.

There are millions of people
around the world

who struggle with using natural speech.

People with conditions such as ALS,
or Lou Gehrig’s disease,

stroke and oral cancer,

among many others.

For them, communicating is
a painstakingly slow and tiring process.

This is Doug.

Doug was diagnosed with ALS
about 12 years ago

and has since lost the ability to speak.

Today, he uses an on-screen keyboard

where he types individual letters
using head movements.

It takes several minutes
to communicate a single sentence.

So we went to Doug and asked him

what he’d like his first words
using our system to be.

Perhaps a greeting, like,
“Hello, how are you?”

Or indicate that he needed
help with something.

What Doug said he wanted
to use our system for

was to reboot his old system,
because it kept on crashing.

(Laughter)

We never could have predicted that.

I’m going to show you a short clip of Doug
using our system for the first time.

(Voice) Reboot computer.

AK: What you just saw there

was Doug communicating or speaking
in real time for the first time

since he lost the ability to speak.

There are millions of people

who might be able to communicate
in real time like Doug,

with other people, with their friends
and with their families.

My hope is to be able to help them
express their thoughts and ideas.

I believe computing, AI and the internet

will disappear into us
as extensions of our cognition,

instead of being external
entities or adversaries,

amplifying human ingenuity,

giving us unimaginable abilities
and unlocking our true potential.

And perhaps even freeing us
to become better at being human.

Thank you so much.

(Applause)

Shoham Arad: Come over here.

OK.

I want to ask you a couple of questions
while they clear the stage.

I feel like this is amazing,
it’s innovative,

it’s creepy, it’s terrifying.

Can you tell us what I think …

I think there are some
uncomfortable feelings around this.

Tell us: is this reading your thoughts?

Will it in five years?

Is there a weaponized version of this?
What does it look like?

AK: So our first design principle,
before we started working on this,

was to not render ethics
as an afterthought.

So we wanted to bake ethics
right into the design.

We flipped the design.

Instead of reading
from the brain directly,

we’re reading from
the voluntary nervous system

that you deliberately have to engage
to communicate with the device,

while still bringing the benefits
of a thinking or a thought device.

The best of both worlds in a way.

SA: OK, I think people are going to have
a lot more questions for you.

Also, you said that it’s a sticker.

So right now it sits just right here?

Is that the final iteration,

or what you hope
the final design will look like?

AK: Our goal is for the technology
to disappear completely.

SA: What does that mean?

AK: If you’re wearing it,
I shouldn’t be able to see it.

You don’t want technology on your face,
you want it in the background,

to augment you in the background.

So we have a sticker version
that conforms to the skin,

that looks like the skin,

but we’re trying to make
an even smaller version

that would sit right here.

SA: OK.

I feel like if anyone has any questions
they want to ask Arnav,

he’ll be here all week.

OK, thank you so much, Arnav.

AK: Thanks, Shoham.