The incredible inventions of intuitive AI
Maurice Conti

Translator: Leslie Gauthier
Reviewer: Camille Martínez

How many of you are creatives,

designers, engineers,
entrepreneurs, artists,

or maybe you just have
a really big imagination?

Show of hands? (Cheers)

That’s most of you.

I have some news for us creatives.

Over the course of the next 20 years,

more will change around
the way we do our work

than has happened in the last 2,000.

In fact, I think we’re at the dawn
of a new age in human history.

Now, there have been four major historical
eras defined by the way we work.

The Hunter-Gatherer Age
lasted several million years.

And then the Agricultural Age
lasted several thousand years.

The Industrial Age lasted
a couple of centuries.

And now the Information Age
has lasted just a few decades.

And now today, we’re on the cusp
of our next great era as a species.

Welcome to the Augmented Age.

In this new era, your natural human
capabilities are going to be augmented

by computational systems
that help you think,

robotic systems that help you make,

and a digital nervous system

that connects you to the world
far beyond your natural senses.

Let’s start with cognitive augmentation.

How many of you are augmented cyborgs?

(Laughter)

I would actually argue
that we’re already augmented.

Imagine you’re at a party,

and somebody asks you a question
that you don’t know the answer to.

If you have one of these,
in a few seconds, you can know the answer.

But this is just a primitive beginning.

Even Siri is just a passive tool.

In fact, for the last
three-and-a-half million years,

the tools that we’ve had
have been completely passive.

They do exactly what we tell them
and nothing more.

Our very first tool only cut
where we struck it.

The chisel only carves
where the artist points it.

And even our most advanced tools
do nothing without our explicit direction.

In fact, to date, and this
is something that frustrates me,

we’ve always been limited

by this need to manually
push our wills into our tools –

like, manual,
literally using our hands,

even with computers.

But I’m more like Scotty in “Star Trek.”

(Laughter)

I want to have a conversation
with a computer.

I want to say, “Computer,
let’s design a car,”

and the computer shows me a car.

And I say, “No, more fast-looking,
and less German,”

and bang, the computer shows me an option.

(Laughter)

That conversation might be
a little ways off,

probably less than many of us think,

but right now,

we’re working on it.

Tools are making this leap
from being passive to being generative.

Generative design tools
use a computer and algorithms

to synthesize geometry

to come up with new designs
all by themselves.

All it needs are your goals
and your constraints.

I’ll give you an example.

In the case of this aerial drone chassis,

all you would need to do
is tell it something like,

it has four propellers,

you want it to be
as lightweight as possible,

and you need it to be
aerodynamically efficient.

Then what the computer does
is it explores the entire solution space:

every single possibility that solves
and meets your criteria –

millions of them.

It takes big computers to do this.
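The loop those big computers run can be pictured as a search over a parameterized solution space: sample candidate designs, discard the ones that violate your constraints, and rank the survivors against your goal. A minimal sketch in Python, with made-up design variables and a toy weight model (this is only an illustration of the idea, not any real generative-design tool's algorithm):

```python
import random

def generate_candidates(n, seed=0):
    """Randomly sample hypothetical drone-chassis parameters.

    Each candidate is a dict of design variables; a real
    generative-design tool would parameterize full 3D geometry.
    """
    rng = random.Random(seed)
    return [
        {
            "arm_length_cm": rng.uniform(5, 30),
            "wall_thickness_mm": rng.uniform(0.5, 5.0),
            "strut_count": rng.randint(3, 12),
        }
        for _ in range(n)
    ]

def weight(c):
    # Toy surrogate objective: weight grows with size,
    # wall thickness, and number of struts.
    return c["arm_length_cm"] * c["wall_thickness_mm"] * c["strut_count"] * 0.1

def meets_constraints(c):
    # Toy constraint: enough struts to mount four propellers.
    return c["strut_count"] >= 4

def generative_search(n=10000):
    """Explore the solution space; return the lightest feasible design."""
    feasible = [c for c in generate_candidates(n) if meets_constraints(c)]
    return min(feasible, key=weight)

best = generative_search()
print(best, weight(best))
```

Real tools replace the random sampling with evolutionary or gradient-based optimization over actual geometry, which is why the results can look so organic.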

But it comes back to us with designs

that we, by ourselves,
never could’ve imagined.

And the computer’s coming up
with this stuff all by itself –

no one ever drew anything,

and it started completely from scratch.

And by the way, it’s no accident

that the drone body looks just like
the pelvis of a flying squirrel.

(Laughter)

It’s because the algorithms
are designed to work

the same way evolution does.

What’s exciting is we’re starting
to see this technology

out in the real world.

We’ve been working with Airbus
for a couple of years

on this concept plane for the future.

It’s a ways out still.

But just recently we used
a generative-design AI

to come up with this.

This is a 3D-printed cabin partition
that’s been designed by a computer.

It’s stronger than the original
yet half the weight,

and it will be flying
in the Airbus A320 later this year.

So computers can now generate;

they can come up with their own solutions
to our well-defined problems.

But they’re not intuitive.

They still have to start from scratch
every single time,

and that’s because they never learn.

Unlike Maggie.

(Laughter)

Maggie’s actually smarter
than our most advanced design tools.

What do I mean by that?

If her owner picks up that leash,

Maggie knows with a fair
degree of certainty

it’s time to go for a walk.

And how did she learn?

Well, every time the owner picked up
the leash, they went for a walk.

And Maggie did three things:

she had to pay attention,

she had to remember what happened

and she had to retain and create
a pattern in her mind.

Interestingly, that’s exactly what

computer scientists
have been trying to get AIs to do

for the last 60 or so years.
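Maggie's three steps are, in miniature, the recipe for statistical learning: attend to an event, remember what followed, and retain the pattern as a prediction. A purely illustrative sketch (hypothetical class, not any real ML library's API):

```python
from collections import Counter

class AssociativeLearner:
    """Toy model of Maggie's three steps: pay attention,
    remember what happened, and retain a predictive pattern."""

    def __init__(self):
        self.seen = Counter()      # times each cue was observed
        self.followed = Counter()  # times the cue was followed by the outcome

    def observe(self, cue, outcome_happened):
        # Steps 1 and 2: pay attention and remember what happened.
        self.seen[cue] += 1
        if outcome_happened:
            self.followed[cue] += 1

    def confidence(self, cue):
        # Step 3: the retained pattern, as an empirical probability.
        if self.seen[cue] == 0:
            return 0.0
        return self.followed[cue] / self.seen[cue]

maggie = AssociativeLearner()
for _ in range(9):
    maggie.observe("leash", True)   # leash picked up, then a walk
maggie.observe("leash", False)      # once, no walk followed

print(maggie.confidence("leash"))   # -> 0.9
```

That "fair degree of certainty" is exactly the ratio the learner retains, which is why Maggie predicts the walk without ever being told the rule.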

Back in 1952,

they built this computer
that could play Tic-Tac-Toe.

Big deal.

Then 45 years later, in 1997,

Deep Blue beats Kasparov at chess.

2011, Watson beats these two
humans at Jeopardy,

which is much harder for a computer
to play than chess is.

In fact, rather than working
from predefined recipes,

Watson had to use reasoning
to overcome his human opponents.

And then a couple of weeks ago,

DeepMind’s AlphaGo beats
the world’s best human at Go,

which is the most difficult
game that we have.

In fact, in Go, there are more
possible moves

than there are atoms in the universe.

So in order to win,

what AlphaGo had to do
was develop intuition.

And in fact, at some points,
AlphaGo’s programmers didn’t understand

why AlphaGo was doing what it was doing.

And things are moving really fast.

I mean, consider –
in the space of a human lifetime,

computers have gone from a child’s game

to what’s recognized as the pinnacle
of strategic thought.

What’s basically happening

is computers are going
from being like Spock

to being a lot more like Kirk.

(Laughter)

Right? From pure logic to intuition.

Would you cross this bridge?

Most of you are saying, “Oh, hell no!”

(Laughter)

And you arrived at that decision
in a split second.

You just sort of knew
that bridge was unsafe.

And that’s exactly the kind of intuition

that our deep-learning systems
are starting to develop right now.

Very soon, you’ll literally be able

to show something you’ve made,
you’ve designed,

to a computer,

and it will look at it and say,

“Sorry, homie, that’ll never work.
You have to try again.”

Or you could ask it if people
are going to like your next song,

or your next flavor of ice cream.

Or, much more importantly,

you could work with a computer
to solve a problem

that we’ve never faced before.

For instance, climate change.

We’re not doing a very
good job on our own,

so we could certainly use
all the help we can get.

That’s what I’m talking about,

technology amplifying
our cognitive abilities

so we can imagine and design things
that were simply out of our reach

as plain old un-augmented humans.

So what about making
all of this crazy new stuff

that we’re going to invent and design?

I think the era of human augmentation
is as much about the physical world

as it is about the virtual,
intellectual realm.

How will technology augment us?

In the physical world, robotic systems.

OK, there’s certainly a fear

that robots are going to take
jobs away from humans,

and that is true in certain sectors.

But I’m much more interested in this idea

that humans and robots working together
are going to augment each other,

and start to inhabit a new space.

This is our applied research lab
in San Francisco,

where one of our areas of focus
is advanced robotics,

specifically, human-robot collaboration.

And this is Bishop, one of our robots.

As an experiment, we set it up

to help a person working in construction
doing repetitive tasks –

tasks like cutting out holes for outlets
or light switches in drywall.

(Laughter)

So, Bishop’s human partner
can tell it what to do in plain English

and with simple gestures,

kind of like talking to a dog,

and then Bishop executes
on those instructions

with perfect precision.

We’re using the human
for what the human is good at:

awareness, perception and decision making.

And we’re using the robot
for what it’s good at:

precision and repetitiveness.

Here’s another cool project
that Bishop worked on.

The goal of this project,
which we called the HIVE,

was to prototype the experience
of humans, computers and robots

all working together to solve
a highly complex design problem.

The humans acted as labor.

They cruised around the construction site,
they manipulated the bamboo –

which, by the way,
because it’s a non-isomorphic material,

is super hard for robots to deal with.

But then the robots
did this fiber winding,

which was almost impossible
for a human to do.

And then we had an AI
that was controlling everything.

It was telling the humans what to do,
telling the robots what to do

and keeping track of thousands
of individual components.

What’s interesting is,

building this pavilion
was simply not possible

without human, robot and AI
augmenting each other.

OK, I’ll share one more project.
This one’s a little bit crazy.

We’re working with Amsterdam-based artist
Joris Laarman and his team at MX3D

to generatively design
and robotically print

the world’s first autonomously
manufactured bridge.

So, Joris and an AI are designing
this thing right now, as we speak,

in Amsterdam.

And when they’re done,
we’re going to hit “Go,”

and robots will start 3D printing
in stainless steel,

and then they’re going to keep printing,
without human intervention,

until the bridge is finished.

So, as computers are going
to augment our ability

to imagine and design new stuff,

robotic systems are going to help us
build and make things

that we’ve never been able to make before.

But what about our ability
to sense and control these things?

What about a nervous system
for the things that we make?

Our nervous system,
the human nervous system,

tells us everything
that’s going on around us.

But the nervous system of the things
we make is rudimentary at best.

For instance, a car doesn’t tell
the city’s public works department

that it just hit a pothole at the corner
of Broadway and Morrison.

A building doesn’t tell its designers

whether or not the people inside
like being there,

and the toy manufacturer doesn’t know

if a toy is actually being played with –

how and where and whether
or not it’s any fun.

Look, I’m sure that the designers
imagined this lifestyle for Barbie

when they designed her.

(Laughter)

But what if it turns out that Barbie’s
actually really lonely?

(Laughter)

If the designers had known

what was really happening
in the real world

with their designs – the road,
the building, Barbie –

they could’ve used that knowledge
to create an experience

that was better for the user.

What’s missing is a nervous system

connecting us to all of the things
that we design, make and use.

What if all of you had that kind
of information flowing to you

from the things you create
in the real world?

With all of the stuff we make,

we spend a tremendous amount
of money and energy –

in fact, last year,
about two trillion dollars –

convincing people to buy
the things we’ve made.

But if you had this connection
to the things that you design and create

after they’re out in the real world,

after they’ve been sold
or launched or whatever,

we could actually change that,

and go from making people want our stuff,

to just making stuff that people
want in the first place.

The good news is, we’re working
on digital nervous systems

that connect us to the things we design.

We’re working on one project

with a couple of guys down in Los Angeles
called the Bandito Brothers

and their team.

And one of the things these guys do
is build insane cars

that do absolutely insane things.

These guys are crazy –

(Laughter)

in the best way.

And what we’re doing with them

is taking a traditional race-car chassis

and giving it a nervous system.

So we instrumented it
with dozens of sensors,

put a world-class driver behind the wheel,

took it out to the desert
and drove the hell out of it for a week.

And the car’s nervous system
captured everything

that was happening to the car.

We captured four billion data points;

all of the forces
that it was subjected to.

And then we did something crazy.

We took all of that data,

and plugged it into a generative-design AI
we call “Dreamcatcher.”

So what do you get when you give
a design tool a nervous system,

and you ask it to build you
the ultimate car chassis?

You get this.

This is something that a human
could never have designed.

Except a human did design this,

but it was a human that was augmented
by a generative-design AI,

a digital nervous system

and robots that can actually
fabricate something like this.

So if this is the future,
the Augmented Age,

and we’re going to be augmented
cognitively, physically and perceptually,

what will that look like?

What is this wonderland going to be like?

I think we’re going to see a world

where we’re moving
from things that are fabricated

to things that are farmed.

Where we’re moving from things
that are constructed

to that which is grown.

We’re going to move from being isolated

to being connected.

And we’ll move away from extraction

to embrace aggregation.

I also think we’ll shift
from craving obedience from our things

to valuing autonomy.

Thanks to our augmented capabilities,

our world is going to change dramatically.

We’re going to have a world
with more variety, more connectedness,

more dynamism, more complexity,

more adaptability and, of course,

more beauty.

The shape of things to come

will be unlike anything
we’ve ever seen before.

Why?

Because what will be shaping those things
is this new partnership

between technology, nature and humanity.

That, to me, is a future
well worth looking forward to.

Thank you all so much.

(Applause)
