Meet Spot, the robot dog that can run, hop and open doors | Marc Raibert

(Laughter)

(Laughter)

That’s SpotMini.

He’ll be back in a little while.

I –

(Applause)

I love building robots.

And my long-term goal is to build robots

that can do what people and animals do.

And there are three things in particular

that we’re interested in.

One is balance and dynamic mobility,

the second one is mobile manipulation,

and the third one is mobile perception.

So, dynamic mobility and balance –

I’m going to do a demo for you.

I’m standing here, balancing.

I can see you’re not very impressed.
OK, how about now?

(Laughter)

How about now?

(Applause)

Those simple capabilities mean that people
can go almost anywhere on earth,

on any kind of terrain.

We want to capture that for robots.

What about manipulation?

I’m holding this clicker in my hand;

I’m not even looking at it,

and I can manipulate it
without any problem.

But even more important,

I can move my body while I hold
the manipulator, the clicker,

and stabilize and coordinate my body,

and I can even walk around.

And that means
I can move around in the world

and expand the range
of my arms and my hands

and really be able to handle
almost anything.

So that’s mobile manipulation.

And all of you can do this.

Third is perception.

I’m looking at a room
with over 1,000 people in it,

and my amazing visual system
can see every one of you –

you’re all stable in space,

even when I move my head,

even when I move around.

That kind of mobile perception
is really important for robots

that are going to move and act

out in the world.

I’m going to give you
a little status report

on where we are in developing robots
toward these ends.

The first three robots are all
dynamically stabilized robots.

This one goes back
a little over 10 years –

“BigDog.”

It’s got a gyroscope
that helps stabilize it.

It’s got sensors and a control computer.

Here’s a Cheetah robot
that’s running with a galloping gait,

where it recycles its energy,

it bounces on the ground,

and it’s computing all the time

in order to keep itself
stabilized and propelled.

And here’s a bigger robot

that’s got such good
locomotion using its legs,

that it can go in deep snow.

This is about 10 inches deep,

and it doesn’t really have any trouble.

This is Spot, a new generation of robot –

just slightly older than the one
that came out onstage.

And we’ve been asking the question –

you’ve all heard about drone delivery:

Can we deliver packages
to your houses with drones?

Well, what about plain old
legged-robot delivery?

(Laughter)

So we’ve been taking our robot
to our employees' homes

to see whether we could get in –

(Laughter)

the various access ways.

And believe me, in the Boston area,

there’s every manner
of stairway twists and turns.

So it’s a real challenge.

But we’re doing very well,
about 70 percent of the way.

And here’s mobile manipulation,

where we’ve put an arm on the robot,

and it’s finding its way through the door.

Now, one of the important things
about making autonomous robots

is to make them not do
just exactly what you say,

but make them deal with the uncertainty
of what happens in the real world.

So we have Steve there,
one of the engineers,

giving the robot a hard time.

(Laughter)

And the fact that the programming
still tolerates all that disturbance –

it does what it’s supposed to.

Here’s another example,
where Eric is tugging on the robot

as it goes up the stairs.

And believe me,

getting it to do what it’s supposed to do
in those circumstances

is a real challenge,

but the result is something
that’s going to generalize

and make robots much more autonomous
than they would be otherwise.

This is Atlas, a humanoid robot.

It’s a third-generation humanoid
that we’ve been building.

I’ll tell you a little bit
about the hardware design later.

And we’ve been saying:

How close to human levels
of performance and speed could we get

in an ordinary task,

like moving boxes around on a conveyor?

We’re getting up to about two-thirds
of the speed at which a human operates,

on average.

And this robot is using both hands,
it’s using its body,

it’s stepping,

so it’s really an example
of dynamic stability,

mobile manipulation

and mobile perception.

Here –

(Laughter)

We actually have two Atlases.

(Laughter)

Now, not everything goes exactly
the way it’s supposed to.

(Laughter)

(Laughter)

(Laughter)

And here’s our latest robot,
called “Handle.”

Handle is interesting,
because it’s sort of half like an animal,

and it’s half something else

with these leg-like things and wheels.

It’s got its arms on
in kind of a funny way,

but it really does some remarkable things.

It can carry 100 pounds.

It’s probably going to lift
more than that,

but so far we’ve done 100.

It’s got some pretty good
rough-terrain capability,

even though it has wheels.

And Handle loves to put on a show.

(Laughter)

(Applause)

I’m going to give you
a little bit of robot religion.

A lot of people think that a robot
is a machine where there’s a computer

that’s telling it what to do,

and the computer is listening
through its sensors.

But that’s really only half of the story.

The real story is
that the computer is on one side,

making suggestions to the robot,

and on the other side
are the physics of the world.

And that physics involves gravity,
friction, bouncing into things.

In order to have a successful robot,

my religion is that you have to do
a holistic design,

where you’re designing the software,
the hardware and the behavior

all at one time,

and all these parts really intermesh
and cooperate with each other.

And when you get the perfect design,
you get a real harmony

between all those parts
interacting with each other.

So it’s half software and half hardware,

plus the behavior.

We’ve done some work lately
on the hardware, where we tried to go –

the picture on the left
is a conventional design,

where you have parts
that are all bolted together,

conductors, tubes, connectors.

And on the right
is a more integrated thing;

it’s supposed to look like
an anatomy drawing.

Using the miracle of 3-D printing,

we’re starting to build parts of robots

that look a lot more
like the anatomy of an animal.

So that’s an upper-leg part
that has hydraulic pathways –

actuators, filters –

all embedded, all printed as one piece,

and the whole structure is developed

with a knowledge of what the loads
and behavior are going to be,

which is available from data
recorded from robots

and simulations and things like that.

So it’s a data-driven hardware design.

And using processes like that,

not only the upper leg
but some other things,

we’ve gotten our robots to go from big,
behemoth, bulky, slow, bad robots –

that one on the right,
weighing almost 400 pounds –

down to the one in the middle,
which was just in the video

and weighs about 190 pounds,

just a little bit more than me,

and we have a new one, on the left,

which is working but I’m not
going to show it to you yet;

it weighs just 165 pounds,

with all the same
strength and capabilities.

So these things are really getting
better very quickly.

So it’s time for Spot to come back out,

and we’re going to demonstrate
a little bit of mobility,

dexterity and perception.

This is Seth Davis,
who’s my robot wrangler today,

and he’s giving Spot
some general direction

by steering it around,

but all the coordination
of the legs and the sensors

is done by the robot’s computers on board.

The robot can walk
with a number of different gaits;

it’s got a gyro –

or rather a solid-state gyro,

an IMU – on board.

Obviously, it’s got a battery,
and things like that.

One of the cool things
about a legged robot is,

it’s omnidirectional.

In addition to going forward,
it can go sideways,

it can turn in place.

And this robot
is a little bit of a show-off.

It loves to use its dynamic gaits,

like running –

(Laughter)

And it’s got one more.

(Laughter)

Now if it were really a show-off,
it would be hopping on one foot,

but, you know.

Now, Spot has a set of cameras
here, stereo cameras,

and we have a feed up in the center.

It’s kind of dark out in the audience,

but it’s going to use those cameras
in order to look at the terrain

right in front of it,

while it goes over
these obstacles back here.

For this demo, Seth is steering,

but the robot’s doing
all its own terrain planning.

This is a terrain map,

where the data from the cameras
is being processed in real time,

showing the red spots,
which are where it doesn’t want to step,

and the green spots are the good places.
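That red/green terrain labeling can be sketched in a few lines of code: treat the camera data as a grid of heights, and mark a cell green only if its local slope and roughness are small. This is a toy illustration with made-up thresholds and grid resolution, not Boston Dynamics’ actual planner.

```python
import numpy as np

def classify_terrain(heights, cell=0.05, max_slope=0.35, max_rough=0.02):
    """Label each cell of a height map as steppable or not.

    heights   -- 2-D array of terrain heights in meters, one per grid cell
    cell      -- grid resolution in meters (hypothetical value)
    max_slope -- reject cells steeper than this rise-over-run
    max_rough -- reject cells whose 3x3-neighborhood height spread (m)
                 exceeds this
    Returns a boolean array: True = "green" (good footstep), False = "red".
    Note: np.roll wraps at the borders, so edge cells are unreliable.
    """
    gy, gx = np.gradient(heights, cell)      # slope along rows and columns
    slope = np.hypot(gx, gy)
    # Roughness: largest height difference to any of the 8 neighbors.
    rough = np.zeros_like(heights)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            shifted = np.roll(np.roll(heights, dy, 0), dx, 1)
            rough = np.maximum(rough, np.abs(shifted - heights))
    return (slope < max_slope) & (rough < max_rough)

# Flat ground with one sharp 15 cm ledge down the middle:
# the flat interior comes out green, the ledge cells come out red.
h = np.zeros((10, 10))
h[:, 5:] = 0.15
safe = classify_terrain(h)
```

A real footstep planner would then search only the green cells for places to put the feet, which is exactly the stepping-stone behavior in the demo.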

And here it’s treating
them like stepping-stones.

So it’s trying to stay up on the blocks,

and it adjusts its stride,

and there’s a ton of planning

that has to go into
an operation like that,

and it does all
that planning in real time,

making each step a little bit longer

or a little bit shorter.

Now we’re going to change it
into a different mode,

where it’s just going to treat
the blocks like terrain

and decide whether to step up or down

as it goes.

So this is using dynamic balance

and mobile perception,

because it has to coordinate what it sees
along with how it’s moving.

The other thing Spot has is a robot arm.

Some of you may see that
as a head and a neck,

but believe me, it’s an arm.

Seth is driving it around.

He’s actually driving the hand
and the body is following.

So the two are coordinated
in the way I was talking about before –

in the way people can do that.

In fact, one of the cool things
Spot can do is what we call “chicken-head mode,”

and it keeps its head
in one place in space,

and it moves its body all around.
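Chicken-head mode boils down to a compensation loop: whatever the body does, the arm is commanded to do the opposite, so the hand stays pinned to one point in the world. Here is a toy planar sketch of that idea; the function name, the 2-D model and the numbers are all invented for illustration, and the real robot solves full inverse kinematics.

```python
import numpy as np

def hold_head_still(body_pose, head_target):
    """Arm offset, expressed in the body frame, that keeps the head at a
    fixed world-frame point while the body translates and yaws.
    body_pose   -- (x, y, theta): body position and yaw in the world
    head_target -- (x, y): where the head should stay, in the world
    (Toy 2-D model; names are illustrative, not Spot's API.)
    """
    x, y, th = body_pose
    # Vector from the body to the fixed head target, in the world frame ...
    dx, dy = head_target[0] - x, head_target[1] - y
    # ... rotated into the body frame, which cancels the body's own motion.
    c, s = np.cos(th), np.sin(th)
    return np.array([c * dx + s * dy, -s * dx + c * dy])

# However the body wanders, mapping the arm offset back through the
# current body pose always lands on the same world point.
target = np.array([1.0, 0.0])
head_positions = []
for x, y, th in [(0.0, 0.0, 0.0), (0.1, -0.2, 0.3), (-0.3, 0.1, -0.5)]:
    off = hold_head_still((x, y, th), target)
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    head_positions.append(np.array([x, y]) + R @ off)
```

Run the opposite direction, with the hand fixed and the body free, and you get the twerking variation.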

There’s a variation of this
that’s called “twerking” –

(Laughter)

but we’re not going to use that today.

(Laughter)

So, Spot: I’m feeling a little thirsty.
Could you get me a soda?

For this demo,
Seth is not doing any driving.

We have a LIDAR on the back of the robot,

and it’s using these props
we’ve put on the stage

to localize itself.
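Localizing against known props can be sketched as a two-landmark fit: if the robot measures where two props are in its own frame, and it knows where those props really are in the world, its pose follows directly. A minimal 2-D sketch with invented coordinates (a real system matches whole LIDAR scans, not two points):

```python
import numpy as np

def localize(world_marks, robot_marks):
    """Solve world = R(theta) @ robot + t exactly from two landmarks whose
    true world positions are known (e.g. props placed on the stage).
    world_marks, robot_marks -- 2x2 arrays, one landmark per row.
    Returns the robot pose (x, y, theta) in the world frame.
    """
    # Heading: difference between the landmark baseline's direction
    # in the world frame and in the robot frame.
    dw = world_marks[1] - world_marks[0]
    dr = robot_marks[1] - robot_marks[0]
    theta = np.arctan2(dw[1], dw[0]) - np.arctan2(dr[1], dr[0])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = world_marks[0] - R @ robot_marks[0]   # position follows directly
    return t[0], t[1], theta

# Ground truth: robot at (2, 1) facing 0.5 rad; two props at known spots.
props = np.array([[0.0, 0.0], [3.0, 4.0]])
c, s = np.cos(0.5), np.sin(0.5)
R = np.array([[c, -s], [s, c]])
# What the robot's sensors would report: the props in its own frame.
measured = (props - np.array([2.0, 1.0])) @ R
x, y, theta = localize(props, measured)
```

With the pose recovered this way, the robot can follow a path given in stage coordinates, which is what lets it walk to the cup without anyone driving.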

It’s gone over to that location.

Now it’s using a camera that’s in its hand

to find the cup,

picks it up –

and again, Seth’s not driving.

We’ve planned out a path for it to go –

it looked like it was
going off the path –

and now Seth’s going
to take over control again,

because I’m a little bit chicken
about having it do this by itself.

Thank you, Spot.

(Applause)

So, Spot:

How do you feel about having just finished
your TED performance?

(Laughter)

Me, too!

(Laughter)

Thank you all,

and thanks to the team at Boston Dynamics,

who did all the hard work behind this.

(Applause)

Helen Walters: Marc,
come back to the middle.

Thank you so much.

Come over here, I have questions.

So, you mentioned the UPS
and the package delivery.

What are the other applications
that you see for your robots?

Marc Raibert: You know,
I think that robots

that have the capabilities
I’ve been talking about

are going to be incredibly useful.

About a year ago, I went to Fukushima

to see what the situation was there,

and there’s just a huge need

for machines that can go
into some of the dirty places

and help remediate that.

I think it won’t be too long until
we have robots like this in our homes,

and one of the big needs
is to take care of the aging

and the infirm.

I think that it won’t be too long
till we’re using robots

to help take care of our parents,

or probably more likely,
have our children help take care of us.

And there’s a bunch of other things.

I think the sky’s the limit.

Many of the ideas
we haven’t thought of yet,

and people like you will help us
think of new applications.

HW: So what about the dark side?

What about the military?

Are they interested?

MR: Sure, the military has been
a big funder of robotics.

I don’t think the military
is the dark side myself,

but I think, as with all
advanced technology,

it can be used for all kinds of things.

HW: Awesome. Thank you so much.

MR: OK, you’re welcome.

Thank you.

(Applause)
