The Future of Flying Robots
Vijay Kumar, TED Talk

In my lab, we build
autonomous aerial robots

like the one you see flying here.

Unlike the commercially available drones
that you can buy today,

this robot doesn’t have any GPS on board.

So without GPS,

it’s hard for robots like this
to determine their position.

This robot uses onboard sensors,
cameras and laser scanners,

to scan the environment.

It detects features from the environment,

and it determines where it is
relative to those features,

using a method of triangulation.
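
To make that concrete, here is a minimal sketch of that localization step, assuming idealized 2D range measurements to features at known map positions (the feature positions and numbers are made up; the real robot fuses camera and laser-scanner data in a full SLAM pipeline):

    import numpy as np

    # Known feature positions (from the map) and measured ranges to them.
    landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
    true_pos = np.array([1.0, 1.0])
    ranges = np.linalg.norm(landmarks - true_pos, axis=1)  # simulated sensing

    # Linearize by subtracting the first range equation from the others:
    # 2*(pi - p0).p = r0^2 - ri^2 + |pi|^2 - |p0|^2, then solve least squares.
    A = 2.0 * (landmarks[1:] - landmarks[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(landmarks[1:] ** 2, axis=1) - np.sum(landmarks[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pos)  # recovers roughly [1.0, 1.0]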

And then it can assemble
all these features into a map,

like you see behind me.

And this map then allows the robot
to understand where the obstacles are

and navigate in a collision-free manner.
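
Conceptually, that map works like an occupancy grid: cells the scanner marks as occupied are obstacles, and the robot only plans through free cells. A toy sketch, with a made-up grid and wall (not the lab's actual planner):

    import numpy as np

    resolution = 0.05                         # meters per cell (5 cm)
    grid = np.zeros((200, 200), dtype=bool)   # 10 m x 10 m, True = obstacle
    grid[80:120, 100] = True                  # a wall segment seen by the scanner

    def is_free(x, y):
        """Check whether a metric position falls in an unoccupied cell."""
        i, j = int(y / resolution), int(x / resolution)
        return 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1] and not grid[i, j]

    print(is_free(2.0, 2.0))   # True: open space
    print(is_free(5.0, 5.0))   # False: inside the wall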

What I want to show you next

is a set of experiments
we did inside our laboratory,

where this robot was able
to go for longer distances.

So here you’ll see, on the top right,
what the robot sees with the camera.

And on the main screen –

and of course this is sped up
by a factor of four –

on the main screen you’ll see
the map that it’s building.

So this is a high-resolution map
of the corridor around our laboratory.

And in a minute
you’ll see it enter our lab,

which is recognizable
by the clutter that you see.

(Laughter)

But the main point I want to convey to you

is that these robots are capable
of building high-resolution maps

at five-centimeter resolution,

allowing somebody who is outside the lab,
or outside the building,

to deploy these robots
without actually going inside

and to infer
what happens inside the building.

Now there are a couple of problems
with robots like this.

The first problem is that it’s pretty big.

Because it’s big, it’s heavy.

And these robots consume
about 100 watts per pound.

And this makes for
a very short mission life.
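
To see why 100 watts per pound means short missions, the back-of-the-envelope arithmetic looks like this (the 4-pound weight and 100 Wh battery are illustrative assumptions, not the robot's actual specs):

    watts_per_pound = 100    # the figure quoted above
    weight_lb = 4            # assumed weight for a robot this size
    battery_wh = 100         # assumed onboard battery energy

    power_w = watts_per_pound * weight_lb    # 400 W draw
    minutes = battery_wh / power_w * 60      # 0.25 h of flight
    print(f"{minutes:.0f} minutes")          # ~15 minutes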

The second problem

is that these robots have onboard sensors
that end up being very expensive –

a laser scanner, a camera
and the processors.

That drives up the cost of this robot.

So we asked ourselves a question:

what consumer product
can you buy in an electronics store

that is inexpensive, that’s lightweight,
that has sensing onboard and computation?

And we invented the flying phone.

(Laughter)

So this robot uses a Samsung Galaxy
smartphone that you can buy off the shelf,

and all you need is an app that you
can download from our app store.

And you can see this robot
reading the letters, “TED” in this case,

looking at the corners
of the “T” and the “E”

and then triangulating off of that,
flying autonomously.

That joystick is just there
to make sure if the robot goes crazy,

Giuseppe can kill it.

(Laughter)

In addition to building
these small robots,

we also experiment with aggressive
behaviors, like you see here.

So this robot is now traveling
at two to three meters per second,

pitching and rolling aggressively
as it changes direction.

The main point is we can have
smaller robots that can go faster

and then travel in these
very unstructured environments.

And in this next video,

just like you see this bird, an eagle,
gracefully coordinating its wings,

its eyes and feet
to grab prey out of the water,

our robot can go fishing, too.

(Laughter)

In this case, this is a Philly cheesesteak
hoagie that it’s grabbing out of thin air.

(Laughter)

So you can see this robot
going at about three meters per second,

which is faster than walking speed,
coordinating its arms, its claws

and its flight with split-second timing
to achieve this maneuver.

In another experiment,

I want to show you
how the robot adapts its flight

to control its suspended payload,

whose length is actually larger
than the width of the window.

So in order to accomplish this,

it actually has to pitch
and adjust the altitude

and swing the payload through.

But of course we want
to make these even smaller,

and we’re inspired
in particular by honeybees.

So if you look at honeybees,
and this is a slowed down video,

they’re so small,
their inertia is so low –

(Laughter)

that they don’t care –
they bounce off my hand, for example.

This is a little robot
that mimics the honeybee behavior.

And smaller is better,

because along with the small size
you get lower inertia.

Along with lower inertia –

(Robot buzzing, laughter)

along with lower inertia,
you’re resistant to collisions.

And that makes you more robust.

So just like these honeybees,
we build small robots.

And this particular one
is only 25 grams in weight.

It consumes only six watts of power.

And it can travel
up to six meters per second.

So if I normalize that to its size,

it’s like a Boeing 787 traveling
ten times the speed of sound.

(Laughter)
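
The normalization behind that joke works out as follows (the 10 cm robot size is an assumption; a 787-8 is roughly 57 m long and the speed of sound roughly 343 m/s at sea level):

    robot_speed = 6.0        # m/s, the top speed quoted above
    robot_size = 0.1         # m, assumed tip-to-tip span of the small robot
    b787_length = 57.0       # m, approximate length of a 787-8
    speed_of_sound = 343.0   # m/s at sea level

    body_lengths_per_s = robot_speed / robot_size        # 60 lengths/s
    equivalent_speed = body_lengths_per_s * b787_length  # ~3,400 m/s
    print(equivalent_speed / speed_of_sound)             # ~10x the speed of sound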

And I want to show you an example.

This is probably the first planned mid-air
collision, at one-twentieth normal speed.

These are going at a relative speed
of two meters per second,

and this illustrates the basic principle.

The two-gram carbon fiber cage around it
prevents the propellers from entangling,

but essentially the collision is absorbed
and the robot responds to the collisions.

And so small also means safe.

In my lab, as we developed these robots,

we started off with these big robots

and now we’re down
to these small robots.

And if you plot a histogram
of the number of Band-Aids we’ve ordered

over the years, it sort of tails off now.

Because these robots are really safe.

The small size has some disadvantages,

and nature has found a number of ways
to compensate for these disadvantages.

The basic idea is they aggregate
to form large groups, or swarms.

So, similarly, in our lab,
we try to create artificial robot swarms.

And this is quite challenging

because now you have to think
about networks of robots.

And within each robot,

you have to think about the interplay
of sensing, communication, computation –

and this network then becomes
quite difficult to control and manage.

So from nature we take away
three organizing principles

that essentially allow us
to develop our algorithms.

The first idea is that robots
need to be aware of their neighbors.

They need to be able to sense
and communicate with their neighbors.

So this video illustrates the basic idea.

You have four robots –

one of the robots has actually been
hijacked by a human operator, literally.

But because the robots
interact with each other,

they sense their neighbors,

they essentially follow.

And here there’s a single person
able to lead this network of followers.

So again, it’s not because all the robots
know where they’re supposed to go.

It’s because they’re just reacting
to the positions of their neighbors.

(Laughter)
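
A minimal sketch of this first principle, using a simple consensus-style rule rather than our actual controller: each follower only steps toward the average position of the robots it senses, so dragging one robot drags the whole network.

    import numpy as np

    # Four robots in the plane; robot 0 is "hijacked" by the operator.
    pos = np.random.rand(4, 2)
    gain = 0.2

    for step in range(200):
        pos[0] += np.array([0.01, 0.0])            # the operator drags robot 0
        for i in range(1, 4):
            neighbors = np.delete(pos, i, axis=0)  # the robots it can sense
            pos[i] += gain * (neighbors.mean(axis=0) - pos[i])

    print(pos)  # the followers end up trailing robot 0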

So the next experiment illustrates
the second organizing principle.

And this principle has to do
with anonymity.

Here the key idea is that

the robots are agnostic
to the identities of their neighbors.

They’re asked to form a circular shape,

and no matter how many robots
you introduce into the formation,

or how many robots you pull out,

each robot is simply
reacting to its neighbor.

It’s aware of the fact that it needs
to form the circular shape,

but by collaborating with its neighbors

it forms the shape
without central coordination.

Now if you put these ideas together,

the third idea is that we
essentially give these robots

mathematical descriptions
of the shape they need to execute.

And these shapes can vary
as a function of time,

and you’ll see these robots
start from a circular formation,

change into a rectangular formation,
stretch into a straight line,

back into an ellipse.

And they do this with the same
kind of split-second coordination

that you see in natural swarms.
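
Here is an illustrative sketch of the second and third principles together (not the exact algorithm): every robot receives the same parametric description of a time-varying shape, heads for the nearest point on it rather than a slot tied to its identity, and a little mutual repulsion spreads the robots out along the curve.

    import numpy as np

    def shape_point(theta, t):
        """Time-varying target curve: a circle stretching into an ellipse."""
        a = 1.0 + 0.5 * t                   # semi-major axis grows with time
        return np.array([a * np.cos(theta), np.sin(theta)])

    n = 8
    pos = np.random.rand(n, 2) * 2 - 1      # random starting positions
    thetas = np.linspace(0, 2 * np.pi, 100)

    for t in np.linspace(0, 1, 50):
        targets = np.array([shape_point(th, t) for th in thetas])
        for i in range(n):
            # Anonymity: head for the nearest point on the curve,
            # not a point assigned to this robot's identity.
            nearest = targets[np.argmin(np.linalg.norm(targets - pos[i], axis=1))]
            pos[i] += 0.3 * (nearest - pos[i])
            # Repulsion from close neighbors spreads robots along the curve.
            for j in range(n):
                if j != i:
                    d = pos[i] - pos[j]
                    dist = np.linalg.norm(d)
                    if dist < 0.3:
                        pos[i] += 0.05 * d / (dist + 1e-6)

    print(pos)  # robots end up spread along the final ellipse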

So why work with swarms?

Let me tell you about two applications
that we are very interested in.

The first one has to do with agriculture,

which is probably the biggest problem
that we’re facing worldwide.

As you well know,

one in every seven persons
on this earth is malnourished.

Most of the land that we can cultivate
has already been cultivated.

And the efficiency of most systems
in the world is improving,

but our production system
efficiency is actually declining.

And that’s mostly because of water
shortage, crop diseases, climate change

and a couple of other things.

So what can robots do?

Well, we adopt an approach that’s
called Precision Farming in the community.

And the basic idea is that we fly
aerial robots through orchards,

and then we build
precision models of individual plants.

So just like personalized medicine,

where you might imagine wanting
to treat every patient individually,

what we’d like to do is build
models of individual plants

and then tell the farmer
what kind of inputs every plant needs –

the inputs in this case being water,
fertilizer and pesticide.

Here you’ll see a robot
traveling through an apple orchard,

and in a minute you’ll see
two of its companions

doing the same thing on the left side.

And what they’re doing is essentially
building a map of the orchard.

Within the map is a map
of every plant in this orchard.

(Robot buzzing)

Let’s see what those maps look like.

In the next video, you’ll see the cameras
that are being used on this robot.

On the top-left is essentially
a standard color camera.

On the left-center is an infrared camera.

And on the bottom-left
is a thermal camera.

And on the main panel, you’re seeing
a three-dimensional reconstruction

of every tree in the orchard
as the sensors fly right past the trees.

Armed with information like this,
we can do several things.

The first and possibly the most important
thing we can do is very simple:

count the number of fruits on every tree.

By doing this, you tell the farmer
how many fruits she has on every tree

and allow her to estimate
the yield in the orchard,

optimizing the production
chain downstream.

The second thing we can do

is to take models of plants, construct
three-dimensional reconstructions,

and from that estimate the canopy size,

and then correlate the canopy size
to the amount of leaf area on every plant.

And this is called the leaf area index.
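
For reference, the leaf area index is simply the ratio

    LAI = (total one-sided leaf area) / (ground area under the canopy),

so an LAI of 3 means the canopy holds three square meters of leaf for every square meter of ground it covers.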

So if you know this leaf area index,

you essentially have a measure of how much
photosynthesis is possible in every plant,

which again tells you
how healthy each plant is.

By combining visual
and infrared information,

we can also compute indices such as NDVI.
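
NDVI, the normalized difference vegetation index, is computed per pixel from the near-infrared and red bands, exploiting the fact that healthy vegetation reflects far more near-infrared light than red. A minimal computation (the band arrays here are placeholders; in practice they come from the registered camera images):

    import numpy as np

    def ndvi(nir, red):
        """Normalized difference vegetation index, per pixel, in [-1, 1]."""
        nir = nir.astype(float)
        red = red.astype(float)
        return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

    nir = np.array([[0.60, 0.50], [0.20, 0.10]])   # placeholder NIR band
    red = np.array([[0.10, 0.10], [0.15, 0.09]])   # placeholder red band
    print(ndvi(nir, red))  # high values = healthy canopy, low = stressed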

And in this particular case,
you can essentially see

there are some crops that are
not doing as well as other crops.

This is easily discernible from imagery,

not just visual imagery, but by combining

both visual and infrared imagery.

And then lastly,

one thing we’re interested in doing is
detecting the early onset of chlorosis –

and this is an orange tree –

which essentially shows up
as a yellowing of the leaves.

But robots flying overhead
can easily spot this autonomously

and then report to the farmer
that he or she has a problem

in this section of the orchard.
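
One simple way to flag that yellowing automatically is a color threshold in HSV space; here is a sketch with OpenCV, where the hue band for "yellow" and the alarm threshold are assumptions that would need tuning per crop:

    import cv2

    def yellowing_fraction(bgr_image):
        """Fraction of pixels in the yellow hue band, a crude chlorosis cue."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # OpenCV hue runs 0-179; roughly 20-35 covers yellows (assumed band).
        mask = cv2.inRange(hsv, (20, 60, 60), (35, 255, 255))
        return mask.mean() / 255.0

    # img = cv2.imread("orchard_tile.png")    # hypothetical image tile
    # if yellowing_fraction(img) > 0.15:      # assumed alarm threshold
    #     print("possible chlorosis in this section")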

Systems like this can really help,

and we’re projecting yields
that can improve by about ten percent

and, more importantly, decrease
the amount of inputs such as water

by 25 percent by using
aerial robot swarms.

Lastly, I want you to applaud
the people who actually create the future,

Yash Mulgaonkar, Sikang Liu
and Giuseppe Loianno,

who are responsible for the three
demonstrations that you saw.

Thank you.

(Applause)
