How do self-driving cars “see”? - Sajan Saini

It’s late, pitch dark, and a self-driving
car winds down a narrow country road.

Suddenly, three hazards appear
at the same time.

What happens next?

Before it can navigate this
onslaught of obstacles,

the car has to detect them—

gleaning enough information about
their size, shape, and position,

so that its control algorithms
can plot the safest course.

With no human at the wheel,

the car needs smart eyes, sensors
that’ll resolve these details—

no matter the environment,
weather, or how dark it is—

all in a split second.

That’s a tall order, but there’s a
solution that partners two things:

a special kind of laser-based probe
called LIDAR,

and a miniature version of
the communications technology

that keeps the internet humming,
called integrated photonics.

To understand LIDAR, it helps to start
with a related technology: radar.

In aviation,

radar antennas launch pulses of radio
waves or microwaves at planes

to learn their locations by timing
how long the beams take to bounce back.
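
That timing trick is simple arithmetic: distance is the speed of light times the round-trip time, divided by two. Here is a minimal sketch of that conversion; the echo time and function name are made up for illustration.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_s: float) -> float:
    """Distance to a target, from a pulse's round-trip time."""
    return C * round_trip_s / 2.0

# An echo returning after 67 microseconds puts a plane roughly 10 km away.
print(range_from_echo(67e-6))  # ~10043 m
```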

That’s a limited way of seeing, though,

because the large beam size
can’t resolve fine details.

In contrast, a self-driving car’s
LIDAR system,

which stands for Light Detection
and Ranging,

uses a narrow invisible infrared laser.

It can image features as small as the
button on a pedestrian’s shirt

across the street.

But how do we determine the shape,
or depth, of these features?

LIDAR fires a train of super-short laser
pulses to give depth resolution.

Take the moose on the country road.

As the car drives by, one LIDAR pulse
scatters off the base of its antlers,

while the next may travel to the tip
of one antler before bouncing back.

Measuring how much longer
the second pulse takes to return

provides data about the antler’s shape.

With a lot of short pulses, a LIDAR system
quickly renders a detailed profile.
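
To make the antler example concrete: the extra depth is just the extra round-trip time of the second pulse, again scaled by half the speed of light. This is a sketch with illustrative timing values, not measured ones.

```python
C = 299_792_458.0  # speed of light, m/s

def depth_difference(t_base_s: float, t_tip_s: float) -> float:
    """Extra depth implied by the tip pulse's longer round trip."""
    return C * (t_tip_s - t_base_s) / 2.0

# If the tip's pulse returns 3 nanoseconds after the base's,
# the antler tip sits about 45 cm farther from the car.
print(depth_difference(2.00e-7, 2.03e-7))  # ~0.45 m
```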

The most obvious way to create a pulse
of light is to switch a laser on and off.

But this makes a laser unstable and
affects the precise timing of its pulses,

which limits depth resolution.

Better to leave it on,

and use something else to periodically
block the light reliably and rapidly.

That’s where integrated photonics comes in.

The digital data of the internet

is carried by precision-timed
pulses of light,

some as short as a hundred picoseconds.

One way to create these pulses is
with a Mach-Zehnder modulator.

This device takes advantage of a
particular wave property,

called interference.

Imagine dropping pebbles into a pond:

as the ripples spread and overlap,
a pattern forms.

In some places, wave peaks add
up to become very large;

in other places, they completely
cancel out.

The Mach-Zehnder modulator
does something similar.

It splits waves of light along two
parallel arms and eventually rejoins them.

If the light is slowed down and
delayed in one arm,

the waves recombine out of sync and
cancel, blocking the light.

By toggling this delay in one arm,

the modulator acts like an on/off switch,
emitting pulses of light.
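
A simple way to model this, assuming an idealized device rather than any real product’s spec: when the two arms recombine, the fraction of light that gets through is cos²(Δφ/2), where Δφ is the phase delay applied to one arm. No delay passes everything; a half-wave delay cancels everything; a partial delay, as we’ll see shortly, only dims the light.

```python
import math

def mzm_transmission(delta_phi_rad: float) -> float:
    """Fraction of light surviving recombination of the two arms
    of an idealized Mach-Zehnder modulator."""
    return math.cos(delta_phi_rad / 2.0) ** 2

print(mzm_transmission(0.0))          # 1.0: arms in sync, light passes
print(mzm_transmission(math.pi))      # ~0.0: out of sync, light cancels
print(mzm_transmission(math.pi / 2))  # 0.5: a partial delay only dims it
```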

A light pulse lasting a hundred
picoseconds

leads to a depth resolution of a
few centimeters,

but tomorrow’s cars will need
to see better than that.

By pairing the modulator with a super-
sensitive, fast-acting light detector,

the resolution can be refined
to a millimeter.
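
Those numbers check out on the back of an envelope: depth resolution is the timing resolution times half the speed of light, so a 100-picosecond pulse gives about 1.5 centimeters, and millimeter resolution means pinning down round trips to a handful of picoseconds, which is what the fast detector buys.

```python
C = 299_792_458.0  # speed of light, m/s

def depth_resolution(timing_s: float) -> float:
    """Smallest resolvable depth step for a given timing resolution."""
    return C * timing_s / 2.0

print(depth_resolution(100e-12))  # ~0.015 m: 100 ps -> a few centimeters
print(2 * 1e-3 / C)               # ~6.7e-12 s: millimeters need ~7 ps timing
```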

That’s more than a hundred times better

than what we can make out with
20/20 vision, from across a street.

The first generation of automobile LIDAR
has relied on complex spinning assemblies

that scan from rooftops or hoods.

With integrated photonics,

modulators and detectors are being shrunk
to less than a tenth of a millimeter,

and packed into tiny chips that’ll one
day fit inside a car’s lights.

These chips will also include a clever
variation on the modulator

to help do away with moving parts
and scan at rapid speeds.

By slowing the light in a modulator
arm only a tiny bit,

this additional device will act more
like a dimmer than an on/off switch.

If an array of many such arms, each with
a tiny controlled delay,

is stacked in parallel, something novel
can be designed:

a steerable laser beam.
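
Here is a rough sketch of why a phase ramp steers the beam; the wavelength and arm spacing are assumed values chosen for illustration. With a phase step Δφ between neighboring arms spaced d apart, the combined wavefront tilts so that sin θ = Δφ·λ / (2π·d).

```python
import math

WAVELENGTH = 1.55e-6  # m: an assumed infrared wavelength (telecom band)
SPACING = 2.0e-6      # m: assumed gap between neighboring arm outputs

def steering_angle_deg(phase_step_rad: float) -> float:
    """Beam tilt produced by a linear phase ramp across the array."""
    return math.degrees(math.asin(
        phase_step_rad * WAVELENGTH / (2 * math.pi * SPACING)))

# Sweeping the per-arm delay electronically sweeps the beam: no moving parts.
for step in (0.0, 0.5, 1.0):  # phase steps, radians
    print(f"{step:.1f} rad -> {steering_angle_deg(step):.2f} degrees")
```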

From their new vantage,

these smart eyes will probe and
see more thoroughly

than anything nature could’ve imagined—

and help navigate any number
of obstacles.

All without anyone breaking a sweat—

except for maybe one disoriented moose.
