How do we learn to work with intelligent machines?
Matt Beane

It’s 6:30 in the morning,

and Kristen is wheeling
her prostate patient into the OR.

She’s a resident, a surgeon in training.

It’s her job to learn.

Today, she’s really hoping to do
some of the nerve-sparing,

extremely delicate dissection
that can preserve erectile function.

That’ll be up to the attending surgeon,
though, and he’s not there yet.

She and the team put the patient under,

and she leads the initial eight-inch
incision in the lower abdomen.

Once she’s got that clamped back,
she tells the nurse to call the attending.

He arrives, gowns up,

and from there on in, their four hands
are mostly in that patient –

with him guiding
but Kristen leading the way.

When the prostate’s out (and, yes,
he let Kristen do a little nerve-sparing),

he rips off his scrubs.

He starts to do paperwork.

Kristen closes the patient by 8:15,

with a junior resident
looking over her shoulder.

And she lets him do
the final line of sutures.

Kristen feels great.

Patient’s going to be fine,

and no doubt she’s a better surgeon
than she was at 6:30.

Now this is extreme work.

But Kristen’s learning to do her job
the way that most of us do:

watching an expert for a bit,

getting involved in easy,
safe parts of the work

and progressing to riskier
and harder tasks

as they guide and decide she’s ready.

My whole life I’ve been fascinated
by this kind of learning.

It feels elemental,
part of what makes us human.

It has different names: apprenticeship,
coaching, mentorship, on-the-job training.

In surgery, it’s called
“see one, do one, teach one.”

But the process is the same,

and it’s been the main path to skill
around the globe for thousands of years.

Right now, we’re handling AI
in a way that blocks that path.

We’re sacrificing learning
in our quest for productivity.

I found this first in surgery
while I was at MIT,

but now I’ve got evidence
it’s happening all over,

in very different industries
and with very different kinds of AI.

If we do nothing, millions of us
are going to hit a brick wall

as we try to learn to deal with AI.

Let’s go back to surgery to see how.

Fast forward six months.

It’s 6:30 a.m. again, and Kristen
is wheeling another prostate patient in,

but this time to the robotic OR.

The attending leads attaching

a four-armed, thousand-pound
robot to the patient.

They both rip off their scrubs,

head to control consoles
10 or 15 feet away,

and Kristen just watches.

The robot allows the attending
to do the whole procedure himself,

so he basically does.

He knows she needs practice.

He wants to give her control.

But he also knows she’d be slower
and make more mistakes,

and his patient comes first.

So Kristen has no hope of getting anywhere
near those nerves during this rotation.

She’ll be lucky if she operates more than
15 minutes during a four-hour procedure.

And she knows that when she slips up,

he’ll tap a touch screen,
and she’ll be watching again,

feeling like a kid in the corner
with a dunce cap.

Like all the studies of robots and work
I’ve done in the last eight years,

I started this one
with a big, open question:

How do we learn to work
with intelligent machines?

To find out, I spent two and a half years
observing dozens of residents and surgeons

doing traditional and robotic surgery,
interviewing them

and in general hanging out
with the residents as they tried to learn.

I covered 18 of the top
US teaching hospitals,

and the story was the same.

Most residents were in Kristen’s shoes.

They got to “see one” plenty,

but the “do one” was barely available.

So they couldn’t struggle,
and they weren’t learning.

This was important news for surgeons, but
I needed to know how widespread it was:

Where else was using AI
blocking learning on the job?

To find out, I’ve connected with a small
but growing group of young researchers

who’ve done boots-on-the-ground studies
of work involving AI

in very diverse settings
like start-ups, policing,

investment banking and online education.

Like me, they spent at least a year
and many hundreds of hours observing,

interviewing and often working
side-by-side with the people they studied.

We shared data, and I looked for patterns.

No matter the industry, the work,
the AI, the story was the same.

Organizations were trying harder
and harder to get results from AI,

and they were peeling learners away from
expert work as they did it.

Start-up managers were outsourcing
their customer contact.

Cops had to learn to deal with crime
forecasts without expert support.

Junior bankers were getting
cut out of complex analysis,

and professors had to build
online courses without help.

And the effect of all of this
was the same as in surgery.

Learning on the job
was getting much harder.

This can’t last.

McKinsey estimates that between half
a billion and a billion of us

are going to have to adapt to AI
in our daily work by 2030.

And we’re assuming
that on-the-job learning

will be there for us as we try.

Accenture’s latest workers’ survey showed
that most workers learned key skills

on the job, not in formal training.

So while we talk a lot about its
potential future impact,

the aspect of AI
that may matter most right now

is that we’re handling it in a way
that blocks learning on the job

just when we need it most.

Now across all our sites,
a small minority found a way to learn.

They did it by breaking and bending rules.

Approved methods weren’t working,
so they bent and broke rules

to get hands-on practice with experts.

In my setting, residents got involved
in robotic surgery in medical school

at the expense
of their generalist education.

And they spent hundreds of extra hours
with simulators and recordings of surgery,

when they were supposed to learn in the OR.

And maybe most importantly,
they found ways to struggle

in live procedures
with limited expert supervision.

I call all this “shadow learning,”
because it bends the rules

and learners do it out of the limelight.

And everyone turns a blind eye
because it gets results.

Remember, these are
the star pupils of the bunch.

Now, obviously, this is not OK,
and it’s not sustainable.

No one should have to risk getting fired

to learn the skills
they need to do their job.

But we do need to learn from these people.

They took serious risks to learn.

They understood they needed to protect
struggle and challenge in their work

so that they could push themselves
to tackle hard problems

right near the edge of their capacity.

They also made sure
there was an expert nearby

to offer pointers and to backstop
against catastrophe.

Let’s build this combination
of struggle and expert support

into each AI implementation.

Here’s the clearest example of this
I could find on the ground.

Before robots,

if you were a bomb disposal technician,
you dealt with an IED by walking up to it.

A junior officer was
hundreds of feet away,

so they could only watch and help
if you decided it was safe

and invited them downrange.

Now you sit side-by-side
in a bomb-proof truck.

You both watch the video feed.

They control a distant robot,
and you guide the work out loud.

Trainees learn better than they
did before robots.

We can scale this to surgery,
start-ups, policing,

investment banking,
online education and beyond.

The good news is
we’ve got new tools to do it.

The internet and the cloud mean we don’t
always need one expert for every trainee,

for them to be physically near each other
or even to be in the same organization.

And we can build AI to help:

to coach learners as they struggle,
to coach experts as they coach

and to connect those two groups
in smart ways.

There are people at work
on systems like this,

but they’ve been mostly focused
on formal training.

And the deeper crisis
is in on-the-job learning.

We must do better.

Today’s problems demand we do better

to create work that takes full advantage
of AI’s amazing capabilities

while enhancing our skills as we do it.

That’s the kind of future
I dreamed of as a kid.

And the time to create it is now.

Thank you.

(Applause)
