How do we learn to work with intelligent machines? Matt Beane

It’s 6:30 in the morning,

and Kristen is wheeling
her prostate patient into the OR.

She’s a resident, a surgeon in training.

It’s her job to learn.

Today, she’s really hoping to do
some of the nerve-sparing,

extremely delicate dissection
that can preserve erectile function.

That’ll be up to the attending surgeon,
though, and he’s not there yet.

She and the team put the patient under,

and she leads the initial eight-inch
incision in the lower abdomen.

Once she’s got that clamped back,
she tells the nurse to call the attending.

He arrives, gowns up,

and from there on in, their four hands
are mostly in that patient –

with him guiding
but Kristen leading the way.

When the prostate’s out (and, yes,
he let Kristen do a little nerve sparing),

he rips off his scrubs.

He starts to do paperwork.

Kristen closes the patient by 8:15,

with a junior resident
looking over her shoulder.

And she lets him do
the final line of sutures.

Kristen feels great.

Patient’s going to be fine,

and no doubt she’s a better surgeon
than she was at 6:30.

Now this is extreme work.

But Kristen’s learning to do her job
the way that most of us do:

watching an expert for a bit,

getting involved in easy,
safe parts of the work

and progressing to riskier
and harder tasks

as they guide and decide she’s ready.

My whole life I’ve been fascinated
by this kind of learning.

It feels elemental,
part of what makes us human.

It has different names: apprenticeship,
coaching, mentorship, on-the-job training.

In surgery, it’s called
“see one, do one, teach one.”

But the process is the same,

and it’s been the main path to skill
around the globe for thousands of years.

Right now, we’re handling AI
in a way that blocks that path.

We’re sacrificing learning
in our quest for productivity.

I found this first in surgery
while I was at MIT,

but now I’ve got evidence
it’s happening all over,

in very different industries
and with very different kinds of AI.

If we do nothing, millions of us
are going to hit a brick wall

as we try to learn to deal with AI.

Let’s go back to surgery to see how.

Fast forward six months.

It’s 6:30am again, and Kristen
is wheeling another prostate patient in,

but this time to the robotic OR.

The attending leads the attachment of

a four-armed, thousand-pound
robot to the patient.

They both rip off their scrubs,

head to control consoles
10 or 15 feet away,

and Kristen just watches.

The robot allows the attending
to do the whole procedure himself,

so he basically does.

He knows she needs practice.

He wants to give her control.

But he also knows she’d be slower
and make more mistakes,

and his patient comes first.

So Kristen has no hope of getting anywhere
near those nerves during this rotation.

She’ll be lucky if she operates more than
15 minutes during a four-hour procedure.

And she knows that when she slips up,

he’ll tap a touch screen,
and she’ll be watching again,

feeling like a kid in the corner
with a dunce cap.

Like all the studies of robots and work
I’ve done in the last eight years,

I started this one
with a big, open question:

How do we learn to work
with intelligent machines?

To find out, I spent two and a half years
observing dozens of residents and surgeons

doing traditional and robotic surgery,
interviewing them

and in general hanging out
with the residents as they tried to learn.

I covered 18 of the top
US teaching hospitals,

and the story was the same.

Most residents were in Kristen’s shoes.

They got to “see one” plenty,

but the “do one” was barely available.

So they couldn’t struggle,
and they weren’t learning.

This was important news for surgeons, but
I needed to know how widespread it was:

Where else was using AI
blocking learning on the job?

To find out, I’ve connected with a small
but growing group of young researchers

who’ve done boots-on-the-ground studies
of work involving AI

in very diverse settings
like start-ups, policing,

investment banking and online education.

Like me, they spent at least a year
and many hundreds of hours observing,

interviewing and often working
side-by-side with the people they studied.

We shared data, and I looked for patterns.

No matter the industry, the work,
the AI, the story was the same.

Organizations were trying harder
and harder to get results from AI,

and they were peeling learners away from
expert work as they did it.

Start-up managers were outsourcing
their customer contact.

Cops had to learn to deal with crime
forecasts without expert support.

Junior bankers were getting
cut out of complex analysis,

and professors had to build
online courses without help.

And the effect of all of this
was the same as in surgery.

Learning on the job
was getting much harder.

This can’t last.

McKinsey estimates that between half
a billion and a billion of us

are going to have to adapt to AI
in our daily work by 2030.

And we’re assuming
that on-the-job learning

will be there for us as we try.

Accenture’s latest workers’ survey showed
that most workers learned key skills

on the job, not in formal training.

So while we talk a lot about AI’s
potential future impact,

the aspect of it
that may matter most right now

is that we’re handling it in a way
that blocks learning on the job

just when we need it most.

Now across all our sites,
a small minority found a way to learn.

They did it by breaking and bending rules.

Approved methods weren’t working,
so they bent and broke rules

to get hands-on practice with experts.

In my setting, residents got involved
in robotic surgery in medical school

at the expense
of their generalist education.

And they spent hundreds of extra hours
with simulators and recordings of surgery,

when they were supposed to learn in the OR.

And maybe most importantly,
they found ways to struggle

in live procedures
with limited expert supervision.

I call all this “shadow learning,”
because it bends the rules

and learners do it out of the limelight.

And everyone turns a blind eye
because it gets results.

Remember, these are
the star pupils of the bunch.

Now, obviously, this is not OK,
and it’s not sustainable.

No one should have to risk getting fired

to learn the skills
they need to do their job.

But we do need to learn from these people.

They took serious risks to learn.

They understood they needed to protect
struggle and challenge in their work

so that they could push themselves
to tackle hard problems

right near the edge of their capacity.

They also made sure
there was an expert nearby

to offer pointers and to backstop
against catastrophe.

Let’s build this combination
of struggle and expert support

into each AI implementation.

Here’s the clearest example of this
I could find on the ground.

Before robots,

if you were a bomb disposal technician,
you dealt with an IED by walking up to it.

A junior officer was
hundreds of feet away,

so they could only watch and help
if you decided it was safe

and invited them downrange.

Now you sit side-by-side
in a bomb-proof truck.

You both watch the video feed.

They control a distant robot,
and you guide the work out loud.

Trainees learn better than they
did before robots.

We can scale this to surgery,
start-ups, policing,

investment banking,
online education and beyond.

The good news is
we’ve got new tools to do it.

The internet and the cloud mean we don’t
always need one expert for every trainee,

for them to be physically near each other
or even to be in the same organization.

And we can build AI to help:

to coach learners as they struggle,
to coach experts as they coach

and to connect those two groups
in smart ways.

There are people at work
on systems like this,

but they’ve been mostly focused
on formal training.

And the deeper crisis
is in on-the-job learning.

We must do better.

Today’s problems demand we do better

to create work that takes full advantage
of AI’s amazing capabilities

while enhancing our skills as we do it.

That’s the kind of future
I dreamed of as a kid.

And the time to create it is now.

Thank you.

(Applause)