3 myths about the future of work (and why they're not true)
Daniel Susskind

Automation anxiety
has been spreading lately,

a fear that in the future,

many jobs will be performed by machines

rather than human beings,

given the remarkable advances
that are unfolding

in artificial intelligence and robotics.

What’s clear is that
there will be significant change.

What’s less clear
is what that change will look like.

My research suggests that the future
is both troubling and exciting.

The threat of technological
unemployment is real,

and yet it’s a good problem to have.

And to explain
how I came to that conclusion,

I want to confront three myths

that I think are currently obscuring
our vision of this automated future.

A picture that we see
on our television screens,

in books, in films, in everyday commentary

is one where an army of robots
descends on the workplace

with one goal in mind:

to displace human beings from their work.

And I call this the Terminator myth.

Yes, machines displace
human beings from particular tasks,

but they don’t just
substitute for human beings.

They also complement them in other tasks,

making that work more valuable
and more important.

Sometimes they complement
human beings directly,

making them more productive
or more efficient at a particular task.

So a taxi driver can use a satnav system
to navigate on unfamiliar roads.

An architect can use
computer-aided design software

to design bigger,
more complicated buildings.

But technological progress doesn’t
just complement human beings directly.

It also complements them indirectly,
and it does this in two ways.

The first is if we think
of the economy as a pie,

technological progress
makes the pie bigger.

As productivity increases,
incomes rise and demand grows.

The British pie, for instance,

is more than a hundred times
the size it was 300 years ago.
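
(A quick back-of-the-envelope check of that figure, with r standing for the implied average annual growth rate: a hundred-fold increase over 300 years works out to only about 1.5 percent growth per year.)

\[
(1 + r)^{300} = 100
\quad\Longrightarrow\quad
r = 100^{1/300} - 1 \approx 0.0155
\]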

And so people displaced
from tasks in the old pie

could find tasks to do
in the new pie instead.

But technological progress
doesn’t just make the pie bigger.

It also changes
the ingredients in the pie.

As time passes, people spend
their income in different ways,

changing how they spread it
across existing goods,

and developing tastes
for entirely new goods, too.

New industries are created,

new tasks have to be done

and that often means
new roles have to be filled.

So again, the British pie:

300 years ago,
most people worked on farms,

150 years ago, in factories,

and today, most people work in offices.

And once again, people displaced
from tasks in the old bit of pie

could tumble into tasks
in the new bit of pie instead.

Economists call these effects
complementarities,

but really that’s just a fancy word
to capture the different ways

that technological progress
helps human beings.

Resolving this Terminator myth

shows us that there are
two forces at play:

one, machine substitution
that harms workers,

but also these complementarities
that do the opposite.

Now the second myth,

what I call the intelligence myth.

What do the tasks of driving a car,
making a medical diagnosis

and identifying a bird
at a fleeting glimpse have in common?

Well, these are all tasks
that until very recently,

leading economists thought
couldn’t readily be automated.

And yet today, all of these tasks
can be automated.

You know, all major car manufacturers
have driverless car programs.

There are countless systems out there
that can diagnose medical problems.

And there’s even an app
that can identify a bird

at a fleeting glimpse.

Now, this wasn’t simply a case of bad luck
on the part of economists.

They were wrong,

and the reason why
they were wrong is very important.

They’d fallen for the intelligence myth,

the belief that machines
have to copy the way

that human beings think and reason

in order to outperform them.

When these economists
were trying to figure out

what tasks machines could not do,

they imagined the only way
to automate a task

was to sit down with a human being,

get them to explain to you
how it was they performed a task,

and then try and capture that explanation

in a set of instructions
for a machine to follow.

This view was popular in artificial
intelligence at one point, too.

I know this because Richard Susskind,

who is my dad and my coauthor,

wrote his doctorate in the 1980s
on artificial intelligence and the law

at Oxford University,

and he was part of the vanguard.

And with a professor called Phillip Capper

and a legal publisher called Butterworths,

they produced the world’s first
commercially available

artificial intelligence system in the law.

This was the home screen design.

He assures me this was
a cool screen design at the time.

(Laughter)

I’ve never been entirely convinced.

He published it
in the form of two floppy disks,

at a time when floppy disks
genuinely were floppy,

and his approach was the same
as the economists':

sit down with a lawyer,

get her to explain to you
how it was she solved a legal problem,

and then try and capture that explanation
in a set of rules for a machine to follow.

In economics, if human beings
can explain themselves in this way,

the tasks are called routine,
and they can be automated.

But if human beings
can’t explain themselves,

the tasks are called non-routine,
and they’re thought to be out of reach.

Today, that routine-nonroutine
distinction is widespread.

Think how often you hear people say to you

machines can only perform tasks
that are predictable or repetitive,

rules-based or well-defined.

Those are all just
different words for routine.

And go back to those three cases
that I mentioned at the start.

Those are all classic cases
of nonroutine tasks.

Ask a doctor, for instance,
how she makes a medical diagnosis,

and she might be able
to give you a few rules of thumb,

but ultimately she’d struggle.

She’d say it requires things like
creativity and judgment and intuition.

And these things are
very difficult to articulate,

and so it was thought these tasks
would be very hard to automate.

If a human being can’t explain themselves,

where on earth do we begin
in writing a set of instructions

for a machine to follow?

Thirty years ago, this view was right,

but today it’s looking shaky,

and in the future
it’s simply going to be wrong.

Advances in processing power,
in data storage capability

and in algorithm design

mean that this
routine-nonroutine distinction

is diminishingly useful.

To see this, go back to the case
of making a medical diagnosis.

Earlier in the year,

a team of researchers at Stanford
announced they’d developed a system

which can tell you
whether or not a freckle is cancerous

as accurately as leading dermatologists.

How does it work?

It’s not trying to copy the judgment
or the intuition of a doctor.

It knows or understands
nothing about medicine at all.

Instead, it’s running
a pattern recognition algorithm

through 129,450 past cases,

hunting for similarities
between those cases

and the particular lesion in question.

It’s performing these tasks
in an unhuman way,

based on the analysis
of more possible cases

than any doctor could hope
to review in their lifetime.

It didn’t matter that that human being,

that doctor, couldn’t explain
how she’d performed the task.
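
To make that concrete, here is a minimal sketch of this kind of similarity-based classification. It is only an illustration, not the Stanford system itself (which used a deep neural network trained on those images); the feature vectors, labels and the classify_lesion helper are hypothetical stand-ins for real clinical data.

```python
# Illustrative sketch only: label a new lesion by "hunting for similarities"
# with past cases, via a k-nearest-neighbour vote. The real Stanford system
# used a deep convolutional network; the data here are random placeholders.
import numpy as np

def classify_lesion(new_case, past_cases, past_labels, k=15):
    """Label a new case by majority vote among its k most similar past cases.

    new_case    : 1-D feature vector describing the lesion in question
    past_cases  : 2-D array, one row of features per past case
    past_labels : 0/1 labels for the past cases (0 = benign, 1 = cancerous)
    """
    # Distance from the new case to every past case
    distances = np.linalg.norm(past_cases - new_case, axis=1)
    # Indices of the k most similar (closest) past cases
    nearest = np.argsort(distances)[:k]
    # Majority vote among those neighbours
    return int(np.round(past_labels[nearest].mean()))

# Hypothetical usage with random data, just to show the shapes involved
rng = np.random.default_rng(0)
past_cases = rng.normal(size=(129_450, 64))      # 64 features per past case
past_labels = rng.integers(0, 2, size=129_450)   # benign / cancerous
new_case = rng.normal(size=64)
print(classify_lesion(new_case, past_cases, past_labels))
```

The point the sketch keeps is the one in the talk: the system needs no medical understanding at all, only a way of measuring similarity against far more past cases than any doctor could review.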

Now, there are those
who dwell upon the fact

that these machines
aren’t built in our image.

As an example, take IBM’s Watson,

the supercomputer that went
on the US quiz show “Jeopardy!” in 2011,

and it beat the two
human champions at “Jeopardy!”

The day after it won,

The Wall Street Journal ran a piece
by the philosopher John Searle

with the title “Watson
Doesn’t Know It Won on ‘Jeopardy!'”

Right, and it’s brilliant, and it’s true.

You know, Watson didn’t
let out a cry of excitement.

It didn’t call up its parents
to say what a good job it had done.

It didn’t go down to the pub for a drink.

This system wasn’t trying to copy the way
that those human contestants played,

but it didn’t matter.

It still outperformed them.

Resolving the intelligence myth

shows us that our limited understanding
of human intelligence,

of how we think and reason,

is far less of a constraint
on automation than it was in the past.

What’s more, as we’ve seen,

when these machines
perform tasks differently to human beings,

there’s no reason to think

that what human beings
are currently capable of doing

represents any sort of summit

in what these machines
might be capable of doing in the future.

Now the third myth,

what I call the superiority myth.

It’s often said that those who forget

about the helpful side
of technological progress,

those complementarities from before,

are committing something
known as the lump of labor fallacy.

Now, the problem is
the lump of labor fallacy

is itself a fallacy,

and I call this the lump
of labor fallacy fallacy,

or LOLFF, for short.

Let me explain.

The lump of labor fallacy
is a very old idea.

It was a British economist, David Schloss,
who gave it this name in 1892.

He was puzzled
to come across a dock worker

who had begun to use
a machine to make washers,

the small metal discs
that fasten on the end of screws.

And this dock worker
felt guilty for being more productive.

Now, most of the time,
we expect the opposite,

that people feel guilty
for being unproductive,

you know, a little too much time
on Facebook or Twitter at work.

But this worker felt guilty
for being more productive,

and when asked why, he said,
“I know I’m doing wrong.

I’m taking away the work of another man.”

In his mind, there was
some fixed lump of work

to be divided up between him and his pals,

so that if he used
this machine to do more,

there’d be less left for his pals to do.

Schloss saw the mistake.

The lump of work wasn’t fixed.

As this worker used the machine
and became more productive,

the price of washers would fall,
demand for washers would rise,

more washers would have to be made,

and there’d be more work
for his pals to do.

The lump of work would get bigger.

Schloss called this
“the lump of labor fallacy.”

And today you hear people talk
about the lump of labor fallacy

to think about the future
of all types of work.

There’s no fixed lump of work
out there to be divided up

between people and machines.

Yes, machines substitute for human beings,
making the original lump of work smaller,

but they also complement human beings,

and the lump of work
gets bigger and changes.

But LOLFF.

Here’s the mistake:

it’s right to think
that technological progress

makes the lump of work to be done bigger.

Some tasks become more valuable.
New tasks have to be done.

But it’s wrong to think that necessarily,

human beings will be best placed
to perform those tasks.

And this is the superiority myth.

Yes, the lump of work
might get bigger and change,

but as machines become more capable,

it’s likely that they’ll take on
the extra lump of work themselves.

Technological progress,
rather than complement human beings,

complements machines instead.

To see this, go back
to the task of driving a car.

Today, satnav systems
directly complement human beings.

They make some
human beings better drivers.

But in the future,

software is going to displace
human beings from the driving seat,

and these satnav systems,
rather than complement human beings,

will simply make these
driverless cars more efficient,

helping the machines instead.

Or go to those indirect complementarities
that I mentioned as well.

The economic pie may get larger,

but as machines become more capable,

it’s possible that any new demand
will fall on goods that machines,

rather than human beings,
are best placed to produce.

The economic pie may change,

but as machines become more capable,

it’s possible that they’ll be best placed
to do the new tasks that have to be done.

In short, demand for tasks
isn’t demand for human labor.

Human beings only stand to benefit

if they retain the upper hand
in all these complemented tasks,

but as machines become more capable,
that becomes less likely.

So what do these three myths tell us then?

Well, resolving the Terminator myth

shows us that the future of work depends
upon this balance between two forces:

one, machine substitution
that harms workers

but also those complementarities
that do the opposite.

And until now, this balance
has fallen in favor of human beings.

But resolving the intelligence myth

shows us that that first force,
machine substitution,

is gathering strength.

Machines, of course, can’t do everything,

but they can do far more,

encroaching ever deeper into the realm
of tasks performed by human beings.

What’s more, there’s no reason to think

that what human beings
are currently capable of

represents any sort of finishing line,

that machines are going
to draw to a polite stop

once they’re as capable as us.

Now, none of this matters

so long as those helpful
winds of complementarity

blow firmly enough,

but resolving the superiority myth

shows us that that process
of task encroachment

not only strengthens
the force of machine substitution,

but it wears down
those helpful complementarities too.

Bring these three myths together

and I think we can capture a glimpse
of that troubling future.

Machines continue to become more capable,

encroaching ever deeper
on tasks performed by human beings,

strengthening the force
of machine substitution,

weakening the force
of machine complementarity.

And at some point, that balance
falls in favor of machines

rather than human beings.

This is the path we’re currently on.

I say “path” deliberately,
because I don’t think we’re there yet,

but it is hard to avoid the conclusion
that this is our direction of travel.

That’s the troubling part.

Let me say now why I actually think
this is a good problem to have.

For most of human history,
one economic problem has dominated:

how to make the economic pie
large enough for everyone to live on.

Go back to the turn
of the first century AD,

and if you took the global economic pie

and divided it up into equal slices
for everyone in the world,

everyone would get a few hundred dollars.

Almost everyone lived
on or around the poverty line.

And if you roll forward a thousand years,

roughly the same is true.

But in the last few hundred years,
economic growth has taken off.

Those economic pies have exploded in size.

Global GDP per head,

the value of those individual
slices of the pie today,

is about 10,150 dollars.

If economic growth continues
at two percent,

our children will be twice as rich as us.

If it continues
at a more measly one percent,

our grandchildren
will be twice as rich as us.
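
(Those two claims are just the standard doubling-time arithmetic: at an annual growth rate r, incomes double after ln 2 / ln(1 + r) years, roughly one generation at 2 percent and two generations at 1 percent.)

\[
T_{\text{double}} = \frac{\ln 2}{\ln(1 + r)} \approx
\begin{cases}
35 \text{ years}, & r = 0.02\\[2pt]
70 \text{ years}, & r = 0.01
\end{cases}
\]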

By and large, we’ve solved
that traditional economic problem.

Now, technological unemployment,
if it does happen,

in a strange way will be
a symptom of that success:

we will have solved one problem –
how to make the pie bigger –

but replaced it with another –

how to make sure
that everyone gets a slice.

As other economists have noted,
solving this problem won’t be easy.

Today, for most people,

their job is their seat
at the economic dinner table,

and in a world with less work
or even without work,

it won’t be clear
how they get their slice.

There’s a great deal
of discussion, for instance,

about various forms
of universal basic income

as one possible approach,

and there are trials underway

in the United States
and in Finland and in Kenya.

And this is the collective challenge
that’s right in front of us,

to figure out how this material prosperity
generated by our economic system

can be enjoyed by everyone

in a world in which
our traditional mechanism

for slicing up the pie,

the work that people do,

withers away and perhaps disappears.

Solving this problem is going to require
us to think in very different ways.

There’s going to be a lot of disagreement
about what ought to be done,

but it’s important to remember
that this is a far better problem to have

than the one that haunted
our ancestors for centuries:

how to make that pie
big enough in the first place.

Thank you very much.

(Applause)