3 myths about the future of work (and why they're not true) | Daniel Susskind

Automation anxiety
has been spreading lately,

a fear that in the future,

many jobs will be performed by machines

rather than human beings,

given the remarkable advances
that are unfolding

in artificial intelligence and robotics.

What’s clear is that
there will be significant change.

What’s less clear
is what that change will look like.

My research suggests that the future
is both troubling and exciting.

The threat of technological
unemployment is real,

and yet it’s a good problem to have.

And to explain
how I came to that conclusion,

I want to confront three myths

that I think are currently obscuring
our vision of this automated future.

A picture that we see
on our television screens,

in books, in films, in everyday commentary

is one where an army of robots
descends on the workplace

with one goal in mind:

to displace human beings from their work.

And I call this the Terminator myth.

Yes, machines displace
human beings from particular tasks,

but they don’t just
substitute for human beings.

They also complement them in other tasks,

making that work more valuable
and more important.

Sometimes they complement
human beings directly,

making them more productive
or more efficient at a particular task.

So a taxi driver can use a satnav system
to navigate on unfamiliar roads.

An architect can use
computer-assisted design software

to design bigger,
more complicated buildings.

But technological progress doesn’t
just complement human beings directly.

It also complements them indirectly,
and it does this in two ways.

The first is if we think
of the economy as a pie,

technological progress
makes the pie bigger.

As productivity increases,
incomes rise and demand grows.

The British pie, for instance,

is more than a hundred times
the size it was 300 years ago.

And so people displaced
from tasks in the old pie

could find tasks to do
in the new pie instead.

But technological progress
doesn’t just make the pie bigger.

It also changes
the ingredients in the pie.

As time passes, people spend
their income in different ways,

changing how they spread it
across existing goods,

and developing tastes
for entirely new goods, too.

New industries are created,

new tasks have to be done

and that means often
new roles have to be filled.

So again, the British pie:

300 years ago,
most people worked on farms,

150 years ago, in factories,

and today, most people work in offices.

And once again, people displaced
from tasks in the old bit of pie

could tumble into tasks
in the new bit of pie instead.

Economists call these effects
complementarities,

but really that’s just a fancy word
to capture the different ways

that technological progress
helps human beings.

Resolving this Terminator myth

shows us that there are
two forces at play:

one, machine substitution
that harms workers,

but also these complementarities
that do the opposite.

Now the second myth,

what I call the intelligence myth.

What do the tasks of driving a car,
making a medical diagnosis

and identifying a bird
at a fleeting glimpse have in common?

Well, these are all tasks
that until very recently,

leading economists thought
couldn’t readily be automated.

And yet today, all of these tasks
can be automated.

You know, all major car manufacturers
have driverless car programs.

There are countless systems out there
that can diagnose medical problems.

And there’s even an app
that can identify a bird

at a fleeting glimpse.

Now, this wasn’t simply a case of bad luck
on the part of economists.

They were wrong,

and the reason why
they were wrong is very important.

They had fallen for the intelligence myth,

the belief that machines
have to copy the way

that human beings think and reason

in order to outperform them.

When these economists
were trying to figure out

what tasks machines could not do,

they imagined the only way
to automate a task

was to sit down with a human being,

get them to explain to you
how it was they performed a task,

and then try and capture that explanation

in a set of instructions
for a machine to follow.

This view was popular in artificial
intelligence at one point, too.

I know this because Richard Susskind,

who is my dad and my coauthor,

wrote his doctorate in the 1980s
on artificial intelligence and the law

at Oxford University,

and he was part of the vanguard.

And with a professor called Phillip Capper

and a legal publisher called Butterworths,

they produced the world’s first
commercially available

artificial intelligence system in the law.

This was the home screen design.

He assures me this was
a cool screen design at the time.

(Laughter)

I’ve never been entirely convinced.

He published it
in the form of two floppy disks,

at a time when floppy disks
genuinely were floppy,

and his approach was the same
as the economists':

sit down with a lawyer,

get her to explain to you
how it was she solved a legal problem,

and then try and capture that explanation
in a set of rules for a machine to follow.
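
To make that rule-capture approach concrete, here is a minimal, hypothetical sketch in Python (not the actual Butterworths system, and the legal conditions are invented): every condition and every conclusion has to be written out by hand, exactly as the expert explains it.

```python
# A minimal sketch (NOT the actual Butterworths system) of the
# "capture the expert's explanation as rules" approach: each rule is
# an explicit if-then statement elicited from a human specialist.

RULES = [
    # (condition over the facts, conclusion) -- hypothetical examples
    (lambda f: f["contract_signed"] and not f["payment_made"],
     "possible breach: non-payment"),
    (lambda f: f["notice_given_days"] < 30,
     "notice period likely insufficient"),
]

def advise(facts):
    """Apply every hand-written rule to the facts and collect conclusions."""
    return [conclusion for condition, conclusion in RULES if condition(facts)]

if __name__ == "__main__":
    case = {"contract_signed": True, "payment_made": False, "notice_given_days": 14}
    print(advise(case))  # both toy rules fire for this case
```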

In economics, if human beings
could explain themselves in this way,

the tasks are called routine,
and they could be automated.

But if human beings
can’t explain themselves,

the tasks are called non-routine,
and they’re thought to be out of reach.

Today, that routine-nonroutine
distinction is widespread.

Think how often you hear people say to you

machines can only perform tasks
that are predictable or repetitive,

rules-based or well-defined.

Those are all just
different words for routine.

And go back to those three cases
that I mentioned at the start.

Those are all classic cases
of nonroutine tasks.

Ask a doctor, for instance,
how she makes a medical diagnosis,

and she might be able
to give you a few rules of thumb,

but ultimately she’d struggle.

She’d say it requires things like
creativity and judgment and intuition.

And these things are
very difficult to articulate,

and so it was thought these tasks
would be very hard to automate.

If a human being can’t explain themselves,

where on earth do we begin
in writing a set of instructions

for a machine to follow?

Thirty years ago, this view was right,

but today it’s looking shaky,

and in the future
it’s simply going to be wrong.

Advances in processing power,
in data storage capability

and in algorithm design

mean that this
routine-nonroutine distinction

is diminishingly useful.

To see this, go back to the case
of making a medical diagnosis.

Earlier in the year,

a team of researchers at Stanford
announced they’d developed a system

which can tell you
whether or not a freckle is cancerous

as accurately as leading dermatologists.

How does it work?

It’s not trying to copy the judgment
or the intuition of a doctor.

It knows or understands
nothing about medicine at all.

Instead, it’s running
a pattern recognition algorithm

through 129,450 past cases,

hunting for similarities
between those cases

and the particular lesion in question.

It’s performing these tasks
in an unhuman way,

based on the analysis
of more possible cases

than any doctor could hope
to review in their lifetime.

It didn’t matter that that human being,

that doctor, couldn’t explain
how she’d performed the task.

Now, there are those
who dwell upon the fact

that these machines
aren’t built in our image.

As an example, take IBM’s Watson,

the supercomputer that went
on the US quiz show “Jeopardy!” in 2011,

and it beat the two
human champions at “Jeopardy!”

The day after it won,

The Wall Street Journal ran a piece
by the philosopher John Searle

with the title “Watson
Doesn’t Know It Won on ‘Jeopardy!’”

Right, and it’s brilliant, and it’s true.

You know, Watson didn’t
let out a cry of excitement.

It didn’t call up its parents
to say what a good job it had done.

It didn’t go down to the pub for a drink.

This system wasn’t trying to copy the way
that those human contestants played,

but it didn’t matter.

It still outperformed them.

Resolving the intelligence myth

shows us that our limited understanding
about human intelligence,

about how we think and reason,

is far less of a constraint
on automation than it was in the past.

What’s more, as we’ve seen,

when these machines
perform tasks differently to human beings,

there’s no reason to think

that what human beings
are currently capable of doing

represents any sort of summit

in what these machines
might be capable of doing in the future.

Now the third myth,

what I call the superiority myth.

It’s often said that those who forget

about the helpful side
of technological progress,

those complementarities from before,

are committing something
known as the lump of labor fallacy.

Now, the problem is
the lump of labor fallacy

is itself a fallacy,

and I call this the lump
of labor fallacy fallacy,

or LOLFF, for short.

Let me explain.

The lump of labor fallacy
is a very old idea.

It was a British economist, David Schloss,
who gave it this name in 1892.

He was puzzled
to come across a dock worker

who had begun to use
a machine to make washers,

the small metal discs
that fasten on the end of screws.

And this dock worker
felt guilty for being more productive.

Now, most of the time,
we expect the opposite,

that people feel guilty
for being unproductive,

you know, a little too much time
on Facebook or Twitter at work.

But this worker felt guilty
for being more productive,

and when asked why, he said,
“I know I’m doing wrong.

I’m taking away the work of another man.”

In his mind, there was
some fixed lump of work

to be divided up between him and his pals,

so that if he used
this machine to do more,

there’d be less left for his pals to do.

Schloss saw the mistake.

The lump of work wasn’t fixed.

As this worker used the machine
and became more productive,

the price of washers would fall,
demand for washers would rise,

more washers would have to be made,

and there’d be more work
for his pals to do.

The lump of work would get bigger.

Schloss called this
“the lump of labor fallacy.”

And today you hear people talk
about the lump of labor fallacy

to think about the future
of all types of work.

There’s no fixed lump of work
out there to be divided up

between people and machines.

Yes, machines substitute for human beings,
making the original lump of work smaller,

but they also complement human beings,

and the lump of work
gets bigger and changes.

But LOLFF.

Here’s the mistake:

it’s right to think
that technological progress

makes the lump of work to be done bigger.

Some tasks become more valuable.
New tasks have to be done.

But it’s wrong to think that necessarily,

human beings will be best placed
to perform those tasks.

And this is the superiority myth.

Yes, the lump of work
might get bigger and change,

but as machines become more capable,

it’s likely that they’ll take on
the extra lump of work themselves.

Technological progress,
rather than complement human beings,

complements machines instead.

To see this, go back
to the task of driving a car.

Today, satnav systems
directly complement human beings.

They make some
human beings better drivers.

But in the future,

software is going to displace
human beings from the driving seat,

and these satnav systems,
rather than complement human beings,

will simply make these
driverless cars more efficient,

helping the machines instead.

Or go to those indirect complementarities
that I mentioned as well.

The economic pie may get larger,

but as machines become more capable,

it’s possible that any new demand
will fall on goods that machines,

rather than human beings,
are best placed to produce.

The economic pie may change,

but as machines become more capable,

it’s possible that they’ll be best placed
to do the new tasks that have to be done.

In short, demand for tasks
isn’t demand for human labor.

Human beings only stand to benefit

if they retain the upper hand
in all these complemented tasks,

but as machines become more capable,
that becomes less likely.

So what do these three myths tell us then?

Well, resolving the Terminator myth

shows us that the future of work depends
upon this balance between two forces:

one, machine substitution
that harms workers

but also those complementarities
that do the opposite.

And until now, this balance
has fallen in favor of human beings.

But resolving the intelligence myth

shows us that that first force,
machine substitution,

is gathering strength.

Machines, of course, can’t do everything,

but they can do far more,

encroaching ever deeper into the realm
of tasks performed by human beings.

What’s more, there’s no reason to think

that what human beings
are currently capable of

represents any sort of finishing line,

that machines are going
to draw to a polite stop

once they’re as capable as us.

Now, none of this matters

so long as those helpful
winds of complementarity

blow firmly enough,

but resolving the superiority myth

shows us that that process
of task encroachment

not only strengthens
the force of machine substitution,

but it wears down
those helpful complementarities too.

Bring these three myths together

and I think we can capture a glimpse
of that troubling future.

Machines continue to become more capable,

encroaching ever deeper
on tasks performed by human beings,

strengthening the force
of machine substitution,

weakening the force
of machine complementarity.

And at some point, that balance
falls in favor of machines

rather than human beings.

This is the path we’re currently on.

I say “path” deliberately,
because I don’t think we’re there yet,

but it is hard to avoid the conclusion
that this is our direction of travel.

That’s the troubling part.

Let me say now why I think actually
this is a good problem to have.

For most of human history,
one economic problem has dominated:

how to make the economic pie
large enough for everyone to live on.

Go back to the turn
of the first century AD,

and if you took the global economic pie

and divided it up into equal slices
for everyone in the world,

everyone would get a few hundred dollars.

Almost everyone lived
on or around the poverty line.

And if you roll forward a thousand years,

roughly the same is true.

But in the last few hundred years,
economic growth has taken off.

Those economic pies have exploded in size.

Global GDP per head,

the value of those individual
slices of the pie today,

they’re about 10,150 dollars.

If economic growth continues
at two percent,

our children will be twice as rich as us.

If it continues
at a more measly one percent,

our grandchildren
will be twice as rich as us.
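
The arithmetic behind those doubling claims is ordinary compound growth; here is a quick sketch (the growth rates are the talk's, the code is only illustrative):

```python
# Compound growth: years needed for income to double at a given annual rate.
import math

def doubling_time(rate):
    """Years t such that (1 + rate) ** t = 2."""
    return math.log(2) / math.log(1 + rate)

print(round(doubling_time(0.02)))  # ~35 years at 2% -- roughly our children
print(round(doubling_time(0.01)))  # ~70 years at 1% -- roughly our grandchildren
```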

By and large, we’ve solved
that traditional economic problem.

Now, technological unemployment,
if it does happen,

in a strange way will be
a symptom of that success,

for we will have solved one problem –
how to make the pie bigger –

but replaced it with another –

how to make sure
that everyone gets a slice.

As other economists have noted,
solving this problem won’t be easy.

Today, for most people,

their job is their seat
at the economic dinner table,

and in a world with less work
or even without work,

it won’t be clear
how they get their slice.

There’s a great deal
of discussion, for instance,

about various forms
of universal basic income

as one possible approach,

and there are trials underway

in the United States
and in Finland and in Kenya.

And this is the collective challenge
that’s right in front of us,

to figure out how this material prosperity
generated by our economic system

can be enjoyed by everyone

in a world in which
our traditional mechanism

for slicing up the pie,

the work that people do,

withers away and perhaps disappears.

Solving this problem is going to require
us to think in very different ways.

There’s going to be a lot of disagreement
about what ought to be done,

but it’s important to remember
that this is a far better problem to have

than the one that haunted
our ancestors for centuries:

how to make that pie
big enough in the first place.

Thank you very much.

(Applause)
