How humans and AI can work together to create better businesses
Sylvain Duranton

Translator: Ivana Korom
Reviewer: Krystian Aparta

Let me share a paradox.

For the last 10 years,

many companies have been trying
to become less bureaucratic,

to have fewer central rules
and procedures,

more autonomy for their local
teams to be more agile.

And now they are pushing
artificial intelligence, AI,

unaware that cool technology

might make them
more bureaucratic than ever.

Why?

Because AI operates
just like bureaucracies.

The essence of bureaucracy

is to favor rules and procedures
over human judgment.

And AI decides solely based on rules.

Many rules inferred from past data

but only rules.

And if human judgment
is not kept in the loop,

AI will bring a terrifying form
of new bureaucracy –

I call it “algocracy” –

where AI will take more and more
critical decisions by the rules

outside of any human control.

Is there a real risk?

Yes.

I’m leading a team of 800 AI specialists.

We have deployed
over 100 customized AI solutions

for large companies around the world.

And I see too many corporate executives
behaving like bureaucrats from the past.

They want to take costly,
old-fashioned humans out of the loop

and rely only upon AI to take decisions.

I call this the “human-zero mindset.”

And why is it so tempting?

Because the other route,
“Human plus AI,” is long,

costly and difficult.

Business teams, tech teams,
data-science teams

have to iterate for months

to craft exactly how humans and AI
can best work together.

Long, costly and difficult.

But the reward is huge.

A recent survey from BCG and MIT

shows that 18 percent
of companies in the world

are pioneering AI,

making money with it.

Those companies focus 80 percent
of their AI initiatives

on effectiveness and growth,

taking better decisions –

not replacing humans with AI
to save costs.

Why is it important
to keep humans in the loop?

Simply because, left alone,
AI can do very dumb things.

Sometimes with no consequences,
like in this tweet.

“Dear Amazon,

I bought a toilet seat.

Necessity, not desire.

I do not collect them,

I’m not a toilet-seat addict.

No matter how temptingly you email me,

I am not going to think, ‘Oh, go on, then,

one more toilet seat,
I’ll treat myself.’”

(Laughter)

Sometimes, with more consequence,
like in this other tweet.

“Had the same situation

with my mother’s burial urn.”

(Laughter)

“For months after her death,

I got messages from Amazon,
saying, ‘If you liked that…’”

(Laughter)

Sometimes with worse consequences.

Take an AI engine rejecting
a student application for university.

Why?

Because it has “learned,” on past data,

the characteristics of students
who will pass and who will fail.

Some are obvious, like GPAs.

But if, in the past, all students
from a given postal code have failed,

it is very likely
that AI will make this a rule

and will reject every student
with this postal code,

not giving anyone the opportunity
to prove the rule wrong.
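
To see how easily that happens, consider a minimal sketch with invented admissions data; the postal-code flag, GPAs and labels here are made up purely for illustration:

```python
# A minimal sketch (invented data) of how a model turns a biased pattern
# in past admissions into a hard rule. Every past applicant from one
# postal code failed, so the tree learns to reject on that flag alone.
from sklearn.tree import DecisionTreeClassifier

# Features: [gpa, postal_code_flag]; label: 1 = passed, 0 = failed
X = [[3.8, 0], [3.5, 0], [2.1, 0], [3.9, 1], [3.7, 1]]
y = [1, 1, 0, 0, 0]  # both strong students with the flag failed

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A bright applicant with a top GPA but the "wrong" postal code:
print(model.predict([[4.0, 1]]))  # -> [0]: rejected by the learned rule
```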

And no one can check all the rules,

because advanced AI
is constantly learning.

And if humans are kept out of the room,

there comes the algocratic nightmare.

Who is accountable
for rejecting the student?

No one, AI did.

Is it fair? Yes.

The same set of objective rules
has been applied to everyone.

Could we reconsider for this bright kid
with the wrong postal code?

No, algos don’t change their mind.

We have a choice here.

Carry on with algocracy

or decide to go to “Human plus AI.”

And to do this,

we need to stop thinking tech first,

and we need to start applying
the secret formula.

To deploy “Human plus AI,”

10 percent of the effort is to code algos;

20 percent to build tech
around the algos,

collecting data, building UI,
integrating into legacy systems.

But 70 percent, the bulk of the effort,

is about weaving together AI
with people and processes

to maximize real outcomes.

AI fails when that 70 percent
is cut short.

The price tag for that can be small:

many, many millions of dollars
wasted on useless technology.

Does anyone care?

Or real tragedies:

346 casualties in the recent crashes
of two B-737 aircraft

when pilots could not interact properly

with a computerized command system.

For a successful 70 percent,

the first step is to make sure
that algos are coded by data scientists

and domain experts together.

Take health care for example.

One of our teams worked on a new drug
with a slight problem.

When taking their first dose,

some patients, very few,
have heart attacks.

So, all patients,
when taking their first dose,

have to spend one day in hospital,

for monitoring, just in case.

Our objective was to identify patients
who were at zero risk of heart attacks,

who could skip the day in hospital.

We used AI to analyze data
from clinical trials,

to correlate ECG signal,
blood composition, biomarkers,

with the risk of heart attack.

In one month,

our model could flag 62 percent
of patients at zero risk.

They could skip the day in hospital.
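
The talk doesn’t spell out the model, so here is only a hedged sketch of the idea, with a hypothetical file and hypothetical feature names: train a classifier on the trial data, then flag as “zero risk” only the patients whose score falls below that of every actual event in held-out data.

```python
# A hedged sketch, not the actual model. File and column names are
# hypothetical; the real trial data and features are not public.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

trials = pd.read_csv("clinical_trials.csv")
features = ["ecg_qt_interval", "troponin", "liver_enzyme", "age"]
X, y = trials[features], trials["heart_attack"]

X_train, X_val, y_train, y_val = train_test_split(
    X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# "Zero risk" cutoff: strictly below the lowest score of any patient
# who actually had a heart attack in held-out data, so no observed
# event falls under it (zero false negatives on this sample).
p_val = model.predict_proba(X_val)[:, 1]
cutoff = p_val[(y_val == 1).to_numpy()].min()

flagged = p_val < cutoff
print(f"{flagged.mean():.0%} of patients flagged at zero risk")
```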

Would you be comfortable
staying at home for your first dose

if the algo said so?

(Laughter)

Doctors were not.

What if we had false negatives,

meaning people who are told by AI
they can stay at home, and die?

(Laughter)

There started our 70 percent.

We worked with a team of doctors

to check the medical logic
of each variable in our model.

For instance, we were using
the concentration of a liver enzyme

as a predictor,

for which the medical logic
was not obvious.

The statistical signal was quite strong.

But what if it was a bias in our sample?

That predictor was taken out of the model.

We also took out predictors
for which experts told us

they could not be rigorously measured
by doctors in real life.
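
Continuing the earlier sketch: mechanically, that vetting is just a filter and a retrain; the judgment about which names go on the list is the human part.

```python
# Predictors the doctors could not justify, or could not measure
# rigorously in practice, come out, and the model is retrained.
# The names are illustrative; the review itself happens with humans.
rejected_by_doctors = {
    "liver_enzyme",  # strong statistical signal, unclear medical logic
}

features = [f for f in features if f not in rejected_by_doctors]
model = GradientBoostingClassifier().fit(X_train[features], y_train)
```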

After four months,

we had a model and a medical protocol.

They both got approved

by medical authorities
in the US last spring,

resulting in far less stress
for half of the patients

and better quality of life.

And an expected upside on sales
of over 100 million for that drug.

Seventy percent, weaving AI
with teams and processes,

also means building powerful interfaces

for humans and AI to solve
the most difficult problems together.

Once, we got challenged
by a fashion retailer.

“We have the best buyers in the world.

Could you build an AI engine
that would beat them at forecasting sales?

At telling how many high-end,
light-green, men’s XL shirts

we need to buy for next year?

At predicting better than our designers

what will sell and what will not?”

Our team trained a model in a few weeks,
on past sales data,

and the competition was organized
with human buyers.

Result?

AI wins, reducing forecasting
errors by 25 percent.

Human-zero champions could have tried
to implement this initial model

and picked a fight with all the human buyers.

Have fun.

But we knew that human buyers
had insights on fashion trends

that could not be found in past data.

There started our 70 percent.

We went for a second test,

where human buyers
were reviewing quantities

suggested by AI

and could correct them if needed.

Result?

Humans using AI …

lose.

Seventy-five percent
of the corrections made by a human

reduced accuracy.
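
How do you measure that? One way, sketched here with a hypothetical file and hypothetical column names, is to score every correction against realized sales:

```python
# A sketch of scoring human corrections against realized sales.
# File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("forecast_test.csv")  # ai_forecast, human_forecast, actual
changed = df[df["human_forecast"] != df["ai_forecast"]]

ai_error = (changed["ai_forecast"] - changed["actual"]).abs()
human_error = (changed["human_forecast"] - changed["actual"]).abs()

worse = (human_error > ai_error).mean()
print(f"{worse:.0%} of human corrections made the forecast worse")
```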

Was it time to get rid of human buyers?

No.

It was time to recreate a model

where humans would not try
to guess when AI is wrong,

but where AI would take real input
from human buyers.

We fully rebuilt the model

and moved away from our initial interface,
which was, more or less,

“Hey, human! This is what I forecast,

correct whatever you want,”

and moved to a much richer one, more like,

“Hey, humans!

I don’t know the trends for next year.

Could you share with me
your top creative bets?”

“Hey, humans!

Could you help me quantify
those few big items?

I cannot find any good comparables
in the past for them.”
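
In code, the new division of labor might look something like this sketch (all names are illustrative, not our actual tool): the model forecasts where past data supports it, asks the buyers where it does not, and treats their trend bets as input rather than override.

```python
from dataclasses import dataclass

MIN_COMPARABLES = 3  # illustrative cutoff

@dataclass
class Item:
    name: str
    category: str
    model_forecast: float  # forecast from past comparables, if any
    n_comparables: int     # similar past items found in the data

def plan_quantity(item: Item, trend_bets: dict, ask_buyer) -> float:
    """Forecast where past data supports it; hand the rest to humans."""
    if item.n_comparables < MIN_COMPARABLES:
        # No reliable past signal: the AI asks instead of guessing.
        return ask_buyer(item)
    # Human trend bets enter the forecast as input, not as an override.
    return item.model_forecast * trend_bets.get(item.category, 1.0)

# Buyers bet big on light green; a novel item goes straight to a human.
bets = {"shirts": 1.3}
staple = Item("light-green XL shirt", "shirts", 1200.0, 48)
novel = Item("neon cape", "outerwear", 0.0, 0)
print(plan_quantity(staple, bets, ask_buyer=lambda i: 500.0))  # 1560.0
print(plan_quantity(novel, bets, ask_buyer=lambda i: 500.0))   # 500.0
```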

Result?

“Human plus AI” wins,

reducing forecast errors by 50 percent.

It took one year to finalize the tool.

Long, costly and difficult.

But profits and benefits

were in excess of 100 million in savings
per year for that retailer.

Seventy percent on very sensitive topics

also means humans have to decide
what is right or wrong

and define rules
for what AI can and cannot do,

like setting caps on prices
to prevent pricing engines

from charging outrageously high prices
to uneducated customers

who would accept them.

Only humans can define those boundaries –

there is no way AI
can find them in past data.
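
Such a cap is trivial to code, as in this sketch with made-up numbers; the hard part is choosing the limit, and only humans can do that.

```python
# Sketch of a human-defined guardrail around a pricing engine.
# The cap is set by people (a made-up number here); the AI optimizes
# within it, and every breach is surfaced for human review.
MAX_MARKUP = 1.5  # chosen by humans, not learned from past data

def guarded_price(ai_price: float, reference_price: float) -> float:
    cap = reference_price * MAX_MARKUP
    if ai_price > cap:
        print(f"capped: engine asked {ai_price:.2f}, limit is {cap:.2f}")
        return cap
    return ai_price

print(guarded_price(ai_price=89.0, reference_price=40.0))  # -> 60.0
```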

Some situations are in the gray zone.

We worked with a health insurer

that developed an AI engine
to identify, among its clients,

people who were just about
to go to the hospital,

in order to sell them premium services.

And the problem is,

some prospects were called
by the sales team

while they did not yet know

they would have to go
to the hospital very soon.

You are the CEO of this company.

Do you stop that program?

Not an easy question.

And to tackle this question,
some companies are building teams,

defining ethical rules and standards
to help business and tech teams set limits

between personalization and manipulation,

customization of offers
and discrimination,

targeting and intrusion.

I am convinced that in every company,

applying AI where it really matters
has massive payback.

Business leaders need to be bold

and select a few topics,

and for each of them, mobilize
10, 20, 30 people from their best teams –

tech, AI, data science, ethics –

and go through the full
10-, 20-, 70-percent cycle

of “Human plus AI,”

if they want to land AI effectively
in their teams and processes.

There is no other way.

Citizens in developed economies
already fear algocracy.

Seven thousand people were interviewed
in a recent survey.

More than 75 percent
expressed real concerns

about the impact of AI
on the workforce, on privacy,

on the risk of a dehumanized society.

Pushing algocracy creates a real risk
of severe backlash against AI

within companies or in society at large.

“Human plus AI” is our only option

to bring the benefits of AI
to the real world.

And in the end,

winning organizations
will invest in human knowledge,

not just AI and data.

Recruiting, training,
rewarding human experts.

Data is said to be the new oil,

but believe me, human knowledge
will make the difference,

because it is the only derrick available

to pump the oil hidden in the data.

Thank you.

(Applause)
