How humans and AI can work together to create better businesses

Sylvain Duranton

Translator: Ivana Korom
Reviewer: Krystian Aparta

Let me share a paradox.

For the last 10 years,

many companies have been trying
to become less bureaucratic,

to have fewer central rules
and procedures,

more autonomy for their local
teams to be more agile.

And now they are pushing
artificial intelligence, AI,

unaware that cool technology

might make them
more bureaucratic than ever.

Why?

Because AI operates
just like bureaucracies.

The essence of bureaucracy

is to favor rules and procedures
over human judgment.

And AI decides solely based on rules.

Many rules, inferred from past data,

but only rules.

And if human judgment
is not kept in the loop,

AI will bring a terrifying form
of new bureaucracy –

I call it “algocracy” –

where AI will take more and more
critical decisions by the rules

outside of any human control.

Is there a real risk?

Yes.

I’m leading a team of 800 AI specialists.

We have deployed
over 100 customized AI solutions

for large companies around the world.

And I see too many corporate executives
behaving like bureaucrats from the past.

They want to take costly,
old-fashioned humans out of the loop

and rely only upon AI to take decisions.

I call this the “human-zero mindset.”

And why is it so tempting?

Because the other route,
“Human plus AI,” is long,

costly and difficult.

Business teams, tech teams,
data-science teams

have to iterate for months

to craft exactly how humans and AI
can best work together.

Long, costly and difficult.

But the reward is huge.

A recent survey from BCG and MIT

shows that 18 percent
of companies in the world

are pioneering AI,

making money with it.

Those companies focus 80 percent
of their AI initiatives

on effectiveness and growth,

taking better decisions –

not replacing humans with AI
to save costs.

Why is it important
to keep humans in the loop?

Simply because, left alone,
AI can do very dumb things.

Sometimes with no consequences,
like in this tweet.

“Dear Amazon,

I bought a toilet seat.

Necessity, not desire.

I do not collect them,

I’m not a toilet-seat addict.

No matter how temptingly you email me,

I am not going to think, ‘Oh, go on, then,

one more toilet seat,
I’ll treat myself.’”

(Laughter)

Sometimes, with more consequences,
like in this other tweet.

“Had the same situation

with my mother’s burial urn.”

(Laughter)

“For months after her death,

I got messages from Amazon,
saying, ‘If you liked that …’”

(Laughter)

Sometimes with worse consequences.

Take an AI engine rejecting
a student application for university.

Why?

Because it has “learned,” on past data,

the characteristics of students
who will pass or fail.

Some are obvious, like GPAs.

But if, in the past, all students
from a given postal code have failed,

it is very likely
that AI will make this a rule

and will reject every student
with this postal code,

not giving anyone the opportunity
to prove the rule wrong.
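A minimal sketch of how such a rule can emerge, assuming a scikit-learn decision tree and a tiny, invented applicant dataset (the features, numbers, and postal-code flag are all hypothetical):

```python
# Hypothetical illustration: a tree trained on biased history learns
# to reject by postal code, regardless of GPA.
from sklearn.tree import DecisionTreeClassifier

# Features: [GPA, lives_in_postal_code_X]; in this toy history,
# every past student from postal code X failed.
X = [[3.9, 0], [3.5, 0], [2.1, 0], [3.8, 1], [3.7, 1]]
y = [1, 1, 0, 0, 0]  # 1 = passed, 0 = failed

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# A bright applicant with a 4.0 GPA from the "wrong" postal code:
print(model.predict([[4.0, 1]]))  # [0] -- rejected purely by the learned rule
```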

And no one can check all the rules,

because advanced AI
is constantly learning.

And if humans are kept out of the room,

there comes the algocratic nightmare.

Who is accountable
for rejecting the student?

No one; AI did.

Is it fair? Yes.

The same set of objective rules
has been applied to everyone.

Could we reconsider for this bright kid
with the wrong postal code?

No, algos don’t change their mind.

We have a choice here.

Carry on with algocracy

or decide to go to “Human plus AI.”

And to do this,

we need to stop thinking tech first,

and we need to start applying
the secret formula.

To deploy “Human plus AI,”

10 percent of the effort is to code algos;

20 percent to build tech
around the algos,

collecting data, building UI,
integrating into legacy systems.

But 70 percent, the bulk of the effort,

is about weaving together AI
with people and processes

to maximize real outcome.

AI fails when cutting short
on the 70 percent.

The price tag for that can be small,

wasting many, many millions
of dollars on useless technology.

Does anyone care?

Or real tragedies:

346 casualties in the recent crashes
of two B-737 aircraft

when pilots could not interact properly

with a computerized command system.

For a successful 70 percent,

the first step is to make sure
that algos are coded by data scientists

and domain experts together.

Take health care for example.

One of our teams worked on a new drug
with a slight problem.

When taking their first dose,

some patients, very few,
have heart attacks.

So, all patients,
when taking their first dose,

have to spend one day in hospital,

for monitoring, just in case.

Our objective was to identify patients
who were at zero risk of heart attacks,

who could skip the day in hospital.

We used AI to analyze data
from clinical trials,

to correlate ECG signal,
blood composition, biomarkers,

with the risk of heart attack.

In one month,

our model could flag 62 percent
of patients at zero risk.

They could skip the day in hospital.
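As a rough illustration of this kind of screening, and not the team’s actual model, here is a minimal sketch assuming synthetic stand-in features and a scikit-learn classifier; in practice, “zero risk” means a predicted probability below an extremely conservative cutoff:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))  # stand-ins for ECG, blood, biomarker features
# Synthetic ground truth: heart attacks driven by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 2.0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# The cutoff is extremely conservative, because a false negative here
# is a patient sent home who then has a heart attack.
risk = model.predict_proba(X)[:, 1]
can_skip = risk < 0.001
print(f"{can_skip.mean():.0%} of patients flagged to skip the hospital day")
```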

Would you be comfortable
staying at home for your first dose

if the algo said so?

(Laughter)

Doctors were not.

What if we had false negatives,

meaning people who are told by AI
they can stay at home, and die?

(Laughter)

There started our 70 percent.

We worked with a team of doctors

to check the medical logic
of each variable in our model.

For instance, we were using
the concentration of a liver enzyme

as a predictor,

for which the medical logic
was not obvious.

The statistical signal was quite strong.

But what if it reflected a bias in our sample?

That predictor was taken out of the model.

We also took out predictors
for which experts told us

they could not be rigorously measured
by doctors in real life.
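That review step can be as simple as an expert veto list applied before retraining; a minimal sketch with hypothetical column names:

```python
# Hypothetical predictor review: experts veto variables with no clear
# medical logic, or that cannot be measured rigorously in practice.
candidate_predictors = [
    "ecg_qt_interval",
    "biomarker_troponin",
    "liver_enzyme_concentration",  # strong signal, unclear medical logic
    "self_reported_fatigue",       # cannot be rigorously measured
]
vetoed_by_experts = {"liver_enzyme_concentration", "self_reported_fatigue"}

approved = [p for p in candidate_predictors if p not in vetoed_by_experts]
print(approved)  # the model is then retrained on these columns only
```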

After four months,

we had a model and a medical protocol.

They both got approved

by medical authorities
in the US last spring,

resulting in far less stress
for half of the patients

and better quality of life.

And an expected upside on sales
of over $100 million for that drug.

Seventy percent weaving AI
with teams and processes

also means building powerful interfaces

for humans and AI to solve
the most difficult problems together.

Once, we got challenged
by a fashion retailer.

“We have the best buyers in the world.

Could you build an AI engine
that would beat them at forecasting sales?

At telling how many high-end,
light-green, men’s XL shirts

we need to buy for next year?

At predicting what will sell or not

better than our designers?”

Our team trained a model in a few weeks,
on past sales data,

and the competition was organized
with human buyers.

Result?

AI wins, reducing forecasting
errors by 25 percent.

Human-zero champions could have tried
to implement this initial model

and create a fight with all human buyers.

Have fun.

But we knew that human buyers
had insights on fashion trends

that could not be found in past data.

There started our 70 percent.

We went for a second test,

where human buyers
were reviewing quantities

suggested by AI

and could correct them if needed.

Result?

Humans using AI …

lose.

Seventy-five percent
of the corrections made by a human

reduced accuracy.

Was it time to get rid of human buyers?

No.

It was time to recreate a model

where humans would not try
to guess when AI is wrong,

but where AI would take real input
from human buyers.

We fully rebuilt the model

and went away from our initial interface,
which was, more or less,

“Hey, human! This is what I forecast,

correct whatever you want,”

and moved to a much richer one, more like,

“Hey, humans!

I don’t know the trends for next year.

Could you share with me
your top creative bets?”

“Hey, humans!

Could you help me quantify
those few big items?

I cannot find any good comparables
in the past for them.”
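To make the contrast concrete, here is a minimal, self-contained sketch of the two interface styles; the items, numbers, and helper logic are invented, not the retailer’s actual system:

```python
def forecast_v1(history: dict) -> dict:
    """First interface: AI forecasts from past sales alone;
    buyers then overwrite the numbers by hand."""
    return {item: sum(sales) / len(sales) for item, sales in history.items()}

def forecast_v2(history: dict, trend_bets: dict):
    """Second interface: buyers' creative bets are *inputs* to the model,
    and items with no past comparables are routed back to the buyers."""
    forecasts, needs_human = {}, []
    for item in sorted(set(history) | set(trend_bets)):
        if item not in history:       # no good comparables in past data
            needs_human.append(item)  # buyers quantify these directly
        else:                         # baseline scaled by the buyers' bet
            base = sum(history[item]) / len(history[item])
            forecasts[item] = base * trend_bets.get(item, 1.0)
    return forecasts, needs_human

history = {"white_shirt_M": [120, 140], "green_shirt_XL": [30, 25]}
bets = {"green_shirt_XL": 1.4, "neon_jacket": 1.0}  # neon_jacket is brand new
print(forecast_v2(history, bets))
# ({'green_shirt_XL': 38.5, 'white_shirt_M': 130.0}, ['neon_jacket'])
```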

Result?

“Human plus AI” wins,

reducing forecast errors by 50 percent.

It took one year to finalize the tool.

Long, costly and difficult.

But profits and benefits

were in excess of $100 million in savings
per year for that retailer.

Seventy percent on very sensitive topics

also means humans have to decide
what is right or wrong

and define rules
for what AI can and cannot do,

like setting caps on prices
to prevent pricing engines

from charging outrageously high prices
to uneducated customers

who would accept them.
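A minimal sketch of such a human-defined guardrail, with hypothetical items and cap values:

```python
# Caps set by humans -- no amount of past data could produce these numbers.
PRICE_CAPS = {"umbrella": 25.0, "bottled_water": 5.0}

def guarded_price(item: str, ai_suggested_price: float) -> float:
    """Clamp the pricing engine's suggestion to the human-defined ceiling."""
    return min(ai_suggested_price, PRICE_CAPS.get(item, float("inf")))

print(guarded_price("umbrella", 90.0))  # 25.0, whatever the engine learned
```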

Only humans can define those boundaries –

there is no way AI
can find them in past data.

Some situations are in the gray zone.

We worked with a health insurer

that developed an AI engine
to identify, among its clients,

people who were just about
to go to hospital,

in order to sell them premium services.

And the problem is,

some prospects were called
by the commercial team

before they themselves knew

they would have to go
to hospital very soon.

You are the CEO of this company.

Do you stop that program?

Not an easy question.

And to tackle this question,
some companies are building teams,

defining ethical rules and standards
to help business and tech teams set limits

between personalization and manipulation,

customization of offers
and discrimination,

targeting and intrusion.

I am convinced that in every company,

applying AI where it really matters
has massive payback.

Business leaders need to be bold

and select a few topics,

and for each of them, mobilize
10, 20, 30 people from their best teams –

tech, AI, data science, ethics –

and go through the full
10-, 20-, 70-percent cycle

of “Human plus AI,”

if they want to land AI effectively
in their teams and processes.

There is no other way.

Citizens in developed economies
already fear algocracy.

Seven thousand people were interviewed
in a recent survey.

More than 75 percent
expressed real concerns

about the impact of AI
on the workforce, on privacy,

on the risk of a dehumanized society.

Pushing algocracy creates a real risk
of severe backlash against AI

within companies or in society at large.

“Human plus AI” is our only option

to bring the benefits of AI
to the real world.

And in the end,

winning organizations
will invest in human knowledge,

not just AI and data.

Recruiting, training,
rewarding human experts.

Data is said to be the new oil,

but believe me, human knowledge
will make the difference,

because it is the only derrick available

to pump the oil hidden in the data.

Thank you.

(Applause)