What to trust in a post-truth world
Alex Edmans

Belle Gibson was a happy young Australian.

She lived in Perth,
and she loved skateboarding.

But in 2009, Belle learned that she had
brain cancer and four months to live.

Two months of chemo
and radiotherapy had no effect.

But Belle was determined.

She’d been a fighter her whole life.

From age six, she had to cook
for her brother, who had autism,

and her mother,
who had multiple sclerosis.

Her father was out of the picture.

So Belle fought, with exercise,
with meditation

and by ditching meat
for fruit and vegetables.

And she made a complete recovery.

Belle’s story went viral.

It was tweeted, blogged about,
shared and reached millions of people.

It showed the benefits of shunning
traditional medicine

for diet and exercise.

In August 2013, Belle launched
a healthy eating app,

The Whole Pantry,

which was downloaded 200,000 times
in the first month.

But Belle’s story was a lie.

Belle never had cancer.

People shared her story
without ever checking if it was true.

This is a classic example
of confirmation bias.

We accept a story uncritically
if it confirms what we’d like to be true.

And we reject any story
that contradicts it.

How often do we see this

in the stories
that we share and we ignore?

In politics, in business,
in health advice.

Oxford Dictionaries’
word of 2016 was “post-truth.”

And the recognition that we now live
in a post-truth world

has led to a much needed emphasis
on checking the facts.

But the punch line of my talk

is that just checking
the facts is not enough.

Even if Belle’s story were true,

it would be just as irrelevant
as if it were false.

Why?

Well, let’s look at one of the most
fundamental techniques in statistics.

It’s called Bayesian inference.

And the very simple version is this:

We care about “does the data
support the theory?”

Does the data increase our belief
that the theory is true?

But instead, we end up asking,
“Is the data consistent with the theory?”

But being consistent with the theory

does not mean that the data
supports the theory.

Why?

Because of a crucial
but forgotten third term –

the data could also be consistent
with rival theories.
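
For readers who want it on paper, here is a minimal sketch of Bayes’ rule in standard notation, where T stands for the theory and D for the data:

```latex
% "Is the data consistent with the theory?" asks only about P(D | T).
% What we should care about is P(T | D), and it depends on a third term,
% P(D | not T): how likely the same data is under rival theories.
\[
P(T \mid D) = \frac{P(D \mid T)\,P(T)}
                   {P(D \mid T)\,P(T) + P(D \mid \neg T)\,P(\neg T)}
\]
% If rival theories predict the data just as well, P(D | not T) is as
% large as P(D | T), and seeing D barely increases P(T | D) at all.
```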

But due to confirmation bias,
we never consider the rival theories,

because we’re so protective
of our own pet theory.

Now, let’s look at this for Belle’s story.

Well, we care about:
Does Belle’s story support the theory

that diet cures cancer?

But instead, we end up asking,

“Is Belle’s story consistent
with diet curing cancer?”

And the answer is yes.

If diet did cure cancer,
we’d see stories like Belle’s.

But even if diet did not cure cancer,

we’d still see stories like Belle’s:

a single story in which
a patient apparently self-cured,

just due to being misdiagnosed
in the first place.

Just like, even if smoking
was bad for your health,

you’d still see one smoker
who lived until 100.

(Laughter)

Just like, even if education
was good for your income,

you’d still see one multimillionaire
who didn’t go to university.

(Laughter)

So the biggest problem with Belle’s story
is not that it was false.

It’s that it’s only one story.

There might be thousands of other stories
where diet alone failed,

but we never hear about them.

We share the outlier cases
because they are new,

and therefore they are news.

We never share the ordinary cases.

They’re too ordinary,
they’re what normally happens.

And that’s the true
99 percent that we ignore.

Just like in society, you can’t just
listen to the one percent,

the outliers,

and ignore the 99 percent, the ordinary.

Because that’s the second example
of confirmation bias.

We accept a fact as data.

The biggest problem is not
that we live in a post-truth world;

it’s that we live in a post-data world.

We prefer a single story to tons of data.

Now, stories are powerful,
they’re vivid, they bring it to life.

They tell you to start
every talk with a story.

I did.

But a single story
is meaningless and misleading

unless it’s backed up by large-scale data.

But even if we had large-scale data,

that might still not be enough.

Because it could still be consistent
with rival theories.

Let me explain.

A classic study
by psychologist Peter Wason

gives you a set of three numbers

and asks you to think of the rule
that generated them.

So if you’re given 2, 4, 6,

what’s the rule?

Well, most people would think
it’s successive even numbers.

How would you test it?

Well, you’d propose other sets
of successive even numbers:

4, 6, 8 or 12, 14, 16.

And Peter would say these sets also work.

But knowing that these sets also work,

knowing that perhaps hundreds of sets
of successive even numbers also work,

tells you nothing.

Because this is still consistent
with rival theories.

Perhaps the rule
is any three even numbers.

Or any three increasing numbers.

And that’s the third example
of confirmation bias:

accepting data as evidence,

even if it’s consistent
with rival theories.

Data is just a collection of facts.

Evidence is data that supports
one theory and rules out others.

So the best way to support your theory

is actually to try to disprove it,
to play devil’s advocate.

So test something, like 4, 12, 26.

If you got a yes to that,
that would disprove your theory

of successive even numbers.

Yet this test is powerful,

because if you got a no, it would rule out
“any three even numbers”

and “any three increasing numbers.”

It would rule out the rival theories,
but not rule out yours.
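
A small simulation makes the point concrete. The sketch below is illustrative, in Python, assuming the three candidate rules discussed above; the function names and test triples are my own encoding of the task, not anything from Wason’s original study:

```python
# Three candidate rules for Wason's 2-4-6 task.
def successive_evens(a, b, c):
    return a % 2 == 0 and b == a + 2 and c == b + 2

def any_three_evens(a, b, c):
    return a % 2 == 0 and b % 2 == 0 and c % 2 == 0

def any_increasing(a, b, c):
    return a < b < c

rules = {
    "successive even numbers": successive_evens,
    "any three even numbers": any_three_evens,
    "any three increasing numbers": any_increasing,
}

# Check each test triple against every candidate rule.
for triple in [(4, 6, 8), (12, 14, 16), (4, 12, 26)]:
    verdicts = {name: rule(*triple) for name, rule in rules.items()}
    print(triple, verdicts)

# (4, 6, 8) and (12, 14, 16) satisfy all three rules, so a "yes"
# cannot tell the rules apart.
# (4, 12, 26) satisfies only the two rival rules: a "no" would rule
# the rivals out, while a "yes" would disprove "successive even
# numbers". Either answer teaches you something, which is what makes
# it a powerful test.
```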

But most people are too afraid
of testing the 4, 12, 26,

because they don’t want to get a yes
and prove their pet theory to be wrong.

Confirmation bias is not only
about failing to search for new data,

but also about misinterpreting
data once you receive it.

And this applies outside the lab
to important, real-world problems.

Indeed, Thomas Edison famously said,

“I have not failed,

I have found 10,000 ways that won’t work.”

Finding out that you’re wrong

is the only way to find out what’s right.

Say you’re a university
admissions director

and your theory is that only
students with good grades

from rich families do well.

So you only let in such students.

And they do well.

But that’s also consistent
with the rival theory.

Perhaps all students
with good grades do well,

rich or poor.

But you never test that theory:
you never let in poor students,

because you don’t want to be proven wrong.

So, what have we learned?

A story is not fact,
because it may not be true.

A fact is not data –

it may not be representative
if it’s only one data point.

And data is not evidence –

it may not be supportive
if it’s consistent with rival theories.

So, what do you do?

When you’re at
the inflection points of life,

deciding on a strategy for your business,

a parenting technique for your child

or a regimen for your health,

how do you ensure
that you don’t have a story

but you have evidence?

Let me give you three tips.

The first is to actively seek
other viewpoints.

Read and listen to people
you flagrantly disagree with.

Ninety percent of what they say
may be wrong, in your view.

But what if 10 percent is right?

As Aristotle said,

“The mark of an educated man

is the ability to entertain a thought

without necessarily accepting it.”

Surround yourself with people
who challenge you,

and create a culture
that actively encourages dissent.

Some banks suffered from groupthink,

where staff were too afraid to challenge
management’s lending decisions,

contributing to the financial crisis.

In a meeting, appoint someone
to be devil’s advocate

against your pet idea.

And don’t just hear another viewpoint –

listen to it, as well.

As author Stephen Covey said,

“Listen with the intent to understand,

not the intent to reply.”

A dissenting viewpoint
is something to learn from,

not to argue against.

Which takes us to the other forgotten
term in Bayesian inference: the prior.

Because data allows you to learn,

but learning is only relative
to a starting point.

If you started with complete certainty
that your pet theory must be true,

then your view won’t change –

regardless of what data you see.

Only if you are truly open
to the possibility of being wrong

can you ever learn.
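
In the notation of the earlier Bayes’ rule sketch, here is a worked example of why a prior of complete certainty makes the data mathematically irrelevant:

```latex
% With prior P(T) = 1 we have P(not T) = 0, so for any data D
% (assuming P(D | T) > 0, i.e. the data is possible under the theory):
\[
P(T \mid D) = \frac{P(D \mid T)\cdot 1}
                   {P(D \mid T)\cdot 1 + P(D \mid \neg T)\cdot 0} = 1
\]
% The posterior equals the prior no matter what D says.
% Learning requires 0 < P(T) < 1: genuine openness to being wrong.
```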

As Leo Tolstoy wrote,

“The most difficult subjects

can be explained to the most
slow-witted man

if he has not formed
any idea of them already.

But the simplest thing

cannot be made clear
to the most intelligent man

if he is firmly persuaded
that he knows already.”

Tip number two is “listen to experts.”

Now, that’s perhaps the most
unpopular advice that I could give you.

(Laughter)

British politician Michael Gove
famously said that people in this country

have had enough of experts.

A recent poll showed that more people
would trust their hairdresser –

(Laughter)

or the man on the street

than they would leaders of businesses,
the health service and even charities.

So we respect a teeth-whitening formula
discovered by a mom,

or we listen to an actress’s view
on vaccination.

We like people who tell it like it is,
who go with their gut,

and we call them authentic.

But gut feel can only get you so far.

Gut feel would tell you never to give
water to a baby with diarrhea,

because it would just
flow out the other end.

Expertise tells you otherwise.

You’d never trust your surgery
to the man on the street.

You’d want an expert
who spent years doing surgery

and knows the best techniques.

But that should apply
to every major decision.

Politics, business, health advice

require expertise, just like surgery.

So then, why are experts so mistrusted?

Well, one reason
is they’re seen as out of touch.

A millionaire CEO couldn’t possibly
speak for the man on the street.

But true expertise is founded on evidence.

And evidence stands up
for the man on the street

and against the elites.

Because evidence forces you to prove your claims.

Evidence prevents the elites
from imposing their own view

without proof.

A second reason
why experts are not trusted

is that different experts
say different things.

For every expert who claimed that leaving
the EU would be bad for Britain,

another expert claimed it would be good.

Half of these so-called experts
will be wrong.

And I have to admit that most papers
written by experts are wrong.

Or at best, make claims that
the evidence doesn’t actually support.

So we can’t just take
an expert’s word for it.

In November 2016, a study
on executive pay hit national headlines,

even though none of the newspapers
that covered it

had even seen the study.

It wasn’t even out yet.

They just took the author’s word for it,

just like with Belle.

Nor does it mean that we can
just handpick any study

that happens to support our viewpoint –

that would, again, be confirmation bias.

Nor does it mean
that if seven studies show A

and three show B,

that A must be true.

What matters is the quality,

not the quantity, of expertise.

So we should do two things.

First, we should critically examine
the credentials of the authors.

Just like you’d critically examine
the credentials of a potential surgeon.

Are they truly experts in the matter,

or do they have a vested interest?

Second, we should pay particular attention

to papers published
in the top academic journals.

Now, academics are often accused
of being detached from the real world.

But this detachment gives you
years to spend on a study.

To really nail down a result,

to rule out those rival theories,

and to distinguish correlation
from causation.

And academic journals involve peer review,

where a paper is rigorously scrutinized

(Laughter)

by the world’s leading minds.

The better the journal,
the higher the standard.

The most elite journals
reject 95 percent of papers.

Now, academic evidence is not everything.

Real-world experience is critical, also.

And peer review is not perfect,
mistakes are made.

But it’s better to go
with something checked

than something unchecked.

If we latch onto a study
because we like the findings,

without considering who it’s by
or whether it’s even been vetted,

there is a massive chance
that that study is misleading.

And those of us who claim to be experts

should recognize the limitations
of our analysis.

Very rarely is it possible to prove
or predict something with certainty,

yet it’s so tempting to make
a sweeping, unqualified statement.

It’s easier to turn into a headline
or to be tweeted in 140 characters.

But even evidence may not be proof.

It may not be universal,
it may not apply in every setting.

So don’t say, “Red wine
causes longer life,”

when the evidence is only that red wine
is correlated with longer life,

and even then, only in people
who also exercise.

Tip number three
is “pause before sharing anything.”

The Hippocratic oath says,
“First, do no harm.”

What we share is potentially contagious,

so we should be very careful
about what we spread.

Our goal should not be
to get likes or retweets.

Otherwise, we only share the consensus;
we don’t challenge anyone’s thinking.

Otherwise, we only share what sounds good,

regardless of whether it’s evidence.

Instead, we should ask the following:

If it’s a story, is it true?

If it’s true, is it backed up
by large-scale evidence?

If it is, who is it by,
what are their credentials?

Is it published,
how rigorous is the journal?

And ask yourself
the million-dollar question:

If the same study were written by the same
authors with the same credentials

but found the opposite results,

would you still be willing
to believe it and to share it?

Treating any problem –

a nation’s economic problem
or an individual’s health problem –

is difficult.

So we must ensure that we have
the very best evidence to guide us.

Only if it’s true can it be fact.

Only if it’s representative
can it be data.

Only if it’s supportive
can it be evidence.

And only with evidence
can we move from a post-truth world

to a pro-truth world.

Thank you very much.

(Applause)