What to trust in a post-truth world
Alex Edmans

Belle Gibson was a happy young Australian.

She lived in Perth,
and she loved skateboarding.

But in 2009, Belle learned that she had
brain cancer and four months to live.

Two months of chemo
and radiotherapy had no effect.

But Belle was determined.

She’d been a fighter her whole life.

From age six, she had to cook
for her brother, who had autism,

and her mother,
who had multiple sclerosis.

Her father was out of the picture.

So Belle fought, with exercise,
with meditation

and by ditching meat
for fruit and vegetables.

And she made a complete recovery.

Belle’s story went viral.

It was tweeted, blogged about,
shared and reached millions of people.

It showed the benefits of shunning
traditional medicine

for diet and exercise.

In August 2013, Belle launched
a healthy eating app,

The Whole Pantry,

downloaded 200,000 times
in the first month.

But Belle’s story was a lie.

Belle never had cancer.

People shared her story
without ever checking if it was true.

This is a classic example
of confirmation bias.

We accept a story uncritically
if it confirms what we’d like to be true.

And we reject any story
that contradicts it.

How often do we see this

in the stories
that we share and we ignore?

In politics, in business,
in health advice.

The Oxford Dictionaries’
word of the year for 2016 was “post-truth.”

And the recognition that we now live
in a post-truth world

has led to a much-needed emphasis
on checking the facts.

But the punch line of my talk

is that just checking
the facts is not enough.

Even if Belle’s story were true,

it would be just as irrelevant.

Why?

Well, let’s look at one of the most
fundamental techniques in statistics.

It’s called Bayesian inference.

And the very simple version is this:

We care about “does the data
support the theory?”

Does the data increase our belief
that the theory is true?

But instead, we end up asking,
“Is the data consistent with the theory?”

But being consistent with the theory

does not mean that the data
supports the theory.

Why?

Because of a crucial
but forgotten third term –

the data could also be consistent
with rival theories.

But due to confirmation bias,
we never consider the rival theories,

because we’re so protective
of our own pet theory.
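The gap between “consistent with” and “supports” can be made concrete with Bayes’ rule. The sketch below is a minimal illustration with made-up probabilities (the function name and the numbers are mine, not from the talk): when the data is nearly as likely under a rival theory, the posterior belief barely moves.

```python
def posterior(prior, p_data_given_theory, p_data_given_rival):
    """Bayes' rule with one rival hypothesis:
    P(T|D) = P(D|T)P(T) / (P(D|T)P(T) + P(D|rival)P(rival))."""
    numerator = p_data_given_theory * prior
    denominator = numerator + p_data_given_rival * (1 - prior)
    return numerator / denominator

# Illustrative numbers only: a recovery story is very likely
# if the theory is true, but almost as likely if it isn't
# (misdiagnosis, the one striking outlier case).
print(posterior(0.5, 0.9, 0.8))  # ≈ 0.53: belief barely moves
```

Seeing many such stories doesn’t help either, if the rival theory predicts them just as well; only data that the rival theory makes unlikely shifts the posterior by much.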

Now, let’s look at this for Belle’s story.

Well, we care about:
Does Belle’s story support the theory

that diet cures cancer?

But instead, we end up asking,

“Is Belle’s story consistent
with diet curing cancer?”

And the answer is yes.

If diet did cure cancer,
we’d see stories like Belle’s.

But even if diet did not cure cancer,

we’d still see stories like Belle’s.

A single story in which
a patient apparently self-cured,

just due to being misdiagnosed
in the first place.

Just like, even if smoking
was bad for your health,

you’d still see one smoker
who lived until 100.

(Laughter)

Just like, even if education
was good for your income,

you’d still see one multimillionaire
who didn’t go to university.

(Laughter)

So the biggest problem with Belle’s story
is not that it was false.

It’s that it’s only one story.

There might be thousands of other stories
where diet alone failed,

but we never hear about them.

We share the outlier cases
because they are new,

and therefore they are news.

We never share the ordinary cases.

They’re too ordinary,
they’re what normally happens.

And that’s the true
99 percent that we ignore.

Just like in society, you can’t just
listen to the one percent,

the outliers,

and ignore the 99 percent, the ordinary.

Because that’s the second example
of confirmation bias.

We accept a fact as data.

The biggest problem is not
that we live in a post-truth world;

it’s that we live in a post-data world.

We prefer a single story to tons of data.

Now, stories are powerful,
they’re vivid, they bring it to life.

They tell you to start
every talk with a story.

I did.

But a single story
is meaningless and misleading

unless it’s backed up by large-scale data.

But even if we had large-scale data,

that might still not be enough.

Because it could still be consistent
with rival theories.

Let me explain.

A classic study
by psychologist Peter Wason

gives you a set of three numbers

and asks you to think of the rule
that generated them.

So if you’re given two, four, six,

what’s the rule?

Well, most people would think,
it’s successive even numbers.

How would you test it?

Well, you’d propose other sets
of successive even numbers:

4, 6, 8 or 12, 14, 16.

And Peter would say these sets also work.

But knowing that these sets also work,

knowing that perhaps hundreds of sets
of successive even numbers also work,

tells you nothing.

Because this is still consistent
with rival theories.

Perhaps the rule
is any three even numbers.

Or any three increasing numbers.

And that’s the third example
of confirmation bias:

accepting data as evidence,

even if it’s consistent
with rival theories.

Data is just a collection of facts.

Evidence is data that supports
one theory and rules out others.

So the best way to support your theory

is actually to try to disprove it,
to play devil’s advocate.

So test something, like 4, 12, 26.

If you got a yes to that,
that would disprove your theory

of successive even numbers.

Yet this test is powerful,

because if you got a no, it would rule out
“any three even numbers”

and “any three increasing numbers.”

It would rule out the rival theories,
but not rule out yours.

But most people are too afraid
of testing the 4, 12, 26,

because they don’t want to get a yes
and prove their pet theory to be wrong.
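The value of the 4, 12, 26 probe can be sketched in code. The rule functions below are hypothetical stand-ins for the three candidate theories (the names are mine): a confirming set like 12, 14, 16 fits all three rules and so teaches you nothing, while 4, 12, 26 fits only the rivals.

```python
# Three candidate rules for Wason's 2-4-6 task.
rules = {
    "successive evens": lambda a, b, c: a % 2 == 0 and b == a + 2 and c == b + 2,
    "any three evens":  lambda a, b, c: a % 2 == 0 and b % 2 == 0 and c % 2 == 0,
    "any increasing":   lambda a, b, c: a < b < c,
}

def consistent_rules(triple):
    """Names of the candidate rules this triple is consistent with."""
    return [name for name, rule in rules.items() if rule(*triple)]

print(consistent_rules((12, 14, 16)))  # fits all three rules: learns nothing
print(consistent_rules((4, 12, 26)))   # fits only the two rivals
```

If Peter says yes to 4, 12, 26, “successive even numbers” is disproved; if he says no, both rivals are ruled out while your theory survives. Either answer is informative, which is exactly what the confirming tests lacked.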

Confirmation bias is not only
about failing to search for new data,

but it’s also about misinterpreting
data once you receive it.

And this applies outside the lab
to important, real-world problems.

Indeed, Thomas Edison famously said,

“I have not failed,

I have found 10,000 ways that won’t work.”

Finding out that you’re wrong

is the only way to find out what’s right.

Say you’re a university
admissions director

and your theory is that only
students with good grades

from rich families do well.

So you only let in such students.

And they do well.

But that’s also consistent
with the rival theory.

Perhaps all students
with good grades do well,

rich or poor.

But you never test that theory
because you never let in poor students

because you don’t want to be proven wrong.

So, what have we learned?

A story is not fact,
because it may not be true.

A fact is not data –

it may not be representative
if it’s only one data point.

And data is not evidence –

it may not be supportive
if it’s consistent with rival theories.

So, what do you do?

When you’re at
the inflection points of life,

deciding on a strategy for your business,

a parenting technique for your child

or a regimen for your health,

how do you ensure
that you don’t have a story

but you have evidence?

Let me give you three tips.

The first is to actively seek
other viewpoints.

Read and listen to people
you flagrantly disagree with.

Ninety percent of what they say
may be wrong, in your view.

But what if 10 percent is right?

As Aristotle said,

“The mark of an educated man

is the ability to entertain a thought

without necessarily accepting it.”

Surround yourself with people
who challenge you,

and create a culture
that actively encourages dissent.

Some banks suffered from groupthink,

where staff were too afraid to challenge
management’s lending decisions,

contributing to the financial crisis.

In a meeting, appoint someone
to be devil’s advocate

against your pet idea.

And don’t just hear another viewpoint –

listen to it, as well.

As the author Stephen Covey said,

“Listen with the intent to understand,

not the intent to reply.”

A dissenting viewpoint
is something to learn from,

not to argue against.

Which takes us to the other
forgotten terms in Bayesian inference.

Because data allows you to learn,

but learning is only relative
to a starting point.

If you started with complete certainty
that your pet theory must be true,

then your view won’t change –

regardless of what data you see.

Only if you are truly open
to the possibility of being wrong

can you ever learn.

As Leo Tolstoy wrote,

“The most difficult subjects

can be explained to the most
slow-witted man

if he has not formed
any idea of them already.

But the simplest thing

cannot be made clear
to the most intelligent man

if he is firmly persuaded
that he knows already.”

Tip number two is “listen to experts.”

Now, that’s perhaps the most
unpopular advice that I could give you.

(Laughter)

British politician Michael Gove
famously said that people in this country

have had enough of experts.

A recent poll showed that more people
would trust their hairdresser –

(Laughter)

or the man on the street

than they would leaders of businesses,
the health service and even charities.

So we respect a teeth-whitening formula
discovered by a mom,

or we listen to an actress’s view
on vaccination.

We like people who tell it like it is,
who go with their gut,

and we call them authentic.

But gut feel can only get you so far.

Gut feel would tell you never to give
water to a baby with diarrhea,

because it would just
flow out the other end.

Expertise tells you otherwise.

You’d never trust your surgery
to the man on the street.

You’d want an expert
who spent years doing surgery

and knows the best techniques.

But that should apply
to every major decision.

Politics, business, health advice

require expertise, just like surgery.

So then, why are experts so mistrusted?

Well, one reason
is they’re seen as out of touch.

A millionaire CEO couldn’t possibly
speak for the man on the street.

But true expertise is founded on evidence.

And evidence stands up
for the man on the street

and against the elites.

Because evidence forces you to prove it.

Evidence prevents the elites
from imposing their own view

without proof.

A second reason
why experts are not trusted

is that different experts
say different things.

For every expert who claimed that leaving
the EU would be bad for Britain,

another expert claimed it would be good.

Half of these so-called experts
will be wrong.

And I have to admit that most papers
written by experts are wrong.

Or at best, make claims that
the evidence doesn’t actually support.

So we can’t just take
an expert’s word for it.

In November 2016, a study
on executive pay hit national headlines.

Even though none of the newspapers
that covered the study

had even seen the study.

It wasn’t even out yet.

They just took the author’s word for it,

just like with Belle.

Nor does it mean that we can
just handpick any study

that happens to support our viewpoint –

that would, again, be confirmation bias.

Nor does it mean
that if seven studies show A

and three show B,

that A must be true.

What matters is the quality,

and not the quantity of expertise.

So we should do two things.

First, we should critically examine
the credentials of the authors.

Just like you’d critically examine
the credentials of a potential surgeon.

Are they truly experts in the matter,

or do they have a vested interest?

Second, we should pay particular attention

to papers published
in the top academic journals.

Now, academics are often accused
of being detached from the real world.

But this detachment gives you
years to spend on a study.

To really nail down a result,

to rule out those rival theories,

and to distinguish correlation
from causation.

And academic journals involve peer review,

where a paper is rigorously scrutinized

(Laughter)

by the world’s leading minds.

The better the journal,
the higher the standard.

The most elite journals
reject 95 percent of papers.

Now, academic evidence is not everything.

Real-world experience is critical, also.

And peer review is not perfect,
mistakes are made.

But it’s better to go
with something checked

than something unchecked.

If we latch onto a study
because we like the findings,

without considering who it’s by
or whether it’s even been vetted,

there is a massive chance
that that study is misleading.

And those of us who claim to be experts

should recognize the limitations
of our analysis.

Very rarely is it possible to prove
or predict something with certainty,

yet it’s so tempting to make
a sweeping, unqualified statement.

It’s easier to turn into a headline
or to be tweeted in 140 characters.

But even evidence may not be proof.

It may not be universal,
it may not apply in every setting.

So don’t say, “Red wine
causes longer life,”

when the evidence is only that red wine
is correlated with longer life.

And only then in people
who exercise as well.

Tip number three
is “pause before sharing anything.”

The Hippocratic oath says,
“First, do no harm.”

What we share is potentially contagious,

so be very careful about what we spread.

Our goal should not be
to get likes or retweets.

Otherwise, we only share the consensus;
we don’t challenge anyone’s thinking.

Otherwise, we only share what sounds good,

regardless of whether it’s evidence.

Instead, we should ask the following:

If it’s a story, is it true?

If it’s true, is it backed up
by large-scale evidence?

If it is, who is it by,
what are their credentials?

Is it published,
how rigorous is the journal?

And ask yourself
the million-dollar question:

If the same study was written by the same
authors with the same credentials

but found the opposite results,

would you still be willing
to believe it and to share it?

Treating any problem –

a nation’s economic problem
or an individual’s health problem –

is difficult.

So we must ensure that we have
the very best evidence to guide us.

Only if it’s true can it be fact.

Only if it’s representative
can it be data.

Only if it’s supportive
can it be evidence.

And only with evidence
can we move from a post-truth world

to a pro-truth world.

Thank you very much.

(Applause)
