Why do people fear the wrong things? - Gerd Gigerenzer

A new drug reduces the risk of heart attacks by 40%. Shark attacks are up by a factor of two. Drinking a liter of soda per day doubles your chance of developing cancer. These are all examples of relative risk, a common way risk is presented in news articles.

Risk evaluation is a complicated tangle of statistical thinking and personal preference. One common stumbling block is the difference between relative risks like these and what are called absolute risks.

Risk is the likelihood that an event will occur. It can be expressed either as a percentage (for example, that heart attacks occur in 11% of men between the ages of 60 and 79) or as a rate (that one in two million divers along Australia’s western coast will suffer a fatal shark bite each year). These numbers express the absolute risk of heart attacks and shark attacks in these groups.

Changes in risk can be expressed in relative or absolute terms. For example, a review in 2009 found that mammography screenings reduced the number of breast cancer deaths from five women in one thousand to four. The absolute risk reduction was about 0.1%. But the relative risk reduction, from five cases of cancer mortality to four, was 20%. Based on reports of this higher number, people overestimated the impact of screening.
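
To make the arithmetic concrete, here is a minimal sketch in Python of how the two figures come from the same data; the variable names are illustrative and not part of the review.

```python
# Mammography example from the narration: 5 vs. 4 breast cancer deaths
# per 1,000 women screened.
risk_without_screening = 5 / 1000   # baseline risk: 0.5%
risk_with_screening = 4 / 1000      # risk with screening: 0.4%

# Absolute risk reduction: the plain difference between the two risks.
arr = risk_without_screening - risk_with_screening   # 0.001, i.e. about 0.1%

# Relative risk reduction: that difference as a share of the baseline risk.
rrr = arr / risk_without_screening                   # 0.2, i.e. 20%

print(f"Absolute risk reduction: {arr:.1%}")   # Absolute risk reduction: 0.1%
print(f"Relative risk reduction: {rrr:.0%}")   # Relative risk reduction: 20%
```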

To see why the difference between the two ways of expressing risk matters, let’s consider the hypothetical example of a drug that reduces heart attack risk by 40%. Imagine that out of a group of 1,000 people who didn’t take the new drug, 10 would have heart attacks. The absolute risk is 10 out of 1,000, or 1%. If a similar group of 1,000 people did take the drug, the number of heart attacks would be six. In other words, the drug could prevent four out of ten heart attacks, a relative risk reduction of 40%. Meanwhile, the absolute risk only dropped from 1% to 0.6%, but the 40% relative risk decrease sounds a lot more significant.
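
Applying the same two calculations to the hypothetical drug shows why the 40% figure sounds so much larger than the change it describes; again, this is only an illustrative sketch of the arithmetic in the narration.

```python
# Hypothetical drug from the narration: 10 vs. 6 heart attacks per 1,000 people.
risk_without_drug = 10 / 1000   # 1%
risk_with_drug = 6 / 1000       # 0.6%

arr = risk_without_drug - risk_with_drug   # 0.004: a drop of 0.4 percentage points
rrr = arr / risk_without_drug              # 0.4: a 40% relative reduction

print(f"Absolute risk reduction: {arr:.1%}")   # Absolute risk reduction: 0.4%
print(f"Relative risk reduction: {rrr:.0%}")   # Relative risk reduction: 40%
```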

Surely preventing even a handful of heart attacks, or any other negative outcome, is worthwhile, isn’t it? Not necessarily. The problem is that choices that reduce some risks can put you in the path of others.

Suppose the heart-attack drug caused cancer in one half of 1% of patients. In our group of 1,000 people, four heart attacks would be prevented by taking the drug, but there would be five new cases of cancer. The relative reduction in heart attack risk sounds substantial and the absolute risk of cancer sounds small, but they work out to about the same number of cases.
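
Counting cases in the same group of 1,000 people puts the two risks on a common scale; the 0.5% cancer rate below is the hypothetical side-effect rate from the narration, not real data.

```python
group_size = 1000

# Heart attacks prevented: absolute risk falls from 1% to 0.6% of the group.
heart_attacks_prevented = round((0.010 - 0.006) * group_size)   # 4 cases

# New cancers caused: the hypothetical side-effect rate of one half of 1%.
cancers_caused = round(0.005 * group_size)                      # 5 cases

print(f"Heart attacks prevented: {heart_attacks_prevented}")   # 4
print(f"New cancer cases: {cancers_caused}")                   # 5
```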

In real life, everyone’s individual evaluation of risk will vary depending on their personal circumstances. If you know you have a family history of heart disease, you might be more strongly motivated to take a medication that would lower your heart-attack risk, even knowing it provided only a small reduction in absolute risk.

Sometimes, we have to decide between exposing ourselves to risks that aren’t directly comparable. If, for example, the heart attack drug carried a higher risk of a debilitating, but not life-threatening, side effect like migraines rather than cancer, our evaluation of whether that risk is worth taking might change.

And sometimes there isn’t necessarily a correct choice: some might say even a minuscule risk of shark attack is worth avoiding, because all you’d miss out on is an ocean swim, while others wouldn’t even consider skipping a swim to avoid an objectively tiny risk of shark attack.

For all these reasons, risk evaluation is tricky at baseline, and reporting on risk can be misleading, especially when it shares some numbers in absolute terms and others in relative terms. Understanding how these measures work will help you cut through some of the confusion and better evaluate risk.