The promise of quantum computers
Matt Langione

What will the biggest challenges
of the 21st century turn out to be?

Today, one might guess climate change,
public health, inequality,

but the truth is we don’t yet know.

What we do know

is that supercomputing
will have to be part of the solution.

For nearly a hundred years,

our reliance on high-performance computers

in the face of our most urgent challenges

has grown and grown,

from cracking Nazi codes
to sequencing the human genome.

Computer processors have risen to meet
increasingly critical and complex demands

by getting smaller, faster
and better year after year,

as if by magic.

But there’s a problem.

At the very moment that our reliance on
computers is growing faster than ever,

progress in compute power

is coming to a standstill.

The magic is just about spent.

The timing couldn’t be worse.

We rarely talk about it, but for all
that we’ve accomplished with computers,

there remain a startling number of things
that computers still can’t do,

at a great cost to business and society.

The dream of near-instant
computational drug design, for instance,

has yet to come to fruition, nearly
50 years after it was first conceived.

Never has that been clearer than now,

as the world sits in a state
of isolation and paralysis,

as we await a vaccine for COVID-19.

But drug discovery is just one area
in which researchers are beset –

and in some cases, blocked entirely –

by the inadequacy of even today’s
fastest supercomputers,

putting great constraints on progress
in areas like climate change,

and on value creation
in areas like finance and logistics.

In the past,

we could rely on supercomputers
simply getting better and faster,

as parts got smaller
and smaller every year,

but no longer.

For now, we’re running up
against a hard physical limit.

Transistors have become so minuscule

that they’re fast approaching
the size of an atom.

Such a state of affairs invites
a natural follow-up question,

and it’s one that I’ve spent
the last several years

encouraging business leaders
and policy makers to address:

If not traditional supercomputers,

what technology will emerge

to arm us against the challenges
of the 21st century?

Enter quantum computing.

Quantum computers like this one

promise to address the atomic limitation

by exploiting subatomic
physical properties

that weren’t even known to man
a hundred years ago.

How does it work?

Quantum computing enables
a departure from two major constraints

of classical semiconductor computing.

Classical computers
operate deterministically:

everything is either yes or no,
on or off, with no in between.

They also operate serially;
they can only do one thing at a time.

Quantum computers operate
probabilistically, and most importantly,

they operate simultaneously,
thanks to three properties –

superposition, entanglement
and interference –

which allow them to explore
many possibilities at once.
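As a loose illustration of superposition and interference (a hand-rolled sketch in plain Python, not anything from the talk or real quantum hardware), a single qubit can be modeled as two complex amplitudes, with the Hadamard gate creating a superposition and a second application canceling it through interference:

```python
import math

# Toy single-qubit statevector: the state is two amplitudes [a0, a1],
# where |a0|^2 and |a1|^2 are the probabilities of measuring 0 and 1.

def hadamard(state):
    """Apply the Hadamard gate, which creates (and undoes) superposition."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

zero = [1.0, 0.0]                  # the qubit starts firmly in state |0>

superposed = hadamard(zero)        # equal superposition of 0 and 1
probs = [abs(a) ** 2 for a in superposed]  # 50/50 measurement odds

recombined = hadamard(superposed)  # the two paths into |1> cancel out,
                                   # returning the qubit exactly to |0>
```

Applying the gate once puts the qubit in an equal 50/50 superposition; applying it again makes the two computational paths interfere, and the |1> amplitude cancels to zero. Interference of this kind is what lets quantum algorithms amplify correct answers and suppress wrong ones.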

To illustrate how this works, imagine
a computer is trying to solve a maze.

The classical computer would do so
by exhausting every potential pathway

in a sequence.

If it came across a roadblock
on the first path,

it would simply rule
that out as a solution,

revert to its original position

and try the next logical path,

and so on and so forth
until it found the right solution.

A quantum computer could
test every single pathway

at the same time,

in effect, solving the maze
in only a single try.
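The classical, one-path-at-a-time strategy the maze analogy describes can be sketched as a depth-first search with backtracking (an illustrative toy on a small grid, where 1 marks a wall; the maze itself is invented for the example):

```python
# Classical maze solving: try one pathway at a time, rule out dead ends,
# backtrack, and continue until a route to the goal is found.
def solve_maze(grid, pos, goal, path=None):
    if path is None:
        path = []
    r, c = pos
    if (r < 0 or r >= len(grid) or c < 0 or c >= len(grid[0])
            or grid[r][c] == 1 or pos in path):
        return None  # roadblock: rule this pathway out as a solution
    path = path + [pos]
    if pos == goal:
        return path
    for step in [(r + 1, c), (r, c + 1), (r - 1, c), (r, c - 1)]:
        found = solve_maze(grid, step, goal, path)
        if found:
            return found
    return None  # revert to the previous position and try the next path

maze = [[0, 1, 0],
        [0, 0, 0],
        [1, 1, 0]]
route = solve_maze(maze, (0, 0), (2, 2))
```

Each dead end sends the search back to its previous position to try the next logical pathway, exactly the sequential behavior described above. A quantum algorithm would instead hold all candidate pathways in superposition at once, which is where the claimed speedup comes from.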

As it happens,

many complex problems are characterized
by this mazelike quality,

especially simulation
and optimization problems,

some of which can be solved
exponentially faster

with a quantum computer.

But is there really value
to this so-called quantum speedup?

To believe that we need
faster supercomputers,

we must first believe that our problems
are indeed computational in nature.

It turns out that many are,
at least in part.

For an example, let’s turn to
fertilizer production,

one of the hallmark problems
in the science of climate change.

The way most fertilizer is produced today

is by fusing nitrogen and hydrogen
to make ammonia,

which is the active ingredient.

The process works,

but only at a severe cost

to businesses, which spend
100 to 300 billion dollars every year,

and to the environment:

three to five percent
of the world’s natural gas

is expended on fertilizer synthesis

every single year.

So why have scientists failed to develop
a more efficient process?

The reason is that in order to do so,

they would need to simulate
the mazelike molecular interactions

that make up the electrostatic field
of the key catalyst, nitrogenase.

Scientists actually know
how to do that today,

but it would take 800,000 years
on the world’s fastest supercomputer.

With a full-scale quantum computer,

less than 24 hours.

For another example,
let’s return to drug discovery,

a process COVID-19 has brought
into sharp focus for most of us

for the very first time.

Designing a vaccine for an infectious
disease like COVID-19,

from identifying the drivers
of the disease,

to screening millions
of candidate activators and inhibitors,

is a process that typically
takes 10 or more years per drug,

90 percent of which
fail to pass clinical trials.

The cost to pharmaceutical companies
is two to three billion dollars

per approved drug.

But the social costs of delays
and failures are much, much higher.

More than eight million people die
every year of infectious diseases.

That’s 15 times as many people
as died during the first six months

of the coronavirus pandemic.

So why has computational drug design
failed to live up to expectations?

Again, it’s a matter of limited
computational resources, at least in part.

If identifying a disease pathway
in the body is like a lock,

designing a drug requires searching
through a massive chemical space,

effectively a maze
of molecular structures,

to find the right compound,

to find a key, in other
words, that fits the lock.

The problem is that tracing the entire
relevant span of chemical space

and converting it into a searchable
database for drug design

would take 5 trillion, trillion,
trillion, trillion years

on the world’s fastest supercomputer.

On a quantum computer,
a little more than a half hour.

But quantum computing is not
just about triumphs in the lab.

The flow of progress in industries
of all kinds is currently blocked

by discrete but intractable
computational constraints

that have a real impact
on business and society.

For what may seem an unlikely example,
let’s turn to banks.

What if banks were able
to lend more freely

to individuals, entrepreneurs
and businesses?

One of the key holdups today

is that banks keep 10 to 15 percent
of assets in cash reserves,

in part, because their risk simulations
are compute constrained.

They can’t account for global
or whole-market risks that are rare

but severe and unpredictable.

Black swan events, for example.

Now, 10 to 15 percent
is a whole lot of money.

Consider that every one-percent
reduction in cash reserves

would free up an extra trillion dollars
of investible capital.

What this means is that if banks
ultimately became comfortable enough

with quantum-powered risk simulations

to reduce cash reserves to, say,
five to 10 percent of assets,

the effect would be like
a COVID-19-level stimulus

for individuals and businesses

every single year.
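A back-of-the-envelope sketch of the arithmetic behind that claim (the 100-trillion-dollar asset base is an inference from the talk's trillion-dollars-per-point figure, not a cited statistic):

```python
# The talk's "extra trillion dollars per one-percent reduction" implies
# global bank assets of roughly $100 trillion. That base is an assumed
# figure for illustration, inferred from the talk, not a cited statistic.
GLOBAL_BANK_ASSETS = 100e12  # dollars (assumed)

def freed_capital(reserve_cut_pct):
    """Capital freed by cutting cash reserves by that many percentage points."""
    return GLOBAL_BANK_ASSETS * reserve_cut_pct / 100

one_point = freed_capital(1)  # about $1 trillion per percentage point
five_points = freed_capital(5)  # e.g. reserves cut from 15% to 10% of assets
```

On those assumptions, trimming reserves from 15 percent to 10 percent of assets would free roughly five trillion dollars, which is the scale behind the talk's "COVID-19-level stimulus every single year" comparison.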

Once the transformative power
of quantum computing is clear,

the question then becomes:
Well, how long must we wait?

Researchers are cautious when asked about
the timeline to quantum advantage.

Rightly so – there remain a number
of critical hurdles to overcome,

and not just engineering challenges,

but fundamental scientific questions
about the nature of quantum mechanics.

As a result, it may be one, two,
even three decades

before quantum computers fully mature.

Some executives that I’ve spoken with
have come to the conclusion on this basis

that they can afford to wait,

that they can afford
to postpone investing.

I believe this to be a real mistake,

for while some technologies
develop steadily,

according to the laws
of cumulative causation,

many emerge as precipitous breakthroughs,

almost overnight defying any timeline
that could be drawn out in advance.

Quantum computing is a candidate
for just such a breakthrough,

having already reached
a number of critical milestones

decades ahead of schedule.

In the late 1980s, for example,

many researchers thought that the basic
building block of quantum computing,

the qubit,

would take a hundred years to build.

Ten years later, it arrived.

Now IBM has nearly 500 qubits
across 29 machines,

available for client use and research.

What this means
is that we should worry less

about quantum computers arriving too late

and more about them arriving too soon,

before the necessary
preparations have been made.

To quote one Nobel Prize-winning
physicist,

“Quantum computers are more
different from current computers

than current computers are

from the abacus.”

It’ll take time to make
the necessary workflow integrations.

It’ll take time to onboard
the right talent.

Most importantly,

it’ll take time, not to mention
vision and imagination,

to identify and scope high-value problems

for quantum computers to tackle
for your business.

Governments are already investing
heavily in quantum technologies –

15 billion dollars among
China, Europe and the US.

And VCs are following suit.

But what’s needed now
to accelerate innovation

is business investment
in developing use cases,

in onboarding talent

and in experimenting
with real quantum computers

that are available today.

In a world such as ours,

the demands of innovation
can’t be put off for another day.

Leaders must act now,

for the processor speedups that have
driven innovation for nearly 70 years

are set to stop dead in their tracks.

The race toward a new age of magic
and supercomputing is already underway.

It’s one we can’t afford to lose.

Quantum computers are in pole position.
They’re the car to beat.

Thank you.