Regulating AI for the safety of humanity

Transcriber: Maurício Kakuei Tanaka
Reviewer: omar idmassaoud

“The development of full
artificial intelligence

could spell the end of the human race.”

The words of Stephen Hawking.

Do you want to live in a society
of constant AI surveillance

and invasive data collection?

A society where AI decides
whether you’re guilty of murder or not,

whilst also being able to create
ultrarealistic deepfakes,

planting you at a crime scene.

A society where AI
developed to kill cancer

decides that the best way to do so

is to exterminate any human
genetically prone to the disease.

I know I wouldn’t,

but this is what a society
without AI regulation

could look like.

Of course, these are
very far-fetched outcomes,

but far-fetched does not mean impossible.

And the possibility of an AI dystopia

is reason enough to consider AI regulation

so as to at least address

the more apparent
and immediate dangers of AI.

So first, what actually is AI?

Well, artificial intelligence is the theory
and development of computer systems

able to perform tasks normally
requiring human-level intelligence.

The thing that lets us
make things like this.

A plagiarism checker.

OK, maybe not the most popular
use of AI among students,

but what about this?

A smart home.

Or this?

A Mars rover collecting data
analyzed by AI.

Or this?

A self-driving car.

Seems pretty cool, right?

That’s what I thought.

And it was a self-driving car
that particularly caught my attention.

So last summer, I decided to build one.

I made a small robot version of one

so that it could autonomously
navigate through lanes

using nothing but a camera,

an ultrasonic sensor,

and a neural network I made.
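
(Roughly speaking, that kind of lane-keeping loop might look something like the sketch below; the helper names and the small steering network are hypothetical stand-ins, not the actual code from the project.)

```python
# Minimal sketch of a camera + ultrasonic lane-keeping loop.
# All names (get_camera_frame, read_distance_cm, steering_net, set_motors)
# are hypothetical stand-ins for the robot's real interfaces.

def drive_loop(steering_net, get_camera_frame, read_distance_cm, set_motors):
    SAFE_DISTANCE_CM = 20  # stop if an obstacle is closer than this

    while True:
        frame = get_camera_frame()     # image of the lane ahead
        distance = read_distance_cm()  # ultrasonic obstacle distance

        if distance < SAFE_DISTANCE_CM:
            set_motors(speed=0.0, steering=0.0)  # obstacle too close: stop
            continue

        # The small neural network maps the camera frame
        # to a steering value in [-1, 1] (full left to full right).
        steering = steering_net.predict(frame)
        set_motors(speed=0.3, steering=steering)
```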

And it was driving perfectly like this

until one day it just started
to veer consistently out of the lane.

I spent hours trying to find the bug.

And you know what it was?

A deleted bracket.

I’d accidentally deleted a bracket
when editing the code,

and that stopped
one function from running,

causing the entire system to fail.

And this demonstrated to me,

on a small scale,

how one small bug can have
devastating consequences.

And then I started to think,

“Imagine if this happens
on a larger scale,

say, in a real self-driving car

or nuclear power plant.

Imagine how devastating that would be.”

Well, unfortunately,
you don’t have to imagine.

In 2016, a self-driving Tesla
mistook a white truck trailer

for the bright sky,

leading to the death
of the Tesla occupant.

And this made me think,

“We have regulation
in health care and education

and financial services,

but next to none in AI,

even though it’s such a large
and growing aspect of human life.”

We are all aware of the digital utopia
that AI can provide us with.

So surely we should introduce regulations

to ensure we reach this utopian situation

and avoid a dystopian one.

One suggestion is a compulsory
human-in-the-loop system,

where we put serious research efforts

into not only making AI
work well on its own,

but also making it collaborate effectively
with its human controllers.

This would effectively
give humans a kill switch

so that control can be
transferred back to humans

when a problem is expected.
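
(As a rough illustration, such a human-in-the-loop wrapper might look something like the sketch below; the confidence check, threshold, and interface names are illustrative assumptions, not part of any specific proposal.)

```python
# Minimal sketch of a human-in-the-loop "kill switch" wrapper.
# The confidence threshold and the controller interfaces are hypothetical.

def control_step(ai_controller, human_controller, sensors,
                 confidence_threshold=0.8):
    observation = sensors.read()
    action, confidence = ai_controller.decide(observation)

    # If the AI is unsure, or the human operator has requested an override,
    # hand control back to the human.
    if confidence < confidence_threshold or human_controller.override_requested():
        return human_controller.decide(observation)

    return action
```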

But for those in search of a less
restrictive form of regulation,

a transparency-based approach
has been suggested

whereby firms must explain
how and why their AI makes its decisions,

essentially a compulsory
open-source system.

This would allow third parties
to review the AI systems

and spot any potential dangers
or biases before they occur.

However, this could reduce competition
and the incentive to innovate,

as ideas could easily be copied.

And this demonstrates
just how difficult it is

to regulate AI in a way
that suits everyone,

as we must ensure safety

whilst also ensuring

that regulation does not stifle
worthwhile advances in technology.

This would suggest

that the most effective way to regulate AI

would be to introduce AI-specific boards
into the government,

allowing AI experts, rather than politicians,

to write the regulations.

The most important thing for us
is that we don’t settle

for a “one-size-fits-all”
regulatory approach

as the range of possible uses of AI
is far too diverse for that.

You wouldn’t use the same regulation
for a self-driving car

as for a smart fridge.

So our main goal

should be to learn more about
the risks of AI in different applications

to understand where regulation
is actually needed.

And an AI-specific government board
would be far more efficient at this

than politicians who are simply
not familiar with AI.

And if people are fundamentally
against government intervention,

then a company-led system of self-regulation
must be established.

Trust is very hard
for technology firms to gain,

but also very easy for them to lose.

And since trust is such a vital
commodity for businesses,

it would be in their interest

to go above and beyond
the minimum legal standards

in order to gain
this valuable consumer trust.

Being seen to promote AI safety

offers an easy way to gain trust,

whereas actively opposing it would quickly
lose the trust they worked so hard to gain.

It’s likely that regulation strategies
will differ around the world,

with some countries
taking the government-led approach

whilst others opt
for a company-led approach

or even a mix of the two.

And that is OK.

But the most dangerous thing we can do now

is to completely run away
from the idea of AI regulation.

Google CEO Sundar Pichai has said,

“There is no question in my mind

that artificial intelligence
needs to be regulated.”

Elon Musk has said that AI
is more dangerous than nukes.

When even the people
developing AI themselves

agree with the need for regulation,

it’s time to get down to the business

of how to regulate the rapidly changing
field of artificial intelligence.

Thank you.

(Applause)