How we can build AI to help humans, not hurt us
Margaret Mitchell

I work on helping computers
communicate about the world around us.

There are a lot of ways to do this,

and I like to focus on helping computers

to talk about what they see
and understand.

Given a scene like this,

a modern computer-vision algorithm

can tell you that there’s a woman
and there’s a dog.

It can tell you that the woman is smiling.

It might even be able to tell you
that the dog is incredibly cute.

I work on this problem

thinking about how humans
understand and process the world.

The thoughts, memories and stories

that a scene like this
might evoke for humans.

All the interconnections
of related situations.

Maybe you’ve seen
a dog like this one before,

or you’ve spent time
running on a beach like this one,

and that further evokes thoughts
and memories of a past vacation,

past trips to the beach,

times spent running around
with other dogs.

One of my guiding principles
is that by helping computers to understand

what it’s like to have these experiences,

to understand what we share
and believe and feel,

we’re in a great position
to start evolving computer technology

in a way that’s complementary
to our own experiences.

So, digging more deeply into this,

a few years ago I began working on helping
computers to generate human-like stories

from sequences of images.

So, one day,

I was working with my computer to ask it
what it thought about a trip to Australia.

It took a look at the pictures,
and it saw a koala.

It didn’t know what the koala was,

but it said it thought
it was an interesting-looking creature.

Then I shared with it a sequence of images
about a house burning down.

It took a look at the images and it said,

“This is an amazing view!
This is spectacular!”

It sent chills down my spine.

It saw a horrible, life-changing
and life-destroying event

and thought it was something positive.

I realized that it recognized
the contrast,

the reds, the yellows,

and thought it was something
worth remarking on positively.

And part of why it was doing this

was that most
of the images I had given it

were positive images.

That’s because people
tend to share positive images

when they talk about their experiences.

When was the last time
you saw a selfie at a funeral?

I realized that,
as I worked on improving AI

task by task, dataset by dataset,

I was creating massive gaps,

holes and blind spots
in what it could understand.

And while doing so,

I was encoding all kinds of biases.

Biases that reflect a limited viewpoint,

limited to a single dataset –

biases that can reflect
human biases found in the data,

such as prejudice and stereotyping.

I thought back to the evolution
of the technology

that brought me to where I was that day –

how the first color images

were calibrated against
a white woman’s skin,

meaning that color photography
was biased against black faces.

And that same bias, that same blind spot

continued well into the ’90s.

And the same blind spot
continues even today

in how well facial recognition technology
can recognize different people’s faces.

I thought about the state of the art
in research today,

where we tend to limit our thinking
to one dataset and one problem.

And that in doing so, we were creating
more blind spots and biases

that the AI could further amplify.

I realized then
that we had to think deeply

about how the technology we work on today
will look in five years, in 10 years.

Humans evolve slowly,
with time to correct for issues

in the interaction of humans
and their environment.

In contrast, artificial intelligence
is evolving at an incredibly fast rate.

And that means that it really matters

that we think about this
carefully right now –

that we reflect on our own blind spots,

our own biases,

and think about how that’s informing
the technology we’re creating

and discuss what the technology of today
will mean for tomorrow.

CEOs and scientists have weighed in
on what they think

the artificial intelligence technology
of the future will be.

Stephen Hawking warns that

“Artificial intelligence
could end mankind.”

Elon Musk warns
that it’s an existential risk

and one of the greatest risks
that we face as a civilization.

Bill Gates has made the point,

“I don’t understand
why people aren’t more concerned.”

But these views –

they’re only part of the story.

The math, the models,

the basic building blocks
of artificial intelligence

are something that we can all access
and work with.

We have open-source tools
for machine learning and intelligence

that we can contribute to.

And beyond that,
we can share our experience.

We can share our experiences
with technology and how it concerns us

and how it excites us.

We can discuss what we love.

We can communicate with foresight

about the aspects of technology
that could be more beneficial

or could be more problematic over time.

If we all focus on opening up
the discussion on AI

with foresight towards the future,

this will help create a general
conversation and awareness

about what AI is now,

what it can become

and all the things that we need to do

in order to enable the outcome
that best suits us.

We already see and know this
in the technology that we use today.

We use smartphones
and digital assistants and Roombas.

Are they evil?

Maybe sometimes.

Are they beneficial?

Yes, they’re that, too.

And they’re not all the same.

And there you already see
a light shining on what the future holds.

The future continues on
from what we build and create right now.

We set into motion that domino effect

that carves out AI’s evolutionary path.

In our time right now,
we shape the AI of tomorrow.

Technology that immerses us
in augmented realities,

bringing to life past worlds.

Technology that helps people
to share their experiences

when they have difficulty communicating.

Technology built on understanding
the streaming visual worlds,

now used in self-driving cars.

Technology built on understanding images
and generating language,

evolving into technology that helps people
who are visually impaired

better access the visual world.

And we also see how technology
can lead to problems.

We have technology today

that analyzes physical
characteristics we’re born with –

such as the color of our skin
or the look of our face –

in order to determine whether or not
we might be criminals or terrorists.

We have technology
that crunches through our data,

even data relating
to our gender or our race,

in order to determine whether or not
we might get a loan.

All that we see now

is a snapshot in the evolution
of artificial intelligence.

Because where we are right now

is within a moment of that evolution.

That means that what we do now
will affect what happens down the line

and in the future.

If we want AI to evolve
in a way that helps humans,

then we need to define
the goals and strategies

that enable that path now.

What I’d like to see is something
that fits well with humans,

with our culture and with the environment.

Technology that aids and assists
those of us with neurological conditions

or other disabilities

in order to make life
equally challenging for everyone.

Technology that works

regardless of your demographics
or the color of your skin.

And so today, what I focus on
is the technology for tomorrow

and for 10 years from now.

AI can turn out in many different ways.

But in this case,

it isn’t a self-driving car
without any destination.

This is the car that we are driving.

We choose when to speed up
and when to slow down.

We choose if we need to make a turn.

We choose what the AI
of the future will be.

There’s a vast playing field

of all the things that artificial
intelligence can become.

It will become many things.

And it’s up to us now

to figure out
what we need to put in place

to make sure the outcomes
of artificial intelligence

are the ones that will be
better for all of us.

Thank you.

(Applause)