How to keep human bias out of AI
Kriti Sharma

Translator: Ivana Korom
Reviewer: Joanna Pietrulewicz

How many decisions
have been made about you today,

or this week or this year,

by artificial intelligence?

I build AI for a living,

so, full disclosure, I’m kind of a nerd.

And because I’m kind of a nerd,

whenever a new story comes out

about artificial intelligence
stealing all our jobs,

or robots getting citizenship
of an actual country,

I’m the person my friends
and followers message

freaking out about the future.

We see this everywhere.

This media panic that
our robot overlords are taking over.

We could blame Hollywood for that.

But in reality, that’s not the problem
we should be focusing on.

There is a more pressing danger,
a bigger risk with AI,

that we need to fix first.

So we are back to this question:

How many decisions
have been made about you today by AI?

And how many of these

were based on your gender,
your race or your background?

Algorithms are being used all the time

to make decisions about who we are
and what we want.

Some of the women in this room
will know what I’m talking about

if you’ve been made to sit through
those pregnancy test adverts on YouTube

like 1,000 times.

Or you’ve scrolled past adverts
of fertility clinics

on your Facebook feed.

Or in my case, Indian marriage bureaus.

(Laughter)

But AI isn’t just being used
to make decisions

about what products we want to buy

or which show we want to binge watch next.

I wonder how you’d feel about someone
who thought things like this:

“A black or Latino person

is less likely than a white person
to pay off their loan on time.”

“A person called John
makes a better programmer

than a person called Mary.”

“A black man is more likely to be
a repeat offender than a white man.”

You’re probably thinking,

“Wow, that sounds like a pretty sexist,
racist person,” right?

These are some real decisions
that AI has made very recently,

based on the biases
it has learned from us,

from the humans.

AI is being used to help decide
whether or not you get that job interview;

how much you pay for your car insurance;

how good your credit score is;

and even what rating you get
in your annual performance review.

But these decisions
are all being filtered through

its assumptions about our identity,
our race, our gender, our age.

How is that happening?

Now, imagine an AI is helping
a hiring manager

find the next tech leader in the company.

So far, the manager
has been hiring mostly men.

So the AI learns men are more likely
to be programmers than women.

And it’s a very short leap from there to:

men make better programmers than women.

We have reinforced
our own bias into the AI.

And now, it’s screening out
female candidates.
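(To make that mechanism concrete, here is a toy sketch. It is purely illustrative: the data, the names and the threshold are invented, not taken from any real hiring system, and a real screener would use many more features. It only shows how a model that learns from a gender-skewed hiring history ends up ranking female candidates lower.)

```python
# Toy illustration with invented data: how a screener "trained" on a
# gender-skewed hiring history reproduces that skew. Not a real system.

# Historical hires the model learns from: mostly men were hired.
history = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": True},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
    {"gender": "female", "hired": False},
]

def hire_rate(records, gender):
    """Fraction of past candidates of this gender who were hired."""
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

# The "model" here is nothing more than the historical hire rate per gender.
rates = {g: hire_rate(history, g) for g in ("male", "female")}
print(rates)  # {'male': 0.75, 'female': 0.25}

# A naive screener that scores candidates by those rates now ranks every
# female candidate below every male one: the manager's past bias,
# replayed as a "data-driven" decision.
def passes_screen(candidate_gender, threshold=0.5):
    return rates[candidate_gender] >= threshold

print(passes_screen("male"))    # True  -- passes the screen
print(passes_screen("female"))  # False -- screened out
```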

Hang on, if a human
hiring manager did that,

we’d be outraged, we wouldn’t allow it.

This kind of gender
discrimination is not OK.

And yet somehow,
AI has become above the law,

because a machine made the decision.

And that’s not all.

We are also reinforcing our bias
in how we interact with AI.

How often do you use a voice assistant
like Siri, Alexa or even Cortana?

They all have two things in common:

one, they can never get my name right,

and two, they are all female.

They are designed to be
our obedient servants,

turning your lights on and off,
ordering your shopping.

You get male AIs too,
but they tend to be more high-powered,

like IBM Watson,
making business decisions,

Salesforce Einstein
or ROSS, the robot lawyer.

So poor robots, even they suffer
from sexism in the workplace.

(Laughter)

Think about how these two things combine

and affect a kid growing up
in today’s world, surrounded by AI.

So they’re doing some research
for a school project

and they Google images of “CEO.”

The algorithm shows them
results of mostly men.

And now, they Google “personal assistant.”

As you can guess,
it shows them mostly females.

And then they want to put on some music,
and maybe order some food,

and now, they are barking orders
at an obedient female voice assistant.

Some of our brightest minds
are creating this technology today.

Technology that they could have created
in any way they wanted.

And yet, they have chosen to create it
in the style of a 1950s “Mad Men” secretary.

Yay!

But OK, don’t worry,

this is not going to end
with me telling you

that we are all heading towards
sexist, racist machines running the world.

The good news about AI
is that it is entirely within our control.

We get to teach the right values,
the right ethics to AI.

So there are three things we can do.

One, we can be aware of our own biases

and the bias in machines around us.

Two, we can make sure that diverse teams
are building this technology.

And three, we have to give it
diverse experiences to learn from.

I can talk about the first two
from personal experience.

When you work in technology

and you don’t look like
a Mark Zuckerberg or Elon Musk,

your life is a little bit difficult,
your ability gets questioned.

Here’s just one example.

Like most developers,
I often join online tech forums

and share my knowledge to help others.

And I’ve found,

when I log on as myself,
with my own photo, my own name,

I tend to get questions
or comments like this:

“What makes you think
you’re qualified to talk about AI?”

“What makes you think
you know about machine learning?”

So, as you do, I made a new profile,

and this time, instead of my own picture,
I chose a cat with a jet pack on it.

And I chose a name
that did not reveal my gender.

You can probably guess
where this is going, right?

So, this time, I didn’t get any of those
patronizing comments about my ability

and I was able to actually
get some work done.

And it sucks, guys.

I’ve been building robots since I was 15,

I have a few degrees in computer science,

and yet, I had to hide my gender

in order for my work
to be taken seriously.

So, what’s going on here?

Are men just better
at technology than women?

One study found

that when women coders on one platform
hid their gender, like myself,

their code was accepted
four percent more often than men’s.

So this is not about the talent.

This is about an elitism in AI

that says a programmer
needs to look like a certain person.

What we really need to do
to make AI better

is to bring in people
from all kinds of backgrounds.

We need people who can
write and tell stories

to help us create personalities of AI.

We need people who can solve problems.

We need people
who face different challenges

and we need people who can tell us
what the real issues are that need fixing

and help us find ways
that technology can actually fix them.

Because, when people
from diverse backgrounds come together,

when we build things in the right way,

the possibilities are limitless.

And that’s what I want to end
by talking to you about.

Less about racist robots
and machines taking our jobs,

and more about what technology
can actually achieve.

So, yes, some of the energy
in the world of AI,

in the world of technology

is going to be about
what ads you see on your stream.

But a lot of it is going towards
making the world so much better.

Think about a pregnant woman
in the Democratic Republic of Congo,

who has to walk 17 hours
to her nearest rural prenatal clinic

to get a checkup.

What if she could get a diagnosis
on her phone instead?

Or think about what AI could do

for those one in three women
in South Africa

who face domestic violence.

If it wasn’t safe to talk out loud,

they could get an AI service
to raise the alarm

and get financial and legal advice.

These are all real examples of projects
that people, including myself,

are working on right now, using AI.

So, I’m sure in the next couple of days
there will be yet another news story

about the existential risk,

robots taking over
and coming for your jobs.

(Laughter)

And when something like that happens,

I know I’ll get the same messages
worrying about the future.

But I feel incredibly positive
about this technology.

This is our chance to remake the world
into a much more equal place.

But to do that, we need to build it
the right way from the get-go.

We need people of different genders,
races, sexualities and backgrounds.

We need women to be the makers

and not just the machines
who do the makers' bidding.

We need to think very carefully
about what we teach machines,

what data we give them,

so they don’t just repeat
our own past mistakes.

So I hope I leave you
thinking about two things.

First, I hope you leave
thinking about bias today.

And that the next time
you scroll past an advert

that assumes you are interested
in fertility clinics

or online betting websites,

that you think and remember

that the same technology is assuming
that a black man will reoffend.

Or that a woman is more likely
to be a personal assistant than a CEO.

And I hope that reminds you
that we need to do something about it.

And second,

I hope you think about the fact

that you don’t need to look a certain way

or have a certain background
in engineering or technology

to create AI,

which is going to be
a phenomenal force for our future.

You don’t need to look
like a Mark Zuckerberg,

you can look like me.

And it is up to all of us in this room

to convince the governments
and the corporations

to build AI technology for everyone,

including the edge cases.

And for us all to get educated

about this phenomenal
technology in the future.

Because if we do that,

then we’ve only just scratched the surface
of what we can achieve with AI.

Thank you.

(Applause)