Bias, Artificial Intelligence, and the Number 8

I've always been fascinated by computers. Throughout my life I grew up really interested in computing, and it drove me to engineering school, where I wanted to learn how computers work: how do you build computers, and how do they process? One of the classes that I had in school was called Computer Vision. It was a really good class. We worked with computers, interfacing them to the real world through cameras, bringing in images and learning to process those images.

One of the assignments that we had was to do recognition of handwritten numbers, the digits 0 through 9. We had index cards, we wrote numbers on individual index cards, and then we used cameras to capture and recognize those images through algorithms that we were to develop.

I wrote my number eight, and I remember my number: it was a beautiful eight, I think you would all agree. I took a lot of care and effort in crafting my circles, and then in working on algorithms that would look at the area of those circles and how the circles aligned, to really tell that it was the number eight. I felt pretty good about that.
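To give a sense of how literal that approach was, here is a minimal sketch in the spirit of that assignment. This is not my original code; the function name and the thresholds are hypothetical, and it assumes the two loops have already been extracted as circles (say, by a Hough transform):

```python
import math

def looks_like_eight(circles):
    """Naive '8' detector in the spirit of the class assignment
    (hypothetical helper, not the original code). `circles` is a
    list of (cx, cy, r) tuples already extracted from the image.
    """
    if len(circles) != 2:
        return False                      # my rule: an 8 is exactly two circles
    (x1, y1, r1), (x2, y2, r2) = circles
    area1, area2 = math.pi * r1 ** 2, math.pi * r2 ** 2
    similar_area = min(area1, area2) / max(area1, area2) > 0.5
    stacked = abs(x1 - x2) < min(r1, r2)          # centers vertically aligned
    touching = abs(y1 - y2) < 1.2 * (r1 + r2)     # loops adjacent, not far apart
    return similar_area and stacked and touching

print(looks_like_eight([(50, 30, 14), (50, 58, 16)]))  # True: my "beautiful" eight
print(looks_like_eight([(40, 30, 14), (62, 58, 16)]))  # False: a slanted eight
```

Every threshold in there encodes an assumption about what an eight looks like, which is exactly where the trouble starts.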

Then the day of the presentation of my assignment came. My professor came in, and I dutifully handed over my index cards, ready to be graded. I remember our professor taking my index cards and tossing them aside, and then pulling out a different set. You know that feeling you sometimes have when you know you're screwed? Well, I had it, because this was the number eight that got put under the camera. I immediately saw, in my mind, the fallacy in my algorithm, and once my process ran and told my professor that it was the number six, I realized I wasn't going to get a good grade on that.

But I did learn a really good lesson. I had brought a lot of bias to what I thought the number eight looked like, and I developed a system to process that number eight based on the bias that I had. I didn't think about the variance in the different number eights that might exist, as people write them in different ways.

That may seem like a trivial example; it's just a number, and the number eight probably isn't too terribly offended. But this exists in other systems. Let me tell you about a system called Google Cloud Vision. It sounds very impressive. In fact, if you read the slug line on Google Cloud Vision's website, it detects objects and faces, reads printed and handwritten text, and builds valuable metadata. Valuable metadata.

So just last year, some people did an experiment with Google Cloud Vision. You submit a photo, and Google tells you what's in the photo. This was the experiment that was done: essentially the same picture, a hand holding what turns out to be a monocular-shaped digital thermometer. In one photo the hand has dark skin; in the other, the same image was shaded so the hand appears light. Both were submitted to Google. It did a great job of coming back and saying there was a hand in each photo, with a high degree of confidence. But the dark-skinned hand Google thought was holding a gun, while in the light-skinned hand it successfully identified the digital thermometer.
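For context, the kind of request behind that experiment is ordinary label detection. Here is a minimal sketch using the google-cloud-vision Python client; the file name is hypothetical, and this is not the experimenters' actual code:

```python
from google.cloud import vision

# Minimal label-detection sketch (assumes credentials are configured;
# the image path is hypothetical).
client = vision.ImageAnnotatorClient()

with open("hand_with_thermometer.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # e.g. "Hand 0.96" for both photos, plus "Gun ..." for only one of them
    print(label.description, round(label.score, 2))
```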

When Google became aware of that problem, they quickly fixed it, as you would hope they would. And this is how they fixed it: they redacted the word "gun" from the response for the dark-skinned hand.
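In effect, the fix amounts to something like this (an illustrative sketch, not Google's actual change):

```python
def redact(labels, banned=("gun",)):
    # Post-hoc filtering: the symptom disappears from the response,
    # but the model underneath still makes the biased association.
    return [label for label in labels if label.lower() not in banned]

print(redact(["hand", "gun"]))  # ['hand']
```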

And you see, it's not easy to fix a problem like that, because the problem isn't a switch that you flip to turn it off. The problem is in the engineering. It's systemic. It's baked into the algorithm.

Now, I don't at all think that the engineers at Google were evil people. I don't think that they did this out of malice. I think this happens because they built their system and probably did iterative testing based on the data that they had on hand, pun intended, because their hands probably looked a lot like my hand, and that's what their algorithms came back with when processing data. But this problem exists in other things too.

This year has been an amazing year, with a massive shift to online education. This has happened across the world; the pandemic has really driven us to accelerate what we do online. And to do that at scale, we absolutely need technology to be our friend, to help and assist in that uplift. So when you think about trying simply to proctor exams at scale, it's nearly impossible without some type of assistance.

Artificial intelligence to the rescue: now teachers and professors can proctor exams online, using automation and artificial intelligence to help do that. The problem is that, as this happened throughout the year, we started to find that people with darker skin were disadvantaged by these systems. The studies and reports differ, but in some cases the facial recognition didn't work as well: people with darker skin couldn't even gain access to the system to take their exam. In other cases, they were flagged for potential cheating at much higher rates than people with lighter skin.
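One way to see that kind of disparity is simply to compare flag rates across groups. A minimal sketch with invented numbers (illustrative only, not figures from any of the studies mentioned):

```python
from collections import Counter

# Invented proctoring log: (group, was_flagged). Not real study data.
events = ([("darker", True)] * 30 + [("darker", False)] * 70
          + [("lighter", True)] * 8 + [("lighter", False)] * 92)

flags, totals = Counter(), Counter()
for group, flagged in events:
    totals[group] += 1
    flags[group] += flagged

for group in totals:
    # A large gap between these rates is the disparate impact reported.
    print(f"{group}: flagged {flags[group] / totals[group]:.0%} of the time")
```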

How does that happen? So: there's the bias I've talked about in handwriting recognition, and in race and skin tone. But what about other things? What about ethics and morality? Does bias creep in there through artificial intelligence?

We have systems today that are making decisions, artificial intelligence decisions, to decide things like who gets a loan, who gets hired, who gets paroled. Artificial intelligence makes those decisions. Is there bias in that decision-making process?

And what about taking it even beyond that, when we start to think about life-and-death decisions? Does artificial intelligence make life-and-death decisions? Well, take for instance self-driving cars. They do a pretty good job, and they certainly have crash- and accident-avoidance systems in them. But at some point an accident might come up that's hard to simply avoid, where it has to be the lesser of two evils, and that decision is made in an instant. The car is driving along at a high rate of speed; a child steps in front. The car can hit the child or swerve into the oncoming lane. It has two choices. Who decides which choice? Is it the consumer's decision? They bought the car. Is it the insurance company's decision? Is it politicians who decide this? Who decides? Do we know? We should know.

What about a military drone? It's now an autonomous attack drone, and its prime directive is to attack people who maybe have a gun. I hope they didn't use the Google Vision API to make that determination.

But this isn't a new problem. We've dealt with the struggle between humanity and technology for a long time, and deep thinkers from years ago really considered this. I want to show you a quote from a pioneer in healthcare information systems, written 35 years ago. Go back with me 35 years.

The point was made that it is in the crucible of the individual that technology most forcefully confronts human values. That's what artificial intelligence is doing: it's confronting human values. It's making decisions on human values that we are imparting into the algorithms that do this.

And it's really important, because AI is doing more and more. It's making more important decisions. It's making ethical decisions. It's making life-and-death decisions. And because it's making those decisions, we must be intentional about addressing the bias in these systems.

You know, it's been said that those who don't learn from history are doomed to repeat it. And I would say that if we don't learn about and explore the future of artificial intelligence, then we might just be doomed to...

Thank you.