Bias, Artificial Intelligence, and the Number 8

I've always been fascinated by computers. Throughout my life I grew up really interested in computing, and it drove me to engineering school, where I wanted to learn how computers work: how do you build computers, and how do they process? One of the classes I had in school was called Computer Vision. It was a really good class. We worked with computers, interfacing them to the real world through cameras, bringing in images and learning to process those images.

One of the assignments we had was recognition of handwritten numbers, the digits 0 through 9. We had index cards, we wrote numbers on individual index cards, and then used cameras to capture and recognize those images through algorithms we were to develop.

I wrote my number eight, and I remember it: it was a beautiful eight, I think you would all agree. I took a lot of care and effort in crafting my circles, and then worked on algorithms that would look at the area of those circles and how they aligned, to really tell that that was the number eight. I felt pretty good about it.

Then the day of the presentation of my assignment came. My professor came in, and I dutifully handed over my index cards, ready to be graded. I remember our professor taking my index cards, tossing them aside, and pulling out a different set. You know that feeling you sometimes get when you know you're screwed? Well, I had it, because this was the number eight that got put under the camera. I immediately saw the fallacy in my algorithm, and once my process ran and told my professor that it was the number six, I realized I wasn't going to get a good grade.

But I did learn a really good lesson. I had brought a lot of bias to what I thought the number eight looked like, and I had developed a system to process that number eight based on that bias. I didn't think about the variance in the different number eights that might exist as people write them in different ways.
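My circle-based approach can be sketched as a toy hole-counting check: a tidy "8" encloses two loops, a "6" only one, and an "8" drawn with a small gap at the top, the way real people often write, collapses to one loop and gets misread. This is a hypothetical reconstruction, not my original class code; the grids and helper names are invented purely for illustration.

```python
def parse(art):
    """Turn ASCII art into a binary grid (1 = ink, 0 = background)."""
    return [[1 if ch == "#" else 0 for ch in row] for row in art]

def count_holes(grid):
    """Count background regions not reachable from the image border."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]

    def flood(sr, sc):
        # Iterative flood fill over background (0) cells.
        stack = [(sr, sc)]
        while stack:
            r, c = stack.pop()
            if 0 <= r < rows and 0 <= c < cols and not seen[r][c] and grid[r][c] == 0:
                seen[r][c] = True
                stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]

    # Background connected to the border is outside the digit, not a hole.
    for r in range(rows):
        flood(r, 0)
        flood(r, cols - 1)
    for c in range(cols):
        flood(0, c)
        flood(rows - 1, c)

    holes = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0 and not seen[r][c]:
                holes += 1
                flood(r, c)  # mark the whole region so it counts once
    return holes

TIDY_EIGHT = ["#####", "#...#", "#####", "#...#", "#####"]
SIX        = ["#####", "#....", "#####", "#...#", "#####"]
OPEN_EIGHT = ["###..", "#...#", "#####", "#...#", "#####"]  # top loop left open

print(count_holes(parse(TIDY_EIGHT)))  # 2 -> classified as "8"
print(count_holes(parse(SIX)))         # 1 -> classified as "6"
print(count_holes(parse(OPEN_EIGHT)))  # 1 -> a real "8", misread as "6"
```

The heuristic works perfectly on the eight I drew, and fails the moment someone writes an eight whose loops don't quite close, which is exactly the variance I hadn't considered.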

That may seem like a trivial example. It's just a number, and the number eight probably isn't too terribly offended. But this exists in other systems. Let me tell you about a system called Google Cloud Vision. It sounds very impressive; in fact, if you read the slug line on their website, they "detect objects and faces, read printed and handwritten text, and build valuable metadata." Valuable metadata.

Just last year, some people did an experiment with Google Cloud Vision: you submit a photo, and Google tells you what's in it. This was the experiment. Essentially the same picture, a hand holding what turns out to be a monocular-shaped digital thermometer. In one image the hand has dark skin; in the other, the same image was shaded so the hand looked light. Both were submitted to Google. It did a great job of coming back and saying there was a hand in each photo, with a high degree of confidence. But the dark-skinned hand, Google thought, was holding a gun, while the light-skinned hand it successfully identified as holding the digital thermometer.

When Google became aware of that problem, they quickly fixed it, as you would hope they would. And this is how they fixed it: they redacted the word "gun" from the result for the dark-skinned hand. You see, it's not easy to fix a problem like that, because the problem isn't a switch you flip to turn it off. The problem is in the engineering. It's systemic. It's baked into the algorithm.
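A fix like that amounts to filtering the output rather than changing the model: the system still scores the image the same way, and a denylist simply hides one label from the response. The sketch below is a hypothetical illustration of that pattern; the `BLOCKED_LABELS` set, the scores, and the `(label, confidence)` shape are my assumptions, not Google's actual implementation.

```python
# Hypothetical post-hoc redaction: the underlying model is untouched.
BLOCKED_LABELS = {"gun"}

def redact(labels):
    """Drop denylisted labels from a model's (label, confidence) output."""
    return [(name, conf) for name, conf in labels if name not in BLOCKED_LABELS]

# Made-up scores standing in for what the model might have returned:
raw = [("hand", 0.96), ("gun", 0.61), ("finger", 0.55)]
print(redact(raw))  # [('hand', 0.96), ('finger', 0.55)]
```

The offending word disappears from the response, but the bias that produced the high "gun" score in the first place is still in the system.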

Now, I don't at all think the engineers at Google were evil people. I don't think they did this out of malice. I think this happens because they built their system and probably did iterative testing based on the data they had on hand, pun intended, because their hands probably looked a lot like my hand, and that shapes what the algorithms come back with when processing data. But this problem exists in other things too.

This year has seen an amazing, massive shift to online education. It has happened across the world; the pandemic has really driven us to accelerate what we do online. To do that at scale, we absolutely need technology to be our friend, to help and assist in that uplift. When you think about trying simply to proctor exams at scale, it's nearly impossible without some type of assistance. Artificial intelligence to the rescue: now teachers and professors can proctor exams online, using automation and artificial intelligence to help.

The problem is, as this happened throughout the year, we started to find that people with darker skin were disadvantaged by these systems. There are different studies and reports, but in some cases facial recognition doesn't work as well for people with darker skin: they couldn't even gain access to the system to take their exam. In other cases, they were flagged for potential cheating at much higher rates than people with lighter skin.
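One way to surface that kind of disparity is to compare, per group, the rate at which honest test-takers are wrongly flagged. The counts below are invented purely for illustration; they are not from any real study or proctoring product.

```python
def false_flag_rate(wrongly_flagged, honest_total):
    """Fraction of honest test-takers incorrectly flagged for cheating."""
    return wrongly_flagged / honest_total

# Hypothetical audit counts, for illustration only.
groups = {
    "lighter skin": {"honest": 1000, "wrongly_flagged": 20},
    "darker skin":  {"honest": 1000, "wrongly_flagged": 80},
}

rates = {name: false_flag_rate(g["wrongly_flagged"], g["honest"])
         for name, g in groups.items()}
print(rates)  # {'lighter skin': 0.02, 'darker skin': 0.08}
print(rates["darker skin"] / rates["lighter skin"])  # roughly a 4x disparity
```

Equal overall accuracy can hide exactly this: the same system, with error concentrated on one group.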

How does that happen? So there's bias, as I've talked about, in handwriting recognition and in recognition across race and skin tone. But what about other things? What about ethics and morality? Does bias creep in there through artificial intelligence? We currently have systems making decisions, artificial intelligence decisions, about things like who gets a loan, who gets hired, who gets paroled. Artificial intelligence makes those decisions. Is there bias in that decision-making process?

And what about taking it even beyond that, to life-and-death decisions? Does artificial intelligence make life-and-death decisions? Well, take self-driving cars. They do a pretty good job, and they certainly have crash- and accident-avoidance systems. But at some point there might be an accident that's hard to simply avoid, where it comes down to the lesser of two evils, and that decision is made in an instant. The car is driving along at a high rate of speed; a child steps in front. The car can hit the child or swerve into the oncoming lane. It's got two choices. Who decides which? Is it the consumer's decision? They bought the car. Is it the insurance company's decision? Is it politicians who decide? Who decides? Do we know? We should know.

What about a military drone? It's now an autonomous attack drone, and its prime directive is to attack people that maybe have a gun. I hope they didn't use the Google Vision API to make that determination.

But this isn't a new problem. We've dealt with the struggle between humanity and technology for a long time; deep thinkers from years ago really considered this. I want to show you a quote from a pioneer in healthcare information systems, written 35 years ago. If you go back with me 35 years, the point was made that it is in the crucible of the individual that technology most forcefully confronts human values. That's what artificial intelligence is doing: it's confronting human values. It's making decisions on human values that we are imparting into the algorithms that do this. And that's really important, because AI is doing more and more. It's making more important decisions. It's making ethical decisions. It's making life-and-death decisions. And we must be intentional about addressing the bias in these systems.

You know, it's been said that those who don't learn from history are doomed to repeat it. And I would say that if we don't learn and explore the future of artificial intelligence, then we might just be doomed to...

Thank you.
