Tackling AI Bias Is a Human Problem
Hi, I'm excited to have the opportunity to speak with you. I want to provide my unbiased perspective on the critical role of managing bias in AI.

Does that sound ridiculous? Well, that's because it should, since there is no conversation without bias. As humans, we perceive our environments and our experiences, and that affects our perspective. This lens, this perspective, is bias. Similarly, an AI perceives its environment and its experiences in the form of data, and this affects its perspective. The perspective that the AI has learned from the data influences and creates bias in the resulting AI. But we want to know we can trust the AI, and be able to understand how and why it came to a recommendation.
But what does AI look like in action? A good example is an algorithm that predicts credit risk from your credit history and assets. This is based on how people similar to you have behaved in the past; in other words, the algorithm is scoring you based on the behavior of other, similar individuals. And algorithms are really good at this.
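The idea of scoring an applicant by the behavior of similar past borrowers can be sketched with a tiny nearest-neighbor scorer. Everything here is made up for illustration: the two features, the six past borrowers, and the choice of k are all assumptions, not anything the talk specifies.

```python
import numpy as np

# Hypothetical history: each row is (income in $k, debt-to-income ratio),
# and the label says whether that past borrower repaid (1) or defaulted (0).
history = np.array([
    [80, 0.20], [75, 0.25], [90, 0.10],   # repaid
    [30, 0.90], [28, 0.85], [35, 0.80],   # defaulted
])
repaid = np.array([1, 1, 1, 0, 0, 0])

def credit_score(applicant, k=3):
    """Score an applicant as the repayment rate of the k most similar past borrowers."""
    distances = np.linalg.norm(history - applicant, axis=1)
    nearest = np.argsort(distances)[:k]
    return repaid[nearest].mean()

# An applicant who resembles the past repayers gets a high score.
print(credit_score(np.array([78, 0.22])))
```

Note that nothing in this sketch asks where the historical labels came from; the model simply reproduces whatever patterns, fair or unfair, the history contains.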
In 2019, Apple and Goldman Sachs released a credit card, aptly named the Apple Card. But when Janet Hill pulled out her iPhone and applied for the Apple Card, an algorithm instantaneously issued her a card, and she was given a credit limit that was ten times lower than her husband's. They share all the same assets, and they share all the same accounts, yet her husband, who happened to be Steve Wozniak, got ten times the credit that she did. Now, the algorithm doesn't even use gender as a factor; they didn't want this bias to exist. In fact, they made sure gender wasn't even a consideration in the algorithm. But before we address how this likely happened, I want to use a story to explain some of the different types of biases that exist.
In late 2017, I started a team to help companies climb the ladder to AI. When we set out on this mission, we established a cognitive bias by having a goal to create a team that was as diverse as possible. We created a framing bias by centering the entire hiring process to support our cognitive bias. After that, we did something quite simple: we worked to minimize a selection bias by using a set of job postings that were designed to be more inclusive. We took this approach because there is a difference in the types of job descriptions that different groups will and will not apply to, due to an implicit bias.

In addition to these simple adjustments to the job postings, we set a requirement: before anyone could start interviewing, we needed to make sure we had a pool of qualified candidates that was at least 50 percent diverse. This decision, consciously establishing a selection bias, proved to be one of the most important parts of bringing this team together. It was so simple, but it was so impactful. And I want to be clear: we had no requirement on the mix of candidates that we would or would not offer jobs to. In fact, we had a very rigorous process, and only the most qualified candidates got hired, with no exceptions. When we finished, the team of a hundred data scientists was nearly twice as diverse as the industry average, with nearly an equal number of men and women, who spoke more than 26 different languages and were from a wide variety of cultural, religious, and geopolitical backgrounds. Underneath all this was a confirmation bias, in the form of a hand-selected set of academic and real-world research.

But why is this so important when we start to think about the development of AI? The same types of biases that are expressed in the story about the data science team apply directly to AI. That's because humans build AI, and we come to the table with a set of biases; and data is used to teach, or train, the AI, and data has bias based on the historical decisions and actions of humans. These factors impact the level of bias an AI will have.
So if we go back to the Apple Card example: while they explicitly removed gender, there were other factors associated with gender that the AI algorithm identified in order to classify individuals based on a perceived risk. These features of the data were based on a historical bias. They were not explicitly "are you a man or are you a woman," but they were still associated with being a man or being a woman, and that led to the unintended bias that occurred.
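One way to see how bias survives the removal of a protected attribute is to check how strongly the remaining features correlate with it. This is a sketch on synthetic data: the "proxy" feature, the noise level, and the sample size are all invented for illustration, not drawn from the Apple Card case.

```python
import numpy as np

rng = np.random.default_rng(42)

# Protected attribute the model never sees (0/1).
gender = rng.integers(0, 2, size=1000)

# Hypothetical proxy feature (say, a spending-pattern score) that,
# for historical reasons, tracks gender closely plus some noise.
proxy = gender + rng.normal(0, 0.3, size=1000)

# Even with gender excluded from training, a feature the model IS
# allowed to see still carries most of the gender signal:
correlation = np.corrcoef(gender, proxy)[0, 1]
print(f"correlation between proxy and gender: {correlation:.2f}")
```

A model trained on the proxy can therefore behave almost exactly as if it had been given gender directly.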
As we look at the pandemic we're facing, it's only increasing the rate and pace of adoption of AI. I know sometimes AI can seem abstract, but you are already impacted by AI every day. It's in every industry: from media and entertainment, when you think about Netflix recommending a movie to you; to automotive, with the autonomous and semi-autonomous cars driving around; even telecommunications, with the need to automatically allocate and prioritize network capacity based on demand; and finally, to the response to the pandemic, as we forecast the impact that rates of infection have on whether economies are able to reopen or not. And bias in any of these interactions can have a direct impact on our lives, from the simply annoying, when Netflix tells you to watch Bridget Jones and you really just want to watch The Fifth Element, to predicting what happens to the availability of critical supplies if the pandemic shuts down an entire economy.
As AI becomes more pervasive, we need to understand, mitigate, and remediate bias in AI. The development of an AI algorithm can be thought of as similar to our own human development. Just as we are born mostly without biases, AI algorithms are not inherently biased. Environments introduce bias to us, both positive and negative, and data is the algorithmic equivalent of an environment for AI. External pressure causes us to adjust our biases over time, just as data changes over time; and as we mature, we're introduced into new environments, just as AI algorithms learn from new data over time. AI is very good at picking up on small details in its environment and classifying groups or individuals based on the data, which is sometimes biased.
So think back again to the Apple Card: gender was absolutely not a feature of the algorithm, and if AI has no inherent bias, then where did this bias come from? While data sets may seem like the most obvious source of bias in AI, bias can also be introduced by teams that don't have proper training. And if teams are not sufficiently diverse, they are more likely to introduce a cognitive bias when they're setting up the problem, or a framing bias as they lay out the experimental design, or a selection bias when they start picking algorithms. And this is in addition to whatever biases may already exist in the data from the historical actions of humans.
so let’s go through another story
several years ago
a bank started using ai to decide if
mortgages should be given to applicants
or not
since this decision can have serious
long-term impact
on the lives of families the bank was
very careful
to ensure that the algorithms didn’t
have
any gender racial religious or ethnic
biases
however even when everything else was
the same the algorithm started to deny
people of color mortgages at a higher
rate
now remember race gender religion and
ethnicity were specifically excluded
from the algorithm
so what could have happened here it
turns out that address was collected
and in the united states and other parts
of the world
people of similar backgrounds tend to
live together in communities
and it also turns out that people of
color have been has
been historically denied credit at much
higher rates
and have historically lived in certain
zip codes
so if you happen to live in one of these
zip codes
you are more likely to be denied credit
This example demonstrates a few things. First, two or more pieces of data, or features, are likely to be very tightly connected, or correlated; in this case, race, address, and zip code are tightly correlated. And even though race was excluded, the algorithm found the result of historical bias in race, correlated to a separate set of features: address and zip code. Second, it highlights the effect of biased data, and how bias is introduced into the data. Because the model was tainted by a historical bias in the data, it was making a bunch of bad decisions, really fast.
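A first-pass audit for this kind of outcome is to compare approval rates across groups and compute a disparate impact ratio; the "four-fifths rule" from US employment guidance is one common threshold. The numbers below are invented to echo the story, and the protected attribute is assumed to be available to the auditors even though the model never saw it.

```python
import numpy as np

# Hypothetical audit sample: the model's approve (1) / deny (0) decisions,
# split by a protected attribute recovered only for auditing purposes.
approved_group_a = np.array([1] * 72 + [0] * 28)   # 72% approved
approved_group_b = np.array([1] * 48 + [0] * 52)   # 48% approved

rate_a = approved_group_a.mean()
rate_b = approved_group_b.mean()

# Disparate impact ratio: the lower approval rate over the higher one.
di_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

# The four-fifths rule flags ratios below 0.8 as potential disparate impact.
print(f"disparate impact ratio: {di_ratio:.2f} -> flagged: {di_ratio < 0.8}")
```

A check like this catches the symptom; untangling which proxy features caused it, as the bank's teams had to do, is the harder part.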
So what does it look like to truly understand bias? It takes highly talented and diverse teams building the AI. It is a set of separate AI algorithms that are used to identify outliers that represent both known and unknown biases. It's making sure we frame projects in a way that's as unbiased as possible. And it's being transparent with the AI, in order to create a trusted foundation for the AI.
Now, the process of mitigating and remediating bias can still be challenging, and to understand why, let's go back to the example of the bank. They took this very seriously, and their intent was not to be biased against people of color, or anyone else for that matter. In fact, the whole point of implementing the AI was to remove the potential of implicit bias affecting the decisions of their underwriters and mortgage brokers. Now, it was not easy for them to understand what happened; remember, they went to great lengths to try to prevent this from happening in the first place. It was such a challenge primarily because their teams had to manually go back through the algorithms and untangle what happened.
And this isn't an easy process. But we're at an amazing point in time: technologically, we are at the point where the bias mitigation process can be automatic, and can be integral to the entire development life cycle of AI. The right people with the right tools and the right technology can start creating a fantastic future of less biased and more ethical AI, in the form of a fully automated, end-to-end process that accounts for bias at every step of the way and is overseen by talented and diverse teams.
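The talk doesn't name a specific technique, but one well-known automated pre-processing step in this life cycle is reweighing (Kamiran and Calders): assign each training example a weight so that, after weighting, group membership is statistically independent of the outcome. A minimal sketch with made-up labels, assuming two groups and a binary approval label:

```python
import numpy as np

# Hypothetical biased training set: group 0 has a much higher approval rate.
group = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
label = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0])

# Reweighing: weight(g, y) = P(g) * P(y) / P(g, y), so in the weighted
# data, group and outcome are independent.
weights = np.zeros(len(label))
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        weights[mask] = (group == g).mean() * (label == y).mean() / mask.mean()

def weighted_rate(g):
    """Weighted approval rate for group g."""
    m = group == g
    return (weights[m] * label[m]).sum() / weights[m].sum()

print(weighted_rate(0), weighted_rate(1))  # equal after reweighing
```

Training on the weighted examples nudges the model away from the historical disparity without editing any individual record, which is one reason this step can be automated inside a pipeline.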
Thinking about the new reality we live in today, thanks to the pandemic: entire economies are shutting down, and this is resulting in a world that was once physically connected via air travel becoming completely remote, where millions of people are without work and many others are working remotely. These fundamental changes in the world are exponentially accelerating the rate and pace of the implementation and adoption of AI. Whatever trajectory we were on has been accelerated, because we're relying on data and AI more than ever. But as we interact with AI in our daily lives, we need to be cognizant of the fact that biases can be, and often are, impacting us directly and indirectly. We need to keep in mind that even ethical AI is biased, even if it's consciously biased in a way that's aligned to societal norms.
Now, we've all heard about the possible benefits of AI: everything from better customer service, to more efficient and resilient supply chains, to faster and smarter drug discovery processes, and so on. And many of us look forward to the innovations and impact that AI can offer. Organizations and individuals want to know that they can trust their data and their AI, and explain how it came to a recommendation. Think about how many more organizations would be ready to use AI if they could rely on a trusted and transparent process.
Now, when I think about the task at hand, I know we have a lot of work to do, but I am very hopeful. Humans created the bias that's out there in the world today, and together we have a shared responsibility to make sure that AI reflects the best in human thinking, not the worst. Done right, done ethically, AI will help us emerge from the pandemic and guide us toward a better, more equitable society.