What tech companies know about your kids
Veronica Barassi

Transcriber: Leslie Gauthier
Reviewer: Joanna Pietrulewicz

Every day, every week,

we agree to terms and conditions.

And when we do this,

we provide companies with the lawful right

to do whatever they want with our data

and with the data of our children.

Which makes us wonder:

how much data are we giving
away about our children,

and what are its implications?

I’m an anthropologist,

and I’m also the mother
of two little girls.

And I started to become interested
in this question in 2015

when I suddenly realized
that there were vast –

almost unimaginable – amounts of data traces

that were being produced
and collected about children.

So I launched a research project,

which is called Child Data Citizen,

with the aim of filling in the blanks.

Now you may think
that I’m here to blame you

for posting photos
of your children on social media,

but that’s not really the point.

The problem is way bigger
than so-called “sharenting.”

This is about systems, not individuals.

You and your habits are not to blame.

For the very first time in history,

we are tracking
the individual data of children

from long before they’re born –

sometimes from the moment of conception,

and then throughout their lives.

You see, when parents decide to conceive,

they go online to look
for “ways to get pregnant,”

or they download ovulation-tracking apps.

When they do get pregnant,

they post ultrasounds
of their babies on social media,

they download pregnancy apps

or they consult Dr. Google
for all sorts of things,

like, you know –

for “miscarriage risk when flying”

or “abdominal cramps in early pregnancy.”

I know because I’ve done it –

many times.

And then, when the baby is born,
they track every nap,

every feed,

every life event
on different technologies.

And all of these technologies

transform the baby’s most intimate
behavioral and health data into profit

by sharing it with others.

So to give you an idea of how this works,

in 2019, the British Medical Journal
published research that showed

that out of 24 mobile health apps,

19 shared information with third parties.

And these third parties shared information
with 216 other organizations.

Of these 216 fourth parties,

only three belonged to the health sector.

The other companies that had access
to that data were big tech companies

like Google, Facebook or Oracle,

they were digital advertising companies

and there was also
a consumer credit reporting agency.

So, to be clear:

ad companies and credit agencies may
already have data points on little babies.

But mobile apps,
web searches and social media

are really just the tip of the iceberg,

because children are being tracked
by multiple technologies

in their everyday lives.

They’re tracked by home technologies
and virtual assistants in their homes.

They’re tracked by educational platforms

and educational technologies
in their schools.

They’re tracked by online records

and online portals
at their doctor’s office.

They’re tracked by their
internet-connected toys,

their online games

and many, many, many,
many other technologies.

So during my research,

a lot of parents came up to me
and they were like, “So what?

Why does it matter
if my children are being tracked?

We’ve got nothing to hide.”

Well, it matters.

It matters because today individuals
are not only being tracked,

they’re also being profiled
on the basis of their data traces.

Artificial intelligence and predictive
analytics are being used

to harness as much data as possible
about an individual’s life

from different sources:

family history, purchasing habits,
social media comments.

And then they bring this data together

to make data-driven decisions
about the individual.

And these technologies
are used everywhere.

Banks use them to decide loans.

Insurers use them to decide premiums.

Recruiters and employers use them

to decide whether one
is a good fit for a job or not.

The police and courts also use them

to determine whether one
is a potential criminal

or is likely to reoffend.

We have no knowledge or control

over the ways in which those who buy,
sell and process our data

are profiling us and our children.

But these profiles can come to impact
our rights in significant ways.

To give you an example,

in 2018 The New York Times
reported

that the data that had been gathered

through online
college-planning services –

services actually used by millions
of high school kids across the US

who are looking for a college
program or a scholarship –

had been sold to educational data brokers.

Now, researchers at Fordham
who studied educational data brokers

revealed that these companies
profiled kids as young as two

on the basis of different categories:

ethnicity, religion, affluence,

social awkwardness

and many other random categories.

And then they sell these profiles
together with the name of the kid,

their home address and contact details

to different companies,

including trade and career institutions,

student loan companies

and student credit card companies.

To push the boundaries,

the researchers at Fordham
asked an educational data broker

to provide them with a list
of 14-to-15-year-old girls

who were interested
in family planning services.

The data broker agreed
to provide them with the list.

So imagine how intimate
and how intrusive that is for our kids.

But educational data brokers
are really just an example.

The truth is that our children are being
profiled in ways that we cannot control

but that can significantly impact
their chances in life.

So we need to ask ourselves:

can we trust these technologies
when it comes to profiling our children?

Can we?

My answer is no.

As an anthropologist,

I believe that artificial intelligence
and predictive analytics can be great

to predict the course of a disease

or to fight climate change.

But we need to abandon the belief

that these technologies
can objectively profile humans

and that we can rely on them
to make data-driven decisions

about individual lives.

Because they can’t profile humans.

Data traces are not
the mirror of who we are.

Humans think one thing
and say the opposite,

feel one way and act differently.

Algorithmic predictions
based on our digital practices

cannot account for the unpredictability
and complexity of human experience.

But on top of that,

these technologies are always –

always –

in one way or another, biased.

You see, algorithms are by definition
sets of rules or steps

that have been designed to achieve
a specific result, OK?

But these sets of rules or steps
cannot be objective,

because they’ve been designed
by human beings

within a specific cultural context

and are shaped
by specific cultural values.

So when machines learn,

they learn from biased algorithms,

and they often learn
from biased databases as well.

At the moment, we’re seeing
the first examples of algorithmic bias.

And some of these examples
are frankly terrifying.

This year, the AI Now Institute
in New York published a report

that revealed that the AI technologies

that are being used
for predictive policing

have been trained on “dirty” data.

This is basically data
that had been gathered

during historical periods
of known racial bias

and nontransparent police practices.

Because these technologies
are being trained with dirty data,

they’re not objective,

and their outcomes are only
amplifying and perpetuating

police bias and error.

So I think we are faced
with a fundamental problem

in our society.

We are starting to trust technologies
when it comes to profiling human beings.

We know that in profiling humans,

these technologies
are always going to be biased

and are never really going to be accurate.

So what we need now
is actually a political solution.

We need governments to recognize
that our data rights are our human rights.

(Applause and cheers)

Until this happens, we cannot hope
for a more just future.

I worry that my daughters
are going to be exposed

to all sorts of algorithmic
discrimination and error.

You see, the difference
between me and my daughters

is that there’s no public record
out there of my childhood.

There’s certainly no database
of all the stupid things that I’ve done

and thought when I was a teenager.

(Laughter)

But for my daughters
this may be different.

The data that is being collected
from them today

may be used to judge them in the future

and can come to stand in the way of
their hopes and dreams.

I think that it’s time.

It’s time that we all step up.

It’s time that we start working together

as individuals,

as organizations and as institutions,

and that we demand
greater data justice for us

and for our children

before it’s too late.

Thank you.

(Applause)