We're building a dystopia just to make people click on ads
Zeynep Tufekci

So when people voice fears
of artificial intelligence,

very often, they invoke images
of humanoid robots run amok.

You know? Terminator?

You know, that might be
something to consider,

but that’s a distant threat.

Or, we fret about digital surveillance

with metaphors from the past.

“1984,” George Orwell’s “1984,”

it’s hitting the bestseller lists again.

It’s a great book,

but it’s not the correct dystopia
for the 21st century.

What we need to fear most

is not what artificial intelligence
will do to us on its own,

but how the people in power
will use artificial intelligence

to control us and to manipulate us

in novel, sometimes hidden,

subtle and unexpected ways.

Much of the technology

that threatens our freedom
and our dignity in the near-term future

is being developed by companies

in the business of capturing
and selling our data and our attention

to advertisers and others:

Facebook, Google, Amazon,

Alibaba, Tencent.

Now, artificial intelligence has started
bolstering their business as well.

And it may seem
like artificial intelligence

is just the next thing after online ads.

It’s not.

It’s a jump in category.

It’s a whole different world,

and it has great potential.

It could accelerate our understanding
of many areas of study and research.

But to paraphrase
a famous Hollywood philosopher,

“With prodigious potential
comes prodigious risk.”

Now let’s look at a basic fact
of our digital lives, online ads.

Right? We kind of dismiss them.

They seem crude, ineffective.

We’ve all had the experience
of being followed on the web

by an ad based on something
we searched or read.

You know, you look up a pair of boots

and for a week, those boots are following
you around everywhere you go.

Even after you succumb and buy them,
they’re still following you around.

We’re kind of inured to that kind
of basic, cheap manipulation.

We roll our eyes and we think,
“You know what? These things don’t work.”

Except, online,

the digital technologies are not just ads.

Now, to understand that,
let’s think of a physical world example.

You know how, at the checkout counters
at supermarkets, near the cashier,

there’s candy and gum
at the eye level of kids?

That’s designed to make them
whine at their parents

just as the parents
are about to sort of check out.

Now, that’s a persuasion architecture.

It’s not nice, but it kind of works.

That’s why you see it
in every supermarket.

Now, in the physical world,

such persuasion architectures
are kind of limited,

because you can only put
so many things by the cashier. Right?

And the candy and gum,
it’s the same for everyone,

even though it mostly works

only for people who have
whiny little humans beside them.

In the physical world,
we live with those limitations.

In the digital world, though,

persuasion architectures
can be built at the scale of billions

and they can target, infer, understand

and be deployed at individuals

one by one

by figuring out your weaknesses,

and they can be sent
to everyone's private phone screen,

so it’s not visible to us.

And that’s different.

And that’s just one of the basic things
that artificial intelligence can do.

Now, let’s take an example.

Let’s say you want to sell
plane tickets to Vegas. Right?

So in the old world, you could think
of some demographics to target

based on experience
and what you can guess.

You might try to advertise to, oh,

men between the ages of 25 and 35,

or people who have
a high limit on their credit card,

or retired couples. Right?

That’s what you would do in the past.

With big data and machine learning,

that’s not how it works anymore.

So to imagine that,

think of all the data
that Facebook has on you:

every status update you ever typed,

every Messenger conversation,

every place you logged in from,

all your photographs
that you uploaded there.

If you start typing something
and change your mind and delete it,

Facebook keeps those
and analyzes them, too.

Increasingly, it tries
to match you with your offline data.

It also purchases
a lot of data from data brokers.

It could be everything
from your financial records

to a good chunk of your browsing history.

Right? In the US,
such data is routinely collected,

collated and sold.

In Europe, they have tougher rules.

So what happens then is,

by churning through all that data,
these machine-learning algorithms –

that’s why they’re called
learning algorithms –

they learn to understand
the characteristics of people

who purchased tickets to Vegas before.

When they learn this from existing data,

they also learn
how to apply this to new people.

So if they’re presented with a new person,

they can classify whether that person
is likely to buy a ticket to Vegas or not.

Fine. You’re thinking,
an offer to buy tickets to Vegas.

I can ignore that.

But the problem isn’t that.

The problem is,

we no longer really understand
how these complex algorithms work.

We don’t understand
how they’re doing this categorization.

It’s giant matrices,
thousands of rows and columns,

maybe millions of rows and columns,

and not the programmers

and not anybody who looks at it,

even if you have all the data,

understands anymore
how exactly it’s operating

any more than you’d know
what I was thinking right now

if you were shown
a cross section of my brain.

It’s like we’re not programming anymore,

we’re growing intelligence
that we don’t truly understand.

And these things only work
if there’s an enormous amount of data,

so they also encourage
deep surveillance on all of us

so that the machine learning
algorithms can work.

That’s why Facebook wants
to collect all the data it can about you.

The algorithms work better.

So let’s push that Vegas example a bit.

What if the system
that we do not understand

was picking up that it’s easier
to sell Vegas tickets

to people who are bipolar
and about to enter the manic phase?

Such people tend to become
overspenders, compulsive gamblers.

They could do this, and you’d have no clue
that’s what they were picking up on.

I gave this example
to a bunch of computer scientists once

and afterwards, one of them came up to me.

He was troubled and he said,
“That’s why I couldn’t publish it.”

I was like, “Couldn’t publish what?”

He had tried to see whether you can indeed
figure out the onset of mania

from social media posts
before clinical symptoms,

and it had worked,

and it had worked very well,

and he had no idea how it worked
or what it was picking up on.

Now, the problem isn’t solved
if he doesn’t publish it,

because there are already companies

that are developing
this kind of technology,

and a lot of the stuff
is just off the shelf.

This is not very difficult anymore.

Do you ever go on YouTube
meaning to watch one video

and an hour later you’ve watched 27?

You know how YouTube
has this column on the right

that says, “Up next”

and it autoplays something?

It’s an algorithm

picking what it thinks
that you might be interested in

and maybe not find on your own.

It’s not a human editor.

It’s what algorithms do.

It picks up on what you have watched
and what people like you have watched,

and infers that that must be
what you’re interested in,

what you want more of,

and just shows you more.
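(Here is a minimal sketch of that "people like you" logic. The watch histories and the overlap-based similarity are made-up illustrations; YouTube's real recommender is proprietary and far more complex.)

```python
# A toy "what people like you have watched" recommender.
watch_history = {
    "you":    {"video_A", "video_B"},
    "user_2": {"video_A", "video_B", "video_C"},
    "user_3": {"video_A", "video_D"},
    "user_4": {"video_E"},
}

def similarity(a, b):
    # Jaccard overlap between two watch histories.
    return len(a & b) / len(a | b)

def recommend(target, histories):
    mine = histories[target]
    scores = {}
    for user, seen in histories.items():
        if user == target:
            continue
        sim = similarity(mine, seen)
        for video in seen - mine:          # videos you haven't watched yet
            scores[video] = scores.get(video, 0) + sim
    # "Up next": whatever similar users watched, not a human editor's pick.
    return max(scores, key=scores.get)

print(recommend("you", watch_history))  # -> "video_C"
```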

It sounds like a benign
and useful feature,

except when it isn’t.

So in 2016, I attended rallies
of then-candidate Donald Trump

to study as a scholar
the movement supporting him.

I study social movements,
so I was studying it, too.

And then I wanted to write something
about one of his rallies,

so I watched it a few times on YouTube.

YouTube started recommending to me

and autoplaying to me
white supremacist videos

in increasing order of extremism.

If I watched one,

it served up one even more extreme

and autoplayed that one, too.

If you watch Hillary Clinton
or Bernie Sanders content,

YouTube recommends
and autoplays conspiracy-left videos,

and it goes downhill from there.

Well, you might be thinking,
this is politics, but it’s not.

This isn’t about politics.

This is just the algorithm
figuring out human behavior.

I once watched a video
about vegetarianism on YouTube

and YouTube recommended
and autoplayed a video about being vegan.

It’s like you’re never
hardcore enough for YouTube.

(Laughter)

So what’s going on?

Now, YouTube’s algorithm is proprietary,

but here’s what I think is going on.

The algorithm has figured out

that if you can entice people

into thinking that you can
show them something more hardcore,

they’re more likely to stay on the site

watching video after video
going down that rabbit hole

while Google serves them ads.

Now, with nobody minding
the ethics of the store,

these sites can profile people

who are Jew haters,

who think that Jews are parasites

and who have such explicit
anti-Semitic content,

and let you target them with ads.

They can also mobilize algorithms

to find for you look-alike audiences,

people who do not have such explicit
anti-Semitic content on their profile

but who the algorithm detects
may be susceptible to such messages,

and lets you target them with ads, too.
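(A minimal sketch of what look-alike expansion amounts to: take a seed audience and rank everyone else by how similar their behavioral features are. The feature vectors and the cosine-similarity scoring are illustrative assumptions, not the platforms' actual method.)

```python
# A toy look-alike audience expansion: rank users by similarity to a seed group.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Made-up behavioral feature vectors (pages liked, groups joined, etc.).
seed_audience = [[0.9, 0.1, 0.8], [0.8, 0.2, 0.7]]
other_users   = {"user_A": [0.85, 0.15, 0.75],
                 "user_B": [0.10, 0.90, 0.20]}

# Average the seed profiles, then rank everyone else by similarity to that profile.
centroid = [sum(col) / len(seed_audience) for col in zip(*seed_audience)]
ranked = sorted(other_users,
                key=lambda u: cosine(centroid, other_users[u]),
                reverse=True)
print(ranked)  # most "look-alike" users first -> ['user_A', 'user_B']
```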

Now, this may sound
like an implausible example,

but this is real.

ProPublica investigated this

and found that you can indeed
do this on Facebook,

and Facebook helpfully
offered up suggestions

on how to broaden that audience.

BuzzFeed tried it for Google,
and very quickly they found,

yep, you can do it on Google, too.

And it wasn’t even expensive.

The ProPublica reporter
spent about 30 dollars

to target this category.

So last year, Donald Trump’s
social media manager disclosed

that they were using Facebook dark posts
to demobilize people,

not to persuade them,

but to convince them not to vote at all.

And to do that,
they targeted specifically,

for example, African-American men
in key cities like Philadelphia,

and I’m going to read
exactly what he said.

I’m quoting.

They were using “nonpublic posts

whose viewership the campaign controls

so that only the people
we want to see it see it.

We modeled this.

It will dramatically affect her ability
to turn these people out.”

What’s in those dark posts?

We have no idea.

Facebook won’t tell us.

So Facebook also algorithmically
arranges the posts

that your friends put on Facebook,
or the pages you follow.

It doesn’t show you
everything chronologically.

It puts the order in the way
that the algorithm thinks will entice you

to stay on the site longer.
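(A minimal sketch of the difference between a chronological feed and an engagement-ranked one. The "predicted_engagement" scores are made-up stand-ins for whatever the proprietary model actually estimates.)

```python
# Chronological feed vs. engagement-ranked feed, on toy data.
from datetime import datetime

posts = [
    {"id": 1, "time": datetime(2017, 9, 1, 9),  "predicted_engagement": 0.12},
    {"id": 2, "time": datetime(2017, 9, 1, 12), "predicted_engagement": 0.87},
    {"id": 3, "time": datetime(2017, 9, 1, 15), "predicted_engagement": 0.40},
]

chronological = sorted(posts, key=lambda p: p["time"], reverse=True)
algorithmic   = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["id"] for p in chronological])  # [3, 2, 1] -- newest first
print([p["id"] for p in algorithmic])    # [2, 3, 1] -- whatever keeps you scrolling
```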

Now, this has a lot of consequences.

You may be thinking
somebody is snubbing you on Facebook.

The algorithm may never
be showing your post to them.

The algorithm is prioritizing
some of them and burying the others.

Experiments show

that what the algorithm picks to show you
can affect your emotions.

But that’s not all.

It also affects political behavior.

So in 2010, in the midterm elections,

Facebook did an experiment
on 61 million people in the US

that was disclosed after the fact.

So some people were shown,
“Today is election day,”

the simpler one,

and some people were shown
the one with that tiny tweak

with those little thumbnails

of your friends who clicked on “I voted.”

This simple tweak.

OK? So the pictures were the only change,

and that post shown just once

turned out an additional 340,000 voters

in that election,

according to this research

as confirmed by the voter rolls.

A fluke? No.

Because in 2012,
they repeated the same experiment.

And that time,

that civic message shown just once

turned out an additional 270,000 voters.

For reference, the 2016
US presidential election

was decided by about 100,000 votes.

Now, Facebook can also
very easily infer what your politics are,

even if you’ve never
disclosed them on the site.

Right? These algorithms
can do that quite easily.

What if a platform with that kind of power

decides to turn out supporters
of one candidate over the other?

How would we even know about it?

Now, we started from someplace
seemingly innocuous –

online ads following us around –

and we’ve landed someplace else.

As a public and as citizens,

we no longer know
if we’re seeing the same information

or what anybody else is seeing,

and without a common basis of information,

little by little,

public debate is becoming impossible,

and we’re just at
the beginning stages of this.

These algorithms can quite easily infer

things like people’s ethnicity,

religious and political views,
personality traits,

intelligence, happiness,
use of addictive substances,

parental separation, age and gender,

just from Facebook likes.

These algorithms can identify protesters

even if their faces
are partially concealed.

These algorithms may be able
to detect people’s sexual orientation

just from their dating profile pictures.

Now, these are probabilistic guesses,

so they’re not going
to be 100 percent right,

but I don’t see the powerful resisting
the temptation to use these technologies

just because there are
some false positives,

which will of course create
a whole other layer of problems.

Imagine what a state can do

with the immense amount of data
it has on its citizens.

China is already using
face detection technology

to identify and arrest people.

And here’s the tragedy:

we’re building this infrastructure
of surveillance authoritarianism

merely to get people to click on ads.

And this won’t be
Orwell’s authoritarianism.

This isn’t “1984.”

Now, if authoritarianism
is using overt fear to terrorize us,

we’ll all be scared, but we’ll know it,

we’ll hate it and we’ll resist it.

But if the people in power
are using these algorithms

to quietly watch us,

to judge us and to nudge us,

to predict and identify
the troublemakers and the rebels,

to deploy persuasion
architectures at scale

and to manipulate individuals one by one

using their personal, individual
weaknesses and vulnerabilities,

and if they’re doing it at scale

through our private screens

so that we don’t even know

what our fellow citizens
and neighbors are seeing,

that authoritarianism
will envelop us like a spider’s web

and we may not even know we’re in it.

So Facebook’s market capitalization

is approaching half a trillion dollars.

It’s because it works great
as a persuasion architecture.

But the structure of that architecture

is the same whether you’re selling shoes

or whether you’re selling politics.

The algorithms do not know the difference.

The same algorithms set loose upon us

to make us more pliable for ads

are also organizing our political,
personal and social information flows,

and that’s what’s got to change.

Now, don’t get me wrong,

we use digital platforms
because they provide us with great value.

I use Facebook to keep in touch
with friends and family around the world.

I’ve written about how crucial
social media is for social movements.

I have studied how
these technologies can be used

to circumvent censorship around the world.

But it’s not that the people who run,
you know, Facebook or Google

are maliciously and deliberately trying

to make the country
or the world more polarized

and encourage extremism.

I read the many
well-intentioned statements

that these people put out.

But it’s not the intent or the statements
people in technology make that matter,

it’s the structures
and business models they’re building.

And that’s the core of the problem.

Either Facebook is a giant con
of half a trillion dollars

and ads don’t work on the site,

it doesn’t work
as a persuasion architecture,

or its power of influence
is of great concern.

It’s either one or the other.

It’s similar for Google, too.

So what can we do?

This needs to change.

Now, I can’t offer a simple recipe,

because we need to restructure

the whole way our
digital technology operates.

Everything from the way
technology is developed

to the way the incentives,
economic and otherwise,

are built into the system.

We have to face and try to deal with

the lack of transparency
created by the proprietary algorithms,

the structural challenge
of machine learning’s opacity,

all this indiscriminate data
that’s being collected about us.

We have a big task in front of us.

We have to mobilize our technology,

our creativity

and yes, our politics

so that we can build
artificial intelligence

that supports us in our human goals

but that is also constrained
by our human values.

And I understand this won’t be easy.

We might not even easily agree
on what those terms mean.

But if we take seriously

how these systems that we
depend on for so much operate,

I don’t see how we can postpone
this conversation anymore.

These structures

are organizing how we function

and they’re controlling

what we can and we cannot do.

And many of these ad-financed platforms,

they boast that they’re free.

In this context, it means
that we are the product that’s being sold.

We need a digital economy

where our data and our attention

are not for sale to the highest-bidding
authoritarian or demagogue.

(Applause)

So to go back to
that Hollywood paraphrase,

we do want the prodigious potential

of artificial intelligence
and digital technology to blossom,

but for that, we must face
this prodigious menace,

open-eyed and now.

Thank you.

(Applause)
