How a handful of tech companies control billions of minds every day
Tristan Harris

I want you to imagine

walking into a room,

a control room with a bunch of people,

a hundred people, hunched
over a desk with little dials,

and that that control room

will shape the thoughts and feelings

of a billion people.

This might sound like science fiction,

but this actually exists

right now, today.

I know because I used to be
in one of those control rooms.

I was a design ethicist at Google,

where I studied: how do you ethically
steer people’s thoughts?

Because what we don’t talk about
is how the handful of people

working at a handful
of technology companies

through their choices will steer
what a billion people are thinking today.

Because when you pull out your phone

and they design how this works
or what’s on the feed,

it’s scheduling little blocks
of time in our minds.

If you see a notification,
it schedules you to have thoughts

that maybe you didn’t intend to have.

If you swipe over that notification,

it schedules you into spending
a little bit of time

getting sucked into something

that maybe you didn’t intend
to get sucked into.

When we talk about technology,

we tend to talk about it
as this blue sky opportunity.

It could go any direction.

And I want to get serious for a moment

and tell you why it’s going
in a very specific direction.

Because it’s not evolving randomly.

There’s a hidden goal
driving the direction

of all of the technology we make,

and that goal is the race
for our attention.

Because every news site,

TED, elections, politicians,

games, even meditation apps

have to compete for one thing,

which is our attention,

and there’s only so much of it.

And the best way to get people’s attention

is to know how someone’s mind works.

And there’s a whole bunch
of persuasive techniques

that I learned in college at a lab
called the Persuasive Technology Lab

to get people’s attention.

A simple example is YouTube.

YouTube wants to maximize
how much time you spend.

And so what do they do?

They autoplay the next video.

And let’s say that works really well.

They’re getting a little bit
more of people’s time.

Well, if you’re Netflix,
you look at that and say,

well, that’s shrinking my market share,

so I’m going to autoplay the next episode.

But then if you’re Facebook,

you say, that’s shrinking
all of my market share,

so now I have to autoplay
all the videos in the newsfeed

before waiting for you to click play.

So the internet is not evolving at random.

The reason it feels
like it’s sucking us in the way it is

is because of this race for attention.

We know where this is going.

Technology is not neutral,

and it becomes this race
to the bottom of the brain stem

of who can go lower to get it.

Let me give you an example of Snapchat.

If you didn’t know,
Snapchat is the number one way

that teenagers in
the United States communicate.

So if you’re like me, and you use
text messages to communicate,

Snapchat is that for teenagers,

and there’s, like,
a hundred million of them that use it.

And they invented
a feature called Snapstreaks,

which shows the number of days in a row

that two people have
communicated with each other.

In other words, what they just did

is they gave two people
something they don’t want to lose.

Because if you’re a teenager,
and you have 150 days in a row,

you don’t want that to go away.

And so think of the little blocks of time
that that schedules in kids' minds.

This isn’t theoretical:
when kids go on vacation,

it’s been shown they give their passwords
to up to five other friends

to keep their Snapstreaks going,

even when they can’t do it.

And they have, like, 30 of these things,

and so they have to take photos
of just pictures or walls

or ceilings just to get through their day.

So it’s not even like
they’re having real conversations.

We have a temptation to think about this

as, oh, they’re just using Snapchat

the way we used to
gossip on the telephone.

It’s probably OK.

Well, what this misses
is that in the 1970s,

when you were just
gossiping on the telephone,

there wasn’t a hundred engineers
on the other side of the screen

who knew exactly
how your psychology worked

and orchestrated you
into a double bind with each other.

Now, if this is making you
feel a little bit of outrage,

notice that that thought
just comes over you.

Outrage is a really good way also
of getting your attention,

because we don’t choose outrage.

It happens to us.

And if you’re the Facebook newsfeed,

whether you’d want to or not,

you actually benefit when there’s outrage.

Because outrage
doesn’t just schedule a reaction

in emotional time, space, for you.

We want to share that outrage
with other people.

So we want to hit share and say,

“Can you believe the thing
that they said?”

And so outrage works really well
at getting attention,

such that if Facebook had a choice
between showing you the outrage feed

and a calm newsfeed,

they would want
to show you the outrage feed,

not because someone
consciously chose that,

but because that worked better
at getting your attention.

And the newsfeed control room
is not accountable to us.

It’s only accountable
to maximizing attention.

It’s also accountable,

because of the business model
of advertising,

to anybody who can pay the most
to actually walk into the control room

and say, “That group over there,

I want to schedule these thoughts
into their minds.”

So you can target,

you can precisely target a lie

directly to the people
who are most susceptible.

And because this is profitable,
it’s only going to get worse.

So I’m here today

because the costs are so obvious.

I don’t know a more urgent
problem than this,

because this problem
is underneath all other problems.

It’s not just taking away our agency

to spend our attention
and live the lives that we want,

it’s changing the way
that we have our conversations,

it’s changing our democracy,

and it’s changing our ability
to have the conversations

and relationships we want with each other.

And it affects everyone,

because a billion people
have one of these in their pocket.

So how do we fix this?

We need to make three radical changes

to technology and to our society.

The first is we need to acknowledge
that we are persuadable.

Once you start understanding

that your mind can be scheduled
into having little thoughts

or little blocks of time
that you didn’t choose,

wouldn’t we want to use that understanding

and protect against the way
that that happens?

I think we need to see ourselves
fundamentally in a new way.

It’s almost like a new period
of human history,

like the Enlightenment,

but almost a kind of
self-aware Enlightenment,

that we can be persuaded,

and there might be something
we want to protect.

The second is we need new models
and accountability systems

so that as the world gets better
and more and more persuasive over time –

because it’s only going
to get more persuasive –

that the people in those control rooms

are accountable and transparent
to what we want.

The only form of ethical
persuasion that exists

is when the goals of the persuader

are aligned with the goals
of the persuadee.

And that involves questioning big things,
like the business model of advertising.

Lastly,

we need a design renaissance,

because once you have
this view of human nature,

that you can steer the timelines
of a billion people –

just imagine, there’s people
who have some desire

about what they want to do
and what they want to be thinking

and what they want to be feeling
and how they want to be informed,

and we’re all just tugged
into these other directions.

And you have a billion people just tugged
into all these different directions.

Well, imagine an entire design renaissance

that tried to orchestrate
the exact and most empowering

time-well-spent way
for those timelines to happen.

And that would involve two things:

one would be protecting
against the timelines

that we don’t want to be experiencing,

the thoughts that we
wouldn’t want to be happening,

so that when that ding happens,
not having the ding that sends us away;

and the second would be empowering us
to live out the timeline that we want.

So let me give you a concrete example.

Today, let’s say your friend
cancels dinner on you,

and you are feeling a little bit lonely.

And so what do you do in that moment?

You open up Facebook.

And in that moment,

the designers in the control room
want to schedule exactly one thing,

which is to maximize how much time
you spend on the screen.

Now, instead, imagine if those designers
created a different timeline

that was the easiest way,
using all of their data,

to actually help you get out
with the people that you care about?

Just think, alleviating
all loneliness in society,

if that was the timeline that Facebook
wanted to make possible for people.

Or imagine a different conversation.

Let’s say you wanted to post
something supercontroversial on Facebook,

which is a really important
thing to be able to do,

to talk about controversial topics.

And right now, when there’s
that big comment box,

it’s almost asking you,
what key do you want to type?

In other words, it’s scheduling
a little timeline of things

you’re going to continue
to do on the screen.

And imagine instead that there was
another button there saying,

what would be most
time well spent for you?

And you click “host a dinner.”

And right there
underneath the item it said,

“Who wants to RSVP for the dinner?”

And so you’d still have a conversation
about something controversial,

but you’d be having it in the most
empowering place on your timeline,

which would be at home that night
with a bunch of friends over

to talk about it.

So imagine we’re running, like,
a find and replace

on all of the timelines
that are currently steering us

towards more and more
screen time persuasively

and replacing all of those timelines

with what do we want in our lives.

It doesn’t have to be this way.

Instead of handicapping our attention,

imagine if we used all of this data
and all of this power

and this new view of human nature

to give us a superhuman ability to focus

and a superhuman ability to put
our attention to what we cared about

and a superhuman ability
to have the conversations

that we need to have for democracy.

The most complex challenges in the world

require not just us
to use our attention individually.

They require us to use our attention
and coordinate it together.

Climate change is going to require
that a lot of people

are being able
to coordinate their attention

in the most empowering way together.

And imagine creating
a superhuman ability to do that.

Sometimes the world’s
most pressing and important problems

are not these hypothetical future things
that we could create in the future.

Sometimes the most pressing problems

are the ones that are
right underneath our noses,

the things that are already directing
a billion people’s thoughts.

And maybe instead of getting excited
about the new augmented reality

and virtual reality
and these cool things that could happen,

which are going to be susceptible
to the same race for attention,

if we could fix the race for attention

on the thing that’s already
in a billion people’s pockets.

Maybe instead of getting excited

about the most exciting
new cool fancy education apps,

we could fix the way
kids' minds are getting manipulated

into sending empty messages
back and forth.

(Applause)

Maybe instead of worrying

about hypothetical future
runaway artificial intelligences

that are maximizing for one goal,

we could solve the runaway
artificial intelligence

that already exists right now,

which are these newsfeeds
maximizing for one thing.

It’s almost like instead of running away
to colonize new planets,

we could fix the one
that we’re already on.

(Applause)

Solving this problem

is critical infrastructure
for solving every other problem.

There’s nothing in your life
or in our collective problems

that does not require our ability
to put our attention where we care about.

At the end of our lives,

all we have is our attention and our time.

What will be time well spent for ours?

Thank you.

(Applause)

Chris Anderson: Tristan, thank you.
Hey, stay up here a sec.

First of all, thank you.

I know we asked you to do this talk
on pretty short notice,

and you’ve had quite a stressful week

getting this thing together, so thank you.

Some people listening might say,
what you complain about is addiction,

and all these people doing this stuff,
for them it’s actually interesting.

All these design decisions

have built user content
that is fantastically interesting.

The world’s more interesting
than it ever has been.

What’s wrong with that?

Tristan Harris:
I think it’s really interesting.

One way to see this
is if you’re just YouTube, for example,

you want to always show
the more interesting next video.

You want to get better and better
at suggesting that next video,

but even if you could propose
the perfect next video

that everyone would want to watch,

it would just be better and better
at keeping you hooked on the screen.

So what’s missing in that equation

is figuring out what
our boundaries would be.

You would want YouTube to know
something about, say, falling asleep.

The CEO of Netflix recently said,

“our biggest competitors
are Facebook, YouTube and sleep.”

And so what we need to recognize
is that the human architecture is limited

and that we have certain boundaries
or dimensions of our lives

that we want to be honored and respected,

and technology could help do that.

(Applause)

CA: I mean, could you make the case

that part of the problem here is that
we’ve got a naïve model of human nature?

So much of this is justified
in terms of human preference,

where we’ve got these algorithms
that do an amazing job

of optimizing for human preference,

but which preference?

There’s the preferences
of things that we really care about

when we think about them

versus the preferences
of what we just instinctively click on.

If we could implant that more nuanced
view of human nature in every design,

would that be a step forward?

TH: Absolutely. I mean, I think right now

it’s as if all of our technology
is basically only asking our lizard brain

what’s the best way
to just impulsively get you to do

the next tiniest thing with your time,

instead of asking you in your life

what would be most
time well spent for you?

What would be the perfect timeline
that might include something later,

would be time well spent for you
here at TED in your last day here?

CA: So if Facebook and Google
and everyone said to us first up,

“Hey, would you like us
to optimize for your reflective brain

or your lizard brain? You choose.”

TH: Right. That would be one way. Yes.

CA: You said persuadability,
that’s an interesting word to me

because to me there’s
two different types of persuadability.

There’s the persuadability
that we’re trying right now

of reason and thinking
and making an argument,

but I think you’re almost
talking about a different kind,

a more visceral type of persuadability,

of being persuaded without
even knowing that you’re thinking.

TH: Exactly. The reason
I care about this problem so much is

I studied at a lab called
the Persuasive Technology Lab at Stanford

that taught [students how to recognize]
exactly these techniques.

There’s conferences and workshops
that teach people all these covert ways

of getting people’s attention
and orchestrating people’s lives.

And it’s because most people
don’t know that that exists

that this conversation is so important.

CA: Tristan, you and I, we both know
so many people from all these companies.

There are actually many here in the room,

and I don’t know about you,
but my experience of them

is that there is
no shortage of good intent.

People want a better world.

They are actually – they really want it.

And I don’t think anything you’re saying
is that these are evil people.

It’s a system where there’s
these unintended consequences

that have really got out of control –

TH: Of this race for attention.

It’s the classic race to the bottom
when you have to get attention,

and it’s so tense.

The only way to get more
is to go lower on the brain stem,

to go lower into outrage,
to go lower into emotion,

to go lower into the lizard brain.

CA: Well, thank you so much for helping us
all get a little bit wiser about this.

Tristan Harris, thank you.
TH: Thank you very much.

(Applause)
