How Twitter needs to change
Jack Dorsey

Chris Anderson:
What worries you right now?

You’ve been very open
about lots of issues on Twitter.

What would be your top worry

about where things are right now?

Jack Dorsey: Right now,
the health of the conversation.

So, our purpose is to serve
the public conversation,

and we have seen
a number of attacks on it.

We’ve seen abuse, we’ve seen harassment,

we’ve seen manipulation,

automation, human coordination,
misinformation.

So these are all dynamics
that we were not expecting

13 years ago when we were
starting the company.

But we do now see them at scale,

and what worries me most
is just our ability to address it

in a systemic way that is scalable,

that has a rigorous understanding
of how we’re taking action,

a transparent understanding
of how we’re taking action

and a rigorous appeals process
for when we’re wrong,

because we will be wrong.

Whitney Pennington Rodgers:
I’m really glad to hear

that that’s something that concerns you,

because I think there’s been
a lot written about people

who feel they’ve been abused
and harassed on Twitter,

and I think no one more so
than women and women of color

and black women.

And there’s been data that’s come out –

Amnesty International put out
a report a few months ago

where they showed that, for a subset
of active black female Twitter users,

on average, one in 10
of the tweets they received

was some form of harassment.

And so when you think about health
for the community on Twitter,

I’m interested to hear,
“health for everyone,”

but specifically: How are you looking
to make Twitter a safe space

for that subset, for women,
for women of color and black women?

JD: Yeah.

So it’s a pretty terrible situation

when you’re coming to a service

that, ideally, you want to learn
something about the world,

and you spend the majority of your time
reporting abuse, receiving abuse,

receiving harassment.

So what we’re looking most deeply at
is just the incentives

that the platform naturally provides
and the service provides.

Right now, the dynamic of the system
makes it super-easy to harass

and to abuse others through the service,

and unfortunately, the majority
of our system in the past

worked entirely based on people
reporting harassment and abuse.

So about midway last year,
we decided that we were going to apply

a lot more machine learning,
a lot more deep learning to the problem,

and try to be a lot more proactive
around where abuse is happening,

so that we can take the burden
off the victim completely.

And we’ve made some progress recently.

About 38 percent of abusive tweets
are now proactively identified

by machine learning algorithms

so that people don’t actually
have to report them.

But those that are identified
are still reviewed by humans,

so we do not take down content or accounts
without a human actually reviewing it.

But that was from zero percent
just a year ago.

So that meant, at that zero percent,

every single person who received abuse
had to actually report it,

which was a lot of work for them,
a lot of work for us

and just ultimately unfair.

The other thing that we’re doing
is making sure that we, as a company,

have representation of all the communities
that we’re trying to serve.

We can’t build a business
that is successful

unless we have a diversity
of perspective inside of our walls

that actually feel these issues
every single day.

And that’s not just with the team
that’s doing the work,

it’s also within our leadership as well.

So we need to continue to build empathy
for what people are experiencing

and give them better tools to act on it

and also give our customers
a much better and easier approach

to handle some of the things
that they’re seeing.

So a lot of what we’re doing
is around technology,

but we’re also looking at
the incentives on the service:

What does Twitter incentivize you to do
when you first open it up?

And in the past,

it’s incented a lot of outrage,
it’s incented a lot of mob behavior,

it’s incented a lot of group harassment.

And we have to look a lot deeper
at some of the fundamentals

of what the service is doing
to make the bigger shifts.

We can make a bunch of small shifts
around technology, as I just described,

but ultimately, we have to look deeply
at the dynamics in the network itself,

and that’s what we’re doing.

CA: But what’s your sense –

what is the kind of thing
that you might be able to change

that would actually
fundamentally shift behavior?

JD: Well, one of the things –

we started the service
with this concept of following an account,

as an example,

and I don’t believe that’s why
people actually come to Twitter.

I believe Twitter is best
as an interest-based network.

People come with a particular interest.

They have to do a ton of work
to find and follow the related accounts

around those interests.

What we could do instead
is allow you to follow an interest,

follow a hashtag, follow a trend,

follow a community,

which gives us the opportunity
to show all of the accounts,

all the topics, all the moments,
all the hashtags

that are associated with that
particular topic and interest,

which really opens up
the perspective that you see.

But that is a huge fundamental shift

to bias the entire network
away from just an account bias

towards a topics and interest bias.

CA: Because isn’t it the case

that one reason why you have
so much content on there

is a result of putting millions
of people around the world

in this kind of gladiatorial
contest with each other

for followers, for attention?

Like, from the point of view
of people who just read Twitter,

that’s not an issue,

but for the people who actually create it,
everyone’s out there saying,

“You know, I wish I had
a few more ‘likes,’ followers, retweets.”

And so they’re constantly experimenting,

trying to find the path to do that.

And what we’ve all discovered
is that the number one path to do that

is to be some form of provocative,

obnoxious, eloquently obnoxious,

like, eloquent insults
are a dream on Twitter,

where you rapidly pile up –

and it becomes this self-fueling
process of driving outrage.

How do you defuse that?

JD: Yeah, I mean, I think you’re spot on,

but that goes back to the incentives.

Like, one of the choices
we made in the early days was

we had this number that showed
how many people follow you.

We decided that number
should be big and bold,

and anything that’s on the page
that’s big and bold has importance,

and those are the things
that you want to drive.

Was that the right decision at the time?

Probably not.

If I had to start the service again,

I would not emphasize
the follower count as much.

I would not emphasize
the “like” count as much.

I don’t think I would even
create “like” in the first place,

because it doesn’t actually push

what we believe now
to be the most important thing,

which is healthy contribution
back to the network

and conversation to the network,

participation within conversation,

learning something from the conversation.

Those are not things
that we thought of 13 years ago,

and we believe are extremely
important right now.

So we have to look at
how we display the follower count,

how we display retweet count,

how we display “likes,”

and just ask the deep question:

Is this really the number
that we want people to drive up?

Is this the thing that,
when you open Twitter,

you see, “That’s the thing
I need to increase?”

And I don’t believe
that’s the case right now.

(Applause)

WPR: I think we should look at
some of the tweets

that are coming
in from the audience as well.

CA: Let’s see what you guys are asking.

I mean, this is – generally, one
of the amazing things about Twitter

is how you can use it for crowd wisdom,

you know, that more knowledge,
more questions, more points of view

than you can imagine,

and sometimes, many of them
are really healthy.

WPR: I think there was one I saw
that passed by quickly down here,

“What’s Twitter’s plan to combat
foreign meddling in the 2020 US election?”

I think that’s something
that’s an issue we’re seeing

on the internet in general,

that we have a lot of malicious
automated activity happening.

And on Twitter, for example,
we have some work

that’s come from our friends
at Zignal Labs,

and maybe we can even see that
to give us an example

of what exactly I’m talking about,

where you have these bots, if you will,

or coordinated automated
malicious account activity,

that is being used to influence
things like elections.

And in this example we have
from Zignal which they’ve shared with us

using the data that
they have from Twitter,

you actually see that in this case,

white represents the humans –
human accounts, each dot is an account.

The pinker it is,

the more automated the activity is.

And you can see how you have
a few humans interacting with bots.

In this case, it’s related
to the election in Israel

and spreading misinformation
about Benny Gantz,

and as we know, in the end,
that was an election

that Netanyahu won by a slim margin,

and that may have been
in some case influenced by this.

And when you think about
that happening on Twitter,

what are the things
that you’re doing, specifically,

to ensure you don’t have misinformation
like this spreading in this way,

influencing people in ways
that could affect democracy?

JD: Just to back up a bit,

we asked ourselves a question:

Can we actually measure
the health of a conversation,

and what does that mean?

And in the same way
that you have indicators

and we have indicators as humans
in terms of are we healthy or not,

such as temperature,
the flushness of your face,

we believe that we could find
the indicators of conversational health.

And we worked with a lab
called Cortico at MIT

to propose four starter indicators

that we believe we could ultimately
measure on the system.

And the first one is
what we’re calling shared attention.

It’s a measure of how much
of the conversation is attentive

on the same topic versus disparate.

The second one is called shared reality,

and this is what percentage
of the conversation

shares the same facts –

not whether those facts
are truthful or not,

but are we sharing
the same facts as we converse?

The third is receptivity:

How much of the conversation
is receptive or civil

or the inverse, toxic?

And then the fourth
is variety of perspective.

So, are we seeing filter bubbles
or echo chambers,

or are we actually getting
a variety of opinions

within the conversation?

And implicit in all four of these
is the understanding that,

as they increase, the conversation
gets healthier and healthier.

So our first step is to see
if we can measure these online,

which we believe we can.

We have the most momentum
around receptivity.

We have a toxicity score,
a toxicity model, on our system

that can actually measure
whether you are likely to walk away

from a conversation
that you’re having on Twitter

because you feel it’s toxic,

with a pretty high degree of accuracy.

We’re working to measure the rest,

and the next step is,

as we build up solutions,

to watch how these measurements
trend over time

and continue to experiment.

And our goal is to make sure
that these are balanced,

because if you increase one,
you might decrease another.

If you increase variety of perspective,

you might actually decrease
shared reality.

CA: Just picking up on some
of the questions flooding in here.

JD: Constant questioning.

CA: A lot of people are puzzled why,

like, how hard is it to get rid
of Nazis from Twitter?

JD: (Laughs)

So we have policies
around violent extremist groups,

and the majority of our work
and our terms of service

works on conduct, not content.

So we’re actually looking for conduct.

Conduct being using the service

to repeatedly or episodically
harass someone,

using hateful imagery

that might be associated with the KKK

or the American Nazi Party.

Those are all things
that we act on immediately.

We’re in a situation right now
where that term is used fairly loosely,

and we just cannot take
any one mention of that word

accusing someone else

as a factual indication that they
should be removed from the platform.

So a lot of our models
are based around, number one:

Is this account associated
with a violent extremist group?

And if so, we can take action.

And we have done so on the KKK
and the American Nazi Party and others.

And number two: Are they using
imagery or conduct

that would associate them as such as well?

CA: How many people do you have
working on content moderation

to look at this?

JD: It varies.

We want to be flexible on this,

because we want to make sure
that we’re, number one,

building algorithms instead of just
hiring massive amounts of people,

because we need to make sure
that this is scalable,

and no amount of people
can actually scale this.

So this is why we’ve done so much work
around proactive detection of abuse

that humans can then review.

We want to have a situation

where algorithms are constantly
scouring every single tweet

and bringing the most
interesting ones to the top

so that humans can bring their judgment
to whether we should take action or not,

based on our terms of service.

WPR: But you say no amount
of people is scalable,

but how many people do you currently have
monitoring these accounts,

and how do you figure out what’s enough?

JD: They’re completely flexible.

Sometimes we associate folks with spam.

Sometimes we associate folks
with abuse and harassment.

We’re going to make sure that
we have flexibility in our people

so that we can direct them
at what is most needed.

Sometimes, the elections.

We’ve had a string of elections
in Mexico, one coming up in India,

obviously, the election last year,
the midterm election,

so we just want to be flexible
with our resources.

So when people –

just as an example, if you go
to our current terms of service

and you bring the page up,

and you’re wondering about abuse
and harassment that you just received

and whether it was against
our terms of service to report it,

the first thing you see
when you open that page

is around intellectual
property protection.

You scroll down and you get to
abuse, harassment

and everything else
that you might be experiencing.

So I don’t know how that happened
over the company’s history,

but we put that above
the thing that people want

the most information on
and to actually act on.

And just our ordering shows the world
what we believed was important.

So we’re changing all that.

We’re ordering it the right way,

but we’re also simplifying the rules
so that they’re human-readable

so that people can actually
understand themselves

when something is against our terms
and when something is not.

And then we’re making –

again, our big focus is on removing
the burden of work from the victims.

So that means push more
towards technology,

rather than humans doing the work –

that means the humans receiving the abuse

and also the humans
having to review that work.

So we want to make sure

that we’re not just encouraging more work

around something
that’s super, super negative,

and we want to have a good balance
between the technology

and where humans can actually be creative,

which is the judgment of the rules,

and not just all the mechanical stuff
of finding and reporting them.

So that’s how we think about it.

CA: I’m curious to dig in more
about what you said.

I mean, I love that you said
you are looking for ways

to re-tweak the fundamental
design of the system

to discourage some of the reactive
behavior, and perhaps –

to use Tristan Harris-type language –

engage people’s more reflective thinking.

How far advanced is that?

What would alternatives
to that “like” button be?

JD: Well, first and foremost,

my personal goal with the service
is that I believe fundamentally

that public conversation is critical.

There are existential problems
facing the entire world,

not any one particular nation-state,

that a global public conversation benefits.

And that is one of the unique
dynamics of Twitter,

that it is completely open,

it is completely public,

it is completely fluid,

and anyone can see any other conversation
and participate in it.

So there are conversations
like climate change.

There are conversations
like the displacement of work

through artificial intelligence.

There are conversations
like economic disparity.

No matter what any one nation-state does,

they will not be able
to solve the problem alone.

It takes coordination around the world,

and that’s where I think
Twitter can play a part.

The second thing is that Twitter,
right now, when you go to it,

you don’t necessarily walk away
feeling like you learned something.

Some people do.

Some people have
a very, very rich network,

a very rich community
that they learn from every single day.

But it takes a lot of work
and a lot of time to build up to that.

So we want to get people
to those topics and those interests

much, much faster

and make sure that
they’re finding something that,

no matter how much time
they spend on Twitter –

and I don’t want to maximize
the time on Twitter,

I want to maximize
what they actually take away from it

and what they learn from it, and –

CA: Well, do you, though?

Because that’s the core question
that a lot of people want to know.

Surely, Jack, you’re constrained,
to a huge extent,

by the fact that you’re a public company,

you’ve got investors pressing on you,

the number one way you make your money
is from advertising –

that depends on user engagement.

Are you willing to sacrifice
user time, if need be,

to go for a more reflective conversation?

JD: Yeah; more relevance means
less time on the service,

and that’s perfectly fine,

because we want to make sure
that, like, you’re coming to Twitter,

and you see something immediately
that you learn from and that you push.

We can still serve an ad against that.

That doesn’t mean you need to spend
any more time to see more.

The second thing we’re looking at –

CA: But just – on that goal,
daily active usage,

if you’re measuring that,
that doesn’t necessarily mean things

that people value every day.

It may well mean

things that people are drawn to
like a moth to the flame, every day.

We are addicted, because we see
something that pisses us off,

so we go in and add fuel to the fire,

and the daily active usage goes up,

and there’s more ad revenue there,

but we all get angrier with each other.

How do you define …

“Daily active usage” seems like a really
dangerous term to be optimizing.

(Applause)

JD: Taken alone, it is,

but you didn’t let me
finish the other metric,

which is, we’re watching for conversations

and conversation chains.

So we want to incentivize
healthy contribution back to the network,

and what we believe that is
is actually participating in conversation

that is healthy,

as defined by those four indicators
I articulated earlier.

So you can’t just optimize
around one metric.

You have to balance and look constantly

at what is actually going to create
a healthy contribution to the network

and a healthy experience for people.

Ultimately, we want to get to a metric

where people can tell us,
“Hey, I learned something from Twitter,

and I’m walking away
with something valuable.”

That is our goal ultimately over time,

but that’s going to take some time.

CA: You come across to many,
I think to me, as this enigma.

This is possibly unfair,
but I woke up the other night

with this picture of how I found I was
thinking about you and the situation,

that we’re on this great voyage with you
on this ship called the “Twittanic” –

(Laughter)

and there are people on board in steerage

who are expressing discomfort,

and you, unlike many other captains,

are saying, “Well, tell me, talk to me,
listen to me, I want to hear.”

And they talk to you, and they say,
“We’re worried about the iceberg ahead.”

And you go, “You know,
that is a powerful point,

and our ship, frankly,
hasn’t been built properly

for steering as well as it might.”

And we say, “Please do something.”

And you go to the bridge,

and we’re waiting,

and we look, and then you’re showing
this extraordinary calm,

but we’re all standing outside,
saying, “Jack, turn the fucking wheel!”

You know?

(Laughter)

(Applause)

I mean –

(Applause)

It’s democracy at stake.

It’s our culture at stake.
It’s our world at stake.

And Twitter is amazing and shapes so much.

It’s not as big as some
of the other platforms,

but the people of influence use it
to set the agenda,

and it’s just hard to imagine a more
important role in the world than to …

I mean, you’re doing a brilliant job
of listening, Jack, and hearing people,

but to actually dial up the urgency
and move on this stuff –

will you do that?

JD: Yes, and we have been
moving substantially.

I mean, there’s been
a few dynamics in Twitter’s history.

One, when I came back to the company,

we were in a pretty dire state
in terms of our future,

and not just from how people
were using the platform,

but from a corporate narrative as well.

So we had to fix
a bunch of the foundation,

turn the company around,

go through two crazy layoffs,

because we just got too big
for what we were doing,

and we focused all of our energy

on this concept of serving
the public conversation.

And that took some work.

And as we dived into that,

we realized some of the issues
with the fundamentals.

We could do a bunch of superficial things
to address what you’re talking about,

but we need the changes to last,

and that means going really, really deep

and paying attention
to what we started 13 years ago

and really questioning

how the system works
and how the framework works

and what is needed for the world today,

given how quickly everything is moving
and how people are using it.

So we are working as quickly as we can,
but quickness will not get the job done.

It’s focus, it’s prioritization,

it’s understanding
the fundamentals of the network

and building a framework that scales

and that is resilient to change,

and being open about where we are
and being transparent about where we are

so that we can continue to earn trust.

So I’m proud of all the frameworks
that we’ve put in place.

I’m proud of our direction.

We obviously can move faster,

but that required just stopping a bunch
of stupid stuff we were doing in the past.

CA: All right.

Well, I suspect there are many people here
who, if given the chance,

would love to help you
on this change-making agenda you’re on,

and I don’t know if Whitney –

Jack, thank you for coming here
and speaking so openly.

It took courage.

I really appreciate what you said,
and good luck with your mission.

JD: Thank you so much.
Thanks for having me.

(Applause)

Thank you.
