How Twitter needs to change
Jack Dorsey

Chris Anderson:
What worries you right now?

You’ve been very open
about lots of issues on Twitter.

What would be your top worry

about where things are right now?

Jack Dorsey: Right now,
the health of the conversation.

So, our purpose is to serve
the public conversation,

and we have seen
a number of attacks on it.

We’ve seen abuse, we’ve seen harassment,

we’ve seen manipulation,

automation, human coordination,
misinformation.

So these are all dynamics
that we were not expecting

13 years ago when we were
starting the company.

But we do now see them at scale,

and what worries me most
is just our ability to address it

in a systemic way that is scalable,

that has a rigorous understanding
of how we’re taking action,

a transparent understanding
of how we’re taking action

and a rigorous appeals process
for when we’re wrong,

because we will be wrong.

Whitney Pennington Rodgers:
I’m really glad to hear

that that’s something that concerns you,

because I think there’s been
a lot written about people

who feel they’ve been abused
and harassed on Twitter,

and I think no one more so
than women and women of color

and black women.

And there’s been data that’s come out –

Amnesty International put out
a report a few months ago

where they showed that, for a subset
of active black female Twitter users,

one in 10 of the tweets they received,
on average,

was some form of harassment.

And so when you think about health
for the community on Twitter,

I’m interested to hear,
“health for everyone,”

but specifically: How are you looking
to make Twitter a safe space

for that subset, for women,
for women of color and black women?

JD: Yeah.

So it’s a pretty terrible situation

when you’re coming to a service

from which, ideally, you want to learn
something about the world,

and you spend the majority of your time
reporting abuse, receiving abuse,

receiving harassment.

So what we’re looking most deeply at
is just the incentives

that the platform naturally provides
and the service provides.

Right now, the dynamic of the system
makes it super-easy to harass

and to abuse others through the service,

and unfortunately, the majority
of our system in the past

worked entirely based on people
reporting harassment and abuse.

So about midway through last year,
we decided that we were going to apply

a lot more machine learning,
a lot more deep learning to the problem,

and try to be a lot more proactive
around where abuse is happening,

so that we can take the burden
off the victim completely.

And we’ve made some progress recently.

About 38 percent of abusive tweets
are now proactively identified

by machine learning algorithms

so that people don’t actually
have to report them.

But those that are identified
are still reviewed by humans,

so we do not take down content or accounts
without a human actually reviewing it.

But that was from zero percent
just a year ago.

So that meant, at that zero percent,

every single person who received abuse
had to actually report it,

which was a lot of work for them,
a lot of work for us

and just ultimately unfair.
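
A rough sketch of the pipeline described here: a model flags likely abuse proactively, and a human always reviews before any action is taken. The classifier, markers and threshold below are hypothetical stand-ins, not Twitter's actual system.

```python
# Hypothetical sketch of proactive abuse detection with mandatory
# human review. The classifier, markers, and threshold are illustrative
# stand-ins, not Twitter's actual system.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Tweet:
    tweet_id: int
    text: str


@dataclass
class ReviewQueue:
    """Flagged tweets land here for humans; nothing is removed automatically."""
    pending: List[Tweet] = field(default_factory=list)

    def enqueue(self, tweet: Tweet) -> None:
        self.pending.append(tweet)


def abuse_score(tweet: Tweet) -> float:
    """Stand-in for a trained text classifier; returns a score in [0, 1]."""
    markers = ("kill yourself", "nobody wants you", "get out of")  # assumed
    hits = sum(marker in tweet.text.lower() for marker in markers)
    return min(1.0, 0.4 * hits)


FLAG_THRESHOLD = 0.5  # assumed operating point


def triage(tweets: List[Tweet], queue: ReviewQueue) -> int:
    """Proactively flag likely-abusive tweets so victims don't have to
    report them; flagged tweets go to human review, never auto-removal."""
    flagged = 0
    for tweet in tweets:
        if abuse_score(tweet) >= FLAG_THRESHOLD:
            queue.enqueue(tweet)
            flagged += 1
    return flagged
```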

The other thing that we’re doing
is making sure that we, as a company,

have representation of all the communities
that we’re trying to serve.

We can’t build a business
that is successful

unless we have a diversity
of perspectives inside our walls,

people who actually feel these issues
every single day.

And that’s not just with the team
that’s doing the work,

it’s also within our leadership as well.

So we need to continue to build empathy
for what people are experiencing

and give them better tools to act on it

and also give our customers
a much better and easier approach

to handle some of the things
that they’re seeing.

So a lot of what we’re doing
is around technology,

but we’re also looking at
the incentives on the service:

What does Twitter incentivize you to do
when you first open it up?

And in the past,

it’s incented a lot of outrage,
it’s incented a lot of mob behavior,

it’s incented a lot of group harassment.

And we have to look a lot deeper
at some of the fundamentals

of what the service is doing
to make the bigger shifts.

We can make a bunch of small shifts
around technology, as I just described,

but ultimately, we have to look deeply
at the dynamics in the network itself,

and that’s what we’re doing.

CA: But what’s your sense –

what is the kind of thing
that you might be able to change

that would actually
fundamentally shift behavior?

JD: Well, one of the things –

we started the service
with this concept of following an account,

as an example,

and I don’t believe that’s why
people actually come to Twitter.

I believe Twitter is best
as an interest-based network.

People come with a particular interest.

They have to do a ton of work
to find and follow the related accounts

around those interests.

What we could do instead
is allow you to follow an interest,

follow a hashtag, follow a trend,

follow a community,

which gives us the opportunity
to show all of the accounts,

all the topics, all the moments,
all the hashtags

that are associated with that
particular topic and interest,

which really opens up
the perspective that you see.

But that is a huge fundamental shift

to bias the entire network
away from just an account bias

towards a topics and interest bias.
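
A minimal sketch of what following an interest could look like as a data model: a single topic follow fans out to every associated account and hashtag. The structures and names below are illustrative assumptions, not Twitter's implementation.

```python
# Illustrative sketch of an interest graph: following one topic
# fans out to every associated account and hashtag, instead of the
# user hunting them down one by one. Names are assumptions.
from collections import defaultdict
from typing import Optional, Set, Tuple


class InterestGraph:
    def __init__(self) -> None:
        self.topic_accounts = defaultdict(set)  # topic -> account handles
        self.topic_hashtags = defaultdict(set)  # topic -> hashtags
        self.user_topics = defaultdict(set)     # user -> followed topics

    def tag(self, topic: str, account: Optional[str] = None,
            hashtag: Optional[str] = None) -> None:
        """Associate an account and/or hashtag with a topic."""
        if account:
            self.topic_accounts[topic].add(account)
        if hashtag:
            self.topic_hashtags[topic].add(hashtag)

    def follow_topic(self, user: str, topic: str) -> None:
        """One follow of a topic, rather than many follows of accounts."""
        self.user_topics[user].add(topic)

    def timeline_sources(self, user: str) -> Tuple[Set[str], Set[str]]:
        """Everything the user's topic follows open up for their timeline."""
        accounts, hashtags = set(), set()
        for topic in self.user_topics[user]:
            accounts |= self.topic_accounts[topic]
            hashtags |= self.topic_hashtags[topic]
        return accounts, hashtags
```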

CA: Because isn’t it the case

that one reason why you have
so much content on there

is that you’ve put millions
of people around the world

in this kind of gladiatorial
contest with each other

for followers, for attention?

Like, from the point of view
of people who just read Twitter,

that’s not an issue,

but for the people who actually create it,
everyone’s out there saying,

“You know, I wish I had
a few more ‘likes,’ followers, retweets.”

And so they’re constantly experimenting,

trying to find the path to do that.

And what we’ve all discovered
is that the number one path to do that

is to be some form of provocative,

obnoxious, eloquently obnoxious,

like, eloquent insults
are a dream on Twitter,

where you rapidly pile up –

and it becomes this self-fueling
process of driving outrage.

How do you defuse that?

JD: Yeah, I mean, I think you’re spot on,

but that goes back to the incentives.

Like, one of the choices
we made in the early days was

that we had this number that showed
how many people follow you.

We decided that number
should be big and bold,

and anything that’s on the page
that’s big and bold has importance,

and those are the things
that you want to drive.

Was that the right decision at the time?

Probably not.

If I had to start the service again,

I would not emphasize
the follower count as much.

I would not emphasize
the “like” count as much.

I don’t think I would even
create “like” in the first place,

because it doesn’t actually push

what we believe now
to be the most important thing,

which is healthy contribution
back to the network

and conversation to the network,

participation within conversation,

learning something from the conversation.

Those are not things
that we thought of 13 years ago,

but we believe they are extremely
important right now.

So we have to look at
how we display the follower count,

how we display retweet count,

how we display “likes,”

and just ask the deep question:

Is this really the number
that we want people to drive up?

Is this the thing that,
when you open Twitter,

you see, “That’s the thing
I need to increase?”

And I don’t believe
that’s the case right now.

(Applause)

WPR: I think we should look at
some of the tweets

that are coming
in from the audience as well.

CA: Let’s see what you guys are asking.

I mean, this is – generally, one
of the amazing things about Twitter

is how you can use it for crowd wisdom,

you know, more knowledge,
more questions, more points of view

than you can imagine,

and sometimes, many of them
are really healthy.

WPR: I think one that I saw
passed by quickly down here:

“What’s Twitter’s plan to combat
foreign meddling in the 2020 US election?”

I think that’s an issue we’re seeing

on the internet in general,

that we have a lot of malicious
automated activity happening.

And on Twitter, for example,
we have some work

that’s come from our friends
at Zignal Labs,

and maybe we can even see that
to give us an example

of what exactly I’m talking about,

where you have these bots, if you will,

or coordinated automated
malicious account activity,

that is being used to influence
things like elections.

And in this example we have
from Zignal, which they’ve shared with us

using the data that
they have from Twitter,

you actually see that in this case,

white represents the humans –
human accounts, each dot is an account.

The pinker it is,

the more automated the activity is.

And you can see how you have
a few humans interacting with bots.

In this case, it’s related
to the election in Israel

and spreading misinformation
about Benny Gantz,

and as we know, in the end,
that was an election

that Netanyahu won by a slim margin,

and that may have been
in some cases influenced by this.

And when you think about
that happening on Twitter,

what are the things
that you’re doing, specifically,

to ensure you don’t have misinformation
like this spreading in this way,

influencing people in ways
that could affect democracy?

JD: Just to back up a bit,

we asked ourselves a question:

Can we actually measure
the health of a conversation,

and what does that mean?

And in the same way
that we, as humans,

have indicators of whether
we are healthy or not,

such as temperature
or the flushness of your face,

we believe that we could find
the indicators of conversational health.

And we worked with a lab
called Cortico at MIT

to propose four starter indicators

that we believe we could ultimately
measure on the system.

And the first one is
what we’re calling shared attention.

It’s a measure of how much
of the conversation is attentive

on the same topic versus disparate.

The second one is called shared reality,

and this is what percentage
of the conversation

shares the same facts –

not whether those facts are truthful,

but whether we are sharing
the same facts as we converse.

The third is receptivity:

How much of the conversation
is receptive or civil

or the inverse, toxic?

And then the fourth
is variety of perspective.

So, are we seeing filter bubbles
or echo chambers,

or are we actually getting
a variety of opinions

within the conversation?

And implicit in all four of these
is the understanding that,

as they increase, the conversation
gets healthier and healthier.

So our first step is to see
if we can measure these online,

which we believe we can.

We have the most momentum
around receptivity.

We have a toxicity score,
a toxicity model, on our system

that can actually measure
whether you are likely to walk away

from a conversation
that you’re having on Twitter

because you feel it’s toxic,

with a pretty high degree of accuracy.

We’re working to measure the rest,

and the next step is,

as we build up solutions,

to watch how these measurements
trend over time

and continue to experiment.

And our goal is to make sure
that these are balanced,

because if you increase one,
you might decrease another.

If you increase variety of perspective,

you might actually decrease
shared reality.
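
The four indicators lend themselves to toy formulas. Below is a minimal sketch of how each might be computed over a conversation's posts; the formulas are illustrative guesses, not Cortico's or Twitter's actual metrics.

```python
# Toy versions of the four health indicators named above. The formulas
# are illustrative guesses, not Cortico's or Twitter's actual metrics;
# each assumes a non-empty conversation.
from collections import Counter
from typing import List, Set


def shared_attention(post_topics: List[str]) -> float:
    """Fraction of posts on the single most-discussed topic."""
    counts = Counter(post_topics)
    return max(counts.values()) / len(post_topics)


def shared_reality(post_facts: List[Set[str]]) -> float:
    """Fraction of posts citing the most common fact, truthful or not."""
    counts = Counter(fact for facts in post_facts for fact in facts)
    top = counts.most_common(1)[0][1]
    return top / len(post_facts)


def receptivity(toxicity_scores: List[float]) -> float:
    """One minus average toxicity: 1.0 means fully civil."""
    return 1.0 - sum(toxicity_scores) / len(toxicity_scores)


def variety_of_perspective(post_stances: List[str]) -> float:
    """Share of distinct stances present: a crude echo-chamber check."""
    return len(set(post_stances)) / len(post_stances)
```

As the exchange above notes, these pull against each other: pushing variety_of_perspective up can pull shared_reality down, which is why no single one can be optimized alone.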

CA: Just picking up on some
of the questions flooding in here.

JD: Constant questioning.

CA: A lot of people are puzzled why,

like, how hard is it to get rid
of Nazis from Twitter?

JD: (Laughs)

So we have policies
around violent extremist groups,

and the majority of our work
and our terms of service

works on conduct, not content.

So we’re actually looking for conduct.

Conduct means using the service

to repeatedly or episodically
harass someone,

or using hateful imagery

that might be associated with the KKK

or the American Nazi Party.

Those are all things
that we act on immediately.

We’re in a situation right now
where that term is used fairly loosely,

and we just cannot take
any one mention of that word,

used to accuse someone else,

as a factual indication that they
should be removed from the platform.

So a lot of our models
are based around, number one:

Is this account associated
with a violent extremist group?

And if so, we can take action.

And we have done so on the KKK
and the American Nazi Party and others.

And number two: Are they using
imagery or conduct

that would associate them with such a group?

CA: How many people do you have
working on content moderation

to look at this?

JD: It varies.

We want to be flexible on this,

because we want to make sure
that we’re, number one,

building algorithms instead of just
hiring massive amounts of people,

because we need to make sure
that this is scalable,

and there is no number of people
that can actually scale this.

So this is why we’ve done so much work
around proactive detection of abuse

that humans can then review.

We want to have a situation

where algorithms are constantly
scouring every single tweet

and bringing the most
interesting ones to the top

so that humans can bring their judgment
to whether we should take action or not,

based on our terms of service.
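
A minimal sketch of that triage loop: algorithms score every tweet, and a priority queue surfaces the highest-scoring ones for human judgment. The names and scoring here are assumptions for illustration only.

```python
# Sketch of the triage loop described above: score every tweet,
# surface the highest-scoring ones for human judgment. The queue
# and scoring are assumptions for illustration only.
import heapq
from typing import List, Optional, Tuple


class ModerationQueue:
    def __init__(self) -> None:
        # (negated score, tweet_id): negation gives max-heap behavior
        self._heap: List[Tuple[float, int]] = []

    def scan(self, tweet_id: int, score: float) -> None:
        """Called for every tweet; higher scores surface first."""
        heapq.heappush(self._heap, (-score, tweet_id))

    def next_for_review(self) -> Optional[Tuple[int, float]]:
        """Humans pop the top item and apply judgment under the terms
        of service; the algorithm itself never takes action."""
        if not self._heap:
            return None
        neg_score, tweet_id = heapq.heappop(self._heap)
        return tweet_id, -neg_score
```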

WPR: But you say no number
of people can scale this,

but how many people do you currently have
monitoring these accounts,

and how do you figure out what’s enough?

JD: They’re completely flexible.

Sometimes we associate folks with spam.

Sometimes we associate folks
with abuse and harassment.

We’re going to make sure that
we have flexibility in our people

so that we can direct them
at what is most needed.

Sometimes, the elections.

We’ve had a string of elections
in Mexico, one coming up in India,

obviously, the election last year,
the midterm election,

so we just want to be flexible
with our resources.

So when people –

just as an example, if you go
to our current terms of service

and you bring the page up,

and you’re wondering about abuse
and harassment that you just received

and whether it was against
our terms of service so you can report it,

the first thing you see
when you open that page

is around intellectual
property protection.

You scroll down and you get to
abuse, harassment

and everything else
that you might be experiencing.

So I don’t know how that happened
over the company’s history,

but we put that above
the thing that people want

the most information on
and to actually act on.

And just our ordering shows the world
what we believed was important.

So we’re changing all that.

We’re ordering it the right way,

but we’re also simplifying the rules
so that they’re human-readable

so that people can actually
understand themselves

when something is against our terms
and when something is not.

And then we’re making –

again, our big focus is on removing
the burden of work from the victims.

So that means pushing more
towards technology,

rather than humans doing the work –

that means the humans receiving the abuse

and also the humans
having to review that work.

So we want to make sure

that we’re not just encouraging more work

around something
that’s super, super negative,

and we want to have a good balance
between the technology

and where humans can actually be creative,

which is the judgment of the rules,

and not just all the mechanical stuff
of finding and reporting them.

So that’s how we think about it.

CA: I’m curious to dig in more
about what you said.

I mean, I love that you said
you are looking for ways

to re-tweak the fundamental
design of the system

to discourage some of the reactive
behavior, and perhaps –

to use Tristan Harris-type language –

engage people’s more reflective thinking.

How far advanced is that?

What would alternatives
to that “like” button be?

JD: Well, first and foremost,

my personal goal with the service
is that I believe fundamentally

that public conversation is critical.

There are existential problems

facing the entire world,
not any one particular nation-state,

that a global public conversation benefits.

And that is one of the unique
dynamics of Twitter,

that it is completely open,

it is completely public,

it is completely fluid,

and anyone can see any other conversation
and participate in it.

So there are conversations
like climate change.

There are conversations
like the displacement of work

by artificial intelligence.

There are conversations
like economic disparity.

No matter what any one nation-state does,

they will not be able
to solve the problem alone.

It takes coordination around the world,

and that’s where I think
Twitter can play a part.

The second thing is that Twitter,
right now, when you go to it,

you don’t necessarily walk away
feeling like you learned something.

Some people do.

Some people have
a very, very rich network,

a very rich community
that they learn from every single day.

But it takes a lot of work
and a lot of time to build up to that.

So we want to get people
to those topics and those interests

much, much faster

and make sure that
they’re finding something that,

no matter how much time
they spend on Twitter –

and I don’t want to maximize
the time on Twitter,

I want to maximize
what they actually take away from it

and what they learn from it, and –

CA: Well, do you, though?

Because that’s the core question
that a lot of people want to know.

Surely, Jack, you’re constrained,
to a huge extent,

by the fact that you’re a public company,

you’ve got investors pressing on you,

the number one way you make your money
is from advertising –

that depends on user engagement.

Are you willing to sacrifice
user time, if need be,

to go for a more reflective conversation?

JD: Yeah; more relevance means
less time on the service,

and that’s perfectly fine,

because we want to make sure
that, like, you’re coming to Twitter,

and you see something immediately
that you learn from and that you push.

We can still serve an ad against that.

That doesn’t mean you need to spend
any more time to see more.

The second thing we’re looking at –

CA: But just – on that goal,
daily active usage,

if you’re measuring that,
that doesn’t necessarily mean things

that people value every day.

It may well mean

things that people are drawn to
like a moth to the flame, every day.

We are addicted, because we see
something that pisses us off,

so we go in and add fuel to the fire,

and the daily active usage goes up,

and there’s more ad revenue there,

but we all get angrier with each other.

How do you define …

“Daily active usage” seems like a really
dangerous term to be optimizing.

(Applause)

JD: Taken alone, it is,

but you didn’t let me
finish the other metric,

which is, we’re watching for conversations

and conversation chains.

So we want to incentivize
healthy contribution back to the network,

and what we believe that is
is actually participating in conversation

that is healthy,

as defined by those four indicators
I articulated earlier.

So you can’t just optimize
around one metric.

You have to balance and look constantly

at what is actually going to create
a healthy contribution to the network

and a healthy experience for people.

Ultimately, we want to get to a metric

where people can tell us,
“Hey, I learned something from Twitter,

and I’m walking away
with something valuable.”

That is our goal ultimately over time,

but that’s going to take some time.

CA: You come across to many,
and I think to me, as this enigma.

This is possibly unfair,
but I woke up the other night

with this picture of how I found I was
thinking about you and the situation,

that we’re on this great voyage with you
on this ship called the “Twittanic” –

(Laughter)

and there are people on board in steerage

who are expressing discomfort,

and you, unlike many other captains,

are saying, “Well, tell me, talk to me,
listen to me, I want to hear.”

And they talk to you, and they say,
“We’re worried about the iceberg ahead.”

And you go, “You know,
that is a powerful point,

and our ship, frankly,
hasn’t been built properly

for steering as well as it might.”

And we say, “Please do something.”

And you go to the bridge,

and we’re waiting,

and we look, and then you’re showing
this extraordinary calm,

but we’re all standing outside,
saying, “Jack, turn the fucking wheel!”

You know?

(Laughter)

(Applause)

I mean –

(Applause)

It’s democracy at stake.

It’s our culture at stake.
It’s our world at stake.

And Twitter is amazing and shapes so much.

It’s not as big as some
of the other platforms,

but the people of influence use it
to set the agenda,

and it’s just hard to imagine a more
important role in the world than to …

I mean, you’re doing a brilliant job
of listening, Jack, and hearing people,

but to actually dial up the urgency
and move on this stuff –

will you do that?

JD: Yes, and we have been
moving substantially.

I mean, there have been
a few dynamics in Twitter’s history.

One, when I came back to the company,

we were in a pretty dire state
in terms of our future,

and not just from how people
were using the platform,

but from a corporate narrative as well.

So we had to fix
a bunch of the foundation,

turn the company around,

go through two crazy layoffs,

because we just got too big
for what we were doing,

and we focused all of our energy

on this concept of serving
the public conversation.

And that took some work.

And as we dived into that,

we realized some of the issues
with the fundamentals.

We could do a bunch of superficial things
to address what you’re talking about,

but we need the changes to last,

and that means going really, really deep

and paying attention
to what we started 13 years ago

and really questioning

how the system works
and how the framework works

and what is needed for the world today,

given how quickly everything is moving
and how people are using it.

So we are working as quickly as we can,
but quickness will not get the job done.

It’s focus, it’s prioritization,

it’s understanding
the fundamentals of the network

and building a framework that scales

and that is resilient to change,

and being open about where we are
and being transparent about where we are

so that we can continue to earn trust.

So I’m proud of all the frameworks
that we’ve put in place.

I’m proud of our direction.

We obviously can move faster,

but that required just stopping a bunch
of stupid stuff we were doing in the past.

CA: All right.

Well, I suspect there are many people here
who, if given the chance,

would love to help you
on this change-making agenda you’re on,

and I don’t know if Whitney –

Jack, thank you for coming here
and speaking so openly.

It took courage.

I really appreciate what you said,
and good luck with your mission.

JD: Thank you so much.
Thanks for having me.

(Applause)

Thank you.