How civilization could destroy itself, and 4 ways we could prevent it
Nick Bostrom

Chris Anderson: Nick Bostrom.

So, you have already given us
so many crazy ideas out there.

I think a couple of decades ago,

you made the case that we might
all be living in a simulation,

or perhaps probably were.

More recently,

you’ve painted the most vivid examples
of how artificial general intelligence

could go horribly wrong.

And now this year,

you’re about to publish

a paper that presents something called
the vulnerable world hypothesis.

And our job this evening is to
give the illustrated guide to that.

So let’s do that.

What is that hypothesis?

Nick Bostrom: It’s trying to think about

a sort of structural feature
of the current human condition.

You like the urn metaphor,

so I’m going to use that to explain it.

So picture a big urn filled with balls

representing ideas, methods,
possible technologies.

You can think of the history
of human creativity

as the process of reaching into this urn
and pulling out one ball after another,

and the net effect so far
has been hugely beneficial, right?

We’ve extracted a great many white balls,

some various shades of gray,
mixed blessings.

We haven’t so far
pulled out the black ball –

a technology that invariably destroys
the civilization that discovers it.

So the paper tries to think
about what such a black ball could be.

CA: So you define that ball

as one that would inevitably
bring about civilizational destruction.

NB: Unless we exit what I call
the semi-anarchic default condition.

But sort of, by default.

CA: So, you make the case compelling

by showing some sort of counterexamples

where you believe that so far
we’ve actually got lucky,

that we might have pulled out
that death ball

without even knowing it.

So there’s this quote, what’s this quote?

NB: Well, I guess
it’s just meant to illustrate

the difficulty of foreseeing

what basic discoveries will lead to.

We just don’t have that capability.

Because we have become quite good
at pulling out balls,

but we don’t really have the ability
to put the ball back into the urn, right.

We can invent, but we can’t un-invent.

So our strategy, such as it is,

is to hope that there is
no black ball in the urn.

CA: So once it’s out, it’s out,
and you can’t put it back in,

and you think we’ve been lucky.

So talk through a couple
of these examples.

You talk about different
types of vulnerability.

NB: So the easiest type to understand

is a technology
that just makes it very easy

to cause massive amounts of destruction.

Synthetic biology might be a fecund
source of that kind of black ball,

but many other possible things we could –

think of geoengineering,
really great, right?

We could combat global warming,

but you don’t want it
to get too easy either,

you don’t want any random person
and his grandmother

to have the ability to radically
alter the earth’s climate.

Or maybe lethal autonomous drones,

mass-produced, mosquito-sized
killer bot swarms.

Nanotechnology,
artificial general intelligence.

CA: You argue in the paper

that it’s a matter of luck
that when we discovered

that nuclear power could create a bomb,

it might have been the case

that you could have created a bomb

with much easier resources,
accessible to anyone.

NB: Yeah, so think back to the 1930s

when for the first time we made
some breakthroughs in nuclear physics,

some genius figures out that it’s possible
to create a nuclear chain reaction

and then realizes
that this could lead to the bomb.

And we do some more work,

it turns out that what you require
to make a nuclear bomb

is highly enriched uranium or plutonium,

which are very difficult materials to get.

You need ultracentrifuges,

you need reactors, like,
massive amounts of energy.

But suppose it had turned out instead

there had been an easy way
to unlock the energy of the atom.

That maybe by baking sand
in the microwave oven

or something like that

you could have created
a nuclear detonation.

So we know that that’s
physically impossible.

But before you did the relevant physics

how could you have known
how it would turn out?

CA: Although, couldn’t you argue

that for life to evolve on Earth,

that implied a sort of stable environment,

that if it had been possible to create
massive nuclear reactions relatively easily,

the Earth would never have been stable,

that we wouldn’t be here at all.

NB: Yeah, unless there were something
that is easy to do on purpose

but that wouldn’t happen by random chance.

So, like things we can easily do,

we can stack 10 blocks
on top of one another,

but in nature, you’re not going to find,
like, a stack of 10 blocks.

CA: OK, so this is probably the one

that many of us worry about most,

and yes, synthetic biology
is perhaps the quickest route

that we can foresee
in our near future to get us here.

NB: Yeah, and so think
about what that would have meant

if, say, anybody by working
in their kitchen for an afternoon

could destroy a city.

It’s hard to see how
modern civilization as we know it

could have survived that.

Because in any population
of a million people,

there will always be some
who would, for whatever reason,

choose to use that destructive power.

So if that apocalyptic residual

would choose to destroy a city, or worse,

then cities would get destroyed.

CA: So here’s another type
of vulnerability.

Talk about this.

NB: Yeah, so in addition to these
obvious kinds of black balls

that would just make it possible
to blow up a lot of things,

other types would act
by creating bad incentives

for humans to do things that are harmful.

So, the Type-2a, we might call it that,

is to think about some technology
that incentivizes great powers

to use their massive amounts of force
to create destruction.

So, nuclear weapons were actually
very close to this, right?

What we did, we spent
over 10 trillion dollars

to build 70,000 nuclear warheads

and put them on hair-trigger alert.

And there were several times
during the Cold War

when we almost blew each other up.

It’s not because a lot of people felt
this would be a great idea,

let’s all spend 10 trillion dollars
to blow ourselves up,

but the incentives were such
that we were finding ourselves –

this could have been worse.

Imagine if there had been
a safe first strike.

Then it might have been very tricky,

in a crisis situation,

to refrain from launching
all your nuclear missiles.

If nothing else, because you would fear
that the other side might do it.

CA: Right, mutual assured destruction

kept the Cold War relatively stable,

without that, we might not be here now.

NB: It could have been
more unstable than it was.

And there could be
other properties of technology.

It could have been harder
to have arms treaties,

if instead of nuclear weapons

there had been some smaller thing
or something less distinctive.

CA: And as well as bad incentives
for powerful actors,

you also worry about bad incentives
for all of us, in Type-2b here.

NB: Yeah, so, here we might
take the case of global warming.

There are a lot of little conveniences

that cause each one of us to do things

that individually
have no significant effect, right?

But if billions of people do it,

cumulatively, it has a damaging effect.

Now, global warming
could have been a lot worse than it is.

So we have the climate
sensitivity parameter, right.

It’s a parameter that says
how much warmer it gets

if you emit a certain amount
of greenhouse gases.

But, suppose that it had been the case

that with the amount
of greenhouse gases we emitted,

instead of the temperature rising by, say,

between 3 and 4.5 degrees by 2100,

suppose it had been
15 degrees or 20 degrees.

Like, then we might have been
in a very bad situation.

Or suppose that renewable energy
had just been a lot harder to do.

Or that there had been
more fossil fuels in the ground.

CA: Couldn’t you argue
that if in that case of –

if what we are doing today

had resulted in a 10-degree difference
in the time period that we could see,

actually humanity would have got
off its ass and done something about it.

We’re stupid, but we’re not
maybe that stupid.

Or maybe we are.

NB: I wouldn’t bet on it.

(Laughter)

You could imagine other features.

So, right now, it’s a little bit difficult
to switch to renewables and stuff, right,

but it can be done.

But it might just have been that,
with slightly different physics,

it would have been much more expensive
to do these things.

CA: And what’s your view, Nick?

Do you think, putting
these possibilities together,

that this earth, humanity that we are,

we count as a vulnerable world?

That there is a death ball in our future?

NB: It’s hard to say.

I mean, I think there might
well be various black balls in the urn,

that’s what it looks like.

There might also be some golden balls

that would help us
protect against black balls.

And I don’t know which order
they will come out.

CA: I mean, one possible
philosophical critique of this idea

is that it implies a view
that the future is essentially settled.

That there either
is that ball there or it’s not.

And in a way,

that’s not a view of the future
that I want to believe.

I want to believe
that the future is undetermined,

that our decisions today will determine

what kind of balls
we pull out of that urn.

NB: I mean, if we just keep inventing,

like, eventually we will
pull out all the balls.

I mean, I think there’s a kind
of weak form of technological determinism

that is quite plausible,

like, you’re unlikely
to encounter a society

that uses flint axes and jet planes.

But you can almost think
of a technology as a set of affordances.

So technology is the thing
that enables us to do various things

and achieve various effects in the world.

How we’d then use that,
of course, depends on human choice.

But if we think about these
three types of vulnerability,

they make quite weak assumptions
about how we would choose to use them.

So a Type-1 vulnerability, again,
this massive, destructive power,

it’s a fairly weak assumption

to think that in a population
of millions of people

there would be some that would choose
to use it destructively.

CA: For me, the single most
disturbing argument

is that we actually might have
some kind of view into the urn

that makes it actually
very likely that we’re doomed.

Namely, if you believe
in accelerating power,

that technology inherently accelerates,

that we build the tools
that make us more powerful,

then at some point you get to a stage

where a single individual
can take us all down,

and then it looks like we’re screwed.

Isn’t that argument quite alarming?

NB: Ah, yeah.

(Laughter)

I think –

Yeah, we get more and more power,

and [it’s] easier and easier
to use those powers,

but we can also invent technologies
that kind of help us control

how people use those powers.

CA: So let’s talk about that,
let’s talk about the response.

Suppose that thinking
about all the possibilities

that are out there now –

it’s not just synbio,
it’s things like cyberwarfare,

artificial intelligence, etc., etc. –

that there might be
serious doom in our future.

What are the possible responses?

And you’ve talked about
four possible responses as well.

NB: Restricting technological development
doesn’t seem promising,

if we are talking about a general halt
to technological progress.

I think that’s neither feasible,

nor would it be desirable
even if we could do it.

I think there might be very limited areas

where maybe you would want
slower technological progress.

You don’t, I think, want
faster progress in bioweapons,

or in, say, isotope separation,

that would make it easier to create nukes.

CA: I mean, I used to be
fully on board with that.

But I would like to actually
push back on that for a minute.

Just because, first of all,

if you look at the history
of the last couple of decades,

you know, it’s always been
push forward at full speed,

it’s OK, that’s our only choice.

But if you look at globalization
and the rapid acceleration of that,

if you look at the strategy of
“move fast and break things”

and what happened with that,

and then you look at the potential
for synthetic biology,

I don’t know that we should
move forward rapidly

or without any kind of restriction

to a world where you could have
a DNA printer in every home

and high school lab.

There are some restrictions, right?

NB: Possibly. But there is still
the first part, the “not feasible” part.

If you think it would be
desirable to stop it,

there’s the problem of feasibility.

So it doesn’t really help
if one nation kind of –

CA: No, it doesn’t help
if one nation does,

but we’ve had treaties before.

That’s really how we survived
the nuclear threat,

by going out there

and going through
the painful process of negotiating.

I just wonder whether the logic isn’t
that we, as a matter of global priority,

should go out there and try,

like, now start negotiating
really strict rules

on where synthetic bioresearch is done,

that it’s not something
that you want to democratize, no?

NB: I totally agree with that –

that it would be desirable, for example,

maybe to have DNA synthesis machines,

not as a product where each lab
has their own device,

but maybe as a service.

Maybe there could be
four or five places in the world

where you send in your digital blueprint
and the DNA comes back, right?

And then, if one day it really looked
like it was necessary,

we would have, like,
a finite set of choke points.

So I think you want to look
for kind of special opportunities,

where you could have tighter control.

CA: Your belief is, fundamentally,

we are not going to be successful
in just holding back.

Someone, somewhere –
North Korea, you know –

someone is going to go there
and discover this knowledge,

if it’s there to be found.

NB: That looks plausible
under current conditions.

It’s not just synthetic biology, either.

I mean, any kind of profound,
new change in the world

could turn out to be a black ball.

CA: Let’s look at
another possible response.

NB: This also, I think,
has only limited potential.

So, with the Type-1 vulnerability again,

I mean, if you could reduce the number
of people who are incentivized

to destroy the world,

if only they could get
access and the means,

that would be good.

CA: In this image that you asked us to do,

you’re imagining these drones
flying around the world

with facial recognition.

When they spot someone
showing signs of sociopathic behavior,

they shower them with love, they fix them.

NB: I think it’s like a hybrid picture.

Eliminate can either mean,
like, incarcerate or kill,

or it can mean persuade them
to a better view of the world.

But the point is that,

suppose you were
extremely successful in this,

and you reduced the number
of such individuals by half.

And if you want to do it by persuasion,

you are competing against
all other powerful forces

that are trying to persuade people,

parties, religion, education system.

But suppose you could reduce it by half,

I don’t think the risk
would be reduced by half.

Maybe by 5 or 10 percent.

CA: You’re not recommending that we gamble
humanity’s future on response two.

NB: I think it’s all good
to try to deter and persuade people,

but we shouldn’t rely on that
as our only safeguard.

CA: How about three?

NB: I think there are two general methods

that we could use to achieve
the ability to stabilize the world

against the whole spectrum
of possible vulnerabilities.

And we probably would need both.

So, one is an extremely effective ability

to do preventive policing.

Such that you could intercept.

If anybody started to do
this dangerous thing,

you could intercept them
in real time, and stop them.

So this would require
ubiquitous surveillance,

everybody would be monitored all the time.

CA: This is essentially a form
of “Minority Report.”

NB: You would have maybe AI algorithms,

big freedom centers
that were reviewing this, etc., etc.

CA: You know that mass surveillance
is not a very popular term right now?

(Laughter)

NB: Yeah, so this little device there,

imagine that kind of necklace
that you would have to wear at all times

with multidirectional cameras.

But, to make it go down better,

just call it the “freedom tag”
or something like that.

(Laughter)

CA: OK.

I mean, this is the conversation, friends,

this is why this is
such a mind-blowing conversation.

NB: Actually, there’s
a whole big conversation on this

on its own, obviously.

There are huge problems and risks
with that, right?

We may come back to that.

So the other, the final,

the other general stabilization capability

is kind of plugging
another governance gap.

So the surveillance would kind of
plug the governance gap at the micro level,

like, preventing anybody
from ever doing something highly illegal.

Then, there’s a corresponding
governance gap

at the macro level, at the global level.

You would need the ability, reliably,

to prevent the worst kinds
of global coordination failures,

to avoid wars between great powers,

arms races,

cataclysmic commons problems,

in order to deal with
the Type-2a vulnerabilities.

CA: Global governance is a term

that’s definitely way out
of fashion right now,

but could you make the case
that throughout the history of humanity,

at every stage
of increase in technological power,

people have reorganized
and sort of centralized the power?

So, for example,
when a roving band of criminals

could take over a society,

the response was,
well, you have a nation-state

and you centralize force,
a police force or an army,

so, “No, you can’t do that.”

The logic, perhaps, of having
a single person or a single group

able to take out humanity

means at some point
we’re going to have to go this route,

at least in some form, no?

NB: It’s certainly true that the scale
of political organization has increased

over the course of human history.

It used to be hunter-gatherer bands, right,

and then chiefdoms, city-states, nations,

now there are international organizations
and so on and so forth.

Again, I just want to make sure

I get the chance to stress

that obviously there are huge downsides

and indeed, massive risks,

both to mass surveillance
and to global governance.

I’m just pointing out
that if we are lucky,

the world could be such
that these would be the only ways

you could survive a black ball.

CA: The logic of this theory,

it seems to me,

is that we’ve got to recognize
we can’t have it all.

That the sort of,

I would say, naive dream
that many of us had

that technology is always
going to be a force for good,

keep going, don’t stop,
go as fast as you can

and not pay attention
to some of the consequences,

that’s actually just not an option.

We can have that.

If we have that,

we’re going to have to accept

some of these other
very uncomfortable things with it,

and kind of be in this
arms race with ourselves

of, you want the power,
you better limit it,

you better figure out how to limit it.

NB: I think it is an option,

a very tempting option,
it’s in a sense the easiest option

and it might work,

but it means we are fundamentally
vulnerable to extracting a black ball.

Now, I think with a bit of coordination,

like, if you did solve this
macrogovernance problem,

and the microgovernance problem,

then we could extract
all the balls from the urn

and we’d benefit greatly.

CA: I mean, if we’re living
in a simulation, does it matter?

We just reboot.

(Laughter)

NB: Then … I …

(Laughter)

I didn’t see that one coming.

CA: So what’s your view?

Putting all the pieces together,
how likely is it that we’re doomed?

(Laughter)

I love how people laugh
when you ask that question.

NB: On an individual level,

we seem to kind of be doomed anyway,
just with the time line,

we’re rotting and aging
and all kinds of things, right?

(Laughter)

It’s actually a little bit tricky.

If you want to set it up
so that you can attach a probability,

first, who are we?

If you’re very old,
probably you’ll die of natural causes,

if you’re very young,
you might have a 100-year –

the probability might depend
on who you ask.

Then the threshold, like, what counts
as civilizational devastation?

In the paper I don’t require
an existential catastrophe

in order for it to count.

This is just a definitional matter,

I say a billion dead,

or a reduction of world GDP by 50 percent,

but depending on what
you say the threshold is,

you get a different probability estimate.

But I guess you could
put me down as a frightened optimist.

(Laughter)

CA: You’re a frightened optimist,

and I think you’ve just created
a large number of other frightened …

people.

(Laughter)

NB: In the simulation.

CA: In a simulation.

Nick Bostrom, your mind amazes me,

thank you so much for scaring
the living daylights out of us.

(Applause)
