6 big ethical questions about the future of AI
Genevieve Bell

Let me tell you a story
about artificial intelligence.

There’s a building in Sydney
at 1 Bligh Street.

It houses lots of government departments

and busy people.

From the outside, it looks like something
out of American science fiction:

all gleaming glass and curved lines,

and a piece of orange sculpture.

On the inside, it has excellent coffee
on the ground floor

and my favorite lifts in Sydney.

They’re beautiful;

they look almost alive.

And it turns out
I’m fascinated with lifts.

For lots of reasons.

But mostly because lifts are one of the places
where you can see the future.

In the 21st century, lifts are interesting

because they’re one of the first places
that AI will touch you

without you even knowing it happened.

In many buildings all around the world,

the lifts are running a set of algorithms.

A form of proto-artificial intelligence.

That means before you even
walk up to the lift to press the button,

it has anticipated your being there.

It’s already rearranging
all the carriages.

Always going down, to save energy,

and to know where
the traffic is going to be.

By the time you’ve actually
pressed the button,

you’re already part of an entire system

that’s making sense of people
and the environment

and the building and the built world.
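
To make that concrete, here is a minimal sketch, in Python, of the kind of anticipatory dispatch logic such a lift system might run. Everything in it, from the traffic profile to the scoring to the names, is an illustrative assumption, not the algorithm running in any real building.

```python
from dataclasses import dataclass

# Illustrative only: a toy anticipatory dispatcher that repositions idle
# lift cars toward floors where traffic is expected, before anyone has
# pressed a button. Real dispatch systems are proprietary and far richer.

@dataclass
class Car:
    name: str
    floor: int
    idle: bool = True

def expected_demand(hour: int, floors: range) -> dict[int, float]:
    """A made-up traffic profile: mornings load the lobby, evenings the offices."""
    if 7 <= hour < 10:                       # morning rush: arrivals at ground
        return {f: (1.0 if f == 0 else 0.1) for f in floors}
    if 16 <= hour < 19:                      # evening rush: departures from above
        return {f: (0.8 if f > 0 else 0.1) for f in floors}
    return {f: 0.3 for f in floors}          # off-peak: no strong pattern

def reposition(cars: list[Car], hour: int, floors: range) -> None:
    """Send each idle car toward the floor with the most unmet expected demand."""
    demand = expected_demand(hour, floors)
    for car in cars:
        if not car.idle:
            continue
        target = max(demand, key=demand.get)
        demand[target] *= 0.5                # discount a floor once a car heads there
        car.floor = target                   # in reality: issue a move command

cars = [Car("A", 12), Car("B", 3), Car("C", 7)]
reposition(cars, hour=8, floors=range(28))
print([(c.name, c.floor) for c in cars])     # morning: cars gather near the lobby
```

The point of the sketch is only the timing: the system has decided where the carriages should be before you arrive, which is exactly the sense in which you are already part of it.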

I know when we talk about AI,
we often talk about a world of robots.

It’s easy for our imaginations
to be occupied with science fiction,

well, over the last 100 years.

I say AI and you think “The Terminator.”

Somewhere, for us, making the connection
between AI and the built world,

that’s a harder story to tell.

But the reality is AI is already
everywhere around us.

And in many places.

It’s in buildings and in systems.

More than 200 years of industrialization

suggest that AI will find its way
to systems-level scale relatively easily.

After all, one telling of that history

suggests that all you have to do
is find a technology,

achieve scale and revolution will follow.

The story of mechanization,
automation and digitization

all point to the role of technology
and its importance.

Those stories of technological
transformation

make scale seem, well, normal.

Or expected.

And stable.

And sometimes even predictable.

It also puts the focus squarely
on technology and technological change.

But I believe that scaling a technology
and building a system

requires something more.

We founded the 3Ai Institute
at the Australian National University

in September 2017.

It has one deceptively simple mission:

to establish a new branch of engineering

to take AI safely, sustainably
and responsibly to scale.

But how do you build a new branch
of engineering in the 21st century?

Well, we’re teaching it into existence

through an experimental education program.

We’re researching it into existence

with locations as diverse
as Shakespeare’s birthplace,

the Great Barrier Reef,

not to mention one of Australia’s
largest autonomous mines.

And we’re theorizing it into existence,

paying attention to the complexities
of cybernetic systems.

We’re working to build something new
and something useful.

Something to create the next generation
of critical thinkers and critical doers.

And we’re doing all of that

through a richer understanding
of AI’s many pasts and many stories.

And by working collaboratively
and collectively

through teaching and research
and engagement,

and by focusing as much
on the framing of the questions

as the solving of the problems.

We’re not making a single AI,

we’re making the possibilities for many.

And we’re actively working
to decolonize our imaginations

and to build a curriculum and a pedagogy

that leaves room for a range of different
conversations and possibilities.

We are making and remaking.

And I know we’re always
a work in progress.

But here’s a little glimpse

into how we’re approaching
that problem of scaling a future.

We start by making sure
we’re grounded in our own history.

In December of 2018,

I took myself up to the town of Brewarrina

on the New South Wales-Queensland border.

This place was a meeting place
for Aboriginal people,

for different groups,

to gather, have ceremonies,
meet, to be together.

There, on the Barwon River,
there’s a set of fish weirs

that are one of the oldest
and largest systems

of Aboriginal fish traps in Australia.

This system comprises
1.8 kilometers of stone walls

shaped like a series of fishnets

with the “U”s pointing down the river,

allowing fish to be trapped
at different heights of the water.

They’re also fish-holding pens
with different-height walls for storage,

designed to change the way the water moves

and to be able to store
big fish and little fish

and to keep those fish
in cool, clear running water.

This fish-trap system was a way to ensure
that you could feed people

as they gathered there in a place
that was both a meeting of rivers

and a meeting of cultures.

It isn’t about the rocks
or even the traps per se.

It is about the system
that those traps created.

One that involves technical knowledge,

cultural knowledge

and ecological knowledge.

This system is old.

Some archaeologists
think it’s as old as 40,000 years.

The last recorded uses we have
are from the 1910s.

It’s had remarkable longevity
and incredible scale.

And it’s an inspiration to me.

And a photo of the weir
is on our walls here at the Institute,

to remind us of the promise
and the challenge

of building something meaningful.

And to remind us
that we’re building systems

in a place where people have built systems

and sustained those same systems
for generations.

It isn’t just our history,

it’s our legacy as we seek to establish
a new branch of engineering.

To build on that legacy
and our sense of purpose,

I think we need a clear framework
for asking questions about the future.

Questions for which there aren’t
ready or easy answers.

Here, the point is the asking
of the questions.

We believe you need to go
beyond the traditional approach

of problem-solving,

to the more complicated one
of question asking

and question framing.

Because in so doing, you open up
all kinds of new possibilities

and new challenges.

For me, right now,

there are six big questions
that frame our approach

for taking AI safely, sustainably
and responsibly to scale.

Questions about autonomy,

agency, assurance,

indicators, interfaces and intentionality.

The first question we ask is a simple one.

Is the system autonomous?

Think back to that lift on Bligh Street.

The reality is, one day,
that lift may be autonomous.

Which is to say it will be able
to act without being told to act.

But it isn’t fully autonomous, right?

It can’t leave that Bligh Street building

and wander down
to Circular Quay for a beer.

It goes up and down, that’s all.

But it does it by itself.

It’s autonomous in that sense.

The second question we ask:

does this system have agency?

Does this system have controls
and limits that live somewhere

that prevent it from doing certain
kinds of things under certain conditions?

With lifts,
that’s absolutely the case.

Think of any lift you’ve been in.

There’s a red keyslot
in the elevator carriage

that an emergency services person
can stick a key into

and override the whole system.

But what happens
when that system is AI-driven?

Where does the key live?

Is it a physical key, is it a digital key?

Who gets to use it?

Is that the emergency services people?

And how would you know
if that was happening?

How would all of that be manifested
to you in the lift?
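
The question is easier to hold onto with a concrete sketch. Here is one hedged answer, in Python, to “where does the key live?”: a digital override that sits above the AI dispatch layer and outranks it. The names and the credential handling (LiftController, emergency_override, EMERGENCY_KEY) are invented for illustration, not drawn from any real lift.

```python
import hmac

# A sketch of a digital analogue to the red keyslot: an override that
# suspends the AI layer entirely. The key management here is a placeholder;
# a real system would need issued credentials, auditing, and revocation.

EMERGENCY_KEY = b"issued-to-emergency-services"   # who holds it is a policy choice

class LiftController:
    def __init__(self) -> None:
        self.overridden = False

    def request_move(self, target_floor: int) -> str:
        if self.overridden:
            return "manual control: AI dispatch suspended"
        return f"AI dispatch: routing a car to floor {target_floor}"

    def emergency_override(self, presented_key: bytes) -> bool:
        """Authorized holders only; constant-time compare avoids leaking the key."""
        if hmac.compare_digest(presented_key, EMERGENCY_KEY):
            self.overridden = True            # a hard limit above the AI layer
            return True
        return False                          # every attempt should also be logged

lift = LiftController()
print(lift.request_move(12))                  # normal AI-driven behaviour
lift.emergency_override(b"issued-to-emergency-services")
print(lift.request_move(12))                  # the override wins, and visibly so
```

Even this toy version surfaces the talk’s questions: the code decides who holds the key and what the override does, and someone had to make those decisions.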

The third question we ask
is how we think about assurance.

How do we think about all of its pieces:

safety, security, trust, risk,
liability, manageability,

explicability, ethics,
public policy, law, regulation?

And how would we tell you
that the system was safe and functioning?

The fourth question we ask

is what our interfaces
with these AI-driven systems will be.

Will we talk to them?

Will they talk to us,
will they talk to each other?

And what will it mean to have
a series of technologies we’ve known,

for some of us, all our lives,

now suddenly behave
in entirely different ways?

Lifts, cars, the electrical grid,
traffic lights, things in your home.

The fifth question
for these AI-driven systems:

What will the indicators be
to show that they’re working well?

Two hundred years
of the industrial revolution

tells us that the two most important ways
to think about a good system

are productivity and efficiency.

In the 21st century,

you might want to expand
that just a little bit.

Is the system sustainable,

is it safe, is it responsible?

Who gets to judge those things for us?

Users of the systems
would want to understand

how these things are regulated,
managed and built.
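
As a sketch of what that expanded scorecard might look like, here is a toy set of indicators in Python. The measures, the numbers, and the 0.7 threshold are placeholders invented for illustration, not proposed standards.

```python
# Illustrative only: widening the industrial-era scorecard (productivity,
# efficiency) with the indicators the talk argues a 21st-century system needs.

legacy_indicators = {"productivity": 0.92, "efficiency": 0.88}

expanded_indicators = {
    **legacy_indicators,
    "sustainability": 0.60,   # e.g. energy drawn per trip, on a made-up 0-1 scale
    "safety": 0.99,           # e.g. incident-free operation rate
    "responsibility": 0.75,   # e.g. audited compliance with the system's stated limits
}

def needs_attention(indicators: dict[str, float], floor: float = 0.7) -> list[str]:
    """Name the indicators that fall below an (assumed) acceptable floor."""
    return [name for name, score in indicators.items() if score < floor]

print(needs_attention(expanded_indicators))   # ['sustainability'] on these made-up numbers
```

Notice that the hard part is not the code; it is deciding what to measure, how, and who sets the floor, which is the talk’s point about who gets to judge.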

And then there’s the final,
perhaps most critical question

that you need to ask
of these new AI systems.

What’s its intent?

What’s the system designed to do

and who said that was a good idea?

Or put another way,

what is the world
that this system is building,

how is that world imagined,

and what is its relationship
to the world we live in today?

Who gets to be part of that conversation?

Who gets to articulate it?

How does it get framed and imagined?

There are no simple answers
to these questions.

Instead, they frame what’s possible

and what we need to imagine,

design, build, regulate
and even decommission.

They point us in the right directions

and help us on a path to establish
a new branch of engineering.
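
One way to hold the six questions together is to treat them as a single review checklist. The sketch below does that in Python; the field names follow the talk, but the prompts, the structure, and the example answers are an illustration, not an instrument the Institute actually uses.

```python
from dataclasses import dataclass, fields

# A minimal sketch: the six framing questions as a reviewable record.

@dataclass
class SystemReview:
    autonomy: str        # Is the system autonomous, and in what bounded sense?
    agency: str          # What controls and limits constrain what it may do?
    assurance: str       # How are safety, security, trust, liability, regulation handled?
    interfaces: str      # How will people and other systems interact with it?
    indicators: str      # Beyond productivity and efficiency, what shows it works well?
    intentionality: str  # What world is it designed to build, and who decided that?

def open_questions(review: SystemReview) -> list[str]:
    """List the questions still unanswered for this system."""
    return [f.name for f in fields(review) if not getattr(review, f.name).strip()]

lift = SystemReview(
    autonomy="Moves between floors on its own; cannot leave the building.",
    agency="An emergency-services key can override the whole system.",
    assurance="", interfaces="", indicators="", intentionality="",
)
print(open_questions(lift))  # ['assurance', 'interfaces', 'indicators', 'intentionality']
```

A structure like this does not answer anything; it just keeps every question visible at once, which is the framing work the talk is asking for.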

But critical questions aren’t enough.

You also need a way of holding
all those questions together.

For us at the Institute,

we’re also really interested
in how to think about AI as a system,

and where and how to draw
the boundaries of that system.

And those feel like especially
important things right now.

Here, we’re influenced by the work
that was started way back in the 1940s.

In 1944, along with anthropologists
Gregory Bateson and Margaret Mead,

mathematician Norbert Wiener
convened a series of conversations

that would become known
as the Macy Conferences on Cybernetics.

Ultimately, between 1946 and 1953,

ten conferences were held
under the banner of cybernetics.

As defined by Norbert Wiener,

cybernetics sought
to “develop a language and techniques

that will enable us to indeed attack
the problem of control and communication

in advanced computing technologies.”

Cybernetics argued persuasively

that one had to think
about the relationship

between humans, computers

and the broader ecological world.

You had to think about them
as a holistic system.

Participants in the Macy Conferences
were concerned with how the mind worked,

with ideas about
intelligence and learning,

and about the role
of technology in our future.

Sadly, the conversations that started
with the Macy Conferences

are often forgotten
when the talk is about AI.

But for me, there’s something
really important to reclaim here

about the idea of a system
that has to accommodate culture,

technology and the environment.

At the Institute, that sort
of systems thinking is core to our work.

Over the last three years,

a whole collection of amazing people
have joined me here

on this crazy journey to do this work.

Our staff includes anthropologists,

systems and environmental engineers,
and computer scientists

as well as a nuclear physicist,

an award-winning photojournalist,

and at least one policy
and standards expert.

It’s a heady mix.

And the range of experience
and expertise is powerful,

as are the conflicts and the challenges.

Being diverse requires
a constant willingness

to find ways to hold people
in conversation.

And to dwell just a little bit
with the conflict.

We also worked out early

that the way to build
a new way of doing things

would require a commitment to bringing
others along on that same journey with us.

So we opened our doors
to an education program very quickly,

and we launched our first
master’s program in 2018.

Since then, we’ve had two cohorts
of master’s students

and one cohort of PhD students.

Our students come from all over the world

and all over life.

Australia, New Zealand, Nigeria, Nepal,

Mexico, India, the United States.

And they range in age from 23 to 60.

They variously had backgrounds
in maths and music,

policy and performance,

systems and standards,

architecture and arts.

Before they joined us at the Institute,

they ran companies,
they worked for government,

served in the army, taught high school,

and managed arts organizations.

They were adventurers

and committed to each other,

and to building something new.

And really, what more could you ask for?

Because although I’ve spent
20 years in Silicon Valley

and I know the stories
about the lone inventor

and the hero’s journey,

I also know the reality.

That it’s never just a hero’s journey.

It’s always a collection of people
who have a shared sense of purpose

who can change the world.

So where do you start?

Well, I think you start where you stand.

And for me, that means
I want to acknowledge

the traditional owners of the land
upon which I’m standing.

The Ngunnawal and Ngambri people,

this is their land,

never ceded, always sacred.

And I pay my respects to the elders,
past and present, of this place.

I also acknowledge
that we’re gathering today

in many other places,

and I pay my respects
to the traditional owners and elders

of all those places too.

It means a lot to me
to get to say those words

and to dwell on what they mean and signal.

And to remember that we live in a country

that has been continuously occupied
for at least 60,000 years.

Aboriginal people built worlds here,

they built social systems,
they built technologies.

They built ways to manage this place

and to manage it remarkably
over a protracted period of time.

And every moment any one of us
stands on a stage as Australians,

here or abroad,

we carry with us a privilege
and a responsibility

because of that history.

And it’s not just a history.

It’s also an incredibly rich
set of resources,

worldviews and knowledge.

And it should run through all of our bones

and it should be the story we always tell.

Ultimately, it’s about
thinking differently,

asking different kinds of questions,

looking holistically
at the world and the systems,

and finding other people who want
to be on that journey with you.

Because for me,

the only way to actually think
about the future and scale

is to always be doing it collectively.

And because for me,

the notion of humans in it together

is one of the ways
we get to think about things

that are responsible, safe

and ultimately, sustainable.

Thank you.
