Why I draw with robots
Sougwen Chung

Translator: Ivana Korom
Reviewer: Camille Martínez

Many of us here use technology
in our day-to-day.

And some of us rely
on technology to do our jobs.

For a while, I thought of machines
and the technologies that drive them

as perfect tools that could make my work
more efficient and more productive.

But the rise of automation
across so many different industries

led me to wonder:

If machines are starting
to be able to do the work

traditionally done by humans,

what will become of the human hand?

How does our desire for perfection,
precision and automation

affect our ability to be creative?

In my work as an artist and researcher,
I explore AI and robotics

to develop new processes
for human creativity.

For the past few years,

I’ve made work alongside machines,
data and emerging technologies.

It’s part of a lifelong fascination

with the dynamics
of individuals and systems

and all the messiness that that entails.

It’s how I’m exploring questions about
where AI ends and we begin

and where I’m developing processes

that investigate potential
sensory mixes of the future.

I think it’s where philosophy
and technology intersect.

Doing this work
has taught me a few things.

It’s taught me how embracing imperfection

can actually teach us
something about ourselves.

It’s taught me that exploring art

can actually help shape
the technology that shapes us.

And it’s taught me
that combining AI and robotics

with traditional forms of creativity –
visual arts in my case –

can help us think a little bit more deeply

about what is human
and what is the machine.

And it’s led me to the realization

that collaboration is the key
to creating the space for both

as we move forward.

It all started with a simple
experiment with machines,

called “Drawing Operations
Unit: Generation 1.”

I call the machine “D.O.U.G.” for short.

Before I built D.O.U.G.,

I didn’t know anything
about building robots.

I took some open-source
robotic arm designs

and hacked together a system
where the robot would match my gestures

and follow them in real time.

The premise was simple:

I would lead, and it would follow.

I would draw a line,
and it would mimic my line.
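
To make the premise concrete, here is a minimal sketch of that follow-the-leader loop. The talk doesn't document D.O.U.G.'s actual hardware or tracking stack, so the pen tracker and arm driver below are hypothetical stand-ins.

```python
import time

def read_pen_position():
    """Hypothetical stylus tracker: returns the artist's current (x, y).
    A real setup might poll a drawing tablet or a camera tracker here."""
    return (0.0, 0.0)

class RoboticArm:
    """Stand-in for an open-source arm driver; move_to is illustrative."""
    def move_to(self, x, y):
        pass  # a real driver would translate (x, y) into joint commands

arm = RoboticArm()
for _ in range(600):             # ~10 seconds of following at 60 Hz
    x, y = read_pen_position()   # where the human hand is drawing
    arm.move_to(x, y)            # the robot mirrors the gesture
    time.sleep(1 / 60)           # in physical reality, servo lag and slip
                                 # make this following imperfect
```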

So back in 2015, there we were,
drawing for the first time,

in front of a small audience
in New York City.

The process was pretty sparse –

no lights, no sounds,
nothing to hide behind.

Just my palms sweating
and the robot’s new servos heating up.

(Laughs) Clearly, we were
not built for this.

But something interesting happened,
something I didn’t anticipate.

See, D.O.U.G., in its primitive form,
wasn’t tracking my line perfectly.

While in the simulation
that happened onscreen

it was pixel-perfect,

in physical reality,
it was a different story.

It would slip and slide
and punctuate and falter,

and I would be forced to respond.

There was nothing pristine about it.

And yet, somehow, the mistakes
made the work more interesting.

The machine was interpreting
my line but not perfectly.

And I was forced to respond.

We were adapting
to each other in real time.

And seeing this taught me a few things.

It showed me that our mistakes
actually made the work more interesting.

And I realized that, you know,
through the imperfection of the machine,

our imperfections became
what was beautiful about the interaction.

And I was excited,
because it led me to the realization

that maybe part of the beauty
of human and machine systems

is their shared inherent fallibility.

For the second generation of D.O.U.G.,

I knew I wanted to explore this idea.

But instead of an accident produced
by pushing a robotic arm to its limits,

I wanted to design a system
that would respond to my drawings

in ways that I didn’t expect.

So, I used a visual algorithm
to extract visual information

from decades of my digital
and analog drawings.

I trained a neural net on these drawings

in order to generate
recurring patterns in the work

that were then fed through custom software
back into the machine.
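
As a rough sketch of what that training step might look like: a small recurrent net learning to predict the next pen offset in a stroke sequence, so that sampling from it reproduces recurring gestures. The actual D.O.U.G._2 model and data pipeline aren't specified in the talk, so the PyTorch model and stand-in data here are purely illustrative.

```python
import torch
import torch.nn as nn

# Illustrative only: a small recurrent net trained on (dx, dy) pen offsets
# harvested from digitized drawings, predicting each next offset.

class StrokeModel(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # predict the next pen offset

    def forward(self, strokes):          # strokes: (batch, steps, 2)
        out, _ = self.rnn(strokes)
        return self.head(out)

model = StrokeModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# stand-in data: real training would use stroke sequences
# extracted from the tagged drawings
dataset = torch.randn(64, 100, 2)

for epoch in range(10):
    pred = model(dataset[:, :-1])         # predict each next offset
    loss = loss_fn(pred, dataset[:, 1:])  # from the points before it
    opt.zero_grad()
    loss.backward()
    opt.step()
```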

I painstakingly collected
as many of my drawings as I could find –

finished works, unfinished experiments
and random sketches –

and tagged them for the AI system.

And since I’m an artist,
I’ve been making work for over 20 years.

Collecting that many drawings took months;

it was a whole thing.

And here’s the thing
about training AI systems:

it’s actually a lot of hard work.

A lot of work goes on behind the scenes.

But in doing the work,
I realized a little bit more

about how the architecture
of an AI is constructed.

And I realized it’s not just made
of models and classifiers

for the neural network.

But it’s a fundamentally
malleable and shapable system,

one in which the human hand
is always present.

It’s far from the omnipotent AI
we’ve been told to believe in.

So I collected these drawings
for the neural net.

And we realized something
that wasn’t previously possible.

My robot D.O.U.G. became
a real-time interactive reflection

of the work I’d done
through the course of my life.

The data was personal,
but the results were powerful.

And I got really excited,

because I started thinking maybe
machines don’t need to be just tools,

but they can function
as nonhuman collaborators.

And even more than that,

I thought maybe
the future of human creativity

isn’t in what it makes

but how it comes together
to explore new ways of making.

So if D.O.U.G._1 was the muscle,

and D.O.U.G._2 was the brain,

then I like to think
of D.O.U.G._3 as the family.

I knew I wanted to explore this idea
of human-nonhuman collaboration at scale.

So over the past few months,

I worked with my team
to develop 20 custom robots

that could work with me as a collective.

They would work as a group,

and together, we would collaborate
with all of New York City.

I was really inspired
by Stanford researcher Fei-Fei Li,

who said, “If we want to teach
machines how to think,

we need to first teach them how to see.”

It made me think of the past decade
of my life in New York,

and how I’d been all watched over by these
surveillance cameras around the city.

And I thought it would be
really interesting

if I could use them
to teach my robots to see.

So with this project,

I thought about the gaze of the machine,

and I began to think about vision
as multidimensional,

as views from somewhere.

We collected video

from publicly available
camera feeds on the internet

of people walking on the sidewalks,

cars and taxis on the road,

all kinds of urban movement.

We trained a vision algorithm
on those feeds

based on a technique
called “optical flow,”

to analyze the collective density,

direction, dwell and velocity states
of urban movement.

Our system extracted those states
from the feeds as positional data

and became paths for my
robotic units to draw.
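
A hedged sketch of that optical-flow analysis, using OpenCV's dense Farneback flow. The feed file and the exact definitions of density, direction, dwell and velocity below are assumptions for illustration, not the project's own.

```python
import cv2

cap = cv2.VideoCapture("street_feed.mp4")  # stand-in for a public camera feed
ok, frame = cap.read()
if not ok:
    raise SystemExit("no frames in feed")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # dense per-pixel (dx, dy) motion field between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    moving = mag > 1.0                    # pixels with appreciable motion
    density = moving.mean()               # how much of the scene is moving
    velocity = mag[moving].mean() if moving.any() else 0.0
    direction = ang[moving].mean() if moving.any() else 0.0
    # "dwell" could be tracked as regions where motion stays low over time

    # these summary states would then be mapped to positional data
    # that drives the drawing machines, as described above
    prev_gray = gray
```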

Instead of a collaboration of one-to-one,

we made a collaboration of many-to-many.

By combining the vision of human
and machine in the city,

we reimagined what
a landscape painting could be.

Throughout all of my
experiments with D.O.U.G.,

no two performances
have ever been the same.

And through collaboration,

we create something that neither of us
could have done alone:

we explore the boundaries
of our creativity,

human and nonhuman working in parallel.

I think this is just the beginning.

This year, I’ve launched Scilicet,

my new lab exploring human
and interhuman collaboration.

We’re really interested
in the feedback loop

between individual, artificial
and ecological systems.

We’re connecting human and machine output

to biometrics and other kinds
of environmental data.

We’re inviting anyone who’s interested
in the future of work, systems

and interhuman collaboration

to explore with us.

We know it’s not just technologists
that have to do this work

and that we all have a role to play.

We believe that by teaching machines

how to do the work
traditionally done by humans,

we can explore and evolve our criteria

of what’s made possible by the human hand.

And part of that journey
is embracing the imperfections

and recognizing the fallibility
of both human and machine,

in order to expand the potential of both.

Today, I’m still in pursuit
of finding the beauty

in human and nonhuman creativity.

In the future, I have no idea
what that will look like,

but I’m pretty curious to find out.

Thank you.

(Applause)
