Shapeshifting tech will change work as we know it
Sean Follmer

We’ve evolved with tools,
and tools have evolved with us.

Our ancestors created these
hand axes 1.5 million years ago,

shaping them to not only
fit the task at hand

but also their hand.

However, over the years,

tools have become
more and more specialized.

These sculpting tools
have evolved through their use,

and each one has a different form
which matches its function.

And they leverage
the dexterity of our hands

in order to manipulate things
with much more precision.

But as tools have become
more and more complex,

we need more sophisticated controls
to operate them.

And so designers have become
very adept at creating interfaces

that allow you to manipulate parameters
while you’re attending to other things,

such as taking a photograph
and changing the focus

or the aperture.

But the computer has fundamentally
changed the way we think about tools

because computation is dynamic.

So it can do a million different things

and run a million different applications.

However, computers have
the same static physical form

for all of these different applications

and the same static
interface elements as well.

And I believe that this
is fundamentally a problem,

because it doesn’t really allow us
to interact with our hands

and capture the rich dexterity
that we have in our bodies.

And my belief is that we need
new types of interfaces

that can capture these
rich abilities that we have

and that can physically adapt to us

and allow us to interact in new ways.

And so that’s what I’ve been doing
at the MIT Media Lab

and now at Stanford.

So with my colleagues,
Daniel Leithinger and Hiroshi Ishii,

we created inFORM,

where the interface can actually
come off the screen

and you can physically manipulate it.

Or you can visualize
3D information physically

and touch it and feel it
to understand it in new ways.

Or you can interact through gestures
and direct deformations

to sculpt digital clay.

Or interface elements can arise
out of the surface

and change on demand.

And the idea is that for each
individual application,

the physical form can be matched
to the application.

And I believe this represents a new way

that we can interact with information,

by making it physical.

So the question is, how can we use this?

Traditionally, urban planners
and architects build physical models

of cities and buildings
to better understand them.

So with Tony Tang at the Media Lab,
we created an interface built on inFORM

to allow urban planners
to design and view entire cities.

And now you can walk around it,
but it’s dynamic, it’s physical,

and you can also interact directly.

Or you can look at different views,

such as population or traffic information,

but it’s made physical.

We also believe that these dynamic
shape displays can really change

the ways that we remotely
collaborate with people.

So when we’re working together in person,

I’m not only looking at your face

but I’m also gesturing
and manipulating objects,

and that’s really hard to do
when you’re using tools like Skype.

And so using inFORM,
you can reach out from the screen

and manipulate things at a distance.

So we used the pins of the display
to represent people’s hands,

allowing them to actually touch
and manipulate objects at a distance.

And you can also manipulate
and collaborate on 3D data sets as well,

so you can gesture around them
as well as manipulate them.

And that allows people to collaborate
on these new types of 3D information

in a richer way than might
be possible with traditional tools.

And so you can also
bring in existing objects,

and those will be captured on one side
and transmitted to the other.

Or you can have an object that’s linked
between two places,

so as I move a ball on one side,

the ball moves on the other as well.

And so we do this by capturing
the remote user

using a depth-sensing camera
like a Microsoft Kinect.
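
(For the technically curious: below is a minimal sketch, in Python, of the kind of depth-to-pin mapping this involves. The grid size, depth window, and pin travel are illustrative assumptions, not the published inFORM parameters.)

```python
# A sketch of mapping a Kinect-style depth frame onto a pin grid:
# downsample the frame to one value per pin, then convert depth to
# height so that closer objects raise pins higher.
import numpy as np

GRID = 30                    # assumed pin grid resolution (30 x 30 pins)
NEAR_MM, FAR_MM = 500, 1200  # assumed usable depth window of the camera
MAX_TRAVEL_MM = 100          # assumed vertical travel of each pin

def depth_to_pin_heights(depth_mm: np.ndarray) -> np.ndarray:
    """Map a (H, W) depth image in millimetres to a GRID x GRID
    array of pin heights in millimetres."""
    h, w = depth_mm.shape
    # Downsample by averaging each block of pixels onto one pin.
    bh, bw = h // GRID, w // GRID
    blocks = depth_mm[:bh * GRID, :bw * GRID].reshape(GRID, bh, GRID, bw)
    coarse = blocks.mean(axis=(1, 3))
    # Normalise: NEAR_MM -> fully raised, FAR_MM -> fully lowered.
    t = np.clip((FAR_MM - coarse) / (FAR_MM - NEAR_MM), 0.0, 1.0)
    return t * MAX_TRAVEL_MM

# Example: a synthetic frame with a "hand" 600 mm away
# in front of a background 1200 mm away.
frame = np.full((480, 640), 1200.0)
frame[200:280, 300:400] = 600.0
heights = depth_to_pin_heights(frame)
```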

Now, you might be wondering
how this all works.

Essentially, it's 900 linear actuators

that are connected to these
mechanical linkages

that allow motion down here
to be propagated to these pins above.

So it’s not that complex
compared to what’s going on at CERN,

but it did take a long time
for us to build it.
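
(Here is a hedged sketch of the per-pin control loop such a display needs. The gain, timestep, and speed limit are illustrative assumptions, and the real system runs on custom boards rather than in Python.)

```python
# A proportional control loop driving all 900 pins toward their
# target heights at once, with an actuator speed limit.
import numpy as np

N_PINS = 900
KP = 8.0            # assumed proportional gain (1/s)
DT = 0.01           # assumed control period (10 ms)
MAX_SPEED = 250.0   # assumed max pin speed (mm/s)

def control_step(positions: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """One update: move every pin toward its target, clamped to
    the actuators' speed limit."""
    error = targets - positions
    velocity = np.clip(KP * error, -MAX_SPEED, MAX_SPEED)
    return positions + velocity * DT

# Example: raise all 900 pins from flat to 50 mm over repeated updates.
positions = np.zeros(N_PINS)
goal = np.full(N_PINS, 50.0)
for _ in range(200):
    positions = control_step(positions, goal)
```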

And so we started with a single motor,

a single linear actuator,

and then we had to design
a custom circuit board to control them.

And then we had to make a lot of them.

And so the problem with having
900 of something

is that you have to do
every step 900 times.

And so that meant that we had
a lot of work to do.

So we sort of set up
a mini-sweatshop in the Media Lab

and brought undergrads in and convinced
them to do “research” –

(Laughter)

and had late nights
watching movies, eating pizza

and screwing in thousands of screws.

You know – research.

(Laughter)

But anyway, I think that we were
really excited by the things

that inFORM allowed us to do.

Increasingly, we’re using mobile devices,
and we interact on the go.

But mobile devices, just like computers,

are used for so many
different applications.

So you use them to talk on the phone,

to surf the web, to play games,
to take pictures

or even a million different things.

But again, they have the same
static physical form

for each of these applications.

And so we wanted to know how we could take
some of the same interactions

that we developed for inFORM

and bring them to mobile devices.

So at Stanford, we created
this haptic edge display,

which is a mobile device
with an array of linear actuators

that can change shape,

so you can feel in your hand
where you are as you’re reading a book.

Or you can feel in your pocket
new types of tactile sensations

that are richer than vibration alone.

Or buttons can emerge from the side
that allow you to interact

where you want them to be.

Or you can play games
and have actual buttons.

And so we were able to do this

by embedding 40 tiny linear
actuators inside the device,

which allow you not only to touch them

but also to back-drive them.
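
(Below is a minimal sketch of how back-drivable pins can double as input: a pin pushed away from its commanded position reads as a press. The 40-pin count matches the talk; the threshold is an illustrative assumption.)

```python
# Detect user presses on back-drivable pins by comparing where each
# pin was commanded to be with where it actually is.
N_PINS = 40
PRESS_THRESHOLD_MM = 1.5   # assumed deviation that counts as a press

def detect_presses(commanded, measured):
    """Return the indices of pins the user is pushing in."""
    return [i for i, (c, m) in enumerate(zip(commanded, measured))
            if (c - m) > PRESS_THRESHOLD_MM]

# Example: pin 7 is commanded out to 5 mm but measured at 2 mm -> pressed.
commanded = [5.0] * N_PINS
measured = [5.0] * N_PINS
measured[7] = 2.0
print(detect_presses(commanded, measured))   # [7]
```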

But we’ve also looked at other ways
to create more complex shape change.

So we’ve used pneumatic actuation
to create a morphing device

where you can go from something
that looks a lot like a phone …

to a wristband on the go.

And so together with Ken Nakagaki
at the Media Lab,

we created this new
high-resolution version

that uses an array of servomotors
to change from interactive wristband

to a touch-input device

to a phone.

(Laughter)

And we’re also interested
in looking at ways

that users can actually
deform the interfaces

to shape them into the devices
that they want to use.

So you can make something
like a game controller,

and then the system will understand
what shape it’s in

and change to that mode.
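
(A toy sketch of that shape-to-mode idea: compare the device's current deformation readings against stored templates and switch to the nearest mode. The sensor vector and templates here are invented for illustration.)

```python
# Classify the device's deformed shape by nearest-template matching
# over a small vector of bend-sensor readings (0 = flat, 1 = fully bent).
import numpy as np

MODE_TEMPLATES = {
    "game_controller": np.array([0.9, 0.9, 0.1, 0.1]),
    "phone":           np.array([0.0, 0.0, 0.0, 0.0]),
    "wristband":       np.array([0.8, 0.8, 0.8, 0.8]),
}

def classify_shape(sensors: np.ndarray) -> str:
    """Pick the mode whose deformation template is nearest the reading."""
    return min(MODE_TEMPLATES,
               key=lambda m: np.linalg.norm(MODE_TEMPLATES[m] - sensors))

print(classify_shape(np.array([0.85, 0.95, 0.05, 0.2])))  # game_controller
```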

So, where does this point?

How do we move forward from here?

I think, really, where we are today

is in this new age
of the Internet of Things,

where we have computers everywhere –

they’re in our pockets,
they’re in our walls,

they’re in almost every device
that you’ll buy in the next five years.

But what if we stopped
thinking about devices

and thought instead about environments?

And so how can we have smart furniture

or smart rooms or smart environments

or cities that can adapt to us physically,

and allow us to collaborate
with people in new ways

and do new types of tasks?

So for the Milan Design Week,
we created TRANSFORM,

which is an interactive table-scale
version of these shape displays,

which can move physical objects
on the surface; for example,

reminding you to take your keys.
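
(One way a pin display can slide an object, sketched under assumptions: a travelling bump of raised pins passes beneath it and nudges it along. The wave shape and speed are illustrative, not the TRANSFORM implementation.)

```python
# Generate pin heights for a travelling Gaussian bump along one row
# of the display; an object on the surface rides its leading edge.
import numpy as np

GRID = 16

def wave_frame(t: float, speed: float = 4.0, width: float = 2.0) -> np.ndarray:
    """Pin heights (0..1) for one row at time t: a bump whose centre
    moves across the surface at `speed` pins per second."""
    x = np.arange(GRID)
    centre = (speed * t) % GRID
    return np.exp(-((x - centre) ** 2) / (2 * width ** 2))

# Animate by rendering wave_frame(t) on the pin row for increasing t.
```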

But it can also transform
to fit different ways of interacting.

So if you want to work,

then it can change to sort of
set up your work system.

And so as you bring a device over,

it creates all the affordances you need

and brings other objects
to help you accomplish those goals.

So, in conclusion,

I really think that we need to think
about a new, fundamentally different way

of interacting with computers.

We need computers
that can physically adapt to us

and adapt to the ways
that we want to use them

and really harness the rich dexterity
that we have in our hands,

and our ability to think spatially
about information by making it physical.

But looking forward, I think we need
to go beyond this, beyond devices,

to really think about new ways
that we can bring people together,

and bring our information into the world,

and think about smart environments
that can adapt to us physically.

So with that, I will leave you.

Thank you very much.

(Applause)
