How Benjamin Button got his face Ed Ulbrich

I'm here today representing a team of artists, technologists and filmmakers that worked together on a remarkable film project for the last four years. And along the way they created a breakthrough in computer visualization. So I want to show you a clip of the film now. Hopefully it won't stutter, and if we did our jobs well, you won't know that we were even involved.

(Video) Benjamin Button: What if I told you I wasn't getting older, I was getting younger than everybody else? I was born with some form of disease.

Voice: What kind of disease?

Benjamin: I was born old.

Man: I'm sorry.

Benjamin: No need to be. There's nothing wrong with old age.

Girl: Are you sick?

Benjamin: I heard Mama and Tizzy whispering. They said I was gonna die soon, but maybe not.

Girl: You're different than anybody I've ever met.

Benjamin: There were many changes, some you could see, some you couldn't. Hair started growing in all sorts of places, along with other things. I felt pretty good, considering.

That was a clip from The Curious Case of Benjamin Button. Maybe you've seen it, or you've heard of the story, but what you might not know is that for nearly the first hour of the film, the main character, Benjamin Button, who's played by Brad Pitt, is completely computer-generated from the neck up. There's no use of prosthetic makeup or photography of Brad superimposed over another actor's body. We created a completely digital human head.

So I'd like to start with a little bit of history on the project. This is based on an F. Scott Fitzgerald short story. It's about a man who's born old and lives his life in reverse. Now, this movie has floated around Hollywood for well over half a century. We first got involved with the project in the early '90s, with Ron Howard as the director. We took a lot of meetings and we seriously considered it, but at the time we had to throw in the towel. It was deemed impossible. It was beyond the technology of the day to depict a man aging backwards. The human form, in particular the human head, has been considered the holy grail of our industry. The project came back to us about a decade later, and this time with a director named David Fincher.

Now, Fincher is an interesting guy. David is fearless of technology, and he is absolutely tenacious. And David won't take no. And David believed, like we do in the visual effects industry, that anything is possible as long as you have enough time, resources and, of course, money. And so David had an interesting take on the film, and he threw a challenge at us. He wanted the main character of the film to be played from the cradle to the grave by one actor. It happened to be this guy.

We went through a process of elimination and a process of discovery with David, and we ruled out, of course, swapping actors. That was one idea: that we would have different actors, and we would hand off from actor to actor. We even ruled out the idea of using makeup. We realized that prosthetic makeup just wouldn't hold up, particularly in close-up. And makeup is an additive process; you have to build the face up. And David wanted to carve deeply into Brad's face to bring the aging to this character. He needed to be a very sympathetic character. So we decided to cast a series of little people that would play the different bodies of Benjamin at the different increments of his life, and that we would in fact create a computer-generated version of Brad's head, aged to appear as Benjamin, and attach that to the body of the real actor.

Sounded great. Of course, this was the holy grail of our industry, and the fact that this guy is a global icon didn't help either, because if any of you ever stand in line at the grocery store, you know, we see his face constantly, so there really was no tolerable margin of error. There were two studios involved, Warner Brothers and Paramount, and they both believed this would make an amazing film, of course, but it was a very high-risk proposition. There was lots of money and reputations at stake. And we believed that we had a very solid methodology that might work, but despite our verbal assurances, they wanted some proof. And so, in 2004, they commissioned us to do a screen test of Benjamin. And we did it in about five weeks, but we used lots of cheats and shortcuts. We basically put something together to get through the meeting. I'll roll that for you now.

This was the first test for Benjamin Button, and in here you can see, that's a computer-generated head, pretty good, attached to the body of another actor. And it worked. And it gave the studio great relief. After many years of starts and stops on this project, and making that tough decision, they finally decided to greenlight the movie. And I can remember, actually, when I got the phone call to congratulate us, to say the movie was a go, I actually threw up.

You know, this is tough stuff. So we started to have early team meetings, and we got everybody together, and it was really more like therapy in the beginning, convincing each other and reassuring each other that we could actually undertake this. We had to hold up an hour of a movie with a character, and it's not a special-effects film; it has to be a man. We really felt like we were in kind of a 12-step program, and of course, the first step is: admit you've got a problem. And we had a big problem: we didn't know how we were going to do this. But we did know one thing. Being from the visual effects industry, we, with David, believed that we now had enough time, enough resources, and, God, we hoped we had enough money. And we had enough passion to will the processes and technology into existence. So when you're faced with something like that, of course you've got to break it down. You take the big problem and you break it down into smaller pieces that you can start to attack.

So we had three main areas that we had to focus on. We needed to make Brad look a lot older; we needed to age him 45 years or so. We also needed to make sure that we could take Brad's idiosyncrasies, his little tics, the little subtleties that make him who he is, and have that translate through our process, so that it appears in Benjamin on the screen. And we also needed to create a character that could hold up under really all conditions. He needed to be able to walk in broad daylight, at nighttime, under candlelight; he had to hold an extreme close-up; he had to deliver dialogue; he had to be able to run; he had to be able to sweat; he had to be able to take a bath, to cry; he even had to throw up. Not all at the same time, but he had to do all of those things. And the work had to hold up for almost the first hour of the movie. We did about 325 shots. So we needed a system that would allow Benjamin to do everything a human being can do. And we realized that there was a giant chasm between the state of the art of technology in 2004 and where we needed it to be.

So we focused on motion capture. Now, I'm sure many of you have seen motion capture, and the state of the art at the time was something called marker-based motion capture. I'll show you an example here. It's basically the idea that you wear a leotard and they put some reflective markers on your body, and instead of using cameras, there are infrared sensors around a volume, and those infrared sensors track the three-dimensional position of those markers in real time. Animators can then take the data of the motion of those markers and apply it to a computer-generated character. So you can see the computer characters on the right are having the same complex motion as the dancers.
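The marker-driven idea described above can be sketched in a few lines. This is a toy illustration, not production motion-capture software: all function names and data are invented, and real solvers fit a full skeleton rather than moving vertices directly. It simply applies each tracked marker's displacement from its rest position to the character vertices bound to it.

```python
# Toy sketch of marker-based motion capture retargeting: move each
# character vertex by the displacement of its nearest rest-pose marker.
# (Illustrative only; real pipelines solve for skeleton joint angles.)

def bind_vertices_to_markers(rest_vertices, rest_markers):
    """For each vertex, find the index of the nearest rest-pose marker."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [min(range(len(rest_markers)),
                key=lambda m: dist2(v, rest_markers[m]))
            for v in rest_vertices]

def solve_frame(rest_vertices, rest_markers, frame_markers, binding):
    """Apply each marker's displacement from rest to its bound vertices
    (a crude skinning; shown only to make the data flow concrete)."""
    posed = []
    for v, m in zip(rest_vertices, binding):
        dx = [fm - rm for fm, rm in zip(frame_markers[m], rest_markers[m])]
        posed.append(tuple(vi + di for vi, di in zip(v, dx)))
    return posed

# Toy data: two markers, three character vertices.
rest_markers = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
rest_verts = [(0.1, 0.0, 0.0), (0.9, 0.0, 0.0), (0.5, 1.0, 0.0)]
binding = bind_vertices_to_markers(rest_verts, rest_markers)

# One captured frame: the second marker raised 0.5 on the y axis.
frame = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)]
posed = solve_frame(rest_verts, rest_markers, frame, binding)
```

The coarseness of this scheme is exactly the talk's point: everything between the markers is invented by interpolation, which is why the team eventually abandoned it.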

We also looked at a number of other films of the time that were using facial marker tracking. That's the idea of putting markers on the human face and doing the same process. And as you can see, it gives you a pretty crappy performance; that's not terribly compelling. And what we realized was that what we needed was the information about what was going on between the markers. We needed the subtleties of the skin. We needed to see skin moving over muscle moving over bone, the creases and dimples and wrinkles, all of those things. So our first big revelation was to completely abort and walk away from the technology of the day, the status quo, the state of the art. So we aborted using motion capture, and we were now well out of our comfort zone, in uncharted territory. So we were left with this idea that we ended up calling a "technology stew."

We started to look out into other fields, and the idea was that we were going to find nuggets or gems of technology that come from other industries, like medical imaging or the video game space, and appropriate them. And we had to create kind of a sauce. And the sauce was code and software that we wrote to allow these disparate pieces of technology to come together and work as one.

Initially, we came across some remarkable research done by a gentleman named Dr. Paul Ekman in the early '70s. He believed that he could, in fact, catalog the human face. And he came up with the idea of the Facial Action Coding System, or FACS. He believed that there are basically 70 basic poses, or shapes, of the human face, and that those basic poses or shapes can be combined to create infinite possibilities of everything the human face is capable of doing. And of course, these transcend age, race, culture and gender. So this became the foundation of our research as we went forward.
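The combinatorial idea behind a facial coding system, a small library of basic shapes blended by weights into new expressions, is essentially the standard blendshape formula. Here is a minimal sketch; the shape names and vertex data are invented for illustration, and this is not the production system from the film.

```python
# Minimal blendshape sketch: result = neutral + sum_i w_i * (shape_i - neutral),
# applied per vertex coordinate. A handful of basic poses combine into
# many expressions, which is the core of the FACS idea described above.

def blend(neutral, shapes, weights):
    """Combine weighted deltas of basic shapes onto the neutral face."""
    result = [list(v) for v in neutral]
    for name, w in weights.items():
        shape = shapes[name]
        for i, (nv, sv) in enumerate(zip(neutral, shape)):
            for c in range(3):
                result[i][c] += w * (sv[c] - nv[c])
    return [tuple(v) for v in result]

# Toy face with two vertices (a brow point and a mouth-corner point).
neutral = [(0.0, 1.0, 0.0), (0.5, 0.0, 0.0)]
shapes = {
    "brow_raise": [(0.0, 1.2, 0.0), (0.5, 0.0, 0.0)],
    "smile":      [(0.0, 1.0, 0.0), (0.6, 0.1, 0.0)],
}

# Half a brow raise plus a full smile:
expr = blend(neutral, shapes, {"brow_raise": 0.5, "smile": 1.0})
```

Because the deltas add linearly, a few dozen basic shapes can span a huge space of expressions, which is why a catalog of 70 poses can be enough.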

Then we came across some remarkable technology called Contour. And here you can see a subject having phosphorescent makeup stippled on her face. And now what we're looking at is really creating a surface capture, as opposed to a marker capture. The subject stands in front of a computer array of cameras, and those cameras can, frame by frame, reconstruct the geometry of exactly what the subject is doing at the moment. So effectively, you get 3D data in real time of the subject. And if you look at a comparison, on the left you see what volumetric data gives us, and on the right you see what markers give us. So clearly we were in a substantially better place for this. But these were the early days of this technology, and it wasn't really proven yet. We measure complexity and fidelity of data in terms of polygonal count, and on the left we were seeing a hundred thousand polygons. We could go up into the millions of polygons. It seemed to be infinite. This is when we had our "aha."

This was the breakthrough. This is when we were like, "OK, we're going to be OK. This is actually going to work." And the "aha" was: what if we could take Brad Pitt, and we could put Brad in this device, and use this Contour process, and we could stipple on the phosphorescent makeup and put him under the black lights, and we could, in fact, scan him in real time performing Ekman's FACS poses? Right? So, effectively, we ended up with a 3D database of everything Brad Pitt's face is capable of doing. From there, we actually carved up those faces into smaller pieces and components of his face. So we ended up with literally thousands and thousands and thousands of shapes: a complete database of all the possibilities that his face is capable of doing.

Now, that's great, except we had him at age 44. We needed to put another 40 years on him at this point. We brought in Rick Baker, and Rick is one of the great makeup and special effects gurus of our industry. And we also brought in a gentleman named Kazu Tsuji, and Kazu Tsuji is one of the great photo-real sculptors of our time. And we commissioned them to make a maquette, or a bust, of Benjamin. So, in the spirit of the great unveiling, I had to do this. I had to unveil something. So this is Ben 80. We created three of these: there's Ben 80, there's Ben 70, there's Ben 60. And this really became the template for moving forward. Now, this was made from a life cast of Brad, so, in fact, anatomically it is correct. The eyes, the jaw, the teeth: everything is in perfect alignment with what the real guy has.

We had these maquettes scanned into the computer at very high resolution, with enormous polygonal count. So now we had three age increments of Benjamin in the computer. But we needed to get a database of him doing more than that, so we went through this process, then, called retargeting. This is Brad doing one of the Ekman FACS poses, and here's the resulting data that comes from that, the model that comes from that. Retargeting is the process of transposing that data onto another model. And because the life cast, or the bust, the maquette, of Benjamin was made from Brad, we could transpose the data of Brad at 44 onto Brad at 87. So now we had a 3D database of everything Brad Pitt's face can do at age 87, in his 70s and in his 60s.
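The retargeting step described above can be made concrete with a toy sketch. It assumes what the talk states, that the aged maquette came from a life cast of the same actor, so the two meshes can share vertex correspondence; a captured expression then transposes as per-vertex displacements. The data here is invented, and the real pipeline is far more involved than a straight displacement copy.

```python
# Toy retargeting sketch: carry a captured expression from a young
# scan onto an aged scan of the same face by copying per-vertex
# displacements. Assumes identical vertex ordering on both meshes.

def retarget(young_neutral, young_posed, old_neutral):
    """old_posed[i] = old_neutral[i] + (young_posed[i] - young_neutral[i])"""
    return [tuple(o + (p - n) for o, p, n in zip(ov, pv, nv))
            for ov, pv, nv in zip(old_neutral, young_posed, young_neutral)]

# Two-vertex toy face: a brow point and a cheek point.
young_neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
young_posed   = [(0.0, 0.2, 0.0), (1.0, 0.0, 0.0)]    # brow raised by 0.2
old_neutral   = [(0.0, -0.1, 0.0), (1.05, -0.05, 0.0)]  # sagged, aged shape

old_posed = retarget(young_neutral, young_posed, old_neutral)
```

The aged face keeps its own resting shape but inherits the captured motion: the brow point rises by the same 0.2 while the untouched cheek point stays where the aged sculpt put it.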

Next we had to go into the shooting process. So while that was going on, we were down in New Orleans and locations around the world, and we shot our body actors. We shot them wearing blue hoods. So these are the gentlemen who played Benjamin, and the blue hoods helped us with two things: one, we could easily erase their heads, and two, we also put tracking markers on their heads so that we could recreate the camera motion and the lens optics from the set. But now we needed to get Brad's performance to drive our virtual Benjamin. And so we edited the footage that was shot on location with the rest of the cast and the body actors, and about six months later we brought Brad onto a soundstage in Los Angeles. And he watched on the screen, and his job then was to become Benjamin. And so we looped the scenes. He watched again and again. We encouraged him to improvise, and he took Benjamin into interesting and unusual places that we didn't think he was going to go. We shot him with four HD cameras so we'd get multiple views of him, and then David would choose the take of Brad being Benjamin that he thought best matched the footage with the rest of the cast.

From there we went into a process called image analysis. So here you can see, again, the chosen take, and we're seeing that data being transposed onto Ben 87. And what's interesting about this is that image analysis takes timings from different components of Benjamin's face. So we could choose, say, his left eyebrow, and the software would tell us that, well, in frame 14 the left eyebrow begins to move from here to here, and it concludes moving in frame 32. And so we could choose any number of positions on the face to pull that data from.
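The eyebrow-timing idea above, finding the frames where a tracked feature starts and stops moving, can be sketched as a simple threshold on per-frame displacement. The track data below is invented to mirror the talk's example of motion between frames 14 and 32; real image analysis works on 2D/3D feature tracks, not a single scalar.

```python
# Toy image-analysis timing sketch: given a per-frame position of one
# facial feature, report the first and last frames where it moves
# (motion = frame-to-frame displacement above a small threshold).

def motion_interval(track, threshold=1e-3):
    """Return (first_moving_frame, last_moving_frame), or None if static.
    Frame f counts as 'moving' when the feature moved from f-1 to f."""
    moving = [f for f in range(1, len(track))
              if abs(track[f] - track[f - 1]) > threshold]
    return (moving[0], moving[-1]) if moving else None

# Toy 1-D eyebrow-height track over 40 frames: static, then a steady
# rise between frames 14 and 32, then static again.
track = [0.0] * 14 + [0.01 * (f - 13) for f in range(14, 33)] + [0.19] * 7

interval = motion_interval(track)
```

Timings like these are what let the matching software know when each region of the captured database should be driven, rather than animating everything by hand.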

And then the sauce I talked about with our technologies, that secret sauce, was effectively software that allowed us to match the performance footage of Brad in live action with our database of aged Benjamin, the FACS shapes that we had. And on a frame-by-frame basis, we could actually reconstruct a 3D head that exactly matched the performance of Brad. So this is how the finished shot appeared in the film. And here you can see the body actor. And then this is what we called the "dead head," no reference to Jerry Garcia. And then here's the reconstructed performance, now with the timings of the performance. And then, again, the final shot. It was a long process.

The next section here, I'm going to just blast through this, because we could do a whole TED Talk on the next several slides. We had to create a lighting system. So really, a big part of our process was creating a lighting environment for every single location that Benjamin had to appear in, so that we could put Ben's head into any scene and it would exactly match the lighting that's on the other actors in the real world.
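Why a per-location lighting environment matters can be shown with a tiny shading sketch: if the digital head is lit by the same lights measured on set, its shading reads the same as the live-action actors around it. The sketch below is purely illustrative, with invented light data and simple Lambert diffuse; film pipelines use captured HDR environments and physically based rendering.

```python
# Toy lighting-environment sketch: each location is a few directional
# lights; a surface point is shaded with Lambert diffuse. The same
# point reads very differently under different captured environments.

def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def lambert(normal, lights):
    """Diffuse intensity: sum over lights of max(0, N.L) * intensity."""
    n = normalize(normal)
    total = 0.0
    for direction, intensity in lights:
        d = normalize(direction)
        total += max(0.0, sum(a * b for a, b in zip(n, d))) * intensity
    return total

# Two hypothetical captured environments.
daylight  = [((0.0, 1.0, 0.0), 1.0), ((1.0, 1.0, 0.0), 0.3)]
candlelit = [((1.0, 0.0, 0.0), 0.2)]

up = (0.0, 1.0, 0.0)  # an upward-facing point on the forehead, say
bright = lambert(up, daylight)
dim = lambert(up, candlelit)
```

Swapping in the wrong environment leaves the head visibly mis-lit against the plate, which is why every location needed its own captured lighting.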

We also had to create an eye system. We found that the old adage, you know, "the eyes are the window to the soul," is absolutely true. So the goal here was to keep everybody looking in Ben's eyes. And if you could feel the warmth, and feel the humanity, and feel his intent coming through the eyes, then we would succeed. So we had one person focused on the eye system for almost two full years.

We also had to create a mouth system. We worked from dental molds of Brad; we aged the teeth over time. We also had to create an articulating tongue that allowed him to enunciate his words. So there was a whole system and software written to articulate the tongue. We had one person devoted to the tongue for about nine months. He was very popular.

Skin displacement: another big deal. The skin had to be absolutely accurate. And he's also in an old-age home, he's in a nursing home around other old people, so he had to look exactly the same as the others. So, lots of work on skin deformation. You can see that in some of these cases it works, and in some cases it looks bad; this is a very, very early test in our process. So, effectively, we created a digital puppet that Brad Pitt could operate with his own face. There were no animators necessary to come in and interpret behavior or enhance his performance.

There was something we encountered, though, that we ended up calling the "digital Botox" effect. As things went through this process, Fincher would always say it "sandblasts the edges off of the performance." And one thing our process and the technology couldn't do is understand intent, the intent of the actor. So it sees a smile as a smile; it doesn't recognize an ironic smile, or a happy smile, or a frustrated smile. So it did take humans to kind of push it one way or the other. But we ended up calling the entire process and all the technology "emotion capture," as opposed to just motion capture. So, take another look.

(Video) Benjamin Button: I heard Mama and Tizzy whispering. They said I was gonna die soon, but maybe not.

Ed Ulbrich: That's how to create a digital human in 18 minutes.

A couple of quick factoids: it really took 155 people over two years, and we didn't even talk about the 60 hairstyles and an all-digital haircut. But that is Benjamin. Thank you.
