Your Algorithm Will See You Now

Transcriber: Petra Molnárová
Reviewer: Hani Eldalees

So today, your algorithm
will see you now.

And what do I mean by that?

Well, I’m going to get into it.

So, as a dermatologist,
every day in the clinic,

we will see skin like this.

I’ll walk in and see a patient

and their back covered
with all these different spots.

As a dermatologist, I know to look
at some of the more important ones.

Why did this spot stand out more to me
than all of the rest on there?

And why did this jump out
to me as being something

like a melanoma that I
would want to biopsy?

Well, that is the importance of vision.

It’s also the importance
of decades of training,

thousands of hours of experience

and real-world patient
context that allows you

to come to that diagnosis
as a dermatologist.

Now, I’d like to talk about how
important these skin cancers are

and how important all skin cancers
and all cancers are.

On average, per year, we have about
5.4 million non-melanoma skin cancers

that are diagnosed

and that comes out to about one
skin cancer every five seconds.

So what I’d like to discuss, is how we
can leverage ubiquitous technology,

that we have all around us,

to help come to a higher
fidelity diagnosis,

a better rendering of care for patients
and a higher fidelity health care.

Specifically, this ubiquitous
technology that we’re talking about -

things like your iPhone,
your camera, smartphone,

even your Apple Watch
and other data sensor metrics,

we can now use this data and technology

to reach a more conclusive
diagnosis in health care

using artificial intelligence.

Now, before we jump into all
of the computer science and the fun stuff,

I want to do something that might
not have been done in a TED talk before.

A lot of people listen to these remotely,

they listen to a podcast, and I’d like you
to, at this moment, safely pull over

on the road or pull over from your workout
and take a look at your phone

or whatever streaming device you’re using,

we’re doing a little pop quiz.

This pop quiz is a guessing game.

Who am I?

Who am I specifically talking
about right now?

And whenever the answer comes to
your mind, just blurt it out.

It’s OK if you’re bothering
whoever’s next to you,

they’ll probably like it and ask
about the talk in and of itself.

So, I am a male.

I have a beard.

I’m tall.

I like speeches.

I was born in a log cabin.

And I was a president of the USA.

If you’ve come to a conclusion, again,
feel free to blurt that one out.

And that’s Abraham Lincoln over here.

Those pieces of data are all siloed off
into the individual components,

like the height, the description,
the beard, all of these things,

and it takes a little time
to tease out that data,

to see exactly what we’re talking about.

Now, instead of those

individual natural language data points,

if we leverage visual data,

you now get to the second
question of this pop quiz,

which is, who is this?

And the answer would be
the greatest player in NBA history,

Kobe Bryant, and it’s a lot easier
to come to conclusions like that

based on visual data than individual
components of data points.

And I’d like to discuss today,
how we can do that same thing

in dermatology and in health care,

to get a much better diagnosis

and really leverage a lot more
data points for our patients.

So, specifically, when patients
come in today,

they’ve got all different categories
like their blood work,

which medications they’re on, various
electronic medical records,

and then all sorts of other data points
from a physical exam and paperwork.

Those are sort of individual components

that aren’t really brought together
in a useful way yet.

But we’re getting very close.

We’re getting close thanks to the use

of machine learning,
artificial intelligence,

and really computer vision

to come to those integrated diagnoses,
and integrated decision platforms

a lot more quickly with higher fidelity.

So, again, we’re talking today
about leveraging visual data points

to come to a better conclusion,
to diagnose things like cancer,

heart attacks, strokes,

very important health care
issues for the whole world.

So, again, we’re moving from what I like
to call artificial intelligence

into more of a clinical workflow of what
I like to coin as augmented intelligence.

That’s really bringing the data points
of artificial intelligence

into an augmented decision platform
with both patients and physicians,

rendering that as not just
its own small piece,

but really a conclusion that
they come to together.

So we’re moving from health care
to augmented health care,

really augmenting the ability to diagnose,
treat and manage.

I think a great way to think about
this is what’s been done with Tesla.

Look at Tesla, which has done phenomenally
well in artificial intelligence

and edge mapping, machine learning,
vision mapping, computer vision.

This is actual footage

of Tesla’s autonomous vehicle
in real time in Silicon Valley,

driving around the streets.

Obviously, this isn’t fully autonomous.

This is working in conjunction
with the driver themselves,

you can see that it maps out
all these data points.

We’re trying to do the same thing
with data points in health care.

So how do they do that?

Lots of really great HD cameras

that are brilliantly integrated
in their vehicles.

So, how can we take some
of those high definition images

we see in dermatology
on patients’ skin, on human skin,

and really integrate that in, to do this
with machine learning in health care?

So, what a great research team
did up at Stanford,

was they took over 100,000
biopsy-proven clinical images

and over 2,000 different
disease classifications

and they put that into

a convolutional neural network
through a training set model.

We won’t get into the details
of the computer science on it,

but after training that
model back and forth,

it ultimately comes to a conclusion
when being quizzed on a new lesion

or a new image, and it
renders a diagnosis,

whether that’s a melanoma,
a dysplastic nevus or whatever it may be.

So, again,

they took that now trained artificial
intelligence machine learning model

and they quizzed it on all
sorts of different images

and different kinds of cutaneous findings
that might or might not be cancerous.
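The train-then-quiz pipeline described above can be sketched roughly as follows. This is a toy illustration only: the random feature vectors, made-up class labels and plain softmax layer stand in for the actual convolutional network and the 100,000-image dataset, which aren’t reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for extracted image features: 200 "lesions",
# 16 features each, 3 made-up disease classes.
n, d, k = 200, 16, 3
X = rng.normal(size=(n, d))
true_W = rng.normal(size=(d, k))
y = np.argmax(X @ true_W + 0.5 * rng.normal(size=(n, k)), axis=1)

# Split into a training set and a held-out "quiz" set.
X_train, y_train = X[:150], y[:150]
X_quiz, y_quiz = X[150:], y[150:]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Train a linear softmax classifier by gradient descent.
W = np.zeros((d, k))
for _ in range(500):
    p = softmax(X_train @ W)
    onehot = np.eye(k)[y_train]
    W -= 0.1 * X_train.T @ (p - onehot) / len(X_train)

# "Quiz" the trained model on lesions it has never seen,
# the same way the Stanford model was tested on new images.
pred = np.argmax(softmax(X_quiz @ W), axis=1)
accuracy = (pred == y_quiz).mean()
print(f"quiz accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the workflow, not the numbers: the model never sees the quiz set during training, so the reported accuracy reflects how it generalizes.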

This is sort of what the computer sees.

You see this sort of static edge mapping
outline on some of these lesions,

as well as this heat map
kind of rendering

what would be a concerning finding

versus what would be
a little bit more benign.
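One simple way to produce that kind of heat map is occlusion sensitivity: blank out one region of the image at a time and record how much the model’s output drops. Below is a minimal sketch, where `concern_score` is a hypothetical stand-in for the trained model’s melanoma probability; the actual rendering technique used in the study may differ.

```python
import numpy as np

# Hypothetical stand-in for a trained model's "concern score":
# here, overall darkness is treated as concerning.
def concern_score(img):
    return float((1.0 - img).mean())

def occlusion_heatmap(img, patch=4, fill=1.0):
    """Blank out one patch at a time with the pale-skin value;
    the drop in the model's score becomes that patch's heat."""
    base = concern_score(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - concern_score(occluded)
    return heat

# Synthetic 16x16 "skin" image: a dark lesion on pale skin,
# pixel values in [0, 1] where 1.0 is the pale background.
img = np.ones((16, 16))
img[4:8, 4:8] = 0.1
heat = occlusion_heatmap(img)
hot = np.unravel_index(heat.argmax(), heat.shape)
print("hottest patch:", hot)  # lands on the dark lesion
```

Regions whose removal most reduces the score light up hottest, which is exactly the concerning-versus-benign rendering described above.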

And this is real time.

A demo of this happening on a cell phone,

looking at these images
and actually scanning them

with reasonable fidelity to see whether
or not this is something concerning,

like a melanoma

or it’s something less concerning,
like a benign cutaneous growth.

They quizzed that model on carcinomas

against board-certified dermatologists
working together.

They found the algorithm had
about a 96% accurate diagnosis

for those carcinomas,

and about a 94% accurate
diagnosis for melanomas,

which is pretty incredible

that we’re able to do that
at this young stage

in artificial intelligence in health care.

Even more impressive is another group
back over at MIT and Harvard

that did a collaborative study taking
instead of those individual lesions,

looking at a full total body photography
or more of a global picture,

that would be more practical,
like the image at the start,

where you’re actually looking
at the global picture of a patient

and identifying which one of those points,

which one of those lesions are
more concerning than others.

And it’s really the start
of this technology.

I think this is extremely cool.

This can work in anything in health care,

not just dermatology, not just pathology,

it’s really anything that’s
visual at this point.

Hopefully, we can move into a bit
more natural language processing

in the near future,

but this is allowing us to augment
dermatopathology as well,

with some of the teams over at Google
doing an incredible job,

really rendering those outlines of what
are concerning features under histology,

and even another group over at Stanford
doing this for X-rays as well,

for identifying this again directly
on the cell phone,

using these algorithms
to not just render a diagnosis,

but to augment what we can already do

with well-trained, brilliant physicians
and scientists taking care of patients.

So with this significant
increase in technology,

we now have our global patient,
our global human.

We’re all patients at the end of the day,

if we’re lucky enough
to make it through there.

We’ve got different data points,
whether it’s your Apple Watch,

your sleep tracking data,
genomic data, exercise data,

all of these things can come
together in the near future

and render a much more
high fidelity diagnosis.

I don’t think we’re that far off

from having something as simple
as your FaceTime on your phone

or unlocking your phone from the Lock Screen,

rendering somewhat of a diagnosis
that you may or may not be experiencing

a stroke at this point in time,
based on facial droop,

hemifacial paralysis or things like that,

and that could be the difference
between saving someone’s life

with ubiquitous technology.

So, I think we’re really
moving from something

that’s currently a granular
pixellated patient picture

to a much more high fidelity,

high resolution health care.

I’m really optimistic
about the future here,

looking forward to continuing
to build that.

I think, in conclusion,

this is really the arc of innovation,

and it’s the arc of innovation
in anything, bringing us back

to the analogy of self-driving vehicles
and vehicles in general.

So, with transportation,
basic transportation is like walking.

Basic health care is like a physical
examination, you look at the patient.

The next thing you develop
over that platform is a tool

like the horse and carriage,
or a stethoscope.

And then on top of that, we’ve got
great technology like cars or MRIs

or genomic sequencing,
and then on top of that,

you have an infrastructure type
development like roads and highways

or in health care,
electronic medical records.

I think the last frontier
is what we’re starting to see

in that autonomous ride-sharing
kind of macro-level system change

where you’re changing the whole platform,
leveraging existing technologies

to then bring out the best components
of all of the above.

And I think in health care,

we can really leverage
augmented intelligence

to deliver the best possible patient care.

So in conclusion, will your
algorithm see you now?

I don’t think it will
and I don’t think it ever should

because driving cars
autonomously is not the same

as rendering a new cancer diagnosis

and working through that with a family,

with a loved one and really coming to
a shared decision-making platform.

This will never be something
that should be autonomous

but I am optimistic we’re going to see

a day, not far from now, where
you can walk in front of your mirror

as you get ready in the AM,
brushing your teeth

and that mirror will tell you, “Well, hey,

there’s maybe a 9% chance of rain today.

Oh, by the way, that lesion
on the side of your neck -

possibly an 89% chance that’s melanoma.

Also, thanks to all of your
exercise and sleep physiology data,

you’ve decreased your risk for
a heart attack by about 10%.”

Now, this might seem like
it’s a long way away,

but long shots are never
as long as they seem.

Thank you for your attention.
