Eye vs. camera

Michael Mauser

Watch the center of this disk.

You are getting sleepy.

No, just kidding.

I’m not going to hypnotize you.

But are you starting
to see colors in the rings?

If so, your eyes
are playing tricks on you.

The disk was only ever black and white.

You see, your eyes don’t always
capture the world as a video camera would.

In fact, there are quite
a few differences,

owing to the anatomy of your eye

and the processing
that takes place in your brain

and its outgrowth, the retina.

Let’s start with some similarities.

Both have lenses to focus light
and sensors to capture it,

but even those things behave differently.

The lens in a camera moves to stay
focused on an object hurtling towards it,

while the one in your eye responds
by changing shape.

Most camera lenses are also achromatic,

meaning they focus both red
and blue light to the same point.

Your eye is different.

When red light from an object is in focus,
the blue light is out of focus.
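A thin-lens model makes this concrete. This is an illustrative sketch with an assumed lens shape factor and assumed refractive indices, not measured values for the human eye:

```python
# Thin-lens sketch of chromatic aberration: 1/f = (n - 1) * K for a fixed
# lens geometry K. Because the refractive index n is slightly higher for
# blue light than for red, blue comes to a focus in front of red.
K = 1 / 60.0                   # lens shape factor (1/mm), an assumed value
n_red, n_blue = 1.415, 1.425   # assumed indices for a simple dispersive medium

f_red = 1 / ((n_red - 1) * K)    # focal length for red light (mm)
f_blue = 1 / ((n_blue - 1) * K)  # focal length for blue light (mm)

assert f_blue < f_red  # blue focuses closer to the lens than red
```

Even a tiny index difference shifts the focal plane by a few percent, which is the blur the next lines ask about.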

So why don’t things look
partially out of focus all the time?

To answer that question,

we first need to look at how your eye
and the camera capture light:

photoreceptors.

The light-sensitive surface in a camera
only has one kind of photoreceptor

that is evenly distributed
throughout the focusing surface.

An array of red, green and blue filters
on top of these photoreceptors

causes them to respond selectively to
long, medium and short wavelength light.
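That filter-array idea can be sketched in a few lines. This is a minimal illustration assuming the common RGGB Bayer layout, not any particular camera's pipeline:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image through an RGGB Bayer filter array.

    rgb: (H, W, 3) array. Returns an (H, W) array in which each sensor
    pixel keeps only the one channel its color filter passes.
    """
    h, w, _ = rgb.shape
    out = np.zeros((h, w), dtype=rgb.dtype)
    out[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red filters: even rows, even cols
    out[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green filters: even rows, odd cols
    out[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green filters: odd rows, even cols
    out[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue filters: odd rows, odd cols
    return out
```

Each photoreceptor records a single number; the camera later interpolates the two missing channels at every pixel from its neighbors.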

Your eyes’ retinas, on the other hand,
have several types of photoreceptors,

usually three for normal light conditions,
and only one type for low light,

which is why we’re color blind
in the dark.

In normal light, unlike the camera,
we have no need for a color filter

because our photoreceptors
already respond selectively

to different wavelengths of light.

Also in contrast to a camera,

your photoreceptors
are unevenly distributed,

with no receptors for dim light
in the very center.

This is why faint stars seem to disappear
when you look directly at them.

The center also has very few receptors
that can detect blue light,

which is why you don’t notice the blurred
blue image from earlier.

However, you still perceive blue there

because your brain
fills it in from context.

Also, the edges of our retinas
have relatively few receptors

for light of any wavelength.

So our visual acuity
and ability to see color

falls off rapidly
from the center of our vision.

There is also an area in our eyes
called the blind spot

where there are no
photoreceptors of any kind.

We don’t notice a lack of vision there

because once again,
our brain fills in the gaps.

In a very real sense,
we see with our brains, not our eyes.

And because our brains,
including the retinas,

are so involved in the process,

we are susceptible to visual illusions.

Here’s another illusion
caused by the eye itself.

Does the center of this image
look like it’s jittering around?

That’s because your eye actually
jiggles most of the time.

If it didn’t, your vision
would eventually shut down

because the nerves on the retina
stop responding to a stationary image

of constant intensity.
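A toy model shows why a frozen image would fade. This is a minimal sketch with an assumed exponential adaptation rule, not real retinal data:

```python
def adapted_response(inputs, rate=0.5):
    """Response of an adapting receptor to a sequence of light intensities.

    The receptor signals the difference between the input and a running
    baseline; the baseline drifts toward the input at the given rate, so a
    constant input produces a response that decays toward zero.
    """
    baseline, out = 0.0, []
    for x in inputs:
        out.append(x - baseline)            # respond to change from baseline
        baseline += rate * (x - baseline)   # baseline adapts toward the input
    return out
```

Feed it a constant intensity and the response halves each step; the constant jiggle of the eye keeps refreshing the input so this fade never completes.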

And unlike a camera,

you briefly stop seeing whenever you make
a larger movement with your eyes.

That’s why you can’t see
your own eyes shift

as you look from
one to the other in a mirror.

Video cameras can
capture details our eyes miss,

magnify distant objects

and accurately record what they see.

But our eyes are remarkably
efficient adaptations,

the result of hundreds
of millions of years

of coevolution with our brains.

And so what if we don’t always see
the world exactly as it is.

There’s a certain joy to be found
watching stationary leaves

waving on an illusory breeze,

and maybe even an evolutionary advantage.

But that’s a lesson for another day.
