How do hard drives work? - Kanawat Senanan

Imagine an airplane flying
one millimeter above the ground

and circling the Earth
once every 25 seconds

while counting every blade of grass.

Shrink all that down so that it fits
in the palm of your hand,

and you’d have something equivalent
to a modern hard drive,

an object that can likely hold
more information than your local library.

So how does it store so much information
in such a small space?

At the heart of every hard drive
is a stack of high-speed spinning discs

with a recording head
flying over each surface.

Each disc is coated with a film
of microscopic magnetized metal grains,

and your data doesn’t live there
in a form you can recognize.

Instead, it is recorded
as a magnetic pattern

formed by groups of those tiny grains.

In each group, also known as a bit,

all of the grains have
their magnetizations aligned

in one of two possible states,

which correspond to zeroes and ones.

Data is written onto the disc

by converting strings of bits
into electrical current

fed through an electromagnet.

This magnet generates a field
strong enough to change the direction

of the metal grains' magnetization.
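
As a minimal sketch of that write step (illustrative only; real drives add channel coding and far more complex signaling), each bit sets the polarity of the write current, and the resulting field sets the magnetization of a group of grains:

    # Illustrative Python sketch, not a real drive interface:
    # each bit picks a write-current polarity, and the grain
    # group's magnetization follows the electromagnet's field.
    def write_bits(bits):
        track = []
        for bit in bits:
            current = +1 if bit == 1 else -1   # current direction
            track.append(current)              # grains align with the field
        return track

    print(write_bits([1, 0, 1, 1, 0]))         # [1, -1, 1, 1, -1]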

Once this information is written
onto the disc,

the drive uses a magnetic reader
to turn it back into a useful form,

much like a phonograph needle
translates a record’s grooves into music.

But how can you get so much information
out of just zeroes and ones?

Well, by putting lots of them together.

For example, a letter is represented
in one byte, or eight bits,

and your average photo
takes up several megabytes,

each of which is 8 million bits.
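
That arithmetic is easy to verify; the three-megabyte photo below is just an assumed example size:

    # One ASCII letter occupies one byte, i.e. eight bits:
    print(format(ord('A'), '08b'))        # 01000001 -> 8 bits

    # A hypothetical 3-megabyte photo, expressed in bits:
    photo_mb = 3                          # assumed example size
    bits_per_mb = 8_000_000               # 1 megabyte = 8 million bits
    print(photo_mb * bits_per_mb)         # 24000000 bits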

Because each bit must be written onto
a physical area of the disc,

we’re always seeking to increase
the disc’s areal density,

or how many bits can be squeezed
into one square inch.

The areal density of a modern hard drive
is about 600 gigabits per square inch,

300 million times greater than that
of IBM’s first hard drive from 1957.
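
Those two numbers also pin down the 1957 starting point. A quick check, using only the figures from the talk:

    modern = 600e9              # bits per square inch, modern drive
    gain = 300e6                # the 300-million-fold improvement
    print(modern / gain)        # 2000.0 bits per square inch in 1957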

This amazing advance in storage capacity

wasn’t just a matter
of making everything smaller,

but involved multiple innovations.

A technique called the thin film
lithography process

allowed engineers
to shrink the reader and writer.

And despite its size,
the reader became more sensitive

by taking advantage of new discoveries in
magnetic and quantum properties of matter.

Bits could also be packed closer together
thanks to mathematical algorithms

that filter out noise
from magnetic interference,

and find the most likely bit sequences
from each chunk of read-back signal.
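
A toy sketch of that idea (with made-up channel parameters; real drives equalize the signal and run the Viterbi algorithm instead of this brute-force search): simulate a noisy read-back where neighboring bits smear together, then pick the bit sequence whose ideal signal best matches what was read.

    import itertools, random

    # Toy read channel: bits are written as +1/-1, each sample picks
    # up a fraction of its neighbor (inter-symbol interference) plus
    # random noise. The 0.5 and 0.3 are assumed, not real values.
    def read_back(bits, isi=0.5, noise=0.3):
        x = [1 if b else -1 for b in bits]
        return [x[k] + isi * (x[k-1] if k else 0)
                + random.gauss(0, noise) for k in range(len(x))]

    # Most-likely-sequence detection: try every candidate sequence
    # and keep the one whose noiseless signal is closest (least
    # squared error) to the samples actually read.
    def detect(samples, isi=0.5):
        best, best_err = None, float('inf')
        for cand in itertools.product([0, 1], repeat=len(samples)):
            x = [1 if b else -1 for b in cand]
            ideal = [x[k] + isi * (x[k-1] if k else 0)
                     for k in range(len(x))]
            err = sum((s - i) ** 2 for s, i in zip(samples, ideal))
            if err < best_err:
                best, best_err = list(cand), err
        return best

    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    print(detect(read_back(bits)) == bits)   # usually True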

And thermal expansion control of the head,

enabled by placing a heater
under the magnetic writer,

allowed it to fly less than
five nanometers above the disc’s surface,

about the width of two strands of DNA.

For the past several decades,

the exponential growth in computer
storage capacity and processing power

has followed a pattern
known as Moore’s Law,

which, in 1975, predicted that information
density would double every two years.
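
As a rough sanity check against the 300-million-fold figure above (the 1957 and 2015 endpoints are assumed, and the two-year doubling is only an approximation over this whole span):

    years = 2015 - 1957        # assumed span, first drive to "modern"
    print(2 ** (years / 2))    # ~5.4e8: same order as 300 million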

But at around 100 gigabits
per square inch,

shrinking the magnetic grains further
or cramming them closer together

posed a new risk
called the superparamagnetic effect.

When a magnetic grain's volume is too small,

its magnetization is easily disturbed
by heat energy,

which can cause bits
to switch unintentionally,

leading to data loss.
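
The effect follows from the Néel-Arrhenius relation: the expected time before heat flips a grain grows exponentially with the grain's volume, so shrinking grains collapses it. All values below are illustrative assumptions, not measured data.

    import math

    kB, tau0 = 1.38e-23, 1e-9       # Boltzmann constant; attempt time (s)
    Ku, T = 1e6, 300                # assumed anisotropy (J/m^3), room temp (K)

    # Halving the grain diameter cuts its volume 8x, and the
    # expected lifetime drops from effectively forever to microseconds.
    for d in (8e-9, 4e-9):          # grain diameter in meters
        V = math.pi / 6 * d ** 3    # volume of a spherical grain
        print(d, tau0 * math.exp(Ku * V / (kB * T)))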

Scientists resolved this limitation
in a remarkably simple way:

by changing the direction of recording
from longitudinal to perpendicular,

allowing areal density to approach
one terabit per square inch.

Recently, the potential limit has been
increased yet again

through heat assisted magnetic recording.

This uses an even more thermally
stable recording medium,

whose resistance to magnetic switching
is momentarily reduced

by heating up a particular spot
with a laser

and allowing data to be written.
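
In terms of the sketch above, heating the spot raises T, which shrinks the exponential barrier just long enough for the write field to flip the grains (a simplification; the real mechanism heats the medium toward its Curie temperature). The values are again assumed:

    import math

    kB, tau0 = 1.38e-23, 1e-9
    Ku, V = 3e6, 6.5e-26            # assumed high-stability HAMR-like medium
    for T in (300, 700):            # room temperature vs. laser-heated spot
        barrier = Ku * V / (kB * T) # stability ratio: huge when cold
        print(T, tau0 * math.exp(barrier))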

And while those drives are currently
in the prototype stage,

scientists already have the next potential
trick up their sleeves:

bit-patterned media,

where bit locations are arranged
in separate, nano-sized structures,

potentially allowing for areal densities
of twenty terabits per square inch

or more.
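
To see what twenty terabits per square inch implies, a quick bit-cell calculation (pure unit conversion):

    nm2_per_in2 = (25.4e6) ** 2     # one inch is 25.4 million nanometers
    cell = nm2_per_in2 / 20e12      # area available per bit
    print(cell, cell ** 0.5)        # ~32 nm^2 -> islands ~5.7 nm across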

So it’s thanks to the combined efforts
of generations of engineers,

material scientists,

and quantum physicists

that this tool of incredible power
and precision

can spin in the palm of your hand.
