How exactly does binary code work? - José Américo N L F de Freitas

Imagine trying to use words
to describe every scene in a film,

every note in your favorite song,

or every street in your town.

Now imagine trying to do it using
only the numbers 1 and 0.

Every time you use the Internet
to watch a movie,

listen to music,

or check directions,

that’s exactly what your device is doing,

using the language of binary code.

Computers use binary because
it’s a reliable way of storing data.

For example, a computer’s main
memory is made of transistors

that switch between either high
or low voltage levels,

such as 5 volts and 0 volts.

Voltages sometimes oscillate,
but since there are only two options,

a value of 1 volt
would still be read as “low.”

That reading is done by
the computer’s processor,

which uses the transistors’ states
to control other computer devices

according to software instructions.
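That "read high or low" step can be sketched in a few lines of Python. The 2.5-volt midpoint threshold here is an illustrative choice, not a hardware standard:

```python
def read_bit(voltage):
    """Interpret a transistor's voltage level as a binary digit.

    Anything below the midpoint counts as "low" (0); anything at or
    above it counts as "high" (1), so a noisy reading still resolves
    to one of the two values.
    """
    threshold = 2.5  # illustrative midpoint between 0 V and 5 V
    return 0 if voltage < threshold else 1

print(read_bit(1.0))  # a drifting 1-volt signal is still read as "low": 0
print(read_bit(5.0))  # 1
```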

The genius of this system
is that a given binary sequence

doesn’t have a pre-determined meaning
on its own.

Instead, each type of data
is encoded in binary

according to a separate
set of rules.

Let’s take numbers.

In normal decimal notation,

each digit is multiplied by 10 raised
to the value of its position,

starting from zero on the right.

So 84 in decimal form is 4x10⁰ + 8x10¹.

Binary number notation works similarly,

but with each position
based on 2 raised to some power.

So 84 would be written as 1010100:
0x2⁰ + 0x2¹ + 1x2² + 0x2³ + 1x2⁴ + 0x2⁵ + 1x2⁶.
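The positional rule above can be run mechanically: repeatedly dividing by 2 and keeping the remainders yields the binary digits from right to left. A short Python sketch:

```python
# Decompose 84 into binary digits by repeated division by 2.
# Each remainder is the digit for the current power of 2.
n = 84
digits = []
while n > 0:
    digits.append(str(n % 2))
    n //= 2

binary = "".join(reversed(digits))
print(binary)  # 1010100
```

Python's built-in `format(84, "b")` produces the same string.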

Meanwhile, letters are interpreted
based on standard rules like UTF-8,

which assigns each character to a specific
group of 8-digit binary strings.

In this case, 01010100 corresponds
to the letter T.

So, how can you know whether
a given instance of this sequence

is supposed to mean T or 84?

Well, you can’t from seeing
the string alone

– just as you can’t tell what the sound
“da” means from hearing it in isolation.

You need context to tell whether you’re
hearing Russian, Spanish, or English.

And you need similar context

to tell whether you’re looking
at binary numbers or binary text.
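The two readings of the same eight bits are easy to demonstrate in Python. For characters in the ASCII range, a character's UTF-8 byte equals its Unicode code point, so `chr` serves to illustrate the text interpretation:

```python
bits = "01010100"

as_number = int(bits, 2)   # read the string as a binary number
as_text = chr(as_number)   # read the same value as a character code

print(as_number)  # 84
print(as_text)    # T
```

Nothing in the bit string itself picks one reading over the other; the program supplies the context.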

Binary code is also used for
far more complex types of data.

Each frame of this video, for instance,

is made of hundreds
of thousands of pixels.

In color images,

every pixel is represented
by three binary sequences

that correspond to the primary colors.

Each sequence encodes a number

that determines
the intensity of that particular color.
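A minimal sketch of that pixel encoding in Python, using illustrative 8-bit intensities for an orange-ish pixel (the specific values are arbitrary):

```python
# One pixel as three 8-bit intensities: red, green, blue.
red, green, blue = 255, 165, 0

# Each channel becomes an 8-digit binary string.
pixel_bits = [format(channel, "08b") for channel in (red, green, blue)]
print(pixel_bits)  # ['11111111', '10100101', '00000000']
```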

Then, a video driver program transmits
this information

to the millions of liquid crystals
in your screen

to make all the different hues
you see now.

The sound in this video
is also stored in binary,

with the help of a technique
called pulse code modulation.

Continuous sound waves are digitized

by taking “snapshots” of their
amplitudes every few milliseconds.

These are recorded as numbers
in the form of binary strings,

with as many as 44,000
for every second of sound.
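Pulse code modulation can be sketched as sampling a sine wave at a fixed rate and quantizing each amplitude snapshot to a 16-bit integer. The 440 Hz tone and the 16-bit range here are illustrative:

```python
import math

sample_rate = 44_100   # snapshots per second of sound
frequency = 440.0      # an illustrative pure tone (concert A)

# Take the first 100 amplitude "snapshots", each quantized to a
# signed 16-bit integer in the range -32768..32767.
samples = [
    round(32767 * math.sin(2 * math.pi * frequency * t / sample_rate))
    for t in range(100)
]

print(samples[:5])
```

Played back, these numbers tell the speaker coils how far to move at each instant.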

When they’re read by
your computer’s audio software,

the numbers determine how quickly
the coils in your speakers should vibrate

to create sounds of different frequencies.

All of this requires billions
and billions of bits.

But that amount can be reduced
through clever compression formats.

For example, if a picture has 30 adjacent
pixels of green space,

they can be recorded as “30 green” instead
of coding each pixel separately -

a process known as run-length encoding.
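The run-length idea described above fits in a few lines of Python, collapsing each run of identical pixels into a (count, value) pair:

```python
def run_length_encode(pixels):
    """Collapse runs of identical values into (count, value) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][1] == p:
            # Extend the current run.
            encoded[-1] = (encoded[-1][0] + 1, p)
        else:
            # Start a new run.
            encoded.append((1, p))
    return encoded

row = ["green"] * 30 + ["blue"] * 2
print(run_length_encode(row))  # [(30, 'green'), (2, 'blue')]
```

Thirty-two pixel entries shrink to two pairs, which is where the savings come from on images with large uniform regions.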

These compressed formats are themselves
written in binary code.

So is binary the end-all-be-all
of computing?

Not necessarily.

There’s been research
into ternary computers,

with circuits in three possible states,

and even quantum computers,

whose circuits can be
in multiple states simultaneously.

But so far, none of these has provided

as much physical stability
for data storage and transmission.

So for now, everything you see,

hear,

and read through your screen

comes to you as the result
of a simple “true” or “false” choice,

made billions of times over.
