How to manage your time more effectively (according to machines) - Brian Christian

In the summer of 1997,

NASA’s Pathfinder spacecraft landed
on the surface of Mars,

and began transmitting incredible,
iconic images back to Earth.

But several days in,
something went terribly wrong.

The transmissions stopped.

Pathfinder was, in effect,
procrastinating:

keeping itself fully occupied
but failing to do its most important work.

What was going on?

There was a bug, it turned out,
in its scheduler.

Every operating system has something
called the scheduler

that tells the CPU how long
to work on each task before switching,

and what to switch to.

Done right, computers move so fluidly
between their various responsibilities,

they give the illusion
of doing everything simultaneously.

But we all know what happens
when things go wrong.

This should give us, if nothing else,
some measure of consolation.

Even computers get overwhelmed sometimes.

Maybe learning about the computer science
of scheduling

can give us some ideas about our own
human struggles with time.

One of the first insights is that all
the time you spend prioritizing your work

is time you aren’t spending doing it.

For instance, let’s say when you check
your inbox, you scan all the messages,

choosing which is the most important.

Once you’ve dealt with that one,
you repeat.

Seems sensible,
but there’s a problem here.

This is what’s known
as a quadratic-time algorithm.

With an inbox that’s twice as
full, these passes will take twice as long

and you’ll need to do
twice as many of them!

This means four times the work.
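
To make that concrete, here is a rough Python sketch of the two strategies (mine, not the talk's; the message "importance" scores are invented). The prioritized version pays for a full scan of the inbox before every single reply:

```python
import random

def handle(message):
    """Stand-in for actually reading and replying to a message."""
    pass

def prioritized(inbox):
    # Repeatedly scan for the most important message left:
    # n + (n - 1) + ... + 1 scans in total, i.e. O(n^2) work.
    inbox = list(inbox)
    while inbox:
        best = max(inbox, key=lambda m: m["importance"])  # one full pass
        inbox.remove(best)                                # another pass
        handle(best)

def in_order(inbox):
    # Reply in arrival order: a single O(n) pass, no ranking at all.
    for message in inbox:
        handle(message)

inbox = [{"importance": random.random()} for _ in range(1000)]
prioritized(inbox)  # on the order of 1,000,000 elements examined
in_order(inbox)     # on the order of 1,000
```

Double the inbox and `in_order` does double the work, while `prioritized` does four times as much.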

The programmers
of the operating system Linux

encountered a similar problem in 2003.

Linux would rank every single
one of its tasks in order of importance,

and it sometimes spent more time
ranking tasks than doing them.

The programmers’ counterintuitive solution
was to replace this full ranking

with a limited number
of priority “buckets.”

The system was less precise
about what to do next

but more than made up for it
by spending more time making progress.
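
A toy version of the bucket idea (the real Linux O(1) scheduler is far more involved; the bucket count and task names here are made up): instead of comparing every task against every other, drop each one into a small, fixed number of queues and always serve the highest non-empty one.

```python
from collections import deque

NUM_BUCKETS = 4  # hypothetical; deliberately coarse

buckets = [deque() for _ in range(NUM_BUCKETS)]

def enqueue(task, priority):
    # O(1): no comparisons against other tasks, just pick a bucket.
    buckets[priority].append(task)

def next_task():
    # O(NUM_BUCKETS), no matter how many tasks are waiting.
    for bucket in buckets:
        if bucket:
            return bucket.popleft()
    return None

enqueue("write report", 0)   # bucket 0 = most urgent
enqueue("tidy desk", 3)
enqueue("reply to boss", 0)
print(next_task())  # -> "write report"
```

Adding a task costs nothing, and picking the next one costs at most four checks, however long the to-do list grows.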

So with your emails, insisting on always
doing the very most important thing first

could lead to a meltdown.

Waking up to an inbox three times fuller
than normal

could take nine times longer to clear.

You’d be better off replying
in chronological order, or even at random!

Surprisingly, sometimes giving up
on doing things in the perfect order

may be the key to getting them done.

Another insight that emerges
from computer scheduling

has to do with one of the most prevalent
features of modern life: interruptions.

When a computer goes
from one task to another,

it has to do what’s called
a context switch,

bookmarking its place in one task,

moving old data out of its memory
and new data in.

Each of these actions comes at a cost.

The insight here is that there’s
a fundamental tradeoff

between productivity and responsiveness.

Getting serious work done
means minimizing context switches.

But being responsive means reacting
anytime something comes up.

These two principles
are fundamentally in tension.

Recognizing this tension allows us

to decide where
we want to strike that balance.
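
One way to feel out that balance is a back-of-the-envelope model (the numbers are pure assumptions, not from the talk): suppose refocusing after each check costs five minutes, and you check for new messages every `interval` minutes.

```python
SWITCH_COST = 5  # assumed minutes lost refocusing after each check

for interval in (5, 15, 60, 240):  # minutes between checks
    overhead = SWITCH_COST / (interval + SWITCH_COST)  # share of time lost
    print(f"check every {interval:>3} min: "
          f"{overhead:4.0%} of time lost to switching, "
          f"replies wait up to {interval} min")
```

Checking every five minutes burns half your time on switching; checking every four hours costs almost nothing but leaves messages waiting. Neither end is "right"; the model just makes the price of each choice visible.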

The obvious solution
is to minimize interruptions.

The less obvious one is to group them.

If no notification
or email requires a response

more urgently than once an hour, say,

then that’s exactly how often
you should check them. No more.

In computer science, this idea goes by
the name of interrupt coalescing.

Rather than dealing with
things as they come up –

Oh, the mouse was moved?

A key was pressed?

More of that file downloaded? –

the system groups these
interruptions together

based on how long they can afford to wait.
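
Here is a minimal, single-threaded sketch of that grouping (hypothetical names, nothing like a real kernel): each event is queued with a deadline, and instead of waking for every event, the system sleeps until the earliest deadline and then drains the whole batch at once.

```python
import time

pending = []  # (deadline, event) pairs waiting to be serviced

def raise_event(event, max_wait_seconds):
    # Record the event along with how long it can afford to wait.
    pending.append((time.monotonic() + max_wait_seconds, event))

def wait_and_handle_batch():
    earliest = min(deadline for deadline, _ in pending)
    time.sleep(max(0.0, earliest - time.monotonic()))  # defer, stay idle
    batch, pending[:] = list(pending), []              # wake once, drain all
    for _, event in batch:
        print("handling:", event)

raise_event("mouse moved", 0.05)
raise_event("key pressed", 0.05)
raise_event("file chunk downloaded", 0.5)
wait_and_handle_batch()  # a single wake-up services all three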

In 2013, interrupt coalescing

triggered a massive improvement
in laptop battery life.

This is because deferring interruptions
lets a system check everything at once,

then quickly re-enter a low-power state.

As with computers, so it is with us.

Perhaps adopting a similar approach

might allow us users
to reclaim our own attention,

and give us back one of the things
that feels so rare in modern life: rest.
