In 2013, the novelist Jonathan Safran Foer gave the commencement address at Middlebury College. He subsequently adapted parts of it into a short but impactful essay published in the New York Times. It was titled: “How Not to Be Alone.”
In this piece, Foer explores the evolution of communication technology, writing:
“Most of our communication technologies began as diminished substitutes for an impossible activity. We couldn’t always see one another face to face, so the telephone made it possible to keep in touch at a distance. One is not always home, so the answering machine made a kind of interaction possible without the person being near his phone.”
From the answering machine we got to email, which was even easier, and then texting, which, being less formal and more mobile, was even easier still.
“But then a funny thing happened,” Foer writes, “we began to prefer the diminished substitute.”
This made life convenient, but it introduced its own costs.
In computer programming, it’s common to split your program into multiple different threads that run simultaneously, as this often simplifies application design.
Imagine, for example, you’re creating a basic game. You might have one thread dedicated to updating the graphics on the screen, another thread dedicated to calculating the next move, and another monitoring the mouse to see if the user is trying to click.
You could, of course, write a single-threaded program that explicitly switches back and forth between working on these different tasks, but it’s often much easier for the programmer to write independent threads, each dedicated to its own part of the larger system.
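To make this concrete, here's a minimal sketch of the multi-threaded approach in Python. The function names and the toy "game" tasks are illustrative, not a real game engine; each thread simply reports that it did its job:

```python
import threading
import queue
import time

# A shared queue stands in for the game's state updates.
events = queue.Queue()

def render_graphics(stop):
    # Thread dedicated to updating the screen.
    while not stop.is_set():
        events.put("frame drawn")
        time.sleep(0.01)

def plan_next_move(stop):
    # Thread dedicated to calculating the next move.
    while not stop.is_set():
        events.put("move computed")
        time.sleep(0.01)

def watch_mouse(stop):
    # Thread dedicated to monitoring user input.
    while not stop.is_set():
        events.put("mouse polled")
        time.sleep(0.01)

stop = threading.Event()
threads = [threading.Thread(target=f, args=(stop,))
           for f in (render_graphics, plan_next_move, watch_mouse)]
for t in threads:
    t.start()
time.sleep(0.05)  # let the "game" run briefly
stop.set()
for t in threads:
    t.join()
```

Notice that each function only has to worry about its own loop; the interleaving is handled for the programmer, which is exactly the design simplification described above.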
In a world before multi-core processors, these threads weren’t actually running simultaneously, as the underlying processor could only execute one instruction at a time. What it did instead was switch rapidly between the threads, executing a few instructions from one, before moving on to the next, then the next, and then back to the first, and so on — providing a staccato-paced pseudo-simultaneity that was close enough to true parallel processing to serve the desired purpose.
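That single-core switching can be sketched with a toy round-robin scheduler. Here the "threads" are plain Python generators, and a single loop executes a couple of steps from each before rotating to the next (the slice size of two steps is an arbitrary choice for illustration):

```python
def worker(name, steps):
    # A "thread" that performs a fixed number of instruction-like steps.
    for i in range(steps):
        yield f"{name}:{i}"

def round_robin(workers, slice_size=2):
    # A single loop that rotates rapidly between workers,
    # executing a few steps of one before moving on to the next.
    trace = []
    while workers:
        current = workers.pop(0)
        for _ in range(slice_size):
            try:
                trace.append(next(current))
            except StopIteration:
                break  # this worker is finished; drop it
        else:
            workers.append(current)  # not finished: back of the line
    return trace

trace = round_robin([worker("A", 3), worker("B", 3)])
# Interleaved execution: ['A:0', 'A:1', 'B:0', 'B:1', 'A:2', 'B:2']
```

No two steps ever run at the same instant, yet the trace interleaves both workers' progress, giving the pseudo-simultaneity the paragraph describes.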
Something I’ve noticed is that many modern knowledge workers approach their work like a multi-threaded computer program. They’ve agreed to many, many different projects, investigations, queries and small tasks, and attempt, each day, to keep advancing them all in parallel by turning their attention rapidly from one to another — replying to an email here, dashing off a quick message there, and so on — like a CPU dividing its cycles between different pieces of code.
The problem with this analogy is that the human brain is not a computer processor. A silicon chip etched with microscopic circuits switches cleanly from instruction to instruction, agnostic to the greater context from which the current instruction arrived: op codes are executed; electrons flow; the circuit clears; the next op code is loaded.
The human brain is messier.
When you switch your brain to a new “thread,” a whole complicated mess of neural activity begins to activate the proper sub-networks and suppress others. This takes time. When you then rapidly switch to another “thread,” that work doesn’t clear instantaneously like electrons emptying from a circuit, but instead lingers, causing conflict with the new task.
To make matters worse, the idle “threads” don’t sit passively in memory, waiting quietly to be summoned by your neural processor; instead, they’re an active presence, generating middle-of-the-night anxiety and pulling at your attention. To paraphrase David Allen, the more commitments lurking in your mind, the more psychic toll they exert.
This is all to say that the closer I look at the evidence regarding how our brains function, the more I’m convinced that we’re designed to be single-threaded, working on things one at a time, waiting to reach a natural stopping point before moving on to what’s next.
So why do we tolerate all the negative side effects generated by trying to force our neurons to poorly simulate parallel programs? Because it’s easier, in the moment, than trying to develop professional workflows that better respect our brains’ fundamental nature.
This explanation is understandable. But to a computer scientist trained in optimality, it also seems far from acceptable.
A college senior I’ll call Brady recently sent me a description of his creative experiments with digital minimalism. What caught my attention about his story was that his changes centered on a radical idea: making his mobile phone much less mobile.
In more detail, Brady leaves behind his phone each day when he heads off to campus to take classes and study, allowing him to complete his academic work without distraction. As Brady reports, on returning home, usually around 6:00 pm, “I [will spend] 20 minutes or so responding to emails, texts, and the like.”
Then comes the important part of his plan: after this check, he leaves his phone plugged into the outlet — rendering it literally tethered to the wall.
His goal was to reclaim the evening leisure hours he used to lose to “mindlessly browsing the internet.” Here’s Brady’s description of his life before he detached himself from his phone:
“I would just rotate between Reddit, Facebook, and YouTube for hours. I was never even looking for anything in particular, I was just hooked on endless low-quality novel stimuli. I felt like there was so much wasted potential…I didn’t want to get old and realize that my life was spent scrolling on a backlit screen for 4 hours a day.”
Like many new digital minimalists, after Brady got more intentional about his technology, he was confronted with a sudden influx of free time. Fortunately, having read my book, he was prepared for this change, and responded by aggressively filling in these newly open hours with carefully selected activity.