Study Hacks Blog

The Atomic Minimalist: My Conversation with James Clear

October 16th, 2019 · 16 comments

Last October, my friend James Clear published the breakout hit book, Atomic Habits. As we both discovered in the months that followed, we have many readers in common. James’s habit-building framework, it turns out, is quite useful for those looking to increase the quality of their deep work or succeed in a transition toward digital minimalism.

In recognition of this overlap, and in celebration of Atomic Habits' one-year anniversary, James and I recently recorded a podcast in which we geek out on the details of our work and how they intersect.

If you’re a fan of James, or are interested in learning more about how his ideas and mine work together, I recommend you give this conversation a listen (you can use the embedded player above, or access it directly here).

Not All Emails Are Created Equal

October 7th, 2019 · 15 comments

Interviews are a common part of the book publicity process, especially as you become better known as a writer. Between television, radio, print and podcasts, I ended up doing well north of 100 interviews about Digital Minimalism since its release last February.

Given this volume of appointments (which is actually modest compared to many authors), I arranged things with my publicity team at Penguin so that they could book interviews on my behalf. Using a service called Acuity, I specified what times I was available, and they then put interviews directly on my calendar during these periods, all without requiring me to participate in the scheduling conversations.

Viewed objectively, this setup shouldn’t have made a big difference in my life. Scheduling an interview takes around 3 or 4 back-and-forth messages on average. This adds up to somewhere around 300 or 400 extra email messages diverted from my inbox.

When you consider that these scheduling threads were spread over six months, and that the average professional user sends and receives over 125 emails per day, the communication I saved with this setup should have been lost in the noise of my frenetic inbox.

But it did matter. Not having to wrangle those scheduling emails provided a huge psychological benefit.

Read more »

How Not to Be Alone: Jonathan Safran Foer on the Dangers of Diminished Communication

September 26th, 2019 · 16 comments
Photo by Pedro Ribeiro Simões

In 2013, the novelist Jonathan Safran Foer gave the commencement address at Middlebury College. He subsequently adapted parts of it into a short but impactful essay published in the New York Times. It was titled: “How Not to Be Alone.”

In this piece, Foer explores the evolution of communication technology, writing:

“Most of our communication technologies began as diminished substitutes for an impossible activity. We couldn’t always see one another face to face, so the telephone made it possible to keep in touch at a distance. One is not always home, so the answering machine made a kind of interaction possible without the person being near his phone.” 

From the answering machine we got to email, which was even easier, and then texting, which, being less formal and more mobile, was even easier still.

“But then a funny thing happened,” Foer writes, “we began to prefer the diminished substitute.”

This made life convenient, but introduced its own costs:

Read more »

Our Brains Are Not Multi-Threaded

September 10th, 2019 · 38 comments
Photo by Anders Sandberg.

In computer programming, it’s common to split your program into multiple threads that run simultaneously, as this often simplifies application design.

Imagine, for example, you’re creating a basic game. You might have one thread dedicated to updating the graphics on the screen, another thread dedicated to calculating the next move, and another monitoring the mouse to see if the user is trying to click.

You could, of course, write a single-threaded program that explicitly switches back and forth between working on these different tasks, but it’s often much easier for the programmer to write independent threads, each dedicated to its own part of the larger system.
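To make the analogy concrete, here’s a minimal sketch of that thread-per-concern design, written in Python as my own illustration (the worker functions and the shared state dictionary are purely hypothetical, not taken from any real game):

```python
import threading
import time

# Hypothetical shared state for a toy "game."
state = {"frame": 0, "next_move": None, "clicks": 0, "running": True}

def update_graphics():
    while state["running"]:
        state["frame"] += 1            # pretend to redraw the screen
        time.sleep(0.01)

def calculate_next_move():
    while state["running"]:
        state["next_move"] = "advance"  # pretend to think about the game
        time.sleep(0.05)

def monitor_mouse():
    while state["running"]:
        state["clicks"] += 0           # pretend to poll for a click
        time.sleep(0.005)

# Each concern lives in its own thread; the programmer never has to
# interleave the three jobs by hand.
threads = [threading.Thread(target=f)
           for f in (update_graphics, calculate_next_move, monitor_mouse)]
for t in threads:
    t.start()

time.sleep(0.2)            # let the "game" run briefly
state["running"] = False
for t in threads:
    t.join()
```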

In a world before multi-core processors, these threads weren’t actually running simultaneously, as the underlying processor could only execute one instruction at a time. What it did instead was switch rapidly between the threads, executing a few instructions from one, before moving on to the next, then the next, and then back to the first, and so on — providing a staccato-paced pseudo-simultaneity that was close enough to true parallel processing to serve the desired purpose.
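Here’s a rough sketch of that rapid switching, again my own simplified illustration rather than how a real operating system scheduler works: a single loop plays the role of the lone core, executing a couple of “instructions” from each task before moving on to the next.

```python
def task(name, steps):
    """Each yield stands in for one instruction the lone 'core' can execute."""
    for i in range(steps):
        yield f"{name}: step {i}"

# Three hypothetical tasks sharing one core.
tasks = [task("graphics", 6), task("game logic", 6), task("mouse", 6)]
QUANTUM = 2  # instructions executed before switching to the next task

while tasks:
    current = tasks.pop(0)
    for _ in range(QUANTUM):
        try:
            print(next(current))       # execute a few instructions...
        except StopIteration:
            break                      # this task is finished
    else:
        tasks.append(current)          # ...then move on to the next task
```

Run it and the printed output interleaves the three tasks, even though only one thing is ever executing at a time: pseudo-simultaneity from a single core.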

Something I’ve noticed is that many modern knowledge workers approach their work like a multi-threaded computer program. They’ve agreed to many, many different projects, investigations, queries and small tasks, and attempt, each day, to keep advancing them all in parallel by turning their attention rapidly from one to another — replying to an email here, dashing off a quick message there, and so on — like a CPU dividing its cycles between different pieces of code.

The problem with this analogy is that the human brain is not a computer processor. A silicon chip etched with microscopic circuits switches cleanly from instruction to instruction, agnostic to the greater context from which the current instruction arrived: op codes are executed; electrons flow; the circuit clears; the next op code is loaded.

The human brain is messier.

When you switch your brain to a new “thread,” a whole complicated mess of neural activity begins to activate the proper sub-networks and suppress others. This takes time. When you then rapidly switch to another “thread,” that work doesn’t clear instantaneously like electrons emptying from a circuit, but instead lingers, causing conflict with the new task.

To make matters worse, the idle “threads” don’t sit passively in memory, waiting quietly to be summoned by your neural processor; they’re instead an active presence, generating middle-of-the-night anxiety and pulling at your attention. To paraphrase David Allen, the more commitments lurking in your mind, the more psychic toll they exert.

This is all to say that the closer I look at the evidence regarding how our brains function, the more I’m convinced that we’re designed to be single-threaded, working on things one at a time, waiting to reach a natural stopping point before moving on to what’s next.

So why do we tolerate all the negative side-effects generated by trying to force our neurons to poorly simulate parallel programs? Because it’s easier, in the moment, than trying to develop professional workflows that better respect our brains’ fundamental nature.

This explanation is understandable. But to a computer scientist trained in optimality, it also seems far from acceptable.

On the Surprising Benefits of an Un-Mobile Phone

September 3rd, 2019 · 23 comments
Photo by Maarten van den Heuvel.

A college senior I’ll call Brady recently sent me a description of his creative experiments with digital minimalism. What caught my attention about his story was that his changes centered on a radical idea: making his mobile phone much less mobile.

In more detail, Brady leaves behind his phone each day when he heads off to campus to take classes and study, allowing him to complete his academic work without distraction. As Brady reports, on returning home, usually around 6:00 pm, “I [will spend] 20 minutes or so responding to emails, texts, and the like.”

Then comes the important part of his plan: after this check, he leaves his phone plugged into the outlet — rendering it literally tethered to the wall.

His goal was to reclaim the evening leisure hours he used to lose to “mindlessly browsing the internet.” Here’s Brady’s description of his life before he detached himself from his phone:

“I would just rotate between Reddit, Facebook, and YouTube for hours. I was never even looking for anything in particular, I was just hooked on endless low-quality novel stimuli. I felt like there was so much wasted potential…I didn’t want to get old and realize that my life was spent scrolling on a backlit screen for 4 hours a day.”

Like many new digital minimalists, after Brady got more intentional about his technology, he was confronted with a sudden influx of free time. Fortunately, having read my book, he was prepared for this change, and responded by aggressively filling in these newly open hours with carefully selected activities.

Here’s his summary:

Read more »

Is Our Fear of Smartphones Overblown?

August 23rd, 2019 · 39 comments
Clip courtesy of the always amusing Pessimists Archive.

During interviews for Digital Minimalism, I’m frequently asked whether I think our culture’s concerns about new technologies like smartphones and social media represent a fleeting moral panic.

The argument goes something like this: There are many technologies that were once feared but that we now consider to be relatively tame, from rock music, to the radio, to the telegraph famously lamented by Thoreau. Isn’t our concern about today’s tech just more of the same?

This is a genuinely interesting question that’s worth some careful unpacking. My main problem with this approach to the questions surrounding smartphones and social media is that it implicitly builds on the following logical formulation:

Read more »

“I Was Lacking in Enough Energy, Time and Attention”: Another Digital Minimalism Case Study

August 17th, 2019 · 32 comments

Something I’ve learned reporting on digital minimalists is that the definition of “minimal” differs greatly from person to person. As part of my effort to share more case studies about this philosophy, I thought it might be fun to visit someone who falls on the extreme end of this spectrum.

Robert (as I’ll call him) recently walked me through some of the major changes he instigated to reclaim his life from his devices. He summarized his reasons for this transformation as follows:

“I was lacking in enough time, energy, and attention to get the things done that I wanted or needed to do…I didn’t like getting insufficient sleep because I was browsing nonsense on my phone until the wee hours, or that I was stressed out with my professional work due to constant procrastination/distraction…or that I wasn’t exercising consistently because I’d happen to browse the same nonsense right when I was about to start.”

Driven by these somber realities, he came to a simple revelation: “life would be better if I cut back.” A decision, as it turns out, he took seriously.

Read more »

On the Art of Learning Things (Ultra) Quickly

August 8th, 2019 · 13 comments

In the first chapter of Deep Work, I argue that the ability to concentrate is important in part because it’s necessary to learn hard things quickly, and in our economy, if you can pick up new skills or ideas fast, you have a massive competitive advantage.

Some readers subsequently asked me how to best deploy their concentration to achieve these feats of accelerated autodidacticism. My answer has been to direct people toward my longtime friend, and occasional collaborator, Scott Young, who has spent years mastering the art of mastering things (cf. his MIT challenge).

I’m happy to report that as of earlier this week, instead of simply waving vaguely in Scott’s general direction, I can now point to a brand new book that he’s published on the topic: Ultralearning: Master Hard Skills, Outsmart the Competition, and Accelerate Your Career.

In this book, Scott walks through his step-by-step process of breaking down a major learning project and pushing it through to completion much faster than you might have imagined possible. It’s important to emphasize that he provides no shortcuts. If anything, he highlights the surprising hardness — in terms of concentration and drive — required to succeed with these endeavors, a point I’ve been underscoring since my early books on study habits.

But if you’re willing to invest the energy, and are looking for the right techniques to make sure this energy is not wasted to the friction of ineffective activity, this is the book for you.

#####

On an unrelated note, my most recent New Yorker article was published earlier this week. It takes a look at the history of email and applies ideas from my academic field, the theory of distributed systems, to help explain what went wrong with this innovation. If you like this blog, you’ll like this article.