Earlier this week, Zeynep Tufekci appeared on Ezra Klein’s podcast. If you don’t know Tufekci, you should: she’s one of my favorite academic thinkers on the intersection of technology and society.
During the interview, Tufekci discussed her investigation of YouTube’s autoplay recommendation algorithm. She noticed that YouTube tends to push users toward increasingly extreme content.
If you start with a mainstream conservative video, for example, and let YouTube’s autoplay feature keep loading your next video, it doesn’t take long until you’re watching white supremacists.
Similarly, if you start with a mainstream liberal video, it doesn’t take long until you’re mired in a swamp of wild government and health conspiracies.
Tufekci is understandably concerned about this state of affairs. But what’s the solution? She offers a suggestion that has become increasingly popular in recent years:
“We owe it to ourselves to [ask], how do we design our systems so they help us be our better selves, [rather] than constantly tempting us with things that, if we sat down and were asked about, would probably say ‘that’s not what we want.'”
This is a standard response from the growing digital ethics movement, which argues that if we train engineers to consider the ethical impact of their technology design choices, we can usher in an age in which our relationship with these tools is more humane and positive.
A Pragmatic Alternative
I agree that digital ethics is an important area of inquiry, perhaps one of the more exciting topics in modern philosophical thought.
But I don’t share the movement’s optimism that more awareness will change the operation of major attention economy conglomerates such as YouTube. The algorithm that drives the site’s autoplay toward extremes does so not because it’s evil, but because it was tasked with optimizing user engagement, which in turn optimizes revenue: the primary objective of a publicly traded corporation.
Last week, Facebook reported weaker than expected second quarter earnings and warned investors to expect diminished growth. As a result, its stock promptly fell 19%, wiping out over $120 billion in market capitalization. No publicly traded US company has ever lost more dollar value in a single day.
This seems like bad news for the social media giant, perhaps the first indication that its struggles over the past couple of years are catching up to its bottom line.
A couple weeks ago, I wrote about a new study that took a careful look at interactions in an open office. It found, contrary to popular belief, that moving to an open format made people less likely to talk face-to-face with their coworkers, and more likely to instead send distracting digital messages.
Not surprisingly, these changes led to lower productivity.
This post sparked an interesting discussion in the comment section and my personal inbox on the question of why so many organizations are so eager to embrace open concept workspaces.
A popular explanation was the cynical claim that open offices are a covert attempt to lower costs.
This might be right in some instances, but thrift can’t explain why Silicon Valley giants like Facebook and Apple, which literally have more cash than they know what to do with, embraced open formats in their new billion-dollar headquarters.
On Spatial Boundaries and Face-to-Face Interaction
Why do companies deploy open office layouts? A major justification is the idea that removing spatial boundaries between colleagues will generate increased collaboration and smarter collective intelligence.
As I learned in a fascinating new study, published earlier this week in the Philosophical Transactions of the Royal Society, there was good reason to believe that this might be true. As the study’s authors, Ethan Bernstein and Stephen Turban, note:
“[T]he notion that propinquity, or proximity, predicts social interaction — driving the formation of social ties and therefore information exchange and collaboration — is one of the most robust findings in sociology.”
But when researchers turned their attention to the specific impact of open offices on interaction, the results were mixed. Perhaps troubled by this inconsistency, Bernstein and Turban decided to get to the bottom of this issue.
Prior studies of open offices had relied on imprecise measures such as self-reported activity logs to quantify interactions before and after a shift to an open office plan. Bernstein and Turban tried something more accurate: they had subjects wear devices around their necks that directly measured every face-to-face encounter. They also used email and IM server logs to determine exactly how much the volume of electronic interactions changed.
New readers of this blog might not know that back in 2012 I published a book about career satisfaction. It was titled So Good They Can’t Ignore You.
The book draws from interviews and relevant scientific research to answer a simple but important question: How do people end up passionate about what they do for a living?
Early in the book I make a provocative claim: the popular advice that you should “follow your passion” is counterproductive in the sense that it will likely reduce the probability that you end up loving your work.
I detail two reasons why “follow your passion” is bad advice:
The first reason is that most people don’t have a clear pre-defined passion to follow. This is especially true if you consider young people who are just setting out on their own for the first time. The advice to “follow your passion” is frustratingly meaningless if, like many people, you don’t have a passion to follow.
The second reason is that we don’t have much evidence that matching your job to a pre-existing interest makes you more likely to find that work satisfying. The properties we know lead people to enjoy their work — such as autonomy, mastery, and relationships — have little to do with whether or not the work matches an established inclination.
What works better? Put in the hard work to master something rare and valuable, then deploy this leverage to steer your working life in directions that resonate.
Earlier this week, the Washington Post published an article on the digital wellness movement, which attempts to use technology to help cure some of the issues caused by technology.
This movement, for example, is responsible for an app that “plants a tree” each time you put down your phone, and then shows the tree withering and dying when you pick the phone back up. It also produced a popular plug-in that displays, each time you go online, the number of days left in your expected lifetime.
Even Apple is getting involved in digital wellness. Its new suite of “wellbeing” features in iOS includes a wake-up screen that helps you “gently [ease] into your day” when you pick up your phone in the morning, and an improved Siri that makes suggestions about optimal notification settings.
I recognize that digital tools have a useful role to play in productivity. I’ve long advised, for example, that people use internet blocking software like Freedom to help jumpstart deep work training.
But something about this growing digital wellness movement makes me uneasy, and I think I’ve finally put my finger on the source of my concern: it’s infantilizing.
A reader recently pointed me toward a 2014 interview with Jerry Seinfeld on Alec Baldwin’s Here’s The Thing. Around 34 minutes into the conversation, Seinfeld provides a fascinating insight into the success of his eponymous television show:
“Let me tell you why my tv series in the 90s was so good, besides just an inordinate amount of just pure good fortune. In most tv series, 50 percent of the time is spent working on the show, 50 percent of the time is spent dealing with personality, political, and hierarchical issues of making something. We spent 99 percent of our time writing. Me and Larry [David]. The two of us. The door was closed. It’s closed. Somebody calls. We’re not taking the call. We were gonna make this thing funny. That’s why the show was good.”
Lurking in this quote is a lesson that applies well beyond the world of entertainment.
I’m a computer science professor who writes about the intersection of technology and society. I’m particularly interested in the impact of new technologies on our ability to perform productive work and lead satisfying lives. If you’re new to my writing, a good place to start is the about page. You can access over a decade’s worth of posts in the blog archive.