Driven to Distraction

How our multi-channel, multi-tasking society is making it harder for us to think

In the late 1920s, a Russian-born psychologist named Bluma Zeigarnik found herself sitting in a crowded Viennese coffee house, wondering how the waiters could accurately recall the minute details of numerous orders without committing them to paper. That casual observation proved to be the basis of a paper published in 1927, in which she laid out what came to be known as the Zeigarnik Effect. Her now-famous thesis was that the human mind is better at remembering incomplete tasks than completed ones, an insight that proved useful to generations of marketers and managers, who devised ways of leveraging our mental response to interruptions and manipulating individuals into buying products or completing tasks more efficiently.

I first learned about the Zeigarnik Effect on a fascinating website titled Interruptions in Human-Computer Interaction (interruptions.net), which gathers the work of a diverse collection of researchers—social scientists, software engineers, psychologists, and neuroscientists—who study what may well be the defining condition of the information age: chronic distraction.

During the past three or four years, quantum leaps in wireless digital technology have brought us to the point where high-powered portable devices permit us to be in constant contact with one another, to access vast storehouses of digitized entertainment, and to plug into the Internet virtually anytime, anywhere. The unveiling earlier this year of Apple’s new iPhone anticipates an era dominated by a gadget that effortlessly functions as a cellphone, a personal digital assistant, and a camera; holds hundreds of hours of digital music; streams high-resolution digital video; receives digital satellite radio and maybe even television; and offers full Internet access regardless of the time of day and where the user happens to be.

Even before this all-in-one technology makes its grand debut, we are revelling in the miracle of nearly ubiquitous connectivity. But all this access has not come without a psychological cost that is ultimately rooted in the way our brains function. If we now find ourselves adrift in an ocean of information, our mental state increasingly resembles the slivered surface of a melting glacier. As the dozens of studies at interruptions.net attest, we have created a technological miasma that inundates us with an inexhaustible supply of electronic distractions. Rather than providing necessary interruptions to assist us in focusing on the incomplete task at hand, as Zeigarnik proposed, the deluge of multi-channel signals has produced an array of concentration-related problems, including lost productivity, cognitive overload, and a wearying diminishment in our ability to retain the very information we consume with such voraciousness. It may be that our hyper-connected world has quite simply made it difficult for us to think.

The irony is that one of the fundamental promises of information technology—the radical improvement in the efficiency of our interactions with one another—is being undermined by the technology’s enormous capacity to overwhelm us with information and thus short-circuit our need to concentrate. Cognitive psychologists are beginning to understand why the human brain isn’t well suited to the sort of communications environment we’ve built for ourselves. Yet in post-industrial urban societies, few of us are willing or able to disengage, because going offline in a wireless world is no longer an option. This raises a pair of tough questions: Do we control this technology or has it come to control us? And have we arrived at a point, fifteen years after the advent of the web, where we need to rethink our relationship with a technology that may well be altering the way our minds function?

This isn’t going to be a Luddite rant. Like many people, I spend much of my working day in front of a computer screen. I have instant access to information I could never have obtained even a decade ago. At the same time, I find myself asking why phrases like “train of thought” and “undivided attention” are part of our linguistic geography, and what’s become of the underlying mental states they refer to. It often seems as though the sheer glut of data itself has supplanted the kind of focused, reflective attention that might make this information useful in the first place.

The dysfunction of our information environment is an outgrowth of its extraordinary fecundity. Digital communications technology has demonstrated a striking capacity to subdivide our attention into smaller and smaller increments; increasingly, it seems as if the day’s work has become a matter of interrupting the interruptions. Email for many people has become an oppressive feature of work life. MySpace, YouTube, chat rooms, and the blogosphere, for all their virtues as new media for political debate and cultural activity, have an amazing ability to suck up time. During this decade, executives and the political class became all but addicted to BlackBerrys, and these devices are now being taken up by consumers who obsessively check them while waiting for coffee or minding the kids at the playground. Information workers spend their days pursuing multiple projects that involve serpentine email threads, thousands of files, and endless web searches. Paradoxically, the abundance of information begets a craving for even more, a compulsion that yields diminishing returns and is often remarkably undiscerning. Scientists at the Xerox Palo Alto Research Center have labelled this kind of online behaviour “information foraging” and coined the pithy term “informavores” to describe this new species.

The technology, in theory, has the ability to emancipate individuals from tedious minutiae: we no longer need to memorize vast amounts of quotidian information (phone numbers, addresses, trivia of any sort) because a digital version is always retrievable. So, in principle, we should have more mental space to focus on the things that are important to us. And yet, the seductive nature of the technology allows us to sample almost anything and, when addicted to this foraging of bits and bytes, focus on nothing. The resulting cognitive overload has become the occupational hazard for the technorati.

Some years ago, researchers surveyed managers in the United States, the United Kingdom, Australia, Singapore, and Hong Kong. Two-thirds reported stress, tension, and loss of job satisfaction because of cognitive overload. “Information is relentlessly pushed at us, and no matter how much we get we feel we need more, and of better quality and focus,” wrote David Kirsh, a Toronto-born expert in cognitive science who runs the Interactive Cognition Lab at the University of California, San Diego. When I spoke to him recently, he said his basic concerns about information fragmentation hadn’t changed, and the problem may accelerate with the new portable technologies that have turbocharged the data environment.

The mechanics of cognitive overload are similar to the problem of insufficient RAM. “In most models of working memory and attention, everything has to go through a central executive processor before being passed into long-term memory,” explains Frank Russo, an assistant professor of psychology at Ryerson University. Our built-in CPUs are found in the brain’s frontal lobe. These centres need time to “rehearse” or “scaffold” incoming information by building the neural circuits on which the data will eventually be stored. “If it is not rehearsed enough or elaborated upon,” Russo told me, “the information never makes it to the long-term store.” When someone is bombarded by data, the executive processor doesn’t have the time or the resources to encode everything and starts to show signs of fatigue.
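
To make the bottleneck concrete, here is a minimal toy simulation of the process Russo describes, written in Python. The parameters (a rehearsal budget, the number of rehearsals an item needs) are invented purely for illustration and are not drawn from his research; the point is only that when arrivals outpace the executive’s rehearsal capacity, most of what comes in never reaches the long-term store.

# Toy model of Russo's point: items must be rehearsed before they are
# encoded, and the "central executive" can only rehearse so much per minute.
# All parameters are arbitrary, chosen only to show the shape of the effect.

def encode(arrivals_per_minute, minutes=10,
           rehearsals_needed=3, rehearsal_budget=5):
    working = []      # rehearsal counts for items still being processed
    encoded = 0
    for _ in range(minutes):
        working += [0] * arrivals_per_minute      # new items arrive
        rehearsed = working[:rehearsal_budget]    # the executive attends to a few
        working = []                              # everything else decays, un-encoded
        for count in rehearsed:
            count += 1
            if count >= rehearsals_needed:
                encoded += 1                      # reached the long-term store
            else:
                working.append(count)
    return encoded, minutes * arrivals_per_minute

print(encode(arrivals_per_minute=1))    # light load: (8, 10) -- most items stick
print(encode(arrivals_per_minute=10))   # bombardment: (15, 100) -- most are lost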

While “memory” is a word that has been appropriated by the information technology world, human and digital memory function very differently. Absent corrupted documents or bugs, the act of saving a file means saving it in its entirety, with an understanding that when it is retrieved the file will be in the state the user left it. Human memory is much more error-prone and subjective. Our memories decay and reshape themselves over time. To appreciate the contrast, imagine if you saved a document and when you reopened it the text contained only the parts that pleased you.

Russo illustrates the point with an experiment he does for his students. He shows them a video of a group of young men and women tossing two basketballs among themselves quite rapidly. The students are asked to count the number of passes. At a certain point in the video, a man in a gorilla suit walks through the frame. After the video, when the students are asked if they noticed anything odd, about a third say they didn’t see this absurd disruption. “I use this experiment to demonstrate the point that perception and memory are not like running a tape,” Russo says. “We do have selective attention and we miss things, especially if we’re very focused on a particular task.” As Jeffery Jones, an assistant professor of psychology at Wilfrid Laurier University, puts it, “There seems to be a limit to the amount of information we can process at one time.”

It’s not just a matter of quantity either. Kathy Sierra is a Boulder, Colorado-based educator who designed and created the bestselling “Head First” software-development guides, which are based on neuroscience research about cognition and human memory. In developing her approach, she pored over evidence that revealed how the human brain, from an evolutionary point of view, remains a machine programmed primarily to look out for its owner’s survival, registering threats like an approaching predator. “Our brain cares about things that are very different than the conscious mind wants to learn,” she says. It is geared to respond to novel, surprising, or terrifying emotional and sensory stimuli. Her conclusion: the fast-paced, visually arousing hit of video games is intensely captivating for the human brain, whereas the vast amount of text found on websites, blogs, and databases tends to wash over us.

The human mind, well-suited as it is for language, has always adapted to new information technologies such as the printing press and the telephone, so why should the latest generation be any different? It may be partly a matter of the quantity of information at our disposal, and the speed and frequency with which it comes at us. The research on cognitive overload and multi-tasking reveals that our brains are ill equipped to function effectively in an information-saturated digital environment characterized by constant disruptions. While there’s much hype about how young people weaned on the Internet and video games develop neural circuits that allow them to concentrate on many tasks at once, the science of interruptions suggests our brains aren’t nearly that plastic.

Russo cites epidemiological studies showing that drivers who are talking on a cellphone are four times more likely to be involved in an accident than those who remain focused on the road. Aviation experts have understood this phenomenon for years. A large proportion of plane crashes involving pilot error can be traced to cockpit interruptions and distractions. A 1998 study pointed out that when people are engaged in highly familiar or routine tasks—the things we say we can do in our sleep—they become vulnerable to distraction-related errors because the brain is, essentially, on autopilot and doesn’t recover well when it is called on to respond to information that is unpredictable, even casual conversation. “Cognitive research indicates that people are able to perform two tasks concurrently only in limited circumstances, even if they are skillful in performing each task separately,” concluded a recent NASA study on cockpit distractions. That’s why pilots are required to keep banter to a minimum.

Multi-tasking, however, is the signature behaviour of the wired world. We spend our days ricocheting between websites, blogs, our own files, and the various communications devices demanding our attention. Ironically, humans have misappropriated the nomenclature of digital technology to describe this phenomenon. The phrase “multi-tasking,” David Kirsh observes, was invented to describe a computer’s capabilities, not a person’s.

Yet wireless devices encourage ill-advised multi-tasking: driving and checking BlackBerrys; talking on the phone and reading email; working on two or more complex projects at once. In corporate meetings, participants discreetly text one another or check email while the boss is talking. University classrooms are now filled with students tapping away at their wireless laptops. They may be focused on a document or a website related to the lecture or they may not. Digital technologies invite disruption and pose a daunting challenge to the possibility of a group of individuals applying their collective attention to a particular chore.

Not surprisingly, a growing body of scientific literature has demonstrated that multi-tasking in an office setting is a recipe for lost productivity—a message that runs directly counter to the way many companies want their employees to work. When someone is bouncing between complex tasks, he loses time as the brain is forced to refocus. An American Psychological Association study has found that those “time costs increased with the complexity of the tasks, so it took significantly longer to switch between more complex tasks.” When multi-tasking, the brain’s executive processor performs a two-stage operation: the first is “goal shifting” (e.g., shifting from editing a text file to checking email), and the second is “rule activation” (turning off the learned rules for editing on a word processing program and turning on the rules for managing the email program that’s being used). According to the APA, Joshua Rubinstein, a psychologist with the US Federal Aviation Administration, determined that “rule activation itself takes significant amounts of time, several tenths of a second—which can add up when people switch back and forth repeatedly between tasks. Thus, multi-tasking may seem more efficient on the surface, but may actually take more time in the end.”
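
To get a feel for how those tenths of a second add up, here is a back-of-the-envelope calculation in Python. Only the rough size of the switch cost (“several tenths of a second”) comes from the research quoted above; the switching frequency and the length of the working day are assumptions chosen purely for illustration.

# Back-of-the-envelope estimate of accumulated switch costs. Only the
# per-switch figure reflects the research cited above; the rest is assumed.
switch_cost_seconds = 0.4     # goal shifting + rule activation, per switch
switches_per_hour = 60        # e.g., glancing at email once a minute (assumed)
hours_per_day = 8             # assumed working day

lost_seconds = switch_cost_seconds * switches_per_hour * hours_per_day
print(f"{lost_seconds / 60:.1f} minutes a day lost to switching alone")
# -> 3.2 minutes a day, before counting the far larger cost of refocusing
#    on the interrupted task itself.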

Uncontrolled interruptions create a similar cognitive response. You’re working on your computer and the cell rings, the BlackBerry buzzes, or the incoming email notification pings. Out of a sense of urgency, curiosity, or simply a craving for a distraction from an arduous task, you break away to deal with the interruption, which may be something very simple (a quick cellphone exchange) or something quite complex (a detailed email from a coworker that’s been marked urgent). In other cases, the interruption leads you off on an entirely new tangent and you may not end up returning to the original project for hours. By that point, you have forgotten where you were or you may have closed windows that now need to be found and reactivated. It’s like putting a novel down for days and then discovering you need to reread the last chapter in order to figure out what is happening.

When a large British company evaluated the emails sent by its employees, it discovered that almost 30 percent were unnecessarily copied, irrelevant, or difficult to understand. The annual cost in lost productivity was estimated to be about £3,400 per person, or almost £10 million across the firm. Those numbers don’t include the time lost as employees try to get back on task.
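
The reported figures are at least internally consistent; a line of Python is enough to see the size of workforce they imply (the headcount is an inference, not a number from the study).

# Consistency check on the reported numbers: about 3,400 pounds per person
# and nearly 10 million pounds firm-wide imply a workforce of roughly
# 2,900 people (an inference, not a figure from the study).
print(round(10_000_000 / 3_400))   # -> 2941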

The annual Computer-Human Interaction (CHI) conference, held last year at Montreal’s cavernous Palais des congrès, was in many ways a classic nerdapalooza—hundreds of grad students, post-docs, professors, and software industry types networking and swapping business cards.

The Palais had set up hotspots throughout the building, which virtually guaranteed that the sessions would be a study in multi-tasking and fragmented attention. The presenters plugged their laptops into digital projectors and fired up their PowerPoint presentations, while the conference delegates whipped out their PowerBooks and promptly went online. Almost everyone fiddled with some kind of portable device—laptops, BlackBerrys, Palms, or camera phones. One young woman played with an Etch A Sketch key fob.

Interestingly, many of the most popular sessions dealt with finding technological solutions to the daily problems precipitated by the combination of too much communication and too little time. Similar preoccupations have turned up at other information technology conferences in the past year or two, according to Sierra, who attends many techie gatherings. At one conference she went to last year, the dominant topic of discussion had to do with ways of filtering out unwanted information. “I’ve never seen that before and I’ve been going to these tech conferences for fifteen years.”

Some groups debated the failings of recommender systems—software on movie, music, and book sites that purports to provide tips based on user profiles but typically generates unwieldy lists instead. Others deliberated on why it had become so difficult to electronically set up face-to-face meetings in the age of crushingly crowded schedules. At a session about text messaging, a young post-doc presented the results of a study on the costs of interruptions among MSN users.

But it was during an esoteric debate on “information architecture” that one participant drove right to the heart of the issue that seemed to be on everyone’s mind. “If information is like the sea,” this delegate asked, “what is seamanship?” The question seemed to me to be about as profound an observation as anything I’ve come across in all the discussions about the geography of the digital universe. “We don’t talk about ‘human-wind interactions,’” he continued. “We talk about sailing. We don’t talk about ‘human-saw interactions.’ We talk about woodworking.”

His point was that we don’t have a relationship with a toaster because it is nothing more or less than an object we use to perform a discrete task. But information/communications technology is unlike any other human invention because it performs data processing tasks more adroitly than the human brain. What’s more, the wireless advances of the past decade have created portable devices that purport to augment our minds. Whether or not they do, we have become more and more dependent on these fabricated cortexes. We have complex relationships with such gadgets because, increasingly, we can’t really function without them. I could get along without a car but I can no longer earn a living without my browser.

When these technologies create an unintended consequence—a Google search, say, that returns millions of hits in no order useful to the user and is therefore a self-defeating solution to the problem of finding information on the Net—we seek to engineer our way out of the box. At the CHI conference, technical people debated technical solutions to the failings of a techno-culture that throws up too much information and too many distractions. While the participants were clearly preoccupied with this Catch-22, they largely believed that technology must deliver the solutions.

This orientation was glaringly obvious during a seminar entitled “Because I Carry My Cell Phone Anyway,” by Pam Ludford, a Ph.D. candidate at the University of Minnesota. She developed a prototype of a “place-based reminder system,” dubbed “PlaceMail.” Every day, she began, Americans spend about two-and-a-half hours doing chores at different places—the mall, the dry cleaners, the supermarket. But, she said, “people have imperfect practices for managing these tasks.” We make lists, then misplace or forget to check them. By way of a solution to such common imperfections, she has devised a “location-based reminder system.” In broad strokes, you key your to-do list into a web-interface feature on a cellphone or BlackBerry equipped with a global positioning system chip or other location-sensitive technology. Next, you input the locations of the places where said chores can be accomplished. Then, as you’re driving around, the GPS chip will detect if you are close to the supermarket, say, whereupon the phone rings and an electronic message appears, reminding you to pick up eggs and toilet paper.
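
The underlying logic is simple enough to sketch. The Python below is not Ludford’s PlaceMail, just an illustration of the general idea under invented names and placeholder coordinates: each new GPS fix is compared against a list of saved errand locations, and a reminder fires when one falls within range.

# Illustrative sketch of a location-based reminder, not PlaceMail itself.
# The coordinates and the trigger radius are placeholders.
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

# (errand, latitude, longitude) keyed in ahead of time
reminders = [
    ("pick up eggs and toilet paper", 43.6532, -79.3832),
    ("drop off the dry cleaning",     43.6629, -79.3957),
]

def check_location(lat, lon, radius_m=250):
    """Called on each GPS fix; returns any errands within range."""
    return [task for task, rlat, rlon in reminders
            if distance_m(lat, lon, rlat, rlon) <= radius_m]

print(check_location(43.6535, -79.3840))   # near the first location: reminder fires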

After the session, Victoria Bellotti, a principal scientist and area manager at the Palo Alto Research Center, told me that such aids may be “the next big thing.” But she also seemed dubious. For most people, a mnemonic scribble—“Mother’s Day” or “Beth blah blah”—is more than enough of a trigger to retrieve the memory necessary for an intended task, especially if it is stored in some kind of chronological context, such as an appointment book. “Your brain,” Bellotti said, “is basically a pattern-matching instrument.” Such memory prostheses may prove to be overkill, she added. But then she quickly noted that she herself has a dreadful memory. “Maybe we could stop worrying about certain things and focus on other things if we had that prosthetic device on us.” On the other hand, it might simply prove to be yet another dispenser of interruptions that further atomize our capacity to concentrate.

PlaceMail, in fact, is evidence of the feedback mechanism in our over-connected culture. In our relentless drive for more data-friendly wireless communications, we have produced a surfeit of communication and information, the combination of which has a tendency to clog up our schedules, splinter our attention spans, and overwhelm our short-term memories. Given the way our brains actually function, it may turn out that what we need is more time and fewer distractions, even if that means less information.

Not all of us are looking to key our way out of this box. We are now witnessing the emergence of a non-technological response to the symptoms of an accelerated info-culture. In San Francisco, a writer and consultant named Merlin Mann runs a blog called 43 Folders, which is about “personal productivity, life hacks, and simple ways to make your life a little better.” The popular site has become a focal point of debate about ways to manage the downside of too much digital communication, but from the perspective of users rather than technophobes.

The name of Mann’s blog comes from an idea in Getting Things Done: The Art of Stress-Free Productivity, the bestselling 2001 time-management guide by David Allen, who has become a guiding light for people in BlackBerry twelve-step recovery programs. A hippie in dress pants, Allen is based in Ojai, California; he was an educator and jack of all trades until the mid-1980s, when he set himself up as a productivity consultant. Getting Things Done—in the sturdy tradition of American marketing, he has trademarked the phrase and the acronym GTD—offers a smorgasbord of ideas about how to take back your life using a mixture of common sense, mind-clearing techniques, and self-discipline.

He’s big on creating paper to-do lists and eliminating the minor sources of frustration that pollute the typical workday. The forty-three folders idea involves setting up a system of forty-three ordinary manila folders in one’s office—one for each day of the month, and another dozen for the months—in which you place reminders of tasks that need to be completed and when. One of Allen’s premises is that much of the stress of an information-saturated workplace comes from over-committing ourselves without quite knowing how much we’re on the hook for. You have a vague sense of emails that have gone unanswered and interrupted projects dangling in digital limbo, going nowhere but nonetheless giving you grief.
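
Rendered literally, the tickler file is just an enumeration; a few lines of Python show where the number forty-three comes from.

# The 43-folders idea spelled out: 31 folders for the days of the month
# plus 12 for the months. A reminder filed under a future date resurfaces
# when that day's (or month's) folder comes up in the rotation.
import calendar

day_folders = [f"Day {d}" for d in range(1, 32)]
month_folders = list(calendar.month_name)[1:]      # "January" .. "December"
folders = day_folders + month_folders
print(len(folders))   # -> 43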

By taking up the ideas in GTD, he contends, one can compile “a complete and current inventory of all your commitments, organized and reviewed in a systematic way [in which] you can focus clearly, view your world from optimal angles, and make trusted choices about what to do (and not do) at any moment.” One of Allen’s most popular ideas is the “Hipster PDA”—a little notebook or a sheaf of index cards, which you keep on your person so you can make notations as they come to you, rather than committing them to some digital black hole or, worse, forgetting these fleeting thoughts as other sources of distraction muscle their way into your consciousness. It doesn’t get any more low-tech than that.

The growing interest in such “solutions”—to borrow a favourite techie buzzword—indicates the way portable information technologies have unwittingly created new problems while solving old ones. The BlackBerry that doesn’t stop pinging, the tsunami of email, the relentless subdividing and cross-posting of online data—these features of our daily information diet hint that something’s gone awry. If we are to establish balance in our relationship with the digital information that envelops us, we must reconsider our understanding of the inner workings of our pre-existing mental machinery and the limits of its capacity to adapt to the electronic environment.

One approach is to recognize the futility of the compulsion to inundate ourselves with information in the hope of meaningfully processing everything that comes over the digital transom. Kathy Sierra says one of the most widely read and copied posts on her blog was a cri de coeur in which she confessed that she had stopped trying to keep up with all the technical reading she was supposed to be doing. The post brought an enormous sigh of relief from thousands of distracted bloggers who, she says, were grateful to be released from that treadmill of surfing, reading, forgetting, and repeating.

The chronic memory loss prompted by such online behaviour is, in fact, the canary in the coal mine. Our information technologies have created an epidemic of engineered forgetfulness—a symptom of the massive quantity of data we attempt to cram into our minds each day. We inevitably fail, yet our social biases about forgetting are thoroughly negative. A great memory is still considered to be a sign of mental acuity while we associate forgetfulness with aging and decline. But, as Sierra points out, a healthy brain actively rejects much of the information we’re trying to stuff into it; the brain is designed to be selective. “There’s a lot of chemistry in the brain devoted to stopping us from remembering things,” she told me. “That means forgetting is pretty darn important. We feel guilty about it. But we should have a great deal of respect for that [mechanism].” Discarding information that is not urgent or relevant is crucial to our ability to think in ways that are efficient and creative.

The point is that we must acknowledge the self-inflicted memory lapses triggered by information overload, chronic interruptions, and relentless electronic multi-tasking. The need to be much more conscious of our information diet, in turn, is a reflection of the imbalance between our technical capacity to record information digitally and our neural capacity to remember it chemically. After fifteen years of web access, we haven’t really tried to reconcile these unevenly matched features of our mental geography. Moreover, amid all the transformations, we have been devaluing those very neurological capabilities that technology has not been able to mimic, and none more thoroughly than the biological need to concentrate as a way of allowing long-term memory to transform into thought and, when necessary, action.

As a consequence, our perennially distracted Net culture seems programmed to eliminate time for thinking, which is not the same thing as time for finding and saving data. We have unleashed an explosion of digital media but, paradoxically, we have less and less opportunity to digest it, and then to allow all the information to, well, inform. Being online has become a state of being, while going off-line increasingly represents either an act of will or a tiny gesture of rebellion against the status quo. “We’re at a point when we can’t be alone,” Wilfrid Laurier psychology professor Jeffery Jones told me as we talked about his research on technology and interruptions. We were sitting in his small office: the husk of an old computer sat on the floor, and there was a new wide-screen terminal on his desk, along with a joystick and his cellphone. Toward the end of the interview, someone knocked, but he ignored it. He said he now knows that if he wants to focus, he must make a point of not picking up the phone or answering his email—even though that failure to connect leaves him feeling vaguely guilty. “But I’ve learned,” Jones reflected, “that you have to have some time when you are unavailable.”

John Lorinc is a Toronto-based journalist and editor and the author of Dream States: Smart Cities, Technology, and the Pursuit of Urban Utopias, published this year.