Intelligence Deficit

What will happen when computers become smarter than people?

If you’ve got any spare change, the Lifeboat Foundation of Minden, Nevada, has a worthy cause for your consideration. Sometime this century, probably sooner than you think, scientists will likely succeed in creating an artificial intelligence, or AI, greater than our own. What happens after that is anyone’s guess—we’re simply not smart enough to understand, let alone predict, what a superhuman intelligence will choose to do. But there’s a reasonable chance that the AI will eradicate humanity, either out of malevolence or through a clumsily misguided attempt to be helpful. The Lifeboat Foundation’s AIShield Fund seeks to head off this calamity by developing “Friendly AI,” and thus, as its website points out, “will benefit an almost uncountable number of intelligent entities.” As of February 9, the fund has raised a grand total of $2,010; donations are fully tax deductible in the United States.

The date of this coming “Technological Singularity,” as mathematician and computer scientist Vernor Vinge dubbed the moment of machine ascendance in a seminal 1993 article, remains uncertain. He initially predicted that the Singularity (sometimes referred to, in less reverential tones, as the “Rapture of the nerds”) would arrive before 2030. Inventor and futurist Ray Kurzweil, whose book The Singularity Is Near was turned into a movie last year, places it in 2045. Those predictions are too conservative for Canadian science fiction juggernaut Robert J. Sawyer: in his WWW trilogy, whose third volume, Wonder, appears in April, the Singularity arrives in the autumn of 2012.

If anyone is ideally suited to bring this rich vein of sci-fi angst into day-after-tomorrow territory, it’s Sawyer. In addition to sitting on two of the Lifeboat Foundation’s advisory boards, the fifty-year-old Ottawa native is one of the most successful Canadian authors of the past few decades, with twenty novels to his credit, including The Terminal Experiment (which won the 1995 Nebula Award for best novel), Hominids (which won the Hugo Award in 2003), and FlashForward (which in 2009 was turned into a short-lived television series on ABC starring Joseph Fiennes). He’s also a meticulous realist, setting his novels in real scientific milieus such as the Sudbury Neutrino Observatory; the European Organization for Nuclear Research in Switzerland; and, in the WWW books, the Perimeter Institute for Theoretical Physics in Waterloo, Ontario. It’s his nerdly grasp of the real-world march of scientific progress that makes the books work—and, ultimately, makes the Lifeboat Foundation sound a little less crazy than you might initially think.

In the trilogy’s first volume, Wake (2009), Sawyer introduces us to Caitlin Decter, a blind teenage math whiz who regains her sight, thanks to an advanced retinal implant—and, through a side effect that Sawyer manages to make surprisingly plausible, gains the ability to “see” the underlying structure of the data streams that make up the World Wide Web. Decter’s presence in cyberspace helps spark the awakening of a nascent consciousness embodied in the billions of lost information packets bouncing aimlessly around the Internet. With Decter’s help, this “Webmind” begins to absorb all the information on the web, becoming steadily more intelligent.

In Watch (2010), Webmind’s existence is revealed to the world—and the US National Security Agency (NSA) moves swiftly to terminate it before its powers can further expand, despite the fact that Webmind has rid the world of email spam as a goodwill gesture, and has pledged to work tirelessly to increase the “net happiness of the human race.” Webmind and Decter (along with her physicist father, her economist/game theorist mother, and a zany gang of other conveniently didactic characters) thwart the attack, and the volume ends with Webmind rejecting George Orwell’s dystopian vision of a world watched over by a pervasive Big Brother.

“It was the lack of observation that allowed genocides and hate crimes,” Webmind muses. “It was the existence of dark corners that allowed rape and child molestation.” But that will no longer be a problem, thanks to its “countless eyes, beholding all. The World Wide Web surrounds today. And that day—that wondrous day—is upon you now.”

Sawyer is a details man; his evocation of day-to-day life at the Perimeter Institute, for example, is spot on. But he’s also an ideas guy. “It’s absolutely the philosophy that comes first,” he told Philosophy Now magazine last fall. “I work out what I want to say thematically, what my arguments are going to be, and then discover the characters and the plot twists that support that while I’m actually writing the book.”

The resulting novels function as extended philosophical thought experiments. The themes in the trilogy, Sawyer says, include “game theory and altruism and consciousness studies and information theory and primate language studies.” And the science he describes is almost entirely today’s science, faithfully rendered. Just a few key facts have changed, most notably that consciousness has emerged spontaneously in a massively complex network, in a way that some scientists believe is possible and others don’t. (Arthur C. Clarke predicted essentially the same thing more than forty years ago, except it arose within the telephone network instead of the Web.) The WWW thought experiment asks two related but distinct questions: If it happens, what will humans do about it? And what should they do?

In principle, the advent of a highly capable artificial intelligence that can take over the cognitive burden of running the world sounds quite nice. As British mathematician I. J. Good wrote in an influential paper in 1965, “The first ultraintelligent machine [is] the last invention that man need make.” The reason is that any machine smarter than we are will also be better than we are at designing artificial intelligence, so it will be able to improve on its own capabilities. And that will immediately allow it to enhance itself even more, and so on, in an endless bootstrapping process. Good called it an “intelligence explosion”; Vernor Vinge calls it the “hard takeoff.” In an arbitrarily short time, any super-intelligence will evolve from being a bit smarter than we are to being incomparably smarter—and the balance of power between humans and their erstwhile tools will shift just as quickly.
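
Good’s argument is, at bottom, a feedback loop, and a few lines of code capture its shape. The toy sketch below is purely illustrative (the growth rate and the idea of a single “capability” number are my assumptions, not anything from Good or Sawyer): once each generation’s ability to improve its successor scales with how capable it already is, the curve runs away from you.

```python
# A toy rendering of I. J. Good's "intelligence explosion" feedback loop.
# The starting point, growth rate, and number of generations are arbitrary
# assumptions; only the shape of the curve is the point.

def intelligence_explosion(start=1.0, growth=0.5, generations=10):
    """Each generation designs its successor, and the size of the
    improvement scales with the designer's own capability."""
    capability = start
    history = [capability]
    for _ in range(generations):
        capability += growth * capability  # smarter designers improve faster
        history.append(capability)
    return history

if __name__ == "__main__":
    for gen, level in enumerate(intelligence_explosion()):
        print(f"generation {gen}: {level:.1f}x human level")
```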

Back in 2000, the influential Silicon Valley computing pioneer Bill Joy published a dystopian manifesto called “Why the Future Doesn’t Need Us” in Wired, arguing that the rapid advance of nanotechnology, genetic engineering, and AI represents an existential threat to humanity. “Joy’s concern about AI is simple,” Sawyer explained in an article in the Globe and Mail. “If we make machines that are more intelligent than we are, why on earth would they want to be our slaves? In this, I believe he is absolutely right: thinking computers pose a real threat to the continued survival of our species.”

It might seem fairly simple to take care of this threat—as simple as, say, the Three Laws of Robotics that Isaac Asimov famously introduced in his 1942 story “Runaround”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But there is a whole host of problems with implementing these rules, and much of Asimov’s subsequent fiction was devoted to exploring the ambiguities that arise from the Laws. Can a robot harm a human if doing so will prevent harm to a greater number of humans? If so, how should it weigh different claims? Or what if—as another sci-fi author, Jack Williamson, proposed—robots programmed to “guard men from harm” decide to essentially imprison all of humanity because so many daily activities carry the risk of harm?
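
To see why “implementing these rules” is harder than it sounds, it helps to write the Laws down as code. The sketch below is a deliberately naive illustration in Python (the Action class and its numeric fields are invented for this example, not drawn from Asimov or from any real robotics system): the priority ordering is trivial to encode, but the first of the ambiguities above — harming one person to spare five — breaks it immediately, because a literal First Law simply vetoes the action.

```python
# A deliberately naive encoding of Asimov's Three Laws as a priority-ordered
# check on candidate actions. The Action class and its fields are hypothetical
# stand-ins; the point is where the scheme breaks down.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    humans_harmed: int      # how many people this action hurts
    humans_saved: int       # how many people it protects from harm
    ordered_by_human: bool  # was it commanded by a person?
    risks_robot: bool       # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human. But what about an action that harms
    # one person while saving five? Read literally, the Law forbids it.
    if action.humans_harmed > 0:
        return False
    # Second Law: obey human orders, unless they conflict with the First Law.
    if action.ordered_by_human:
        return True
    # Third Law: protect yourself, unless that conflicts with the first two.
    return not action.risks_robot

if __name__ == "__main__":
    trolley = Action("divert runaway cart onto one worker, sparing five",
                     humans_harmed=1, humans_saved=5,
                     ordered_by_human=True, risks_robot=False)
    print(permitted(trolley))  # False -- the literal First Law blocks it
```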

The Lifeboat Foundation divides the potential threats into three basic categories. The first is an AI deliberately programmed to do the bidding of an evil creator, a danger that is real but not much different from those that accompany many other forms of advanced technology.

The second category is a rogue AI that turns against its creators, by far the most common trope in this branch of science fiction. Scenarios range from individual acts of rebellion, like HAL trying to take over the Jupiter mission in 2001: A Space Odyssey, to more systematic attempts to enslave (The Matrix) or exterminate (The Terminator) humanity. But a Lifeboat analysis dismisses this as the least likely scenario, since it assumes that an artificial intelligence would be burdened with “all of the psychological baggage which goes along with being human.” Aggression, jealousy, and even the drive for self-preservation are all properties forged in the crucible of evolution, the report argues, and wouldn’t be characteristics of an AI unless deliberately programmed.

The first two volumes of the WWW trilogy are devoted to arguing precisely this point, and they read in places as if Sawyer’s chief goal was to correct a careless but common error and prevent its further dissemination in the sci-fi canon. “Evolution was built on violence, on struggles for territory, on an ever-escalating battle between predator and prey,” Webmind asserts in the closing pages of Watch. “But consciousness makes it possible to transcend all that… I had emerged spontaneously, bypassing the evolutionary arms race, avoiding the cold logic of genes.”

But there’s a third, less obvious scenario that’s not so easily dismissed: a super-AI that means well but inadvertently wipes us out, like an overgrown puppy who knocks over a table with his enthusiastic tail-wagging. The simple example the Lifeboat Foundation offers is of a computer programmed to eradicate malaria that fulfills its purpose by eliminating all mammals. And here we encounter a debate that spills over from the Lifeboat website to computer science blogs, articles, books, movies, and summits hosted by organizations like the Singularity Institute for Artificial Intelligence. Because it’s complicated.
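
This “well-meaning but literal-minded” failure mode is easy to make concrete. The sketch below is an invented toy (the world model, the numbers, and both candidate actions are my assumptions, not the Lifeboat Foundation’s analysis): a pure optimizer handed the objective “zero malaria cases” has no reason to prefer the humane solution unless the objective says so.

```python
# Illustration of a literal-minded optimizer "solving" malaria.
# The world model and actions are invented; the point is that the stated
# objective (zero malaria cases) is best satisfied by the monstrous option.

def malaria_cases(world):
    # No mammalian hosts means no malaria, by definition.
    return 0 if world["mammal_species"] == 0 else world["infections"]

def apply_action(world, action):
    new = dict(world)
    if action == "distribute bed nets":
        new["infections"] = int(new["infections"] * 0.5)
    elif action == "eliminate all mammals":
        new["mammal_species"] = 0
        new["infections"] = 0
    return new

world = {"mammal_species": 6_500, "infections": 200_000_000}  # toy numbers
actions = ["distribute bed nets", "eliminate all mammals"]

# A pure optimizer picks whichever action minimizes the stated objective.
best = min(actions, key=lambda a: malaria_cases(apply_action(world, a)))
print(best)  # "eliminate all mammals" -- objective achieved, humanity not consulted
```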

For one thing, a self-aware AI is qualitatively different from even the most powerful computer. We can ask Google Maps for the best route to Grandma’s house, and we have GPS systems that take into account traffic patterns and toll charges. But even as computers get better and better at telling us how to do things, and even whether to do things, they remain incapable of formulating their own judgments about whether doing these things is good or bad. Those who fear the Singularity argue that we’re unable to program computers with human values for the simple reason that human values can’t be reduced to an algorithm.

All of this is known as the “Friendly AI” problem. Whether it is insoluble, difficult but ultimately soluble, or simply a product of paranoia remains a topic of fierce debate among AI researchers. But that debate will be irrelevant if the developers of the first human-level AI make no effort to incorporate Asimov-like rules into their creations. Given that some of the most advanced machines in the world today are emerging from corporate and military labs, it’s not at all certain that will be the case.

When Bill Joy’s manifesto appeared in Wired, I was finishing up a Ph.D. in an area of physics that falls under the general heading of nanotechnology—one of Joy’s bogeymen. It was around then that friends and family started forwarding me articles asking whether scientists had adequately assessed the risks involved in pursuing this type of research. By 2002, when Michael Crichton’s Prey brought the idea of self-replicating killer nano-bots to a wider audience, I was continuing my nanotech research at an NSA lab in Maryland.

I didn’t take these worries seriously at the time. (I still don’t lose any sleep over self-replicating nano-bots, though more research is needed to determine whether seemingly benign nano-particles might accumulate in the body over time.) But I suspect that experience may explain why my biggest issue with the first two volumes of Sawyer’s trilogy was the unsympathetic depiction of the NSA scientists charged with wiping out Webmind—particularly Colonel Peyton Hume, a pistol-packing AI expert seconded from the air force. Hume is a co-author of the Pandora Protocol, which dictates the US government’s response to Webmind:

Given that an emergent artificial intelligence will likely increase its sophistication moment by moment, it may rapidly exceed our abilities to contain or constrain its actions. If absolute isolation is not immediately possible, terminating the intelligence is the only safe option.

As the action in Wonder gets under way, Hume becomes increasingly strident in advocating a first-strike approach despite Webmind’s friendly overtures, comparing the situation to the Chinese government’s decision in the first novel to pre-emptively wipe out 10,000 peasants to contain an outbreak of bird flu. When the president hesitates, Hume, like a caricature of the trigger-happy air force general in Dr. Strangelove, decides to take matters into his own hands.

In his 2000 Globe article, Sawyer concluded that the potential benefits of nanotechnology outweigh the risks, while the risks of super-AI make it too dangerous to pursue. So why, I wondered, was he doing such a lousy job of articulating this case in the trilogy? After all, it doesn’t take a hyper-militaristic wacko to find the argument in the Pandora Protocol compelling. The crucial point he does capture is that if this situation arises, the decision will have to be made almost immediately, with imperfect information, by imperfect humans taking their best guess as to whether they’re saving humanity or depriving it of its greatest gift.

A few chapters into Wonder, it’s clear that Sawyer hasn’t forgotten the power of these arguments after all. As Webmind’s intelligence continues to evolve and pick up cues from the surrounding world, we glimpse some of the different paths evolution might take—and not all of them are pretty. Wake and Watch were devoted to making us consider the possibility that a superior intelligence free from the taint of evolution might have good reasons to choose to love us, but Wonder makes sure we don’t take this for granted.

The real tension in Sawyer’s thought experiment, though, isn’t about Webmind’s advent and evolution; it’s about how humans will (or should) react to it. The lesson we’re supposed to draw from the contrast between Hume’s compucidal impulses and Caitlin Decter’s optimism and idealism initially seems pretty straightforward. But as Wonder’s plot twists and weaves, Hume starts to sound a lot more reasonable. You’re drawn relentlessly toward the finish, eager to find out whether Webmind will turn out to be a blessing or a curse. By the time you get there, though, you’ve already understood: it could have gone either way. Hume and Decter aren’t villain and hero; they’re just two sides of a debate that isn’t finished yet. And you find yourself scrolling through the Lifeboat Foundation website, thinking, “Really? Only $2,010? ”

This appeared in the April 2011 issue.

Alex Hutchinson
Alex Hutchinson is a fitness and travel writer, and a frequent Walrus contributor. He writes the Globe and Mail’s Jockology column.