
How We Finally Taught Machines to Think

Today’s launch of the Vector Institute caps a stunning run for Canadian researchers in the field of AI

Photo by MR McGill
Edinger-Westphal neurons in culture.

Geoffrey Everest Hinton is one of the most important Canadians whose name you don’t know. Through his work on neural networks, he’s helped show how the meta-algorithmic strategy known as “backpropagation” can turbocharge the creation of artificial intelligence. That may sound hopelessly obscure. But as AI enters everyday use in everything from digital personal assistants to medical diagnostics to business decision-making, his influence is embedding itself at the very core of emerging technology.

On Thursday, at the Google Canada offices in Toronto, I bore witness to one of Hinton’s greatest challenges ever: Stuffing decades of groundbreaking research into a presentation to mark today’s launch of the Vector Institute for Artificial Intelligence in Toronto, where Hinton will take on the role of chief scientific advisor. The speech I saw was a dress rehearsal to see if he could come in under the time allotted by event organizers.

Hinton failed by a healthy margin. But my own on-board neural network generated null-set ideas for what he might have cut. This was a miniature master-class in the machine-learning technologies that will soon transform our lives, yet which few of us understand.

Computer programming, as we have always known it, involves teaching computers algorithms that we already know explicitly, Hinton said. We know how to add Column A to Column B, and divide the sum by Column C. A traditionally programmed computer can do these operations billions of times per second. But in the way it implements each granular task, a computer is basically an electric abacus.
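The contrast Hinton draws can be made concrete. In traditional programming, the human already knows the algorithm and simply spells it out; a minimal Python sketch (the column names are invented for illustration, not from any real dataset):

```python
# Explicit, rule-based programming: we know the algorithm in advance,
# so we state it outright and the machine merely executes it.
def combine(col_a, col_b, col_c):
    # add Column A to Column B, and divide the sum by Column C
    return (col_a + col_b) / col_c

print(combine(2, 4, 3))  # → 2.0
```

There is no learning here: the program can run the rule billions of times per second, but it will never do anything the programmer did not already understand.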

With neural networks, the process is different: We’re teaching computers how to carry out algorithms that human brains implement without our ever understanding them explicitly. How does the giant neural network known as the human brain distinguish a dog from a cat? How do you look at a photo and know it’s your boss and not your barista? How does the grammar computer inside our heads know that “I’ll try and do it” means the same as “I’ll try to do it”? (This last one, which Hinton cited in his remarks, is just the sort of maddening idiom that shows why linear grammar-based AI translation methods were so ineffective.) Since humans don’t really know why we know these things, we can’t use lists of rules to directly teach machines how to understand them.

Neural networks—you will hear these two words a lot in coming years—reflect a spirit of humility, a recognition that our usual human strategy of describing tasks is a complete failure when it comes to showing computers how to navigate the human environment. Think of the way we teach toddlers English by showering them with examples of language in speech and print—rather than drilling them on the rules of tense. In effect, the science of neural networks is about treating a machine like a three-year-old in a nursery-school class.

The neural metaphor works on several levels. On the granular level, the science of creating these networks crudely replicates the way our brains build knowledge—iteratively strengthening or weakening connections among neurons. Our memories and personality are built out of such stuff. But on a larger level, there also is a strong metaphorical connection to evolution through natural selection. Just as biological creatures (including their brains) brought accidental genetic glitches into their germ line when such glitches conferred an advantage in strength or longevity or mating, neural networks in machines are nudged to evolve in a way that makes them better at performing useful human-specified tasks.

Hinton showed a vivid example: a picture captioned as “a close-up of a child holding a stuffed animal.” Any toddler could look at this photo and know that this caption was completely accurate. Yet until very recently, traditional image-recognition algorithms failed at teaching machines to turn pixel matrixes into text descriptions. Only through neural networks has such a breakthrough been possible. We gave up teaching machines mathematical rules that define what a teddy bear looks like, and just fed them, in effect, an endless slideshow from what parents post to public Facebook pages. Eventually, the computer just gets it.

One big problem that researchers such as Hinton wrestled with, until just a few years ago, is that the process of building neural networks was agonizingly slow. You show the computer a cat, and it spits out the result that it’s 51 percent sure the thing is feline, with a 49 percent probability that it’s canine. Then you fiddle with one of the connections within the network that links input (the kitty image) to output (probability), and see what happens. If the level of certainty goes up to 51.01 percent, you keep the change and run the process again. If it goes down to 50.99 percent, you reject the change, and start over.
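That one-knob-at-a-time procedure can be sketched in a few lines of Python. Everything here is an invented toy, not Hinton’s actual setup: the “network” is a single weighted sum squashed into a probability, and the features standing in for a cat photo are made up:

```python
import math
import random

random.seed(0)

# Toy "network": a weighted sum of image features squashed into a probability.
def predict(weights, features):
    s = sum(w * f for w, f in zip(weights, features))
    return 1 / (1 + math.exp(-s))  # probability the image is a cat

cat_features = [0.9, 0.2, 0.7]  # hypothetical pixel-derived features of a cat photo
weights = [0.0, 0.0, 0.0]       # the network starts out maximally unsure (p = 0.5)

# The slow way: nudge one random connection, keep the change only if
# the network becomes more certain the image is a cat.
for step in range(2000):
    before = predict(weights, cat_features)
    i = random.randrange(len(weights))
    old = weights[i]
    weights[i] += random.uniform(-0.1, 0.1)  # fiddle with one connection
    if predict(weights, cat_features) <= before:
        weights[i] = old  # certainty didn't improve: reject the change, start over
```

Even on this three-weight toy, thousands of trial tweaks are needed; a real network has millions or billions of connections, which is why this approach was agonizingly slow.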

The advantage of the aforementioned “backpropagation” (a concept first described in the scientific literature half a century ago) is that you can start from the presumption of near-100 percent certainty in kitty-ness, and then figure out how the internal neural knobs can be fiddled en masse to get you there. As one might expect, this is a fantastically computationally intensive job—and it is only in recent years that the data-processing power has been available to do it. Which is to say, modern AI is all about the genius of scientists such as Hinton tethered to the insane level of petaFLOPS cranked out by today’s machines.
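The speed-up backpropagation buys can be seen by reworking the same kind of toy example. Instead of testing one knob at a time, one sweep of the chain rule yields the adjustment for every weight at once. This is a hedged sketch with invented numbers, not the method’s production form:

```python
import math

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

x = [0.9, 0.2, 0.7]   # hypothetical features of a cat photo
target = 1.0          # the presumption of near-100 percent kitty-ness
w = [0.0, 0.0, 0.0]
lr = 0.5              # learning rate: how far each knob is turned per step

for _ in range(500):
    s = sum(wi * xi for wi, xi in zip(w, x))   # forward pass
    p = sigmoid(s)                             # current certainty
    # Backward pass: the gradient of the squared error with respect to
    # EVERY weight, computed in a single application of the chain rule.
    grad = [(p - target) * p * (1 - p) * xi for xi in x]
    w = [wi - lr * gi for wi, gi in zip(w, grad)]  # turn all the knobs en masse
```

Each iteration adjusts all weights simultaneously in the direction that raises certainty, rather than gambling on one connection at a time, which is why the approach only became practical once machines could afford the heavy arithmetic at scale.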

When people talk about how AI will change our lives, the subject sometimes gets reduced to Siri knowing how we like our coffee, Waze getting a better handle on lane closures, and Skynet exterminating the human species. But the examples Hinton discussed seemed more compelling—like an app that can tell you if you have skin cancer based on an AI-powered analysis of a selfie. Then there’s diabetic retinopathy, the most common cause of irreversible blindness among middle-agers. Hinton showed us two retina scans, one retinopathy-positive, the other negative. The differences between them were subtle. Yet Hinton explained how AI could be used to automatically distinguish one from the other. That may sound like a recipe for throwing ophthalmologists out of work. But in parts of the world where regular treatment by health specialists is out of reach, this kind of technology can become the difference between vision and blindness.

Canadians love to list off the innovations we’ve discovered—insulin, canola, AM radio. Yet few know how much our country is accomplishing in the field of AI, which has the potential to utterly overhaul the way humans are served by digital technology. Much of the credit goes to decades of funding from government agencies such as the Natural Sciences and Engineering Research Council of Canada (NSERC), which, as Hinton noted, had the foresight to fund purely “curiosity-based” research with no obvious industrial application. All of that is now bearing fruit. In 2009, University of Toronto researchers used backpropagation methods to revolutionize speech recognition, technology that quickly found its way into Android devices. In 2012, they did the same for the field of image recognition, while the University of Montreal has become a world leader in the field of machine translation.

You don’t need to know how neural networks function in order to benefit from all the innovations that AI will unleash. But as a matter of policy-making, it’s instructive to hear Hinton describe what generous government funding, open-ended scientific curiosity, strong academic collaborators, and long-horizon corporate players such as Google can accomplish.

Jonathan Kay (@jonkay) is a journalist, book author and editor, and public speaker.
