Sometime in the mid-1980s, a bulky magnetic tape reel—an early form of data storage—arrived at the Thomas J. Watson Research Center, IBM’s research headquarters in Yorktown Heights, New York. It contained, by some reports, over 100 million words of text in French and English. It was the sum of fourteen years of Canadian parliamentary dialogue: a computer-readable version of Hansard, the official record of what transpires in legislative debate. (Hansard is named after Thomas Curson Hansard, the first official printer to the British Parliament.)

To this day, nobody is quite sure who sent the tape reel or whether they were authorized to forward it to IBM. But computer scientists there started experimenting with the data it contained, wondering if they could use it to assist in their efforts to develop a method for automated translation.

Most researchers working on computer translation at the time understood it primarily as a linguistic problem: solving it would mean uncovering how a given language was structured. Computerized translation efforts were therefore based on attempts to deeply analyze the grammars of two languages and then program complex sets of rules that would tell a computer how to transform one of those languages into the other. But IBM researchers had a different idea. They wondered what would happen if they simply treated translation as a probability calculation, examining the frequency with which words appeared, and in what order, in any given language—sheer mathematical guesswork. IBM saw translation not as a linguistic art but as a matter of statistical optimization.
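The core of that idea can be written down in a single line of probability. In roughly the framing IBM’s researchers would later publish (the notation here is a simplified reconstruction, not a quotation from their paper), the best English rendering of a French sentence f is the English sentence e that scores highest when two numbers are multiplied together:

    \hat{e} = \arg\max_{e} \; \Pr(e) \, \Pr(f \mid e)

Here Pr(e) measures how plausible a string of English words is on its own, and Pr(f | e) measures how likely the French sentence is as its translation, with both quantities estimated from large bodies of text rather than from hand-written grammar rules.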

Running a probability analysis like that—one robust enough to map one entire language onto another solely on the basis of word frequency and order—requires a huge data set. Today, a computer can scan a book that contains side-by-side translation and extract its text relatively easily; in the 1980s, nothing remotely like that was possible. IBM researchers had a theory they had no ability to test. Then, the Hansard data arrived at their door.

Several years later, Canadian computer-translation researchers sat, shocked, as they listened to an IBM team describe for conference audiences a revolutionary new translation method the company had developed using, as quoted in their published paper, “our Hansard data.” As IBM wrote, “We have chosen to work with the English and French languages because we were able to obtain the bilingual Hansard corpus of proceedings of the Canadian parliament.”

“We were all flabbergasted,” recalled Pierre Isabelle, a computer scientist who had been active in the field since 1975. Describing the conference for a 2009 article in a linguistics journal, Isabelle remembered people “shaking their heads and spurting grunts of disbelief or even of hostility.”

In Canada, Hansard is not necessarily a verbatim log of what is said in debates: rather, what transpires is transcribed, edited, and laboriously translated overnight by government translators, English speeches into French and French into English, so there is a complete account in both languages available by the following morning. Producing it is an enormous undertaking. IBM scientists were able to test their theory and develop a functioning method for computerized translation only, in other words, because they had years of these side-by-side translations, painstakingly created by countless civil servants in Ottawa, to use as raw material. That trove of data catalyzed a revolution in computing. Today’s state-of-the-art automated translations—even Google Translate’s recently revamped system—owe their existence to the method that IBM developed with Hansard as its very first training data.

This watershed moment for computing is a bitter pill for Canadian innovation and a revelatory chapter of our history. IBM wasn’t alone in trying to crack the secret of computerized translation. The Canadian government itself had, for many years, funded its own research project to do the same thing—precisely so that it could automate the translation of official documents like Hansard and ease the difficulties of operating a fully bilingual federal government. It was an ambitious, forward-thinking effort, and, despite nearly two decades of work, it failed—not because researchers weren’t making headway but because politics, regionalism, and ideology got in the way. Unlike the IBM scientists, who could pursue their research freely, Canadian researchers were hampered by underlying political imperatives that would prove fatal to their work.

This is not the only time that political considerations have undermined the federal government’s own efforts to support innovation or meet its technology needs: the pattern persists in government decision making to this day. Take, for instance, our years-long saga of obtaining new military jets, which the federal government demanded not only fly well but also generate economic benefits here at home. Canada has, on more than one occasion, allowed secondary policy considerations to shape its research and procurement efforts. History shows this has rarely turned out well.

The question of translation had plagued bureaucrats in Canada for decades. The federal government produces an immense quantity of documentation, and the demands of working in both French and English make producing it a far more complicated and time-consuming process.

There is an entire federal institution—the Translation Bureau, established in 1934—dedicated to the task of providing government materials in both languages. Hansard, though a high-priority item and the government’s longest-running bilingual publication, was just one of the bureau’s myriad government translation responsibilities. In addition to late-night Hansard work, the translators, reviewers, and proofreaders employed by the bureau filled their daylight hours translating all manner of documents—contracts, statistical tables, legislation, scientific reports, job bulletins, correspondence. In overseeing so many communication tasks, the bureau acquired considerable power as the gatekeeper of government language and the agent of French-language quality control.

The politics of bilingualism in Canada—in particular during the 1950s and 1960s, when tensions around Quebec’s place in the country were becoming inflamed—complicated this further. Because anglophones outnumbered francophones in the public service by a large margin, the French versions of documents were consistently published after the English ones. “There is one inescapable fact: the working language . . . of the state is English,” wrote Translation Bureau superintendent Pierre Daviault in 1956. “Texts are first written in English. Translation follows.”

Language was at the heart of mounting political tensions at this time. French Canadians were growing increasingly resentful of their exclusion from Canada’s political and economic life, access to which was hampered by linguistic and cultural differences. French Canadians didn’t feel like equal partners in a country whose professed bilingualism flowed only in one direction. In that context, problems with the French versions of government documents were not just irksome to French-speaking parliamentarians but were loudly criticized in French-language newspapers. Delays in French versions, deficiencies in French prose, anglicisms in French texts—all these details pointed to the continued second-tier status of French within Canada.

Seeing an upswell of separatist energy in Quebec, André Laurendeau, editor-in-chief of Le Devoir, pressed for a formal inquiry into what it would take for Canada to become truly bilingual. Prime Minister Lester B. Pearson, fearing a national-unity crisis, launched the Royal Commission on Bilingualism and Biculturalism, in 1963, with a mandate to explore federal policies that supported a more equal partnership between Canada’s French and English populations. High on that list? A better, speedier, and more equitable process for translating government records. Deep in the corridors of Canada’s public service, one particular bureaucrat was wondering if new technology might hold the key.

For assistant Queen’s Printer C. B. Watt, the political crisis that was unfolding in newspaper editorials and parliamentary debates played out every single day on the pages of documents he was responsible for producing. Among other tasks, Watt was duty-bound to deliver a bilingual copy of each day’s parliamentary proceedings to every MP’s desk by 9 a.m. the following morning. So Watt made a proposal: he wanted to harness new technologies to help automate the daily task of translation. He was hopeful that new printing machines, combined with computer technology, might achieve, as he phrased it, “more simultaneous release”—not just of Hansard printouts but of all government publications.

Canada was already about a decade late to the game: the US government had spent years trying to crack machine translation—the development and use of computerized methods to translate languages—in a bid to stay on top of Soviet science, and was just about to pull the plug because its research had not generated practical results. But Watt saw the high stakes of this work. “If we can get this project off the ground,” he wrote to Henriot Mayer, assistant superintendent of the Translation Bureau, “it will have world-wide implications and will bestow a great deal of credit on your department.”

Watt managed to interest B. G. Ballard, president of the National Research Council (NRC), in the project and, after a small preliminary study, secured Treasury Board approval on August 20, 1965. The government committed $1.6 million over five years—a paltry sum for a project with such formidable aims. (The US had funnelled approximately $13 million into its research by this point.)

The project was set up in a piecemeal fashion, with dollars and risk spread across three different universities: the University of Saskatchewan; Cambridge University, in the UK (which had experience with machine translation); and the University of Montreal. It was met with skepticism from its earliest days. Translation Bureau representatives, who attended annual meetings about the research program, “regarded it with suspicion,” recalls Brian Harris, a linguist who worked on the project at its inception. “And, of course, they were right to be very skeptical. It was hard enough for a human to get around this parliamentary language.” Where were the computer-translation research teams even to begin?

The first years of the project produced a smattering of research papers as teams toiled away independently, each following its own method. The years passed and the funding bodies of the machine-translation project, in Harris’s recollection, “were getting worried, because . . . they weren’t seeing any translations of Hansard.”

The passage of the Official Languages Act, in 1969, threw translation into the spotlight. Demand for translation exploded as federal institutions were now required to provide services, including all documentation, in both official languages. The values the Translation Bureau had cultivated—linguistic excellence, conformity, standardization, and professionalism—were now enlisted more consciously and emphatically in the service of national unity.

By this point, the Cambridge team had been dropped from the project, and the two remaining teams had adopted opposing research designs. The director of the Montreal group, Guy Rondeau, was a linguist rather than a computer scientist, but he was a talented recruiter with contacts in France who could help. And, in Harris’s estimation, the NRC was, “for political reasons, very keen to give a contract to a Quebec university.” The Montreal team’s approach, an Ottawa Citizen column explained in 1969, was “complex, and is based on the knowledge of the structure of language and linguistic science.” In other words, Rondeau and his team believed that a deep analysis of the grammars and structures of both languages would help achieve the high-quality automatic translation the bureau was after.

The Saskatchewan group, by contrast, was led by two scientists: Kathleen and Andrew Booth. Both had worked in computing and machine translation and brought that prior experience to the task. Their approach was, correspondingly, more prosaic. “At Saskatchewan, the problem is based principally on such factors as the frequency with which parts of speech are ordered in the natural usage of the language,” the Citizen column explained. Kathleen was a mathematician, and her statistical approach, though rudimentary, was—as evidenced by IBM’s later success—prescient. But at the time? “Nobody took her seriously,” says Harris.

Less preoccupied by the nuances of language, the Saskatchewan team wanted to deliver something that could be put to practical use. As Kathleen put it, their system tried to generate “rough and ready” output that could then be tidied up—a different mindset than the one in Montreal, which placed greater emphasis on the literary quality of the prose. But the Translation Bureau, concerned with safeguarding French and nurturing translation as an esteemed profession, was not interested in what the industry now refers to as “gistable” output—in which end users, expecting errors, read translated material for basic comprehension, to glean the gist of what was said in the original. Gistable output lacks poetry but possesses the practical virtues of being easier and faster to produce.

By then, funding for the project had been extended beyond the initial five-year term and eventually shifted to the Translation Bureau. The bureau’s mission of improving French, raising the professional profile of translators, and supporting and encouraging French-language use in the public service couldn’t help but inform the direction of the machine-translation work.

Though the Saskatchewan team seemed to be making faster progress—at an annual meeting in 1972, Kathleen recalls, “we were the only group to demonstrate an actual program”—its funding was soon cut.

Andrew Booth (an engineer who co-led the Saskatchewan team with his wife) later claimed that the loss of federal funding in favour of the Montreal team reflected the typical Liberal concern for Quebec votes. But a more likely story is simply that the Montreal group’s aspiration to produce fluid, convincing prose—however unattainable or naive—was more in keeping with the Translation Bureau’s own thinking.

The Montreal arm of the research project had become, over the years, an enormous undertaking. “We were writing grammars, complete lexicons,” recalls Elliott Macklovitch, who joined the effort in 1977. At one point, he says, there were more than twenty-five lexicographers, linguists, and programmers involved.

These efforts never yielded a system able to tackle anything as complex as parliamentary debate: the years of research that Watt had initiated still hadn’t translated any Hansard proceedings. The machine-translation research project was disbanded entirely in 1981.

There had been just one small victory along the way, in the mid-’70s. The team’s success? Launching a machine-translation system for weather bulletins.

The details of how a magnetic tape reel with data spanning five parliamentary sessions got to an IBM research team in New York state in the 1980s are sparse, but everyone tells the same story. IBM research fellow John Cocke—known for his genius, his sociability, and his drinking—had, some time earlier, found himself seated on an airplane next to someone (nobody knows who, though they are believed to have worked in some capacity for the Canadian government). In between visits from the drink cart, Cocke learned that the proceedings of the Canadian Parliament were kept in computer-readable form in both French and English. When Cocke returned to IBM, he relayed the information to two of his colleagues.

Nobody knows quite what happened next—there are no official records of a request for the information from IBM or of any decision by the Canadian government to release the valuable data to a foreign-owned private corporation. What we do know is that, soon after this fortuitous plane ride, the tape reel containing the Hansard debates showed up in Yorktown Heights with no project brief and no strings attached.

At first, the research team’s members looked only at the English Hansard data. In fact, they used the English data to create a spell-checker. But Cocke kept prodding them to look at the English and French texts together—in order to, as IBM computer scientist Peter Brown put it, “learn something about how translation works.” It wasn’t just that the Hansard was computerized and that it contained complete passages of text with their full translations. The team needed sentences paired directly with their translations in order to create a model of each language and then calculate the probabilities of certain word sequences in one language producing certain word sequences in the other. “We were lucky that we got this Hansard data,” says Brown. “We happened to be in the right place at the right time.”

There was more to it than that, of course. IBM had previously developed algorithms for speech-recognition technology and had the computing power to make the same approach work for translating languages. Crucially, and in contrast to the Canadian researchers, who were bound by the political need to make a perfect tool for use in a very particular context, IBM had the freedom to just explore and see where its translation efforts led. (The results weren’t perfect. In one surprising twist, the system was as likely to render the English word hear in French as Bravo! as it was to render it as the expected entendre; this was because the traditional doubled phrase used to voice parliamentary agreement—“Hear! Hear!”—skewed the statistics.)
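To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of counting that sits underneath the approach. The three “aligned” sentence pairs are invented for illustration, and IBM’s actual models handled word order and sentence alignment with far more sophistication, but even this toy version reproduces the hear-to-Bravo! quirk.

    # Toy estimate of word-translation probabilities from aligned sentence pairs.
    # The sentence pairs are invented for illustration; this is not IBM's system.
    from collections import Counter, defaultdict

    aligned_pairs = [
        ("some hon members hear hear", "des voix bravo"),
        ("i am pleased to hear it", "je suis heureux de l entendre"),
        ("hear hear", "bravo"),
    ]

    # For each English word, count the French words it appears alongside.
    cooccurrence = defaultdict(Counter)
    for english, french in aligned_pairs:
        for e_word in english.split():
            for f_word in french.split():
                cooccurrence[e_word][f_word] += 1

    # Turn raw counts into rough conditional probabilities P(french | english).
    def translation_probabilities(e_word):
        counts = cooccurrence[e_word]
        total = sum(counts.values())
        return {f_word: round(n / total, 2) for f_word, n in counts.items()}

    print(translation_probabilities("hear"))
    # "bravo" outscores "entendre" for "hear", because the parliamentary
    # formula "Hear! Hear!" shows up as "Bravo!" in the French record.

Scaled up from three invented sentences to millions of real ones, counts like these are what allowed the statistics, rather than any grammar, to decide how a word should be translated.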

Most Canadians have probably never heard the word Hansard. But, in 1988, everyone working in machine translation anywhere in the world knew all about it. For all that good fortune, IBM downplayed the specificity of Hansard in its landmark paper about the breakthrough, in which it swapped the facts for speculative historical fiction. “Had the early Canadian trappers been Manchurians,” researchers wrote, “later to be outnumbered by swarms of conquistadores, and had the two cultures clung stubbornly each to its native tongue, we should now be aligning Spanish and Chinese.” The anonymous civil servants who provided the indispensable data IBM had actually used were not mentioned.

Christine Mitchell
Christine Mitchell is an academic researcher and a screenwriter.
Raymond Biesinger
Raymond Biesinger has drawn for The Economist, GQ, and New Scientist.