When Tory prime minister David Cameron called a referendum in 2016 on whether the United Kingdom should leave the European Union, he had no reason to think he was making a mistake. After all, the polls were clear: Remain voters would overwhelm the Leave forces at the ballot box, and Cameron would be free of a political irritant. As the campaign drew to a close on June 23, 2016, the last round of polls before voting day had the Remain side up by anywhere between two and ten points. Cameron’s bet was about to pay off.
Instead, the Leave side won by four points—and turned the UK upside down. Cameron resigned, the value of the pound fell to a thirty-one-year low, and pollsters everywhere were called out for having failed to see any of this coming. From the betting markets to the biggest polling houses in Britain, nearly everyone had gotten it wrong. Everyone, that is, except for a tiny Ottawa-based company called Advanced Symbolics.
The company, founded in 2014, claims to produce forecasts that are more accurate and reliable than those that come from traditional forms of market research. If it’s right, it might be on the verge of disrupting a multi-billion-dollar global industry—and changing the way we look at polling. Advanced Symbolics’s patented artificial intelligence, named Polly, collects millions of social-media messages, which are then fed through a proprietary algorithm that monitors how events happening in real time are being talked about. The algorithm then compares its findings to patterns Polly has uncovered in the past. In a sense, Polly resembles those computers that collect all the master chess games of history and, on the basis of that aggregated knowledge, anticipate player behaviour to win. In this case, Polly spits out numbers that express the momentum, or lack thereof, behind various political campaigns, based on an analysis of the internet chatter those campaigns generate.
In terms of the dynamics of the Brexit campaign, the turning point for Polly came after Labour MP Jo Cox—a politician the Guardian described as a “passionate defender” of Remain—was assassinated by a far-right extremist a week before the vote was to take place. “As soon as that tragedy happened, Polly flipped,” says Kenton White, the company’s cofounder and chief scientist. “She changed her mind.”
Until then, Polly’s projection had been in line with the majority of pollsters, who expected the Remain side to pull out a close victory. But, in the aftermath of Cox’s murder and the suspension of both the Remain and Leave campaigns, Polly broke from the pack. Where others interpreted the relative silence of the Leave campaign after the shooting as a sign that its support was waning, Polly instead recognized a pattern that had already played out three times during the campaign—online pro-Leave sentiment appeared to flag, only to surge back the following week. That’s why Polly suddenly saw the possibility of Leave winning on June 23. “She helped me see something that everyone else was missing,” White says.
The stakes in Polly’s about-face were high for White. If Remain ended up winning, the business he was trying to build around the AI would have been dealt an embarrassing, and very public, setback. The company’s CEO, Erin Kelly, had been invited to appear on CBC Radio’s The Current the day after the vote and would have needed to explain why they were wrong. “You can imagine the pressure we’re under,” White says. “I had a very sleepless night watching the results. But we were right.”
It wasn’t the first time: Polly had correctly predicted the outcome of the 2015 Canadian federal election. And it wouldn’t be the last. A few months after its Brexit call and months ahead of the 2016 presidential election, Polly noticed rising support for Donald Trump among Black and Hispanic voters, something most pollsters missed. Polly also figured out Hillary Clinton’s share of the popular vote within a percentage point (prediction: 48.9; final tally: 48). Polly has since correctly predicted the outcomes of the 2018 US midterm elections (its “best case” scenario for Democrats had them winning 231 House seats; they won 235) and the 2018 Ontario provincial election. Polly isn’t always right. It picked Clinton to be the next president of the United States. And, in the Ontario election, while its forecast was within two seats for Doug Ford’s Progressive Conservatives, it predicted that Kathleen Wynne’s Liberals would be wiped out. Instead, they ended up with seven seats.
Still, these are the sorts of misses that many traditional pollsters would happily accept, given that they’ve been having a much harder time with their own predictions of late. In 2012, the polls predicted that Danielle Smith’s Wildrose Party would finally end the four-decade Progressive Conservative dynasty in Alberta, only to have Alison Redford’s PCs walk away with sixty-one of the eighty-seven seats in the province. A year later, major pollsters predicted that Adrian Dix’s New Democrats would knock off Christy Clark’s governing BC Liberal Party, which was in the midst of a comparatively modest twelve-year stretch in power. Instead, Clark increased her party’s seat total by five and formed yet another majority government. In 2014, all polls had Tim Hudak’s PCs and Kathleen Wynne’s Liberals in a dead heat in Ontario, only to have Wynne emerge with more than twice as many seats. And, in the 2018 Quebec election, most polls had the Coalition Avenir Québec with a one- to five-point lead over the governing Liberals. Instead, the CAQ ended up winning by over twelve points.
These results are a problem for the polling industry and its claim to accurately forecast elections. But they should cause the rest of us to fret as well, because for all the criticism of polls and the people who do them, they are a key source of the kind of information that’s the lifeblood of our body politic. Their work helps voters make decisions, political parties create policy, and journalists decide which stories they should cover. If traditional pollsters can’t be relied on to do it accurately, it may open the door for someone—or something—that can.
Opinion polling as we’ve come to know it can be traced back to an American advertising executive named George Gallup who, in the 1930s, decided to take the method he had developed to measure consumer preferences and apply it to voting intentions. His approach revolved around asking the right sample of people—people, that is, who represented the broader population whose moods he was trying to read. “Gallup had assured himself,” according to a 1948 Time profile of him, “that polls on toothpaste and politics were one & the same.”
His breakthrough came in 1936, when he correctly predicted that Franklin D. Roosevelt would be reelected, beating a poll run by The Literary Digest that had instead tipped Republican nominee Alf Landon. The general-interest weekly, which was famous for having called every presidential election since 1916, solicited responses from millions of people, using postcards with addresses drawn from telephone directories, vehicle-registration lists, and magazine-subscriber rolls. The Literary Digest’s poll, in other words, was overwhelmingly slanted toward the middle and upper classes, and ignored lower-income voters by default: a fatal miscalculation in a Depression-era election when economic fears were preeminent.
Gallup and his team, meanwhile, conducted in-person and mailed-out surveys of approximately 50,000 people who had been randomly selected to represent the views of the country. Gallup’s methods were far from perfect—they failed to account for Black voters—but they showed how a sample of citizens could provide a reliable measurement of public opinion. Badly discredited by the election result, The Literary Digest folded in 1938. By the 1940s, Gallup polls were running multiple times a week in newspapers across America.
But Gallup failed to consider that his polling data could do more than just reflect what he called the “people’s voice”—it could sway it. Soon, politicians (with Roosevelt being one of the earliest adopters) began using polls not only to influence and persuade voters about policies but also to win elections. Gallup-style political polling first came to Canada in the 1960s, when the Liberal Party hired John F. Kennedy’s official pollster, Louis Harris, to work on its campaign. Harris, who had helped Kennedy beat Richard Nixon in the 1960 presidential election, brought his comparatively advanced phone-based methods north and helped Lester Pearson’s Liberals defeat John Diefenbaker’s Tories in 1963. Harris recruited 500 people to make thousands of phone calls across the country in what the Canadian Press described in 2013 as “the most elaborate public-opinion research project in Canada’s political history.”
By the 1960s and 70s, polling was a fixture in political campaigning in the United States and Canada. “Early skepticism that a sample of respondents could say anything about the opinions of millions,” noted Duke University political scientist D. Sunshine Hillygus in a 2011 paper, “gave way to a belief in the scientific basis of probability samples.” For political parties, which had traditionally relied on a comparatively crude blend of anecdata, expert opinion, and intuition to inform their strategic directions and policy choices, this emerging science was a revelation. For the public-opinion research industry, it was nothing short of revolutionary—and a growing range of clients turned to researchers for answers about how people were feeling or thinking about a given issue, product, or trend. Telephone surveys quickly replaced door-to-door and mail-based approaches as more households got landlines, making it fairly simple to conduct a poll that delivered reliably accurate results. Darrell Bricker, who got into the business in the 1980s, remembers how much easier it was when he started. “We had response rates in the 80 percent range, so it was like falling out of a boat and hitting water—it was no trouble to get a good random sample of Canadians, and good predictions about election outcomes were pretty routine.”
The sample that Bricker, now CEO of Ipsos Public Affairs, talks about is the bedrock of all public-opinion research. It’s what allows pollsters to accurately predict the opinions of a large group of people—say, 37 million Canadians—by talking to a tiny slice of them. Éric Grenier, now the CBC’s official polls analyst, once used the analogy of a pot of soup to explain how and why a random sample can produce accurate polling data. “You don’t need to eat the entire pot to know what it tastes like,” he wrote in the Globe and Mail. “The odds of getting a spoonful that is completely unrepresentative of the entire pot of soup is low—and it is the same with polling samples.”
The Goldilocks point for a population sample is approximately 1,000 people. Survey fewer people than that, and the margin of error—that is, the odds you’ll get too much of one ingredient—increases substantially as the sample size shrinks. The math isn’t any better the other way around; for example, if you talk to twice as many people, you won’t reduce the margin of error by half. And, because you can never completely eliminate that margin of error, it doesn’t make sense for companies doing polling to spend far more money in the pursuit of a relatively tiny increase in accuracy—especially when opinions can, and do, change.
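The arithmetic behind that trade-off comes from the standard margin-of-error formula for a simple random sample, roughly z·√(p(1−p)/n). A minimal Python sketch (the function name is mine, not any pollster’s) shows why a sample of about 1,000 hits the sweet spot:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95 percent confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000, 4000):
    print(f"n = {n:>4}: +/-{margin_of_error(n) * 100:.1f} points")

# n =  500: +/-4.4 points
# n = 1000: +/-3.1 points
# n = 2000: +/-2.2 points  <- double the respondents, nowhere near half the error
# n = 4000: +/-1.5 points
```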
This is the science behind polling. But there’s plenty of art, too. Pollsters must decide which questions to ask and how and when to ask them. In the 1960s, Louis Harris avoided queries requiring “yes” or “no” replies and got better data as a result. Pollsters also have to compensate for sample demographics that aren’t representative. That’s done through “weighting.” Say you end up with a national poll where 53 percent of respondents are women and 47 percent are men. That’s out of line with the census, which says women make up 51 percent of Canada’s population. If you’re the pollster, you’d need to weight the poll to adjust it to that reality—and decide how best to do it.
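Here is a minimal sketch of that weighting step, using the article’s own 53/47-versus-51/49 example (the variable names are invented for illustration):

```python
# Post-stratification weighting for the example above: the sample is
# 53 percent women and 47 percent men; the census says 51/49.
sample_share = {"women": 0.53, "men": 0.47}
census_share = {"women": 0.51, "men": 0.49}

weights = {g: census_share[g] / sample_share[g] for g in sample_share}
print(weights)  # women: ~0.962, men: ~1.043

# Each woman's answer now counts for slightly less and each man's for
# slightly more, so the weighted sample matches the census on gender.
```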
This fusion of art and science is what makes for a good poll—and a good pollster. In a sense, pollsters are like chefs all trying to cook the same dish. They may use similar ingredients and techniques, but they all have their own way of arriving at the finished product. And, like chefs, they all believe their recipe is the right one. “Every pollster that I’ve talked to—and I’ve talked to almost every one of them in Canada—thinks their numbers are the best,” says Philippe J. Fournier, an astronomy and physics teacher at Montreal’s Cégep de Saint-Laurent whom Maclean’s recently called “the ultimate oracle of Canadian elections.”
The problem is that pollsters are having a much harder time getting their spoons into the pot. Over the last two decades, Canadians (like people in most countries) have either abandoned or stopped answering the landlines that used to be at the heart of the polling industry. According to the Pew Research Center, telephone-survey response rates in the US dropped from 36 percent in 1997 to just 6 percent in 2018. Those who do still answer tend to come from a narrower subset of demographics (generally speaking, older and less culturally diverse), which makes it far more difficult to get that all-important random sample—and far more likely that the poll’s findings will be wrong.
These sorts of technological and cultural changes raise important questions for the industry. “When 91 percent of people hang up on you,” says Research Co. president and former Angus Reid Public Opinion pollster Mario Canseco, “how is that going to be credible?” That’s why pollsters like him have turned to new technologies to build their samples. Those include online panels, where people volunteer to complete surveys (and are often given some form of compensation), and Interactive Voice Response (IVR), which uses a telephone keypad or voice recognition to interact with callers via preprogrammed questions.
The appeal of online panels and IVR is clear: they’re far cheaper than live telephone polls. But their cost effectiveness can come at the expense of a proper sample—one that IVR pollsters adjust for by effectively increasing the importance of the responses from the demographics they didn’t get enough of. That’s often where the trouble starts. “Those IVR polls where you have five respondents under the age of forty and then weight them up twenty times? That’s how you end up with those horrible election prognostications,” says David Herle, a partner at a market research firm called the Gandalf Group.
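Herle’s complaint can be made precise with a standard diagnostic, Kish’s effective sample size; the numbers below are hypothetical, chosen only to echo his example:

```python
# Kish's effective sample size: n_eff = (sum of weights)^2 / (sum of
# squared weights). A hypothetical 1,000-person IVR poll where 5
# respondents under forty are weighted up 20x and the other 995 get 1x.
weights = [20.0] * 5 + [1.0] * 995

n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
print(round(n_eff))  # ~400: the "1,000-person" poll behaves like one of
                     # 400, with a correspondingly wider true margin of error
```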
IVR was partially responsible for perhaps the worst prognostication in Canadian polling history. On October 13, 2017, the Calgary Herald and Calgary Sun published the results of a poll produced by Mainstreet Research, in partnership with Postmedia, that found Bill Smith ahead of incumbent Naheed Nenshi by thirteen points with just days until Calgarians voted on their next mayor. The poll was the latest in a series of increasingly controversial polls by Mainstreet, all of which attracted immediate criticism from other pollsters and academics for results that had Smith polling well ahead among women and young people—an outcome few thought possible. As it turned out, that was a reflection of the problems with Mainstreet’s IVR-derived sample and its failure to contact enough people with cell phones.
But Mainstreet CEO Quito Maggi and then executive vice-president David Valentin managed to turn the disagreement into a major undercard attraction, sparring with academics and industry professionals on Twitter and the radio, even threatening legal action against Nenshi’s campaign team. Just a few days before the election, Valentin said that Mainstreet planned on “singling people out” after the votes were counted for what they’d said about their work. After a poll was released that contradicted Mainstreet’s forecast, Maggi tweeted, “If polling were poker, this is the part where I would go all in; I would bet $10 million we’re closer than that pseudo poll today.”
He’s lucky nobody took him up on the bet. Nenshi won with a 7.62 percent margin of victory—Mainstreet whiffed on his projected share of the total vote by a staggering 11.98 points and on Smith’s by 8.1. “All the polls were awful in the end,” Maggi tweeted later that night. “Ours worst of all.” Here, at least, he was right. While Mainstreet’s polling miss was many times larger than its supposed margin of error, the two other polls released prior to the vote—one from a pro-transit group, the LRT on the Green Foundation, and another commissioned by a group of academics and carried out by Forum Research—missed badly in the other direction, with both suggesting Nenshi had a double-digit lead.
Enough factors combined—Maggi’s braggadocio, for example, and politicians overplaying positive polling results—to make the fiasco a unique situation. But, to many, the fallout of the Calgary election was evidence that pollsters and their clients were getting what they paid for with these less-expensive techniques. Not everyone in the business, however, is quite so bearish on them. “There are bad versions of all types of polls,” Ekos Research’s president and founder, Frank Graves, wrote in a blog post a few months after the 2011 federal election. “There are bad phone surveys, bad mail-out surveys, and yes, bad IVR surveys.” And, when it comes to which approach is best, the results are a mixed bag. In some cases, like the 2016 Manitoba and 2017 BC provincial elections, firms that relied on online polling methods came fairly close to the final result. But in others, like the 2018 Ontario and 2019 Alberta elections, pollsters that used IVR surveys performed best.
Pollsters are quick to point out that high-profile misses like Brexit or the 2016 US presidential election, much less fiascos like Mainstreet’s behaviour in the Calgary mayoral race, are exceptions rather than the rule. Nate Silver, whose successful predictions (and even more successful website, FiveThirtyEight) have made him the world’s most famous pollster, noted in a 2018 piece on the state of his industry that the polls were “actually a bit more accurate” in 2016 than they were in 2012, when they underestimated the size of Barack Obama’s margin of victory. “The media narrative that polling accuracy has taken a nosedive is mostly bullshit,” he wrote. “Polls were never as good as the media assumed they were before 2016—and they aren’t nearly as bad as the media seems to assume they are now.”
But it’s still hard to shake the feeling that the industry is having a hard time getting the kind of results it wants, as competitive pressures, cultural changes, and technological limitations conspire to make pollsters’ jobs more difficult than they’ve ever been. “Are we measuring reality? Are our methods able to capture what people actually think and what people are doing? I’m still confident we can, but it’s a constant gut check,” says David Coletto, one of the founding partners and the CEO at Abacus Data in Ottawa. In a world where trust in expertise and faith in institutions continue to decline, those questions will only get more urgent. The answers, meanwhile, will have implications for just about everyone.
If pollsters are chefs, Polly is a whole new way of cooking—the molecular gastronomy of public-opinion research. Advanced Symbolics’s Kenton White came to this realization over the course of the last decade: the soft-spoken Bay Area native and former professor of computer science at the University of Ottawa sold his previous company, a startup focused on gaming as an e-learning tool for businesses and industries, in 2009 and went in search of a new challenge. His long-standing interest in science fiction, and specifically Isaac Asimov’s 1951 book Foundation—its key character, Hari Seldon, is a “psychohistorian” who measures the behaviour of large populations—drew him to the idea of using statistical physics as a forecasting tool. “I started wondering, with the advent of Twitter and Facebook, if we could start to use all of that to try to predict human behaviour.”
To do that, he developed conditional independence coupling (CIC), an algorithm that can build random and representative samples of a population from online networks such as Twitter, Instagram, and Facebook—and to do it at a fraction of the cost of traditional pollsters. According to White, CIC needs at least 10,000 people to be talking about a topic before it can begin tracking the conversation by age, gender, geographical location, and other demographics (CIC can also generate samples with upwards of 100,000 people). After the algorithm creates the sample, Polly then combs through it to figure out what people are thinking and saying—and how that might weigh on their decisions, such as how they plan to vote. To get the full context around a given conversation, and to refine its understanding of the issues being debated, Polly cross-references—and fact-checks—social-media posts against a range of data points, including news articles, census data, academic journals, Wikipedia entries, and websites.
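CIC itself is patented, and its internals aren’t reproduced here. But the goal White describes—a social-media sample whose demographic mix matches the census—can be illustrated with ordinary quota sampling. Everything in this sketch (the quotas, the inferred demographics, the function name) is a simplified stand-in, not the actual algorithm:

```python
import random

# Toy quota sampler: walk a shuffled pool of social-media users whose
# gender and age band have been inferred upstream, keeping each user
# only while that demographic cell is still under its census quota.
CENSUS_QUOTAS = {("women", "18-34"): 140, ("women", "35+"): 370,
                 ("men", "18-34"): 150, ("men", "35+"): 340}  # per 1,000

def build_panel(users, quotas=CENSUS_QUOTAS):
    """users: iterable of (user_id, gender, age_band) tuples."""
    pool = list(users)
    random.shuffle(pool)
    panel, filled = [], {cell: 0 for cell in quotas}
    for user_id, gender, age_band in pool:
        cell = (gender, age_band)
        if cell in quotas and filled[cell] < quotas[cell]:
            panel.append(user_id)
            filled[cell] += 1
    return panel
```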
But it’s what Polly does next that has, in part, given it the edge: it studies those samples historically. By listening to what people are saying about a subject, or a range of subjects, in the present, and then testing that against the level of online engagement from, say, several months or even years ago, Polly can anticipate how a population will react in the future. “We can go back and look at how they were talking about an election from 2015,” White explains. “We know the results of that election, and they’re the same people in the sample, so we can understand how a particular conversation back in 2015 might be interpreted today.”
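In machine-learning terms, that amounts to supervised learning on campaigns whose outcomes are already known. The sketch below is a deliberately crude stand-in, with invented features and toy data, not Polly’s actual model:

```python
# Fit a classifier on engagement features from past campaigns with known
# outcomes, then score the live one. Features (invented for illustration):
# [pro-side share of mentions, week-over-week surge in engagement].
from sklearn.linear_model import LogisticRegression

X_past = [[0.42, 0.18], [0.55, 0.31], [0.38, 0.12], [0.61, 0.44]]
y_past = [0, 1, 0, 1]  # 1 = the tracked side won

model = LogisticRegression().fit(X_past, y_past)
print(model.predict_proba([[0.52, 0.35]])[0, 1])  # win probability today
```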
White is clear to draw a line between what his company is doing and the kind of microtargeting made infamous by Cambridge Analytica, the British political-consulting firm that may have materially influenced the outcome of the 2016 US election in favour of Trump. “We’re very different,” White says. “We’re not trying to use the platform to persuade people or sway them. We’re using it to measure how people feel.” More importantly, Polly is incapable of interacting with those people individually, or of allowing a client to do so, because of the way it treats and protects data. First, Advanced Symbolics scrubs the data of identifying information. Then it ensures the data has something called “k-anonymity”—a technique used by census departments around the world that protects against reidentification through linkages to other data sets. Finally, it uses another technique borrowed from the world of cryptography called “differential privacy,” which introduces “noise” into the data to guard against reidentification. “We’re dealing with aggregated data, not individuals,” White explains. “I don’t know who’s in my sample.”
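K-anonymity and differential privacy are published techniques even if Advanced Symbolics’s exact pipeline isn’t; a toy version of the differential-privacy step, with an invented query and numbers, looks like this:

```python
import numpy as np

def private_count(true_count, epsilon=0.5, sensitivity=1.0):
    """Answer a count query with Laplace noise calibrated so that adding
    or removing any one person barely changes the output distribution
    (the standard epsilon-differential-privacy mechanism for counts)."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g., "How many users in the sample posted about Leave today?"
print(round(private_count(12_480)))  # true value obscured by a few units
```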
Given that it has a patent on its technology, Advanced Symbolics takes an approach to gathering information that is, by definition, unique. But even if you expand that definition to include any use of AI in the creation of market research or public-opinion data, it’s still in a very small crowd. Some market-research firms are using AI to help with data-processing tasks and to free up their human researchers to unpack what that data means. They see it as a way to augment some aspect of their current business model, not replace it.
Political polling is a small part of the workload at Advanced Symbolics. Like most firms, it also produces market research for a wide range of corporate clients—businesses looking to sell a new product or service or understand some aspect of their customer base’s habits and inclinations. But elections have offered the fledgling company the best opportunity to attract attention and prove its technology works. Like all AI, Polly can learn, which means, as more data becomes available, it becomes better at making connections and predictions. In the recent Ontario provincial election, Polly’s analysis included hundreds of Indigenous social-media users, many of whom are missed in traditional polls because they answer the phone even less than other Canadians do. Polly can also be a sophisticated reader. In preparation for work it did with the Public Health Agency of Canada in 2018 to identify social-media clues that precede spikes in suicide, the team, White says, downloaded tweets from the Bell Let’s Talk campaign and trained Polly to distinguish between users who were sharing their own stories and those who were trying to raise awareness.
The fact that Polly isn’t actually asking people questions the way conventional pollsters do seems like a major stumbling block. But it might be an advantage. “People feel uncomfortable saying what they really think—even to pollsters,” White says. He’s referring to social-desirability bias, the theory that responses to a survey can be shaped by how people think those responses will be viewed by others. A widely shared opinion is more likely to be expressed than a controversial or unpopular one, all other things being equal, and there’s evidence that this played a role in the Brexit campaign—telephone polls consistently showed a higher level of support for Remain than anonymous online ones did.
Instead of asking questions, Polly filters the social-media mentions of candidates and parties through its understanding of what are called “latent factors”—things like ideology, political knowledge, or consumer confidence—which may cause us to say one thing and do another. “By getting down to the latent factors,” White says, “we’re cutting through a lot of those cognitive biases that interfere when we try to answer questions truthfully.” This, the company argues, makes Polly a more objective observer. “You’re getting public-opinion research that is as accurate as a drug trial,” says Erin Kelly, Advanced Symbolics’s CEO.
Fournier, who runs a Nate Silver–esque site called 338Canada, is intrigued by the idea of using social-media data to do public-opinion research. Pollsters across North America continue to struggle to get urban youth to participate in their surveys, even with the shift to more user-friendly online options. But he’s wary about the risk of conflating online noise with predictive signal. “It all depends on the data. If you have data that’s bad, the artificial intelligence will not get you better results.”
Bricker, whose firm incorporates social-media listening in some of its work, is even more skeptical about the utility of Twitter data in the creation of a representative sample. “About 20 percent of Canadians actually tweet, so that means 80 percent of the population is being left out of the sample. Now, they may be able to come up with some sort of modelling that can deal with that, but unless they’re prepared to be transparent about it, and unless they’re able to do it election after election after election, I would be dubious.”
But, of course, with telephone response rates dropping as low as 5 percent in Canada, the same concerns about people being left out of samples could be directed at traditional polling. Though response rates don’t look like they’ll go up any time soon, the number of social-media users grows by the day. As for the idea that Polly is an inscrutable black box, White says he’s explained the AI in the two papers he’s published on it. “It may not be easy, if you come from a social-sciences or polling background, to understand. And I appreciate that. It took me a couple of years of full-time research to figure it out.”
Kelly recounts a scene from last spring when someone from a venture-capital firm looked at White’s work and couldn’t figure it out. “I said, ‘Don’t be concerned—you need a PhD in mathematics to understand this algorithm.’ So they brought in a guy with experience in market research, AI, and advanced algorithms, and the guy said, ‘Oh, my God,’ because he realized we’d made a major scientific breakthrough. That guy is now mentoring us.”
If traditional pollsters aren’t sold on Advanced Symbolics’s technology, they’re even less willing to buy into the idea of a tiny startup coming up with a tool that has somehow eluded much larger and wealthier firms. “At some point, a bringing together of social media and new technologies is going to replace polling—at some point,” says Ipsos’s Darrell Bricker. “Do I think a little firm in Ottawa that claims it’s picking elections correctly is the one doing it? Count me in the skeptical corner.”
That skepticism—and perhaps that fear—is understandable. If White is right, Polly should be able to do polling jobs more effectively and at a lower cost to clients. Such is the promise of AI and such is the challenge it poses, more generally, to incumbents in any number of industries where the technology is currently being tested. But the biggest challenge of all might be the fact that while the status quo is the ceiling for those incumbents, it is the floor for Polly. Traditional pollsters are trying to hold the line, using new techniques to patch up the industry as they still understand it: a field struggling with declining response rates and the impact those rates are having on results. Polly, according to White, has not only lapped its rivals but is accelerating with each passing day. “Nowadays, every election creates a huge signal on social media,” says Shainen Davidson, one of the AI scientists who works with Polly. “That newer information is, in some ways, a lot more valuable.”
The closely contested federal election in October will put Canada’s pollsters and their technologies of choice to the test. And, if recent results are any indication, Polly’s odds of upstaging industry Goliaths look pretty good. But don’t expect rivals to go down without a fight. Their behaviour during the 2018 Ontario provincial election, when Advanced Symbolics served as TVO’s official pollster, certainly speaks to that. There was plenty of online criticism of Polly’s methods; someone even threatened to report Kelly to Elections Canada for allegedly breaking election laws. “People in the industry know there’s a problem,” she says. “But, as with any disruptive situation, they don’t know what to do about it. So they’ve gone into protectionist mode.”
This isn’t exactly new for the polling industry. So-called pollster fights have broken out a number of times after election campaigns, and it’s not unusual to hear one pollster criticize another’s work. During the 2011 Ontario election, for example, Ipsos’s Darrell Bricker and John Wright published an open letter addressed to the province’s journalists that took aim at what they saw as a growing number of unprofessional practitioners. “Some marginal pollsters count on your ignorance and hunger to make the news to peddle an inferior product. Others are using your coverage to ‘prove’ that their untried methodology is the way forward for market research in Canada,” they wrote. “Instead of being their own biggest skeptics (which is what our training tells us to be), they’ve become hucksters selling methodological snake oil.”
But Polly could be their fiercest battle yet. After all, this isn’t a garden-variety dispute between pollsters over preferred methodology. It’s a threat to the way they’ve done business and to their ability to continue doing it. Just as George Gallup came along and rewrote the way people understood (and conducted) public-opinion research, so could Polly. And while competitors could theoretically incorporate AI into their own work, Kelly says her company, already active in fifteen countries, has an obvious advantage. “They’re a little bit late to the game,” she says. “And I think they know it.”
March 31, 2021: A previous version of this article stated that Mario Canseco was a former pollster for the Angus Reid Institute. In fact, he was a pollster for Angus Reid Public Opinion. The Walrus regrets the error.