At nineteen, I moved to New York City for my first magazine internship. I was hired as a fact checker, and at the time, I knew little about the practice; it hadn’t yet gained the popularity it now enjoys, with regular headlines about “Fact-Checking the President in Real Time.” I interpreted fact-checking literally: journalists report facts, sometimes they make errors, and fact checkers clean everything up before the story is published.
This was roughly the process that awaited me at Harper’s: every day for three months, I sat with three other young journalists, meticulously researching sentences that would appear in the upcoming issue. Once we had investigated a fact to satisfaction, we pored over our work with an imperious senior editor, who would interrogate us about nuances and details. I was thrilled by the rigour of the process. When our work was done, the published product would be empirically incontestable. Over the next few years, I worked as a freelance fact checker for various publications, and I eventually became head of research at The Walrus for two years, until 2019—a job in which I took on a similar role to that of the imperious Harper’s editor.
I learned that the steps to fact-checking at The Walrus are exactingly methodical. Fact-checking a story is different from reporting it from scratch: you start with a finished product, working backward to confirm its accuracy. Before an article can be published, the writer provides what’s called a research package—typically an electronic file containing the documentation they used in their reporting, audio and transcripts of conversations with sources, and a draft of the story in which every statement is footnoted with reference to the source that should confirm it. The fact checker takes this material and starts by isolating each fact from the story (typically with coloured pens and highlighters), then verifying it against the relevant sources, which could be scientific studies, experts, or the people directly involved. Whenever possible, the magazine defers to primary sources. The number of daily new COVID-19 cases in Montreal, for example, would be confirmed not by reading news reports but by going directly to the official tally on Quebec’s health ministry website—or by calling the city’s public health authority.
No fact is too minor to be checked: celebrities’ names, basic mathematical statements, or even that winter in the northern hemisphere ends in February. (Actually, depending on whether one uses the astronomical or meteorological definitions of the seasons, winter could end in March.) Every article—and I mean every article, including this one—will require adjustments, whether it’s a small change in date or a major interpretative clarification. Once these corrections are made, the story is ready to be published, and we can be assured that it is unshakeable. At least, that’s the idea.
Of course, few people outside of journalism know about traditional fact-checking. Even within the industry, the practice has become increasingly rare over the past decade of media layoffs and budget cuts. But it’s the approach I’m most familiar with: behind-the-scenes and meticulous, with a touch of pretentiousness. This standard was established by Time and The New Yorker in the early 1900s, when magazines were most concerned with protecting themselves from public criticism and libel lawsuits. (Back then, fact-checking was a woman’s job. According to the Columbia Journalism Review, writers such as gonzo journalist Tom Wolfe saw The New Yorker’s fact-checking department as “a cabal of women and middling editors all collaborating to henpeck and emasculate the prose of the Great Writer.”)
This kind of fact-checking, however, wasn’t built for the immediacy and viral spread of online news. Amid the growing phenomenon of “fake news,” journalists needed something more reactive. The term fake news became widely used during the 2016 US presidential election, when the internet was flooded with inaccurate information. A BuzzFeed News investigation at the time showed that many of these deliberately false headlines came from an unexpected source: content writers in Macedonia were profiting off the advertising revenue from the increased traffic on their sites.
False content online has only multiplied over the years. But the fake news designation has also been used to serve all kinds of purposes—including, increasingly, to disparage real news reporters—so most experts now avoid the term. Instead, researchers usually talk about disinformation, which is purposefully false, and misinformation, which is unwittingly false (either because the publisher made a mistake or because the person sharing the content did). As false content spreads through social media networks, it can oscillate between the two, and it can manifest in various forms, including memes, tweets, or “imposter” content made to imitate real news stories. Last summer, for example, a list of advice—some accurate, some dangerously inaccurate—about COVID-19 prevention made the rounds on social media, falsely attributed to various health officials including BC’s Bonnie Henry.
We now consider disinformation a defining part of the contemporary experience. In 2016, Oxford Languages chose post-truth as its word of the year. The essential characteristic of our age, the accompanying press release stated, was the loss of a distinction between truth and feeling; we were entering an era in which “objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”
Governments and social media companies have employed various strategies to address the threat of disinformation, including closer scrutiny of political ads, flagging posts as “inaccurate,” or tweaking algorithms to favour reliable outlets. But these efforts have had little effect on the widespread production and sharing of disinformation.
Journalists and media organizations, on their end, have championed fact-checking as the silver bullet—not the prepublication kind done at Harper’s or The New Yorker but the public-facing kind done by PolitiFact or the Washington Post: instead of verifying stories written by an outlet’s own reporters, fact checkers apply the same filter to public claims, such as politicians’ statements or other outlets’ reporting, then publish the results. According to this interpretation, to fact-check someone’s claim is to find all the relevant primary sources (budget documents, election results) and point out, in a published article, any errors in their declaration. Instead of printing only what one knows to be true by virtue of having fact-checked it, journalists explicitly call a person or organization wrong in order to correct the record after the fact. In this sense, the most famous fact checker of our time is reporter Daniel Dale, who rose to fame via the ambitious goal of itemizing the lies told by Donald Trump throughout his presidency (a total of 30,573 false and misleading claims, according to the Washington Post).
In 2014, there were fewer than sixty initiatives around the world focused exclusively on checking others’ claims, according to the Duke Reporters’ Lab; today, there are more than 300. The growing instinct to fact-check isn’t particular to journalists either: it’s part of a growing cultural movement emphasizing revision and debunking. Popular podcasts such as Revisionist History and You’re Wrong About ask us to change our understanding of well-known stories, while tell-all memoirs promise to give us the “real story” about crime, government misconduct, and our favourite celebrities.
Like many journalists, I used to subscribe to what philosopher Neil Levy calls the naive view of fake news: that today’s problems of political polarization and extremism are caused at least in part by the spread of inaccurate information, and that “careful consumption and fact-checking can eliminate the problem.” According to this view, people who share false content do so because they believe it to be true. Everyone means to share real news—they are simply making a mistake when they don’t. If this were true, then by simply correcting the record, we would make all of our post-truth problems go away. Instead, those concerns have grown, and I now wonder: What if it is precisely our manner of clinging to the idea of “facts” that has aggravated the problem?
I’ve now come to believe there’s another, more salient characteristic of our age, beyond the post-truth designation. It is a relic of the past few centuries of rationalism in the Western world: the idea that there can ever be a definitive distinction between fact, on the one hand, and everything else, on the other. We maintain that journalists—our de facto heroes in the fight against mis- and disinformation—are capable of distilling truth from the murky waters of interpretation, opinion, and ambiguity in such a way as to present the only true reality of the world. Implicit in the presentation of 2016 as the year after which facts needed to be differentiated from their “alternatives” is the idea that it is actually always possible to do so—that we can know immediately and with absolute certainty, for example, that homemade cloth masks provide reliable protection against COVID-19. In theory, it may seem easy enough to agree on whether a statement is true: simply check whether all available evidence supports the claim or at least does not refute it. But, in practice, we struggle to agree on what makes a fact and how to present it—even as we agree on the importance of being able to do so. We intuitively maintain that opinion and truth exist in different realms, yet removing interpretation entirely from factual reporting is impossible.
Today, I believe the naive view of facts has only fuelled the rise of disinformation and polarization. Fact check has become a political signal such that journalists’ very attempt at neutrality ruins any chance of communicating with those who don’t already believe them. This is not just a media industry problem; it is a pressing issue with consequences for everyone hoping to engage in productive dialogue. Though journalists have clearly invested in fact-checking, trust in news media has continued to erode, and researchers have found that exposure to contentious media discussions about fake news decreases trust further. According to Gallup’s annual governance poll, by 2020, 60 percent of Americans said they trusted mass media “not very much” or “not at all.” This problem cannot be solved only by fact-checking Trump’s press conferences: those who already believe Trump have no reason to accept our fact checks. Without a trusted forum for conversation, we lose the ability to establish a common ground from which to converse and debate; we lose the ability to understand or negotiate with one another at all.
Since 2016, newspapers have begun devoting columns to fact-checking the tweets, campaign promises, and speeches made by politicians and pundits. This public fact-checking has become a way for daily outlets to gain credibility and readership as their ad and subscription revenues disappear. Attach the term fact check to the headline of any news article and it has a similar effect to adding “Based on a true story” to a movie poster: it demands credulity while promising a touch of drama.
Prepublication fact-checking, on the other hand, is time consuming, laborious, and largely invisible. Due to budget and time constraints, newspapers typically do not independently fact-check their own articles. Podcasts, radio shows, and TV networks also rarely fact-check their work. Plummeting ad revenues have pushed many magazines to shutter or dramatically cut their fact-checking departments. These changes are concerning for the state of the industry. When I began working in journalism, I knew what it meant for an article to be fact-checked: the same established standards of sourcing and methodology applied. Today, as the term fact check is adopted by more publications, it is used to describe a growing number of practices that don’t necessarily conform to the same definition.
The most rigorous kind of public fact-checking is conducted by members of Poynter’s International Fact-Checking Network, a partnership of media organizations created in 2015 to unite under methodological standards and a code of principles. This includes PolitiFact, the Pulitzer-winning website that rates claims, such as politicians’ statements, based on their accuracy. In the past few years, even as many news organizations have closed their doors, the IFCN’s membership numbers have skyrocketed.
Members of the IFCN must be public-facing and must have strict principles for transparency, neutrality, and reporting, says Cristina Tardáguila, the network’s associate director. But only about ninety fact-checking organizations, out of the hundreds in existence, have made the cut. And there’s nothing stopping other publications and public figures with lesser standards from publishing their work under the fact check label, riding on the legitimacy of the term without being rigorous about the content. (This is exactly the case for many YouTube videos “fact-checking” coronavirus news.) It’s as though today, as Tardáguila puts it, “anyone can fact-check.” On the surface, that may seem like a good thing: fact-checking should not be elitist. But, without any agreement on standards, some fact checkers’ work could unwittingly add to the digital cocktail of misinformation and polarization.
“There is a pervasive idea in Western culture that humans are essentially rational, deftly sorting fact from fiction, and, ultimately, arriving at timeless truths about the world,” write Cailin O’Connor and James Weatherall, two philosophers of science, in their 2020 book, The Misinformation Age: How False Beliefs Spread. This conception of rationality dictates that, “if we want to achieve better outcomes—truer beliefs, better decisions—we need to focus on improving individual human reasoning.” It is tempting because it tells us that news consumers form inaccurate beliefs by accident and that they can be subtly steered toward more accurate beliefs if we simply present them with reliable information.
Human beings, however, are more complicated. The authors ran several mathematical models to illustrate how true and false information spreads. As soon as they allowed the people in their models to be influenced by their peers and social networks—as everyone in the real world is—the programs would sometimes end with whole communities adopting false beliefs even when accurate information was consistently presented to them. In other words, O’Connor and Weatherall write, “individually rational agents can form groups that are not rational at all.” According to these models, which information someone chooses to believe will depend primarily on who is passing it along; trust trumps accuracy every time. Polarization between groups with different beliefs is therefore easy to incite, and once this polarization is established, no amount of fact-checking from outside a particular community will convince the people within it to change their minds.
Media-literacy campaigns often seem like the most promising solution to this problem: instead of simply giving people facts, we should teach them how to assess the quality of information on their own. But, as a group of researchers in Denmark recently concluded, people don’t spread fake news because they think it’s real. Media-literacy programs are grounded in the same kind of naive reasoning as fact-checking is: the idea that the spread of disinformation is caused by ignorance as opposed to by issues of polarization and distrust. In the Danish study, researchers showed 1,600 Twitter users a series of educational videos teaching them to identify untrustworthy content online and examined their Twitter interactions before and after they had watched the videos. The study found that the media-literacy training effectively taught people to identify false content but that this did not dissuade them from sharing it afterward. “Participants performing well on the ‘fake news’ quiz were just as likely to share untrustworthy news stories,” the researchers wrote—leading them to conclude that, generally, people don’t share fake news because they actually believe in the content’s accuracy. Rather, they believe in its value.
Hugo Mercier, another researcher, has argued that the overwhelming majority of people who share disinformation online know that it’s inaccurate. Mercier’s social experiments suggest that, when people share “fake news,” they do so because they think that it’s funny, or that it’s interesting, or that it will demonstrate their allegiance to a particular social group. Someone may share a fake news item about Justin Trudeau “[begging] Nigeria President for one million immigrants,” for example, not because they believe it to be true but because it will publicize their membership in the social group that finds such content amusing, invigorating, or politically important.
Overwhelmingly, results from social science are telling us that fake news is not only a problem of false or misleading information but also one of social bonding. With this in mind, O’Connor says, it’s reasonable to fear that aggressive fact-checking may be both ineffective in changing false beliefs and a contributor to the very kind of polarization that perpetuates disinformation. Fact checks that begin with the implicit premise “look how wrong and stupid these people are” lead only to greater mistrust between groups—and they probably won’t convince anyone who did not already believe in the facts presented. Sometimes it feels like even using the term fact check online has become a way to signal membership in the group of people interested in rational and moral superiority.
Not all fact-checking websites reflect this attitude, of course—particularly not those that have met the strict requirements of the IFCN. For those journalists, fact check is a way of saying “we really did the research.” Still, some fact-checking websites are grounded in the same attitude as those peddling conspiracy theories: a request for the audience to be skeptical of the outside world and trust the site’s content above all else. Sure, fact checkers publish true content whereas conspiracy theorists clearly do not. But, for someone who has already decided to distrust mainstream media, fact checks are no more trustworthy than any other news article. In the end, the tone in which something is written may be just as important as its content.
Take what BuzzFeed dubbed the most-shared piece of “fake news” on Facebook during the 2016 US election, which racked up more than 960,000 engagements: “Pope Francis Shocks World, Endorses Donald Trump for President, Releases Statement,” published by ETF News. (This was, to be clear, a blatant lie: Pope Francis does not endorse political candidates.) Compare it with the piece of journalism that had the most engagements during the same time (849,000 shares, reactions, and comments): “Trump’s History of Corruption is Mind-Boggling. So Why Is Clinton Supposedly the Corrupt One?” published by the Washington Post. Without a doubt, the second article was written to meet stricter reporting and accuracy standards. But both headlines are nakedly partisan; anyone sharing either article on social media is making their political allegiance clear.
It’s hard to ignore the irony here: well-intentioned fact checkers may not realize that their work could push some people away instead of unifying them under a common truth. Arguably, however, this is not a fact checker’s problem. Their job is simply to establish an accurate record of information—not to foster trust in media or communication between polarized groups. As Tardáguila emphasizes, no public fact checker claims to be solving the problem of disinformation. “What we work for is to expose people to good facts. It’s one step behind,” she says. In fact, “it drives fact checkers a bit crazy” when people ask them to fix the problem of disinformation. “Nobody goes around accusing investigative reporters, ‘You’ve been writing about corruption for twenty years, but it’s still there, so you suck,’” Tardáguila says. Why should the case be different for fact checkers? They’re reporting on disinformation—not claiming to do anything more.
Indeed, research into fighting disinformation is beginning to steer away from fact-checking altogether. Mason Porter, a mathematician at the University of California, Los Angeles, is currently working with his team to study how different content spreads online. They hope, in the long term, to develop a kind of “spam filter” for false content. Porter’s team uses models to illustrate a news item’s “spreading tree,” which shows how many times and in what pattern a headline is retweeted, liked, and so on. Porter’s hypothesis is that content shared for its accuracy lives a different digital life than content shared for other reasons, such as political or social signalling. “We want to know how much we can explain without taking into account the actual content,” Porter says. The next question would be what to do once inaccurate content has been identified: slapping a big red warning label on it likely wouldn’t help much, but the project would at least solve the problem of identifying disinformation in the first place, allowing for more nuanced research and responses. After all, Porter says, “flagging is much quicker than fact-checking.”
To be clear: although I am convinced that our cultural reliance on fact-checking as a catch-all solution is problematic, I don’t interpret these developments as arguments against the importance of accuracy. They simply serve as important reminders that accuracy isn’t everything. One of the problems at the root of these difficulties is the public’s loss of trust in news media, which is not always as unreasonable as some of us like to believe.
In a 2020 article for this magazine, writer and producer Pacinthe Mattar expands on what she calls a “crisis of credibility in Canadian media.” Mattar recalls travelling to Baltimore, in 2015, to put together a CBC radio documentary on the demonstrations against police brutality after the death of twenty-five-year-old Black man Freddie Gray at the hands of the city’s police department. Mattar interviewed two local men about their experiences of being mistreated by police. She later called the police department and union to request a comment, as any responsible journalist would, but received no response, which is also quite typical. When she returned to Toronto, however, her producer initially refused to air the interviews, skeptical that the men had given her their real names, and questioned the veracity of their story. “That’s when I learned that, in Canadian media, there’s an added burden of proof, for both journalists and sources, that accompanies stories about racism,” Mattar writes.
One reason Mattar’s experience is so concerning is the different standard to which her work was held compared to that of her colleagues. There was no more reason to believe the men had lied to her than to believe that any other source in any other CBC documentary had lied. The whole journalistic enterprise is based on trust: journalists—including fact checkers—decide to trust the sources they quote. We do as much digging as possible to feel confident and responsible about passing on the information (as Mattar did by calling the police department and union), but we also treat everyone as equally reliable until we have reason to do otherwise. If we do uncover inconsistencies, that’s when there’s reason to follow up, add qualifications, or remove the source altogether. But, until then, if a journalist is comfortable quoting a scientist about their research without conducting the same scientific experiment themselves, they should feel equally comfortable quoting a protester about their personal story.
Mattar’s experience is part of a wider conversation that broke into the mainstream last spring. As public demonstrations about systemic racism took place across North America, journalists of colour also protested the structures that, under the guise of “objectivity,” prevent certain kinds of stories from being told. What is at stake is the idea that journalists can be perfectly objective—that there exists a neutral version of every story. But, just like our emphasis on “facts,” this notion is grounded in the same historical rationalism that has made efforts to fight disinformation so unsuccessful. It relies on the assumption that only certain kinds of people can discern what the real facts are and that only certain kinds of people can be neutral—namely those uninvolved in the stories. Last year, “many Black journalists . . . said very loudly and publicly that coverage of issues of Black people and policing had not been done well,” said Denise Balkissoon, previously a long-time reporter at the Globe and Mail and currently executive editor at Chatelaine, in her recent Atkinson Lecture on trust and disinformation. “Part of the reason that it had not been done well is because of the marginalization of Black journalists in journalism organizations.”
This realization should push journalists to confront the ambiguities of “facts” head-on. When journalists or media organizations choose to distrust certain voices because of their backgrounds or experiences, we become stuck in a problematic conception of objectivity according to which emotion is a stain on the purity of “fact.” If there’s one thing I learned over the course of my time as head of research, it’s that there is no purity to defend here: what we agree upon as fact is always changing. During much of my work at The Walrus, I witnessed first-hand the harm that comes from being too stringent with standards of verification, particularly when those standards are ill-informed. For one, the kinds of sources that are available to “confirm” a fact will change drastically based on context. It may sound reasonable, for example, to require that all demographic data about the country be confirmed by primary government sources, such as the annual reports from Statistics Canada. That’s a relatively easy demand to satisfy for an article about, say, the population or development of big cities such as Calgary or Montreal. But it is an unreasonable standard for reporting on, say, First Nations communities in British Columbia. Many records relating to Indigenous people and history have been lost or destroyed—in large part because of Canadian government policy. According to the Truth and Reconciliation Commission, 200,000 Indian Affairs files were destroyed between 1936 and 1944. This complicates the traditional fact-checking requirement for strict sourcing. Those records were destroyed in an attempt to obliterate history, including the details of the federal government’s management of residential schools. If I refuse to report on something because a government record does not exist to confirm it, I am essentially perpetuating the government’s erasure of Indigenous history. Instead, I should be open-minded about the kinds of sources I use, including oral history or community testimony.
Magazines with prepublication fact-checking practices can accommodate these considerations since they have the luxury of time. But, when I asked Tardáguila about the difficulties of fact-checking stories about marginalized communities, she answered that such complications are typically not the concern of organizations that fact-check public statements. “We try to focus on the very big issues that go viral across platforms,” she says. “We don’t think about different community contexts and records.” The implication is that these methodological questions about proper, ethical sourcing concern people who report stories, not people who fact-check disinformation. I understand this perspective, but I’m skeptical that there is such a distinction: we all have the same goal of publishing the truth. Sticking to topics in which the facts are “easy” or quick to correct—such as established historical narratives and reports published by government sources—means ignoring other stories altogether.
Over time, all of these considerations have strengthened my conviction that journalism is not only about getting facts right—it’s also about deciding which facts can be confirmed in the first place, which ones we choose to include in our reporting, and whom we consider fit to assess them. These considerations cannot be separated, yet we often treat them like they can be. We pretend that the job of an objective journalist is simply to pick the right, ready-made facts from a silver platter. Really, most of the time, we’re cooking from scratch.
I have come to believe that, once we have shed our naive conceptions of objectivity and rationality, journalists should be comfortable taking into account how people feel about our reporting—not because we want everyone to be pleased about the final product (an impossible and problematic goal) but because we want everyone to feel acknowledged, even if coverage is critical. We should, as a rule, be conscious of our relationship to our audience. It matters whether the people who read and participate in our work feel represented, listened to, and involved—whether they feel their experiences are being respected instead of held to unfamiliar or unfair standards. This, I believe, is how we start gaining the trust that O’Connor describes as crucial for reducing polarization and the spread of “alternative facts.”
With this goal in mind, I hope we come to place greater value on prepublication fact-checking—perhaps even prioritize it over the external, reactive kind despite its greater cost in time and resources. Establishing an agreed-upon public record of fact is undeniably helpful, but hammering people with facts, tallying their mistakes, or rejecting the legitimacy of certain communities will likely only worsen polarization and distrust. Prepublication fact-checking, on the other hand, focuses on collaboration with sources and making sure everyone who deserves to participate in a story has an opportunity to do so. Today, when I fact-check a story for a magazine, I call everyone involved not only because I want to confirm the accuracy of their quotes but also because I want to underline that everyone should be treated equally, whether or not the story about them is complimentary. It’s this notion of equal treatment that deserves more elaboration and investigation in the future: different contexts, such as reporting on controversial topics or problematic people, require different methods. This is why, to reestablish trust in media, we should focus on teaching people not only how news should be consumed (through media-literacy programs) but also how news should be made, by making our own methodologies and internal sourcing debates more transparent.
How the journalism industry should heal in the midst of the post-truth era is a difficult question; we need some way of insisting on the existence of truth while acknowledging that its boundaries are blurry—that it is reasonable, even necessary, to push against them sometimes. One of the greatest hurdles to this realization is our stubborn separation of rationality from emotion, a distinction both sides of the political spectrum rely on. People on the left will often say it is the right’s stubborn belief in a preferred alternative reality and its surrender to emotions of fear that lead it to problematic views and conspiracy theories. But people on the right use the exact same rhetoric as those on the left: as Ben Burgis points out in his recent book, Give Them an Argument, the right often criticizes the left for being too “emotional” and failing to assess situations logically, as though feelings themselves cannot be rational responses to situations. Both sides believe they are the ones best suited to make informed decisions based on available facts, and each judges the other for being incapable of doing the same.
The beginning of a possible solution is to realize that, although the world is politically divided in many ways, the main division is not between rational, intelligent people and irrational, emotional ones. Fact, opinion, and emotion often go hand in hand—in politics, journalism, and any kind of social interaction. Lately, I’ve been thinking about how these reflections may apply to the storming of the US Capitol, in January, and to the various efforts by journalists to fact-check Trump’s and his followers’ claims about election fraud. The fact-checking work done on this topic was incredibly valuable: it provided the information people needed to understand exactly why and how Trump’s claims that the election had been “stolen” and that he was the real winner were wrong, for anyone interested in finding out. But fact checks of claims about election fraud were published weeks before the storming of the Capitol took place; if anything, the violent reaction in January was evidence that repeating facts into the digital void over and over again will do little to change a polarized dynamic. Polarization on this topic is so extreme—fuelled by the insistence of politicians and media on both sides that the other side is cruel and hopelessly lost—that information coming from outside one’s community will likely never be trusted. We still don’t know how to engage with people who don’t agree with us on our most fundamental, sensible beliefs, yet this engagement is a crucial part of any productive way forward. In the case of the Capitol, we didn’t fail to fact-check: instead, we failed to establish, beforehand, the dialogue that is required for people to listen to and care about facts at all.