How to Fix Fake News

There will always be people who believe lies—but we can solve that


The furor over the vast amounts of fake news circulating on Facebook hinges on a simple question: should the company accept responsibility for what users see? That, of course, would involve an admission that Facebook is more than a platform for user- and advertiser-generated content; it is a media company making editorial decisions.

In its attempts to avoid making such a concession—which would have serious legal implications—Facebook has tied itself in such rhetorical knots that its PR team is literally asking journalists “What is truth?” But the issue is far larger than whether Facebook is a media company. It goes to the very heart of Facebook’s mission. The founder’s letter from the 2012 IPO filing claims that Facebook exists “to make the world more open and connected.” That remains Facebook’s moral thesis. The assumption is that “more open and connected” is always and everywhere a good thing.

The lesson Facebook does not want to learn is that this is not always so. Facebook can become, and has become, a substrate for influential meme storms of hate, fear, and lies that erupt from the Internet into so-called meatspace. (Fake news convinced one man to show up with a rifle at a popular Washington pizza joint, in search of the nonexistent Democratic child-abuse dungeon that he believed lurked beneath it.) Mark Zuckerberg’s reluctance to believe this can happen seems more and more like denial.

The good news is that the problem is entirely solvable. What we did not entirely realize, until Facebook’s rise, is how much our trust in journalism relies on context. When you tune to a specific channel, navigate to a particular news site, or select an individual newspaper or magazine, you are actively deciding who to trust. But when you see a headline, article, or video fly by on your Facebook feed, you are generally seeing it because: a) one of your Facebook “friends” has “engaged” with it, meaning that they have liked, shared, or commented; or b) Facebook’s algorithms believe that you are likely to engage with it as well.
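
To make that surfacing logic concrete, here is a minimal sketch of the two paths just described (a friend engaged with the story, or the algorithm predicts you will); the class, field names, and scoring threshold are illustrative assumptions, not Facebook’s actual ranking code.

```python
from dataclasses import dataclass, field

# A minimal sketch of the two surfacing paths described above. The names and
# the 0.5 cutoff are illustrative assumptions, not Facebook's real system.

@dataclass
class Story:
    url: str
    engaged_users: set = field(default_factory=set)  # users who liked, shared, or commented

def should_surface(story: Story, viewer_friends: set, predicted_engagement: float) -> bool:
    """A story reaches your feed if (a) one of your friends engaged with it,
    or (b) the model predicts you are likely to engage with it yourself."""
    friend_engaged = bool(story.engaged_users & viewer_friends)
    likely_to_engage = predicted_engagement > 0.5  # assumed threshold
    return friend_engaged or likely_to_engage

# A story none of your friends touched can still surface if the model
# scores it as likely to hook you.
print(should_surface(Story("http://example.com/story"), {"alice", "bob"}, 0.72))  # True
```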

This is, inadvertently, a diabolically brilliant recipe for evoking trust in fake news. Take a personal connection. Add an article that, based on your previous behavior, is of interest to you. Make sure it looks like it has been published by a respectable news outlet such as the “Denver Guardian” or “WTOE 5 News,” which are, respectively, not a newspaper and not a television station, but whose web sites have all the apparent branding of such. Suddenly, to the casual reader, their claims that an FBI agent investigating Hillary Clinton was killed, and that Pope Francis endorsed Trump for president, seem entirely plausible. Why wouldn’t they?

The antidote we need is as simple as a re-establishment of that collapsed context: an indicator of whether we have just encountered the work of a serious authority doing their best to tell us the truth as they understand it, a clickbait con artist trying to monetize shock and controversy, a malevolent actor spreading deliberate disinformation, or a paranoid conspiracy theorist. Facebook could re-establish this context in any combination of three different ways, without ever having to resort to censorship (a rough sketch of how these signals might combine follows the list):

  • Distinguish between “Trusted” and “Not Trusted” sources, and very visibly mark stories from the latter as content of dubious provenance. This is as simple as compiling a list of major and/or trusted news sources, and flagging stories which are not from them. Facebook already distinguishes between “verified users” and the overwhelming majority of the unverified, and privileges the former; this would be much the same, but would isolate fake-news peddlers such as Breitbart and Infowars. Verification, in other words, would refer not to user identity, but to whether a specific site or organization is more interested in reporting the truth than in grinding a particular axe.
  • Allow users to flag a story as fallacious. Technically, Facebook has done this since January 2015, but only as an obscure, universally ignored option buried deep within their menus. They need to promote it to a prominent button next to every news story. If not combined with the previous and next proposals, this would be prone to error and brigading—one can easily envision members of the so-called alt-right going after Mother Jones stories en masse, for instance. But it would still be valuable, provided that Facebook is willing to ensure that only stories flagged by a fairly broad spectrum of people are marked as “disputed.”
  • Automatically verify, in real time, whether the facts stated by the story match a static corpus of “generally accepted” facts, and itemize the violations, if any. This may sound like a pie-in-the-sky nerd-harder demand, but in fact it is a solved problem, courtesy of Google, whose researchers last year published a paper entitled “Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources” describing a system which does just that, in real time, at scale. Facebook users would be free to continue sharing stories which claim that Barack Obama was not born in America—but Facebook would visibly flag them as in violation of that generally accepted fact. This would be imperfect. Nuance, after all, would be challenging; satire even harder; breaking news would have no generally accepted facts—yet. But a remarkable amount of fake news consists of bald falsehoods, such as Wikileaks confirming that Hillary Clinton sold weapons to ISIS, which this would mitigate.
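
Taken together, the three signals could feed a single, visible label on each shared story. Below is a minimal sketch under stated assumptions: the trusted-source list, the three-cluster flag threshold, and the (subject, value) claim format are all illustrative, and the real fact-checking step would be something like the Knowledge-Based Trust system described in the Google paper cited above.

```python
# A rough, illustrative sketch of how the three signals might combine into one
# label. Nothing here is Facebook's real system: the source list, the
# three-cluster threshold, and the claim/fact format are assumptions.

TRUSTED_SOURCES = {"nytimes.com", "washingtonpost.com", "bbc.co.uk"}  # illustrative list

def label_story(domain, flagging_clusters, claims, accepted_facts):
    """Return the warning label (if any) to show beside a shared story.

    domain            -- the publisher's domain, e.g. "denverguardian.com"
    flagging_clusters -- distinct social-graph clusters whose users flagged the story
    claims            -- factual claims extracted from the story, as (subject, value) pairs
    accepted_facts    -- corpus of generally accepted facts, mapping subject -> value
    """
    # 1. Provenance: very visibly mark stories from outside the trusted list.
    if domain not in TRUSTED_SOURCES:
        return "Content of dubious provenance"

    # 2. User flags: mark a story "disputed" only when flags come from a broad
    #    spectrum of users (here, three or more distinct clusters), which
    #    blunts brigading by any single group.
    if len(flagging_clusters) >= 3:
        return "Disputed by readers"

    # 3. Automated check: itemize claims that contradict the accepted-facts corpus.
    violations = [(subject, value) for subject, value in claims
                  if subject in accepted_facts and accepted_facts[subject] != value]
    if violations:
        return "Contradicts generally accepted facts: %s" % violations

    return None  # no warning needed

# Examples: dubious provenance vs. a bald factual violation.
facts = {"birthplace of Barack Obama": "United States"}
print(label_story("denverguardian.com", set(), [], facts))            # dubious provenance
print(label_story("nytimes.com", set(),
                  [("birthplace of Barack Obama", "Kenya")], facts))  # fact violation
```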

Voila: context (largely) restored, no censorship required. But note that only Facebook can meaningfully address this problem. It might, in theory, be possible for Apple, Google, Microsoft, and Mozilla to collaborate to fix it at the browser level, but that seems less realistic and more prone to failure. Individual “fake news blockers” could filter or flag fake news sites in the same way that ad blockers attack ads, but the people inclined to install such software are the very ones who need it least.

In the absence of official Facebook efforts, other alternatives do exist. One is to methodically educate people how to recognize and reject fake news—to teach critical reading, so that they can construct their own context. Another is to try to inspire an active anti-fake-news culture, so that people who currently roll their eyes and ignore fake news instead actively debunk it. And, of course, we should teach critical reading—including examples of fake news—in schools, to develop the next generation’s memetic immune system.

Unfortunately these alternatives won’t do any good on Facebook. That’s because they run into yet another contextual land mine: Facebook’s much-vaunted “social graph”—its map of how humanity is socially connected—is, I strongly suspect, already largely partitioned into groups who are prone to sharing fake news and groups who are not, with little overlap between them. As a result, most people who don’t know they should be critical readers are, largely because of that very problem, virtually impossible to reach. It’s hard to connect to people unlike you on Facebook at all, much less convince them of the error of their ways.

There will always be people who choose to believe falsehoods rather than deal with any conflict between their convictions and reality, but most people actually care about the truth; if they didn’t, fake news wouldn’t bother garbing itself as real journalism. Facebook says they’re not a media company, and they’re right; but while that may absolve them of legal responsibility for what their users post, it does not exempt them from the duty, both moral and pragmatic, to promote truth and denigrate lies. That buck, in the end, stops with Zuck.

Jon Evans (@rezendi) is the author of the fantasy novel Beasts of New York and several thrillers including Dark Places and Invisible Armies. Born and raised in Canada, he now lives and works in the tech industry in San Francisco.