Immediately following the attack on Israel by Hamas on October 7, and throughout Israel’s bombing of Gaza thereafter, platforms such as Facebook, X, and YouTube have played host to a seemingly relentless flood of misinformation and disinformation—as if the true images emerging from the conflict zone are not horrifying enough. The posts and videos circulating on social media have ranged from the grotesque (kidnappings and beheadings) to the malevolent (claims of a US government psy-op, or that Ukraine supplied weapons to Hamas). One video was even promoted as footage of the Israeli military creating staged clips of fake deaths. (Turns out that was a behind-the-scenes shot from a short film. It racked up millions of views anyway.)

How have these platforms, once invaluable resources for following world events in real time, become such a mess? Taylor Owen is the Beaverbrook chair in media, ethics, and communications and the founding director of the Centre for Media, Technology and Democracy at McGill University. We spoke with him about the sources of disinformation, the changing role of social media, and where our information ecosystem goes from here.

Nathaniel Basen: With the Israel–Hamas conflict, it seemed like there were just unlimited amounts of content ready to be pushed out the second it started. How does that happen so quickly? How much of it is organized propaganda, and how much is just people wanting to post?

Taylor Owen: How were people so quick to engage on social media? I think it's because we've been conditioned to do so. If, for a decade, political actors, societal actors, journalistic actors, and citizens are incentivized to move their behaviour onto these platforms, then when an event happens, that's where they go. That's where the Israeli military goes. That's where Hamas goes. That's where news organizations go. That's where commentators looking for traffic to their YouTube channels go. Everybody descends on this location to have this collective conversation.

The problem is we’re doing so largely determined by the incentives of that system itself. We’re all playing to the design of that system in how we engage. The Gaza event makes very clear that these incentives lead to behaviour that is suboptimal. We are not all our best selves when we go to these places to have this collective conversation. That’s one of the tragedies of what we’ve built. The tools we use to learn about the world and talk about the world are creating really perverse incentives.

The type and tone of the content encouraged inside these ecosystems ultimately end up having an effect on how much we care about an event and how much we think about the people on the other side of it. Ask people how they feel about using social media to learn about the world, and many increasingly say that it's making them angry, that it's making them unsettled, that it's making them uncertain. It's particularly pronounced on X or even YouTube: open them in the aftermath of an event like the Hamas attack, and what might you see? People on one side or the other of a topic being angry, and the extremes rising to the top of the algorithmic feed. That ultimately has an epistemological effect on society. We understand the world differently when we're learning about it through content that angers us.

NB: When I hear that, I think: the long history of media delivery getting faster has finally reached the point where our desire for immediate information has outstripped our ability to produce it reliably. To what extent is this even solvable when we want immediate information?

TO: At some point, the supply side of reliable information production is irrelevant if the system that decides what we consume doesn't distribute it. In other words, it doesn't matter how much journalism you pump into X if the distribution mechanism is only serving users the crap. There is, of course, also a supply-side problem, in that there probably isn't enough journalism being produced right now, for all the reasons we can talk about: the business models of news, the politicization of news, and so on. But I think it's more the case that the distribution system is not prioritizing reliable information over other content. X has radically changed its algorithmic prioritization, away from information about events as they occur and toward a very particular type of content, such as posts from a specific type of user: people who have decided to buy blue checkmarks, say. That, to me, is the crux of this problem: how we design these systems, how we oversee them, how we regulate them.

There’s another addition to the supply-side problem, which is generative AI. A great deal of the content we now see on social platforms is created by automated systems which are perfectly calibrated to the design of the ecosystem. The result is that we’re seeing more and more of the content that engages us and makes us angry and plays on our biases—because that is precisely how the generative AI tools that are creating most of this content are calibrated.

NB: Over the past year, there's been an acceleration in the development of newer, smaller niche communities, whether it's Bluesky, Mastodon, or the many designed specifically for people I don't tend to agree with. It seems to me that would create even stronger echo chambers. Does the proliferation of these smaller communities actually make informing the public a more difficult task?

TO: I think most of these niche sites have proved unsuccessful because the network effects of the larger platforms are just so great. Even Threads, which has the resources and scale of Meta behind it, has struggled to create a new network.

That being said, there's no question that niche sites tailored toward specific ideological worldviews have created a scenario of pocketed polarization across different sites. I actually don't think we're talking about this enough. There's a lot of interaction on sites like X, because if everybody's on them, you are more likely to be exposed in some way or another to a diverging view. Whereas if you're sitting on Rumble, you're unlikely to see a progressive view of the world. But we don't have a great way of capturing the different discourses across these platforms, and that's a real challenge in the research community. It's one we've been really trying to address: Can we track discourses across platforms rather than just within them? I strongly suspect that doing so will reveal much deeper polarization than we see on any single platform.

NB: TikTok has become a primary newsgathering source for many. Any particular risks or benefits to that platform?

TO: After the European Commission issued its warnings to platforms in the aftermath of the attack in Israel, demanding that tech companies comply with the Digital Services Act, which had just come into effect, TikTok announced it had removed more than 500,000 videos and shut down 8,000 livestreams. They have scaled their compliance quite aggressively, in many ways better than some of the other platforms. I think they're becoming a more normalized actor in the large-platform ecosystem. That's a positive thing.

But one of the things that makes TikTok complicated is that the vast majority of what any one person sees on TikTok is the same as what everybody else sees. There's a very small catalogue of content that actually gets seen by a large number of people. That allows for incredible control, by either algorithms or, in many cases, people at TikTok, over how the world is viewed on their platform. One could imagine that being used in all sorts of ways, both positive and negative. It's much more like a broadcast filter point than a traditional social network, which in the past allowed for more equal access to a wider range of content. Anybody can post to TikTok, but very few people get seen, and that gives the company a tremendous amount of power in shaping the narratives of events like this.

NB: Is there anything else that we haven’t touched on that you think we should talk about?

TO: What I think is most critical is that these tools clearly no longer serve one of the core functions we've attributed to them in the past, which is helping us better understand what's going on in the world as events unfold. They never did that perfectly; the different platforms did it differently, and there were deep flaws. But that utility is clearly no longer there. And that means we live in a very different media ecosystem than we did even a year or two ago.

At the same time, we live in an ecosystem where there’s less news production and where traditional news—the alternative to that social system—is in bad shape as well. So, in some ways, we’re in the worst of all worlds right now. We don’t have as robust a traditional media ecosystem as we once did, and the social media ecosystem that we had hoped would augment it is increasingly broken and just not as useful as it once was. So I think that leaves us pretty listless in terms of understanding certain kinds of events.

Nathaniel Basen
Nathaniel Basen is the founder of Pastime, an online sports magazine.