It’s Time to Fix How We Fight Online

Social media was supposed to save democracy. So why do hatred and violence feel inseparable from online life?

For years, Carlos Maza has been a target of hatred online. A writer, producer, and host for Vox’s video series Strikethrough, which explores media in the populist Trump era, the American reporter often tackles touchy subjects from a leftist point of view. In the past year, he has focused on everything from the dangerous allure of Fox News to debates on gun control. Maza regularly receives criticism from right-wingers online, including Steven Crowder, who has used his YouTube show Louder with Crowder and his audience of more than four million subscribers in an attempt to discredit Maza’s reporting. In the past, Crowder’s criticism has leaned into discriminatory territory: more than once, he has mocked Maza for his ethnicity and sexuality, calling him “a lispy queer” in one video and a “gay Latino” in another. Last year, the abuse began spilling into Maza’s offline life: in one incident, Maza’s phone number was leaked (a practice commonly called “doxing”) and Crowder fans began messaging him directly. Maza says he received hundreds of intimidating text messages demanding that he debate Crowder over topics covered in Strikethrough.

On several occasions, after Crowder picked on Maza in a video, Maza flagged it to YouTube for violating the platform’s policies against hate speech. But, each time, the content stayed put, Crowder untouched. (In a video response to Maza’s complaints, Crowder said that he “condemns” doxing and targeted harassment; YouTube has also stated that doxing is not permitted.) So, on May 30, Maza took to Twitter to air the issue to the world. As he saw it, YouTube was harbouring hate speech, providing a platform for extremist ideology and a space for violence. “I’m fucking pissed at YouTube, which claims to support its LGBT creators, and has explicit policies against harassment and bullying,” Maza wrote as part of a nineteen-tweet thread that included a video montage of Crowder’s insults. The tweets have since been shared more than 20,000 times.

Maza’s critique is part of a growing condemnation of social media as people become increasingly aware of these platforms’ capability to amplify extremism. Research has shown that, as the internet has grown, so has the proliferation of hateful content. Two-thirds of internet users have been exposed to hateful content concerning body type, sexual identity, religion, and skin colour, and a European report found that, between 2010 and 2014, there was a marked increase in online bullying and insults among young people. Over the past few years, misinformation and conspiracy theories have spread online—influencing massive global changes such as the presidencies of Donald Trump in the United States and Jair Bolsonaro in Brazil. According to the New York Times, YouTube has played a role in radicalizing young people by recommending misogynistic and racist content, and posts on Facebook were involved in the genocide of the Rohingya people in Myanmar. In Canada, between 2010 and 2017, 364 online hate crimes were reported to police, according to Statistics Canada. But those numbers don’t account for the thousands of hateful posts, videos, and comments users encounter on an everyday basis—consider the dozens of examples you likely scroll past each day on your own accounts—that never reach law enforcement and are instead reported only to the platform or ignored altogether.

To address this, social-media companies have tried censorship, banning offending users from their services. They’ve also tried stricter regulations, setting limits on who can monetize their content and how. But these interventions have rarely been effective, and social-media execs have yet to let go of hopes that the problem will go away on its own. When feminist critic Anita Sarkeesian and game developer Zoë Quinn faced hundreds of death and rape threats during the 2014 Gamergate controversy, for instance, tech corporations took little action to protect the women from harassment. More recently, this summer, software engineer Brianna Wu documented a similar experience in an op-ed for the New York Times. Maza, too, has received little help—as of mid-October, YouTube had yet to remove Crowder’s offending videos.

When YouTube fails, however, critics often turn to other social media, like Twitter, to air their concerns—just as Maza did. Such is the ongoing contradiction of social networks: they are simultaneously platforms for hatred and soapboxes to speak out against it. Anyone can upload videos to YouTube, tweet, or post on Reddit—content that could provide support for underrepresented minority communities or embolden bigots to rally against those communities. In their earliest forms, forums for online communication were intended to be open, free, democratic spaces where disagreement and conflict served as necessary steps toward greater knowledge and tolerance. But the very attribute of democracy that makes it appealing is also often its biggest vulnerability: not everyone has the same intentions, and allowing everyone an equal seat at the table doesn’t always lead to universal good will. After all, democracies elect authoritarians, and popular protests can deny human rights just as easily as they can support them.

Many past and current tech leaders are starting to grasp the magnitude of the internet’s tendency to propagate harm. “People—smart, kind, thoughtful people—thought that comment boards and open discussion would heal us, would make sexism and racism negligible and tear down walls of class,” wrote Paul Ford, a programmer, in a recent Wired essay about the problems of the internet. “We thought we were amplifying individuals in all their wonder and forgot about the cruelty.” Now, he writes, “I’m watching the ideologies of our industry collapse. Our celebration of disruption of every other industry, our belief that digital platforms must always uphold free speech no matter how vile.” He is not alone in realizing that social media has amplified the problems of the world. The so-called ethical tech movement has likely grown out of what former Esalen executive director Ben Tauber called a “dawning consciousness emerging in Silicon Valley as people recognize that their conventional success isn’t necessarily making the world a better place.”

Just this August, Fredrick Brennan, the founder of 8chan, a popular message board that began as a “free-speech utopia,” called for it to be taken offline. The site was being used by violent extremists—including both the perpetrator of the mosque attack in Christchurch, New Zealand, and that of the mass shooting in El Paso, Texas, this year—to spread information and hatred. “It’s not doing the world any good,” Brennan told journalists.

How do we begin to tackle such a widespread problem when hate and bigotry feel intrinsic to online existence? Industry and government regulation are certainly the approaches that have gained the most traction in the public sphere. But, if we accept that the premise of online platforms is inherently flawed, even dangerous, it becomes clear that regulation without a larger strategy is not enough. Any effective solution would need to reinvent our online world altogether. For most (I hope), the ideal online forum is one where thoughtful discourse doesn’t devolve into name-calling, death threats, or violent extremism; where disagreement doesn’t make someone a target for doxing. Conflict itself is not the problem; in what fantasy land does conflict not exist? It’s how we fight, and how we’re allowed to fight, that needs to change.

For most of its existence, social media—from the earliest days of Myspace and closed-network Facebook to today’s YouTube and Instagram—was touted as a means of connection and a possible solution to a host of the world’s problems. It would save the world from polarization and biases, allowing everyone the opportunity to speak. Former Reddit CEO Ellen Pao once called the internet “a bastion of free expression.” It was, and still is, the easiest way for someone to connect with anyone anywhere, for communities to come together across the globe. Marginalized communities ignored by mainstream media soon built their own audiences online, and with that power, they eventually became newsworthy themselves—think the Zapatistas, the Arab Spring, Black Lives Matter, and the #MeToo movement.

Even as this world became more corporatized over the years, companies like Facebook and YouTube, until recently, continued to offer the same philosophy: platforms ruled by the people. Users, for the most part, decided which content was most popular, and anyone could upload what they wanted. But that utopian vision quickly disintegrated into unproductive hate: those on the other end of the spectrum—organized for racist, sexist, homophobic, and other bigoted reasons—found social media equally useful in gaining visibility. As populism gained traction in world politics in the mid-2010s, people with far-right views became even more emboldened to speak their minds online with abandon.

Ramona Pringle, director of Ryerson University’s creative innovation studio, says much of the internet’s hate and violence problem can be blamed on a lack of oversight: the internet, she argues, is the only global industry without regulation. Because Google, Facebook, and Twitter are corporate entities, they profit off of our online behaviours—and, in some cases, they profit off of our bigotry. But, in a paper for the Association of Internet Researchers, New York University’s Alice Marwick goes even further, writing that it’s misguided for anyone to believe that social media gives power to the people. Companies like Google, which owns YouTube, police the content on their platforms: “Google has final say over any decisions, creating potential conflicts between what users want and the corporation’s profit-driven needs,” Marwick argues.

Though the trend in recent years has been toward industry self-regulation—spurred, in part, by the ethical tech movement—no company has so far managed to filter out hatred and misinformation in a sustainable way. In January, months after allegations of unregulated hate and misinformation on its platform, YouTube agreed to recommend fewer videos about conspiracy theories, which typically contain unsubstantiated claims in support of ideas like the Earth being flat or vaccines being harmful. And, in the first quarter of 2019, the platform removed nearly 50,000 videos for violating its cyberbullying and harassment policies. But, around the same time, YouTube celebrity Shane Dawson’s own series of conspiracy theories—about iPhones and telecom companies, the Trump presidency, the 2018 California wildfires, and more—thrived undisturbed, racking up tens of millions of views and prompting critics to wonder whether the company was taking complaints about misinformation seriously. One month later, YouTube began restricting ads and disabling comments on lengthy videos featuring children after it learned of possible threats from child-predator rings organizing on the platform. Family YouTubers were most penalized by the move, though their content didn’t technically break any rules. YouTube has also banned divisive—and virulently racist, sexist, and homophobic—personalities like Alex Jones from its platform, but, in April, millennial creator Logan Paul helped Jones circumvent that ban by hosting him on his video podcast, Impaulsive.

These case-by-case methods often target individual troublemakers, leaving the flawed structure of open platforms intact. Critics of YouTube, including tech journalists, say these rules leave too much room for interpretation: wrongdoers always find loopholes.

“[Social-media companies] don’t want to be responsible for the dark forces using their platforms to spread harmful messages or disturbing content,” wrote Julia Alexander, for Polygon, in 2017. “They’re also too big to be able to vet every single tweet, Facebook post or video.” Any attempt to even begin addressing this issue would therefore be twofold: moderators would have to sift through millions of posts, and companies would have to make monumental shifts in their operating philosophies, changing the very meaning of their open and free existence. But moderation alone is an exhausting hurdle, and it frequently fails: extremist groups like white nationalists often use dog whistles and coded language to fly under the radar. It’s the second half of this proposal that’s likely to have the most impact. Who, then, is capable of creating the shift needed to tackle online hate?

In February, Canada’s minister of democratic institutions, Karina Gould, said that the country is “moving in a direction where we need to require social-media companies to act” against hate speech, harassment, and disinformation online. While laws have been enshrined to protect Canadians from hate speech against identifiable groups—based on race, sex, gender identity, sexual orientation, age, ability, and more—how those laws are applied in cases of online discrimination varies. And, though Navdeep Bains, minister of innovation, science, and economic development, introduced Canada’s Digital Charter in May—a set of principles that includes protecting users from extremism and misinformation online—few of those commitments have been implemented.

Before the federal election this October, Gould said it would be up to the next governing party to decide what regulation would look like—and whether it would even exist. With the Liberals reelected and Gould’s and Bains’s plans moving forward, major tech companies could face fines for failing to moderate hate in effective ways. While Bains hasn’t specified how stiff these penalties could be, other countries have set precedents: in January, France’s data regulator levied a €50 million fine against Google for breaching a European Union privacy law. Meanwhile, in May, Gould worked alongside tech corporations in an attempt to protect Canadians from misinformation during the election, including cracking down on fake accounts and “intensify[ing] efforts to combat disinformation” online. Microsoft, Facebook, Google, and Twitter supported a declaration to protect the integrity of the election.

If the internet becomes a space where there are consequences to inaction, corporations could be incentivized to make changes. What those consequences look like might vary from country to country, but the underlying message should be clear: tolerance of hate can no longer be the baseline. For Pringle, such changes make sense, especially when held up to the ways other industries are regulated. “The food we eat is tested for safety, as are the cars we drive. Television, newspapers, all have guides that need to be followed,” she says. “Thirty years later, there’s no reason for the internet to be the outlier.”

But this approach still expects tech companies to fix their own platforms themselves. A larger shift in responsibility would require executives and politicians to accept that the internet can’t be the bastion of free speech Ellen Pao once described. Our online world must see harsher restrictions in order to thrive—greater moderation, explicit disapproval of hate speech, punishment for spreading hateful views or misinformation. More than that, it needs to foster a culture of safety and harm-reduction. It needs to host platforms where users are confident that disagreement will never lead to real-world violence.

YouTube was remarkably quiet after Maza’s viral Twitter thread. The company eventually demonetized Louder with Crowder, barring the channel from running advertisements, a penalty that many creators with hundreds of thousands of subscribers have experienced in the past year. Crowder’s fans and sympathizers decried it as a loss in the fight for free speech. And, when Crowder’s offending videos remained on the platform, Maza and other LGBTQ activists called the move a cowardly refusal to address hate speech. In the early days of the controversy, YouTube CEO Susan Wojcicki tried to remain neutral and, in effect, did little at all.

One week after an internal investigation concluded that Crowder’s treatment of Maza was not in violation of YouTube’s policies, however, Wojcicki apologized. “I know that the decision we made [not to remove Crowder’s videos] was hurtful to the LGBTQ community, and that was not our intention at all,” she told the crowd at an Arizona tech conference. But a quick search of “Steven Crowder” and “Vox” still surfaces the offending videos, their visibility only heightened by the controversy. “YouTube has decided not to punish Crowder, after he spent two years harassing me for being gay and Latino,” Maza tweeted in disappointment.

In a better-internet world, governments would hold tech execs like Wojcicki accountable by financially penalizing platforms for every video, every tweet, and every post spewing homophobia, transphobia, racism, or misogyny. The public would protest and denounce any misinformation and bigotry they encountered online. Social-media corporations would no longer be able to afford tolerating hateful and violent content. In such a world, the Maza/Crowder debacle would have ended much differently—it may not have been a controversy at all. YouTube could have ended the altercation as soon as it began, at Maza’s first report of Crowder’s videos. It’s not that Crowder’s content would be gone entirely, but it would be free of anti-gay or racist slurs.

Yes, it’s a utopian vision to imagine persecution and hate speech eliminated entirely from any facet of life. Such a reality online would entail massive global and political changes, starting with governments that aren’t plagued with systemic sexism, homophobia, transphobia, racism, or classism. Government intervention is a limited solution, but it’s a strong start. When there are consequences to our actions—trickling down to the individual from tech executives and politicians in power—we are encouraged to think more about how and what we post, what our words and behaviours mean. We might not entirely redefine the online world today, but I’m confident that we have the tools to make it a little less hateful. The next step is to start discussing what we want it to look like tomorrow.

November 5, 2019: An earlier version of this story stated that Microsoft and Facebook supported a declaration to protect the integrity of the Canadian election while Google and Twitter did not. In fact, all four supported the declaration; Microsoft, Facebook, and Google signed on in May 2019, and Twitter signed on in June 2019. The Walrus regrets the error.

Erica Lenti
Erica is a senior editor at Xtra Magazine.