Since the first website went live, in 1991, the digital world has fractured in many ways, enabling bad behaviour, like revenge porn and cyberbullying, and becoming an archive of the worst of humanity. But, in the midst of a global pandemic, as so many of us sit in isolation, the World Wide Web has started to reflect what its inventor, Tim Berners-Lee, imagined: a collective of ideas shared not for glory or financial gain but for the betterment of human knowledge.
Between video-conferencing companies like Zoom offering free upgrades for schools, Canadian telecommunications businesses waiving roaming and long-distance fees while many of us work from home, and Uber delivering food to health care workers for free, the digital ecosystem is seeing a boom of organizational altruism.
This support seems to extend to human interactions online as well. People are connecting more with far-flung friends. Strangers on Twitter are offering to give money to help pay others’ expenses. Mental health professionals, yoga teachers, and authors are offering their services virtually—and for free—to help stave off the cabin fever of self-isolation. Even on Reddit, that sometime cesspool of vitriol, Bill Gates hosted an AMA (ask me anything) with both his chief scientific adviser and the head of the Gates Foundation’s global health work. With the help of these doctors, Gates answered questions on subjects like coronavirus testing, how to teach safety during a pandemic, and when we can reasonably expect a vaccine.
It’s true that other global events in the internet era have resulted in virtual support, like that directed toward the FDNY and the people of New York after the attacks on 9/11. But these moments were also overshadowed by hateful rhetoric. Misinformation seemed to spread as quickly as the facts. While misinformation, racism, and violence are still issues under this pandemic, the citizens of the World Wide Web seem more committed to the things that make us the same—our fear and resolve in the face of COVID-19.
In 1989, Tim Berners-Lee was working at CERN (the European Organization for Nuclear Research) when he came up with a way for researchers working on different computers to share information. It took another year for that proposal, originally called Mesh, to develop into the earliest version of the World Wide Web. Berners-Lee’s boss at the time put his initial proposal aside but allowed him to begin writing the code that Berners-Lee believed could become the architecture for real-time document sharing. The beginnings of the internet already existed thanks to ARPANET and funding from the US Department of Defense, but it was Berners-Lee who became instrumental in making it accessible to the wider public. Back in those heady early days, the web was a benevolent place where many coders and programmers shared their work for free.
In April 1993, CERN and Berners-Lee put the World Wide Web software into the public domain, giving the underlying code an open licence—royalty free, forever. The following year, Berners-Lee founded the World Wide Web Consortium, an international community dedicated to developing web standards and recommendations and promoting core values of openness, interoperability, and vendor neutrality. Anything that went against those founding principles—be it developers, software, or code—presented the risk of causing tension in the growing community. This kind of cooperative ownership seems to have led to a mentality that we should not have to pay for things that are basically free to post if not free to create. Remember when we had to buy the whole album even if we only liked one song on it? Now, it’s become a pay-what-you-want adventure as we bounce between music platforms, dodging both ads and financial responsibilities. To this day, traditional industries—most notably print media—struggle to replicate the earning ability of the pre-internet era.
But the belief that the internet was free and owned by everyone who used it also lowered the standards of discourse. Anyone was allowed to post anything, and anonymity meant that there were few real-life repercussions. The pendulum swung from benevolent, cooperative beginnings to the dark side of human nature, where communities could gather around ideas of violence and hate. But, as the pandemic is showing, that same power in numbers might make it possible to wrest the web back from the dark side.
Pandemics of the past, from the Black Death to the Spanish flu, spread misinformation and fear almost as intensely as the diseases themselves moved through populations. In the mid-1300s, when the bubonic plague peaked, a primary way of sharing information was through word of mouth, which moved too slowly to keep pace with the spread of the disease. While those who first fell victim to the plague were being buried, Europeans and Asians in surrounding areas who had yet to encounter it were casually going about their lives, unaware that it would spread along trade routes, on ships, and through towns.
Beginning in 1918, the Spanish flu—so named because Spain’s uncensored press was able to report on it openly during the war, not because of the mistaken assumption that it originated there—would go on to have an estimated death toll of up to 100 million people worldwide. It killed an estimated 50,000 to 55,000 Canadians and resulted in the creation of the federal Department of Health, but news of its lethality did not dominate the media. Information on its severity and spread was likely underreported due to wartime censorship of the press. In the papers, news of the pandemic was often subordinated to coverage of the war. Sometimes, articles about the illness didn’t even make the front page.
Canada wasn’t the only one making this mistake. Several local authorities in countries like Italy and the US minimized the severity of the outbreak, denying death tolls or downplaying how easily the disease spread. Wilmer Krusen, Philadelphia’s public health director, is largely remembered for downplaying the disease to the public. While other public health professionals were wary of allowing mass gatherings, Krusen allowed the Philadelphia Liberty Loan Parade to proceed as planned. By the spring of 1919, an estimated 12,000 people had died from the Spanish flu in Philadelphia.
Pandemics of the past didn’t have the communications network that the web offers us today. With COVID-19, the internet warned us that a new disease was spreading almost as soon as it was discovered, even as some government officials downplayed its severity. Late last December, Wuhan doctor Li Wenliang used WeChat to tell his fellow medics about a SARS-like virus he was noticing in his patients. Chinese authorities did their best to shut down his communications, but, by then, the word was out. The internet passed around first-person stories and on-the-ground data, and awareness grew, though many people and governments around the world were slower to react.
Around the time the virus was declared a pandemic, when media companies like Fox News were claiming that the situation was being blown out of proportion by the Democrats, the truth was retweeted and reposted and pushed higher up on our feeds. At the same time, while people had access to information, they also had the choice to ignore it. With more people getting their news from social media—where algorithms can foreground information aligned with your interests and exclude what you don’t engage with—there was a risk that the apparent seriousness of the situation would register differently depending on one’s world view. But the information did get out, as did anecdotal COVID-19 stories from people as famous as Tom Hanks and Idris Elba. First-person accounts became more widespread, and as the death tolls continued to rise, so did the stories from medical professionals, tear-filled journaling in real time on YouTube and Twitter and Facebook.
It felt like an uncharacteristic and unprecedented groundswell of authentic conversation. And it’s exactly what we needed to remind us why we have an internet.
There’s a terrible stereotype about the highly engaged internet user—that he’s a dude living in his mom’s basement in filth and misery, angry at the world and willing to anonymously attack anyone from behind the safety of his screen. That person may still exist, but the engaged internet user in 2020 is all of us who are using the web as a way to stay updated and stave off the loneliness of our isolated lives.
Trolls and scammers didn’t go dark under threat of pandemic and are still making use of the internet to confuse and steal from us, including making false information go viral. But software companies are responding with new measures to shut those actions down. Amazon has identified hoarders and barred them from reselling, Twitter and Facebook have committed to identifying and removing pandemic-related content that could cause harm, and platforms across the industry are working to curb the rampant spread of misinformation.
People are fighting over toilet paper in grocery stores and governments are considering formal charges against citizens who break quarantine, but the temperature of the internet still seems remarkably positive, factual, and supportive overall. It will be interesting to see if the web still feels this way when we start easing back into our regular lives. We’re all (willing or unwilling) participants in what The Atlantic has aptly called “the world’s largest natural experiment in behavior change.”
The question is: Can we make these changes permanent? Can we move the dial of discourse back closer to what Berners-Lee intended? Last year, in a speech reflecting on the World Wide Web’s thirtieth birthday, Berners-Lee called the fight for the web one of the most important fights of our time. “Citizens must hold companies and governments accountable for the commitments they make and demand that both respect the web as a global community with citizens at its heart,” he said. “If we don’t elect politicians who defend a free and open web, if we don’t do our part to foster constructive, healthy conversations online, if we continue to click consent without demanding our data rights be respected, we walk away from our responsibility to put these issues on the priority agendas of our governments.”
If the majority of us go back to ignoring the parts of the web we find upsetting and only sharing information within our bubbles, then we will lose the progress we’ve made during this time—when the power shifted back toward a strong, collaborative digital community.