Is Facebook Good for the World?

As Facebook grows, many have asked whether it's now too big to fail. The real question is: what is it trying to become?

Mark Zuckerberg at a 2008 keynote. / Photo by Brian Solis

My mother is in her seventies and lives alone on Prince Edward Island, most of the way across the country from me and my kids. Years ago we bought her a computer, mostly for playing bridge, and then, more recently, an iPad. I don’t remember ever telling her about Facebook, but one day there it was: a friend request from Mom. Facebook makes up about 90 percent of what she thinks of as the web and, for her, it’s the perfect platform. She “likes” pictures of her grandchildren and re-posts them to show off to her friends. She saves and shares recipes. She comments, congratulates, and sends birthday wishes to family scattered near and far. This is, ostensibly, the entire point of Facebook: it’s a social network and my mother is very good at it.

It was when my mother joined that I realized everyone is on Facebook. (Or at least, everyone who doesn’t expressly go out of their way to not be there, god bless ʼem.) When the site launched in 2004, it was available only to those with a Harvard email address. A few months later, it expanded to other campuses, and then, in 2006, it finally dropped the pretense of exclusivity and opened itself to the world. It was a simpler time—you had a “wall” and you “poked” people. In 2008, Facebook surpassed MySpace to become the site all your friends were using, and it’s grown exponentially since.

Facebook now boasts over two billion active users worldwide, more than half of whom use the site or app every day. If you still don’t believe that Facebook, and its shifting visions of itself, can profoundly change human interaction, you’re ignoring that it already has. It is ubiquitous, part of how we live—in the same way our smartphone use and online shopping habits have become routine. As part of the “Big Five,” alongside Microsoft, Apple, Google, and Amazon, Facebook has led many over the last few years to ponder whether it’s become too big to fail. It’s true that Facebook could, of course, technically fail—anything can. But if it does, it won’t be in the same way that MySpace, for instance, failed: replaced by something that was, essentially, the same in both function and form.

What would it even fail at? To disappear now, Facebook would have to fail the way floppy disks or MuchMusic or cameras with film failed: replaced or altered in ways we can’t really imagine until the moment it becomes obvious. What makes pondering Facebook’s death an especially complicated exercise is that it’s nearly impossible these days to pinpoint all that Facebook does. Rather than ask if it’s too big to fail, we should consider whether it’s too big to understand. Despite how people like my mom use it, it’s no longer a simple social network—it’s a massive corporation, and massive corporations come with larger agendas than helping our mothers share pictures of their grandchildren and letting others know what we ate for lunch.

The company has spent billions buying out possible “Next Big Things,” including WhatsApp and Instagram. It also owns Oculus VR, which will release the first mass-market virtual-reality headset for as little as $200 in 2018. It’s not all that difficult to imagine at least one possible future where we all have Facebook strapped to our heads, mainlining it into our eyeballs. But that’s just interface. What Facebook is and does is substantially more complicated than any technology, but it’s also the thing that really matters. Facebook, in particular, is a company with a point of view—and one with vast power to share it. After all, isn’t sharing what it supposedly does best?

This year alone, Facebook Canada has launched various projects to that effect. This month, for instance, in partnership with the non-profit MediaSmarts, it kicked off a two-year program to help users separate real news from fake news, broadcasting the launch and premiering a series of PSAs in the place where they would attract the biggest audience: Facebook. In September, the company teamed up with Ryerson University to form a digital news incubator, providing funding and mentorship to startups. Throughout it all, Facebook has prompted some of its users to answer a poll: Is Facebook good for the world? Perhaps more than anything else the company has said or done, that question reveals how it truly views itself—or, at least, how it wants to be viewed. And as 2017 comes to a close, it may be one worth trying to answer.

Which brings us back to our own question: What is Facebook? For most of us, as for my mom, it is a network of friends and likes and groups and events and photos and so on. But underneath all that, it’s a business, and that business exists to make money and generate value for its shareholders. All that effort—the “innovation” that comes from a team of 10,000 well-educated employees—exists in order to show us ads. Facebook has built the most sophisticated advertising platform on Earth, slicing and dicing over two billion people into hyper-targeted demographics based on interests, geography, gender, race, and more. Every scrap of information you hand over is used to build a more robust profile of you, so they can sell you—literally you, the person reading this right now—to advertisers. As the adage goes: “If you’re not the customer, you’re the product.”

Well, we are one hell of a product. Our likes, posts, and profiles—those scraps of seemingly benign information—will add up to over $35 billion in Facebook revenue this year. It’s not sensational to suppose that every decision the company makes is in the service of one goal: selling us to people who would like us to buy things. Every new feature, every change to the newsfeed, every notification or alert is intentionally designed to make us want and need Facebook in our lives, because its business model requires it. Facebook is a slot machine and we are its dopamine-addled junkies. In light of this paradigm—the company’s need to make money and our growing dependence on all the things we do on Facebook—it’s easy to see how we could all be overlooking some of its worst side effects. Or rather, why we so willingly ignore them because of the hard truths they might reveal.

Given all this, it was fairly refreshing to see Facebook acknowledge, in mid-December, what research has been suggesting for a while: the platform can make us feel bad. It would have been more refreshing if Facebook had not—in what I’m sure the company thinks of as a very honest and candid blog post—blamed the problem on us, and how we use its platform. While the co-authors—the company’s director of research and one of its research scientists—acknowledge research showing, for instance, that depression can correspond with social media use, and that it can lead to negative social comparison and less in-person interaction, they posit it’s not the “whole story.” “The positive effects [of social media] were even stronger when people talked with their close friends online,” write the co-authors. “Simply broadcasting status updates wasn’t enough; people had to interact one-on-one with others in their network.” If Facebook is bringing you down, it’s because you aren’t Facebooking hard enough, or well enough. The solution to Facebook making us feel shitty about ourselves is, in their opinion, more Facebook.

Coincidentally (or incidentally, though you’d never see that in the official company line), they also reportedly rolled out a new algorithm that scans posts looking for people who are feeling extra bad about themselves. When it decides you might be suicidal, based presumably on your status updates and responses to them, it is designed to flag the post and connect you to the proper mental health officials. As elegant solutions go, it’s a bit like running into a burning building to install a fire alarm (never mind the privacy concerns). But it’s also consistent with Facebook’s own history of fiddling with people’s emotions.

Back in 2014, Facebook manipulated the newsfeeds of 700,000 users to see if it could make them feel happy or sad. On purpose. And it worked. With this kind of access and understanding, it seems like only a matter of time until a Facebook AI will know when we’re lonely or hungry or riddled with cancer even before we do: connecting us with new friends, Domino’s Pizza, a company that sells mail-order cancer-detection kits, or whatever solution it deems necessary—or, more concerning, whatever solution from which it can profit. What we must remember, and be wary of, is that someone at Facebook is deciding these are good ideas; that they will, indeed, help make the world better.

The best dystopian literature always offers some nebulous, sinister Thing that watches and influences, if not outright controls. Facebook is not fiction, and yet it plays all of dystopia’s greatest hits: it has Orwell’s perpetual surveillance, Huxley’s homogeneity and conditioning, Bradbury’s assault on the attention span. (And, just to complete the set, it has Atwood’s shitty, awful men.)

Less than a month after Donald Trump took office, Mark Zuckerberg posted a 5,000-word treatise titled “Building Global Community” in which he asked: “Are we building the world we all want?” Is Facebook good for the world? It’s pretty cute language from a site that was forced to admit that, between June 2015 and May 2017, it was paid approximately $100,000 by “inauthentic accounts” that were “likely operated out of Russia” to run ads (not to mention that the Trump and Clinton presidential campaigns each spent millions advertising on the site). In short, Facebook helped turn disinformation into misinformation, spreading propaganda to unwittingly help an extra from Home Alone 2 take over the free world.

All this makes Facebook’s early-December announcement of its latest feature even more troubling. “Messenger Kids” is the for-youth version of the company’s Messenger tool, the second-most-used messaging platform in the world behind WhatsApp (which, you’ll recall, Facebook also owns). The kids’ version is for children aged six to twelve, and it’s being sold as a safe way to introduce them to social networking, since parents are given total control over whom their kids can chat with. But you don’t have to be especially cynical to see Messenger Kids as indoctrination, a way of getting kids into the system as early as possible. Which, of course, has the long-term benefit of building an even more detailed advertising profile on them, and the short-term perk of backfilling all those teenagers who have Facebook accounts but are spending more time on Snapchat.

The great premise of all those dystopian novels is that the nebulous Thing ultimately hollows out humanity entirely, leaving it a decidedly lesser version of itself. Yet I’m not sure how much lesser we can be made if we’re already blithely handing over our happiness, our democracy, and our children so that every feeling, relationship, and moment can be converted to data and sold to advertisers. We do seem willing to acknowledge that Facebook might not, in fact, be good for the world. But as Facebook lurches forward into its future self, it remains to be seen whether we, or it, will do anything about it. As with watching football while knowing it’s causing brain damage in its players, or noticing gross income disparity without radically redistributing wealth, we always seem more inclined to talk about the problem than to actually fix it.

Which makes sense, I guess. I mean, what am I going to do? Not post pictures of my kids?

Tyler Hellard
Tyler Hellard is a Calgary-based writer. His first novel will be published by Invisible Publishing in fall 2018.