Over the past three years, celebrities have been appearing across social media in improbable scenarios. You may have recently caught a grinning Tom Cruise doing magic tricks with a coin or Nicolas Cage appearing as Lois Lane in Man of Steel. Most of us now recognize these clips as deepfakes—startlingly realistic videos created using artificial intelligence. In 2017, they began circulating on message boards like Reddit as altered videos from anonymous users; the term is a portmanteau of “deep learning”—the process used to train an algorithm to doctor a scene—and “fake.” Deepfakes once required working knowledge of AI-enabled technology, but today, anyone can make their own using free software like FakeApp or Faceswap. All it takes is some sample footage and a large data set of photos (one reason celebrities are targeted is the easy availability of high-quality facial images) and the app can convincingly swap out one person’s face for another’s.
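For readers curious about the mechanics, the core idea can be sketched briefly. The following is a hypothetical, heavily simplified illustration written in PyTorch; it is not the actual code of FakeApp or Faceswap, whose pipelines also handle face detection, alignment, and far larger networks. The classic approach trains a single shared encoder on photos of both people and a separate decoder for each identity; the swap happens when a frame of person A is encoded and then decoded with person B’s decoder, so B’s face appears with A’s pose and expression.

```python
# Hypothetical, minimal sketch of the classic deepfake ("faceswap") setup.
# One shared encoder learns facial structure from both people's photos;
# two decoders each learn to reconstruct one specific person's face.
# Illustrative only: real tools add face detection, alignment, and bigger models.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 256),                           # latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct its own person's aligned face crops."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# After training, the "swap": encode a frame of person A, decode it with B's
# decoder, so person B's face appears with A's pose and expression.
with torch.no_grad():
    frame_a = torch.rand(1, 3, 64, 64)  # stand-in for an aligned 64x64 face crop
    swapped = decoder_b(encoder(frame_a))
```

The sketch also makes plain why a large set of the target’s photos is the key ingredient: the more faces the second decoder sees, the more convincing the swap.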

To date, mainstream reporting on deepfakes has emphasized their political danger. Outfits from the Washington Post to the Guardian have warned that the videos could, by eroding trust in media, create chaos. For Forbes, deepfakes threaten to be “a widely destructive political and social force.” Yet, in more than three years of the practice, there has not been a single credible disinformation effort linked to the technology. Political deepfakes certainly exist. In one video, an AI-generated Barack Obama calls Donald Trump “a total and complete dipshit.” In Belgium, a political party circulated a deepfake of Trump mocking the country’s participation in the Paris climate agreement. Here in Canada, one user took footage of a Trump speech and replaced the former president’s face with that of Ontario premier Doug Ford. While these examples caused a stir, none presented a genuine national security risk. This is not to say that these fears are completely unfounded. The breakneck speed at which deepfakes are improving—often in disturbing new directions, including cloning voices—makes it possible that they will be successfully weaponized politically. For the moment, however, they are not being used as feared. In warning about a crisis that doesn’t yet exist, headlines obscure the damaging way the technology is actually being deployed: almost entirely to manufacture pornography.

A 2019 study by cybersecurity company Deeptrace Labs found that 96 percent of deepfakes involve sexually explicit scenes. There are thousands of clips in which the faces of celebrities, like Gal Gadot, Taylor Swift, Scarlett Johansson, Emma Watson, or even seventeen-year-old TikTok star Charli D’Amelio, have been superimposed onto the bodies of adult film stars. Porn deepfakes also feature the faces of nonfamous individuals—ex-wives, ex-girlfriends, high school crushes. This February, MIT Technology Review reported on a UK woman named Helen Mort who had been warned that she—or, rather, her face, lifted from various social media accounts—had surfaced on a porn site, pasted onto violent sex acts. Such deepfakes, it should be said, are often not aiming to fool viewers. For one thing, the technique frequently results in sloppy overlays with blurry edges and pixelated mouths. But realism isn’t the point. According to media scholar Milena Popova, porn deepfakes are almost always labelled fabrications, with some creators taking pride in them as a kind of fan fiction or media remix.

As with other forms of revenge porn, the ethical issues of consent and objectification make it clear that the footage need not be real to inflict real harm. (“It really makes you feel powerless,” Mort said, “like you’re being put in your place.”) Activists and legal scholars widely condemn the practice as a form of media-based sexual abuse. Andrea Werhun, a sex worker whose years in the industry led to the 2018 memoir Modern Whore, describes deepfakes to me as “misogyny in action.” Platforms have taken steps to moderate the videos, with many sites (including PornHub) banning deepfakes outright. Still, porn deepfakes remain abundant because they are so easy to share and reupload. Piracy is already a standard practice on porn aggregator sites, and deepfakes benefit from the resulting complacency around porn content theft.

Deepfakes don’t just sow humiliation and trauma among the unsuspecting women whose faces are appropriated; they also harm the sex workers who are digitally decapitated by the process. For Zahra Stardust, a fellow at the Berkman Klein Center for Internet and Society at Harvard Law School, deepfakes reflect a broader problem of sex workers losing control over their own images. Deepfakes, she says, are created to humiliate a person, but “the bodies they steal also belong to someone. They belong to a human being.” Sex workers produce these scenes for a living; being compensated for them is how they survive. Whether it’s filmed under contract or created DIY-style, like a cam show, porn that is altered and shared without the consent of the performers is a material as well as a moral affront. Deepfakes can be difficult to defeat from a defamation angle, so perhaps a more effective remedy would be to take porn seriously as a part of the digital economy and crack down on deepfaking as copyright infringement.

Like musicians, filmmakers, and writers, porn performers have rights to their creative output. Unlike other media industries, however, porn is the target of a stigma that makes it difficult to fight for better treatment. “Our culture has a fundamental disdain for anyone who makes sex public and explicit,” says Werhun. As a result, few are willing to stand up for the intellectual property of sex workers. And, as long as the porn industry is the subject of moral panic instead of measured discussion, says John Paul Stadler, a media scholar from North Carolina State University, “there will be a kind of wilful forgetting around the predominant use of deepfakes.” But this crisis is bigger than porn. If porn performers can have their content brazenly stolen and modified, anyone’s images are fair game. What the porn industry now faces could be an indicator of what we can all expect from platforms in the coming years.

Sex workers have long served as canaries in the online coal mine. From being at the forefront of technologies like online payment processing to confronting censorship tactics like deplatforming and shadowbanning (when social media companies block a user or their content without informing them), the porn industry has been a predictor of emerging internet norms. Interestingly, the performers I spoke to have no quarrel with deepfaking itself, only with its unethical application and the theft of source material. “It’s fine to see the emergence of a new type of porn production,” says Kate Sinclaire, a Winnipeg-based performer and producer with Ciné Sinclaire, “but the foundation remains the theft of our content.” If the components were fairly sourced, deepfakes could even offer an affordable template for wish fulfillment: commissioning a custom video, seeing yourself in a porn scene, or bringing unconventional desires to life would all be within reach. But, as long as piracy runs rampant in the industry and sharing nonconsensual images is normalized outside of it, responsible alternatives to deepfakes are unlikely to thrive.

Deepfakes are a new and powerful genre of digital media. They represent a creative practice with huge potential for satire and fantasy-building as well as the threat of disinformation. But continuing to frame the technology entirely by what we anticipate—political interference—detracts from our ability to engage with the reality of how deepfakes are being harnessed: to harm women, who are harassed by anonymous creators with no regard for consent, and to harm porn workers, who are made even more precarious without adequate legal protections. Emphasizing that deepfakes could target powerful men or promote political disinformation while disregarding the harm they currently cause is not only a failure to protect some of society’s most marginalized workers—it prophesies the way all of our images will be treated by platforms in the near future: as free for the taking.

Maggie MacDonald
Maggie MacDonald is a PhD student at the University of Toronto. You can follow her work at @internetmaggie on Twitter.