
AI porn is easy to make now. For women, that’s a nightmare.

Easy access to AI imaging gives abusers new tools to target women

February 13, 2023 at 6:00 a.m. EST
(Illustration by Elena Lacey/The Washington Post; iStock)

QTCinderella built a name for herself by gaming, baking and discussing her life on the video-streaming platform Twitch, drawing hundreds of thousands of viewers at once. She pioneered “The Streamer Awards” to honor other high-performing content creators and recently appeared in a coveted guest spot in an esports champion series.

Nude photos aren’t part of the content she shares, she says. But someone on the internet made some, using QTCinderella’s likeness in computer-generated porn. This month, prominent streamer Brandon Ewing admitted to viewing those images on a website containing thousands of other deepfakes, drawing attention to a growing threat in the AI era: The technology creates a new tool to target women.

“For every person saying it’s not a big deal, you don’t know how it feels to see a picture of yourself doing things you’ve never done being sent to your family,” QTCinderella said in a live-streamed video.

Like most streamers, QTCinderella goes by her handle rather than her real name. She did not respond to a request for comment, but she noted in her live stream that addressing the incident has been “exhausting” and shouldn’t be part of her job.

Until recently, making realistic AI porn took computer expertise. Now, thanks in part to new, easy-to-use AI tools, anyone with access to images of a victim’s face can create realistic-looking explicit content with an AI-generated body. Incidents of harassment and extortion are likely to rise, abuse experts say, as bad actors use AI models to humiliate targets ranging from celebrities to ex-girlfriends — even children.

Women have few ways to protect themselves, experts say, and victims have little recourse.

As of 2019, 96 percent of deepfakes on the internet were pornography, according to an analysis by AI firm DeepTrace Technologies, and virtually all pornographic deepfakes depicted women. The presence of deepfakes has ballooned since then, while the response from law enforcement and educators lags behind, said law professor and online abuse expert Danielle Citron. Only three U.S. states have laws addressing deepfake porn.

“This has been a pervasive problem,” Citron said. “We nonetheless have released new and different [AI] tools without any recognition of the social practices and how it’s going to be used.”

The research lab OpenAI made waves in 2022 by opening its flagship image-generation model, Dall-E, to the public, sparking delight and concerns about misinformation, copyrights and bias. Competitors Midjourney and Stable Diffusion followed close behind, with the latter making its code available for anyone to download and modify.


Abusers didn’t need powerful machine learning to make deepfakes: “Face swap” apps available in the Apple and Google app stores already made it easy to create them. But the latest wave of AI makes deepfakes more accessible, and the models can be hostile to women in novel ways.

Since these models learn what to do by ingesting billions of images from the internet, they can reflect societal biases, sexualizing images of women by default, said Hany Farid, a professor at the University of California at Berkeley who specializes in analyzing digital images. As AI-generated images improve, Twitter users have asked whether the images pose a financial threat to consensually made adult content, such as that sold on OnlyFans, where performers willingly show their bodies or perform sex acts.

Meanwhile, AI companies continue to follow the Silicon Valley “move fast and break things” ethos, opting to deal with problems as they arise.

“The people developing these technologies are not thinking about it from a woman’s perspective, who’s been the victim of nonconsensual porn or experienced harassment online,” Farid said. “You’ve got a bunch of White dudes sitting around like ‘Hey, watch this.’”

Deepfakes’ harm is amplified by the public response

People viewing explicit images of you without your consent — whether those images are real or fake — is a form of sexual violence, said Kristen Zaleski, director of forensic mental health at Keck Human Rights Clinic at the University of Southern California. Victims are often met with judgment and confusion from their employers and communities, she said. For example, Zaleski said she’s already worked with a small-town schoolteacher who lost her job after parents learned about AI porn made in the teacher’s likeness without her consent.

“The parents at the school didn’t understand how that could be possible,” Zaleski said. “They insisted they didn’t want their kids taught by her anymore.”

The growing supply of deepfakes is driven by demand: Following Ewing’s apology, a flood of traffic to the website hosting the deepfakes caused the site to crash repeatedly, said independent researcher Genevieve Oh. The number of new videos on the site almost doubled from 2021 to 2022 as AI imaging tools proliferated, she said. Deepfake creators and app developers alike make money from the content by charging for subscriptions or soliciting donations, Oh found, and Reddit has repeatedly hosted threads dedicated to finding new deepfake tools and repositories.

Asked why it hasn’t always promptly removed these threads, a Reddit spokeswoman said the platform is working to improve its detection system. “Reddit was one of the earliest sites to establish sitewide policies that prohibit this content, and we continue to evolve our policies to ensure the safety of the platform,” she said.

Machine learning models can also spit out images depicting child abuse or rape and, because no one was harmed in the making, such content wouldn’t violate any laws, Citron said. But the availability of those images may fuel real-life victimization, Zaleski said.

Some generative image models, including Dall-E, come with boundaries that make it difficult to create explicit images. OpenAI minimizes the nude images in Dall-E’s training data, blocks people from entering certain requests and scans output before showing it to the user, lead Dall-E researcher Aditya Ramesh told The Washington Post.

Another model, Midjourney, uses a combination of blocked words and human moderation, said founder David Holz. The company plans to roll out more advanced filtering in coming weeks that will better account for the context of words, he said.

Stability AI, maker of the model Stable Diffusion, stopped including porn in the training data for its most recent releases, significantly reducing bias and sexual content, said founder and CEO Emad Mostaque.

But users have been quick to find workarounds by downloading modified versions of the publicly available code for Stable Diffusion or finding sites that offer similar capabilities.

No guardrail will be 100 percent effective in controlling a model’s output, said Berkeley’s Farid. AI models depict women with sexualized poses and expressions because of pervasive bias on the internet, the source of their training data, regardless of whether nudes and other explicit images have been filtered out.

Social media has been flooded by AI-generated images produced by an app called Lensa. Tech reporter Tatum Hunter addresses both the craze and the controversy. (Video: Monica Rodman/The Washington Post)

For example, the app Lensa, which shot to the top of app charts in November, creates AI-generated self-portraits. Many women said the app sexualized their images, giving them larger breasts or portraying them shirtless.

Lauren Gutierrez, a 29-year-old from Los Angeles who tried Lensa in December, said she fed it publicly available photos of herself, such as her LinkedIn profile picture. In turn, Lensa rendered multiple naked images.

Gutierrez said she felt surprised at first. Then she felt nervous.

“It almost felt creepy,” she said. “Like if a guy were to take a woman’s photos that he just found online and put them into this app and was able to imagine what she looks like naked.”

For most people, removing their presence from the internet to avoid the risks of AI abuse isn’t realistic. Instead, experts urge you to avoid consuming nonconsensual sexual content and to familiarize yourself with the ways it affects the mental health, careers and relationships of its victims.

They also recommend talking to your children about “digital consent.” People have a right to control who sees images of their bodies — real or not.
