David Evan Harris
Why are AI companies, valued in the millions and billions of dollars, creating and distributing tools that can make AI-generated child sexual abuse material (CSAM)?
An image generator called Stable Diffusion version 1.5, which was created by the AI company Runway with funding from Stability AI, has been particularly implicated in the production of CSAM. And popular platforms such as Hugging Face and Civitai have been hosting that model and others that may have been trained on real images of child sexual abuse. In some cases, companies may even be breaking laws by hosting synthetic CSAM on their servers. And why are mainstream companies and investors like Google, Nvidia, Intel, Salesforce, and Andreessen Horowitz pumping hundreds of millions of dollars into these companies? Their support amounts to subsidizing content for pedophiles.
As AI safety experts, we’ve been asking these questions to call out these companies and pressure them to take the corrective actions we outline below. And we’re happy today to report one major triumph: seemingly in response to our questions, Stable Diffusion version 1.5 has been removed from Hugging Face. But there’s much still to do, and meaningful progress may require legislation.
The Scope of the CSAM Problem
Child safety advocates began ringing the alarm bell last year: Researchers at Stanford’s Internet Observatory and the technology nonprofit Thorn published a troubling report in June 2023. They found that broadly available and “open-source” AI image-generation tools were already being misused by malicious actors to make child sexual abuse material. In some cases, bad actors were making their own custom versions of these models (a process known as fine-tuning) with real child sexual abuse material to generate bespoke images of specific victims.
Last October, a report from the U.K. nonprofit Internet Watch Foundation (which runs a hotline for reports of child sexual abuse material) detailed the ease with which malicious actors are now making photorealistic AI-generated child sexual abuse material, at scale. The researchers included a “snapshot” study of one dark web CSAM forum, analyzing more than 11,000 AI-generated images posted in a one-month period; of those, nearly 3,000 were judged severe enough to be classified as criminal. The report urged stronger regulatory oversight of generative AI models.
AI models can be used to create this material because they’ve seen examples before. Researchers at Stanford discovered last December that one of the most significant data sets used to train image-generation models included thousands of pieces of CSAM. Many of the most popular downloadable open-source AI image generators, including Stable Diffusion version 1.5, were trained using this data. That version of Stable Diffusion was created by Runway, though Stability AI paid for the computing power to produce the data set and train the model, and Stability AI released the subsequent versions.
Runway did not respond to a request for comment. A Stability AI spokesperson emphasized that the company did not release or maintain Stable Diffusion version 1.5, and said that the company has “implemented robust safeguards” against CSAM in subsequent models, including the use of filtered data sets for training.
Also last December, researchers at the social media analytics firm Graphika found a proliferation of dozens of “undressing” services, many based on open-source AI image generators, likely including Stable Diffusion. These services allow users to upload clothed pictures of people and produce what experts term nonconsensual intimate imagery (NCII) of both minors and adults, also sometimes referred to as deepfake pornography. Such websites can be easily found through Google searches, and users can pay for the services using credit cards online. Many of these services work only on women and girls, and such tools have been used to target female celebrities like Taylor Swift and politicians like U.S. Representative Alexandria Ocasio-Cortez.
AI-generated CSAM has real effects. The child safety ecosystem is already overtaxed, with millions of files of suspected CSAM reported to hotlines annually. Anything that adds to that torrent of content—especially photorealistic abuse material—makes it more difficult to find children who are actively in harm’s way. Making matters worse, some malicious actors are using existing CSAM to generate synthetic images of these survivors—a horrific re-violation of their rights. Others are using the readily available “nudifying” apps to create sexual content from benign imagery of real children, and then using that newly generated content in sexual extortion schemes.
One Victory Against AI-Generated CSAM
Based on the Stanford investigation from last December, it’s well-known in the AI community that Stable Diffusion 1.5 was trained on child sexual abuse material, as was every other model trained on the LAION-5B data set. These models are being actively misused by malicious actors to make AI-generated CSAM. And even when they’re used to generate more benign material, their use inherently revictimizes the children whose abuse images went into their training data. So we asked the popular AI hosting platforms Hugging Face and Civitai why they hosted Stable Diffusion 1.5 and derivative models, making them available for free download.
It’s worth noting that Jeff Allen, a data scientist at the Integrity Institute, found that Stable Diffusion 1.5 was downloaded from Hugging Face over 6 million times in the past month, making it the most popular AI image generator on the platform.
When we asked Hugging Face why it has continued to host the model, company spokesperson Brigitte Tousignant did not directly answer the question, but instead stated that the company doesn’t tolerate CSAM on its platform, that it incorporates a variety of safety tools, and that it encourages the community to use the Safe Stable Diffusion model that identifies and suppresses inappropriate images.
Then, yesterday, we checked Hugging Face and found that Stable Diffusion 1.5 is no longer available. Tousignant told us that Hugging Face didn’t take it down, and suggested that we contact Runway—which we did, again, but we have not yet received a response.
It’s undoubtedly a success that this model is no longer available for download from Hugging Face. Unfortunately, it’s still available on Civitai, as are hundreds of derivative models. When we contacted Civitai, a spokesperson told us that they have no knowledge of what training data Stable Diffusion 1.5 used, and that they would only take it down if there was evidence of misuse.
Platforms should be getting nervous about their liability. This past week saw the arrest of Pavel Durov, CEO of the messaging app Telegram, as part of an investigation related to CSAM and other crimes.
What’s Being Done About AI-Generated CSAM
The steady drumbeat of disturbing reports and news about AI-generated CSAM and NCII hasn’t let up. While some companies are trying to improve their products’ safety with the help of the Tech Coalition, what progress have we seen on the broader issue?
In April, Thorn and All Tech Is Human announced an initiative to bring together mainstream tech companies, generative AI developers, model hosting platforms, and more to define and commit to Safety by Design principles, which put preventing child sexual abuse at the center of the product development process. Ten companies (including Amazon, Civitai, Google, Meta, Microsoft, OpenAI, and Stability AI) committed to these principles, and several others joined in to co-author a related paper with more detailed recommended mitigations. The principles call on companies to develop, deploy, and maintain AI models that proactively address child safety risks; to build systems to ensure that any abuse material that does get produced is reliably detected; and to limit the distribution of the underlying models and services that are used to make this abuse material.
These kinds of voluntary commitments are a start. Rebecca Portnoff, Thorn’s head of data science, says the initiative seeks accountability by requiring companies to issue reports about their progress on the mitigation steps. It’s also collaborating with standard-setting institutions such as IEEE and NIST to integrate their efforts into new and existing standards, opening the door to third-party audits that would “move past the honor system,” Portnoff says. Portnoff also notes that Thorn is engaging with policymakers to help them conceive legislation that would be both technically feasible and impactful. Indeed, many experts say it’s time to move beyond voluntary commitments.
We believe that there is a reckless race to the bottom currently underway in the AI industry. Companies are fighting so furiously to be in the technical lead that many of them are ignoring the ethical and possibly even legal consequences of their products. While some governments—including the European Union—are making headway on regulating AI, they haven’t gone far enough. If, for example, laws made it illegal to provide AI systems that can produce CSAM, tech companies might take notice.
The reality is that while some companies will abide by voluntary commitments, many will not. And of those that do, many will take action too slowly, either because they’re not ready or because they’re struggling to keep their competitive advantage. In the meantime, malicious actors will gravitate to those services and wreak havoc. That outcome is unacceptable.
What Tech Companies Should Do About AI-Generated CSAM
Experts saw this problem coming from a mile away, and child safety advocates have recommended common-sense strategies to combat it. If we miss this opportunity to do something to fix the situation, we’ll all bear the responsibility. At a minimum, all companies, including those releasing open source models, should be legally required to follow the commitments laid out in Thorn’s Safety by Design principles:
- Detect and remove CSAM from their training data sets, and report that material, before training their generative AI models.
- Incorporate robust watermarks and content provenance systems into their generative AI models so generated images can be linked to the models that created them, as would be required under a California bill that would create Digital Content Provenance Standards for companies that do business in the state. The bill will likely reach Governor Gavin Newsom in the coming month, and advocates hope he will sign it.
- Remove from their platforms any generative AI models that are known to be trained on CSAM or that are capable of producing CSAM. Refuse to rehost these models unless they’ve been fully reconstituted with the CSAM removed.
- Identify models that have been intentionally fine-tuned on CSAM and permanently remove them from their platforms.
- Remove “nudifying” apps from app stores, block search results for these tools and services, and work with payment providers to block payments to their makers.
There is no reason why generative AI needs to aid and abet the horrific abuse of children. But we will need all tools at hand—voluntary commitments, regulation, and public pressure—to change course and stop the race to the bottom.
The authors thank Rebecca Portnoff of Thorn, David Thiel of the Stanford Internet Observatory, Jeff Allen of the Integrity Institute, Ravit Dotan of TechBetter, and the tech policy researcher Owen Doyle for their help with this article.