The dark reality of Stable Diffusion

8 February, 2024

By Dr Shaunagh Downing

Sophisticated open-source AI technologies for generating and altering images, such as Stable Diffusion and related tools, have emerged as a mechanism for harm, with their advanced capabilities being exploited to produce Child Sexual Abuse Material (CSAM) and other inappropriate and abusive material. 

This article delves into the misuse of Stable Diffusion for malicious purposes, and the urgent need for awareness and action against such exploitation.  

The inherent dangers of Stable Diffusion

We begin by unpacking the inherent dangers of Stable Diffusion that stem from its very design and the data it’s built upon. 

Unlimited potential  

There are numerous generative AI tools available for the creation and modification of images. Most of these tools, such as DALL-E and Adobe Firefly, involve some form of moderation and restrictions on the content you can create (although these have their faults too). Stable Diffusion stands in stark contrast: it is open-source, and therefore freely available to download and use offline without any inherent restrictions or moderation. In addition, a whole ecosystem of Stable Diffusion-based tools, apps, online guides and tutorials, and extra features is readily accessible to users.

While the potential of these tools for creating new art can appear harmless, their accessibility and ease of use grant anyone the power to create or modify an image exactly how they want, with no accountability. Users have complete freedom and privacy to create images of whatever, and whoever, they want. The result? The ability to create illegal and harmful content.

Problematic training data 

The images produced by generative AI tools are shaped by the images used to train them, i.e., the training data.

Stable Diffusion was trained on a subset of the LAION-5B dataset, a large-scale internet-collected set of images. The LAION dataset has been found to contain highly problematic content, including pornography, malign stereotypes, racist and ethnic slurs, and misogynistic representations of women, as well as illegal content, such as images of sexual abuse, rape and non-consensual explicit images. 

What’s more, a recent investigation has discovered that this same dataset also contained thousands of instances of child sexual abuse imagery, suggesting that Stable Diffusion (at least in its earlier versions) was potentially trained on CSAM.

Research has shown the influence of problematic training data on the images generated by Stable Diffusion. There are noticeable biases related to gender and ethnicity, as well as a propensity to generate inappropriate and unsafe content.    

The misuse of Stable Diffusion

We now turn our attention to the misuse of Stable Diffusion. From evidence that AI is being used to create CSAM, to the disproportionate targeting of women and girls using AI, we examine the evidence that this technology is causing harm.  

Stable Diffusion and child sexual abuse

An in-depth investigation by the Internet Watch Foundation (IWF) has shed light on the creation and distribution of AI-generated child sexual abuse imagery using open-source AI tools, in particular Stable Diffusion and tools based on Stable Diffusion.

This investigation found that more than 20,000 AI-generated CSAM images had been posted to a dark web forum in a single month. Alarmingly, most of these images were photorealistic enough to be mistaken for real photographs, even by trained analysts. 

For those dedicated to protecting victims of child sexual abuse, the emergence of AI-generated CSAM is an unprecedented challenge. The flood of AI-generated abuse material threatens to overwhelm investigators and slow down the process of identifying and safeguarding real victims of abuse.  

While some AI-generated child abuse material may depict non-existent children, there are many cases where the child depicted is real. Offenders are modifying existing images of real children and creating specialised models to depict the abuse of specific children. These methods are used to target children known to the perpetrator, to target famous children, and to re-victimise existing abuse victims.

Gendered impact 

It is important to recognise the gendered impact of AI-generated imagery. In the IWF report mentioned earlier, 99.6% of the AI-generated dark web images were of female children.  

Much of the Stable Diffusion ecosystem is dedicated to the creation of NSFW images, particularly sexualised and pornographic images of women and girls. A large number of AI generation guides, features, apps and websites based on Stable Diffusion are centred on this goal.

This includes tools used to “nudify” or “undress” images of women and girls, taking a fully-clothed image of a woman and recreating it so that the person appears nude. Many of these services only work on female bodies. In September alone, 24 million people visited undressing websites. These types of websites are easily accessible through a search engine and easy to use - you don’t need to access the dark web or be computer savvy to use them - effectively putting a powerful tool for misogynistic abuse within easy reach of anyone seeking it.

While non-consensual imagery has long been used to degrade and abuse women and girls, a new trend of creating fake non-consensual images has emerged with the rise of unmoderated generative AI tools. It is well known that female celebrities are often targeted with this type of abuse, but with the current widespread accessibility and ease of use of AI, the threat to ordinary women and girls has increased. There is a worrying pattern of teenage girls being targeted by their male classmates, who create and circulate AI-generated nude images of them, with highly publicised cases of this happening in Spain and the US.  

The real harms of fake images

A common misconception surrounding AI-generated CSAM and non-consensual material is the belief that because the content is “not real”, it’s somehow ethically acceptable. However, just because the images are not authentic does not mean that the images and their impact are not “real”. 

When an AI-generated explicit image contains the identity or likeness of a real person, it represents a huge violation of their privacy and autonomy. It doesn’t matter that the image is technically “fake”; the consequences for the victim are real. The image, which appears to be of the victim, can be widely believed to be an actual image of them, circulated online, and weaponised against them as a tool for harassment, bullying and blackmail. The impact of these types of images is just as profound as if the image were authentic.

Moreover, AI-generated abuse images that do not target specific individuals are still harmful. The proliferation of AI-generated CSAM normalises the sexual abuse of children, regardless of whether the image depicts a real child or not. Studies have shown that normalising such content can desensitise viewers to the abuse of children and potentially lead to physical offences. One study found that over a third (37%) of people who view CSAM online progressed to seeking direct contact with a child.

AI-generated CSAM has not replaced “real” abuse; it has added another dimension to it.

Looking to the future

As we confront the challenges posed by AI-generated abuse imagery, it is important to explore actionable strategies and to ensure the voices of those most affected are at the forefront of our efforts.  

The importance of listening to victims 

In exploring measures and strategies for preventing the misuse of Stable Diffusion and similar technologies, it’s crucial that the realities of what this technology is being used for are acknowledged, and that the harms caused are taken seriously. 

The misuse of AI-generated imagery is reminiscent of the issue of deepfake technology, where someone’s face is superimposed on a video of someone else. Much of the concern around deepfakes has centred on misinformation. While this is obviously a valid concern, it does not reflect the reality of what deepfake technology is largely being used for. Researchers at Sensity AI found that 96% of deepfakes online were sexually explicit and almost exclusively featured women who didn’t consent to the videos.

Similarly, with AI-generated imagery, effective solutions for mitigating abuse of the technology require an acknowledgement of what AI image generation is really being used for - harming children and women. This recognition must come from all corners - the tech industry, government, and society at large.

The voices of victims are essential in the move towards a meaningful solution. However, since many victims are children, who cannot advocate for themselves, it’s equally important to listen to those working in the child protection space. They provide a crucial perspective and can represent the interests and needs of these young victims. While events that discuss regulations, like the international AI safety summit, are a positive step, it’s important to ensure they genuinely tackle the concerns of victims and those working in the child protection space. 

Meaningfully engaging with those who’ve felt and seen the real-world consequences of AI imagery and deepfakes can help government initiatives recognise the harms these technologies cause and shape effective policies and regulations.

Mitigating misuse 

The advancement of AI must be underpinned by a strong ethical framework. This means considering the implications of AI tools before they are made public and prioritising the development of solutions that protect children and victims of non-consensual imagery. 

Big tech has a role to play in limiting access to harmful AI tools, both by delisting exploitative apps and websites and by automating the removal of non-consensual images. The burden of reporting such content shouldn’t fall on victims alone. Collaboration between the tech sector, government, and law enforcement is essential to establish laws that protect individuals from abusive AI-generated content.

Ultimately, by fostering collaboration, transparency, and empathy, we can collectively work toward a future where technology is a force for good, protecting victims of abuse while identifying those responsible.  

Addressing the future of AI 

At CameraForensics, we recognise the gravity of the dangers posed by generative AI tools like Stable Diffusion and are committed to understanding and mitigating these risks, working hand-in-hand with partners to support investigators responding to the latest challenges in their field. 

Our blog offers a wealth of insights into open-source intelligence, AI-generated imagery, and image forensics. Join us there to learn more about how our ongoing research and development projects are empowering professionals to navigate and respond effectively to the nuanced challenges of digital forensics.

