7 February, 2023
By Freddie Lichtenstein
2022 was the year of generative AI, marked by a wide range of innovations. Thanks to applications such as DALL-E, Stable Diffusion, and Midjourney, AI-generated imagery saw a sharp rise in popularity, becoming the subject of talk shows, news articles, and online debates.
With such a sharp rise in popularity, it’s clear that AI-generated imagery is here to stay. However, as it’s capable of generating hyper-realistic imagery, investigators have already identified AI-generated imagery as a threat, and for good reason. In this piece we explore how image generation has rapidly evolved, the dangers it poses to the CSAM landscape, and how investigators hope to fight back.
While there are several ways that AI can generate imagery, such as through the use of generative adversarial networks (GANs), the power of creating an image from an arbitrary text input has captured the imaginations of the online world. This has been possible due to two innovations – the introduction of CLIP, and the application of diffusion models.
Introduced to the public by OpenAI, CLIP is a model that learns the subtle relationship between text and imagery by embedding both into a shared space, enabling the transformation of text into visual cues, and vice versa. Developers can now harness this powerful relationship, pairing CLIP with an image-generation model to begin producing entirely synthesised imagery.
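To make that idea concrete, here is a toy sketch of how a shared embedding space lets a system score how well an image matches a caption. The vectors below are made up for illustration; the real CLIP encoders are large neural networks, and none of the names here come from the actual CLIP codebase.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # toy embedding size; real CLIP uses hundreds of dimensions

def normalize(v):
    # CLIP compares unit-length embeddings, so scale each vector to length 1.
    return v / np.linalg.norm(v)

# Hypothetical embeddings: a matching image lands close to its caption
# in the shared space, while an unrelated image lands somewhere random.
text_embedding = normalize(rng.normal(size=dim))
matching_image = normalize(text_embedding + 0.3 * normalize(rng.normal(size=dim)))
unrelated_image = normalize(rng.normal(size=dim))

def clip_score(a, b):
    # Pairs are ranked by cosine similarity, i.e. the dot product of
    # unit vectors: 1.0 means identical direction, 0.0 means unrelated.
    return float(a @ b)

match = clip_score(text_embedding, matching_image)
mismatch = clip_score(text_embedding, unrelated_image)
print(match > mismatch)  # the caption scores far higher against its own image
```

An image generator can exploit this score in reverse: rather than measuring a finished image, it steers its output towards whatever maximises similarity with the user's text prompt.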
Combining CLIP with a class of deep learning models called diffusion models has paved the way for a highly sophisticated yet straightforward algorithm. During training, diffusion works by progressively adding random noise to an image. The system then learns to reverse this process, carefully removing noise until it is left with an image that is consistent with the submitted text prompt. With this, users can create images of whatever they can think of in just a few seconds.
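The forward half of that process, gradually drowning an image in noise, can be sketched in a few lines. This is a deliberately simplified illustration: the "image" is a 1-D array, the noise schedule is arbitrary, and the learned reverse step (the part a trained neural network performs) is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 1-D "image" of 256 pixel values standing in for a real photo.
image = np.sin(np.linspace(0.0, 6.0, 256))

# Forward diffusion: at each step, blend in a little Gaussian noise.
# The beta schedule controls how much noise each step adds.
betas = np.linspace(0.0001, 0.02, 1000)
x = image.copy()
for beta in betas:
    noise = rng.normal(size=x.shape)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

# After enough steps the signal is essentially gone: what remains is
# close to pure unit-variance noise, uncorrelated with the original.
corr = np.corrcoef(image, x)[0, 1]
print(abs(corr) < 0.5)  # prints True: the original image is unrecoverable
```

Generation runs this film backwards: starting from pure noise, a trained network repeatedly predicts and subtracts the noise, with the text prompt guiding each step towards a coherent picture.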
Through continuous use and feedback, diffusion generators have become adept at producing both realistic and abstract images, even going as far as to add complex lines of perspective, symmetry, and meaning. Now that users can generate images of whatever they dream up, there are obvious ethical implications.
With AI image generators already taking the world by storm, many are turning their attention to what the future of digital imagery may hold.
Because these systems can generate a vast range of images, in some cases without the need for human interaction, many expect AI-generated images to eventually outnumber those created by humans.
It may not be clear yet how we intend to navigate this issue, but for the generators themselves, many experts foresee a stunting effect: as image generators ingest their own AI-generated images as training data, the systems may become polluted and reach a plateau in creativity.
Regardless of the future, it’s clear that we’ve reached a pivotal point in the history of AI (without even mentioning ChatGPT), leading many to divide this era into pre- and post-algorithm.
For investigators primarily focused on eliminating child exploitation imagery, AI-generated content represents a significant threat.
AI-generated imagery can potentially be used to exploit children in several ways.
One way is through the creation of non-consensual explicit imagery, similar to deepfake imagery, but much more universal. In these cases, AI algorithms are used to generate or manipulate images or videos that depict someone engaging in sexual activity without their consent, often using real images of people as a reference. These images can then be distributed online, causing significant harm to the individuals depicted in them.
Another danger of AI-generated imagery lies in its sheer volume. As well as imagery becoming steadily more realistic, the volume of generated imagery may force investigators to spend an inordinate amount of time identifying and classifying harmful imagery, and determining whether it is genuine or generated. This has a detrimental effect on the speed at which victims can receive the support they need.
AI-generated imagery is also a cause for concern because of how it can perpetuate harmful behaviour by providing another route to accessing harmful content. This new avenue will make it harder to intervene to disrupt problematic behaviour, and much trickier to provide support to those who need it.
AI-generated imagery’s sharp rise in popularity has prompted many necessary discussions about ethics, ownership, and legality.
AI-generated imagery has already raised serious ethical issues, such as when an AI art piece won first place at the Colorado State Fair’s annual art competition, or when artists accused Stable Diffusion of stealing their artwork for training purposes.
AI-generated sexual imagery has also been the subject of scrutiny and debate, with users debating the inclusion of NSFW imagery in Stable Diffusion. The legality of creating and distributing generated non-consensual explicit imagery and child abuse imagery, whether or not it depicts a real individual, varies by jurisdiction.
In these situations, the individual or individuals who create and distribute this type of content are responsible for their actions – and can be held legally accountable for them. As AI imagery continues to become more prevalent in daily life, we expect further discussions to tackle this ongoing challenge.
While AI-generated imagery is a relatively new field, the waves that it has already caused throughout the worlds of art, science, and ethics are immense.
At CameraForensics, we fully recognise the potential risks associated with AI-generated imagery in its various forms. Working closely with a network of global partners, we aim to continue aiding investigators in responding to the latest challenges in their field.
Discover more, and find out how our R&D projects are empowering investigators in the face of numerous challenges, in our full range of digital forensics insights.