AI image detection for investigators – a quick guide

29 January, 2026

Fred Lichtenstein

Perpetrators are now using AI tools to create and manipulate child sexual abuse material (CSAM) at an astonishing rate. In fact, the Internet Watch Foundation (IWF) found that reports of AI-generated CSAM doubled between 2024 and 2025, comparing the same 10-month period in each year. What’s more, some AI-generated abuse material is almost indistinguishable from camera-captured material. 

Clearly, the need for robust AI image detection technology is escalating – and quickly. Here’s why, and what you need to know about AI image detection capabilities for investigators.  

 

How do AI image detectors work? 

AI image detection tools help investigators to identify media that has been generated or manipulated with AI. This is their fundamental goal, but they operate in different ways depending on their intended use cases.  

For instance, some detectors work by identifying visual clues of manipulation – such as inconsistencies in perspective or unusual artefacts in the fine details. These clues may be so subtle that they aren’t easily spotted by the human eye. 

Other detectors, such as those designed for investigators, go beyond looking for these visual clues. These typically work by identifying forensic signals within the image itself, such as a ‘stamp’ left behind by the generative AI tool used to create it. Images created or altered by AI typically contain forensic signals like these, and even if they are not visible to investigators, AI image detection tools can be built to recognise them. 

Moving forward, investigators will also need tools that can detect AI-generated video. This is an escalating threat, with the IWF reporting that 1,286 AI-generated CSAM videos were discovered in the first half of 2025, compared to just two in the same period the previous year. What’s more, the organisation said: 

“All the AI videos confirmed by the IWF so far this year have been so convincing they had to be treated under UK law exactly as if they were genuine footage.” 

Consider reading next: A guide to AI-generated CSAM for investigators of online exploitation 

 

Why do LEAs need AI image detection tools? 

 
The rise of AI-generated child sexual abuse material presents many complex challenges for investigators today. However, AI image detection tools could help to overcome some of the most pressing ones, empowering investigators to:

1. Detect AI-generated images quickly  

Perpetrators can download powerful diffusion models, train them on their chosen datasets, and use them in offline environments with no moderation. As such, they can generate increasingly realistic AI depictions, at scale and very quickly. This leaves investigators with potentially vast datasets to analyse.  

With robust AI image detection tools, investigators don’t need to sift through hundreds, if not thousands, of media files manually. They also don’t need to rely on judgement alone to identify the signs of manipulation by AI. 

2. Uncover intelligence for offender identification 

AI image detection tools that identify forensic signals can give investigators valuable insight into how abusive material was created. For instance, they can uncover the ‘stamps’ left behind by the generative AI tools used, or identify the specific models involved. Some can even extract data such as the exact words used to prompt them.  

These can all act as clues to help investigators identify who is responsible for the imagery and assess the intentions behind it. 

3. Assess risks to victims 

With intelligence from AI detectors, investigators can begin to prioritise their caseloads and resources more effectively – for instance, by making sure that they aren’t directing resources towards a fictional child, or mistakenly discounting images of a real child as AI-generated. 

They can use the technology to identify when images have been completely generated by AI, or when images include elements of AI manipulation. For instance, benign images of children that have had abusive elements inserted, or existing CSAM that has been altered. 

Related: Unveiling the challenges of AI-generated CSAM 

What investigators need from detectors 

Many cameras and mobile phones now have built-in AI tools that enhance or edit images. This is mainly for benign and legitimate reasons, such as brightening subjects or removing objects in the background. 

This changes the questions that investigators need to answer when analysing images. The biggest questions aren’t necessarily about whether the material has been manipulated with AI tools, but about consent, provenance, and intent. 

This also informs what investigators need from their detection capabilities.  

Recently, we spoke to Jon Rouse, Founding Partner of Onemi-Global Solutions, about the requirements of AI detection technology for investigators. During the Q&A, he said: 

“What investigators need most from an AI detection tool is clarity – not just a score or a label. Enough insight to make confident, defensible decisions about risk and next steps.” 

To achieve this clarity, and to help them answer the questions needed to move their investigations forward, LEAs need tools that: 

1. Empower explainable decision-making 

Investigators need to be able to justify their decisions to their colleagues, partner agencies, or in court with full transparency. This includes when detecting AI images and deciding upon their next steps. 

With this in mind, they need a detection tool that gives them clear, explainable outputs. Investigators need to understand why the tool has reached its conclusion and how confident it is, while remaining able to override the system if its outputs are inaccurate.  

2. Provide localisation and visual cues 

For investigators to distinguish between fully or partially AI-generated content, they need a tool with localisation capabilities. This means a tool that can highlight which part of an image appears to have been altered with AI. For instance, the victim’s face or body.  
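What might an explainable, localised output look like in practice? The sketch below is purely hypothetical – not any particular product’s API – but it captures the pieces investigators need: a verdict, a confidence figure, human-readable reasons, and localisation boxes marking regions that appear AI-altered.

```python
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    """Hypothetical structure for an explainable detector output."""
    verdict: str                       # e.g. "ai_generated", "ai_edited", "camera_original"
    confidence: float                  # 0.0 to 1.0
    signals: list                      # human-readable reasons behind the verdict
    regions: list = field(default_factory=list)  # (x, y, width, height) boxes flagged as altered

    def summary(self) -> str:
        """One-line, defensible explanation of the finding."""
        reasons = "; ".join(self.signals) if self.signals else "no signals recorded"
        where = f", {len(self.regions)} region(s) localised" if self.regions else ""
        return f"{self.verdict} ({self.confidence:.0%} confidence{where}): {reasons}"
```

For example, `DetectionResult("ai_edited", 0.87, ["diffusion-model fingerprint in face region"], [(120, 40, 64, 64)])` summarises as an “ai_edited” finding at 87% confidence with one localised region – a score plus the reasoning behind it, rather than a score alone.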

3. Support confident case prioritisation 

Robust AI image detectors should also help investigators to prioritise the highest-risk content – for instance, content that contains real victims who could be facing immediate danger. 

Similarly, the tool should be able to identify similarities across multiple files. For instance, a signature detail that can be attributed to one perpetrator. Being able to surface these connections can help investigators to direct their resources as effectively as possible.  
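Once a detector has extracted a signature from each file, the cross-file linking described above can be as simple as grouping files that share one. A toy sketch – the fingerprint strings are invented for illustration:

```python
from collections import defaultdict

def group_by_fingerprint(analysed_files):
    """Group file names by a shared fingerprint string, keeping only
    clusters of two or more files - the ones suggesting a common source."""
    groups = defaultdict(list)
    for filename, fingerprint in analysed_files:
        groups[fingerprint].append(filename)
    return {fp: names for fp, names in groups.items() if len(names) > 1}
```

For example, analysing `[("a.png", "model-X"), ("b.png", "model-X"), ("c.png", "model-Y")]` surfaces only the `"model-X"` cluster, pointing investigators at the files most likely to share an origin.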

You might find interesting: Image forensics tools need to evolve with digital threats. Here’s why 

 

The challenges of detection tools 

Tools that detect AI images are becoming essential for investigations into crimes against children. Nevertheless, as with any technology, users need to be aware of their potential challenges and limitations.  

For instance, AI tools can misclassify media, such as by flagging camera-captured images as AI-generated and vice versa. This could lead investigators to overlook content that contains a real child facing real danger. As such, these tools still need a human to interpret their findings and validate them within the right operational context.  

We’ve been working with LEAs to understand these risks and develop practical workflows to support them. We’re proud to be developing our own AI image detection capabilities rooted in this understanding, and are dedicated to empowering investigators with the intelligence needed to protect more children. 

That’s all we’re able to share for now, but we look forward to updating you when we can!  

Detecting AI CSAM – A Q&A with Onemi’s Jon Rouse 

We’ve shared some of Jon’s insights into AI image detection throughout this article, but you can read the full Q&A with him in Detecting AI CSAM – a vital investigative capability. Here, Jon dives deeper into the vital need for this technology, how it might evolve in the future, and why investigators’ own judgement is still incredibly important. 

To be the first to receive insights like these, why not sign up to The Source? Our monthly newsletter sends the articles from our team and partners straight to your inbox.


Subscribe to the Newsletter