
What is online child sexual extortion (‘sextortion’)?
CameraForensics
5 May 2026
Jon Rouse

Content warning | This blog post discusses child sexual abuse and exploitation, including child sexual abuse material (CSAM).
Keeping pace with perpetrators who exploit and abuse children has always been a challenge for policymakers. However, the use of AI to amplify and escalate these crimes is adding a whole new layer of complexity.
This raises the question: how prepared are legislative frameworks across the globe to prevent AI misuse? And crucially, what does this mean for the frontline investigators tasked with identifying victims, arresting perpetrators, and securing convictions?
Here, we speak to Jon Rouse APM, Founding Partner of Onemi-Global Solutions, to help us answer these important questions.
The internet has fundamentally changed the geography of crime. In many investigations the offender might be in one country, the victim in another, and the platform or hosting infrastructure somewhere else entirely.
Jurisdiction is one of the most persistent challenges here. Police powers generally stop at national borders, but the crime itself does not. Different countries also have different legal thresholds, definitions of offences and investigative powers. That can create situations where investigators know an offence has occurred, but the legal pathway to obtain evidence or initiate prosecution becomes complicated and slow.
Legislative misalignment between countries is another pressing challenge. Countries differ in how they define child sexual abuse material, grooming offences, livestreaming abuse or possession offences, for instance. These inconsistencies can create policy gaps that offenders exploit, particularly when operating within online communities that involve participants from many countries.
These legislative gaps are especially pronounced for AI-enabled crimes, which are emerging and escalating quickly.
Artificial intelligence is transforming child sexual exploitation at a speed that is outpacing the law, and perpetrators are exploiting specific gaps that many current laws were never designed to close.
The most fundamental of these gaps is the distinction between material depicting a real, identifiable child and material that is entirely AI-generated. Where no real child can be identified, most jurisdictions fall back on obscenity law rather than child protection statutes, producing weaker penalties, different evidentiary standards, and inconsistent outcomes.
There are clear examples of the consequences of this already. Purpose-built platforms for generating abuse material are circulating on the dark web, and the training datasets underpinning both commercial and academic AI systems have been confirmed on multiple occasions to contain known abuse imagery,[1] yet no AI developer or dataset holder has been held criminally liable for that contamination anywhere in the Five Eyes.
For more context: A guide to AI-generated CSAM for investigators of online exploitation
First things first, we need policies that close the sentencing disparity between AI-generated and camera-captured abuse material. The harm is equivalent, and the law must say so plainly. Secondly, the tools that enable CSAM generation must be criminalised in their own right, not just for the output they produce. England and Wales has led on this with the Crime and Policing Bill, and others, including Australia, are following.
We also need to extend the liability of AI developers and the platforms that deploy, develop or maintain AI tools. This means not relying on voluntary compliance, but putting in place mandatory reporting obligations, requirements for auditing training data, and removing immunity protections for any platforms that host AI-generated abuse content.
This is just a snapshot of the change we need to see. The structural challenge is that legislative cycles move in years and AI capability moves in months. What the next decade demands is a different model entirely: adaptive regulatory architecture that empowers agencies to issue binding safety standards without waiting for primary legislation every time the technology evolves.
The hard work lies in designing the accountability mechanisms that make powerful agency discretion democratically legitimate.
You might find interesting: Collaborating to combat child sexual abuse – an international challenge
We need to pay attention to the factors that are amplifying AI abuse, but for which global legislative frameworks are almost entirely unprepared.
One of these is that offenders are downloading open-source models to generate child sexual abuse material. When a commercial platform generates harmful content, there is an identifiable company, a term of service agreement, a legal address, and potentially a reporting obligation. When an open-source model is downloaded, fine-tuned on abuse imagery, and run locally on a personal device, none of those accountability mechanisms exist.
I’m also concerned about the functional economy being created around AI-generated abuse material. Cryptocurrency and privacy-preserving payment tools have created a financial layer that is largely invisible to the transaction monitoring obligations that apply to conventional payment processors.
The legislative focus on content and platforms has not been matched by equivalent attention to the financial infrastructure that makes this a viable criminal enterprise. Until the money is as visible as the material, enforcement will remain incomplete.
AI-generated CSAM is not just being created but is being deployed as a grooming instrument. It can be used to lure real children into abusive activities: synthetic material is presented as evidence that certain behaviours are normal or common, reducing inhibition and creating a pathway toward contact offending. Legislation focused solely on production and possession misses this functional dimension entirely.
What’s more, many legislative frameworks still depend on establishing human intent. A person creates, distributes, or possesses material, and criminal liability attaches to that act. As AI systems become increasingly agentic, capable of initiating sequences of action with minimal or no real-time human direction, that intent architecture becomes harder to sustain.
An offender who configures an AI agent to help them commit crimes autonomously is a different legal problem from one who sits at a keyboard and makes deliberate choices. No jurisdiction has yet built a framework that adequately addresses this, and the technology is not waiting.
The threshold for investigative collapse is moving. This is perhaps the most underappreciated systemic risk, and one we have been raising the alarm about for some time.
The proliferation of AI-generated content creates unprecedented challenges for law enforcement, requiring officers to distinguish between real CSAM and synthetic abuse images. The sheer volume threatens to overwhelm existing systems designed to identify and rescue victims from abuse situations. Yet, no legislative or policy reform has yet confronted this directly.
The through-line across all these factors is the same: reactive legislation is structurally inadequate for the environment we are now in. The question for policymakers, industry, and civil society alike is this: are we willing to build the adaptive, anticipatory regulatory architecture that this moment demands? Or will we continue to pass laws in response to the last crisis while the next one is already underway?
The ‘thin blue line’ is already stretched beyond its capacity, and this is a strategic issue that all law enforcement agency leaders need to address urgently to protect the well-being of the men and women at the sharp end of the spear.
AI-enabled crimes against children continue to escalate – and so does the need for comprehensive and consistent legislation to combat them. To learn more, download our free Child Safety Online Report 2026 below.
Inside, you’ll learn about the AI-enabled tactics that perpetrators are using today, the challenges they pose to investigators, and how policymakers and tech platforms can shape a safer world for children.