
AI policy and child safety – a Q&A with Onemi’s Jon Rouse
Jon Rouse
6 May, 2026
CameraForensics

Content warning | This blog post discusses child sexual abuse and exploitation, including child sexual abuse material (CSAM).
Perpetrators are exploiting online tools and platforms in their attempts to abuse children. However, policymakers have an opportunity to develop online safety legislation that keeps pace with these threats, holds perpetrators accountable for the criminal offences they commit, and destabilises the ecosystem of online platforms that enables them.
As of early 2026, the legislative landscape surrounding children's safety online continues to evolve. Here, we consider some of the key legislative updates from the UK and beyond, including the trends driving policy across the world.
In October 2023, the UK Government passed into law the Online Safety Act (OSA). This is a set of laws that aims to hold social media and search services (including online forums, instant messaging, and cloud storage sites) responsible for the safety of users on their platforms. These laws are designed to protect users of all ages, but focus especially on protecting children against age-inappropriate experiences and harmful content.
The UK Online Safety Act is being implemented with a phased approach, with updates being made as new online threats emerge and escalate. As of 2026, some of the key updates relate to:
AI chatbots and companions
In February 2026, the UK Government announced that it will require AI chatbot providers to comply with the Online Safety Act's duties regarding illegal content. These duties apply to online content relating to child sexual abuse, controlling and coercive behaviour, sexual violence, and other harms.
We recently wrote about the harms that AI chatbots and companions present to children’s safety, both on- and offline. To learn more about these harms, and why it’s so important that AI chatbot companies are held to account for enabling them, you can read our blog here: How is AI changing the scale and scope of online enticement?
Reporting duties
As of April 2026, user-to-user services in scope of the Act are subject to new reporting duties related to child sexual exploitation and abuse content (CSEA). This means that in-scope service providers must report “detected and unreported CSEA content” across their platforms to the National Crime Agency (NCA), instead of just removing it. This duty aims to equip law enforcement with intelligence to aid their investigations and protect more child victims.
In the same month, the European Union (EU) faced criticism for allowing an important legal basis – one that gave online service providers in the EU the legal ability to detect child sexual abuse material (CSAM) across their platforms – to expire. According to the Internet Watch Foundation (IWF), this could cause companies to face “legal uncertainty about whether they can search for and block child sexual abuse on their services.”
Thorn highlights the very real impact that this gap in legislation could have: during one seven-month period in which there was no legal basis for companies to detect CSAM on their sites, reports of such material to the National Center for Missing & Exploited Children (NCMEC) fell by almost 60%.
Many countries across the world are implementing, or considering, age-related restrictions on social media usage.
Australia led the way with its Online Safety Amendment (Social Media Minimum Age) Bill 2024, which came into effect in late 2025. The Bill puts the onus on in-scope social media services – including Facebook, Instagram, and X – to prevent under-16s in Australia from holding accounts across their platforms. This includes deactivating accounts that under-16s already hold and preventing them from creating new ones via age checks. The goal is to protect younger users against the harms they may face on these platforms, including addictive features and threats from online perpetrators.
A new update in March 2026 expanded the definition of age-restricted social media platforms (i.e. those in scope of the Social Media Minimum Age Bill). This now includes services with:
Other countries that are enforcing or considering age-related social media restrictions in 2026 include:
We explored some of the risks facing children on social media in our Child Safety Online Report 2025, from being targeted with grooming tactics to being exposed to adult content. You can download a copy of the report by sharing your details below.
Generative AI tools are being exploited by perpetrators in their attempts to harm children online. Their tactics vary, from using AI to generate child sexual abuse material to rehearsing grooming strategies with AI chatbots. As these crimes escalate, jurisdictions are attempting to keep pace with AI-focused online safety legislation.
One piece of legislation responding to the AI threat is the UK's Crime and Policing Bill, which was introduced in 2025 and is currently being examined. The generation, possession, and supply of AI-generated CSAM is already illegal under UK law, but this Bill will provide further protections by:
Other countries and jurisdictions have also taken steps to prevent AI misuse in 2026. Indonesia and Malaysia, for instance, both banned the AI chatbot Grok in January. In March, the EU proposed a wider ban on AI tools that generate non-consensual intimate material featuring real people.
To learn more about the AI policy landscape, we recommend reading AI, child safety, and the pace of global policy. In this Q&A with Jon Rouse APM, Founding Partner of Onemi-Global Solutions, we explore some of the policy gaps that are enabling AI misuse and failing to hold those responsible to account.
Jon also shares his thoughts on what a more effective legislative model – one that proactively protects children against AI harms – could look like:
“The structural challenge is that legislative cycles move in years and AI capability moves in months. What the next decade demands is a different model entirely: adaptive regulatory architecture that empowers agencies to issue binding safety standards without waiting for primary legislation every time the technology evolves.”
To be the first to read articles like these, we recommend signing up to our monthly newsletter The Source via the submission form below. Once a month, we’ll send you insights from the CameraForensics team and our partners, covering topics such as AI, law enforcement challenges, and child safety technology.