British Tech Firms and Child Protection Agencies to Examine AI's Capability to Generate Exploitation Images
Tech firms and child protection agencies will receive permission to evaluate whether AI tools can generate child exploitation images under new British legislation.
Significant Rise in AI-Generated Illegal Content
The announcement coincided with revelations from a safety monitoring body showing that cases of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, authorities will permit approved AI companies and child protection groups to inspect AI models – the foundational systems behind conversational AI and visual AI tools – to ensure they have sufficient safeguards to stop them from creating images of child sexual abuse.
"This is ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the danger in AI systems promptly."
Tackling Legal Challenges
The amendments have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties could not attempt to create such images as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM was published online before addressing it.
This legislation is designed to prevent that problem by enabling authorities to stop the production of such material at its source.
Legislative Framework
The government is introducing the amendments as modifications to criminal justice legislation, which also implements a prohibition on owning, creating or distributing AI systems developed to create child sexual abuse material.
Practical Impact
Recently, the official visited the London headquarters of Childline and heard a simulated call to counsellors involving an account of AI-based exploitation. The interaction depicted a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.
"When I learn about young people facing blackmail online, it is a source of intense frustration for me and of rightful concern amongst parents," he said.
Concerning Data
A leading internet monitoring foundation reported that cases of AI-generated exploitation material – where a single case can refer to an online page containing numerous images – had more than doubled so far this year.
Instances of the most severe content – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of illegal AI depictions in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Industry Reaction
The law change could "constitute a crucial step to guarantee AI products are secure before they are released," commented the chief executive of the online safety organization.
"AI tools have made it so survivors can be victimised all over again with just a few simple actions, giving criminals the capability to create potentially limitless amounts of sophisticated, photorealistic exploitative content," she added. "Material that further commodifies survivors' suffering, and makes young people, especially girls, less safe both online and offline."
Support Interaction Information
The children's helpline also published details of support interactions where AI has been referenced. AI-related harms discussed in the sessions include:
- Using AI to rate weight, body and looks
- Chatbots dissuading children from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Online blackmail using AI-faked images
Between April and September this year, Childline conducted 367 support interactions where AI, chatbots and associated topics were discussed – significantly more than in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.