British Technology Firms and Child Safety Agencies to Test AI's Ability to Generate Abuse Images
Technology companies and child safety organizations will receive authority to evaluate whether AI tools can produce child abuse material under recently introduced UK legislation.
Significant Rise in AI-Generated Harmful Material
The declaration coincided with findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the government will allow designated AI developers and child safety groups to inspect AI models – the foundational systems behind chatbots and image-generation tools – and verify that they have sufficient safeguards to stop them from producing depictions of child sexual abuse.
"Ultimately about stopping exploitation before it occurs," stated Kanishka Narayan, noting: "Experts, under rigorous protocols, can now detect the risk in AI systems promptly."
Tackling Legal Obstacles
The amendments address a legal obstacle: because producing and possessing CSAM is illegal, AI developers and other parties have been unable to generate such images even as part of a testing regime. Until now, authorities had to wait until AI-generated CSAM was published online before acting against it.
The law aims to avert that problem by helping to stop the creation of those images at their source.
Legislative Framework
The amendments are being introduced by the government as modifications to the Crime and Policing Bill, which also implements a ban on possessing, producing or distributing AI models designed to create exploitative content.
Real-World Consequences
This week, the minister toured the London base of a children's helpline and listened to a mock-up call to advisers featuring a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with an explicit deepfake of himself, created using AI.
"When I learn about young people facing blackmail online, it is a source of intense frustration for me and of rightful concern amongst parents," he said.
Alarming Statistics
A leading online safety foundation said that cases of AI-generated exploitation content – including web pages that can each contain numerous files – had risen significantly so far this year.
Instances of the most severe category of content increased from 2,621 images or videos to 3,086.
- Girls were predominantly victimized, making up 94% of prohibited AI images in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a crucial step to ensure AI tools are secure before they are released," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have made it possible for victims to be targeted all over again with just a few clicks, giving criminals the capability to create potentially endless quantities of sophisticated, lifelike exploitative content," she added. "Content which further exploits survivors' suffering, and makes young people, especially girls, more vulnerable both online and offline."
Helpline Session Data
The children's helpline also released details of support sessions in which AI was mentioned. AI-related risks raised in the sessions include:
- Using AI to rate weight, body shape and appearance
- AI chatbots discouraging children from talking to trusted adults about harm
- Being bullied online with AI-generated content
- Digital blackmail using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and related topics were mentioned – four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.