UK government bans artificial intelligence nudification applications

Saturday 20th December 2025 07:01 EST

The United Kingdom government has announced a comprehensive ban on “nudification” applications as part of a wider crackdown on online abuse.

These generative artificial intelligence tools, often referred to as “de-clothing” software, allow users to digitally remove clothing from images or videos to create non-consensual explicit content. The new legislation aims to target the creators and suppliers of this technology, moving beyond existing laws that penalise the individual sharing of deepfakes to instead dismantle the digital infrastructure that enables such exploitation.

This policy shift is a central pillar of a broader Home Office initiative to halve violence against women and girls within the next decade. Technology Secretary Liz Kendall emphasised that the creation and supply of these tools fuel misogyny and provide a platform for humiliation and exploitation. While the Online Safety Act already criminalises the non-consensual creation of sexually explicit deepfake images, the new measures specifically make it illegal to profit from or distribute the software itself.

The government intends to work closely with technology firms and safety experts, such as the UK-based firm SafeToNet, to develop AI-driven detection systems that can block harmful content before it is disseminated.

The decision follows sustained pressure from child protection advocates, including the Children’s Commissioner for England, Dame Rachel de Souza, who called for a total ban in April 2025. Data from the Internet Watch Foundation recently highlighted the scale of the problem, revealing that 19 per cent of young people using its “Report Remove” service discovered that their images had been digitally altered.

Experts warned that these apps have no legitimate purpose and significantly increase the risk of child sexual abuse material being generated and shared in harmful online spaces. The government’s stated objective is to create a digital environment where it is virtually impossible for children to take, view, or share such manipulated imagery.

Despite the move, some advocacy groups, including the NSPCC, have suggested that the proposals should go further by mandating safety protections directly into device hardware. The charity expressed disappointment that the current plans do not yet include a requirement for “safety by design” in private messaging services, which remain a primary channel for the spread of illegal material.

Nevertheless, the ban represents a significant legal escalation against the “nudification” industry, ensuring that those who facilitate high-tech abuse face severe legal consequences alongside the individuals who use their services.
