Children’s commissioner calls for AI app ban

The children’s commissioner for England is urging the government to ban apps that use artificial intelligence to create sexually explicit images of children. Dame Rachel de Souza emphasized the need for a total ban on “nudification” apps, which alter photos of real people to make them appear naked. She expressed concern that the government was allowing such apps to go unchecked with extreme real-world consequences.

A government spokesperson stated that child sexual abuse material is illegal and highlighted plans to introduce further offences for creating, possessing, or distributing AI tools designed for such purposes. Deepfakes, which are videos, pictures, or audio clips made with AI to appear realistic, are becoming a growing concern. Dame Rachel pointed out that this technology is disproportionately targeting girls and young women, with many apps seemingly designed to work only on female bodies.

As a result, girls are avoiding posting images or engaging online to reduce their risk of being targeted. The report also revealed children’s fears that a stranger, a classmate, or even a friend could target them using technologies readily found on popular search and social media platforms.

Ban on harmful AI apps

Dame Rachel warned of the rapid evolution of these tools, stating, “We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children’s lives.”

She criticized the government’s existing measures, arguing that simply making it illegal to share or threaten to share explicit deepfake images is insufficient. Her spokesman stated, “There should be no nudifying apps, not just no apps that are classed as child sexual abuse generators.”

In February, the Internet Watch Foundation confirmed a significant increase in reports of AI-generated child sexual abuse material, rising from 51 cases in 2023 to 245 cases in 2024, an increase of 380%. IWF Interim Chief Executive Derek Ray-Hill noted that these apps are being abused in schools, leading to widespread dissemination of such imagery.

A spokesperson for the Department for Science, Innovation and Technology reiterated that creating, possessing, or distributing child sexual abuse material, including AI-generated images, is abhorrent and illegal. Under the Online Safety Act, platforms must remove this kind of content or face significant fines. The UK is the first country globally to introduce specific AI child sexual abuse offences.

Dame Rachel also called for the government to impose legal obligations on developers of generative AI tools to identify and address the risks their products pose to children, establish a systematic process to remove sexually explicit deepfake images of children from the internet, and recognize deepfake sexual abuse as a form of violence against women and girls. Paul Whiteman, general secretary of the school leaders’ union NAHT, supported the commissioner’s concerns, indicating that the technology might be advancing faster than the law and education systems can respond.

Image Credits: Photo by William Hook on Unsplash

April Isaacs is a news contributor for DevX.com. She is a long-term, self-proclaimed nerd. She loves all things tech and computers and still has her first Dreamcast system. It is lovingly named Joni, after Joni Mitchell.

About Our Editorial Process

At DevX, we’re dedicated to tech entrepreneurship. Our team closely follows industry shifts, new products, AI breakthroughs, technology trends, and funding announcements. Articles undergo thorough editing to ensure accuracy and clarity, reflecting DevX’s style and supporting entrepreneurs in the tech sphere.

See our full editorial policy.