UK Parliamentary Legislation Introduced Against Deepfakes

On April 16th, the UK government announced plans to introduce legislation, via an amendment to the existing Criminal Justice Bill, targeting the creation of deepfake pornographic content. The move follows the Online Safety Act 2023, which criminalised the sharing of sexually explicit deepfake images. The new legislation is currently going through the Commons; if it is passed, the production of sexually explicit deepfakes will be a criminal offence even where the producer does not intend to share the material (for example, where deepfakes are generated to cause alarm, humiliation or distress to the victim). Producers of deepfake imagery will face an unlimited fine and could be jailed if convicted.

Minister for Victims and Safeguarding, Laura Farris, told the press:

“The creation of deepfake sexual images is despicable and completely unacceptable irrespective of whether the image is shared. It is another example of ways in which certain people seek to degrade and dehumanise others – especially women. And it has the capacity to cause catastrophic consequences if the material is shared more widely. This government will not tolerate it. This new offence sends a crystal clear message that making this material is immoral, often misogynistic, and a crime.”

Her comments were echoed by Shadow Home Secretary Yvette Cooper, who described the creation of deepfakes as ‘a gross violation of … autonomy and privacy.’

Violence against women and girls has also been reclassified as a national threat, and the new legislation is part of a wider package of legal reforms aimed at cracking down on digital sexual harassment, such as cyberflashing. The change mirrors moves in the USA, where the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act was introduced in early 2024 by Senator Richard Durbin. DEFIANCE aims to provide a ‘federal civil remedy’ for victims of deepfake imagery.

Authorities agree on the need to move fast, given how rapidly deepfake technology is improving. The report accompanying DEFIANCE cites a 2019 study which found that 96% of deepfake material involved non-consensual pornography. The Coalition for Content Provenance and Authenticity, whose members include the BBC, Google, Microsoft and Sony, is introducing watermarking and labelling standards, which OpenAI will be adopting for DALL·E 3. But these standards won’t apply to open source products such as Stable Diffusion, and so cannot be universal.



Meanwhile, the child safety technology non-profit Thorn reported on April 23rd that Meta, Google, Microsoft, Amazon and OpenAI, among others, have signed up to its child safety principles aimed at combating child sexual abuse. Stability AI’s Stable Diffusion 1.5, for example, was trained on an open source dataset containing over 1,000 images of child abuse: the company is now partnering with Thorn. Under Thorn’s principles, AI models should not be released before being checked for child safety.

But both companies and governments need to be agile; the Guardian’s technology editor, Alex Hern, described the situation earlier this month as an ‘arms race between detection and creation.’
