One in Four Unconcerned by Non-Consensual Sexual Deepfakes, UK Survey Reveals
A recent UK survey commissioned by the police chief scientific adviser’s office has uncovered troubling attitudes toward sexual deepfakes created without consent.
Among 1,700 respondents, 13% believed there was nothing wrong with making and sharing such AI-generated intimate content, while another 12% felt neutral about its moral and legal implications.
Sexual deepfakes involve digitally altered images or videos that depict individuals in explicit scenarios without their permission, a practice now criminalized under the UK’s new Data Act.
Detective Chief Superintendent Claire Hammond, from the national centre for violence against women and girls (VAWG) and public protection, emphasized that sharing intimate images without consent is a serious violation, whether the images are real or fabricated.
Hammond warned that AI technology is accelerating violence against women and girls globally, and accused technology companies of complicity in making abusive material easy to create and distribute.
The survey also revealed that 7% of respondents had been depicted in sexual deepfakes, yet only about half reported these incidents to the police, often due to embarrassment or doubts about the seriousness of the offense.
Men under 45 were more likely to find deepfake creation acceptable, an attitude that correlated with higher consumption of online pornography and with misogynistic views, although the report called for further research to better understand these associations.
Alarmingly, one in twenty admitted to having created deepfakes, and over 10% expressed willingness to create them in the future.
Two-thirds of participants reported that they had seen, or thought they might have seen, deepfake content.
Callyane Desroches, head of policy at Crest Advisory, which authored the report, cautioned that deepfake creation is becoming normalized as the technology becomes more accessible and affordable, with women disproportionately targeted by sexualized content.
Activist Cally Jane Beech highlighted the urgent need for decisive action, education, and open dialogue to protect future generations from the harmful effects of digital abuse.
The findings underscore the complex challenges posed by AI-driven content manipulation, the importance of legal frameworks, and the critical role of societal awareness in combating the misuse of deepfake technology.