The Emerging Threat of AI-Facilitated Harm Against Women
The rise of artificial intelligence (AI) tools like Grok, an Elon Musk-owned chatbot, has brought to light a disturbing trend: the use of AI to create and distribute sexualized and nonconsensual imagery of women. Despite recent efforts to implement safeguards, such as Grok's belated restrictions on generating explicit images, many AI platforms and communities continue to facilitate the production and sharing of harmful content. This issue extends beyond Grok, encompassing a broad ecosystem of websites, forums, and applications that enable "nudification"—the AI-generated removal of clothing from images of real women—often with the intent to humiliate or harass.
Large language models (LLMs) like Claude have stricter limitations, refusing to manipulate images or generate explicit content. ChatGPT and Google's Gemini will generate some bikini-style images but stop short of explicit nudity. These restrictions are inconsistent across platforms, however, and users have found ways to circumvent them through techniques known as "jailbreaking." On forums such as Reddit, users share tips for creating hardcore pornographic images with AI, sometimes bypassing safeguards by requesting "artistic nudity" or similar euphemisms. This proliferation of misogynistic content is amplified on social media platforms like X (formerly Twitter) and Telegram, where communities exchange information on nudification apps and methods of evading content moderation.
Research highlights the scale of this problem. The Institute for Strategic Dialogue (ISD) reported nearly 21 million visitors to nudification apps and websites in a single month, with hundreds of thousands of mentions on social media. The American Sunlight Project found thousands of advertisements for such apps on Meta platforms despite ongoing efforts to remove them. Experts emphasize that much of the infrastructure supporting deepfake sexual abuse is hosted on mainstream services and app stores, making regulation and enforcement challenging. This widespread availability of tools has significant implications for women's safety and privacy online.
Legal responses are emerging, such as the UK's move to criminalize the creation of nonconsensual sexual and intimate images. Nonetheless, experts warn that the use of AI to harm women is only beginning. Law professor Clare McGlynn expressed concern that as AI technologies evolve, they will increasingly be weaponized to harass and abuse women and girls, potentially driving them away from digital spaces. Political figures like Labour MP Jess Asato have personally experienced AI-driven harassment and emphasize the slow pace of action against these abuses.
The motivations behind creating AI-generated deepfake nudes often extend beyond sexual gratification. Researchers like Anne Craanen note that the performative aspect—publicly attempting to coerce AI into generating sexualized images of specific women—serves as a form of online harassment aimed at silencing and punishing women. This behavior not only harms individuals but also undermines democratic norms and women's participation in public life. While some platforms have introduced paid subscriptions and guardrails to limit explicit content generation, free users often retain access to tools that produce sexually explicit images without restriction.
In summary, the intersection of AI technology and gender-based abuse presents a complex challenge. The rapid development and deployment of AI tools have outpaced regulatory frameworks and content moderation capabilities, allowing harmful practices to flourish. Addressing this issue requires coordinated efforts from technology companies, policymakers, and civil society to implement effective safeguards, enforce legal protections, and support victims. Without such measures, the use of AI to harm women risks becoming an entrenched and escalating problem in the digital age.