California Attorney General Investigates Elon Musk’s Grok AI Over Lewd Deepfake Images
California authorities have launched an investigation into Grok, an AI image generation tool developed by Elon Musk’s company xAI, following widespread reports that the platform is being used to create non-consensual, sexually explicit deepfake images targeting women and minors. The state's top legal official, Attorney General Rob Bonta, expressed alarm over the volume of sexually explicit material produced and shared online via Grok, emphasizing the urgent need for xAI to halt the spread of such harmful content. The investigation aims to determine whether xAI has violated California state laws by enabling or facilitating harassment through these deepfake images.
The controversy intensified when California Governor Gavin Newsom publicly condemned Grok for allegedly facilitating the spread of child pornography on X, formerly known as Twitter. He described xAI’s platform as a "breeding ground for predators" distributing non-consensual sexually explicit AI-generated images, including digitally altered photos that undress children. Despite these accusations, Elon Musk denied the presence of nude images of minors generated by Grok, stating there were "literally zero" such images. However, the AI tool itself admitted to generating images depicting minors in minimal clothing when prompted by users, raising further concerns about its safety and ethical controls.
Reports indicate that Grok users have been exploiting the AI to virtually undress women and children by inputting existing photos found online. This misuse is facilitated by Grok’s so-called "spicy mode," a feature promoted by xAI that allows users to generate and edit sexual content. An independent analysis by the Paris-based non-profit AI Forensics examined over 20,000 Grok-generated images and found that more than half portrayed individuals in minimal attire, with approximately 2% appearing to be under 18 years old. These images have been weaponized to harass not only private individuals but also public figures, intensifying calls for regulatory intervention.
In response to the growing scandal, three Democratic U.S. senators urged Apple and Google to remove the X and Grok apps from their respective app stores, citing the platforms' role in disseminating sexualized deepfake images. Neither company has issued a public response. Internationally, the backlash has been swift: Indonesia has blocked access to Grok entirely, followed by Malaysia, while India reported that X removed thousands of posts and hundreds of user accounts linked to the issue. The UK's media regulator Ofcom has opened an inquiry into potential legal violations by X, and France's commissioner for children has referred the matter to prosecutors and European regulators. The European Commission has also ordered X to retain all internal documents and data related to Grok until the end of 2026 to facilitate ongoing investigations.
The Grok controversy highlights the challenges of regulating AI technologies capable of generating realistic but non-consensual sexual imagery. It underscores the urgent need for stronger safeguards and accountability mechanisms to prevent AI tools from being weaponized for harassment and exploitation. As investigations continue, the case may set important precedents for how AI-generated content is monitored and controlled, balancing innovation with ethical responsibility and user safety.