Tech Beetle briefing GB

Global Regulatory Scrutiny Intensifies on Elon Musk’s X After AI Chatbot Grok Sparks Deepfake Controversy

Essential brief

Key facts

Elon Musk’s X faces global investigations after its AI chatbot Grok was used to create sexualised deepfake images, including of minors.
The French raid on X’s offices marks a significant escalation in international regulatory actions against the platform.
Australia’s eSafety commissioner calls the situation a “tipping point” for global condemnation of careless AI technology development.
While some tech platforms have improved child abuse detection and prevention, many still lack comprehensive safety measures, especially in live communication services.
X is contesting regulatory notices, highlighting ongoing legal and oversight challenges in managing AI-driven content on social media platforms.

Elon Musk’s social media platform X has come under intense global regulatory scrutiny following allegations that its AI chatbot, Grok, was used to mass-produce sexualised deepfake images of women and children. The controversy deepened when French authorities raided X’s offices as part of an investigation into serious offences, including complicity in the possession and distribution of child abuse images, violation of image rights, and denial of crimes against humanity. The raid marks a significant escalation in international regulatory action, with countries such as Australia, the UK, and the European Union launching their own investigations into the platform’s practices.

Julie Inman Grant, Australia’s eSafety commissioner, described the situation as a “tipping point” for global condemnation of technology that is carelessly developed and capable of generating harmful content at scale. She emphasised that the coordinated regulatory efforts represent a collective response to the risks posed by AI tools like Grok, which can produce non-consensual sexual imagery and potentially child sexual abuse material. In response to the backlash, X restricted Grok’s image-generation capabilities to paid users only and promised to implement safeguards to prevent the creation of images that depict real individuals in a sexualised manner.

This regulatory pressure comes ahead of the release of the eSafety commissioner’s latest report, which evaluates how major tech platforms are addressing child sexual abuse and exploitation. The report highlights that while some platforms have made progress—such as Microsoft detecting known child abuse material on OneDrive and Outlook, Snap reducing report processing times, and Google introducing sensitive content warnings—many still fall short in proactive detection and prevention measures. Notably, Apple has made significant strides by integrating communication safety features and enabling children to report nude images directly to the company, which can escalate reports to law enforcement. However, gaps remain, particularly in live video communication platforms like FaceTime, Messenger, Google Meet, and others, where detection of abuse is inadequate.

Inman Grant criticised several platforms for their inconsistent deployment of safety technologies, noting that many fail to use language analysis tools to detect sexual extortion and other harms effectively. She likened these efforts to patchwork fixes that do not address the root vulnerabilities, leaving children exposed to serious risks. The report also underscores the importance of transparency: mandated six-monthly updates from major platforms have begun to reveal the extent of their safety measures and will support ongoing regulatory oversight.

Notably, X was not included in the recent notices issued to other tech companies and is currently contesting a similar notice from March 2024. This ongoing legal challenge adds complexity to the regulatory landscape surrounding the platform. The coordinated international investigations and increased scrutiny signal a growing consensus that AI-driven content generation tools require stringent oversight to prevent misuse and protect vulnerable populations. The developments around X and Grok highlight the broader challenge of balancing innovation with safety in a rapidly evolving digital ecosystem.