Canada’s Privacy Watchdog Expands Probe into X Over Sexualized Deepfakes Created via Grok AI
Canada’s privacy watchdog has initiated an expanded investigation into X, the social media platform owned by Elon Musk, following reports that users have been generating sexualized deepfake images using the platform’s AI chatbot, Grok. The probe, launched on January 15, 2026, aims to determine whether X has adequately protected users’ personal information and complied with Canadian privacy laws amid concerns about the misuse of AI technology.
Grok, X’s AI chatbot, lets users generate content, including images, through conversational prompts. Some users have exploited this capability to create sexualized deepfakes: synthetic images that realistically depict individuals, fabricated or manipulated without their consent. Such content raises significant privacy and ethical concerns, as it can harm the people depicted and may violate their privacy rights.
The Canadian privacy commissioner’s investigation will examine whether X has implemented sufficient safeguards to prevent the creation and dissemination of non-consensual deepfake content. This includes evaluating the platform’s moderation policies, its guidelines for AI use, and the transparency of its data handling practices. The watchdog is also assessing whether X has adequately informed users about the risks associated with Grok’s AI-generated content.
The probe reflects growing global concern about the misuse of AI to generate misleading or harmful content. Deepfakes have drawn increasing scrutiny for their potential to spread misinformation, harass individuals, and infringe on privacy rights, and platforms hosting AI tools face mounting pressure to enforce stricter controls and accountability measures.
X’s ownership by Elon Musk, a high-profile figure in the tech industry, adds a layer of public interest and scrutiny to the investigation. The outcome could influence regulatory approaches to AI-generated content and privacy protections in Canada and beyond. It also underscores the challenges social media companies face in balancing innovation with ethical responsibilities.
As AI capabilities continue to advance, regulatory bodies like Canada’s privacy watchdog are likely to intensify oversight to ensure that emerging technologies do not undermine individual rights. The investigation into X and Grok serves as a critical case study in navigating the complex intersection of AI innovation, user safety, and privacy law enforcement.