Ofcom Investigates Musk’s X Over Sexually Explicit AI Deepfakes
The UK communications regulator Ofcom has opened a formal investigation into X, the social media platform formerly known as Twitter and owned by Elon Musk, over concerns about the proliferation of sexually explicit AI-generated deepfake images on the site. These deepfakes, fabricated but realistic images produced with artificial intelligence, have raised alarm because of their potential to damage individuals’ reputations and privacy and to spread misinformation.
Ofcom’s investigation was triggered by reports of sexually explicit AI content on X, which the regulator described as “deeply concerning.” Ofcom is examining whether X has complied with its legal obligations under the UK’s Online Safety Act, which requires platforms to take proactive measures to protect users from harmful content, including AI-generated material that can be damaging or exploitative.
The emergence of AI deepfakes on social media platforms like X represents a growing challenge for regulators worldwide. These images can be used maliciously to impersonate individuals, often without their consent, leading to potential harassment, defamation, or psychological harm. The investigation into X underscores the increasing scrutiny social media companies face regarding their content moderation policies and the effectiveness of their technological safeguards against AI-enabled abuse.
Elon Musk’s ownership of X has been marked by significant changes in content moderation policies, which some critics argue have led to a more permissive environment for harmful content. The Ofcom probe will assess whether X’s current moderation framework sufficiently addresses the risks posed by AI-generated sexually explicit deepfakes, and whether the platform is meeting the standards set by UK law to protect users.
This investigation also highlights the broader implications of AI technology in digital communication. While AI can enhance creativity and user engagement, it also introduces new vectors for abuse that require updated regulatory approaches. Ofcom’s actions may set a precedent for how AI-generated content is managed on social media platforms, influencing future policy both in the UK and internationally.
In response to the investigation, X has the opportunity to review and strengthen its content moderation strategies, potentially incorporating more advanced AI detection tools and clearer user reporting mechanisms. The outcome of Ofcom’s inquiry will be closely watched by other regulators, social media companies, and users concerned about privacy, safety, and the ethical use of AI online.