Grok AI: What Limits on the Tool Mean for X, Its Users, and Ofcom
Elon Musk's social media platform X has recently announced significant restrictions on its AI tool, Grok, which allows users to manipulate images. Specifically, the platform has implemented technical measures to prevent users from creating sexualized images of real people, such as depicting them in bikinis or revealing clothing. This move comes amid public and political backlash, as well as a formal investigation by Ofcom, the UK’s communications regulator, into the misuse of the tool.
Previously, any user could ask the @Grok account on X to edit images, with the results published on the platform. Following the announcement, this capability is limited to paid subscribers, who number around 2.6 million of X's roughly 300 million monthly users. The ban on sexualized imagery, however, applies universally, including to subscribers; restricting image editing to paying accounts is intended to improve accountability by making it easier to trace individuals who violate laws or platform policies. X is also introducing geoblocking that prevents users in certain countries, including the UK, from generating images of real people in revealing attire where doing so is illegal locally. This geoblocking is expected to extend to the standalone Grok app, which is owned by xAI, X's parent company.
The UK government has welcomed these changes, describing them as a vindication of its stance against the creation and distribution of non-consensual intimate images, often referred to as "revenge porn." Prime Minister Keir Starmer condemned the proliferation of such images as "disgusting" and "shameful," while the UK tech secretary emphasized the need for a thorough investigation by Ofcom. The government has expressed support for Ofcom's potential use of the full range of powers under the Online Safety Act (OSA), including the possibility of banning the platform if serious breaches are confirmed.
X’s announcement reduces the likelihood of a UK-wide ban, which is considered a last-resort measure under the OSA reserved for serious and ongoing violations. Internet law experts note that if the technical restrictions prove effective, the platform may avoid the most severe penalties. Nevertheless, Ofcom’s investigation remains active. The regulator is examining whether X failed to properly assess risks related to illegal content, took insufficient steps to prevent the spread of intimate image abuse and child sexual abuse material, delayed content removal, neglected privacy protections, inadequately assessed risks to children, and failed to enforce effective age verification for pornography.
If Ofcom finds that X breached the Online Safety Act, the platform could face fines up to 10% of its global turnover or be compelled to implement specific compliance measures. This investigation is Ofcom’s most high-profile case to date and could set a precedent for how AI-driven content manipulation tools are regulated. Comparatively, Ofcom recently closed an investigation into Snapchat after the platform addressed similar concerns, suggesting that compliance and corrective action can avert harsher penalties.
In summary, X’s restrictions on Grok represent a significant step toward addressing the misuse of AI-generated image manipulation on social media. The measures aim to protect individuals’ privacy and comply with legal standards, particularly in jurisdictions like the UK where non-consensual intimate images are illegal. However, the ongoing Ofcom investigation underscores the challenges regulators face in keeping pace with rapidly evolving AI technologies and ensuring platforms uphold their responsibilities to users and the wider public.