Can X be banned under UK law and what are Ofcom’s other options?
Highlights
The UK government is weighing a ban on Elon Musk’s social media platform X under the country’s Online Safety Act (OSA), following concerns that the platform allowed its Grok AI tool to generate indecent images of women and children. These serious allegations have prompted the media regulator, Ofcom, to open an investigation into whether X has breached the OSA. The government has expressed support for strong regulatory action, including a potential ban, but such a move is considered a last resort and must follow due legal process.
Under the OSA, Ofcom has the authority to seek court orders imposing "business disruption measures" on platforms found to be in breach of the law. These measures can amount to an effective ban: they can require internet service providers to block access to the platform, or compel payment providers and advertisers to withdraw their services. Such steps are significant regulatory interventions and are not applied routinely; Ofcom must first establish that X has failed to comply with the law before pursuing them.
Ofcom's investigation covers several potential breaches, including whether X:

- failed to assess the risk of users encountering illegal content;
- did not adequately prevent access to illegal material, such as intimate image abuse and child sexual abuse content;
- was too slow to remove illegal content;
- provided insufficient privacy protections;
- failed to assess risks to children; and
- lacked effective age verification for pornography.

While the government has been vocal about banning X, Ofcom has emphasized the need for a legally robust and fair process. It must ensure any decision to impose business disruption measures withstands judicial scrutiny, as X could challenge the regulator's actions through a judicial review.
Aside from a ban, Ofcom has other enforcement tools at its disposal. It can require X to take specific remedial steps to comply with the OSA and to mitigate the harms caused by any breach. It can also impose substantial fines of up to £18 million or 10% of the company's worldwide revenue, whichever is greater; market estimates of X's advertising revenue suggest a fine could exceed $200 million if violations are confirmed. Procedurally, the regulator will first determine whether a breach has occurred, issue a provisional decision to X, and allow the company to respond before making a final ruling.
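To make the fine ceiling concrete, here is a minimal sketch of the "whichever is greater" rule in Python. The revenue figure is hypothetical and chosen purely for illustration, since X's actual worldwide revenue is not public.

```python
def max_osa_fine(worldwide_revenue_gbp: float, fixed_cap_gbp: float = 18_000_000) -> float:
    """Return the OSA's maximum fine: the greater of the fixed cap
    (£18 million) and 10% of worldwide revenue."""
    return max(fixed_cap_gbp, 0.10 * worldwide_revenue_gbp)

# Hypothetical figure: £2.0bn of worldwide revenue would put the
# ceiling at £200m, well above the £18m fixed cap.
print(f"£{max_osa_fine(2_000_000_000):,.0f}")  # £200,000,000
```

On a figure of that order, the 10% prong dominates the fixed cap, which is why estimates for a company of X's scale run well past $200 million.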
The timeline for the investigation remains uncertain, but the inquiry is likely to be thorough given the gravity of the allegations and public concern over AI-generated indecent images on the platform. Typical Ofcom investigations take six to nine months, but experts note that the regulator could expedite proceedings and move swiftly to business disruption measures if the breach is severe and X fails to address it adequately. Ultimately, any ban or severe sanction would have to balance protecting users against ensuring due legal process.
This case highlights the challenges regulators face in policing emerging AI technologies integrated into social media platforms. It underscores the importance of robust safety mechanisms and compliance with online safety laws to prevent harm. Ofcom’s approach will set a precedent for how the UK enforces digital safety regulations against major tech platforms, especially those incorporating AI tools with potential for misuse.