UK Parliamentarians Demand Binding Regulations on Powerful AI Systems Amid Security Concerns
Highlights
Over 100 UK parliamentarians from various parties and devolved legislatures have united to urge the government to implement binding regulations on the most powerful artificial intelligence (AI) systems.
This cross-party coalition includes former ministers and peers who warn that the rapid development of superintelligent AI poses significant risks to national and global security.
The campaign, coordinated by the nonprofit Control AI and supported by notable figures like Skype co-founder Jaan Tallinn, calls on Prime Minister Keir Starmer to assert independence from the US administration, which has opposed AI regulation.
Experts such as AI pioneer Yoshua Bengio have highlighted the current lack of oversight, remarking that AI systems are subject to less regulation than a sandwich.
Former defence secretary Des Browne described superintelligent AI as the most perilous technological threat since nuclear weapons, emphasizing the need for international cooperation to avoid a reckless race for AI dominance.
Despite hosting the 2023 AI Safety Summit at Bletchley Park and establishing the AI Security Institute, the UK government has been criticized for insufficient focus on global collaboration to mitigate AI risks.
Conservative peer Zac Goldsmith advocates for the UK to lead an international agreement to halt superintelligence development until safety measures are understood and established.
Silicon Valley AI scientist Jared Kaplan warns that humanity faces a critical decision by 2030 on whether to allow AI systems to self-improve autonomously.
Although Labour’s 2024 manifesto promised legislation for the most powerful AI models, no bill has yet been introduced, amid pressure from US tech companies to avoid restrictive measures.
The Department for Science, Innovation and Technology maintains that existing UK regulations address AI challenges but acknowledges the need for readiness.
The Bishop of Oxford, Steven Croft, supports an independent AI watchdog and mandatory testing standards for AI releases, citing risks to mental health, the environment, and ethical alignment.
Jonathan Berry, the UK’s first AI minister, stresses the urgency of global binding rules with safeguards like off switches and retrainability for high-risk AI models.
Control AI’s CEO Andrea Miotti criticizes the government’s “timid approach” and highlights aggressive lobbying by AI firms to delay regulation, even though those same firms acknowledge the existential threats posed by AI.
Given the rapid pace of AI advancement, campaigners argue that mandatory safety standards may become necessary within the next one to two years, underscoring their call for immediate and decisive government action.