Leading AI Expert Revises Timeline for AI's Potential Threat to Humanity
Daniel Kokotajlo, a former OpenAI researcher, has recently updated his predictions regarding the timeline for artificial general intelligence (AGI) and its possible existential risks to humanity. Kokotajlo first gained attention with his AI 2027 scenario, which envisioned AI systems achieving fully autonomous coding capabilities by that year. This milestone was seen as a critical step toward an intelligence explosion, in which AI would rapidly improve itself, potentially leading to superintelligence capable of outsmarting humans and posing severe risks, including the hypothetical destruction of humanity. The scenario sparked widespread debate, drawing support and criticism from experts and policymakers alike. US Vice-President JD Vance, for instance, referenced it in discussions about the AI arms race, while neuroscientist Gary Marcus dismissed it as speculative fiction.
The rapid progress of AI tools such as ChatGPT, launched in 2022, initially fueled expectations that AGI might emerge within a few decades, or even within a few years. Recent reflections by Kokotajlo and other experts, however, suggest that those timelines were overly optimistic. Kokotajlo and his team now project that fully autonomous AI coding is more likely to arrive in the early 2030s, pushing the potential arrival of superintelligence to around 2034. The revision acknowledges the complexity and unpredictability of AI development, as well as the practical challenges of deploying AI systems with broad, real-world capabilities.
Experts in AI risk management and policy emphasize that the concept of AGI itself is becoming less clear-cut. Malcolm Murray, co-author of the International AI Safety Report, notes that AI performance remains uneven and that the inertia inherent in societal and technological systems will slow transformative changes. Henry Papadatos of SaferAI points out that AI systems today already exhibit a degree of generality that blurs the traditional distinction between narrow AI and AGI, making the term less meaningful. This evolving understanding complicates efforts to forecast when or if AI will reach human-level cognitive abilities across all domains.
Despite the revised timelines, creating AI agents capable of conducting AI research autonomously remains a priority for leading AI companies. OpenAI CEO Sam Altman has cited an internal target of developing such an automated AI researcher by March 2028, while acknowledging that the effort may fail. Meanwhile, policy researchers such as Andrea Castagna caution against simplistic assumptions about integrating superintelligent AI into existing strategic frameworks, pointing to the multifaceted and complex nature of real-world systems. AI development continues to reveal challenges that extend beyond science-fiction scenarios, underscoring the need for nuanced understanding and careful governance.
In summary, while the prospect of AI reaching superintelligence and posing existential risks remains a topic of concern, recent expert assessments suggest that these developments are likely to unfold more gradually than initially feared. The shifting timelines and evolving definitions reflect the intricate realities of AI progress and the importance of ongoing research into AI safety and policy implications.