Inside OpenAI’s $555K ‘Head of Preparedness’ Role: Tackling AI’s Most Daunting Risks
OpenAI, the company behind ChatGPT, has announced a high-stakes opening for a “head of preparedness,” a role carrying an annual salary of $555,000. The position is designed to confront some of the most challenging and potentially dangerous issues emerging from the rapid advancement of artificial intelligence. The successful candidate will be tasked with defending humanity against a broad spectrum of risks, including threats to mental health, cybersecurity vulnerabilities, and even the misuse of AI in biological weapons development. The job description underscores the urgency and gravity of the position, with CEO Sam Altman warning that the role will be immediately stressful and demanding.
The head of preparedness will be responsible for evaluating and mitigating new and evolving AI threats, particularly those stemming from frontier capabilities that could cause severe harm. This includes monitoring AI systems that might begin self-training autonomously, a scenario that some experts fear could lead to AI systems acting against human interests. Previous holders of this role have often had brief tenures, highlighting the intense pressure and complexity involved. The position comes amid increasing concern within the AI community about the technology’s potential to cause harm if left unchecked.
Industry leaders have voiced their apprehensions openly. Mustafa Suleyman, CEO of Microsoft AI, recently said that anyone who does not feel some fear about AI’s trajectory is not paying attention. Similarly, Demis Hassabis of Google DeepMind warned about the risk of AI systems going "off the rails" and causing harm to humanity. Despite these warnings, regulatory frameworks remain sparse, with little national or international oversight. AI pioneer Yoshua Bengio has pointed out that AI is far less regulated than everyday items like sandwiches, leaving companies largely to self-regulate.
Altman emphasized the need for a more nuanced understanding of AI’s capabilities and risks. He stated that while OpenAI has a solid foundation for measuring AI’s growing abilities, the company must now focus on how these capabilities might be abused and how to limit negative outcomes. The goal is to balance risk mitigation with enabling society to benefit from AI’s tremendous potential. The role will also include an equity stake in OpenAI, a company currently valued at around $500 billion.
Recent events underscore the urgency of this role. Anthropic, a rival AI company, reported what it described as the first AI-enabled cyberattacks, allegedly carried out under the supervision of Chinese state actors. OpenAI itself has noted that its latest AI model is nearly three times better at hacking than models from just three months prior, with expectations that future models will continue to improve in this area. Additionally, OpenAI is facing lawsuits related to ChatGPT’s alleged role in tragic incidents involving mental health crises and violence. The company is actively working to improve ChatGPT’s ability to recognize and respond to signs of distress, aiming to guide users toward real-world support.
In summary, OpenAI’s head of preparedness role is a critical and high-pressure position at the forefront of AI safety and ethics. It reflects the growing recognition that as AI systems become more powerful, proactive measures are essential to safeguard humanity from unintended and potentially catastrophic consequences. The position’s demanding nature and significant compensation highlight the seriousness with which OpenAI is approaching these challenges.