OpenAI names Dylan Scandinaro, a former AGI safety researcher at Anthropic, as Head of Preparedness—a role with a salary of up to $555,000—to mitigate catastrophic AI risks as regulatory pressure mounts industry-wide.

OpenAI has appointed Dylan Scandinaro, previously focused on artificial general intelligence (AGI) safety at rival Anthropic, to lead its newly created Preparedness team. The role, advertised in December 2024 with a base salary reaching $555,000, formalizes OpenAI's institutional response to emerging AI risks. Scandinaro will oversee assessment and defense against potential AI-driven catastrophes, including chemical, biological, radiological, and nuclear threats.
This strategic hire arrives amid intensifying regulatory scrutiny across the AI sector. French authorities recently raided X's Paris offices over deepfake content investigations, while the UK Information Commissioner's Office launched probes into xAI's Grok chatbot amid concerns about harmful outputs. Simultaneously, OpenAI faces internal tensions as reports indicate resource shifts from long-term projects like Sora and DALL-E toward commercializing ChatGPT—a move that precipitated senior researcher departures.
Market context underscores the timing: OpenAI is finalizing a $100 billion funding round, including a potential $20 billion investment from Nvidia—a deal that would value the company near $500 billion while raising accountability expectations accordingly. Scandinaro's Anthropic background is significant, as the safety-focused firm pioneered Constitutional AI techniques now adopted industry-wide. His appointment signals OpenAI's operational pivot toward formalized risk governance as AGI development accelerates.
Financially, AI's commercial expansion continues unabated despite regulatory headwinds. AMD reported Q4 data center revenue surged 39% year-over-year to $5.4 billion, while Super Micro's server sales jumped 123% to $12.7 billion. However, software stocks plummeted over AI disruption fears, with Adobe (-7.3%) and Salesforce (-6.8%) among the casualties. This volatility reflects market recognition that AI's economic impact extends beyond productivity gains to fundamental industry restructuring.
Strategically, Scandinaro's team will develop protocols for OpenAI's frontier models, balancing innovation velocity against existential risk mitigation. The role requires cross-functional collaboration with security and policy divisions—a structure reflecting lessons from Anthropic's governance frameworks. As global regulators advance AI legislation, this appointment demonstrates proactive compliance positioning ahead of anticipated policy mandates. With AGI capabilities advancing rapidly, documented preparedness systems may soon transition from competitive advantage to regulatory necessity.
