OpenAI just snagged an Anthropic safety researcher for its high-profile head of preparedness role

OpenAI hired an Anthropic safety researcher to fill the role of head of preparedness amid rising AI safety concerns. Mandel Ngan/AFP/Getty Images


OpenAI has filled a key safety role by hiring from a rival lab.

The company has brought on Dylan Scandinaro, a former AI safety researcher at Anthropic, as its new head of preparedness, a role that carries a salary of up to $555,000 plus equity. The role caught attention last month thanks to its eye-catching pay package amid OpenAI’s rising AI safety concerns.

Sam Altman announced the move in a post on X on Wednesday, saying that he is “extremely excited” to welcome Scandinaro to OpenAI.

“Things are about to move quite fast and we will be working with extremely powerful models soon,” Altman wrote.

“Dylan will lead our efforts to prepare for and mitigate these severe risks. He is by far the best candidate I have met, anywhere, for this role,” he added.

In his own post on X on Wednesday, Scandinaro said he is “deeply grateful for my time at Anthropic and the extraordinary people I worked alongside.”

“AI is advancing rapidly. The potential benefits are great — and so are the risks of extreme and even irrecoverable harm,” he added.

Last month, Altman described the job as “stressful.”

“You’ll jump into the deep end almost immediately,” he wrote on X.

In the job posting, OpenAI said the role is best suited for someone who can lead technical teams, make high-stakes calls under uncertainty, and align competing stakeholders around safety decisions. The company also said candidates should have deep expertise in machine learning, AI safety, and related risk areas.

Tensions have arisen over OpenAI’s approach to safety. Several early employees — including a former head of its safety team — have left the company in recent years.

OpenAI has also faced lawsuits from users who allege its tools contributed to harmful behavior.

In October, the company said an estimated 560,000 ChatGPT users a week show “possible signs of mental health emergencies.”

The company also said it was consulting mental health specialists to refine how the chatbot responds when users show signs of psychological distress or unhealthy dependence.

Correction: February 5, 2026 — An earlier version of this story misspelled Dylan Scandinaro’s last name.

Source: RhinoEasy News
