Analysis by Allison Morrow, CNN
Photo: Thomas Lefebvre / Unsplash
“The world is in peril,” the former head of Anthropic’s safeguards research team warned as he headed out. A departing OpenAI researcher said the technology has “potential to manipulate users in ways that we don’t have the tools to understand, let alone prevent.”
They’re part of a wave of AI researchers and executives who aren’t just leaving their employers: They’re sounding the alarm on their way out, drawing attention to what they see as glaring red flags.
While Silicon Valley is known for its high turnover, the latest churn comes as market leaders like OpenAI and Anthropic rush toward IPOs that could supercharge their growth while also inviting scrutiny of their operations.
Over the past few days, a number of high-profile AI employees have decided to quit, with some explicitly warning that the companies they worked for were moving too fast and downplaying the technology’s shortcomings.
Zoë Hitzig, a researcher at OpenAI for two years, announced her resignation Wednesday in a New York Times essay, citing “deep reservations” about OpenAI’s emerging advertising strategy. Hitzig, who warned of the potential for ChatGPT to manipulate users, said the chatbot’s archives of user data, built on people’s “medical fears, their relationship problems, their beliefs about God and the afterlife,” present an ethical dilemma precisely because people thought they were chatting with a program that had no ulterior motives.
Hitzig’s criticism comes as technology news site Platformer reports that OpenAI has disbanded its “mission alignment” team, created in 2024 to advance the company’s goal of ensuring all humanity benefits from the pursuit of “artificial general intelligence” — a hypothetical AI capable of human-level thinking.
OpenAI did not immediately respond to a request for comment.
Also this week, Mrinank Sharma, head of Anthropic’s safeguards research team, released a cryptic letter on Tuesday announcing his decision to leave the company and warning that “the world is in peril.”
Sharma’s letter made only vague references to Anthropic, the company behind the Claude chatbot. He did not specify why he was leaving, but noted that it was “clear to me that it was time to move on” and that “throughout my time here, I have seen repeatedly how difficult it is to truly let our values govern our actions.”
Anthropic told CNN in a statement that it was grateful for Sharma’s work to advance AI safety research. The company stressed that Sharma was not responsible for security at Anthropic, nor for broader safeguards within the company.
Meanwhile, at xAI, two co-founders resigned within 24 hours of each other this week. At least five other xAI staff members announced their departures on social media over the past week.
It’s not immediately clear why the xAI co-founders left, and xAI did not respond to a request for comment. In a social media post Wednesday, Elon Musk said xAI had been “retooled” to accelerate growth, which “unfortunately required parting ways with some people.”
While it is not uncommon for top talent to change jobs in an emerging sector like AI, the scale of the departures at xAI over such a short period is remarkable.
The startup faced a global backlash over its Grok chatbot, which was allowed to generate nonconsensual pornographic images of women and children for weeks before the company stepped in to stop it. Grok has also generated antisemitic comments in response to user prompts.
Other recent departures highlight the tension between researchers worried about safety and senior executives eager to generate revenue.
On Tuesday, the Wall Street Journal reported that OpenAI had fired one of its top safety officials after she expressed opposition to rolling out an “adult mode” allowing pornographic content on ChatGPT. OpenAI fired the executive, Ryan Beiermeister, on the grounds that she had discriminated against a male employee, an accusation Beiermeister told the Journal was “absolutely false.”
OpenAI told the Journal that her termination was not related to “any issues she raised while working at the company.”
High-profile defections have been a part of the AI story since ChatGPT hit the market in late 2022. Shortly after, Geoffrey Hinton, known as the “godfather of AI,” left his job at Google and began speaking publicly about what he sees as the existential risks that AI poses, including massive economic upheaval in a world where many “will no longer be able to know what’s true.”
Doomsday predictions abound, including among AI executives who have financial incentives to tout the power of their own products. One of those predictions went viral this week, with HyperWrite CEO Matt Shumer publishing a nearly 5,000-word article about how the latest AI models have already made some tech jobs obsolete.
“We tell you what has already happened in our own work,” he wrote, “and warn you that you are next.”
-CNN
