clearpathinsight.org
AI Research Updates

AI researchers sound the alarm on their way out

February 13, 2026

Analysis by Allison Morrow, CNN

Photo: hands on a laptop. Thomas Lefebvre / Unsplash

“The world is in peril,” the former head of Anthropic’s safeguards research team warned as he headed out. An OpenAI researcher, also on her way out, said the technology has “potential to manipulate users in ways that we don’t have the tools to understand, let alone prevent.”

They’re part of a wave of AI researchers and executives who aren’t just leaving their employers: They’re sounding the alarm on their way out, drawing attention to what they see as bright red flags.

While Silicon Valley is known for its high turnover, the latest churn comes as market leaders like OpenAI and Anthropic rush toward IPOs that could supercharge their growth while also inviting scrutiny of their operations.

Over the past few days, a number of high-profile AI employees have decided to quit, with some explicitly warning that the companies they worked for were moving too fast and downplaying the technology’s shortcomings.

Zoë Hitzig, a researcher at OpenAI for two years, announced her resignation Wednesday in a New York Times essay, citing “deep reservations” about OpenAI’s emerging advertising strategy. Hitzig, who warned of the potential for ChatGPT user manipulation, said the chatbot’s archives of user data, covering people’s “medical fears, their relationship problems, their beliefs about God and the afterlife,” present an ethical dilemma precisely because people thought they were chatting with a program that had no ulterior motives.

Hitzig’s criticism comes as technology news site Platformer reports that OpenAI has disbanded its “mission alignment” team, created in 2024 to advance the company’s goal of ensuring all humanity benefits from the pursuit of “artificial general intelligence” — a hypothetical AI capable of human-level thinking.

OpenAI did not immediately respond to a request for comment.

Also this week, Mrinank Sharma, head of Anthropic’s safeguards research team, released a cryptic letter on Tuesday announcing his decision to leave the company and warning that “the world is in peril.”

Sharma’s letter made only vague references to Anthropic, the company behind the Claude chatbot. He did not specify why he was leaving, but noted that it was “clear to me that it was time to move on” and that “throughout my time here, I have seen repeatedly how difficult it is to truly let our values govern our actions.”

Anthropic told CNN in a statement that it was grateful for Sharma’s work to advance AI safety research. The company stressed that his team was not solely responsible for safety, nor for broader safeguards within the company.

Meanwhile, at xAI, two co-founders resigned in the span of 24 hours this week. At least five other xAI staff members announced their departures on social media over the past week.

It’s not immediately clear why the two co-founders left, and xAI did not respond to a request for comment. In a social media post Wednesday, Elon Musk said xAI had been “retooled” to accelerate growth, which “unfortunately required parting ways with some people.”

While it is not uncommon for high-level talent to move into an emerging sector like AI, the scale of departures over such a short period of time at xAI is remarkable.

The startup faced a global backlash over its Grok chatbot, which was allowed to generate non-consensual pornographic images of women and children for weeks before the team stepped in to stop it. Grok also tends to generate anti-Semitic comments in response to user prompts.

Other recent departures highlight the tension between some researchers worried about security and senior executives eager to generate revenue.

On Tuesday, the Wall Street Journal reported that OpenAI had fired one of its top security officials after she expressed opposition to rolling out an “adult mode” allowing pornographic content on ChatGPT. OpenAI said it fired security manager Ryan Beiermeister on the grounds that she discriminated against a male employee, an accusation Beiermeister told the Journal was “absolutely false.”

OpenAI told the Journal that her termination was not related to “any issues she raised while working at the company.”

High-profile defections have been a part of the AI story since ChatGPT hit the market in late 2022. Shortly after, Geoffrey Hinton, known as the “godfather of AI,” left his job at Google and began evangelizing about what he sees as the existential risks that AI poses, including massive economic upheaval in a world where many “will no longer be able to know what’s true.”

Doomsday predictions abound, including among AI executives who have financial incentives to tout the power of their own products. One of those predictions went viral this week, with HyperWrite CEO Matt Shumer publishing a nearly 5,000-word article about how the latest AI models have already made some tech jobs obsolete.

“We tell you what has already happened in our own work,” he wrote, “and warn you that you are next.”

-CNN
