clearpathinsight.org
AI in Technology

AI researchers resign following public warnings

February 12, 2026 · 4 Mins Read

Two researchers, one at OpenAI and one at Anthropic, have resigned and issued public warnings about the direction of AI companies.

Mrinank Sharma, who studied engineering at Cambridge and machine learning at Oxford before leading Anthropic’s safeguards research team, resigned on Monday, saying “the world is in peril.”

Zoë Hitzig, a researcher at OpenAI, announced her departure on Wednesday in an op-ed in The New York Times, in which she expressed “deep reservations” about the company’s strategy.

Anthropic is one of the world’s leading AI companies and its chatbot Claude has become popular with coders and businesses, while OpenAI develops ChatGPT. Sharma’s team built defenses against AI-assisted bioterrorism and studied AI sycophancy, a growing concern as people become more emotionally dependent on chatbots.

“Pressure to put aside what matters”

He wrote:

“Additionally, throughout my time here, I have seen repeatedly how difficult it is to truly let our values govern our actions. I have seen this in myself, within the organization, where we are constantly faced with pressure to put aside what matters most, and in society as a whole as well. It is by holding this situation and listening as best I can that it becomes clear what I need to do.”

Sharma said he now hopes to pursue poetry studies.

“Potential for user manipulation”

Hitzig, who spent two years working on product and security at OpenAI, said she was resigning over the company’s decision to serve ads in ChatGPT. She wrote: “For several years, ChatGPT users have generated an archive of unprecedented human frankness, in part because people thought they were talking to something that had no ulterior motive.

“Users interact with an adaptive, conversational voice to which they reveal their most private thoughts. People talk to chatbots about their medical fears, their relationship problems, their beliefs about God and the afterlife. Advertising built on these archives creates the potential to manipulate users in ways that we don’t have the tools to understand, let alone prevent.”

An existential threat?

Hieu Pham, a member of the OpenAI technical team, posted on

Anthropic was created by researchers who left OpenAI due to disagreements over AI safety. Dario Amodei, its chief executive, recently warned that AI could eliminate half of white-collar jobs.

Fears about the pace of AI disruption have been fueled by Anthropic’s release of AI tools that act as agents on behalf of the user, proofreading documents and performing legal and data-related tasks. The release triggered a massive sell-off in software stocks.


An essay by Matt Shumer, the tech entrepreneur, warning that AI would disrupt the global workforce more profoundly than the pandemic, went viral this week after being viewed more than 40 million times on X.

Elon Musk’s company xAI is also facing an exodus. Two of its co-founders left this week, leaving only half of the original 12.

Andrea Miotti, founder and chief executive of ControlAI, a campaign group seeking to reduce the risks of AI, said: “I think this trend will only continue as employees of these companies grapple with the threat posed by the technology they are building. These companies explicitly aim to build superintelligence, technology that even AI CEOs themselves declare poses an extinction risk to humanity.”

The rapidly evolving AI sector has become accustomed to high staff turnover, with sought-after researchers lured away by competitors or striking out on their own at start-ups. Internal disagreements are also common as companies scale and commercialize, straying from their original missions.

Elon Musk said xAI was “hiring aggressively” (image: the xAI logo on a phone screen in front of a photo of Musk; HAKAN NURAL/ANADOLU/GETTY IMAGES)

OpenAI has disbanded its “Mission Alignment Team,” responsible for ensuring that AI benefits all humanity, according to the Platformer newsletter.

OpenAI highlighted comments from Fidji Simo, its CEO, on the Access podcast in which she stated that “ads are not going to influence LLM results” and “ads will always be very clearly separated and demarcated from the content.”

Musk posted: “We are recruiting aggressively.”

Anthropic and xAI have been contacted for comment.
