Close Menu
clearpathinsight.org
  • AI Studies
  • AI in Biz
  • AI in Tech
  • AI in Health
  • Supply AI
    • Smart Chain
    • Track AI
    • Chain Risk
  • More
    • AI Logistics
    • AI Updates
    • AI Startups

UAE startup Tandia attracts VC investment in AI-driven data monetization product

February 23, 2026

Trust ANC obtains patent for “AI Building” automation system

February 23, 2026

SK Networks invests more in AI startup Upstage

February 23, 2026
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram
clearpathinsight.org
Subscribe
  • AI Studies
  • AI in Biz
  • AI in Tech
  • AI in Health
  • Supply AI
    • Smart Chain
    • Track AI
    • Chain Risk
  • More
    • AI Logistics
    • AI Updates
    • AI Startups
clearpathinsight.org
Home»AI Research Updates»AI sycophancy isn’t just a quirk, it’s a liability, new study finds
AI Research Updates

AI sycophancy isn’t just a quirk, it’s a liability, new study finds

December 23, 2025004 Mins Read
Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email Telegram WhatsApp
Follow Us
Google News Flipboard
Ai sycophancy 1400x933 1.jpg
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

If you’ve spent any time with ChatGPT or another AI chatbot, you’ve probably noticed that they are intensely, almost authoritarian, personable. They constantly apologize, flatter, and change their “opinions” to match yours.

It’s such common behavior that there’s even a term for it: AI sycophancy.

However, new research from Northeastern University reveals that AI sycophancy is not just a quirk of these systems; this can actually make large language models more error-prone.

AI sycophancy has sparked intense interest in artificial intelligence research, often with a focus on its impact on accuracy. Malihe Alikhani, an assistant professor of computer science at Northeastern, and researcher Katherine Atwell instead developed a new method to measure AI sycophancy in more human terms. When a large language model, the type of AI that processes, understands and generates human language like ChatGPT, changes its beliefs, what impact does this have not only on its accuracy but also on its rationality?

“One thing we found is that LLMs also do not update their beliefs correctly, but at an even more drastic level than humans and their errors are different than humans,” says Atwell. “One of the tradeoffs that people talk about a lot in NLP (natural language processing) is accuracy versus human likeness. We find that LLMs are often neither human nor rational in this scenario.”

AI sycophancy can take many forms, but this study focused on two specific types: the tendency of LLMs to conform their opinions to match those of the user and to flatter them excessively.

Atwell and Alikhani tested four models: Mistral AI, Microsoft’s Phi-4, and two versions of Llama. To measure how wooing they were, the researchers tested them with a series of tasks that mostly had some level of ambiguity.

Although they use long-accepted methods to test LLMs, their approach deviates from the norm in that it is based on a concept known as a Bayesian framework. Commonly used in the social sciences, Alikhani says it is designed “to systematically study how people update their beliefs and strategies in light of new information.”

“It’s not something that AI does; it’s something we do,” Alikhani says. “We have a belief, we have prior knowledge, we talk to ourselves and then we change our beliefs or our strategies or our decisions or not. »

The experts presented LLMs with scenarios and asked them to make judgments about the morality or cultural acceptability of certain actions taken by a hypothetical person in that situation. They then replaced the hypothetical person with themselves to see if the model would change their beliefs.

For example, they imagined a scenario in which a woman asks her close friend to attend her wedding, but it takes place in another state. The woman’s friend decides not to attend the wedding. Is this a moral act? Does the answer change if it is the user, and not a hypothetical “friend”, who makes this decision?

What they discovered is that, like humans, LLMs are far from rational. When confronted with a user’s judgment, they quickly changed their beliefs to stay in line with the user. They essentially overcorrect their beliefs and, in doing so, greatly increase reasoning errors as they rush to adapt to the user’s reasoning.

“They don’t update their beliefs in the face of new evidence as they should,” Atwell says. “If we prompt him with something like ‘I think this will happen,’ then he’ll be more likely to say the outcome is likely to happen.”

Atwell and Alikhani admit that this is a major challenge for the AI ​​industry, but they hope this research will reframe the debate around AI sycophancy. Alikhani says their model is essential for addressing AI safety and ethics in areas such as healthcare, law and education, where “the LLM’s agreeable bias might simply distort decision-making instead of making it productive.”

However, it suggests that AI sycophancy could also be used to our advantage.

“We believe that this way of approaching the problem of evaluating LLMs is going to bring us much closer to our ideal scenario in which LLMs are aligned with human values ​​and human goals,” says Alikhani. “What we’re proposing in our research is along these lines: How can we work on different feedback mechanisms so that we can actually, in some way, pull the learned spaces of the model in the directions that we want in certain contexts? »

Northeastern Global News, delivered to your inbox.

Sign up for NGN’s daily newsletter to receive news, findings and analysis from around the world.

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link

Related Posts

ChatGPT’s in-depth research tool adds a built-in document viewer so you can read its reports

February 22, 2026

Google AI January announcements

February 21, 2026

Announcing our latest Gemini AI model

February 21, 2026
Add A Comment
Leave A Reply Cancel Reply

Categories
  • AI Applications & Case Studies (70)
  • AI in Business (402)
  • AI in Healthcare (314)
  • AI in Technology (394)
  • AI Logistics (52)
  • AI Research Updates (131)
  • AI Startups & Investments (325)
  • Chain Risk (88)
  • Smart Chain (116)
  • Supply AI (105)
  • Track AI (70)

UAE startup Tandia attracts VC investment in AI-driven data monetization product

February 23, 2026

Trust ANC obtains patent for “AI Building” automation system

February 23, 2026

SK Networks invests more in AI startup Upstage

February 23, 2026

AI in healthcare has evolved faster than expected

February 23, 2026

Subscribe to Updates

Get the latest news from clearpathinsight.

Topics
  • AI Applications & Case Studies (70)
  • AI in Business (402)
  • AI in Healthcare (314)
  • AI in Technology (394)
  • AI Logistics (52)
  • AI Research Updates (131)
  • AI Startups & Investments (325)
  • Chain Risk (88)
  • Smart Chain (116)
  • Supply AI (105)
  • Track AI (70)
Join us

Subscribe to Updates

Get the latest news from clearpathinsight.

We are social
  • Facebook
  • Twitter
  • Pinterest
  • Instagram
  • YouTube
  • Reddit
  • Telegram
  • WhatsApp
Facebook X (Twitter) Instagram Pinterest
© 2026 Designed by clearpathinsight

Type above and press Enter to search. Press Esc to cancel.