We need to talk about how we talk about “AI”

January 10, 2026 · 7 Mins Read

“AI” is not your friend. Nor is it an intelligent tutor, an empathetic ear, or a helpful assistant. It does not “invent” facts, it does not make “mistakes”, and it does not actually answer your questions. Such anthropomorphizing language, however, permeates the public debate on so-called artificial intelligence technologies. The problem with anthropomorphic descriptions is that they risk masking significant limits of probabilistic automation systems, limits that make these systems fundamentally different from human cognition.

People and companies selling “AI” technologies routinely use language that describes their systems as human-like: “reasoning abilities,” “mind-blowing,” and “artificial intelligence” itself. The media have largely let them define the terms of the debate, right down to the terminology used in any discussion of these systems. But even the most flawless execution of a task generally associated with intelligence is not enough to make a system “intelligent”, and describing these systems as human or human-like is misleading at best and, at worst, fatal.

Anthropomorphizing language shapes how people perceive these systems on multiple levels. It oversells systems that may not work well, and it paints a worldview in which the people responsible for developing the systems are not accountable for the systems’ inaccurate, inappropriate, and sometimes fatal outputs. This promotes misplaced trust, overreliance, and dehumanization.

The problematic nature of anthropomorphization, or “wishful mnemonics”, is by no means a new criticism in the field of computer science. The issue was raised half a century ago by computer scientist Drew McDermott, writing in 1976, when “artificial intelligence” was still a relatively new field:

If a researcher (…) calls the main loop of his program “UNDERSTAND”, he is (until proven innocent) merely begging the question. He may mislead a lot of people, most prominently himself. (…) What he should do instead is refer to this main loop as “G0034”, and see if he can convince himself or anyone else that G0034 implements some part of understanding. (…) Many instructive examples of wishful mnemonics by AI researchers come to mind once one sees the point.
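
McDermott’s renaming test translates directly into code. Here is a minimal, hypothetical sketch (the functions and their keyword-matching body are invented for illustration, not taken from any real system): the two routines compute exactly the same thing, and only the suggestive name invites a reader to credit one of them with “understanding”.

```python
# Hypothetical illustration of "wishful mnemonics": the two functions below
# perform the same computation; only the name differs.

CANNED_REPLIES = {"weather": "It looks sunny.", "time": "It is noon."}


def understand(question: str) -> str:
    """Suggestively named: the name implies comprehension happens here."""
    # In reality: return the canned reply whose keyword appears in the input.
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in question.lower():
            return reply
    return "I do not know."


def g0034(question: str) -> str:
    """Neutrally named: the behaviour must be judged on its own merits."""
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in question.lower():
            return reply
    return "I do not know."


if __name__ == "__main__":
    print(understand("What is the weather like?"))  # It looks sunny.
    print(g0034("What is the weather like?"))       # It looks sunny.
```

The point of the renaming is not that keyword lookup is bad, but that once the label “UNDERSTAND” is gone, nothing in the code itself tempts us to describe it in cognitive terms.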

In order to make more informed decisions about what we call AI, it helps to recognize the different ways in which the language used to describe it is anthropomorphizing, and therefore misleading. The most important category of anthropomorphization consists of terms that describe systems in terms of cognition or even emotion. These can be verbs describing what the system supposedly does (“think”, “recognize”, “understand”) or nouns naming those actions or their results (“chain of thought”, “reasoning”, “skills”). Words describing cognitive failures, like “ignore”, belong here too, since they cast the “ignoring” entity as something that could, conversely, pay attention. The term “artificial intelligence” itself may be particularly problematic: research has shown that people associate higher machine competence with this term than with, for example, “decision support systems”, “sophisticated statistical models”, or even “machine learning”.

Metaphors are useful shortcuts, but they are also seductive because they create a sense of understanding. Communicating accurate mental models of “AI” systems is challenging when technical descriptions are not meaningful to the average user. That difficulty does not, however, let journalists and researchers off the hook: it is up to us to find clear, non-misleading ways to talk about the technology.

Anthropomorphizing language can also mislead by putting the automated system in the driver’s seat, treating it as an agent in its own right. This framing is pervasive, and it obscures the actions and responsibility of the people who build and use the systems. Examples include phrases like “ChatGPT helps students…”, “the model creates a realistic video”, or “AI systems need more and more power every year.” A variation positions a model as a collaborator of the person using it, rather than a tool they use, with words like “co-write”, “co-create”, and so on.

We also anthropomorphize automated systems when we describe them as participating in acts of communication. If you say you “asked” the system a question, that it “told” you something, or that it “lied”, you are overstating what actually happened. These words imply communicative capacity and intention: the ability to understand communicative acts, the desire to communicate in return, and the choice to do so in a particular way. Rephrasing the language we use to describe these interactions goes against the grain, because the companies selling these systems not only describe them as communicators, they also make many design choices to support that illusion. From the chat interface itself to the use of first-person pronouns, these systems are built to give the impression of an interlocutor. But they produce text for which no one is accountable, and they play on our very human tendency to make sense of any linguistic activity in a language familiar to us.

No matter how much comfort, relief, or even connection a person feels with a chatbot, that does not make the chatbot a friend, therapist, or romantic partner. People may have friendly feelings toward inanimate objects or technologies, but those feelings are entirely one-directional. We certainly wouldn’t call a child’s stuffed toy their friend without at least the qualifier “imaginary”. Framing is an exceptionally powerful cognitive device, one that shapes what we treat as real and what we treat as unreal. Consider the numerous recent cases of “AI psychosis” and of disastrous “therapeutic” interactions between people and chatbots: for people prone to delusions, the tendency to anthropomorphize chatbots is particularly perilous. Frequent use of these technologies for “conversations” imitating romantic exchanges has been associated with higher levels of depression and lower life satisfaction.

From wishful mnemonics to precise nomenclature

We argue that we should aim for greater linguistic precision in our descriptions of “AI” systems: in scientific and journalistic writing, in public debate, and in everyday use. This requires deliberate rephrasing and may feel awkward at first. But patterns of language use are learned from one another, and yesterday’s quirks, repeated persistently enough, become part of our linguistic landscape.

Inaccuracies caused by anthropomorphic descriptions are likely to have a disproportionate impact on vulnerable populations. A 2025 article reports a negative correlation between individuals’ knowledge of AI and their receptivity to it; the more people know about how “AI” works, the less likely they are to want to use it: “people unfamiliar with AI are more likely to perceive AI as magical and to have feelings of awe when AI performs tasks that appear to require uniquely human attributes.” Somewhat absurdly, the authors use this finding as an argument against educating the public about “AI”, since doing so would reduce adoption. We believe precise language is a way to increase people’s knowledge of AI, helping them make informed choices about whether to accept the technology.

A more deliberate and thoughtful way forward is to talk about “AI” systems in terms of what we use the systems to do, often specifying the input and/or the output. In other words, let’s talk about functions that serve our purposes rather than about “capabilities” of the system. Rather than saying that a model is “good at” something (which suggests that the model has skills), we can talk about what it is used for: who uses the model, to do what, and for whom?
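
As a concrete, hypothetical illustration of this kind of rephrasing (the function name, its stand-in body, and “example-model” are invented for this sketch, not taken from any real system or API), the comments below contrast an anthropomorphic description of a text-generation call with one phrased in terms of inputs, outputs, and who uses the system for what.

```python
# Hypothetical sketch: the same operation described two ways.

def generate_text(prompt: str) -> str:
    # Anthropomorphic framing (to be avoided):
    #   "The model reads your question, understands it, and answers you."
    # Input/output framing (preferred):
    #   "Given an input string, the system returns a string of tokens sampled
    #    from a probability distribution learned from its training text."
    # Stand-in body so the sketch runs; a real system would call a generator here.
    return f"[text generated from prompt: {prompt!r}]"


# Anthropomorphic: "The AI assistant helped the student write an essay."
# Precise:         "The student used a text generator ('example-model') to
#                   produce draft paragraphs from their outline."
draft = generate_text("Outline: causes of the 1930s Dust Bowl")
print(draft)
```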

It takes effort to push back against the tide of anthropomorphizing language embedded in commonly used technical terms and popular discourse, both to recognize that language and to find appropriate alternatives. Whether we take part in local discussions and decisions about our workplaces, schools, or communities, or write for broad audiences, we share the responsibility to create and use empowering metaphors rather than the misleading language that embeds tech companies’ marketing pitches.
