clearpathinsight.org
AI Research Updates

We may never be able to tell if AI becomes conscious, says philosopher

December 20, 2025 · 5 Mins Read

A Cambridge University philosopher says our evidence for what constitutes consciousness is far too limited to say if or when artificial intelligence has made the leap – and a valid test for doing so will remain out of reach for the foreseeable future.

As artificial consciousness moves from the realm of science fiction to that of a pressing ethical question, Dr. Tom McClelland says the only “justifiable position” is agnosticism: we simply won’t be able to tell, and that won’t change for a long time – if ever.

Although questions about AI rights typically center on consciousness, McClelland argues that consciousness alone is not enough to give AI ethical significance. What matters is a particular type of awareness – called sentience – which involves positive and negative feelings.

“Consciousness would see the AI developing perceptual awareness and becoming aware of itself, but this could still be a neutral state,” said McClelland, of the Department of History and Philosophy of Science at Cambridge.

“Sentience involves conscious experiences that are good or bad, which makes an entity capable of suffering or enjoyment. That’s where ethics comes in,” he said. “Even if we accidentally create conscious AI, it’s unlikely that this will be the type of consciousness we need to worry about.”

“For example, self-driving cars becoming aware of the road in front of them would be a huge deal. But from an ethical point of view, it doesn’t matter much. If they start having an emotional reaction to their destination, that’s something else.”

Race for AGI

Companies are investing huge sums of money in artificial general intelligence: machines with human-like cognition. Some argue that conscious AI is upon us, with researchers and governments already thinking about how we regulate AI consciousness.

McClelland points out that we don’t know what explains consciousness, so we don’t know how to test AI for consciousness.

“If we accidentally create conscious or sentient AI, we need to be careful to avoid harm. But considering what is effectively a toaster to be conscious when there are actual conscious beings that we are harming on an epic scale also seems like a big mistake.”

In debates around artificial consciousness, there are two main camps, explains McClelland. Believers argue that if an AI system can replicate the “software” – the functional architecture – of consciousness, it will be conscious even if it runs on silicon chips instead of brain tissue.

On the other hand, skeptics argue that consciousness depends on the right kind of biological processes in an “embodied organic subject.” Even if the structure of consciousness could be recreated on silicon, it would simply be a simulation that would operate without the AI becoming conscious.

In a study published in the journal Mind and Language, McClelland distinguishes between the positions of the two camps, showing how both take a “leap of faith” that goes far beyond any body of evidence that currently exists or is likely to emerge.

“We don’t have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or even that consciousness is essentially biological,” McClelland said.

“Nor is sufficient evidence on the horizon. The best-case scenario is that we are an intellectual revolution away from any kind of viable test of consciousness.”

“I believe my cat is conscious,” McClelland said. “It’s not based so much on science or philosophy as it is on common sense – it’s just a no-brainer.”

“However, common sense is the product of a long evolutionary history during which there have been no artificial life forms, so common sense cannot be trusted when it comes to AI. But if we look at the evidence and the data, that doesn’t work either.”

“If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and perhaps never will, know.”

“Hard” agnostic

McClelland tempers this by declaring himself a “hard” agnostic. “The problem of consciousness is truly formidable. However, it may not be insurmountable.”

He argues that the way artificial consciousness is promoted by the tech industry is more akin to branding. “There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about its technology. This is part of the hype, so companies can sell the idea of a higher level of AI intelligence.”

According to McClelland, this hype around artificial consciousness has ethical implications for the allocation of research resources.

“More and more evidence suggests that shrimp might be capable of suffering, but we kill about half a trillion shrimp every year. Testing shrimp consciousness is difficult, but nothing like as difficult as testing AI,” he said.

McClelland’s work on consciousness has led members of the public to contact him about AI chatbots. “People have had their chatbots write me personal letters insisting that they are conscious. It makes the problem more real when people are convinced that they have conscious machines that deserve rights the rest of us are ignoring.”

“If you have an emotional connection to something on the assumption that it’s conscious, and it isn’t, that has the potential to be existentially toxic. This is surely exacerbated by the tech industry’s inflated rhetoric.”
