clearpathinsight.org
AI in Technology

Top AI expert delays timeline for possible destruction of humanity

January 7, 2026 · 4 Mins Read

A leading expert in artificial intelligence has pushed back his timeline for a possible AI catastrophe, arguing that it will take longer than initially expected for AI systems to be able to code autonomously and thereby accelerate their own development toward superintelligence.

Daniel Kokotajlo, a former OpenAI employee, sparked a heated debate in April by publishing AI 2027, a scenario that envisions uncontrolled AI development producing a superintelligence that, after outwitting world leaders, destroys humanity.

The scenario quickly gained admirers and detractors. US Vice President JD Vance appeared to reference AI 2027 in an interview last May while discussing the US-China arms race in artificial intelligence. Gary Marcus, professor emeritus of neuroscience at New York University, called the piece a “work of fiction” and several of its conclusions “pure science fiction gibberish”.

“Our timelines… are still a little longer,” wrote Daniel Kokotajlo. Photograph: Twitter/X

Timelines for transformative artificial intelligence – sometimes called AGI (artificial general intelligence), or AI capable of replacing humans in most cognitive tasks – have become a fixture in AI safety communities. The release of ChatGPT in 2022 significantly shortened these timelines, with officials and experts predicting the arrival of AGI within decades or even years.

Kokotajlo and his team named 2027 as the year AI would reach “fully autonomous coding”, although they described that as their “most likely” scenario and noted that some of them had longer timelines. Today, doubts are surfacing both about the imminence of AGI and about the meaning of the term itself.

“Many others have pushed back their timelines over the past year as they realize how inconsistent AI performance is,” said Malcolm Murray, an expert in AI risk management and one of the authors of the International AI Safety Report.

“For a scenario like AI 2027 to come true, [AI] would need a lot more practical skills that are useful in real-world complexity. I think people are starting to realize the enormous inertia in the real world that will delay complete societal change.”

“The term AGI had meaning from afar, at a time when AI systems were very narrow – playing chess, playing Go,” said Henry Papadatos, executive director of SaferAI, a French AI nonprofit. “Now we have systems that are already quite general, and the term doesn’t make much sense.”

Kokotajlo’s AI 2027 is built on the idea that AI agents will fully automate AI coding and R&D by 2027, triggering an “intelligence explosion” in which AI agents create increasingly intelligent versions of themselves and then – in one possible ending – kill all humans by mid-2030 to make way for more solar panels and data centers.

In their update, however, Kokotajlo and his co-authors revise their expectations for when AI might be able to code autonomously, estimating that it will likely happen in the early 2030s rather than in 2027. The new forecast sets 2034 as the horizon for “superintelligence” and contains no estimate of when AI might destroy humanity.

“Things seem to be going a little slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published, and now they are a little longer again,” Kokotajlo wrote in a post on X.

Creating AIs capable of performing AI research remains a central goal of the major AI companies. OpenAI CEO Sam Altman said in October that having an automated AI researcher by March 2028 was an “internal goal” of his company, though he added: “We could totally fail in that goal.”

Andrea Castagna, a Brussels-based AI policy researcher, said there are a number of complexities that dramatic AGI timelines fail to address. “The fact that you have a superintelligent computer focused on military activity does not mean that you can integrate it into the strategic documents that we have developed over the last 20 years.

“The more we develop AI, the more we see that the world is not science fiction. The world is much more complicated than that.”

© 2026 Designed by clearpathinsight
