AI in Technology

The world is on AI’s path to disaster, warns a former Google executive

February 19, 2026
The world is on a path to disaster because artificial intelligence will create an elite class living in luxury while the majority languish, a former Google executive has warned.

Dex Hunter-Torricke, who was previously head of communications at Google DeepMind and also worked for Mark Zuckerberg and Elon Musk, sounded a warning to governments about the march of AI, saying “the path we are currently on is leading to disaster.”

In an essay entitled Another future is possible, Hunter-Torricke, who joined the Treasury as a non-executive board member after leaving Google’s AI division, urged governments to work together. “It is very clear to me now: there is no plan,” he wrote.

At the heart of his warning was the displacement of jobs due to AI. He said the International Monetary Fund’s estimate that 60 percent of jobs in advanced economies were vulnerable to displacement “likely understates the true impacts because they don’t account for the advancement of AI over the next decade.”

He said: “The writing is on the wall in every industry that processes information for a living. Productivity gains will be real – but there is no automatic mechanism that translates them into widely shared prosperity. The most likely outcome is an economy in which corporate profits explode as labor costs fall, while workers’ share of production declines. Wealth is concentrated at an unprecedented rate at the top, while the vast middle loses ground.

“By mid-century, on this trajectory, we arrive at something that goes beyond inequality and begins to resemble economic speciation: an elite class with AI-augmented capabilities enabling luxury living, blessed with medical breakthroughs that enable longer lifespans, living in parallel with a global majority whose economic prospects, access to health care, and political power have been permanently reduced. This is not a prediction I make lightly.”

Meanwhile, George Osborne, the former chancellor, said countries that do not adopt AI risk “Fomo” and could end up weaker and poorer.

George Osborne speaking at last year’s SXSW festival (Jack Taylor/Getty Images)

Osborne, who has been leading OpenAI’s “country” program for two months, told leaders gathered for the AI Impact Summit in Delhi: “Don’t be left behind.” He said that if they do not deploy AI, their workforce may be “less willing to stay put”, hoping instead to seek AI-related riches elsewhere.

“Many countries that are not the United States of America and are not the People’s Republic of China are essentially dealing with two slightly contradictory types of feelings at the same time,” Osborne said.

“The first is a Fomo: are we missing out on this immense technological revolution? How can we be part of it? How can we ensure that our companies feel the benefits? How can we ensure that our societies feel the benefits?”

At the same time, he explained, these countries want to safeguard their national sovereignty even as they rely on AI systems controlled by the United States and China.

Osborne said: “There is another kind of sovereignty, which is not to be left behind, because then you will be a weaker nation, a poorer nation, a nation whose workforce will be less willing to stay put.”

• Tony Blair Institute warns UK risks falling behind in biotech

The essay is the latest warning from people inside the tech industry, or leaving it over their concerns. Last week a safety researcher left Anthropic, a leader in AI, saying “the world is in peril”, and an OpenAI employee resigned over the company’s decision to run ads in ChatGPT.

Separately, Michael Wooldridge, professor of AI at the University of Oxford, said the lack of safety testing meant AI risked a Hindenburg-style disaster. Delivering the Royal Society Michael Faraday Prize Lecture, entitled This is not the AI we were promised, Wooldridge said on Wednesday: “The Hindenburg disaster destroyed global interest in airships; it became a dead technology, and a similar moment poses a real risk to AI.”

Leading AI models from Anthropic, OpenAI and Google appear to be getting better and better, especially in the area of coding. This runs counter to the views of some critics who say the technology’s capabilities are flattening.

• DeepMind CEO says artificial general intelligence could arrive in the next five years

Hunter-Torricke said: “Researchers at major laboratories sometimes struggle to keep pace with advances in the industry. Last week, newly launched models made systems from six months ago almost obsolete. The curve has not flattened.”

After 15 years in Big Tech, Hunter-Torricke turned his back on the industry to create a new London-based non-profit, the Center for Tomorrow, which will tackle these problems. He has pledged not to take money from Big Tech and is funded by Sir Tom Hunter, the Scottish billionaire, who is also the uncle of Hunter-Torricke’s wife.

• Google DeepMind to create automated science lab in UK

Hunter-Torricke left Google DeepMind in October and signed a non-disclosure agreement, but suggested that what he saw in the industry was not positive. He said: “What I had seen in these rooms, over the years, now made it impossible to stay there.”

Recounting his career to Time magazine, he said: “I only told half the story … it’s something that, personally, I consider a failure.”

Hunter-Torricke signed a non-disclosure agreement when he left Google (José Sarmento Matos/Bloomberg/Getty Images)

Hunter-Torricke predicted that the world has ten years to readjust its institutions and policies to deal with AI. Among his proposals was a global Marshall Plan, a reference to the United States’ post-war plan to rebuild Europe. This would involve “sharing technologies and economic surpluses across borders, rather than hoarding them as instruments of domination”.

He also called for progressive taxation of AI-powered companies, well-funded support for displaced workers, and a universal basic income: an unconditional state payment for all citizens.
