The world is heading towards disaster as artificial intelligence creates an elite class living in luxury while the majority languish, a former Google executive has warned.
Dex Hunter-Torricke, who was previously head of communications at Google DeepMind and also worked for Mark Zuckerberg and Elon Musk, sounded a warning to governments about the march of AI, saying “the path we are currently on is leading to disaster.”
In an essay entitled Another Future Is Possible, Hunter-Torricke, who joined the Treasury as a non-executive board member after leaving Google's AI division, urged governments to work together. “It is very clear to me now: there is no plan,” he wrote.
At the heart of his warning was the displacement of jobs by AI. He said the International Monetary Fund’s estimate that 60 per cent of jobs in advanced economies were vulnerable to displacement “likely understates the true impacts because they don’t account for the advancement of AI over the next decade”.
He said: “The writing is on the wall in every industry that processes information for a living. Productivity gains will be real – but there is no automatic mechanism that translates them into widely shared prosperity. The most likely outcome is an economy in which corporate profits explode as labor costs fall, while workers’ share of production declines. Wealth is concentrated at an unprecedented rate at the top, while the vast middle loses ground.
“By mid-century, on this trajectory, we arrive at something that goes beyond inequality and begins to resemble economic speciation: an elite class with AI-augmented capabilities enabling luxury living, blessed with medical breakthroughs that enable longer lifespans, living in parallel with a global majority whose economic prospects, access to health care, and political power have been permanently reduced. This is not a prediction I make lightly.”
Meanwhile, George Osborne, the former chancellor, said countries that do not adopt AI risk “Fomo” and could end up weaker and poorer.
George Osborne speaking at last year’s SXSW festival
Osborne, who has been leading OpenAI’s “country” programme for two months, told leaders gathered for the AI Impact Summit in Delhi: “Don’t be left behind.” He said that if countries failed to deploy AI, their workforces might be “less willing to stay put”, seeking AI-related riches elsewhere instead.
“Many countries that are not the United States of America and are not the People’s Republic of China are essentially dealing with two slightly contradictory types of feelings at the same time,” Osborne said.
“The first is a Fomo: are we missing out on this immense technological revolution? How can we be part of it? How can we ensure that our companies feel the benefits? How can we ensure that our societies feel the benefits?”
At the same time, he explained, these countries want to safeguard their national sovereignty while relying on AI systems controlled by the United States and China.
Osborne said: “There is another kind of sovereignty, which is not to be left behind, because then you will be a weaker nation, a poorer nation, a nation whose workforce will be less willing to stay put.”
• Tony Blair Institute warns UK risks falling behind in biotech
The essay is the latest warning from those inside the tech industry, or those leaving it over their concerns. Last week a safety researcher left Anthropic, a leader in AI, saying “the world is in peril”, and an OpenAI employee resigned over the company’s decision to run adverts in ChatGPT.
Separately, Michael Wooldridge, professor of AI at the University of Oxford, said the lack of safety testing meant AI risked a Hindenburg-style disaster. Delivering the Royal Society’s Michael Faraday Prize Lecture, entitled This Is Not the AI We Were Promised, Wooldridge said on Wednesday that the Hindenburg disaster had destroyed global interest in airships, leaving them a dead technology, and that a similar moment posed a real risk to AI.
Leading AI models from Anthropic, OpenAI and Google appear to be getting better and better, especially in the area of coding. This runs counter to the views of some critics who say the technology’s capabilities are flattening.
Hunter-Torricke said: “Researchers at major laboratories sometimes struggle to keep pace with advances in the industry. Last week, newly launched models made systems from six months ago almost obsolete. The curve has not flattened.”
After 15 years in Big Tech, Hunter-Torricke turned his back on the industry to set up a new London-based non-profit, the Centre for Tomorrow, which will work on these problems. He has pledged not to take money from Big Tech and is funded by Sir Tom Hunter, the Scottish billionaire, who is also the uncle of Hunter-Torricke’s wife.
• Google DeepMind to create automated science lab in UK
Hunter-Torricke left Google DeepMind in October and signed a non-disclosure agreement, but suggested that what he saw in the industry was not positive. He said: “What I had seen in those rooms, over the years, now made it impossible to stay.”
Recounting his career to Time magazine, he said: “I only told half the story … it’s something that, personally, I consider a failure.”
Hunter-Torricke signed a non-disclosure agreement when he left Google
Hunter-Torricke predicted that the world would have ten years to readjust its institutions and policies to deal with AI. Among his proposals was a global Marshall Plan, a reference to the postwar American plan to rebuild Europe. This would involve “sharing technologies and economic surpluses across borders, rather than hoarding them as instruments of domination”.
He also called for progressive taxation of AI-powered companies, well-funded support for displaced workers, and a universal basic income: an unconditional state payment for all citizens.