Merriam-Webster’s word of the year for 2025 is “slop,” which the American dictionary defines as “low-quality digital content produced, usually in quantity, using artificial intelligence.” The choice highlights the fact that although AI is being widely adopted, particularly by business leaders keen to reduce labor costs, its downsides are also becoming evident. In 2026, a reality check for AI represents a growing economic risk.
Ed Zitron, the foul-mouthed figurehead of AI skepticism, argues quite convincingly that as things stand, the “unit economics” of the entire industry – the cost of processing a single customer’s requests relative to the price companies are able to charge them – do not add up. In typically colorful language, he calls them “dogshit.”
AI revenue is growing rapidly as more paying customers sign up, but so far that’s not enough to cover the staggering levels of investment underway: $400 billion (£297 billion) in 2025, with much bigger forecasts for the next 12 months.
Another vocal skeptic, Cory Doctorow, says: “These companies are not profitable. They cannot be profitable. They keep the lights on by sucking up hundreds of billions of dollars of other people’s money and then lighting it on fire.”
It is nothing new for frontier technology companies to run at a loss, sometimes for years. But the move toward profitability usually comes as costs decline. So far, each iteration of large language models (LLMs) has tended to be more expensive, consuming more data, more energy, and more of the time of highly paid technology experts.
The vast data centers needed to train and operate the models are so expensive to build and equip that, in many cases, they are financed by loans secured by future revenues.
Recent analysis from Bloomberg suggested there were $178.5 billion of these data center credit deals in 2025 alone, with new, inexperienced operators joining Wall Street firms in a “gold rush.”
However, the expensive Nvidia chips that equip these data centers have a limited useful life, potentially shorter than the term of the loan agreements.
Besides leverage – borrowing – the boom increasingly involves another bubble indicator: financial engineering, including complex and circular financing arrangements that carry disturbing echoes of past corporate crashes.
Believing that generative AI will eventually produce enough revenue to match the colossal sums invested relies – as in all bubbles – on telling big, dramatic stories about the scale of the transformation underway.
In these stories, LLMs are not merely brilliant tools for analyzing and synthesizing large amounts of information. They are rapidly approaching “superintelligence,” as OpenAI CEO Sam Altman puts it, or are about to replace human friendships, according to Mark Zuckerberg.
They certainly seem to be replacing some unfortunate human employees in specific industries. Brian Merchant, the author of Blood in the Machine, who compares the backlash against big tech to the Luddite rebellion of the 19th century, has collected dozens of first-hand accounts of writers, coders, and marketers laid off in favor of AI-generated output.
Yet many point to the bland quality of the work produced by their digital replacements, or worse, the risks involved when sensitive tasks are shifted out of human control.
Indeed, the dangers of rushing headlong into the massive replacement of human workers have become increasingly evident in recent months.
In the United Kingdom, the High Court issued a warning regarding the use of AI by lawyers after two cases in which examples of completely fictitious case law were cited.
Police officers in Heber City, Utah, learned to manually check the output of a transcription tool they used to draft reports from body camera footage after it mistakenly claimed an officer had turned into a frog. Disney’s The Princess and the Frog was playing in the background.
Specific examples like these don’t capture the costs of what Merchant calls the “slop layer” of AI-generated content that circulates in every online space, making it harder to identify what’s real or true.
Doctorow says: “AI is not the harbinger of ‘impending superintelligence’. Nor is it going to replace human intelligence. It is a bag of useful (sometimes very useful) tools that can sometimes improve the lives of workers, when they decide how and when those tools are used.”
Viewed in this light, these technologies can still deliver significant productivity gains, but perhaps not enough to justify today’s high valuations and the tsunami of investments underway.
Any such reassessment would cause chaos in the financial markets. As the Bank for International Settlements (BIS) recently highlighted, the technology stocks of the “magnificent seven” now represent 35% of the S&P 500, up from 20% three years ago.
A correction in stock prices would have very real consequences far beyond Silicon Valley, rippling through retail investors on both sides of the Atlantic, Asian technology exporters, and the lenders, including loosely regulated private equity firms, that have financed the sector’s expansion.
In the United Kingdom, the Office for Budget Responsibility (OBR) estimated in its budget forecasts that a “global correction” scenario, in which UK and global stock prices fall by 35% over the coming year, would reduce the country’s GDP by 0.6% and result in a £16 billion deterioration in the public finances.
This would be relatively manageable compared with the 2008 global financial crisis, in which UK institutions were leading players. But it would still be keenly felt in an economy struggling to find its feet.
So while it’s perhaps understandable to feel a thrill of schadenfreude at the idea of big tech’s super-rich boss class being humbled, we all live in their world, and we would not escape the consequences.
