Matt Shumer thinks we’re in a similar time to February 2020. Not the Covid lockdowns, but the weeks before, when the pandemic was accelerating and most people still didn’t know it.
“I think we are in the ‘this seems far-fetched’ phase of something much, much bigger than Covid,” writes Shumer, CEO of OthersideAI, in an essay titled “Something Big Is Happening,” which has since attracted over 80 million views and 36,000 reposts on X.
His point: Tech people have spent the last year watching AI go from being a “useful tool” to “doing my job better than me,” and everyone is about to experience the same thing. “Nothing that can be done on a computer is secure in the medium term,” he writes.
He is not the first to sound the alarm. Dario Amodei, CEO of Anthropic, warned last year that AI could eliminate half of all entry-level white-collar jobs within one to five years. Dan Schulman, CEO of Verizon, recently floated the possibility of unemployment reaching 20%, even 30%, within two to five years.
And yet, so far, the evidence does not match the alarm.
A recent analysis by Yale’s Budget Lab found that the share of workers in occupations with high exposure to AI has remained stable since ChatGPT’s release. The researchers concluded that “while concern about the effects of AI on today’s job market is widespread, our data suggests it remains largely speculative.”
So are we in an AI moment that resembles the early weeks of the pandemic?
The brick wall
The arguments for skepticism boil down to two observations: technology evolves quickly, but organizations do not; and economies are more complex than the technologies running through them.
Shumer rightly points out that the technology is improving rapidly. The latest models from the major AI labs perform comparably to or better than human professionals on a range of realistic work tasks, according to a widely cited benchmark from OpenAI. The gains have been particularly striking in AI coding tools like Anthropic’s Claude Code and OpenAI’s Codex. Software engineers increasingly spend their days directing AI agents rather than writing code themselves.
These agents may be capable of generating tens of thousands of lines of code, but engineers still need to manage them and understand the code they produce. Kian Katanforoosh, CEO of AI startup Workera and adjunct lecturer in deep learning at Stanford, says many startup founders who have prioritized building quickly with AI “hit a wall” because they don’t understand their codebases. “They don’t understand how to improve it. They don’t even know what to ask the AI, because it’s so complicated and complex,” he says. “It really takes a toll on the human. How do you bring the human along? Because it’s still the human who directs the agents.”
Automating knowledge work is not as simple as it sometimes seems. Tasks within jobs are often messily intertwined, says Brian Jabarian, an economist and assistant professor at Carnegie Mellon. He gives the example of a company trying to automate the candidate interview process with an AI voice agent. The company may think it is just replacing information gathering, but that task is closely tied to others, like candidate evaluation and assessing team fit. Automating one of these tasks changes how the others are performed, and not always for the better.
Even when automation succeeds, the result is not always what you expect. In the 19th century, technology automated 98% of the labor needed to weave a yard of fabric, according to research by economist James Bessen. But when the price of cloth fell, demand exploded, increasing the total employment of weavers for several decades before it eventually declined. If AI reduces the cost of legal services, demand for legal work could rise, and with it, demand for lawyers who use AI. The economic effects of a technology are always more complex than the technology itself.
Why “ultimately” still matters
None of this means Shumer is wrong to be concerned. AI will disrupt, or even eliminate, many jobs. But it will probably take longer than expected.
The bottleneck is not the technology. It’s everything around it: organizational change, the complexity of real-world jobs, the new work AI creates, the regulatory frictions that slow adoption. The pace of disruption will be set by the slowest of these forces, not the fastest. And every year of delay matters for workers’ ability to adapt.
The Covid comparison is a better rhetorical device than a precise analogy. The pandemic forced a sudden change on everyone at once. AI adoption happens one business, one team, one IT department at a time.
The only thing everyone agrees on
Where Shumer is on firmer ground is in his advice to lean in and learn how to use AI well.
“The one thing I encourage everyone to do is download Cursor,” Klarna CEO Sebastian Siemiatkowski recently said, referring to the AI coding tool. Experiment with it, he suggests, and see if it can build an idea you’ve been working on. “If you haven’t, you won’t fully appreciate the change we’re going to experience.”
