There are few technological events that can be considered true.Big BangMoments: moments when our understanding of the world and technology’s place in it changes.
The advent of world Wide Web was one of those “before and after” moments. The release of the iPhone in 2007 was another, sparking the smartphone revolution.
The November 2022 release of ChatGPT was a similarly seismic shift. Before that, artificial intelligence (AI) was something that few people outside of the tech world really knew about or cared about.
But the Extended Language Model (LLM) quickly became the fastest-growing application in history and launched what we now call the “generative AI revolution.”
However, revolutions cannot always maintain the same momentum.
Three years after the release of ChatGPT and despite heartbreaking headlines about massive job losses due to AI, many of us remain employed and, it seems, more than half of Britons I still have never used an AI chatbot.
Whether the revolution has stalled is debatable, but even AI’s most ardent disciples suggest things aren’t moving as quickly as expected. So, is AI as smart as it will ever be?
By the way, what is intelligence?
Whether AI intelligence has plateaued depends on your definition of ‘intelligent,’ says professor Catherine FlickProfessor of AI Ethics at Staffordshire University.
“In my opinion, AI is not intelligent at all, but a programmatic ability to answer human questions with seemingly intelligent answers,” she says.
For her, the answer to the question of whether AI has become as intelligent as it will be is yes – because it never has been and never can be.
“All that can happen is that we can better program these tools to return an ever more deceptive simulacrum of intelligence. But the underlying capacity to think, experience and reflect will be forever off-limits to artificial agents,” she says.
Part of the disappointment around AI comes from a group of AI advocates who suggested that AI could do everything a human could do – and do it better – from the moment it was unleashed on the world.
This group included the AI companies themselves and their executives. Dario Amodei, CEO of Anthropic, creator of the chatbot Claude, is one of the strongest supporters.

He recently suggested AI could develop beyond the limits of human intelligence within three years – although he has previously made similarly optimistic predictions that turned out to be wrong.
Flick recognizes that “intelligence” in AI means different things to different people. What if the question really is “Will AI models like ChatGPT and Claude get even better?” ”, his answer changes.
“(They will probably improve) as other methods are found that can more accurately simulate (human-like interaction), but they will never take that magical step from being a sophisticated processor of statistical data weighting to true experiential, reflective, reflective intelligence.”
Nevertheless, the debate over which AI models begin to produce less and less powerful improvements is lively in the AI industry.
OpenAI’s highly anticipated GPT-5 model turned out to be a damp squib – largely because the company tried to present it as something superhuman in its pre-release marketing.
So when the slightly more capable model was released, people found it disappointing. For opponents of AI, this is a sign that we have already reached a ceiling. But are they correct?
Learn more:
Two-way system
“The perception that AI progress is plateauing is actually an illusion, shaped by the fact that most people only access it through consumer-facing applications, like chatbots,” says Éléonore WatsonAI ethics engineer and faculty member at Singularity University – an educational company and research center.
Even then, these chatbots are improving, but often incrementally, Watson says. “Like a car that gets a nicer paint job or a better GPS every year,” she explains.
“What this vision misses are the revolutionary changes happening under the hood. In reality, the engine has been fundamentally redesigned and is accelerating at an exponential rate.”
Even though AI chatbots work largely the same as they did three years ago for the average user who doesn’t dig into the details, AI is being used in a range of applications that weren’t used before, and with success. Medicine, for example.
This pace should continue, she believes, for several reasons. One of them is the power of energy put behind the generative AI revolution.
According to the International Energy Agencyby 2030 the demand for electricity to power AI systems will be bigger than what is used to make steel, cement, chemicals and any other energy-intensive goods combined.

Tech companies are spending huge sums on data centers to process our AI queries.
In 2021, the year before ChatGPT was released, four of the biggest tech companies – Alphabet (Google’s parent company), Amazon, Microsoft and Meta (owners of Facebook) – spent just over $100 billion (£73 billion) on everything needed to host and operate these data centers.
In 2025 it is closer to $350 billion (£256 billion), and expected to exceed $500 billion (£366 billion) by 2029.
In addition to building larger data centers with more powerful and reliable sources of electricity to power the models, AI companies are getting smarter about how those models work.
“The brute force method of adding more data and computing power always produces surprising gains, but the most important thing is efficiency,” says Watson.
“Models are becoming much more capable. A task that once required a sprawling giant can now be done by a system a fraction of its size, cheaper and faster, with capacity density increasing at an astonishing rate.”
Techniques such as rounding numbers or quantizing inputs in LLMs (meaning you reduce the precision of information in less important areas) can all make models more effective.
Find me an agent
One area of “intelligence” – if we define it as “effectiveness” – where AI still has room to grow is in the “agentic” use of AI.
This involves changing what an AI does and how we interact with it, and this is still in its early stages. “An agentic AI can manage your finances, anticipate your needs, and design subgoals toward a larger goal,” Watson says.
All major AI companies, including OpenAI, are integrating agentic AI tools into their systems, which would transform the use of the technology from simple discussions to AI-enabled colleagues who can complete their tasks independently while you focus on something else.
Increasingly, these AI agents are able to work independently for hours on end – which most people agree is a testament to advances in AI intelligence.
Yet AI agents present their own challenges.
Researchers have already identified problems with agentic AI, which can be tricked into carrying out harmful instructions through so-called “rapid injection” attacks, in which they are asked to execute commands that could be dangerous when they encounter instructions on a website that the AI agent considers to be harmless.
It is for this reason that many companies maintain close monitoring of these AI agents.
But the simple idea that AI can be sent to perform tasks on autopilot suggests there is room to grow. This, coupled with investments in computing power and the continued renewal of AI products, suggests that AI is not stagnating. Far from it.
“The smart bet is to bet on continued, exponential growth,” Watson says. “The (tech) moguls are right about the trajectory, but they tend to underestimate the governance and security challenge that must accompany this trajectory. »
Learn more:
