AI in Business

AI still doesn’t work very well in business, we’ll soon realize that • The Register

March 17, 2026 · 8 mins read

Interview: Companies are still struggling to understand how AI fits into their business, and perhaps that’s a good thing, because it will take time to understand the problems caused by AI-generated code and content.

“No one currently knows what the right reference architectures or use cases are for their institution,” Dorian Smiley, co-founder and CTO of AI consulting firm Codestrap, said in an interview with The Register. “A lot of people claim they know. But there’s no guide to rely on.”

Smiley and his co-founder, CEO Connor Deeks, both worked at global consultancy PwC before setting up their own shop to help organizations adopt an AI strategy.

They argue that companies pursuing AI have gotten ahead of themselves.

“From a large language model perspective, people don’t really address the fallibility of the underlying technology,” Deeks said.

Deeks argues that if you built an AI system from the ground up, it would be radically different from what is proposed today. As for all the talk about the disappearance of software engineering and office work, he said, “we don’t subscribe to any of that.”

He also claims that businesses don’t want to believe it either. “For the most part, they don’t want to believe that everyone is going to be laid off and there’s going to be no one under them, especially in technology or information organizations within these institutions,” he said.

Missing Metrics

The first step for organizations considering AI, Smiley said, is to experiment and iterate in a feedback loop, because AI still doesn’t work very well.

“Even at the coding level, it doesn’t work well,” Smiley said. “I’ll give you an example. Code can look good and pass unit tests and still be wrong. The way you measure it is usually in benchmark testing. So a lot of these companies haven’t engaged in a proper feedback loop to see what the impact of AI coding is on the outcomes they care about. Lines of code, number of pull requests, those are liabilities. Those are not measures of engineering excellence.”

Engineering excellence metrics, Smiley said, include deployment frequency, time to production, change failure rate, mean time to restore, and incident severity. And we need a new set of metrics, he insists, to measure AI’s impact on engineering performance.
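The outcome metrics Smiley names line up with the widely used DORA measures. As a sketch only, here is how they might be computed from deploy and incident records; the record shapes and field names below are illustrative assumptions, not anything described in the interview:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deploy:
    finished: datetime   # when the change reached production
    committed: datetime  # when the change was first committed
    failed: bool         # did it cause a production incident?

@dataclass
class Incident:
    opened: datetime
    resolved: datetime

def engineering_metrics(deploys: list[Deploy], incidents: list[Incident], days: int) -> dict:
    """Compute the outcome metrics Smiley cites over a window of `days`."""
    freq = len(deploys) / days  # deployment frequency
    lead = sum((d.finished - d.committed for d in deploys), timedelta()) / len(deploys)
    cfr = sum(d.failed for d in deploys) / len(deploys)  # change failure rate
    mttr = sum((i.resolved - i.opened for i in incidents), timedelta()) / len(incidents)
    return {
        "deploys_per_day": freq,
        "lead_time": lead,                 # time to production
        "change_failure_rate": cfr,
        "mean_time_to_restore": mttr,
    }
```

Note that none of these counts lines of code or pull requests; they all measure what happens after a change ships, which is the point Smiley is making.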

“We don’t know what it is yet,” he said.

That’s 3.7 times more lines of code and 2,000 times worse performance

One metric that could be helpful, he said, is measuring tokens burned to achieve an approved pull request – a formally accepted software change. This is the kind of thing that needs to be evaluated to determine whether AI helps an organization’s engineering practices.
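A minimal sketch of the tokens-per-approved-PR measurement Smiley proposes, assuming hypothetical per-change records of token usage and review outcome (nothing here reflects Codestrap’s actual tooling):

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    tokens_used: int  # LLM tokens consumed while producing this change
    approved: bool    # did reviewers formally accept it?

def tokens_per_approved_pr(prs: list[PullRequest]) -> float:
    """Total tokens burned divided by the number of changes that actually landed.

    Counting tokens spent on rejected PRs against the approved ones captures
    waste: cheap-looking output that never ships still costs something.
    """
    approved = sum(p.approved for p in prs)
    if approved == 0:
        return float("inf")  # everything burned, nothing shipped
    return sum(p.tokens_used for p in prs) / approved
```

Charging rejected-PR tokens against the approved ones is one defensible accounting choice; an organization could equally track acceptance rate and token spend as separate series.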

To highlight the consequences of not having this type of data, Smiley highlighted a recent attempt to rewrite SQLite in Rust using AI.

“It passed all the unit tests, and the code looks correct on its face,” he said. “But that’s 3.7 times more lines of code that perform 2,000 times worse than current SQLite. Two thousand times worse is untenable for a database. It’s a dumpster fire. Throw it away. All that money you spent is worthless.”

According to Smiley, all the optimism about using AI for coding comes from measuring the wrong things.

“Coding works if you measure lines of code and pull requests,” he said. “Coding doesn’t work if you measure team quality and performance. There’s no evidence that it’s moving in a positive direction.”

No free lunches

Deeks highlighted recent outages at Amazon and AWS – incidents Amazon insisted it had nothing to do with AI – as indicators of what is to come.

“The other way to look at it is there’s no free lunch here,” Smiley said. “We know the limitations of the models. It’s hard to teach them new facts. It’s hard to reliably retrieve facts. A pass through a neural network is not deterministic, especially when you have reasoning models that engage in an internal monologue to improve next-token prediction, which means you’ll get a different answer each time, right? That monologue will be different.

“And they have no inductive reasoning skills. A model can’t check its own work. It doesn’t know if the answer it gave you is correct. These are fundamental problems that no one has solved in LLM technology. And you mean to tell me that this won’t manifest itself in code quality problems? Of course it will manifest itself.”

New measurements are essential, Smiley says, because we already have millions of lines of AI-generated code that humans will never be able to review.

In the context of business applications, Deeks pointed to the partial refund consulting firm Deloitte had to give the Australian government over a report containing AI-generated errors.

“We know that large consulting firms are now adopting it on a large scale to write their PowerPoint presentations,” Deeks said. “It’s going to result in huge lawsuits and lost money because the quality isn’t actually tracked. Everyone bought into this fairy tale that it was already just perfect.”

Smiley expects that applying AI to office work will run into the same problems AI faces when applied to coding. But the errors will be harder to spot, because there are no benchmark tests for bad business advice.

“The other challenge here is that the incentives are not aligned,” Smiley said. At a Big Four firm like PwC, he said, partners want more revenue and higher margins.

“You give them AI, what are they going to do?” he asked. “More work, less human work. So you get more revenue, higher margin. That incentive says nothing about all the humans on the team using the AI while still reviewing every output of the AI. Those incentives don’t match. The incentive for the manager is to stop talking to the associates, because the associates don’t know anything. The manager will use the AI to do the associates’ work. For the associate, the incentive is to do the work faster and go to the beach. None of these incentives are aligned in a way that makes AI complementary to the business and produces results.”

Businesses will ask for discounts when they know a service company is using AI

Smiley predicts “code quality issues that will appear in eight to nine months for people who are heavy users of AI.”

Deeks predicts an increasing number of lawsuits, because that’s what happens when bad advice causes problems.

“People are going to continue to feel pressure like, ‘I have to adopt this stuff, I have to make AI decisions.’ They’re going to put these things into production, whether it’s in a business workflow or in an engineering group. And this accelerated collapse will then cost many people their jobs.”

Another likely outcome, Smiley said, is pricing pressure: Companies will ask for discounts when they know a services company is using AI tools.

Deeks said extreme pricing pressures are starting to surface. “Even KPMG pressured another accounting firm to lower their prices because they claimed to use AI,” he said. “Clients are now saying things like, ‘Oh, you produce your PowerPoint presentations with AI. Well, I want to pay you less.'”

Another looming problem is that large insurers are wary of underwriting policies that cover companies against AI-related risks.

“Insurers are seriously trying to remove coverage from policies where AI is applied and there is no clear chain of custody,” Smiley said. “So now let’s say you’re a Big Four firm and you get sued, pricing pressures apply, the market outpaces your ability to adapt, and now your underwriters say, ‘oh, by the way, we’re not going to cover you.’”

Deeks said, “A friend of ours is an executive vice president at one of the largest insurers in the country and he told us bluntly that this is a very real problem and he doesn’t know why people aren’t talking about it more.”

Insurers, he said, are already lobbying state-level insurance regulators for an exclusion in commercial liability insurance policies so they are not required to cover AI-related workflows. “It kills the whole system,” Deeks said.

Smiley added: “The question here is, if everything is so great, why are insurers bothering to exclude coverage for these things? They are generally pretty good at risk profiling.”

Deeks said that instead of seeing these problems as a sign of impending collapse, he hopes people in the industry will find the motivation to talk seriously about the problems that need to be overcome.

“Can we actually have a conversation about this?” he asked. “Or is everyone going to keep talking about AGI (artificial general intelligence) and how it’s going to take over everything in a utopian future?”

We need to be clearer, says Deeks, about what AI means for finance, for underwriting, as well as for real business and the practical operation of enterprise systems. ®
