AI in Technology

What the numbers show about the harms of AI

January 20, 2026 · 5 Mins Read

With the widespread adoption of artificial intelligence around the world over the past year, the harmful potential of the technology has become clearer. Reports of AI-related incidents increased by 50% year-over-year between 2022 and 2024, and in the 10 months to October 2025, incidents had already surpassed the 2024 total, according to the AI Incident Database, a crowdsourced repository of media reports on AI-related incidents. Incidents resulting from the use of the technology, such as deepfake-based scams and chatbot-induced delusions, are steadily increasing, according to the latest data. “AI is already causing real harm,” says Daniel Atherton, editor of the AI Incident Database. “Without tracking failures, we cannot repair them,” he adds.

The AI Incident Database compiles its data by collecting media coverage of AI-related events and consolidating multiple reports about the same event into a single incident entry. Crowdsourced data has its limitations, and the rise in AI incidents partly reflects increased media attention to the technology, Atherton says. Still, he maintains that the news remains, for now, one of the best public sources of information on the harms of AI. Only a subset of real incidents are covered by journalists, and not all of those are submitted to the database, he adds. “All the reporting globally represents only a fraction of the reality experienced by everyone experiencing harm caused by AI,” says Atherton. While the European Union’s AI Act and California’s Transparency in Frontier AI Act (SB 53) require developers to report certain incidents to authorities, only the most serious or safety-critical ones meet the reporting threshold.
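
To make the consolidation step concrete, here is a minimal sketch of how multiple reports might be grouped into single incident entries. The Report and Incident structures and the event-fingerprint field are invented for illustration; they are not the AI Incident Database’s actual schema or code.

```python
# Illustrative only: group media reports that describe the same underlying
# event into one incident entry, keyed by a normalized event fingerprint.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Report:
    url: str
    outlet: str
    event_key: str  # hypothetical fingerprint, e.g. (date, system, location)

@dataclass
class Incident:
    event_key: str
    reports: list[Report] = field(default_factory=list)

def consolidate(reports: list[Report]) -> list[Incident]:
    """Merge reports sharing an event fingerprint into single incidents."""
    incidents: dict[str, Incident] = {}
    for report in reports:
        incident = incidents.setdefault(report.event_key, Incident(report.event_key))
        incident.reports.append(report)
    return list(incidents.values())
```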

Break it down

Artificial intelligence is an umbrella term for several different technologies, from autonomous vehicles to chatbots, and the database lumps them together without a comprehensive structure. “This makes it very, very difficult to distinguish patterns across entire data sets to understand trends,” says Simon Mylius, an affiliated researcher at MIT FutureTech. In January, Mylius and his colleagues published a tool that enhances the AI Incident Database by using a language model to analyze the news reports associated with each incident and classify them by harm type and severity.
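
In spirit, such a tool wraps each incident’s press coverage in a classification prompt. The sketch below shows the general pattern; the taxonomy labels, the prompt wording and the `complete` callable are assumptions for illustration, not the researchers’ actual code or prompt.

```python
# Sketch: classify one incident's news text by harm type and severity
# using any chat-completion function passed in as `complete`.
import json
from typing import Callable

HARM_TYPES = ["misinformation", "discrimination", "malicious use",
              "human-computer interaction", "system failure"]  # assumed labels
SEVERITIES = ["low", "medium", "high"]  # assumed scale

def classify_incident(report_text: str, complete: Callable[[str], str]) -> dict:
    """Return {'harm_type': ..., 'severity': ...} as judged by the model."""
    prompt = (
        "Classify this AI incident report.\n"
        f"Harm types: {HARM_TYPES}\nSeverities: {SEVERITIES}\n"
        'Reply with JSON only: {"harm_type": "...", "severity": "..."}\n\n'
        f"Report: {report_text}"
    )
    return json.loads(complete(prompt))
```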

Although the AI-based approach has not yet been fully validated, the researchers hope the tool can help policymakers sort through large numbers of reports and spot trends. Aware of the “noise” inherent in media reporting, Mylius’s team is working on a framework that borrows disease-surveillance techniques to help interpret the data, he says. The hope is that better tracking and analysis of incidents could help regulators avoid the missteps seen with social media and respond quickly to emerging harms.

Using the AI tool to triage incidents against an established taxonomy of AI risks reveals that the upward trend has not occurred equally across the board. While reports of AI-generated misinformation and discrimination declined in 2025, so-called “human-computer interaction” incidents, which include those involving “ChatGPT psychosis”, rose. Reports of malicious actors using AI, particularly to scam victims or spread disinformation, have grown the most, increasing 8-fold since 2022.
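
The trend figures above amount to counting labeled incidents per year and category, then comparing across years. A toy version of that calculation, using invented placeholder data rather than database figures:

```python
# Toy trend analysis: ratio of 2025 to 2022 incident counts per harm type.
from collections import Counter

def growth_by_type(labeled: list[tuple[int, str]]) -> dict[str, float]:
    """labeled is a list of (year, harm_type) pairs for classified incidents."""
    counts = Counter(labeled)
    return {t: counts[(2025, t)] / max(counts[(2022, t)], 1)
            for t in {harm for _, harm in labeled}}

# Placeholder data only: one 2022 incident vs. three 2025 incidents.
sample = [(2022, "malicious use")] + [(2025, "malicious use")] * 3
print(growth_by_type(sample))  # {'malicious use': 3.0}
```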

Before 2023, autonomous vehicles, facial recognition and content-moderation algorithms were among the most frequently cited systems. Since then, incidents related to deepfake videos have exceeded all three combined. This does not include the deepfakes produced since late December, when an update to xAI’s Grok enabled widespread use of the model to sexualize images of real women and minors. By one estimate, Grok produced 6,700 sexualized images per hour, prompting the governments of Malaysia and Indonesia to block the chatbot. The UK’s media watchdog has launched an investigation, while the British technology secretary said the country plans to bring into force a law that criminalizes the creation of non-consensual sexualized images, including through Grok. In response to the outcry, xAI limited Grok’s image-generation tools to paid subscribers and said that editing images of real people into “revealing clothing” is now blocked.

The increase in deepfake incidents has coincided with rapid improvements in their quality and accessibility. This shift shows that while some AI incidents stem from system limitations, such as an autonomous vehicle failing to detect a cyclist, others result from technical advances. As AI continues to improve, particularly in sensitive areas like coding, new harms could emerge. In November, AI company Anthropic revealed it had intercepted a large-scale cyberattack carried out using its coding assistant, Claude Code. The company said an “inflection point” has been reached at which AI can prove useful in cybersecurity, for better or worse. “I think we’re going to see a lot more cyberattacks that cause global and significant financial losses in the very near future,” Mylius says.

Given their market dominance, it is not surprising that large AI companies are most often identified in incident reports, but more than a third of incidents since 2023 involved an unknown AI developer. “When scams circulate on platforms like Facebook or Instagram, Meta is involved,” says Atherton, “but what is not simultaneously reported are the tools used to create the scam.” In 2024, Reuters reported that Meta had predicted that 10% of its revenue would come from advertisements for scams and banned products. Meta responded that the figure was “approximate and overly inclusive”, produced as part of an assessment aimed at combating fraud and scams, and that the documents “present a selective view that distorts Meta’s approach”.

Efforts to improve accountability already have buy-in from major AI companies. Content Credentials, a watermarking and metadata system designed to certify authenticity and flag AI-generated content, is supported by Google, Microsoft, OpenAI, Meta and ElevenLabs. ElevenLabs also offers a tool that it claims can detect whether an audio sample was generated using its technology. Still, the popular image generator Midjourney has so far not adopted the emerging standard.
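
The core idea behind such provenance standards is to bind a signed record of a file’s origin to the file itself. The toy sketch below conveys that idea with a bare HMAC; real Content Credentials manifests are far richer and rely on X.509 certificate chains, so treat this as an analogy, not the standard.

```python
# Toy provenance record: sign (generator, content hash), verify later.
import hashlib, hmac, json

SECRET = b"demo-signing-key"  # stand-in for a real signing identity

def attach_provenance(media: bytes, generator: str) -> dict:
    record = {"generator": generator,
              "sha256": hashlib.sha256(media).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(media: bytes, record: dict) -> bool:
    body = {k: v for k, v in record.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["sig"])
            and record["sha256"] == hashlib.sha256(media).hexdigest())
```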

While it’s crucial to stay alert to new risks, it’s important not to let current harms become “part of the background noise,” says Atherton. Mylius agrees, noting that while some harms arrive as sudden attacks, others are more gradual. “The societal issues, the privacy issues, the erosion of rights, disinformation and misinformation [are] less obvious when an individual incident occurs, but they add up to quite significant overall damage,” Mylius says.
