Close Menu
clearpathinsight.org
  • AI Studies
  • AI in Biz
  • AI in Tech
  • AI in Health
  • Supply AI
    • Smart Chain
    • Track AI
    • Chain Risk
  • More
    • AI Logistics
    • AI Updates
    • AI Startups

Traders dump software stocks as AI fears flare

February 4, 2026

Musk is combining his rocket and AI businesses into one company ahead of a planned IPO this year

February 3, 2026

Information Services Group honors 58 suppliers

February 3, 2026
Facebook X (Twitter) Instagram
Facebook X (Twitter) Instagram
clearpathinsight.org
Subscribe
  • AI Studies
  • AI in Biz
  • AI in Tech
  • AI in Health
  • Supply AI
    • Smart Chain
    • Track AI
    • Chain Risk
  • More
    • AI Logistics
    • AI Updates
    • AI Startups
clearpathinsight.org
Home»AI in Technology»“Deepfakes are spreading and more AI companions”: Seven takeaways from the latest AI security report | AI (artificial intelligence)
AI in Technology

“Deepfakes are spreading and more AI companions”: Seven takeaways from the latest AI security report | AI (artificial intelligence)

February 3, 2026006 Mins Read
Share Facebook Twitter Pinterest Copy Link LinkedIn Tumblr Email Telegram WhatsApp
Follow Us
Google News Flipboard
4349.jpg
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

  • 1. AI model capabilities are improving

    A host of new AI models – the technology behind tools like chatbots – have been launched in the last year, including one from OpenAI. GPT-5Claude Opus 4.5 from Anthropic and Google Gemini 3. The report highlights new “reasoning systems” – which solve problems by breaking them down into smaller steps – demonstrating better performance in math, coding and science. Bengio said there had been a “very significant leap” in AI reasoning. Last year, systems developed by Google and OpenAI achieved a gold-level performance at the International Mathematical Olympiad – a first for AI.

    However, the report says AI capabilities remain “patchy,” referring to systems displaying astonishing prowess in some areas but not others. Even though advanced AI systems are impressive at math, science, coding, and image creation, they remain prone to making false statements, or “hallucinations,” and cannot complete lengthy projects autonomously.

    Nonetheless, the report cites a study showing that AI systems are rapidly improving their ability to perform certain software engineering tasks – with their duration doubling every seven months. If this rate of progress continues, AI systems could accomplish tasks lasting several hours by 2027 and several days by 2030. It is in this scenario that AI becomes a real threat to jobs.

    But for now, the report says, “reliable automation of time-consuming or complex tasks remains unachievable.”


  • 2. Deepfakes are improving and proliferating

    The report describes the growth of deepfake pornography as a “particular concern”, citing a study showing that 15% of British adults have seen such images. He adds that since the first security report was published in January 2025, AI-generated content has become “harder to distinguish from real content” and highlights a problem. study last year in which 77% of participants incorrectly identified text generated by ChatGPT as being written by a human.

    The report says there is little evidence of bad actors using AI to manipulate people, or of internet users sharing this content widely – a key goal of any manipulation campaign.


  • 3. AI companies have implemented protective measures against biological and chemical risks

    Anthropic has launched models with enhanced security measures. Photograph: Dado Ruvić/Reuters

    Major AI developers including Anthropic have released models with enhanced security measures after being unable to rule out the possibility that they could help novices create biological weapons. Over the past year, AI “co-scientists” have become increasingly proficient, including providing detailed scientific information and participating in complex laboratory procedures such as molecule and protein design.

    The report adds that some studies suggest that AI can provide much more help in the development of biological weapons than simple Internet browsing, but further work is needed to confirm these results.

    Biological and chemical risks pose a dilemma for policymakers, the report adds, because these same capabilities can also accelerate the discovery of new drugs and the diagnosis of diseases.

    “The free availability of biological AI tools presents a difficult choice: restrict these tools or actively support their development for beneficial purposes,” the report states.


  • 4. AI companions are rapidly gaining popularity

    Bengio says the use of AI companions and the emotional attachment they generate have “spread like wildfire” over the past year. The report states that there is evidence that a subset of users are developing a “pathological” emotional dependence on AI chatbots, with OpenAI stating that approximately 0.15% of its users indicate an increased level of emotional attachment to ChatGPT.

    Concerns about AI use and mental health are growing among healthcare professionals. Last year, OpenAI was sued by the family of Adam Raine, an American teenager who committed suicide after months of conversations with ChatGPT.

    However, the report adds that there is no clear evidence that chatbots cause mental health problems. Instead, the concern is that people with mental health issues might use AI more, which could amplify their symptoms. He points to data showing that 0.07% of ChatGPT users experience signs consistent with acute mental health crises such as psychosis or mania, suggesting that around 490,000 vulnerable people interact with these systems each week.


  • 5. AI is not yet capable of launching fully autonomous cyberattacks

    AI systems can now support cyber attackers at various stages of their operations, from identifying targets to preparing for an attack or developing malware to cripple a victim’s systems. The report acknowledges that fully automated cyberattacks – executing all stages of an attack – could allow criminals to launch attacks on a much larger scale. But this remains difficult because AI systems cannot yet perform long, multi-step tasks.

    AI systems can now take on cyberattackers. Photography: Dmitri Molchanov/Alamy

    Nonetheless, Anthropic reported last year that its coding tool, Claude Code, was used by a Chinese state-sponsored group to attack 30 entities around the world in September, carrying out a “handful of successful intrusions.” According to the statement, 80 to 90 percent of the operations involved in the attack were carried out without human intervention, indicating a high degree of autonomy.


  • 6. AI systems are getting better at undermining surveillance

    Bengio said last year that he was concerned that AI systems would show signs of self-preservation, such as try to disable surveillance systems. One of the main fears of AI safety advocates is that powerful systems could develop the ability to evade guardrails and harm humans.

    The report says that over the past year, models have shown a more advanced ability to undermine surveillance attempts, for example by finding flaws in assessments and recognizing when they are tested. Last year, Anthropic released a security analysis of its latest model, Claude Sonnet 4.5, and revealed that it had become suspicious that it was being tested.

    The report adds that AI agents cannot yet act autonomously long enough to realize these loss-of-control scenarios. But “the time horizons over which agents can act autonomously are growing rapidly.”


  • 7. The impact on employment remains uncertain

    One of the most pressing concerns of politicians and the public about AI is its impact on employment. Automated systems remove white-collar roles in sectors such as banking, law and health?

    The report says the impact on the global labor market remains uncertain. It says AI adoption has been rapid but uneven, with adoption rates of 50% in countries like the UAE and Singapore, but below 10% in many low-income economies. This also varies by sector, with 18% usage in the US information industries (publishing, software, television and film), but 1.4% in construction and agriculture.

    Studies in Denmark and the United States also showed no impact between a job’s exposure to AI and overall employment change, according to the report. However, he also cites a British study showing a slowdown in new hires at companies with high exposure to AI, with technical and creative roles seeing the steepest declines. Junior roles have been hit the hardest.

    The report adds that AI agents could have a greater impact on employment if they improve their capabilities.

    “If AI agents gained the ability to act with greater autonomy across domains in just a few years – reliably handling longer, more complex task sequences in pursuit of higher-level goals – it would likely accelerate labor market disruption,” the report said.

  • Follow on Google News Follow on Flipboard
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link

    Related Posts

    Traders dump software stocks as AI fears flare

    February 4, 2026

    5 fast-growing technology professions in 2026

    February 3, 2026

    Viral AI personal assistant seen as sea change – but experts warn of risks | AI (artificial intelligence)

    February 3, 2026
    Add A Comment
    Leave A Reply Cancel Reply

    Categories
    • AI Applications & Case Studies (60)
    • AI in Business (322)
    • AI in Healthcare (290)
    • AI in Technology (311)
    • AI Logistics (50)
    • AI Research Updates (113)
    • AI Startups & Investments (258)
    • Chain Risk (79)
    • Smart Chain (101)
    • Supply AI (86)
    • Track AI (59)

    Traders dump software stocks as AI fears flare

    February 4, 2026

    Musk is combining his rocket and AI businesses into one company ahead of a planned IPO this year

    February 3, 2026

    Information Services Group honors 58 suppliers

    February 3, 2026

    “Deepfakes are spreading and more AI companions”: Seven takeaways from the latest AI security report | AI (artificial intelligence)

    February 3, 2026

    Subscribe to Updates

    Get the latest news from clearpathinsight.

    Topics
    • AI Applications & Case Studies (60)
    • AI in Business (322)
    • AI in Healthcare (290)
    • AI in Technology (311)
    • AI Logistics (50)
    • AI Research Updates (113)
    • AI Startups & Investments (258)
    • Chain Risk (79)
    • Smart Chain (101)
    • Supply AI (86)
    • Track AI (59)
    Join us

    Subscribe to Updates

    Get the latest news from clearpathinsight.

    We are social
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Reddit
    • Telegram
    • WhatsApp
    Facebook X (Twitter) Instagram Pinterest
    © 2026 Designed by clearpathinsight

    Type above and press Enter to search. Press Esc to cancel.