Cherepanov and Strýček were convinced that their discovery, which they named PromptLock, marked a turning point in generative AI, showing how the technology could be leveraged to create highly flexible malware attacks. They published a blog post stating that they had discovered the first example of AI-powered ransomware, which quickly became the subject of widespread global media attention.
But the threat was not as dramatic as it first seemed. The day after the blog post went live, a team of researchers from New York University claimed responsibility, explaining that the malware was not, in fact, a full attack launched in the wild but a research project, designed simply to prove that it was possible to automate every step of a ransomware campaign.
PromptLock may have turned out to be an academic project, but the real bad guys are using the latest AI tools too. Just as software engineers use artificial intelligence to write code and check for bugs, hackers use these tools to reduce the time and effort needed to orchestrate an attack, lowering the barrier for less experienced attackers to try their hand.
That cyberattacks will become more common and more effective over time is not a remote possibility but “a pure reality,” says Lorenzo Cavallaro, professor of computer science at University College London.
Some in Silicon Valley warn that AI is on the verge of being able to carry out fully automated attacks. But most security researchers believe this claim is exaggerated. “For some reason everyone is just focused on this idea of malware, like AI superhackers, which is just absurd,” says Marcus Hutchins, a senior threat researcher at security firm Expel and famous in the security world for stopping a giant global ransomware attack called WannaCry in 2017.
Instead, experts say, we should pay closer attention to the much more immediate risks posed by AI, which is already increasing both the speed and the volume of scams. Criminals are increasingly exploiting the latest deepfake technologies to impersonate people and defraud their victims of vast sums of money. These AI-enhanced cyberattacks will only become more frequent and more destructive, and we must be ready.
Spam and beyond
Attackers began adopting generative AI tools almost immediately after ChatGPT exploded in late 2022. These efforts began, as you might imagine, with the creation of spam, and a lot of it. Last year, a report from Microsoft said that in the year to April 2025, the company had blocked $4 billion in scams and fraudulent transactions, “many of which were likely aided by AI content.”
At least half of all spam is now generated using LLMs, according to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, who analyzed nearly 500,000 malicious messages collected before and after the launch of ChatGPT. They also found that AI is increasingly being deployed in more sophisticated schemes. The researchers examined targeted email attacks, in which a scammer impersonates a trusted person to deceive an employee of an organization into handing over funds or sensitive information. In April 2025, they found, at least 14% of these targeted email attacks were generated using LLMs, up from 7.6% in April 2024.
