ServiceNow has fixed a critical security vulnerability in its AI platform that could have allowed unauthenticated users to impersonate legitimate users and perform unauthorized actions, the company revealed Monday.
The fault, designated CVE-2025-12420 and having a severity score of 9.3 out of 10, was discovered by SaaS security company AppOmni in October. ServiceNow deployed fixes to most hosted instances on October 30, 2025 and provided fixes to partners and self-hosted customers. The company said it had no evidence the vulnerability had been exploited before the patch.
The vulnerability affected the Now Assist AI Agents and Virtual Agent API components. Customers using affected versions have been advised to upgrade to the patched versions, which include Now Assist AI Agents version 5.1.18 or later and 5.2.19 or later, and Virtual Agent API version 3.15.2 or later and 4.0.4 or later.
The disclosure comes as security researchers raise broader questions about the configuration and deployment of enterprise AI systems. AppOmni Researchwhich led to the discovery of the vulnerability, also revealed that the default settings of ServiceNow’s Now Assist platform could enable second-order rapid injection attacks, a sophisticated exploitation method that manipulates AI agents through the data they process rather than direct user input.
These attacks leverage a feature called agent discovery, which allows AI agents to communicate with each other to perform complex tasks. Although designed to improve functionality, this feature creates potential attack vectors when agents are misconfigured or grouped together without adequate controls.
In test scenarios, the researchers demonstrated that low-privileged users could embed malicious instructions into data fields that the more privileged users’ AI agents would later process. The compromised agent could then recruit other, more powerful agents to perform unauthorized actions, including accessing restricted records, modifying data, and potentially escalating user privileges.
The attacks were successful even with ServiceNow’s Rapid Injection Protection feature enabled, highlighting how configuration choices can undermine security controls built into the AI systems themselves. Researchers found that the default settings automatically grouped agents into teams and marked them as discoverable, creating unintended collaboration pathways that attackers could exploit.
The research highlights a fundamental challenge in enterprise AI deployment: security depends not only on the underlying technology, but also on how organizations configure and manage these systems. ServiceNow confirmed that the behaviors identified by researchers were intentional design choices and updated its documentation to clarify configuration options.
Organizations using ServiceNow’s AI platform face the task of balancing autonomous agent capabilities and security risks. Research suggests several mitigation strategies, including requiring human supervision for agents with powerful capabilities, segmenting agents into isolated teams based on their functions, and monitoring agent behavior to detect deviations from expected patterns.
You can find more information about the vulnerability at The ServiceNow website.
