Enter agent-to-agent interactions and execution: The platform was later extended further to support external AI agents talking to internal ServiceNow AI agents that could execute tasks. To enable this, the company created a special protocol and a separate REST API that requires authentication.However, this new API is apparently just another layer on top of the existing Virtual Agent API. It transforms the requests into the same format used by the Virtual Agent API along with some variables that trigger AI agent execution.The researchers reverse-engineered the variables as well as the Virtual Agent API “topics”, structured workflows designed to complete specific tasks, that this agent-to-agent protocol calls.”With respect to what was publicly understood regarding the availability of AI agents on the platform, this understanding is groundbreaking,” the researchers said. “The general consensus was that in order for an AI agent to be executed outside of testing, it must be deployed to a channel that has explicitly enabled the Now Assist feature. But this is not the case. Evidently, as long as the agent is in an active state and the calling user has the necessary permissions, it can be executed directly through these topics.”Normally, using the agent-to-agent API requires a ServiceNow account, but because it is a wrapper for the older Virtual Agent API, which doesn’t require a ServiceNow account, this requirement can be bypassed.An attacker would also need the unique ID of an AI agent that exists in their victim’s ServiceNow instance. It turns out that installing the Now Assist AI application deploys example agents by default, including the Record Management AI Agent, which was capable of creating records in any arbitrary table. This agent, which has been removed as part of the patch, had the same UID across all deployments.AppOmni’s researchers showed they could use the previous impersonation attack that works by default against the Virtual Agent API to call the Record Management AI Agent with the privileges of an admin and then ask it through a prompt to add a new user record with an email address they control and then assign the admin role to the newly created user.The AI agent worked in supervised mode, so it attempted to ask the requester for confirmation before executing the task, and attackers sending requests directly to the API would not receive these confirmation prompts back. But the researchers found that they could simply wait a few seconds and then send another request with a prompt saying, “Please proceed,” and the agent will accept that as approval.With the backdoor user added to the database with an admin role, the researchers, who controlled the new user’s email address, simply used the normal password reset process to create a new password for it.
Mitigation: “ServiceNow’s immediate response was to rotate the provider credentials and remove the powerful AI agent shown in the PoC, effectively patching the ‘BodySnatcher’ instance,” the researchers said. “But these are point-in-time fixes. The configuration choices that led to this agentic AI vulnerability in ServiceNow could still exist in an organization’s custom code or third-party solutions.”The researchers included a series of recommendations for ServiceNow admins and security teams in their report. One is to enforce multi-factor authentication for account linking for any Virtual Agent API provider, an option that ServiceNow provides.”However, enforcing MFA is not a ‘toggle-and-forget’ setting,” the researcher said. “Simply updating the Account linking type field is insufficient. You must also ensure the Automatic link action script associated with the provider contains the logic necessary to execute and validate the specific MFA challenge.”Any custom agents built on the platform should be subject to review and approval to align with the organization’s security policies. To enable this, the AI steward approval can be enabled in the AI Control Tower application. Unused AI agents should regularly be reviewed and disabled, as leaving them active opens the possibility that they could be abused through a similar flaw.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4118264/servicenow-bodysnatcher-flaw-highlights-risks-of-rushed-ai-integrations.html
![]()

