URL has been copied successfully!
ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

Malicious actors can exploit default configurations in ServiceNow’s Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.The second-order prompt injection, according to AppOmni, makes use of Now Assist’s agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive

First seen on thehackernews.com

Jump to article: thehackernews.com/2025/11/servicenow-ai-agents-can-be-tricked.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link