URL has been copied successfully!
Copilot and Agentforce fall to form-based prompt injection tricks
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

Copilot and Agentforce fall to form-based prompt injection tricks

PipeLeak: Salesforce Agentforce hijacked by a simple lead: In the Salesforce Agentforce case, attackers embed malicious instructions inside a public-facing lead form. When an internal user later asks the agent to review or process that lead, the agent executes the embedded instructions as if they were part of its task.According to a Capsule demonstration, the agent retrieves CRM data via the “GetLeadsInformation” function and then sends it externally via email.The compromise isn’t limited to a single record. Researchers demonstrated that a hijacked agent could query and exfiltrate multiple lead records in bulk, effectively turning a single form submission into a database extraction pipeline.The researchers said Salesforce acknowledged the prompt injection issue but characterized the exfiltration vector as “configuration-specific,” pointing to optional human-in-the-loop (HITL) controls. Capsule’s pushback on that framing argues that requiring manual approvals undermines the very purpose of autonomous agents.The deeper issue, they noted, is insecure defaults. Systems designed for automation should not allow untrusted inputs to redefine agent goals.Both disclosures converge on a baseline that calls for treating all external inputs as untrusted and having filters in place that separate data from instructions. This would entail enforcing input validation, least-privilege access, and strict controls on actions like outbound email.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4159079/copilot-and-agentforce-fall-to-form-based-prompt-injection-tricks.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link