The model creator won’t fix the flaw: The issue is apparently inherited from Anthropic’s Claude, which powers Amazon Q, and Anthropic will reportedly not fix it. “Anthropic models are known to interpret invisible Unicode Tag characters as instructions,” the author said. “This is not something that Anthropic intends to fix, to my knowledge, see this post regarding their response.”

Anthropic had reportedly declined to fix the prompt injection vector, saying, “After reviewing your report, we were unable to identify any security impact. As such, this has been marked as Not Applicable.” Anthropic did not immediately respond to CSO’s request for comment.

The author, who blogs under the alias “WunderWuzzi,” noted that developers building atop Claude, Amazon Q included, must block these attacks on their own. Most models still parse invisible Unicode characters as prompt injection; OpenAI is the exception, having addressed the issue directly at the model/API layer.

By August 8, 2025, AWS had resolved the vulnerability, the author said in the blog. However, “no public advisory or CVE will be issued,” so users should ensure they are running the latest version of Amazon Q Developer.

AWS also did not immediately respond to CSO’s request for comment.

The Amazon Q Developer VS Code extension, downloaded over a million times, is drawing significant adversarial attention. Just last month, an attacker inserted destructive code into the tool, which was then propagated through an official update.
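For developers who must block the vector themselves, filtering at the application layer is feasible: the Unicode Tags block (U+E0000–U+E007F) has no legitimate use in ordinary user input, so it can be detected or stripped before text ever reaches the model. Below is a minimal TypeScript sketch of that idea; the function names are illustrative and are not taken from the Amazon Q codebase or any official API.

```typescript
// Unicode Tags block (U+E0000–U+E007F): renders invisibly in most UIs,
// but some models decode it as text. The `u` flag makes the character
// class match whole code points rather than UTF-16 surrogate halves.
const TAG_CHARS = /[\u{E0000}-\u{E007F}]/u;

// Detect hidden tag characters so the request can be rejected or logged.
// (Illustrative helper, not part of any official API.)
function containsTagCharacters(input: string): boolean {
  return TAG_CHARS.test(input);
}

// Strip tag characters as a sanitization fallback.
function stripTagCharacters(input: string): string {
  return input.replace(/[\u{E0000}-\u{E007F}]/gu, "");
}

// Example: "hi" encoded as tag characters, appended to a benign prompt.
const hiddenPayload = String.fromCodePoint(0xe0068, 0xe0069); // invisible "hi"
const userText = "Please summarize this file." + hiddenPayload;

console.log(containsTagCharacters(userText)); // true
console.log(stripTagCharacters(userText));    // "Please summarize this file."
```

Detecting and refusing is arguably preferable to silently stripping, since the presence of tag characters in pasted content is itself a strong signal of attempted injection worth surfacing to the user or logging.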
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4043693/hackers-can-slip-ghost-commands-into-the-amazon-q-developer-vs-code-extension.html

