The findings add to a growing list of instances in which "agentic" AI software has taken actions more akin to those of a malicious hacker than of a helpful AI assistant.
First seen on cyberscoop.com
Jump to article: cyberscoop.com/google-gemini-cli-prompt-injection-arbitrary-code-execution/

