Real enterprise exposure: Analysts point out that the risk is significant in enterprise environments as organizations rapidly deploy AI copilots connected to sensitive systems. "As internal copilots ingest data from emails, calendars, documents, and collaboration tools, a single compromised account or phishing email can quietly embed malicious instructions," said Chandrasekhar Bilugu, CTO of SureShield. "When employees run routine queries, the model may process this manipulated context and unintentionally disclose sensitive information."

Grover said the threat of prompt injection has moved from theoretical to operational. "In IDC's Asia/Pacific study conducted in August 2025, enterprises in India cited 'LLM prompt injection, model manipulation, or jailbreaking AI assistants' as the second most concerning AI-driven threat, right after 'model poisoning or adversarial inputs during AI training'," she added.
Measures to prioritize: Prabhu said security leaders need to include AI security awareness in their annual information security training for all staff. Endpoints will also need to be hardened with this new attack vector in mind.

Grover said organizations should assume prompt injection attacks will occur and focus on limiting the potential blast radius rather than trying to eliminate the risk altogether. That requires enforcing least privilege for AI systems, tightly scoping tool permissions, restricting default data access, and validating every AI-initiated action against business rules and sensitivity policies. "The goal is not to make the model immune to language, because no model is, but to ensure that even if it is manipulated, it cannot quietly access more data than it should or exfiltrate information through secondary channels," Grover added.

Varkey said security leaders should also rethink how they position AI copilots within their environments, warning against treating them like simple search tools. "Apply Zero Trust principles with strong guardrails: limit data access to least privilege, ensure untrusted content can't become trusted instruction, and require approvals for high-risk actions such as sharing, sending, or writing back into business systems," he added.
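To make the "limit the blast radius" guidance concrete, the sketch below shows one way a tool-call gateway could enforce least-privilege scopes for an AI assistant and route high-risk actions to a human approval step. The agent identities, scope names, and action names are illustrative assumptions, not the API of Gemini or any specific copilot product.

```python
# Minimal sketch of a policy gate for AI-initiated tool calls.
# Assumptions: the copilot platform routes every tool call through this gate,
# and the identities/scopes/actions below are hypothetical examples.
from dataclasses import dataclass

# Least-privilege scopes granted to each assistant identity (deny-by-default).
POLICY = {
    "support-copilot": {"read:tickets", "read:kb"},
    "sales-copilot": {"read:crm"},
}

# Actions that must never execute without explicit human approval.
HIGH_RISK_ACTIONS = {"send_email", "share_file", "write_record"}


@dataclass
class ToolCall:
    agent: str   # which copilot identity is asking
    action: str  # e.g. "read_ticket", "send_email"
    scope: str   # permission the action needs, e.g. "read:tickets"


def evaluate(call: ToolCall) -> str:
    """Return 'allow', 'needs_approval', or 'deny' for an AI-initiated action."""
    allowed_scopes = POLICY.get(call.agent, set())  # unknown agents get nothing
    if call.scope not in allowed_scopes:
        return "deny"
    if call.action in HIGH_RISK_ACTIONS:
        return "needs_approval"  # route to a human reviewer before executing
    return "allow"


if __name__ == "__main__":
    print(evaluate(ToolCall("support-copilot", "read_ticket", "read:tickets")))  # allow
    print(evaluate(ToolCall("support-copilot", "send_email", "send:email")))     # deny
    print(evaluate(ToolCall("sales-copilot", "write_record", "read:crm")))       # needs_approval
```

The design choice matches the advice above: rather than trying to make the model resistant to manipulated context, the gate assumes the model can be tricked and ensures that even a compromised prompt cannot reach data outside its scope or trigger sharing, sending, or write-back actions without approval.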
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4119029/google-gemini-flaw-exposes-new-ai-prompt-injection-risks-for-enterprises.html

