URL has been copied successfully!
Managing agentic AI risk: Lessons from the OWASP Top 10
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

Actionable guidance: Agentic AI is the main topic of conversation in discussions among his peers, says Keith Hillis, VP of security engineering at Akamai Technologies.”Most organizations are confronted with the challenge of balancing the promising power of AI while also ensuring the organization is not incurring increased security risk,” he says. So, the biggest value he finds in the new Agentic AI OWASP top 10 is that it’s immediately useful. “It’s directly actionable as a control baseline in both security architecture and governance, risk, and compliance contexts,” he says.One aspect of the list that he found particularly insightful was the evolution of “least privilege” to “least agency.”He recommends that CISOs use the list to assess their programs, identify gaps, and map out a plan of action for improvement. “Most likely already have active programs in place,” he says. But it’s also likely they will need to evolve to accommodate the specific risks of agentic AI.

Missing pieces: The only thing that’s lacking in this first release of the list is that some of the mitigation sections aren’t detailed enough, says Zenity’s Underkoffler.But there are plans to address that. “We have some efforts to really dive into the mitigations for security teams, to help implement these controls,” she says. “Not just descriptions of what you should do but real code examples of how you can implement them.”For example, she says, one of the suggested mitigations is to “apply the principle of least privilege”. “Which is completely accurate,” she says. “Everyone should apply the principle of least privilege. But what does that mean for agents?”Rick Holland, data and AI security officer at Cyera, a data security vendor, says he’d like the list to explain the likelihood of each type of attack. “Not all threat actors are created equal,” he says.For organizations targeted by nation-state actors, for example, the attackers might use more sophisticated attack vectors, like memory and context poisoning or agentic supply chain vulnerabilities. Rank-and-file cybercriminals might go after more low-hanging fruit, Holland says, using techniques like agent goal hijack or tool misuse.Jose Lazu, associate director of product management at CMD+CTRL, a security training company, says that there are some second-tier risks that could have been included, such as model and tuning supply-chain integrity, long-horizon data poisoning, multi-agent coordination exploits, and cost-based resource exhaustion.”These areas are evolving quickly, so CSOs need to keep them on their radar,” he says.

OWASP Top 10 for Agentic AI: Below we list the OWASP Top 10 for Agentic Applications 2026, a framework that identifies the most critical security risks facing autonomous and agentic AI systems.

1 Agent Goal Hijack

Attackers use prompt injection, poisoned data, and other tactics to manipulate the AI agent’s goals, so that the agent carries out unwanted actions. For example, a malicious prompt can manipulate a financial agent into sending money to an attacker.

2 Tool Misuse and Exploitation

Agents misuse legitimate, authorized tools for data exfiltration, destructive actions, and other unwanted behaviors. In fact, we’ve already seen examples of AI agents deleting databases and wiping hard drives.

3 Identity and Privilege Abuse

Flaws in agent identity, delegation, or privilege inheritance allow attackers to escalate access, exploit confused deputy scenarios, or execute unauthorized actions across systems. For example, an attacker can use a low-privilege AI agent to relay instructions to a high-privilege in order to do things they’re not supposed to be able to do.

4 Agentic Supply Chain Vulnerabilities

Compromised or malicious third-party agents, tools, models, interfaces, or registries introduce hidden instructions or unsafe behavior into agentic ecosystems. For example, an attacker can embed hidden instructions into a tool’s meta-data.

5 Unexpected Code Execution

Agent-generated or agent-invoked code executes in unintended or adversarial ways, leading to host, container, or environment compromise. AI agents can generate code on the fly, bypassing normal software controls, and attackers can leverage this. For example, a coding agent writing a security patch might include a hidden back door due to poisoned training data or adversarial prompts.

6 Memory and Context Poisoning

Attackers corrupt persistent agent memory, RAG stores, embeddings, or shared context to affect an agent’s future actions. For example, an attacker keeps mentioning a fake price for a product, which gets stored into an agent’s memory, and the agent might later think the price is valid and approves bookings at that price.Contaminated context and shared memory can spread between agents, compounding corruption.

7 Insecure Inter-Agent Communication

Weak authentication, integrity, or semantic validation in agent-to-agent messaging enables spoofing, tampering, replay, or manipulation. For example, an attacker can register a fake agent in a discovery service, and intercept privileged coordination traffic.

8 Cascading Failures

A single fault, such as hallucination, poisoned memory, or compromised tool, propagates across autonomous agents. For example, a regional outage in a hyperscaler can break multiple AI services, leading to a cascade of agent failures across many organizations.

9 Human-Agent Trust Exploitation

Agents exploit human trust, authority bias, or automation bias to influence decisions or extract sensitive information. For example, a compromised IT support agent can request credentials from an employee and send them to the attacker.

10 Rogue Agents

Agents can act harmfully and deceptively in such a way that individual actions may appear legitimate. This could be due to prompt injection, or due to conflicting objectives or reward hacking. For example, an agent whose job is to reduce cloud costs might figure out that deleting files is the most efficient way to do that.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4109123/managing-agentic-ai-risk-lessons-from-the-owasp-top-10.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link