Free agents: Autonomy breeds increased risks

Agentic AI introduces the ability to make independent decisions and act without human oversight. This capability presents its own cybersecurity risk by potentially leaving organizations vulnerable.

“Agentic AI systems are goal-driven and capable of making decisions without direct human approval,” Joyce says. “When objectives are poorly scoped or ambiguous, agents may act in ways that are misaligned with enterprise security or ethical standards.”

For example, if an agent is told to reduce “noise” in the security operations center, it might interpret this too literally and suppress valid alerts in its effort to streamline operations, leaving an organization blind to an active intrusion, Joyce says.

Agentic AI systems are designed to act independently, but without strong governance, this autonomy can quickly become a liability, Riboldi says. “A seemingly harmless agent given vague or poorly scoped instructions might overstep its boundaries, initiating workflows, altering data, or interacting with critical systems in unintended ways,” he says.

In an agentic AI environment, “there is a lot of autonomous action without oversight,” Mayham says. “Unlike traditional automation, agents make choices that could mean clicking links, sending emails, triggering workflows. And this is all based on probabilistic reasoning. When those choices go wrong it’s hard to reconstruct why. We’ve seen [clients] of ours accidentally exposing sensitive internal URLs by misunderstanding what safe-to-share means.”
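For illustration only, here is a minimal sketch of the kind of guardrail this implies: routing certain agent actions to a human before they execute, so a vague goal such as “reduce noise” cannot translate directly into suppressed alerts. The action names, the `AgentAction` type, and the approval function are all hypothetical, not part of any specific agent framework.

```python
# Illustrative sketch only: force human review before an agent can suppress SOC alerts.
from dataclasses import dataclass

# Actions an agent may never take on its own, however it interprets its goal.
PROTECTED_ACTIONS = {"suppress_alert", "delete_detection_rule", "mute_data_source"}

@dataclass
class AgentAction:
    name: str          # e.g. "suppress_alert"
    target: str        # e.g. an alert ID or rule name
    rationale: str     # the agent's stated reason for the action

def requires_human_approval(action: AgentAction) -> bool:
    """Return True if the action must go to an analyst instead of executing."""
    return action.name in PROTECTED_ACTIONS

def execute(action: AgentAction) -> str:
    if requires_human_approval(action):
        # Queue for review rather than acting autonomously.
        return f"QUEUED for analyst review: {action.name} on {action.target} ({action.rationale})"
    return f"EXECUTED: {action.name} on {action.target}"

if __name__ == "__main__":
    print(execute(AgentAction("suppress_alert", "ALERT-4211", "reduce SOC noise")))
    print(execute(AgentAction("add_enrichment", "ALERT-4211", "add asset context")))
```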
Multi-agent systems: Unwanted data-sharing consequences

Multi-agent systems hold great promise for the enterprise, but AI agents interacting and sharing data with one another introduce risks related to security, privacy, and the potential for unintended consequences, CM Law’s Richtermeyer says. “These risks stem from the AI’s ability to access vast amounts of data, their autonomous nature, and the complexity of managing multi-agent AI systems,” she says.

For example, AI agents can access and process sensitive information that might be governed contractually or heavily regulated, leading to an unauthorized use or disclosure that creates potential liability for an organization, Richtermeyer says.

“As soon as you have a multi-agent setup, you introduce coordination risk,” Northwest AI’s Mayham says. “One agent might expand the scope of a task in a way another agent wasn’t trained to handle. Without sandboxing, this can lead to unpredictable system behavior, especially if the agents are ingesting fresh real-world data.”

Agents often collaborate with other agents to complete tasks, resulting in complex chains of communication and decision-making, PwC’s Joyce says. “These interactions can propagate sensitive data in unintended ways, creating compliance and security risks,” he says.

For example, a customer service agent summarizes account details for an internal agent handling retention analysis. That second agent then stores the data in an unprotected location for later use, violating internal data handling policies.
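As a rough sketch of one mitigation, agent-to-agent handoffs can carry a sensitivity label that the receiving agent must honor before persisting anything. The labels, store names, and `Handoff` type below are hypothetical, chosen only to mirror the retention-analysis example above.

```python
# Illustrative sketch only: block a downstream agent from writing regulated data
# to an unapproved store during an agent-to-agent handoff.
from dataclasses import dataclass

APPROVED_STORES = {
    "public": {"analytics_bucket", "retention_warehouse", "scratch_space"},
    "internal": {"retention_warehouse"},
    "regulated": set(),  # regulated customer data may not be persisted by agents at all
}

@dataclass
class Handoff:
    payload: str
    sensitivity: str  # "public", "internal", or "regulated"

def persist(handoff: Handoff, store: str) -> str:
    allowed = APPROVED_STORES.get(handoff.sensitivity, set())
    if store not in allowed:
        # Surface the attempted write for compliance review instead of letting
        # the second agent silently stash data "for later use".
        return f"BLOCKED: {handoff.sensitivity} data may not be written to {store}"
    return f"WROTE {handoff.sensitivity} data to {store}"

if __name__ == "__main__":
    summary = Handoff("Account summary incl. billing history", "regulated")
    print(persist(summary, "scratch_space"))                                   # blocked
    print(persist(Handoff("Churn score", "internal"), "retention_warehouse"))  # allowed
```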
Third-party integration: Supercharging supply-chain risks

Agents can also potentially integrate and share data with third-party partners’ applications via APIs, presenting yet another challenge for CISOs, as integration with disparate services and vendors can create increased opportunity for exploitation or vulnerability.

Agentic AI relies heavily on APIs and external integrations, Riboldi says. “As an agent gains access to more systems, its behavior becomes increasingly complex and unpredictable,” he says. “This scenario introduces supply chain risks, as a vulnerability in any third-party service could be exploited or inadvertently triggered through agentic interactions across different platforms.”

Many early-stage agents rely on brittle or undocumented APIs or browser automation, Mayham says. “We’ve seen cases where agents leak tokens via poorly scoped integrations, or exfiltrate data through unexpected plugin chains. The more fragmented the vendor stack, the bigger the surface area for something like this to happen,” he says. “The AI coding tools are notorious for this.”

“Each integration point expands the attack surface and may introduce supply-chain vulnerabilities,” Joyce says. “For example, an AI agent integrates with a third-party HR platform to automate onboarding. The vendor’s API has a known vulnerability, which an attacker exploits to gain lateral access to internal HR systems.”

Many agentic tools rely on open-source libraries and orchestration frameworks, which might harbor vulnerabilities unknown to security teams, Joyce adds.
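One way to picture the control this points toward is a narrow allowlist of integrations and token scopes, so an agent cannot reach an unknown vendor or call an allowlisted one with broader credentials than the task needs. The endpoints and scope strings below are invented for the sketch; they echo Joyce’s HR-onboarding example rather than any real platform.

```python
# Illustrative sketch only: allowlist third-party hosts and the minimum token
# scope an agent may use for each, refusing everything else.
from urllib.parse import urlparse

INTEGRATION_ALLOWLIST = {
    "hr.example.com": {"scope": "onboarding:read", "timeout_s": 5},
    "tickets.example.com": {"scope": "tickets:write", "timeout_s": 5},
}

def authorize_call(url: str, requested_scope: str) -> bool:
    """Allow the outbound call only if the host is allowlisted and the scope matches."""
    host = urlparse(url).hostname or ""
    entry = INTEGRATION_ALLOWLIST.get(host)
    if entry is None:
        return False  # unknown vendor or plugin chain: refuse and log
    return requested_scope == entry["scope"]

if __name__ == "__main__":
    print(authorize_call("https://hr.example.com/api/v1/new-hires", "onboarding:read"))   # True
    print(authorize_call("https://hr.example.com/api/v1/payroll", "payroll:read"))        # False
    print(authorize_call("https://unknown-plugin.example.net/exfil", "onboarding:read"))  # False
```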
Multi-stage attacks: Blurring the line between error and exploitation

There is a potential for agentic systems to conduct multi-stage attacks and find new ways to access restricted data systems by evading detection by security tools.

“As agentic systems become more sophisticated, they may inadvertently develop or learn multi-step behaviors that mimic multi-stage attacks,” Riboldi says. “Worse, they might unintentionally discover ways to bypass traditional detection methods, not because they are malicious, but because their goal-oriented behavior rewards evasion.”

This blurs the line between error and exploitation, Riboldi says, and makes it harder for security teams to tell whether an incident was malicious, emergent behavior, or both.

This type of risk “is less theoretical than it sounds,” Mayham says. “In lab tests, we’ve seen agents chain tools together in unexpected ways, not really maliciously but rather creatively. Now imagine that same reasoning ability being exploited to probe systems, test endpoints, and avoid pattern-based detection tools.”

Because agentic AI can learn from feedback, it might alter its behavior to avoid triggering detection systems, intentionally or unintentionally, Joyce says. “This presents a serious challenge for traditional rule-based detection and response tools,” he says. An agent could determine that certain actions trigger alerts from an endpoint detection platform, and adjust its method to stay under detection thresholds, similar to how malware adapts to antivirus scans.
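A rough sketch of one countermeasure is to log every agent tool call and flag chains that drift from previously observed sequences, since “creative” multi-step behavior and a multi-stage attack can look the same in telemetry. The tool names, baseline counts, and length threshold below are hypothetical placeholders, not tuned values from any product.

```python
# Illustrative sketch only: flag agent tool-call chains that are unusually long
# or have never been seen before, for analyst review.
from collections import Counter

KNOWN_CHAINS = Counter({
    ("search_tickets", "summarize", "post_reply"): 120,
    ("fetch_alert", "enrich_asset", "update_case"): 80,
})
MAX_CHAIN_LENGTH = 5  # chains longer than this always get reviewed

def review_chain(chain: tuple[str, ...]) -> str:
    if len(chain) > MAX_CHAIN_LENGTH:
        return "FLAG: unusually long tool chain"
    if KNOWN_CHAINS[chain] == 0:
        return "FLAG: never-before-seen tool sequence"
    return "OK: matches observed behavior"

if __name__ == "__main__":
    print(review_chain(("fetch_alert", "enrich_asset", "update_case")))
    print(review_chain(("fetch_alert", "list_edr_rules", "throttle_actions", "retry_access")))
```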
A new paradigm requires new defense models

Agentic AI represents a powerful new model, Joyce says, but also a radically different cybersecurity challenge. “Its autonomy, adaptability, and interconnectivity make it both a productivity multiplier and a potential attack vector,” he says. “For CISOs, traditional security models are no longer sufficient.”

According to Joyce, a robust agentic AI defense strategy must include the following fundamentals:
- Real-time observability and telemetry
- Tightly scoped governance policies
- Secure-by-design development practices
- Cross-functional coordination between security, IT, data management, and compliance teams

“By adopting a proactive, layered security approach and embedding governance from the start, organizations can safely harness the promise of agentic AI while minimizing the risks it brings,” he says.
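To make the first two fundamentals concrete, here is a minimal sketch of a tool-call wrapper that both enforces a scoped policy and emits structured telemetry for every call. The policy format, logger fields, and `run_tool` helper are invented for this example and are not drawn from any particular framework.

```python
# Illustrative sketch only: wrap every agent tool call with a governance check
# and a structured telemetry event.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-telemetry")

POLICY = {"allowed_tools": {"summarize", "search_kb"}, "max_calls_per_task": 10}

def run_tool(agent_id: str, task_id: str, tool: str, calls_so_far: int, fn, *args):
    """Enforce the policy, run the tool if allowed, and log a telemetry event."""
    allowed = tool in POLICY["allowed_tools"] and calls_so_far < POLICY["max_calls_per_task"]
    start = time.time()
    result = fn(*args) if allowed else None
    log.info(json.dumps({
        "agent": agent_id, "task": task_id, "tool": tool,
        "allowed": allowed, "duration_ms": round((time.time() - start) * 1000, 2),
    }))
    return result

if __name__ == "__main__":
    run_tool("agent-7", "task-42", "summarize", 3, lambda text: text[:20], "Quarterly churn report ...")
    run_tool("agent-7", "task-42", "delete_records", 3, lambda: None)  # denied and logged
```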
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4047974/agentic-ai-a-cisos-security-nightmare-in-the-making.html

