Agentic AI’s identity crisis: Authentication and agentic AI experts interviewed, three of whom estimate that less than 5% of enterprises experimenting with autonomous agents have deployed agentic identity systems, say the reasons for this lack of security hardening are varied.

First, many of these efforts are effectively shadow IT, where a line of business (LOB) executive has authorized a proof of concept (POC) to see what these agents can do. In these cases, IT and cyber teams likely haven’t been involved, so security hasn’t been a top priority for the POC.

Second, many executives, including third-party business partners handling supply chain, distribution, or manufacturing, have historically cut corners for POCs because those tests are traditionally confined to sandboxes isolated from the enterprise’s live environments. But agentic systems don’t work that way. To test their capabilities, they typically need to be released into the general environment.

The proper way to proceed is for every agent in the environment, whether IT-authorized, LOB-launched, or belonging to a third party, to be tracked and controlled through PKI identities from agentic authentication vendors. An extreme defense would be to instruct all authorized agents to refuse communication from any agent without full identification. Unfortunately, autonomous agents, like their gen AI cousins, often ignore instructions (aka guardrails).

“Agentic-friendly encounters conflict with essential security principles. Enterprises cannot risk scenarios where agents autonomously discover each other, establish communication channels, and form transactional relationships,” says Kanwar Preet Singh Sandhu, who tracks cybersecurity strategies for Tata Consultancy Services.

“When IT designs a system, its tasks and objectives should be clearly defined and restricted to those duties,” he adds. “While agent-to-agent encounters are technically possible, they pose serious risks to principles like least privilege and segregation of duties. For structured and planned collaboration or integration, organizations must follow stringent protocols such as MCP [Model Context Protocol] and A2A [Agent to Agent], which were created precisely for this purpose.”

DigiCert’s Sabin says his interactions with enterprises revealed “little to none” creating identities for their autonomous agents. “Definitely less than 10%, probably less than 5%. There is a huge gap in identity.”
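One way to picture the “refuse any agent without full identification” posture is mutual TLS against an internal certificate authority, so an unidentified peer never gets past the handshake. The sketch below is a minimal illustration, not a vendor implementation or anything prescribed in the article; the certificate paths, the internal CA, and the port are assumptions.

```python
import ssl
import socket

# Illustrative sketch: an agent endpoint that refuses any peer lacking a
# verifiable PKI identity. File paths and the internal CA are assumptions.
AGENT_CERT = "certs/agent-orders.pem"          # this agent's own identity certificate
AGENT_KEY = "certs/agent-orders.key"
INTERNAL_CA = "certs/enterprise-agent-ca.pem"  # CA that issues agent identities

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile=AGENT_CERT, keyfile=AGENT_KEY)
context.load_verify_locations(cafile=INTERNAL_CA)
context.verify_mode = ssl.CERT_REQUIRED        # peers without a valid client cert are rejected

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        # accept() raises during the handshake if the peer presents no valid identity
        conn, addr = tls_server.accept()
        peer = conn.getpeercert()              # inspect the presented agent identity
        print("accepted agent:", peer.get("subject"))
```

The point of enforcing identity at the transport layer is that the check happens before any instruction is parsed, which makes “refuse unidentified agents” a hard control rather than a guardrail the agent can ignore.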
Agentic IDs: Putting the genie back in the bottle: Once agentic experiments begin without proper identities established, it’s far more difficult to add identity authentication later, Sabin notes.

“How do we start adding in identity after the fact? They don’t have these processes established. The agent can and will be hijacked, compromised. You have to have a kill switch,” he says. “AI agents’ ability to verify who is issuing a command and whether that human/system has authority is one of the defining security issues of agentic AI.”

To address that issue, CISOs will likely need to rethink identity, authentication, and privilege.

“What is truly challenging about this is that we are no longer determining how a human authenticates to a system. We are now asked to determine how an autonomous agent determines that the individual providing instructions is legitimate and that the instructions are within the expected pattern of action,” Cisco’s Kale says. “The shift to determining legitimacy based on the autonomous agent’s assessment of the human’s intent, rather than simply identifying the human, introduces a whole new range of risk factors that were never anticipated by traditional authentication methods.”

Ishraq Khan, CEO of coding productivity tool vendor Kodezi, also believes CISOs are likely underestimating the security threats that exist within agentic AI systems.

“Traditional authentication frameworks assume static identities and predictable request patterns. Autonomous agents create a new category of risk because they initiate actions independently, escalate behavior based on memory, and form new communication pathways on their own. The threat surface becomes dynamic, not static,” Khan says. “When agents update their own internal state, learn from prior interactions, or modify their role within a workflow, their identity from a security perspective changes over time. Most organizations are not prepared for agents whose capabilities and behavior evolve after authentication.”

Khan adds: “A compromised agent can impersonate collaboration patterns, fabricate system state, or manipulate other agents into cascading failures. This is not simply malware. It is a behavioral attack on decision-making.”

Harish Peri, SVP and general manager of AI Security at Okta, puts it more directly: “This is not just an NHI problem. This is a recipe for disaster. It is a new kind of identity, a new kind of relentless user.”

Regarding the problem of being unable to undo the damage when a hijacked agent gives malicious instructions to legitimate agents, Peri says it can be a challenging problem that no one seems to have solved yet.

“If the risk signal is strong enough, we do have the capability to revoke not just the privilege but the access token,” Peri says. But “the real-time kind of chaining requires more thought.”
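A kill switch of the kind Sabin and Peri describe is conceptually simple to sketch, even if the hard part is the risk signal that triggers it. The snippet below is a hypothetical illustration; the revocation set, token lifetime, and function names are assumptions, not a feature of any vendor product named in the article.

```python
import time

# Hypothetical kill-switch check. REVOKED_TOKENS would be fed by a risk
# engine or SOC action; the 5-minute lifetime is an illustrative choice.
REVOKED_TOKENS: set[str] = set()
MAX_TOKEN_AGE_SECONDS = 300

def revoke(token_id: str) -> None:
    """Kill switch: pull a compromised agent's token out of circulation."""
    REVOKED_TOKENS.add(token_id)

def is_instruction_allowed(token_id: str, issued_at: float) -> bool:
    """Refuse instructions carried by revoked or stale agent tokens."""
    if token_id in REVOKED_TOKENS:
        return False
    if time.time() - issued_at > MAX_TOKEN_AGE_SECONDS:
        return False
    return True
```

Short-lived credentials matter here: even when revocation lags behind detection, a token that expires in minutes limits how long a hijacked agent can keep issuing instructions.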
Unwinding agent interactions will be a tall order: One issue is that tracking interactions for backward chaining will require a massive amount of data to be captured from every agent in the enterprise environment. And given that autonomous agents act at non-human speed, a data warehouse for that activity will likely fill up quickly.

“By the time the agent does something and identity gets revoked, all of the downstream agents have already interacted with that compromised agent. They have already accepted assignments and have already queued up its next-step actions,” Cisco’s Kale explains. “There is no mechanism to propagate that revocation backwards. Kill switches are necessary but they are incomplete.”

The process of going backwards to all contacted agents “sounds like a straightforward script. It looks easy until you try and do it properly,” he says. “You need to know every instruction an agent has issued, and the hard part is deciding what to undo,” a scenario Kale likens to alert fatigue. “This could absolutely collapse from its own weight. This could all become noise and not security at that point.”

Jason Soroko, a senior fellow at Sectigo, agrees that backward alerting of impacted agents “is nowhere near to being fully solved at this time.” But he argues that agentic cybersecurity has inadvertently painted itself into a corner.

“A lot of autonomous AI agent authentication will rely on a simple API token to verify itself. We have inadvertently built a weapon waiting for a stolen shared secret,” Soroko says. “To fix this, we must move beyond shared secrets to cryptographic proof of possession, ensuring the agent verifies the ‘who’ behind the command, not just the ‘concert wristband’ authenticator.”
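Soroko’s “concert wristband” contrast is easiest to see in code. A bearer API token is valid for whoever presents it; proof of possession ties each command to a private key the agent never transmits. The sketch below uses an Ed25519 challenge signature via the Python cryptography package purely as an illustration; the enrollment step, nonce handling, and command format are assumptions rather than anything the article specifies.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Proof-of-possession sketch (requires the 'cryptography' package). The agent
# signs a verifier-issued nonce plus the command, so intercepting the message
# does not yield a replayable credential the way a stolen bearer token would.
agent_key = Ed25519PrivateKey.generate()      # stays with the agent, never sent on the wire
agent_public_key = agent_key.public_key()     # registered with the verifier at enrollment

nonce = os.urandom(32)                        # fresh challenge from the verifier
command = b"reorder 500 units of SKU-1234"    # hypothetical agent instruction
signature = agent_key.sign(nonce + command)   # proof the agent holds the private key

try:
    agent_public_key.verify(signature, nonce + command)
    print("command accepted: agent proved possession of its key")
except InvalidSignature:
    print("command rejected: no proof of possession")
```

A captured transcript of this exchange is useless to an attacker without the private key, and the per-command nonce blocks replay, which is the property a copied shared-secret token can never provide.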
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4109999/agentic-ai-already-hinting-at-cybersecuritys-pending-identity-crisis.html

