Trust, transparency, and moving slowly are crucial: Like all technologies, and perhaps more dramatically than most, agentic AI carries both risks and benefits. One obvious risk of AI agents is that, like most LLMs, they will hallucinate or make errors that could cause problems.

“If you want to remove or give agency to a platform tool to make decisions on your behalf, you have to gain a lot of trust in the system to make sure that it is acting in your best interest,” Seri says. “It can hallucinate, and you have to be vigilant in maintaining a chain of evidence between a conclusion that the system gave you and where it came from.”

Together with supply chain knowledge, transparency is crucial when using agentic AI technologies. “We emphasize that transparency is a big part of this,” Ian Riopel, CEO and co-founder of Root.io, tells CSO. “Everything that we publish or that gets shipped to our customers, they can go in and see the source code. They need to be able to see what’s changed and understand it. Security through obscurity is not a great approach.”

Another risk is that in the frenzied rush to incorporate AI agents, organizations might overlook fundamental security concerns.

“It’s new technology and people are moving fast to ship it and to innovate and to make new things,” Hillai Ben-Sasson, cloud security researcher at Wiz, says. “Everyone’s creating MCP servers for their services to have AI interact with them. But an MCP at the end of the day is the same thing as an API. [Don’t make] all the same mistakes that people made when they started creating APIs ten years ago. All these authentication problems and tokens, everything that’s just API security.”
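Ben-Sasson’s point lends itself to a concrete illustration. The sketch below treats an MCP-style tool endpoint like any other API, checking a bearer token and a per-token scope before dispatching a tool call; the token values, scope names, and helper functions are hypothetical assumptions, not drawn from any real MCP SDK or from the vendors quoted here.

```python
# Minimal sketch, assuming a hypothetical MCP-style tool endpoint.
# Token values, scope names, and function names are illustrative only.
import hmac

# In practice, tokens and their scopes would come from a secrets store, not source code.
TOKEN_SCOPES = {
    "svc-token-readonly": {"search_tickets", "read_logs"},
    "svc-token-admin": {"search_tickets", "read_logs", "close_ticket"},
}

def authorize(presented_token: str, tool_name: str) -> bool:
    """Return True only if the token is known and allowed to call this tool."""
    for known_token, scopes in TOKEN_SCOPES.items():
        # Constant-time comparison avoids timing side channels on the token check.
        if hmac.compare_digest(presented_token, known_token):
            return tool_name in scopes
    return False

def handle_tool_call(presented_token: str, tool_name: str, arguments: dict) -> dict:
    if not authorize(presented_token, tool_name):
        # Same discipline as classic API security: reject before touching tool logic.
        return {"error": "unauthorized", "tool": tool_name}
    # ... dispatch to the real tool implementation here ...
    return {"ok": True, "tool": tool_name, "arguments": arguments}
```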
Agentic AI can be a game-changer: Despite what many consider to be hype surrounding the advent of AI, experts say that, implemented with deliberateness and due diligence, AI agents can be game-changing for cybersecurity.

AI agents are “the future,” Wiz’s Ben-Sasson says. “However, given that the current stage of AI development is still immature, AI agents might, like a junior engineer, make a lot of mistakes. That’s why we have different permission sets. That’s why we have guardrails and so on.”

The real benefit of AI agents is that they can tackle the boring but necessary tasks of cybersecurity, freeing up talent for more complex work, accelerating security programs, and acting as a workforce multiplier.

“We did a bake-off of gen three of some of our agents against one of our best security researchers to create a backported patch for a critical vulnerability on a very popular piece of open-source software,” Root.io’s Riopel says. “And that researcher took eight days to create a patch that otherwise wasn’t available. It required modifying 17 different snippets of code across three different software commits. The AI agents did it in under 15 minutes. When you think about that, that’s not 10x multiplier, it’s 1,000x.”
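Ben-Sasson’s “junior engineer” framing of permission sets and guardrails can be sketched in code. The policy class, action names, and approval tier below are assumptions for illustration only; they do not describe Wiz’s or any other vendor’s actual implementation.

```python
# Minimal sketch, assuming a hypothetical per-agent permission set.
# Class and action names are illustrative, not from any product in the article.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    name: str
    allowed_actions: set = field(default_factory=set)
    requires_human_approval: set = field(default_factory=set)

    def check(self, action: str) -> str:
        if action in self.requires_human_approval:
            return "needs_approval"  # guardrail: a person signs off first
        if action in self.allowed_actions:
            return "allowed"
        return "denied"

# A low-trust agent may read and propose, but cannot change production on its own.
junior_agent = AgentPolicy(
    name="triage-agent",
    allowed_actions={"read_alerts", "summarize_incident", "draft_patch"},
    requires_human_approval={"merge_patch", "rotate_credentials"},
)

print(junior_agent.check("summarize_incident"))  # allowed
print(junior_agent.check("merge_patch"))         # needs_approval
print(junior_agent.check("delete_backups"))      # denied
```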
Force multiplication means skill set shifts, not job losses: Despite its potential to take over tasks that many security analysts perform today, agentic AI will likely not reduce the size of the current cybersecurity workforce. “No one’s getting fired in lieu of agents,” Riopel says.

“I think we are going through a skill set shift, and I wouldn’t call it an all-out replacement,” RAD Security’s Mesta says. “What AI is going to do is impact the kind of lower-level paper shuffling style jobs where I had a CSV report, I’m going to put it in Excel, and I’m going to create a ticket,” Mesta adds.

But, he says, “it will unlock extreme productivity for security teams for those who know how to use it, which is, I think, the big asterisk. If you’re anti-AI and that’s not a skill you think should be in your toolbox, it’s going to be challenging going forward to maintain the same level of job seniority you have now.”

Zafran Security’s Seri thinks it’s wrong to say that the advent of AI agents means we will now need fewer cybersecurity experts. “We will need more of them,” he says. “There is an opportunity with these tools to automate and to make your life easier, but it’s not to replace the expertise that people accumulate over time.”
How CISOs should proceed in deploying AI agents: All experts say that the deployment of AI agents inside organizations is a done deal and will arrive faster than any other technology shift, including the adoption of cloud computing. “This train has not only left the station; it’s a bullet train,” Mesta says. “It’s like the fastest train ever made.”

CISOs need to immediately strap in and grapple with the implications of a technology that they do not always fully control, if for no other reason than their team members will likely turn to AI platforms to develop their security solutions. “Saying no doesn’t work. You have to say yes with guardrails,” says Mesta.

At this still nascent stage of agentic AI, CISOs should ask questions, Riopel says. But he stresses that the main “question you should be asking is: How can I force multiply the output or the effectiveness of my team in a very short period of time? And by a short period of time, it’s not months; it should be days. That is the type of return that our customers, even in enterprise-type environments, are seeing.”

Not everyone agrees that pursuing compressed timeframes is the right strategy. “In many cases, from the CISO perspective, the takeaway here is that the agentic AI services that they are using are still immature,” Wiz’s Riancho says. “It’s still an immature industry. We need years of security improvements to make sure that everything is more stable and secure for companies and end users.”

But Riancho also thinks CISOs should be asking a lot of questions now. “I would ask difficult questions. So, before actually connecting an agent to my endpoint devices, to my infrastructure, to my SOC, to anything, ask the difficult question: Which actions are going to be performed by these agents?”

One critical question that CISOs should be asking is what happens to their organizations’ information once it has been fed into any given vendor’s agentic AI product. “I don’t want my data to go to other vendors like OpenAI or Anthropic or anybody else that is not the security vendor,” Zafran Security’s Seri says. “This is fundamental: Make sure that the data that you are sharing is not driving around the world and seeing the sights.”
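Mesta’s “yes with guardrails” and Riancho’s question about which actions agents will perform point at the same kind of control: an explicit action allowlist plus an audit trail, with data allowed to leave only for approved destinations (which also speaks to Seri’s concern about data going to third parties). The sketch below is an illustration under stated assumptions; the action names, vendor domain, and log file are placeholders, not any vendor’s actual controls.

```python
# Minimal sketch, assuming hypothetical agent actions, destinations, and log format.
import json
import time

APPROVED_ACTIONS = {"enrich_alert", "open_ticket", "quarantine_host"}
APPROVED_DATA_DESTINATIONS = {"security-vendor.example.com"}  # placeholder domain

def guarded_agent_action(agent_id: str, action: str, destination: str, payload: dict) -> bool:
    """Allow an agent action only if both the action and the data destination are approved."""
    record = {
        "ts": time.time(),
        "agent": agent_id,
        "action": action,
        "destination": destination,
        "payload_bytes": len(json.dumps(payload)),
        "decision": "denied",
    }
    if action in APPROVED_ACTIONS and destination in APPROVED_DATA_DESTINATIONS:
        record["decision"] = "allowed"
        # ... perform the action here ...
    # Append-only audit trail: every attempted action is recorded, allowed or not.
    with open("agent_audit.log", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record["decision"] == "allowed"

# Data bound for anything other than the approved security vendor is refused.
guarded_agent_action("soc-agent-1", "enrich_alert", "security-vendor.example.com", {"alert_id": "A-1"})
guarded_agent_action("soc-agent-1", "enrich_alert", "third-party-llm.example.com", {"alert_id": "A-1"})
```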
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4040145/agentic-ai-promises-a-cybersecurity-revolution-with-asterisks.html

