The limits of ‘AI is just software’: NIST’s instinct to frame AI as an extension of traditional software allows organizations to reuse familiar concepts (risk assessment, access control, logging, defense in depth) rather than starting from zero. Workshop participants repeatedly emphasized that many controls do transfer, at least in principle.

But some experts argue that the analogy breaks down quickly in practice. AI systems behave probabilistically, not deterministically, they say. Their outputs depend on data that may change continuously after deployment. And in the case of agents, they may take actions that were not explicitly scripted in advance.

For CISOs, the risk is not that AI is unrecognizable, but that it appears recognizable enough to lull organizations into applying controls mechanically. Treating AI as “just another application” can obscure new failure modes, particularly those involving indirect manipulation through data or prompts rather than direct exploitation of code.

“AI agent systems really face a range of security threats and risks,” CAISI’s Hamin said at the workshop. “Some of these overlap with traditional software, but others kind of arise from the unique challenge of combining AI model outputs, which are non-deterministic, with the affordances and abilities of software tools.”
CISOs should watch out for framework fatigue: In kicking off the workshop, NIST senior policy advisor Katerina Megas explained that NIST reached out to the CISO community to ask what they need in terms of AI security guidance.

“Before we started down any path, we spoke to the CISO community, and we asked them, ‘So how are you all dealing with artificial intelligence? How is this affecting your day-to-day? Is this something that keeps you up at night?’ And overwhelmingly, the answer was yes, this is absolutely something that is top of mind for us. Our leadership is asking us, what are we doing?” she said at the event.

But the CISOs also told NIST that they were overwhelmed with AI documentation. Many of these publications overlapped but were not identical, Megas said. “If you were a consumer of all of these documents, it was very difficult for you to look at them and understand how they relate to what you are doing and also understand how to identify where two documents may be talking about the same thing and where they overlap.”

“If the guidance is super long, then people may not actually use it,” one workshop attendee, Naveen Konrajankuppam Mahavishnu, co-founder and CTO at Aira Security, tells CSO, suggesting that much of the material can be reduced to more digestible components.

“We can have a very detailed version, maybe a hundred pages long, but also have some sort of checklist that kind of summarizes the entire 100-page paper or something into a few pages where people can easily consume it, and then they can start implementing it,” Mahavishnu says.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4123196/nists-ai-guidance-pushes-cybersecurity-boundaries.html

