Gaining visibility: CISOs say they’re aware of the consequences of having blind spots, with data leaks and problematic AI outputs being common ones. They’re now working to gain the needed visibility to prevent such issues, says Aaron Momin, CISO and chief risk officer for Synechron, a digital consulting and technology services firm.

“The business has a mandate to adopt AI, but the trouble with this is that the business has been moving at lightspeed and CISOs are just catching up,” Momin adds.

Like other security chiefs, Momin is leaning on a well-formed security strategy, security and AI frameworks, and a clear understanding of the company’s risk appetite and risk tolerance to do that work. He’s also leaning on people, process, and technology to secure his organization’s AI deployments and improve visibility.

Still, he acknowledges blind spots could remain, explaining that traditional security tools, such as URL filtering and data loss prevention (DLP) solutions, provide a layer of control but don’t deliver the comprehensive view of AI use that CISOs need.

“They’re not necessarily sufficient. They could get to maybe 80% or 90% of what you need, but to get higher visibility, you have to add additional tools,” Momin says.

That, though, presents another challenge for CISOs.

“Those tools have to be matured, have to be extended, have to be broader to get full visibility,” Momin says. “Now some vendors are upgrading the capabilities [offered in their security tools,] and new tools are coming on the market. And they’re starting to give you full visibility.”

Thoughtworks’ Raina has a similar take on improving visibility, endorsing a multipronged approach to ensure his security team has a full picture of the organization’s AI deployments, their vulnerabilities, and their risks.
That approach combines administrative, governance, and technology controls, a combination that has a long history of success in security. But experts say that tried-and-true combination is not enough to gain full visibility when it comes to AI.

According to Pentera’s survey, no CISOs reported full visibility with no shadow AI. One-third said they had good visibility with shadow AI likely, 66% said they had limited visibility with shadow AI a known issue, and 1% said they had no visibility at all.

Full visibility may not be possible, at least not at present, says Jared Oluoch, professor and director of Eastern Michigan University’s School of Information Security and Applied Computing. Today’s tools and security strategies limit blind spots but do not eliminate them completely. “They can minimize the negative effects,” he adds.

That’s the goal, says Tal Hornstein, CISO of Cast & Crew, a provider of production software, payroll, and services for the entertainment industry.

Like others, Hornstein relies on longstanding security principles, citing the confidentiality, integrity, and availability (CIA) triad as the foundation of his approach to ensuring that AI works within established guardrails and that he can observe its behavior.

Hornstein is also looking to emerging technologies to deliver better observability and enforcement. But he acknowledges that security tech doesn’t enable full visibility at this time. “They are not fully mature yet,” he says.

That has to be enough for now, he adds, saying CISOs can’t let visibility challenges slow down AI adoption.

“AI is the most amazing technology, and whoever doesn’t use it will be left behind,” Hornstein says. “So, it’s important for me as a CISO and as a business leader to not put up barriers and block AI but to build up guardrails that allow the organization to move at the velocity it wants and the amount it wants while providing risk mitigation.”
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4157486/cisos-tackle-the-ai-visibility-gap.html

