Boards don’t need cyber metrics, they need risk signals

The seduction of counting

Even when metrics are not too technical and align with business impact, another problem emerges: what gets counted can crowd out what matters.

Wendy Nather, a longtime CISO who is now an advisor at EPSD, cautions against equating measurement with understanding. “When you are reporting to the board, there are some things you just cannot count that you have to report anyway,” she tells CSO.

She points to incidents, near misses, and changes in assumptions as examples. “Anything that changes your assumptions about how you’re managing your security program, you should be bringing those to the board, even if you can’t count them,” Nather says.

Regular metrics can create a rhythm of predictability, and that predictability could lull board members into a false sense of security. “Metrics are very seductive,” she says. “They lead us toward things that can be counted, that happen on a regular basis.” The result may be a steady flow of data that obscures structural risk or emerging weaknesses, Nather warns.

Metrics also influence behavior across the organization. In phishing programs, Nather favors measures that reinforce reporting rather than punish error. “You want to incentivize the reporting, and you want to praise people for doing it,” Nather says, emphasizing that what boards choose to measure ultimately shapes how the organization behaves.

George Tsantes, partner at business advisory firm Newport, highlights the burden of proving a security program’s effectiveness. “I think it’s shocking when I talk to different boards or different companies and discover how much time they spend proving themselves instead of actually doing things,” he tells CSO.

This dynamic is especially pronounced in regulated environments, where assurance work consumes resources that might otherwise be directed toward risk reduction. Regulatory scrutiny can also reorder priorities.
“Regulators may focus on an item that was 20th on your list, but if they write you up, now it becomes No. 1,” Tsantes says. Boards, he argues, need visibility into those tradeoffs. A mature program reduces the proving burden wherever possible so that security effort is directed toward reducing risk rather than generating documentation.

How AI is stress testing board-level cyber metrics

Despite reshaping many aspects of cybersecurity operations, the rapid adoption of artificial intelligence has not yet produced a distinct set of board-level security metrics. Instead, AI is exposing long-standing weaknesses in how organizations translate security activity into risk signals directors can act on.

Boards are not yet asking for AI-specific dashboards, experts say. What they are asking, often implicitly, is whether AI is increasing exposure, weakening controls, or altering the organization’s ability to limit damage when things go wrong.

“I don’t think we have any output-based metrics yet,” says Corelight’s Bejtlich. Before organizations can measure AI risk, he argues, they must first establish basic governance signals: where AI is in use, how widely it is deployed, and whether it is expanding the attack surface or reducing operational burden.

That visibility gap is already a concern for many security leaders. “When I talk to CISOs, their biggest concern is that they can’t always see what AI is being used inside of their enterprise,” says EPSD’s Nather. Without that awareness, boards are left with activity metrics that obscure the more fundamental question of whether the organization understands the risks it has introduced.

For Bernard Brantley, CISO at Corelight, AI does not warrant a new measurement framework so much as stricter discipline around existing ones. “I don’t think that they should differ from your standard metrics,” he tells CSO. In practice, AI amplifies familiar security challenges (initial access, lateral movement, and data exfiltration) by increasing their scale and speed.

That amplification changes what board-level metrics must signal. Expanded AI usage can increase coverage requirements, stretching teams and controls. At the same time, AI-driven automation can compress response timelines. “We were able to reduce MTTR [mean time to remediation] for this portion of our coverage by 60% because we threw an agent at it,” Brantley says. The governance signal for boards is not the presence of AI itself, but how it shifts risk concentration, response capacity, and resource tradeoffs.

For Newport’s Tsantes, AI oversight is a test of enforcement rather than measurement. “What the board needs to know is that there are good uses of AI and bad uses of AI,” he says. But visibility without consequence is not governance. “Even knowing where the AI agents might be within your assets is difficult,” Tsantes adds. “If you can’t fire somebody for using the wrong AI, then you really don’t have any teeth in that policy.”

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4136995/boards-dont-need-cyber-metrics-they-need-risk-signals.html
