The normal choices are the dangerous ones

Consider the stack a typical large enterprise was running in 2024: one vendor for ERP and supply chain, another for perimeter enforcement, another for networking and another for endpoint protection. Standard choices, responsibly made. Within a twelve-month window, each of those categories experienced significant disruptions, from zero-day exploits to update failures that disrupted global operations. Any single event was survivable. The accumulation was something else entirely.

I lived this as a Global CISO. My team planned for sequential crises with recovery time between them. What we got was overlapping disruptions across interdependent systems. One week, we were triaging an emergency patch on our perimeter while a second advisory was escalating on a different platform. The assumption that these events would arrive one at a time, that we’d have breathing room, turned out to be a planning fiction. When you are sustaining the operation itself, crises expose the seams in real time.

A firewall vulnerability isn’t just a network issue when the ERP behind it processes every financial transaction. An endpoint agent failure isn’t just a security tool outage when it takes down the operating systems running your logistics. These platforms don’t fail in isolation because they don’t operate in isolation. Increasingly, neither do the industries that depend on them. A disruption to a cloud provider ripples through healthcare systems processing claims, financial institutions settling trades and manufacturers coordinating supply chains on the same platform.

The July 2024 CrowdStrike incident made this impossible to dismiss. A routine content update, no attacker, no exploit, bricked millions of Windows systems worldwide. Airlines grounded flights. Hospitals diverted patients. Financial services went dark. The protective tool itself became the failure vector.
That should have ended the debate about whether cybersecurity is a technical problem contained within organizational boundaries or a systemic risk that spans them.

My background in industrial automation made this grimly familiar. In material handling, we knew the integration layer was the highest-risk surface. We designed systems assuming any component could fail and built degradation paths so the operation didn’t stop. Enterprise cybersecurity had somehow convinced itself that assembling best-of-breed tools was the same as building a resilient system. It isn’t. And as digital transformation pushes more critical infrastructure, from energy grids and water systems to transportation networks and medical devices, onto the same interconnected platforms, the consequences of that confusion multiply.
Resilience is a design problem, not a compliance problem

Across healthcare, financial services and manufacturing, I watched the same pattern. The compliance apparatus measured whether controls existed. It rarely measured whether the organization, or the broader infrastructure it depended on, could survive their failure. In healthcare, we demonstrate compliance while knowing our resilience to a coordinated supply-chain attack is largely untested. In financial services, we pass examinations while the insurers underwriting our risk price off the same compliance signals the examiners accept, and neither captures the systemic interdependencies between our platforms and our counterparties. In manufacturing, we secure the IT network while the operational technology controlling physical processes is increasingly exposed through the same digital transformation the business is accelerating. We are weak at the seams.

The question that followed me from role to role was simple: If a critical platform failed tomorrow, not breached, just failed, could the business keep operating? Could the critical services it provides keep functioning? The paper processes and theoretical exercises always existed, but never in a way that could forecast the cascading impacts.

The internet itself offers a better model. It was engineered to survive the loss of any individual node. Routes break and traffic finds another path. Organizations need that same architectural quality, and so does the interconnected infrastructure that sits on top of them. The goal can’t be preventing every compromise. It has to be ensuring that no single failure cascades into systemic disruption that takes critical services offline across industries. That sets the priority. You can’t audit your way there. You have to build it.

The external pressures are converging on this conclusion. Insurance is becoming harder to buy at meaningful coverage levels, and carriers are grappling with correlated risk they can’t yet price.
Regulators are pushing accountability to the C-suite. Boards want evidence of survivability, not maturity scores. And the scope of what “cybersecurity” is expected to protect keeps expanding, from AI and enterprise data to operational technology to the critical infrastructure communities depend on.

The industry built an economy around demonstrating that organizations are secure. It is optimized for audits, certifications and framework alignment. What it never solved for was proving that an organization, and the infrastructure around it, can absorb serious disruption and keep running. That is the seam that matters most.

Digital transformation didn’t just increase each organization’s attack surface. It wove those surfaces together into an emergent network of interdependency that spans sectors and borders. The question every security and risk leader should be asking is no longer whether their controls are sufficient. It’s whether they, along with their programs or offerings, are aligned to a sustainable future or holding together an increasingly heavy past.

This article is published as part of the Foundry Expert Contributor Network.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4155921/weak-at-the-seams.html

