Anthropic ban heralds new era of supply chain risk, with no clear playbook

Compliance pressure before policy clarity: For organizations that do business with the federal government, the implications extend beyond technical challenges into legal and contractual risk. Alex Major, co-chair of the government contracts and global trade practice at law firm McCarter and English, tells CSO that supply chain designations like the Anthropic ban tend to move quickly from policy statements into enforceable requirements, even when formal acquisition rules lag.

“You can’t manage what you haven’t found,” Major says, emphasizing that the immediate task for CISOs is to determine where Anthropic dependencies exist across their systems and supplier networks.

That process, he says, must be approached as both a technical and a compliance exercise. Organizations may need to document how they identified affected systems, what steps they took to remove or replace components, and how they validated that those steps were effective. In a certification environment, the ability to demonstrate due diligence can be as important as the technical outcome.

At the same time, Major cautions against acting too quickly in regulated environments without appropriate controls. “Slow down,” he advises. “Get your supply chain analysis in shape and don’t do anything until those things have happened.” He adds, “If you’re moving quickly, the compliance risk of a hasty removal in a sensitive environment can exceed the compliance risk of a deliberate, documented transition plan.”

No agreement on when to act: That tension is reflected in the lack of consensus among experts about how CISOs should respond in the near term. The Pentagon’s directive provides a clear signal for defense-related systems, but the broader policy landscape remains unsettled, leaving organizations to interpret how aggressively to act.

Daniel Bardenstein, CEO and co-founder of Manifest, argues that the current policy framework does not yet provide the specificity needed to justify sweeping changes across enterprise environments. “It is not an executive order,” he tells CSO. “It’s not an OMB memo.” He describes the guidance as incomplete and insufficiently detailed to translate into operational requirements, particularly given the complexity of AI systems and the existing gaps in software supply chain security.

Pace takes a more pragmatic view for organizations already operating within federal environments. “If you are part of the federal government, you have to remove all evidence and use of Anthropic, period,” he says. At the same time, Pace acknowledges that many organizations are likely to delay action until requirements are formalized across procurement and regulatory frameworks. That hesitation reflects a broader uncertainty about how to respond to a policy that is still evolving, even as early enforcement signals emerge.

The visibility problem predates AI: The difficulty of identifying AI dependencies is not entirely new. It builds on longstanding challenges in software supply chain visibility, where organizations have struggled to maintain accurate inventories of the components in their systems.

Chris Wysopal, founder and chief security evangelist of Veracode, tells CSO that the Anthropic situation highlights how those challenges are now extending into AI. “It’s a huge change for people selling software to the federal government,” he says, noting that companies are being asked to account for the models inside their products in ways they have not previously had to do.

Wysopal says that some form of bill of materials can help organizations determine whether a specific technology appears in their software, particularly when responding to customer or regulatory requirements. At the same time, he cautions that replacing models may not be trivial if applications have been built around specific capabilities, requiring adjustments to code, workflows, and testing processes.
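As a first pass at the dependency-discovery step Major and Wysopal describe, a simple manifest scan can surface direct Anthropic dependencies in a codebase. The following is a minimal sketch only, assuming conventional manifest filenames and the publicly known `anthropic` (PyPI) and `@anthropic-ai/` (npm) package names; real discovery would also have to cover transitive dependencies, vendored code, and third-party products.

```python
import re
from pathlib import Path

# Patterns associated with Anthropic SDKs and models. This list is
# illustrative, not exhaustive -- adjust for your own environment.
ANTHROPIC_PATTERNS = [
    re.compile(r"^anthropic([=<>~! \"]|$)"),  # PyPI package: anthropic
    re.compile(r"@anthropic-ai/"),            # npm scope: @anthropic-ai/sdk
    re.compile(r"claude", re.IGNORECASE),     # model identifiers in configs
]

def scan_file(path: Path) -> list[str]:
    """Return manifest lines that look like Anthropic dependencies."""
    hits = []
    for line in path.read_text(errors="ignore").splitlines():
        if any(p.search(line) for p in ANTHROPIC_PATTERNS):
            hits.append(f"{path}: {line.strip()}")
    return hits

def scan_tree(root: str) -> list[str]:
    """Walk a source tree and check common dependency manifests."""
    manifests = ("requirements.txt", "package.json", "pyproject.toml", "go.mod")
    findings = []
    for path in Path(root).rglob("*"):
        if path.name in manifests:
            findings.extend(scan_file(path))
    return findings

if __name__ == "__main__":
    for finding in scan_tree("."):
        print(finding)
```

A scan like this only addresses first-party code; as the article notes, dependencies introduced through suppliers require the same exercise to be pushed out across the vendor network.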

AI-BOM or SBOM?: The question of how to achieve that visibility has sparked an active debate about whether existing software bill of materials (SBOM) frameworks are sufficient for AI, or whether organizations need a new approach.

Amy Chang, leader of AI threat intelligence and security researcher at Cisco Systems, argues that traditional SBOMs do not capture the full scope of AI systems. “AI systems include models, agents, prompts, and data,” she says. “If you only track packages, you’re missing how the system actually functions.” Her view is that organizations need a more dynamic representation of how AI systems operate, including how models interact with data and other components, to understand risk and manage change effectively.

Allan Friedman, the “father” of SBOM and now technologist in residence at TPO group, offers a more measured perspective. He agrees that transparency is essential but cautions against assuming that visibility alone will solve the problem. “Transparency will not solve all your problems,” he tells CSO, noting that organizations must integrate that information into broader risk management processes. “We still need a red team, and some of these basic security techniques to remind people that SBOM has never picked up my dry cleaning, not once,” he adds. “So thinking about how you take that transparency data and integrate it into your broader supply program is going to be important.”

NetRise’s Pace rejects the premise that AI requires its own new bill of materials category, arguing that a properly implemented SBOM should already capture AI-related components. In his view, the problem is not the absence of a new framework, but the incomplete adoption of existing ones. “AI-BOMs are stupid,” he says. “There’s no such thing as an AI-BOM. You have an SBOM, which identifies AI components. AI is software, last time I checked.”

The disagreement reflects a deeper uncertainty about how to model AI supply chain risk at a time when organizations are being asked to act on it.
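Pace’s point that an ordinary SBOM can already surface AI components can be illustrated with a short sketch against a CycloneDX-style JSON document. CycloneDX has defined a `machine-learning-model` component type since version 1.5; the vendor-keyword fallback below is an illustrative heuristic of our own, not part of the specification.

```python
import json

def find_ai_components(sbom: dict, vendor: str = "anthropic") -> list[dict]:
    """Flag SBOM components that are ML models or mention a given vendor."""
    matches = []
    for comp in sbom.get("components", []):
        # CycloneDX 1.5+ can mark a component as a machine-learning model.
        is_model = comp.get("type") == "machine-learning-model"
        # Fallback heuristic (illustrative): vendor name anywhere in the entry.
        mentions_vendor = vendor in json.dumps(comp).lower()
        if is_model or mentions_vendor:
            matches.append({"name": comp.get("name"), "type": comp.get("type")})
    return matches
```

Run over a `bom.json`, this lists declared model components plus any entry whose metadata mentions the vendor; it deliberately errs toward false positives, which is usually the safer direction in a compliance sweep. Chang’s objection still stands, though: none of this captures prompts, agents, or data flows.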

Removal is not the same as replacement: Even if organizations can identify where Anthropic technology is used, removing it is only part of the challenge. Replacement introduces its own set of complexities, particularly when applications have been designed around specific model behaviors.

Dependencies may be embedded deep within applications or introduced through third-party software, requiring coordination across vendors and development teams. In some cases, replacing a model may require reworking prompts, retraining systems, or revalidating outputs to ensure that functionality and performance are maintained.

Anand Oswal, EVP at Palo Alto Networks, emphasizes that visibility is only one component of a broader security strategy. Organizations also need continuous discovery, testing, and runtime controls to manage AI risk as systems evolve. “You need a full AI security solution,” he tells CSO, arguing that AI systems are dynamic, with models, data, and behaviors that change over time, making static inventories insufficient without ongoing monitoring and governance. “You want complete visibility into your AI applications, your AI agents, your AI tools, your plugins, the data they’re accessing, everything around that whole infrastructure of AI that is being used to build your applications or agents. Once you do that, that’s discovery. It’s a good thing. It’s a start.”
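One common way to soften the replacement problem described above is to route all model calls through a thin internal abstraction, so that a provider can be swapped at a single seam rather than at every call site. The sketch below is hypothetical: the class and method names are illustrative, not from any vendor SDK, and, as the article notes, prompts and output validation would still need rework per model.

```python
from abc import ABC, abstractmethod

# Hypothetical provider seam: application code depends only on this
# interface, never on a specific vendor's SDK.
class ChatProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ReplacementProvider(ChatProvider):
    """Stand-in for whichever approved model replaces the removed one."""
    def complete(self, prompt: str) -> str:
        # A real implementation would call the replacement vendor's API.
        # Swapping providers here does not exempt you from re-tuning
        # prompts and revalidating outputs for the new model.
        return f"[replacement-model] {prompt}"

def answer(provider: ChatProvider, question: str) -> str:
    # Call sites accept any ChatProvider, so a ban on one vendor means
    # changing the wiring in one place, not every feature that uses AI.
    return provider.complete(question)
```

An abstraction like this reduces the code churn of a swap, but it does not remove the need for the revalidation and testing work the article describes, and it cannot help when the dependency is buried inside a third-party product.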

A new category of supply chain risk: The Anthropic case represents a shift in how governments approach AI technologies, treating models and their associated ecosystems as supply chain components that can be restricted or removed.

For CISOs, the challenge is not simply responding to a single directive, but preparing for a future in which similar actions could be applied to other AI providers, not only by the US government but also by regulators and customers. That requires visibility into AI dependencies, clarity about how those dependencies are used, and a strategy for replacing them without disrupting critical systems.

As those expectations take shape, organizations are being asked to operate at a level of insight and control that many have not yet achieved. As Friedman cautions, “Everyone is moving quickly to build on these systems without really understanding what’s inside them.” Greater collaboration across the software and AI supply chain may eventually make that problem more manageable, he says, but for now the gap between what organizations are expected to know and what they can actually see remains wide.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4147298/anthropic-ban-heralds-new-era-of-supply-chain-risk-with-no-clear-playbook.html
