The Mythos factor: The discussion follows Anthropic's recent introduction of Mythos, a model the company has described as representing a watershed moment for cybersecurity.

Anthropic has said Mythos Preview has found thousands of high-severity vulnerabilities, including some in every major operating system and web browser, and that AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. In one benchmark, the company reported significantly higher success rates than earlier models achieved.

Anthropic has not released the model publicly. Instead, it launched Project Glasswing, committing up to $100 million in usage credits to a select group of technology and cybersecurity companies so they can use Mythos for defensive purposes: finding and patching vulnerabilities before malicious actors can exploit them.

Anthropic has also been briefing the Cybersecurity and Infrastructure Security Agency, the Commerce Department, and other stakeholders on the potential risks and benefits of Mythos Preview. OpenAI has developed a comparable model and has released it to a small set of companies through an existing trusted-access program.
What a review might mean: Pre-release evaluation of AI models is not a new idea, but it remains poorly defined in the US policy context. The Biden executive order that Trump revoked had required developers of the largest AI systems to notify the government and share safety test results before deployment, one of several provisions the Trump administration characterized as burdensome obstacles to innovation.

The institutional picture has also shifted. The US AI Safety Institute, created under the Biden order to conduct pre-deployment evaluations and housed within the National Institute of Standards and Technology, was substantially reorganized after Trump took office. In June 2025, the agency was renamed the Center for AI Standards and Innovation, and its mission was revised. Commerce Secretary Howard Lutnick framed the change as a repudiation of what he called the use of safety as a pretext for censorship and regulation. The renamed center's mandate now includes leading unclassified evaluations of AI capabilities that may pose risks to national security, with a stated focus on demonstrable risks such as cybersecurity, biosecurity, and chemical weapons, potentially positioning it to play a role in any future review process.

Other governments have moved further and faster. The UK's AI Security Institute has conducted pre-deployment evaluations of several frontier models, working directly with labs, including Anthropic and OpenAI, to assess risk thresholds before release. The EU AI Act, which began phasing in last year, establishes mandatory conformity assessments for high-risk AI applications. The US has not established a comparable framework or the legal authority to require such reviews.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4166824/anthropic-mythos-spurs-white-house-to-weigh-pre-release-reviews-for-high-risk-ai-models.html

