US government agency to safety test frontier AI models before release

Executive order ‘taking shape’: Following the announcement from CAISI, a report published on Wednesday indicated that the White House is preparing an executive order that would create a vetting system for all new artificial intelligence models, key among them Anthropic’s Mythos. Bloomberg reported, “the directive is taking shape weeks after Anthropic revealed that its breakthrough Mythos model was adept at finding network vulnerabilities and could pose a global cybersecurity risk.”

Significant change in policy direction: Carmi Levy, an independent technology analyst, said, “it is patently obvious that this week’s announcement that establishes the Center for AI Standards and Innovation as the testing ground for frontier AI models is directly linked to the potential executive order that would lead to a vetting system for AI models.”

It isn’t coincidental, he said, “that the announcements were made in rapid succession, and it reinforces the growing urgency for governments in the US and elsewhere to tighten partnerships with key AI vendors to maximize AI-related security and minimize the potential for systemic risk.”

This latest flurry of activity from Washington marks a significant shift in policy direction from an administration that until recently had been following a more laissez-faire approach to regulation, Levy pointed out.

Concerns around Anthropic’s Claude Mythos model, and the relative ease with which it could discover and exploit vulnerabilities in digital systems, “might have helped shift the federal government’s position on AI-related regulation, particularly around the renewed push to enforce standards for AI-related deployments across government infrastructure,” he said.

AI vendors like Google, Microsoft, and xAI, Levy added, “must walk a political highwire of sorts as they balance the need to release models into the marketplace in a timely, cost-effective manner with increasingly defined rules around AI-related cybersecurity and safety. The industry can’t afford a scenario where vendors themselves make up the rules as they go along.”

At the same time, he said, the recent showdown between Anthropic and the Pentagon illustrates why the vendors might be forgiven for viewing the federal government’s growing interest in AI testing and regulation with at least a certain degree of caution.

According to Levy, “while the administration’s efforts to centralize testing and oversight should streamline the go-to-market process for vendors and accelerate the development of best practices around frontier model development, the political overtones of recent government-industry partnerships cannot be ignored.”

This article originally appeared on CIO.com.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4168135/us-government-agency-to-safety-test-frontier-ai-models-before-release-2.html
