Malicious Hugging Face model masquerading as OpenAI release hits 244K downloads

Part of a broader AI supply-chain operation: In its advisory, HiddenLayer said it identified six additional Hugging Face repositories, uploaded under a separate account, that used nearly identical loader logic and shared infrastructure with the campaign. The researchers also linked elements of the operation to earlier software supply-chain attacks involving npm typosquatting campaigns and fake AI packages distributed through PyPI. The shared infrastructure “suggests these campaigns are possibly linked and likely part of a broader supply chain operation targeting open-source ecosystems,” HiddenLayer wrote. The incident follows earlier warnings from researchers about malicious code embedded inside Pickle-serialized AI model files on Hugging Face, as well as separate campaigns involving poisoned AI SDKs and fake OpenClaw installers.
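The danger of Pickle-serialized model files mentioned above is straightforward to demonstrate: Python's pickle format lets a serialized object specify an arbitrary callable to run during deserialization, so merely loading a poisoned file executes attacker code. A minimal, harmless sketch (the `Payload` class and the environment-variable marker are illustrative, not taken from the actual malware):

```python
import os
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild an object on load;
    # a malicious model file can return ANY callable plus its arguments,
    # e.g. (os.system, ("malicious command",)).
    def __reduce__(self):
        # harmless stand-in: execute a string of Python that sets an env var
        return (exec, ("import os; os.environ['PICKLE_DEMO'] = 'executed'",))

blob = pickle.dumps(Payload())   # what a poisoned .pkl/.bin blob contains
pickle.loads(blob)               # merely loading it runs the embedded code
print(os.environ.get("PICKLE_DEMO"))
```

This is why researchers recommend non-executable weight formats (such as safetensors) and scanning tools for model artifacts rather than trusting pickle-based files from public registries.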

Traditional security controls are falling short: The incident also exposes limitations in existing software composition analysis (SCA) and application security tooling when applied to AI artifacts, analysts said. “Traditional SCA was designed to inspect dependency manifests, libraries, and container images, not the increasingly complex behaviors associated with AI development workflows,” said Sakshi Grover, senior research manager for cybersecurity services at IDC. “It is far less effective at identifying malicious loader logic concealed within seemingly legitimate AI repositories.” Jaishiv Prakash, director analyst at Gartner, said enterprises now need dedicated governance controls at the AI registry layer itself. “Enterprises must establish dedicated controls for model sources, approved versions, access, and runtime validation at the registry layer,” Prakash said, adding that model repositories distribute executable artifacts and embedded logic that often fall outside the effective scope of traditional SCA tools. IDC’s November 2025 FutureScape report predicts that by 2027, 60% of enterprises deploying agentic AI systems will require an AI bill of materials to support continuous vulnerability scanning and compliance assurance, Grover said.

What should enterprises do now: HiddenLayer urged affected users to treat impacted systems as fully compromised and to prioritize reimaging over cleanup. “If you cloned Open-OSS/privacy-filter and executed start.bat, python loader.py, or any file from the repository on a Windows host, treat the system as fully compromised,” the advisory said. Browser sessions should also be considered compromised even where passwords were not stored locally, the researchers added, because stolen session cookies can bypass multifactor authentication protections. The company also recommended blocking the listed indicators of compromise, rotating credentials, invalidating active sessions, and conducting historical network hunts for connections tied to the campaign. Hugging Face confirmed to HiddenLayer that the repository violated its terms of service and removed it from the platform, according to the advisory.
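Part of that triage can be automated. The repository name Open-OSS/privacy-filter comes from the advisory; the function name and search strategy below are illustrative assumptions, a minimal sketch for hunting local checkouts whose git remote points at the malicious repository:

```python
from pathlib import Path

# Name taken from the HiddenLayer advisory; everything else is illustrative.
SUSPECT = "Open-OSS/privacy-filter"

def find_suspect_clones(root: Path) -> list[Path]:
    """Walk a directory tree and return repos whose .git/config
    references the malicious repository."""
    hits = []
    for cfg in root.rglob(".git/config"):
        try:
            if SUSPECT in cfg.read_text(errors="ignore"):
                hits.append(cfg.parent.parent)  # the working-tree directory
        except OSError:
            continue  # unreadable config; skip rather than crash the sweep
    return hits

# Example sweep over a user's home directory:
# for repo in find_suspect_clones(Path.home()):
#     print(f"possible compromise: {repo}")
```

A hit only tells you the repository was cloned; per the advisory, if any file from it was executed the host should be reimaged, not cleaned.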

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4169407/malicious-hugging-face-model-masquerading-as-openai-release-hits-244k-downloads.html
