AI startups leak sensitive credentials on GitHub, exposing models and training data

Compliance and governance: The Wiz findings highlight how exposed API keys can escalate into full-scale compromises across AI ecosystems, according to Sakshi Grover, senior research manager for IDC Asia Pacific Cybersecurity Services. “Stolen credentials can be used to manipulate model behavior or extract training data, undermining trust in deployed systems.”

Grover noted that such exposures are often linked to the way AI development environments operate. “AI projects often operate in loosely governed, experimentation-driven environments where notebooks, pre-trained models, and repositories are shared frequently, leaving secrets unscanned or unrotated,” she added.

She pointed to data from IDC’s Asia/Pacific Security Study, which found that 50% of enterprises in APAC plan to invest in API security when selecting CNAPP vendors, reflecting how exposed APIs have become a major attack vector.

With regulators sharpening their focus on AI safety and data protection, secret management and API governance are likely to become auditable elements of emerging AI compliance frameworks, Grover said.
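The exposure pattern Grover describes, secrets committed to shared repositories and never scanned or rotated, can be caught early with even a basic check before code is pushed. The following is a minimal illustrative sketch in Python, not the scanning method Wiz or any vendor uses; the regex patterns are simplified examples of well-known key formats, and the script name and usage are assumptions for demonstration only.

# scan_secrets.py: minimal illustrative secret scan for files about to be committed.
# The regexes below are simplified examples, not an exhaustive or production rule set.
import re
import sys
from pathlib import Path

# Hypothetical patterns for a few widely recognized credential formats.
PATTERNS = {
    "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one file."""
    findings = []
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return findings
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            # Only print a prefix of the match so the scan itself does not leak the secret.
            findings.append(f"{path}: possible {name}: {match.group(0)[:12]}...")
    return findings

if __name__ == "__main__":
    # Usage: python scan_secrets.py <files to check, e.g. the staged files in a pre-commit hook>
    hits = [hit for arg in sys.argv[1:] for hit in scan_file(Path(arg))]
    for hit in hits:
        print(hit)
    sys.exit(1 if hits else 0)  # a non-zero exit blocks the commit when wired into a hook

Wiring a check like this into a pre-commit hook or CI pipeline, alongside regular key rotation, addresses the “unscanned or unrotated” gap the IDC commentary points to.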

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4087983/ai-startups-leak-sensitive-credentials-on-github-exposing-models-and-training-data.html
