LLMs are guessing login URLs, and it’s a cybersecurity time bomb

GitHub poisoning for AI training: Not all hallucinated URLs were unintentional. In unrelated research, Netcraft found evidence of attackers deliberately poisoning AI systems by seeding GitHub with malicious code repositories.

“Multiple fake GitHub accounts shared a project called Moonshot-Volume-Bot, seeded across accounts with rich bios, profile images, social media accounts and credible coding activity,” researchers said. “These weren’t throwaway accounts; they were crafted to be indexed by AI training pipelines.” The Moonshot project involved a counterfeit Solana blockchain API that rerouted funds directly into an attacker’s wallet.

“The compromise of data corpuses used in the AI training pipeline underscores a growing AI supply chain risk,” Carignan said. “This is not just a hallucination, it’s targeted manipulation. Data integrity, sourcing, cleansing, and verification are critical to ensuring the safety of LLM outputs.”

While researchers recommended reactive solutions like monitoring and takedown to tackle the issue, Gal Moyal, of the CTO office at Noma Security, suggested a proactive approach. “AI guardrails should validate domain ownership before recommending login,” he said. “You can’t just let models ‘guess’ URLs. Every request with a URL needs to be vetted.”
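Moyal’s suggestion maps to a simple pattern: before a model response containing a URL reaches the user, extract each URL and check its registered domain against a curated allowlist of verified official domains. The Python sketch below is a minimal illustration of that idea under assumed inputs, not a production guardrail; the OFFICIAL_DOMAINS allowlist, the vet_urls helper, and the example response are all hypothetical.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of verified official domains.
# In practice this would come from a maintained registry of
# validated domain ownership, not a hardcoded set.
OFFICIAL_DOMAINS = {
    "wellsfargo.com",
    "chase.com",
    "paypal.com",
}

URL_PATTERN = re.compile(r"https?://[^\s)\"'>]+")

def registered_domain(url: str) -> str:
    """Naive registered-domain extraction (last two host labels).

    A real implementation should use the Public Suffix List
    (e.g. via the tldextract package) to handle domains such
    as example.co.uk correctly.
    """
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:])

def vet_urls(llm_output: str) -> list[str]:
    """Return URLs in the model output whose domains are NOT allowlisted."""
    return [
        url for url in URL_PATTERN.findall(llm_output)
        if registered_domain(url) not in OFFICIAL_DOMAINS
    ]

# Example: a guessed lookalike login URL gets flagged instead of surfaced.
response = "Sign in at https://wellsfargo-login.example-phish.com/account"
flagged = vet_urls(response)
if flagged:
    print(f"Refusing to surface unverified login URLs: {flagged}")
```

The key design choice is default-deny: any URL that cannot be positively matched to a verified domain is treated as untrusted, rather than letting the model’s guess pass through unchecked.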

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4015404/llms-are-guessing-login-urls-and-its-a-cybersecurity-time-bomb.html

