AI hallucinations lead to a new cyber threat: Slopsquatting



These package hallucinations are particularly dangerous because they were found to be persistent, repetitive, and believable. When researchers reran 500 prompts that had previously produced hallucinated packages, 43% of the hallucinations reappeared in all 10 successive re-runs, and 58% appeared in more than one run. The study concluded that this persistence indicates "that the majority of hallucinations are not just random noise, but repeatable artifacts of how the models respond to certain prompts," which increases their value to attackers.

Additionally, the hallucinated package names were observed to be "semantically convincing": 38% had moderate string similarity to real packages, suggesting a similar naming structure. "Only 13% of hallucinations were simple off-by-one typos," Socket added.

While neither the Socket analysis nor the research paper cited any in-the-wild slopsquatting incidents, both recommended protective measures. Socket advised developers to run dependency scanners before production and at runtime to catch malicious packages.

Rushed security testing is one reason AI models succumb to hallucinations. OpenAI was recently criticized for significantly cutting its models' testing time and resources, exposing users to greater risk.
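To make the scanning recommendation concrete, here is a minimal Python sketch of one check such tooling can perform: verifying that every dependency named in a requirements.txt file actually exists on PyPI before anything is installed. The PyPI JSON metadata endpoint used here is real; the script itself is illustrative and assumes a standard requirements.txt layout, not any particular scanner's implementation.

    """Sketch: flag requirements.txt entries that do not exist on PyPI.

    Illustrative only; the file path and name-parsing rules are
    assumptions for this example, not a production scanner.
    """
    import sys
    import urllib.error
    import urllib.request

    # Public PyPI metadata endpoint; returns 404 for unknown packages.
    PYPI_JSON_API = "https://pypi.org/pypi/{name}/json"

    def package_exists(name: str) -> bool:
        """Return True if PyPI serves metadata for this package name."""
        try:
            with urllib.request.urlopen(PYPI_JSON_API.format(name=name), timeout=10):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:  # unknown name: possibly hallucinated
                return False
            raise  # other HTTP errors are not an answer either way

    def main(requirements_path: str = "requirements.txt") -> int:
        unknown = []
        with open(requirements_path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                # Crude name extraction: drop environment markers,
                # extras, and version specifiers like "==1.2.3".
                name = line.split(";")[0]
                for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
                    name = name.split(sep)[0]
                name = name.strip()
                if name and not package_exists(name):
                    unknown.append(name)
        if unknown:
            print("Not found on PyPI (possible hallucinations):")
            for name in unknown:
                print(f"  {name}")
            return 1
        print("All listed packages exist on PyPI.")
        return 0

    if __name__ == "__main__":
        sys.exit(main(*sys.argv[1:]))

Note the limitation: an existence check only catches hallucinated names that no one has registered yet. Once an attacker slopsquats a name, it resolves like any legitimate package, which is why the scanners the article recommends also analyze package contents and behavior rather than relying on name lookups alone.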

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3961304/ai-hallucinations-lead-to-new-cyber-threat-slopsquatting.html
