URL has been copied successfully!
Attackers Can Manipulate AI Memory to Spread Lies
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

Attackers Can Manipulate AI Memory to Spread Lies

Tested on Three OpenAI Models, ‘Minja’ Has High Injection and Attack Rates. A memory injection attack dubbed Minja turns AI chatbots into unwitting agents of misinformation, requiring no hacking and just a little clever prompting. The exploit allows attackers to poison an AI model’s memory with deceptive information, potentially altering its responses for all users.

First seen on govinfosecurity.com

Jump to article: www.govinfosecurity.com/attackers-manipulate-ai-memory-to-spread-lies-a-27699

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link