URL has been copied successfully!
ChatGPT, Claude, and Gemini Among 11 AI Models Vulnerable to One-Line Jailbreak
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

ChatGPT, Claude, and Gemini Among 11 AI Models Vulnerable to One-Line Jailbreak

A newly discovered jailbreak technique named >>sockpuppeting<< successfully forces 11 leading artificial intelligence models, including ChatGPT, Claude, and Gemini, to bypass their safety guardrails. By exploiting a standard application programming interface (API) feature with a single line of code, attackers can trick these models into generating malicious outputs without requiring complex mathematical optimisation. When a […] The post ChatGPT, Claude, and Gemini Among 11 AI Models Vulnerable to One-Line Jailbreak appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

First seen on gbhackers.com

Jump to article: gbhackers.com/11-ai-models-vulnerable-to-one-line-jailbreak/

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link