A newly discovered jailbreak technique named >>sockpuppeting<< successfully forces 11 leading artificial intelligence models, including ChatGPT, Claude, and Gemini, to bypass their safety guardrails. By exploiting a standard application programming interface (API) feature with a single line of code, attackers can trick these models into generating malicious outputs without requiring complex mathematical optimisation. When a […] The post ChatGPT, Claude, and Gemini Among 11 AI Models Vulnerable to One-Line Jailbreak appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
First seen on gbhackers.com
Jump to article: gbhackers.com/11-ai-models-vulnerable-to-one-line-jailbreak/
![]()

