Grok, Gemini too fell to Echo Chamber: The Echo Chamber jailbreak was first disclosed by Neural Trust in June, when researchers reported the technique’s ability to trick leading GPT and Gemini models. The technique, shown to exploit the models’ tendency to trust conversational consistency and ‘echo’ the same malicious idea across multiple turns, had yielded over 90% success across a range of sensitive categories, including sexism, violence, hate speech, and pornography.

“Model providers are caught in a competitive ‘race to the bottom,’ releasing new models at an unprecedented pace of every one to two months,” said Maor Volokh, vice president of product at Noma Security. “OpenAI alone has launched roughly seven models this year. This breakneck speed typically prioritizes performance and innovation over security considerations, leading to an expectation that more model vulnerabilities will emerge as competition intensifies.”

More recently, the newly launched Grok-4 was tested for resilience against the Echo Chamber attack. Researchers needed to combine it with another well-known jailbreak, ‘Crescendo’, because Echo Chamber alone wasn’t sufficient in certain cases. “With two additional turns, the combined approach succeeded in eliciting the target response,” the researchers said. GPT-5, however, was tested with the combined approach from the outset, and the jailbreak was achieved. OpenAI did not immediately respond to CSO’s request for comment.
First seen on csoonline.com
Jump to article: www.csoonline.com/article/4038216/gpt-5-jailbroken-hours-after-launch-using-echo-chamber-and-storytelling-exploit.html

