Cisco: One Prompt May Not Break Most AI Models, But a Conversation Will. Cisco tested eight major open-weight artificial intelligence models and found that multi-turn jailbreak attacks succeeded nearly 93% of the time, exposing a blind spot in how enterprises assess and deploy large language models for safety.
First seen on govinfosecurity.com
Jump to article: www.govinfosecurity.com/open-weight-ai-models-fail-jailbreak-test-a-30823

