A study by Pillar Security found that generative AI models are highly susceptible to jailbreak attacks, which take an average of 42 seconds and five interactions to execute.
First seen on securityboulevard.com
Jump to article: securityboulevard.com/2024/10/attacks-on-genai-models-can-take-seconds-often-succeed-report/

