Researchers find that the popular generative AI (GenAI) model DeepSeek exhibits hallucinations, easily bypassed guardrails, susceptibility to jailbreaking and malware-creation requests, and more, at critically high rates.
First seen on darkreading.com
Jump to article: www.darkreading.com/cyber-risk/deepseek-fails-multiple-security-tests-business-use

