

Mistral AI Models Fail Key Safety Tests, Report Finds

Pixtral Models 60 Times More Likely to Generate Harmful Content Than Rivals

Publicly available artificial intelligence models made by Mistral produce child sexual abuse material and instructions for manufacturing chemical weapons at rates far exceeding those of competing systems, researchers from Enkrypt AI found.

First seen on govinfosecurity.com

Jump to article: www.govinfosecurity.com/mistral-ai-models-fail-key-safety-tests-report-finds-a-28358

