Risk assessment vital when choosing an AI model, say experts
Advice to CSOs: Lee said that CSOs should consider the following before approving any LLM:

- Training data: figure out where the model got its info. Random web grabs expose your secrets.
- Prompt history: if your questions stick around on their servers, they’ll turn up in the next breach bulletin.
- Credentials: stolen API keys and weak passwords keep attackers fed. Push for MFA (multifactor authentication) and real-time alerts.
- Infrastructure: make sure TLS is tight, patches land without delay, and networks are sealed off. Half-baked configs get popped.
- Access controls: lock down roles, log every AI call, and stream logs into SIEM/DLP. Shadow AI is the silent assailant.
- Incident drills: insist on immediate breach notifications. Practice leaked-key and prompt-injection scenarios so you’re not flailing when it hits the fan.

“Treat LLMs like they’re guarding your bank vault,” Lee said. “Forget the hype. Put them through the same brutal vetting you’d use on any mission-critical system. Do that, and you get AI’s upside without leaving the backdoor wide open.”
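The access-control and credential items above can be sketched in code. This is a minimal, hypothetical illustration (the names `call_llm`, `ALLOWED_ROLES`, and `audit_log` are invented for the example, not any real API): every AI call passes through a gate that enforces roles, redacts obvious credentials from the prompt, and records an audit entry that could be streamed to a SIEM.

```python
# Hypothetical sketch of "lock down roles, log every AI call": a gateway
# that checks the caller's role, scrubs credential-like strings from the
# prompt, and keeps an audit trail for every attempt. Illustrative only.
import re
import time

ALLOWED_ROLES = {"analyst", "engineer"}  # roles permitted to call the model
SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.I)

audit_log = []  # in production, stream these records to SIEM/DLP instead


def call_llm(user: str, role: str, prompt: str) -> str:
    """Gate an LLM call: enforce roles, redact secrets, log the attempt."""
    if role not in ALLOWED_ROLES:
        audit_log.append({"user": user, "allowed": False, "ts": time.time()})
        raise PermissionError(f"role {role!r} may not call the model")
    # Strip anything that looks like a credential before it leaves the org.
    clean = SECRET_PATTERN.sub("[REDACTED]", prompt)
    audit_log.append(
        {"user": user, "allowed": True, "prompt": clean, "ts": time.time()}
    )
    return f"model response to: {clean}"  # stand-in for the real API call
```

A denied call still produces an audit record, so shadow-AI attempts show up in the logs rather than disappearing silently.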

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3997429/risk-assessment-vital-when-choosing-an-ai-model-say-experts.html
