URL has been copied successfully!
Beef up AI security with zero trust principles
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

Strategies for CSOs: Brauchler offered three AI threat modelling strategies CSOs should consider:
Trust flow tracking, the tracking of the movement of data throughout an application, and monitoring the level of trust that is associated with that data. It’s a defense against an attacker who is able to get untrusted data into an application to control its behavior to abuse trust;Source sink mapping: A data source is any system whose output goes into the context window of an LLM. A sink is any system that consumes the output of an LLM model (like a function call or another system downstream). The purpose of mapping sources and sinks is to discover if there is an attack path through which a threat actor can get untrusted data into a data source that accesses a data sink the threat actor doesn’t already have access to;Models as threat actors: Look at your threat model landscape and replace any LLMs with a threat actor. There’s a vulnerability if the theoretical threat actor at those points can access something they normally couldn’t. “Your team should make absolutely certain there is no way for the language model at that vantage point to be exposed to untrusted data,” he said. “Otherwise you risk critical level threats within your application.””If we implement these security control primitives, we can begin to eliminate attack classes that right now we are seeing in every AI system we test,” he said.One of the most critical strategies, Brauchler said, comes down to segmentation: LLM models that run in high trust contexts should never be exposed to untrusted data. And models exposed to untrusted data should never have access to high privilege functionality. “It’s a matter of segmenting those models that are operating in high trusted zones, and those operating with low trusted data.”In addition, CSOs should approach AI defense beginning with their architecture teams. “AI security is not something you can add as a patch-on solution,” he said. “You can’t add layers of guardrails, you can’t add something in the middle to make your application magically secure. Your teams need to be developing your systems with security from the ground up. And the encouraging aspect is, this isn’t a new lesson. Security and its fundamentals are still applying in the same way we’ve seen in the last 30 years. What’s changed is how they’re integrated into environments that leverage AI.”He also referred CSOs and developers to:
the ISO 42001 standard for establishing, implementing, maintaining an Artificial Intelligence Management System;the MITRE Atlas knowledge base of adversary tactics and techniques against Al-enabled systems;the OWASP Top 10 Risks and Mitigations for LLMs

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4035385/beef-up-ai-security-with-zero-trust-principles.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link