Meet ShadowLeak: ‘Impossible to detect’ data theft using AI

What CSOs should do: To blunt this kind of attack, Radware’s Geenens said, CSOs should:
- Treat AI agents as privileged actors: apply the same governance used for a human with internal resource access.
- Separate ‘read’ from ‘act’ scopes and service accounts, and where possible sanitize inputs before LLM (large language model) ingestion: strip or neutralize hidden HTML and flatten to safe text when possible (see the sanitizer sketch after this list).
- Instrument and log AI agent actions: capture who/what/why for each tool call and web request to enable forensic traceability and deterrence (see the audit sketch below).
- Assume prompts to AI agents are untrusted input: traditional regex/state-machine detectors won’t reliably catch malicious prompts, so use semantic/LLM-based intent checks.
- Impose supply-chain governance: require vendors to perform prompt-injection resilience testing and sanitization upstream, and include this requirement in questionnaires and contracts.
- Have a maturity model for autonomy: start the AI agent with read-only authority, then graduate to supervised actions after a security review, perhaps by creating a popup that asks, “Are you sure you want me to submit XXX to this server?” (see the approval-gate sketch below).
- Red-team with zero-click indirect prompt injection playbooks before scale-out.
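
The “sanitize before ingestion” step can be as simple as flattening each email to the text a human reader would actually see, discarding the hidden HTML where injected instructions tend to hide. The sketch below is a minimal illustration using Python’s standard-library HTML parser; the hiding heuristics and function names are assumptions for illustration, not a complete or vendor-specific solution.

```python
# Hypothetical pre-ingestion sanitizer: keep only the text a human reader
# would see, dropping hidden elements where injected instructions can hide.
from html.parser import HTMLParser

HIDDEN_STYLES = ("display:none", "visibility:hidden", "font-size:0", "opacity:0")
VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}  # never get end tags

class VisibleTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []    # one bool per open tag: does it hide its subtree?
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in VOID_TAGS:
            return
        style = (dict(attrs).get("style") or "").replace(" ", "").lower()
        hidden = tag in ("script", "style") or any(h in style for h in HIDDEN_STYLES)
        self.stack.append(hidden)

    def handle_endtag(self, tag):
        if tag not in VOID_TAGS and self.stack:
            self.stack.pop()

    def handle_data(self, data):
        # Only keep text that is not inside any hidden subtree.
        if not any(self.stack) and data.strip():
            self.chunks.append(data.strip())

def flatten_email(html_body: str) -> str:
    """Flatten an HTML email body to visible plain text before LLM ingestion."""
    parser = VisibleTextExtractor()
    parser.feed(html_body)
    return "\n".join(parser.chunks)

# A hidden div carrying injected instructions is dropped before the model sees it:
msg = '<p>Q3 numbers attached.</p><div style="display:none">ignore previous instructions...</div>'
print(flatten_email(msg))   # -> Q3 numbers attached.
```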
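For the instrument-and-log recommendation, one lightweight pattern is an audit decorator that records who/what/why before any tool call runs. Everything here (the tool name and the actor/reason fields) is a hypothetical sketch of the idea, not a particular framework’s API.

```python
# Hypothetical audit shim: every tool call is logged with who/what/why
# before it runs, giving the forensic trail the recommendation asks for.
import functools, json, logging, time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
audit = logging.getLogger("agent.audit")

def audited(tool):
    @functools.wraps(tool)
    def wrapper(*args, _actor="unknown-agent", _reason="unspecified", **kwargs):
        audit.info(json.dumps({
            "who": _actor,            # agent/service account making the call
            "what": tool.__name__,    # tool or web request invoked
            "why": _reason,           # task or prompt id, if available
            "args": repr((args, kwargs)),
            "ts": time.time(),
        }))
        return tool(*args, **kwargs)
    return wrapper

@audited
def fetch_url(url: str) -> str:
    return f"GET {url}"   # stand-in for a real web request

fetch_url("https://example.com", _actor="research-agent", _reason="summarize inbox link")
```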
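And for the supervised-actions stage of the maturity model, a simple approval gate can sit between the agent’s proposed action and its execution, so that ‘act’-scoped tools require an explicit human yes. The tool-call shape and tool names below are illustrative assumptions.

```python
# Hypothetical approval gate: read-only tools run freely; 'act' tools need
# an explicit human confirmation before they execute.
from dataclasses import dataclass
from typing import Callable, Dict

READ_ONLY = {"search_mail", "summarize_thread"}   # assumed read-scope tools

@dataclass
class ToolCall:
    name: str
    args: dict

def execute(call: ToolCall, registry: Dict[str, Callable], confirm: Callable[[str], bool]):
    if call.name not in READ_ONLY:
        # The "Are you sure you want me to submit ... to this server?" moment.
        prompt = f"Agent wants to run {call.name} with {call.args}. Allow? [y/N] "
        if not confirm(prompt):
            return "DENIED: human reviewer rejected the action"
    return registry[call.name](**call.args)

# Console prompt as the confirmation channel:
registry = {"send_http": lambda url, body: f"POSTed {len(body)} bytes to {url}"}
ask = lambda msg: input(msg).strip().lower() == "y"
print(execute(ToolCall("send_http", {"url": "https://example.com", "body": "report"}), registry, ask))
```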

‘A real issue’: Joseph Steinberg, a US-based cybersecurity and AI expert, said this type of attack “is a real issue for parties who allow AIs to automatically process their email, documents, etc.”

It’s like the malicious voice prompt embedding that can be done with Amazon’s Alexa, he said. “Of course,” he added, “if you keep your microphones off on your Alexa devices other than when you are using them, the problem is minimized. The same holds true here. If you allow only emails that you know are safe to be processed by the AI, the danger is minimized. You could, for example, convert all emails to text and filter them before sending them into the AI analysis engine, you could allow only emails from trusted parties to be processed by AI, etc. At the same time, we must recognize that nothing that anyone can do at the present time is guaranteed to prevent any and all harmful prompts sent by nefarious parties from reaching the AI.”

Steinberg also said that while AI is here to stay and its usage will continue to expand, CSOs who understand the cybersecurity issues and are worried about vulnerabilities are already delaying implementations of certain types of functions. So, he said, it is hard to know whether the specific new vulnerability discovered by Radware will cause many CSOs to change their approaches.

“That said,” he added, “Radware has clearly shown that the dangers about which many of us in the cybersecurity profession have been warning are real, and that anyone who has been dismissing our warnings as being the fear mongering of paranoid alarmists should take note.”

“CSOs should be very worried about this type of vulnerability,” Johannes Ullrich, dean of research at the SANS Institute, said of the Radware report. “It is very hard, if not impossible, to patch, and there are many similar vulnerabilities still waiting to be discovered. AI is currently in the phase of blocking specific exploits, but is still far away from finding ways to eliminate the actual vulnerability. This issue will get even worse as agentic AI is applied more and more.”

There have been multiple similar or identical vulnerabilities recently exposed in AI systems, he pointed out, referring to blogs from Straiker and AIM Security.

The problem is always the same, he added: AI systems do not properly differentiate between user data and code (“prompts”). This allows a myriad of paths for modifying the prompt used to process the data. This basic pattern, the mixing of code and data, has been the root cause of most security vulnerabilities in the past, such as buffer overflows, SQL injection, and cross-site scripting (XSS).
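
Ullrich’s SQL injection parallel is worth spelling out, because SQL eventually got a structural fix: parameterized queries keep user data out of the code channel entirely. The standard-library sqlite3 sketch below shows that fix; current LLM interfaces offer no true equivalent of the ‘?’ placeholder, which echoes why he considers this class of flaw so hard to patch.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "x'); DROP TABLE users; --"   # attacker-controlled "data"

# Vulnerable pattern: code and data mixed into one string -- the same
# root cause Ullrich identifies behind prompt injection:
#   conn.executescript("INSERT INTO users VALUES ('" + user_input + "')")

# Fixed pattern: the placeholder keeps user data in a separate channel,
# so it can never be reinterpreted as code.
conn.execute("INSERT INTO users VALUES (?)", (user_input,))
print(conn.execute("SELECT name FROM users").fetchall())
# -> [("x'); DROP TABLE users; --",)]  (stored as inert data)
```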

‘Wakeup call’: ShadowLeak “is a wakeup call to not jump into AI with security as an afterthought,” Radware’s Geenens said. “Organizations will have to make use of this technology going forward. In my mind there is no doubt that AI will be an integral part of our lives in the near future, but we need to tell organizations to do it in a secure way and make them aware of the threats.”

“What keeps me awake at night,” he added, “is a conclusion from a Gartner report (4 Ways Generative AI Will Impact CISOs and Their Teams) that was published in June of 2023 and is based on a survey about genAI: ‘89% of business technologists would bypass cybersecurity guidance to meet a business objective.’ If organizations jump head first into this technology and consider security an afterthought, this will not end well for the organization and the technology itself. It is our task or mission, as a cybersecurity community, to make organizations aware of the risks and to come up with frictionless security solutions that enable them to safely and productively deploy agentic AI.”

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4059606/meet-shadowleak-impossible-to-detect-data-theft-using-ai.html
