Tag: LLM
-
Faster Than Real-Time: Why Your Security Fails and What to Do Next
Tags: access, ai, apple, attack, breach, business, ceo, cio, cloud, control, cybersecurity, data, defense, detection, dns, endpoint, fintech, framework, identity, infrastructure, Internet, iot, jobs, LLM, malware, network, nist, privacy, resilience, siem, soc, technology, threat, tool, vpn, zero-trust
“Security systems fail. When they fail, what do you do?” This critical question from Spire Connect’s Pankaj Sharma set the stage at Gitex 2025 for a conversation with Francois Driessen, the “Human Ambassador” of ADAMnetworks. His core message is blunt: in cybersecurity, even real-time is not fast enough. By the time a threat is detected,…
-
LLM side-channel attack could allow snoops to guess what you’re talking about
Encryption protects content, not context First seen on theregister.com Jump to article: www.theregister.com/2025/11/11/llm_sidechannel_attack_microsoft_researcher/
-
China-Aligned UTA0388 Uses AI Tools in Global Phishing Campaigns
Volexity has linked spear phishing operations to China-aligned UTA0388 in new campaigns using advanced tactics and LLMs First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/china-aligned-uta0388-ai-tools/
-
Whisper Leak uses a side channel attack to eavesdrop on encrypted AI conversations
Tags: ai, api, attack, ciso, cloud, cyberattack, data, finance, healthcare, leak, LLM, microsoft, mitigation, network, openai, service, side-channel, vpn
Inside Microsoft’s proof-of-concept: Researchers at Microsoft simulated a real-world scenario in which the adversary could observe encrypted traffic but not decrypt it. They chose “legality of money laundering” as the target topic for the proof-of-concept. For positive samples, the team used a language model to generate 100 semantically similar variants of questions about this topic. For negative noise samples, it randomly…
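The classification idea behind this proof-of-concept can be sketched in a few lines. The sketch below is illustrative only: all traffic traces are synthetic, and the assumption that on-topic streaming responses produce a distinguishable chunk-size distribution is a simplification of the actual research, not Microsoft's pipeline. A nearest-centroid classifier over simple size statistics stands in for whatever model the researchers used.

```python
# Hypothetical Whisper-Leak-style sketch: an observer sees only the sizes
# of encrypted streaming chunks, never the plaintext, and tries to tell
# on-topic responses apart from random noise traffic.
import random
import statistics

random.seed(0)

def synth_trace(on_topic: bool, n: int = 60) -> list[float]:
    # Assumed (hypothetical) traffic model: on-topic responses trend toward
    # longer, more uniform chunks; noise traffic is shorter and more varied.
    if on_topic:
        return [random.gauss(48, 4) for _ in range(n)]
    return [random.gauss(36, 12) for _ in range(n)]

def features(trace: list[float]) -> tuple[float, float]:
    # Two coarse features per trace: mean chunk size and its spread.
    return (statistics.mean(trace), statistics.pstdev(trace))

def centroid(traces: list[list[float]]) -> tuple[float, float]:
    feats = [features(t) for t in traces]
    return tuple(statistics.mean(f[i] for f in feats) for i in (0, 1))

def classify(trace, pos_c, neg_c) -> bool:
    # Nearest centroid in feature space: True means "on topic".
    f = features(trace)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return dist(pos_c) < dist(neg_c)

# Mirror the setup described above: 100 on-topic variants, 100 noise samples.
pos = [synth_trace(True) for _ in range(100)]
neg = [synth_trace(False) for _ in range(100)]
pos_c, neg_c = centroid(pos), centroid(neg)

# Evaluate on fresh traces the classifier has not seen.
tests = [synth_trace(True) for _ in range(50)] + [synth_trace(False) for _ in range(50)]
labels = [True] * 50 + [False] * 50
acc = sum(classify(t, pos_c, neg_c) == l for t, l in zip(tests, labels)) / len(tests)
```

Even this toy classifier separates the two synthetic distributions reliably, which is the core of the finding: encryption hides content, but chunk sizes and timings still leak enough structure to classify topics.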
-
Microsoft Finds Whisper Leak Side-Channel Attack in LLMs
Security researchers have discovered a new method, dubbed Whisper Leak, for mounting a side-channel attack on communication with language models in streaming mode. By cleverly exploiting network packet sizes and timings, information could be extracted. With the AI wave, large language models (LLMs) and AI-assisted … are being used ever more frequently. First seen on borncity.com Jump to article: www.borncity.com/blog/2025/11/09/microsoft-findet-seitenkanalangriff-whisper-leak-in-llms/
-
AI benchmarks are a bad joke and LLM makers are the ones laughing
Study finds many tests don’t measure the right things First seen on theregister.com Jump to article: www.theregister.com/2025/11/07/measuring_ai_models_hampered_by/
-
NDSS 2025 YuraScanner: Leveraging LLMs For Task-driven Web App Scanning
SESSION Session 2B: Web Security Authors, Creators & Presenters: Aleksei Stafeev (CISPA Helmholtz Center for Information Security), Tim Recktenwald (CISPA Helmholtz Center for Information Security), Gianluca De Stefano (CISPA Helmholtz Center for Information Security), Soheil Khodayari (CISPA Helmholtz Center for Information Security), Giancarlo Pellegrino (CISPA Helmholtz Center for Information Security) PAPER YuraScanner: Leveraging LLMs for…
-
Popular LLMs dangerously vulnerable to iterative attacks, says Cisco
Cisco researchers probed some of the most widely used public GenAI LLMs and found many of them were dangerously susceptible to so-called multi-turn attacks that coax them into producing undesirable outputs First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366634292/Popular-LLMs-dangerously-vulnerable-to-iterative-attacks-says-Cisco
-
Multi-Turn Attacks Expose Weaknesses in Open-Weight LLM Models
A new Cisco report exposed large language models to multi-turn adversarial attacks with 90% success rates First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/multi-turn-attacks-llm-models/
-
AI-Enabled Malware Now Actively Deployed, Says Google
Google warns of “just-in-time AI” malware using LLMs to evade detection and generate malicious code on-demand First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/aienabled-malware-actively/
-
Why API Security Will Drive AppSec in 2026 and Beyond
As LLMs, agents and Model Context Protocols (MCPs) reshape software architecture, API sprawl is creating major security blind spots. The 2025 GenAI Application Security Report reveals why continuous API discovery, testing and governance are now critical to protecting AI-driven applications from emerging semantic and prompt-based attacks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/11/why-api-security-will-drive-appsec-in-2026-and-beyond/
-
NDSS 2025 Understanding And Detecting Harmful Memes With Multimodal Large Language Models
SESSION Session 2A: LLM Security Authors, Creators & Presenters: Yong Zhuang (Wuhan University), Keyan Guo (University at Buffalo), Juan Wang (Wuhan University), Yiheng Jing (Wuhan University), Xiaoyang Xu (Wuhan University), Wenzhe Yi (Wuhan University), Mengda Yang (Wuhan University), Bo Zhao (Wuhan University), Hongxin Hu (University at Buffalo) PAPER I know what you MEME! Understanding and…
-
Google uncovers malware using LLMs to operate and evade detection
PromptLock, the AI-powered proof-of-concept ransomware developed by researchers at NYU Tandon and initially mistaken for an active threat by ESET, is no longer an isolated … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/11/05/malware-using-llms/

