Tag: LLM
-
Copy-paste vulnerability hits AI inference frameworks at Meta, Nvidia, and Microsoft
Tags: ai, authentication, cloud, data, data-breach, exploit, framework, google, infrastructure, Internet, linkedin, LLM, microsoft, nvidia, oracle, risk, vulnerability
Why this matters for AI infrastructure: The vulnerable inference servers form the backbone of many enterprise-grade AI stacks, processing sensitive prompts, model weights, and customer data. Oligo reported identifying thousands of exposed ZeroMQ sockets on the public internet, some tied to these inference clusters. If exploited, an attacker could execute arbitrary code on GPU clusters, escalate…
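The root cause reported in this family of bugs is deserialization of untrusted data received over unauthenticated ZeroMQ sockets. A minimal sketch of why that pattern is dangerous (toy code illustrating the shape of the flaw, not the actual framework internals; `record`, `Exploit`, and `safe_recv` are hypothetical names):

```python
import json
import pickle

# pickle lets the *sender* choose which callable runs during
# deserialization (via __reduce__). Here the "payload" just records
# that it executed; a real exploit would invoke os.system(...) instead.
executed = []

def record(msg):
    executed.append(msg)

class Exploit:
    def __reduce__(self):
        return (record, ("arbitrary code ran",))

wire_bytes = pickle.dumps(Exploit())

# Vulnerable receiver pattern: unpickling whatever arrives on the socket.
pickle.loads(wire_bytes)   # attacker-chosen callable fires here
print(executed)            # ['arbitrary code ran']

# Safer receiver: accept only a schema-checked JSON envelope.
def safe_recv(raw: bytes) -> dict:
    msg = json.loads(raw.decode("utf-8"))
    if not isinstance(msg, dict) or "op" not in msg:
        raise ValueError("malformed message")
    return msg

print(safe_recv(b'{"op": "infer", "prompt": "hi"}')["op"])  # infer
```

The JSON path can only produce inert data structures, which is why moving hot paths off pickle (or authenticating and firewalling the sockets) is the usual mitigation for this class of bug.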
-
Security Degradation in AI-Generated Code: A Threat Vector CISOs Can’t Ignore
A new study shows LLMs introduce more vulnerabilities with each code iteration, highlighting critical risks for CISOs and the need for skilled human oversight. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/11/security-degradation-in-ai-generated-code-a-threat-vector-cisos-cant-ignore/
-
Germany’s BSI issues guidelines to counter evasion attacks targeting LLMs
Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems and mitigate related risks. A significant and evolving threat to AI systems based on large…
-
Survey Surfaces Sharp Rise in Cybersecurity Incidents Involving AI
A survey of 500 security practitioners and decision-makers across the United States and Europe published today finds cyberattacks aimed at artificial intelligence (AI) applications are rising, with prompt injections involving large language models (LLMs) at the top of the list (76%), followed by vulnerable LLM code (66%) and LLM jailbreaking (65%). Conducted by Traceable by…
-
How ChatGPT inflicts a prompt injection on itself
Researchers have uncovered new methods for attacks via ChatGPT. Researchers at the security firm Tenable discovered seven new ways attackers can get ChatGPT to disclose private information from users’ chat histories. Most of these attacks are indirect prompt injections that abuse ChatGPT’s standard tools and features, such as its ability to…
-
AI security: Almost all LLMs leak private API keys on GitHub; ChatGPT has vulnerabilities
AI, or rather the use of language models (LLMs) such as ChatGPT, is a perennial topic, supposedly as important as sliced bread. Insiders understand that this technology is the biggest threat to data and corporate security since the introduction of the internet. Two fragments of information have come my way … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/11/12/ai-llms-die-grossen-leaker-der-zukunft/
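The leaked-key problem the entry above describes is what pattern-based secret scanners try to catch before code is pushed. A toy sketch of that approach (the regexes here are illustrative shapes modeled on common key formats, not any vendor's canonical detection rules):

```python
import re

# Illustrative secret patterns; real scanners ship far larger,
# vendor-specific rule sets and add entropy checks.
PATTERNS = {
    "openai-style": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern name, matched string) for every hit in text."""
    hits = []
    for name, rx in PATTERNS.items():
        for m in rx.finditer(text):
            hits.append((name, m.group(0)))
    return hits

# A hard-coded credential of the kind that ends up in public repos:
sample = 'client = Client(api_key="sk-' + "a" * 24 + '")\n'
print(find_secrets(sample))
```

Running such a check in a pre-commit hook or CI step catches the accidental commits that LLMs later memorize from public training data.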
-
Faster Than Real-Time: Why Your Security Fails and What to Do Next
Tags: access, ai, apple, attack, breach, business, ceo, cio, cloud, control, cybersecurity, data, defense, detection, dns, endpoint, fintech, framework, identity, infrastructure, Internet, iot, jobs, LLM, malware, network, nist, privacy, resilience, siem, soc, technology, threat, tool, vpn, zero-day, zero-trust
“Security systems fail. When it fails, what do you do?” This critical question from Spire Connect’s Pankaj Sharma set the stage at Gitex 2025 for a conversation with Francois Driessen, the “Human Ambassador” of ADAMnetworks. His core message is blunt: in cybersecurity, even real-time is not fast enough. By the time a threat is detected,…
-
LLM side-channel attack could allow snoops to guess what you’re talking about
Encryption protects content, not context First seen on theregister.com Jump to article: www.theregister.com/2025/11/11/llm_sidechannel_attack_microsoft_researcher/
-
China-Aligned UTA0388 Uses AI Tools in Global Phishing Campaigns
Volexity has linked spear phishing operations to China-aligned UTA0388 in new campaigns using advanced tactics and LLMs First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/china-aligned-uta0388-ai-tools/
-
Whisper Leak uses a side channel attack to eavesdrop on encrypted AI conversations
Tags: ai, api, attack, ciso, cloud, cyberattack, data, finance, healthcare, leak, LLM, microsoft, mitigation, network, openai, service, side-channel, vpn
Inside Microsoft’s proof-of-concept: Researchers at Microsoft simulated a real-world scenario in which the adversary could observe encrypted traffic but not decrypt it. They chose “legality of money laundering” as the target topic for the proof-of-concept. For positive samples, the team used a language model to generate 100 semantically similar variants of questions about this topic. For negative “noise” samples, it randomly…
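The attack’s core idea, that streamed token responses leave a topic-dependent fingerprint in packet sizes and timings, can be illustrated with a toy nearest-centroid classifier on synthetic traces (all data, size distributions, and function names here are illustrative, not Microsoft’s actual pipeline):

```python
import random
import statistics

random.seed(0)

# Synthetic encrypted-traffic traces: each is a sequence of TLS record
# sizes (bytes). The eavesdropper sees only sizes, never plaintext.
def synth_trace(mean_size: int, n: int = 80) -> list[int]:
    return [max(1, int(random.gauss(mean_size, 15))) for _ in range(n)]

# Illustrative assumption: the target topic elicits slightly larger
# streamed chunks, on average, than unrelated background chats.
target_traces = [synth_trace(140) for _ in range(100)]
noise_traces = [synth_trace(100) for _ in range(100)]

def features(trace):
    # Side-channel features: aggregate size statistics per trace.
    return (statistics.mean(trace), statistics.stdev(trace), max(trace))

def centroid(traces):
    feats = [features(t) for t in traces]
    return tuple(statistics.mean(col) for col in zip(*feats))

c_target, c_noise = centroid(target_traces), centroid(noise_traces)

def classify(trace) -> str:
    f = features(trace)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return "target" if dist(c_target) < dist(c_noise) else "noise"

hits = sum(classify(synth_trace(140)) == "target" for _ in range(50))
print(f"recall on fresh target traces: {hits}/50")
```

Microsoft’s real proof-of-concept trained far stronger models on both size and inter-arrival-time sequences; the sketch only shows why padding or batching streamed responses (the reported mitigations) matters, since they blur exactly these features.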
-
Microsoft finds Whisper Leak side-channel attack on LLMs
Security researchers have discovered a new method, dubbed Whisper Leak, for mounting a side-channel attack on communication with language models in streaming mode. By cleverly exploiting network packet sizes and timings, information can be extracted. With the AI wave, large language models (LLMs) and AI-supported … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/11/09/microsoft-findet-seitenkanalangriff-whisper-leak-in-llms/
-
AI benchmarks are a bad joke and LLM makers are the ones laughing
Study finds many tests don’t measure the right things First seen on theregister.com Jump to article: www.theregister.com/2025/11/07/measuring_ai_models_hampered_by/
-
NDSS 2025 YuraScanner: Leveraging LLMs For Task-driven Web App Scanning
SESSION Session 2B: Web Security Authors, Creators & Presenters: Aleksei Stafeev (CISPA Helmholtz Center for Information Security), Tim Recktenwald (CISPA Helmholtz Center for Information Security), Gianluca De Stefano (CISPA Helmholtz Center for Information Security), Soheil Khodayari (CISPA Helmholtz Center for Information Security), Giancarlo Pellegrino (CISPA Helmholtz Center for Information Security) PAPER YuraScanner: Leveraging LLMs for…
-
Popular LLMs dangerously vulnerable to iterative attacks, says Cisco
Cisco researchers probed some of the most widely used public GenAI LLMs and found many of them were dangerously susceptible to so-called multi-turn cyberattacks that produce undesirable outputs. First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366634292/Popular-LLMs-dangerously-vulnerable-to-iterative-attacks-says-Cisco

