Tag: LLM
-
Multi-Turn Attacks Expose Weaknesses in Open-Weight LLM Models
A new Cisco report exposed large language models to multi-turn adversarial attacks with 90% success rates First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/multi-turn-attacks-llm-models/
-
Multi-Turn Attacks Expose Weaknesses in Open-Weight LLM Models
A new Cisco report exposed large language models to multi-turn adversarial attacks with 90% success rates First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/multi-turn-attacks-llm-models/
-
AI-Enabled Malware Now Actively Deployed, Says Google
Google warns of “just-in-time AI” malware using LLMs to evade detection and generate malicious code on-demand First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/aienabled-malware-actively/
-
Why API Security Will Drive AppSec in 2026 and Beyond
As LLMs, agents and Model Context Protocols (MCPs) reshape software architecture, API sprawl is creating major security blind spots. The 2025 GenAI Application Security Report reveals why continuous API discovery, testing and governance are now critical to protecting AI-driven applications from emerging semantic and prompt-based attacks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/11/why-api-security-will-drive-appsec-in-2026-and-beyond/
-
Why API Security Will Drive AppSec in 2026 and Beyond
As LLMs, agents and Model Context Protocols (MCPs) reshape software architecture, API sprawl is creating major security blind spots. The 2025 GenAI Application Security Report reveals why continuous API discovery, testing and governance are now critical to protecting AI-driven applications from emerging semantic and prompt-based attacks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/11/why-api-security-will-drive-appsec-in-2026-and-beyond/
-
Why API Security Will Drive AppSec in 2026 and Beyond
As LLMs, agents and Model Context Protocols (MCPs) reshape software architecture, API sprawl is creating major security blind spots. The 2025 GenAI Application Security Report reveals why continuous API discovery, testing and governance are now critical to protecting AI-driven applications from emerging semantic and prompt-based attacks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/11/why-api-security-will-drive-appsec-in-2026-and-beyond/
-
NDSS 2025 Understanding And Detecting Harmful Memes With Multimodal Large Language Models
SESSION Session 2A: LLM Security Authors, Creators & Presenters: Yong Zhuang (Wuhan University), Keyan Guo (University at Buffalo), Juan Wang (Wuhan University), Yiheng Jing (Wuhan University), Xiaoyang Xu (Wuhan University), Wenzhe Yi (Wuhan University), Mengda Yang (Wuhan University), Bo Zhao (Wuhan University), Hongxin Hu (University at Buffalo) PAPER I know what you MEME! Understanding and…
-
NDSS 2025 Understanding And Detecting Harmful Memes With Multimodal Large Language Models
SESSION Session 2A: LLM Security Authors, Creators & Presenters: Yong Zhuang (Wuhan University), Keyan Guo (University at Buffalo), Juan Wang (Wuhan University), Yiheng Jing (Wuhan University), Xiaoyang Xu (Wuhan University), Wenzhe Yi (Wuhan University), Mengda Yang (Wuhan University), Bo Zhao (Wuhan University), Hongxin Hu (University at Buffalo) PAPER I know what you MEME! Understanding and…
-
Google uncovers malware using LLMs to operate and evade detection
PromptLock, the AI-powered proof-of-concept ransomware developed by researchers at NYU Tandon and initially mistaken for an active threat by ESET, is no longer an isolated … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/11/05/malware-using-llms/
-
NDSS 2025 Safety Misalignment Against Large Language Models
SESSION Session 2A: LLM Security Authors, Creators & Presenters: Yichen Gong (Tsinghua University), Delong Ran (Tsinghua University), Xinlei He (Hong Kong University of Science and Technology (Guangzhou)), Tianshuo Cong (Tsinghua University), Anyu Wang (Tsinghua University), Xiaoyun Wang (Tsinghua University) PAPER Safety Misalignment Against Large Language Models The safety alignment of Large Language Models (LLMs) is…
-
Malware Developers Test AI for Adaptive Code Generation
Google Details How Attackers Could Use LLMs to Mutate Scripts. Malware authors are experimenting with a new breed of artificial intelligence-driven attacks, with code that could potentially rewrite itself as it runs. Large language models are allowing hackers to generate, modify and execute commands on demand, instead of relying on static payloads First seen on…
-
Malware Developers Test AI for Adaptive Code Generation
Google Details How Attackers Could Use LLMs to Mutate Scripts. Malware authors are experimenting with a new breed of artificial intelligence-driven attacks, with code that could potentially rewrite itself as it runs. Large language models are allowing hackers to generate, modify and execute commands on demand, instead of relying on static payloads First seen on…
-
Malware Developers Test AI for Adaptive Code Generation
Google Details How Attackers Could Use LLMs to Mutate Scripts. Malware authors are experimenting with a new breed of artificial intelligence-driven attacks, with code that could potentially rewrite itself as it runs. Large language models are allowing hackers to generate, modify and execute commands on demand, instead of relying on static payloads First seen on…
-
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms. Key takeaways: Tenable Research has discovered multiple new and persistent vulnerabilities in OpenAI’s ChatGPT that could allow an attacker to exfiltrate private information from users’ memories and…
-
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms. Key takeaways: Tenable Research has discovered multiple new and persistent vulnerabilities in OpenAI’s ChatGPT that could allow an attacker to exfiltrate private information from users’ memories and…
-
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms. Key takeaways: Tenable Research has discovered multiple new and persistent vulnerabilities in OpenAI’s ChatGPT that could allow an attacker to exfiltrate private information from users’ memories and…
-
NDSS 2025 The Philosopher’s Stone: Trojaning Plugins Of Large Language Models
Tags: attack, conference, control, data, defense, exploit, LLM, malicious, malware, network, open-source, phishing, spear-phishingSESSION Session 2A: LLM Security Authors, Creators & Presenters: Tian Dong (Shanghai Jiao Tong University), Minhui Xue (CSIRO’s Data61), Guoxing Chen (Shanghai Jiao Tong University), Rayne Holland (CSIRO’s Data61), Yan Meng (Shanghai Jiao Tong University), Shaofeng Li (Southeast University), Zhen Liu (Shanghai Jiao Tong University), Haojin Zhu (Shanghai Jiao Tong University) PAPER The Philosopher’s Stone:…
-
Ryt Bank taps agentic AI for conversational banking
Malaysia’s Ryt Bank is using its own LLM and agentic AI framework to allow customers to perform banking transactions in natural language, replacing traditional menus and buttons First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366634082/Ryt-Bank-taps-agentic-AI-for-conversational-banking
-
Sophos entwickelt LLM-Salting-Technik zum Schutz vor Jailbreak-Prompts
Konkret haben die Forschenden einen Bereich in den sogenannten Modellaktivierungen identifiziert, der für das ‘Verweigerungsverhalten” zuständig ist also dafür, wann die KI bestimmte Anfragen ablehnt. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/sophos-entwickelt-llm-salting-technik-zum-schutz-vor-jailbreak-pompts/a42603/
-
OpenAIs Aardvark soll Fehler im Code erkennen und beheben
Tags: ai, ceo, chatgpt, cve, cyberattack, LLM, open-source, openai, risk, software, supply-chain, tool, update, vulnerabilityKI soll das Thema Sicherheit frühzeitig in den Development-Prozess miteinbeziehen.OpenAI hat Aardvark vorgestellt, einen autonomen Agenten auf Basis von GPT-5. Er soll wie ein menschlicher Sicherheitsforscher in der Lage sein, Code zu scannen, zu verstehen und zu patchen.Im Gegensatz zu herkömmlichen Scannern, die verdächtigen Code mechanisch markieren, versucht Aardvark zu analysieren, wie und warum sich…
-
AI Developed Code: 5 Critical Security Checkpoints for Human Oversight
To write secure code with LLMs developers must have the skills to use AI as a collaborative assistant rather than an autonomous tool, Madou argues. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/ai-code-security-checkpoints
-
AI Developed Code: 5 Critical Security Checkpoints for Human Oversight
To write secure code with LLMs developers must have the skills to use AI as a collaborative assistant rather than an autonomous tool, Madou argues. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/ai-code-security-checkpoints
-
Why API Security Is Central to AI Governance
APIs are now the action layer of AI that make up your API fabric. Every LLM workflow, agent, and MCP tool call rides on an API. This makes API governance the working heart of AI governance, especially with the arrival of landmark frameworks like the EU AI Act and ISO/IEC 42001. These new regulations turn…
-
OpenAI launches Aardvark to detect and patch hidden bugs in code
Tags: ai, attack, cve, flaw, framework, LLM, open-source, openai, software, supply-chain, update, vulnerabilitySecuring open source and shifting security left: Aardvark’s role extends beyond enterprise environments. OpenAI has already deployed it across open-source repositories, where it claims to have discovered multiple real-world vulnerabilities, ten of which have received official CVE identifiers. The LLM giant said it plans to provide pro-bono scanning for selected non-commercial open-source projects, under a…
-
OpenAI launches Aardvark to detect and patch hidden bugs in code
Tags: ai, attack, cve, flaw, framework, LLM, open-source, openai, software, supply-chain, update, vulnerabilitySecuring open source and shifting security left: Aardvark’s role extends beyond enterprise environments. OpenAI has already deployed it across open-source repositories, where it claims to have discovered multiple real-world vulnerabilities, ten of which have received official CVE identifiers. The LLM giant said it plans to provide pro-bono scanning for selected non-commercial open-source projects, under a…
-
Large-Language-Models in KI-Agenten schützen
Der von Check Point Software Technologies akquirierte KI-Spezialist Lakera hat einen völlig neuartigen Benchmark zusammen mit Sicherheitsforschern des britischen AI Security Institute entwickelt. Dieser hilft vornehmlich, Large-Language-Models in KI-Agenten zu schützen. Der völlig neuartige Benchmark b3 ist ein Open-Source-Projekt zur Sicherheitsevaluierung, das speziell für den Schutz von LLMs in KI-Agenten entworfen worden ist. Der Benchmark…

