Tag: LLM
-
Google uncovers malware using LLMs to operate and evade detection
PromptLock, the AI-powered proof-of-concept ransomware developed by researchers at NYU Tandon and initially mistaken for an active threat by ESET, is no longer an isolated … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/11/05/malware-using-llms/
-
NDSS 2025 Safety Misalignment Against Large Language Models
SESSION: Session 2A: LLM Security. Authors, Creators & Presenters: Yichen Gong (Tsinghua University), Delong Ran (Tsinghua University), Xinlei He (Hong Kong University of Science and Technology (Guangzhou)), Tianshuo Cong (Tsinghua University), Anyu Wang (Tsinghua University), Xiaoyun Wang (Tsinghua University). PAPER: Safety Misalignment Against Large Language Models. The safety alignment of Large Language Models (LLMs) is…
-
Malware Developers Test AI for Adaptive Code Generation
Google Details How Attackers Could Use LLMs to Mutate Scripts. Malware authors are experimenting with a new breed of artificial intelligence-driven attacks, with code that could potentially rewrite itself as it runs. Large language models are allowing hackers to generate, modify and execute commands on demand, instead of relying on static payloads. First seen on…
-
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms. Key takeaways: Tenable Research has discovered multiple new and persistent vulnerabilities in OpenAI’s ChatGPT that could allow an attacker to exfiltrate private information from users’ memories and…
-
NDSS 2025 The Philosopher’s Stone: Trojaning Plugins Of Large Language Models
SESSION: Session 2A: LLM Security. Authors, Creators & Presenters: Tian Dong (Shanghai Jiao Tong University), Minhui Xue (CSIRO’s Data61), Guoxing Chen (Shanghai Jiao Tong University), Rayne Holland (CSIRO’s Data61), Yan Meng (Shanghai Jiao Tong University), Shaofeng Li (Southeast University), Zhen Liu (Shanghai Jiao Tong University), Haojin Zhu (Shanghai Jiao Tong University). PAPER: The Philosopher’s Stone:…
-
Ryt Bank taps agentic AI for conversational banking
Malaysia’s Ryt Bank is using its own LLM and agentic AI framework to allow customers to perform banking transactions in natural language, replacing traditional menus and buttons First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366634082/Ryt-Bank-taps-agentic-AI-for-conversational-banking
-
Sophos develops LLM salting technique to protect against jailbreak prompts
Specifically, the researchers identified a region in the model activations that is responsible for the model’s refusal behavior, i.e. for when the AI rejects certain requests. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/sophos-entwickelt-llm-salting-technik-zum-schutz-vor-jailbreak-pompts/a42603/
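The refusal-direction idea described above can be illustrated with a toy sketch. This is not Sophos’s implementation: the activations are random stand-ins, and the dimensions, function names, and rotation angle are invented for illustration. The sketch estimates a refusal direction as the difference of mean activations between refused and answered prompts, then “salts” it with a small rotation so attacks tuned against the stock direction no longer line up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hidden-state activations at one layer:
# rows = prompts, cols = hidden dimensions. In practice these
# would come from a real model's forward pass.
refused = rng.normal(loc=1.0, size=(64, 16))   # prompts the model refuses
complied = rng.normal(loc=0.0, size=(64, 16))  # prompts the model answers

# Estimate the "refusal direction" as the difference of means.
direction = refused.mean(axis=0) - complied.mean(axis=0)
direction /= np.linalg.norm(direction)

def salt_direction(direction, angle=0.1, seed=42):
    """Rotate the refusal direction slightly in a random plane, so a
    jailbreak tuned to the original direction no longer aligns with it."""
    rng = np.random.default_rng(seed)
    # Build a random unit vector orthogonal to the direction.
    ortho = rng.normal(size=direction.shape)
    ortho -= (ortho @ direction) * direction
    ortho /= np.linalg.norm(ortho)
    return np.cos(angle) * direction + np.sin(angle) * ortho

salted = salt_direction(direction)
print(round(float(direction @ salted), 3))  # → 0.995, i.e. cos(0.1)
```

The salted direction stays close to the original (so legitimate refusals are preserved) while being unique per deployment, which is the intuition behind the salting analogy.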
-
OpenAI’s Aardvark aims to detect and fix bugs in code
AI is meant to bring security into the development process early on. OpenAI has unveiled Aardvark, an autonomous agent based on GPT-5. Like a human security researcher, it is designed to scan, understand, and patch code. Unlike conventional scanners, which mechanically flag suspicious code, Aardvark tries to analyze how and why…
-
AI Developed Code: 5 Critical Security Checkpoints for Human Oversight
To write secure code with LLMs, developers must have the skills to use AI as a collaborative assistant rather than an autonomous tool, Madou argues. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/ai-code-security-checkpoints
-
Why API Security Is Central to AI Governance
APIs are now the action layer of AI, making up your API fabric. Every LLM workflow, agent, and MCP tool call rides on an API. That makes API governance the working heart of AI governance, especially with the arrival of landmark frameworks like the EU AI Act and ISO/IEC 42001. These new regulations turn…
-
OpenAI launches Aardvark to detect and patch hidden bugs in code
Securing open source and shifting security left: Aardvark’s role extends beyond enterprise environments. OpenAI has already deployed it across open-source repositories, where it claims to have discovered multiple real-world vulnerabilities, ten of which have received official CVE identifiers. The LLM giant said it plans to provide pro-bono scanning for selected non-commercial open-source projects, under a…
-
Protecting large language models in AI agents
Lakera, the AI specialist acquired by Check Point Software Technologies, has developed a novel benchmark together with security researchers from the UK’s AI Security Institute. Its primary purpose is to protect large language models in AI agents. The new benchmark, b3, is an open-source security evaluation project designed specifically to protect LLMs in AI agents. The benchmark…
-
Open Source “b3” Benchmark to Boost LLM Security for Agents
The backbone breaker benchmark (b3) has been launched to enhance the security of LLMs within AI agents First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/open-source-b3-benchmark-security/
-
Security for AI: A Practical Guide to Enforcing Your AI Acceptable Use Policy
An AI acceptable use policy can help your organization mitigate the risk of employees accidentally exposing sensitive data to public AI tools. Benchmark your organization’s policy against our best practices and discover how prompt-level visibility from Tenable AI Exposure eases policy enforcement. Key takeaways: An AI acceptable use policy governs the appropriate use of generative…
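Prompt-level enforcement of an acceptable use policy can be sketched as follows. This is a minimal, hypothetical illustration, not Tenable’s product: the pattern set, category names, and `check_prompt` function are invented, and a real policy engine would use proper classifiers and org-specific rules rather than three regexes.

```python
import re

# Illustrative sensitive-data patterns only (hypothetical policy categories).
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the policy categories a prompt would violate,
    so it can be blocked or redacted before leaving the org."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

violations = check_prompt(
    "Summarize this: contact jane.doe@example.com, key AKIAABCDEFGHIJKLMNOP"
)
print(violations)  # → ['email', 'aws_key']
```

The point of inspecting prompts rather than only network destinations is that the same approved AI tool can receive both harmless and policy-violating inputs; only prompt-level visibility distinguishes the two.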
-
A secure environment for LLMs: Solita launches FunctionAI for secure LLM use
First seen on security-insider.de Jump to article: www.security-insider.de/solita-startet-functionai-fuer-sichere-llm-nutzung-a-935615fbe31a7d9793a36cb1b3fa1aca/
-
Week in review: Actively exploited Windows SMB flaw, trusted OAuth apps turned into cloud backdoors
Here’s an overview of some of last week’s most interesting news, articles, interviews and videos: Most AI privacy research looks the wrong way Most research on LLM privacy has … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/26/week-in-review-actively-exploited-windows-smb-flaw-trusted-oauth-apps-turned-into-cloud-backdoors/
-
How AI LLMs Are Improving Authentication Flows
AI & LLMs are reshaping authentication. Learn how they enable adaptive security, fraud detection, and personalized login experiences in identity verification. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/how-ai-llms-are-improving-authentication-flows/
-
Cybersecurity Snapshot: Top Advice for Detecting and Preventing AI Attacks, and for Securing AI Systems
As organizations eagerly adopt AI, cybersecurity teams are racing to protect these new systems. In this special edition of the Cybersecurity Snapshot, we round up some of the best recent guidance on how to fend off AI attacks, and on how to safeguard your AI systems. Key takeaways Developers are getting new playbooks from groups…
-
Faster LLM tool routing comes with new security considerations
Large language models depend on outside tools to perform real-world tasks, but connecting them to those tools often slows them down or causes failures. A new study from the … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/23/netmcp-network-aware-mcp-platform/
-
Prompt hijacking puts MCP-based AI workflows at risk
The vulnerability affects oatpp-mcp, the MCP implementation for Oat++ (oatpp), a popular framework for developing web applications in C++. Tracked as CVE-2025-6515, the flaw stems from the fact that oatpp-mcp generates guessable session IDs for use in its communication with MCP clients, an issue that other MCP servers might have as well. The Model Context Protocol was developed…
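The difference between guessable and unguessable session IDs can be sketched as follows. This is illustrative Python, not oatpp-mcp’s actual C++ code; the function names are invented. The vulnerable pattern derives IDs from predictable state, while the safer pattern draws them from a cryptographically secure random source.

```python
import itertools
import secrets

# Vulnerable pattern (illustrative): IDs from a global counter are
# trivially enumerable, letting an attacker hijack another client's
# session by guessing its ID.
_counter = itertools.count(1)

def guessable_session_id() -> str:
    return f"session-{next(_counter)}"

# Safer pattern: IDs drawn from a CSPRNG are infeasible to predict.
def random_session_id() -> str:
    return secrets.token_urlsafe(32)  # 256 bits of entropy

print(guessable_session_id())  # → session-1: the next one is obvious
print(len(random_session_id()))  # → 43 URL-safe characters
```

Because MCP session IDs gate which client a server is talking to, predictable IDs turn an ordinary prompt-injection foothold into full prompt hijacking of someone else’s session.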
-
NDSS 2025 Symposium On Usable Security And Privacy (USEC) 2025, Paper Session 1
Authors, Creators & Presenters: PAPERS: On-demand RFID: Improving Privacy, Security, and User Trust in RFID Activation through Physically-Intuitive Design. Youngwook Do (JPMorganChase and Georgia Institute of Technology), Tingyu Cheng (Georgia Institute of Technology and University of Notre Dame), Yuxi Wu (Georgia Institute of Technology and Northeastern University), HyunJoo Oh (Georgia Institute of Technology), Daniel J. Wilson…
-
NDSS 2025 Workshop On Security And Privacy Of Next-Generation Networks (FutureG) 2025, Session 1: Opening Remarks, Panel, And AI-Assisted NextG
Panelists: Ted K. Woodward, Ph.D., Technical Director for FutureG, OUSD (R&E); Phillip Porras, Program Director, Internet Security Research, SRI; Donald McBride, Senior Security Researcher, Bell Laboratories, Nokia. This panel aims to bring together various participants and stakeholders from government, industry, and academia to present and discuss recent innovations and explore options to enable recent 5G…
-
NDSS 2025 Workshop On Security And Privacy Of Next-Generation Networks (FutureG) 2025, Session 3: Novel Threats In Decentralized NextG And Securing Open RAN
PAPERS Feedback-Guided API Fuzzing of 5G Network Tianchang Yang (Pennsylvania State University), Sathiyajith K S (Pennsylvania State University), Ashwin Senthil Arumugam (Pennsylvania State University), Syed Rafiul Hussain (Pennsylvania State University) Trust or Bust: A Survey of Threats in Decentralized Wireless Networks Hetvi Shastri (University of Massachusetts Amherst), Akanksha Atrey (Nokia Bell Labs), Andre Beck (Nokia…
-
Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!
In the early 1900s, factory owners bolted the new electric dynamo onto their old central-shaft-and-pulley systems. They thought they were modernizing, but they were just doing a “retrofit.” The massive productivity boom didn’t arrive until they completely re-architected the factory around the new unit-drive motor (metaphor source). Today’s AI agent slapped onto…
-
Most AI privacy research looks the wrong way
Most research on LLM privacy has focused on the wrong problem, according to a new paper by researchers from Carnegie Mellon University and Northeastern University. The authors … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/20/llm-ai-data-privacy-research/
-
Cybersecurity Snapshot: F5 Breach Prompts Urgent U.S. Gov’t Warning, as OpenAI Details Disrupted ChatGPT Abuses
F5’s breach triggers a CISA emergency directive, as Tenable calls it “a five-alarm fire” that requires urgent action. Meanwhile, OpenAI details how attackers try to misuse ChatGPT. Plus, boards are increasing AI and cyber disclosures. And much more! Key takeaways A critical breach at cybersecurity firm F5, attributed to a nation-state, has triggered an urgent…

