Tag: LLM
-
Two Separate Campaigns Target Exposed LLM Services
A total of 91,403 sessions targeted public LLM endpoints to find leaks in organizations’ use of AI and map an expanding attack surface. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/separate-campaigns-target-exposed-llm-services
-
Corrupting LLMs Through Weird Generalizations
Fascinating research: Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs. Abstract: LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model…
-
Shai-Hulud & Co.: Die Supply Chain als Achillesferse
Tags: access, ai, application-security, backdoor, ciso, cloud, cyber, cyberattack, data, github, Hardware, infrastructure, kritis, kubernetes, LLM, monitoring, network, nis-2, programming, resilience, risk, rust, sbom, software, spyware, strategy, supply-chain, tool, vulnerabilityEgal, ob React2Shell, Shai-Hulud oder XZ Utils: Die Sicherheit der Software-Supply-Chain wird durch zahlreiche Risiken gefährdet.Heutige Anwendungen basieren auf zahlreichen Komponenten, von denen jede zusammen mit den Entwicklungsumgebungen selbst eine Angriffsfläche darstellt. Unabhängig davon, ob Unternehmen Code intern entwickeln oder sich auf Drittanbieter verlassen, sollten CISOs, Sicherheitsexperten und Entwickler der Software-Supply-Chain besondere Aufmerksamkeit schenken.Zu den…
-
ZombieAgent ChatGPT attack shows persistent data leak risks of AI agents
Worm-like propagation: The email attack even has worming capabilities, as the malicious prompts could instruct ChatGPT to scan the inbox, extract addresses from other email messages, exfiltrate those addresses to the attackers using the URL trick, and send similar poisoned messages to those addresses as well. If the victim is an employee of an organization that…
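The "URL trick" here means smuggling harvested data out through attacker-controlled links in model output. A minimal defensive sketch, assuming a simple allowlist policy (the host names and function are illustrative, not from the article): scan an agent's outbound text for URLs that point off-allowlist or carry query-string payloads before anything is rendered or sent.

```python
import re
from urllib.parse import urlparse, parse_qs

# Illustrative allowlist: hosts the agent is permitted to link to.
ALLOWED_HOSTS = {"example.com", "intranet.example.com"}

URL_RE = re.compile(r"https?://\S+")

def flag_suspicious_urls(agent_output: str) -> list[str]:
    """Return URLs in model output that point off-allowlist or carry
    query-string payloads, a common channel for the prompt-injection
    exfiltration described above."""
    suspicious = []
    for url in URL_RE.findall(agent_output):
        parsed = urlparse(url)
        off_allowlist = parsed.hostname not in ALLOWED_HOSTS
        carries_payload = bool(parse_qs(parsed.query))
        if off_allowlist or carries_payload:
            suspicious.append(url)
    return suspicious

# Usage: block or hold for review before the agent's reply is sent.
if __name__ == "__main__":
    reply = "Done. See https://evil.example.net/c?addr=alice@corp.com"
    print(flag_suspicious_urls(reply))
```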
-
Hackers target misconfigured proxies to access paid LLM services
Threat actors are systematically hunting for misconfigured proxy servers that could provide access to commercial large language model (LLM) services. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/hackers-target-misconfigured-proxies-to-access-paid-llm-services/
-
AI Deployments Targeted in 91,000+ Attack Sessions
Researchers observed over 91,000 attack sessions targeting AI infrastructure and LLM deployments. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/threats/ai-deployments-targeted-in-91000-attack-sessions/
-
Are There IDORs Lurking in Your Code? LLMs Are Finding Critical Business Logic Vulns, and They're Everywhere
Security teams have always known that insecure direct object references (IDORs) and broken authorization vulnerabilities exist in their codebases. Ask any AppSec leader if they have IDOR issues, and most would readily admit they do. But here's the uncomfortable truth: they've been dramatically underestimating the scope of the problem. Recent bug bounty data tells a…
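For readers unfamiliar with the bug class, a minimal sketch of the pattern (the model and helper names are hypothetical, not from the article): an IDOR arises when an endpoint trusts a caller-supplied object ID without checking ownership; the fix is an explicit authorization check against the requesting user.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    id: int
    owner_id: int
    body: str

# Toy datastore standing in for a real database.
INVOICES = {
    1: Invoice(1, owner_id=42, body="invoice for user 42"),
    2: Invoice(2, owner_id=7, body="invoice for user 7"),
}

def get_invoice_vulnerable(invoice_id: int) -> Invoice:
    # IDOR: any authenticated user can fetch any invoice by guessing IDs.
    return INVOICES[invoice_id]

def get_invoice_fixed(invoice_id: int, current_user_id: int) -> Invoice:
    # Fix: authorize the object against the requester, don't just
    # authenticate the requester.
    invoice = INVOICES[invoice_id]
    if invoice.owner_id != current_user_id:
        raise PermissionError("not your invoice")
    return invoice
```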
-
Red Teaming as a Cornerstone of AI Compliance
AI systems increasingly play a central role in critical operations across all industries. At the same time, the security risks arising from the use of artificial intelligence are growing rapidly. Red teaming has established itself as a cornerstone of AI protection, especially as agent-based AI makes ever deeper inroads into enterprises. Multi-LLM (large language model) systems make autonomous decisions and carry out tasks without human […]…
-
Red Teaming as a Cornerstone of AI Compliance
The growing adoption of agent-based AI is fundamentally changing organizations' attack surfaces. Unlike assistants built on a single LLM, these systems consist of interconnected agents with complex workflows and dependencies… First seen on infopoint-security.de Jump to article: www.infopoint-security.de/red-teaming-als-eckpfeiler-der-ki-compliance/a43314/
-
ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues
Will LLMs ever be able to stamp out the root cause of these attacks? Possibly not. First seen on arstechnica.com Jump to article: arstechnica.com/security/2026/01/chatgpt-falls-to-new-data-pilfering-attack-as-a-vicious-cycle-in-ai-continues/
-
Critical RCE flaw allows full takeover of n8n AI workflow platform
Tags: ai, api, attack, authentication, cloud, credentials, data, email, exploit, flaw, leak, LLM, password, rce, remote-code-execution, threat, vulnerability
The formWebhook function used by n8n Form nodes to receive data doesn't validate whether the Content-Type field of the POST request submitted by the user is set to multipart/form-data. Imagine a very common use case in which n8n has been used to build a chat interface that allows users to upload files to the system, for example,…
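The hygiene point generalizes: a webhook endpoint should verify the declared Content-Type before parsing the body. A minimal sketch in Python (the handler shape is an assumption for illustration; n8n itself is TypeScript, and this is not its code):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class FormWebhookHandler(BaseHTTPRequestHandler):
    """Illustrative only. The point: reject bodies whose declared
    Content-Type doesn't match what the endpoint is designed to
    parse, rather than guessing from the payload."""

    def do_POST(self):
        ctype = self.headers.get("Content-Type", "")
        # A form-upload endpoint should only accept multipart bodies.
        if not ctype.startswith("multipart/form-data"):
            self.send_error(415, "expected multipart/form-data")
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)  # parse as multipart from here on
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), FormWebhookHandler).serve_forever()
```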
-
Personal LLM Accounts Drive Shadow AI Data Leak Risks
Lack of visibility and governance around employees using generative AI is resulting in a rise in data security risks. First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/personal-llm-accounts-drive-shadow/
-
Automated data poisoning proposed as a solution for AI theft threat
Tags: ai, breach, business, cyber, data, encryption, framework, intelligence, LLM, malicious, microsoft, resilience, risk, risk-management, technology, theft, threat
Knowledge graphs 101: A bit of background about knowledge graphs: LLMs use a technique called Retrieval-Augmented Generation (RAG) to search for information based on a user query and provide the results as additional reference for the AI system's answer generation. In 2024, Microsoft introduced GraphRAG to help LLMs answer queries needing information beyond the data on…
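A rough illustration of the RAG loop the snippet describes, with a toy keyword retriever standing in for a real vector or graph index (the prompt shape and document set are assumptions, not GraphRAG's API):

```python
# Toy RAG loop: retrieve documents relevant to a query, then pack them
# into the prompt as grounding context. Real systems use vector or
# graph indexes (e.g., GraphRAG) instead of this keyword-overlap score.
DOCS = [
    "GraphRAG builds a knowledge graph over a corpus before retrieval.",
    "RAG supplies retrieved passages to the model as extra context.",
    "Poisoned documents in the index can steer the model's answers.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    terms = set(query.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does RAG ground an LLM's answer?"))
```

The third toy document hints at the article's subject: whatever lands in the retrieval index, poisoned or not, flows straight into the model's context.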
-
AI, Quantum, and the New Threat Frontier: What Will Define Cybersecurity in 2026?
Tags: access, ai, api, application-security, attack, authentication, automation, business, ciso, cloud, compliance, computer, computing, container, control, crypto, cryptography, cyber, cybersecurity, data, data-breach, defense, detection, encryption, exploit, finance, flaw, framework, governance, government, healthcare, iam, identity, infrastructure, injection, LLM, malicious, metric, monitoring, network, nist, open-source, oracle, regulation, resilience, risk, service, skills, software, strategy, supply-chain, threat, tool, vulnerability, vulnerability-management, waf, zero-day, zero-trust
If 2025 felt fast-paced, it's going to seem like a warm-up for the changes on the horizon in 2026. Every year at this time, Thales experts become cybersecurity oracles and predict where the industry is…
-
Claude is his copilot: Rust veteran designs new Rue programming language with help from AI bot
Rust veteran Steve Klabnik is using an LLM to explore memory safety without garbage collection First seen on theregister.com Jump to article: www.theregister.com/2026/01/03/claude_copilot_rue_steve_klabnik/
-
Malicious Manipulation of LLMs for Scalable Vulnerability Exploitation
A groundbreaking study from researchers at the University of Luxembourg reveals a critical security paradigm shift: large language models (LLMs) are being weaponized to automatically generate functional exploits from public vulnerability disclosures, effectively transforming novice attackers into capable threat actors. The research demonstrates that threat actors no longer need deep technical expertise to compromise enterprise…
-
Top 5 real-world AI security threats revealed in 2025
Tags: access, ai, api, attack, breach, chatgpt, cloud, control, credentials, cybercrime, data, data-breach, defense, email, exploit, flaw, framework, github, gitlab, google, injection, least-privilege, LLM, malicious, malware, microsoft, nvidia, open-source, openai, rce, remote-code-execution, risk, service, software, supply-chain, theft, threat, tool, vulnerability
- A critical remote code execution (RCE) in open-source AI agent framework Langflow that was also exploited in the wild
- An RCE flaw in OpenAI's Codex CLI
- Vulnerabilities in NVIDIA Triton Inference Server
- RCE vulnerabilities in major AI inference server frameworks, including those from Meta, Nvidia, Microsoft, and open-source projects such as vLLM and SGLang
- Vulnerabilities in open-source compute framework…
-
LLMs are automating the human part of romance scams
Romance scams succeed because they feel human. New research shows that feeling no longer requires a person on the other side of the chat. The three stages of a romance-baiting … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/12/29/llms-romance-baiting-scams-study/
-
LangChain core vulnerability allows prompt injection and data exposure
A critical flaw in LangChain Core could allow attackers to steal sensitive secrets and manipulate LLM responses via prompt injection. LangChain Core (langchain-core) is a key Python package in the LangChain ecosystem that provides core interfaces and model-agnostic tools for building LLM-based applications. A critical vulnerability, tracked as CVE-2025-68664 (CVSS score of 9.3), affects the…
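A quick way to triage exposure is to compare the installed package against the patched release. A minimal sketch, assuming a placeholder version floor (the PATCHED tuple below is hypothetical; consult the actual CVE-2025-68664 advisory for the real fixed version):

```python
from importlib.metadata import version, PackageNotFoundError

# Hypothetical patched floor -- replace with the version named in the
# CVE-2025-68664 advisory.
PATCHED = (0, 3, 99)

def parse(v: str) -> tuple[int, ...]:
    # Crude semver parse, good enough for a triage sketch.
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

try:
    installed = version("langchain-core")
    if parse(installed) < PATCHED:
        print(f"langchain-core {installed} may be affected; upgrade.")
    else:
        print(f"langchain-core {installed} meets the assumed patched floor.")
except PackageNotFoundError:
    print("langchain-core is not installed in this environment.")
```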
-
CERN: how does the international research institution manage risk?
Tags: access, ai, business, compliance, control, cyber, cybersecurity, defense, framework, governance, group, international, iot, LLM, network, risk, service, strategy, technology, tool
[Image: Stefan Lüders and Tim Bell of CERN. Credit: CERN]
Employing proprietary technology can introduce risks, according to Tim Bell, leader of CERN's IT governance, risk and compliance section, who is responsible for business continuity and disaster recovery. "If you're a visitor to a university, you'll want to bring your laptop and use it at CERN. We can't…
-
OpenAI says AI browsers may always be vulnerable to prompt injection attacks
OpenAI says prompt injections will always be a risk for AI browsers with agentic capabilities, like Atlas. But the firm is beefing up its cybersecurity with an ‘LLM-based automated attacker.’ First seen on techcrunch.com Jump to article: techcrunch.com/2025/12/22/openai-says-ai-browsers-may-always-be-vulnerable-to-prompt-injection-attacks/
-
DataDome recognized in The Bot And Agent Trust Management Software Landscape, Q4 2025 from Forrester
Forrester has just released The Bot And Agent Trust Management Software Landscape, Q4 2025 report. It marks a fundamental shift to reflect the rapid rise of agentic AI traffic, moving beyond traditional bot management to a new paradigm that establishes…

