Tag: LLM
-
2 Separate Campaigns Probe Corporate LLMs for Secrets
A total of 91,403 sessions targeted public LLM endpoints to find leaks in organizations’ use of AI and map an expanding attack surface. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/separate-campaigns-target-exposed-llm-services
-
NDSS 2025 LLMPirate: LLMs For Black-box Hardware IP Piracy
Tags: attack, conference, detection, firmware, Hardware, Internet, LLM, mitigation, network, software, vulnerability
Session 8C: Hardware & Firmware Security
Authors, Creators & Presenters: Vasudev Gohil (Texas A&M University), Matthew DeLorenzo (Texas A&M University), Veera Vishwa Achuta Sai Venkat Nallam (Texas A&M University), Joey See (Texas A&M University), Jeyavijayan Rajendran (Texas A&M University)
PAPER: LLMPirate: LLMs for Black-box Hardware IP Piracy
The rapid advancement of large language models (LLMs)…
-
Threat Actors Launch Mass Reconnaissance of AI Systems
More Than 91,000 Attacks Target Exposed LLM Endpoints in Coordinated Campaigns. Two coordinated campaigns generated more than 91,000 attack sessions against AI infrastructure between October and January, with threat actors probing more than 70 model endpoints from OpenAI, Anthropic and Google to build target lists for future exploitation. First seen on govinfosecurity.com Jump to article:…
-
Attackers Probing Popular LLMs Looking for Access to APIs: Report
Security researchers with GreyNoise say they’ve detected a campaign in which threat actors are targeting more than 70 popular LLMs in a likely reconnaissance mission that will feed into what they call a “larger exploitation pipeline.” First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/01/attackers-probing-popular-llms-looking-for-access-to-apis-report/
-
Corrupting LLMs Through Weird Generalizations
Fascinating research: Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs. Abstract: LLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model…
-
Shai-Hulud & Co.: The Supply Chain as an Achilles’ Heel
Tags: access, ai, application-security, backdoor, ciso, cloud, cyber, cyberattack, data, github, Hardware, infrastructure, kritis, kubernetes, LLM, monitoring, network, nis-2, programming, resilience, risk, rust, sbom, software, spyware, strategy, supply-chain, tool, vulnerability
Whether it is React2Shell, Shai-Hulud, or XZ Utils: the security of the software supply chain is threatened by numerous risks. Today’s applications are built on numerous components, each of which, together with the development environments themselves, represents an attack surface. Regardless of whether organizations develop code in-house or rely on third parties, CISOs, security experts, and developers should pay particular attention to the software supply chain. Among the…
-
ZombieAgent ChatGPT attack shows persistent data leak risks of AI agents
Worm-like propagation: The email attack even has worming capabilities, as the malicious prompts could instruct ChatGPT to scan the inbox, extract addresses from other email messages, exfiltrate those addresses to the attackers using the URL trick, and send similar poisoned messages to those addresses as well. If the victim is an employee of an organization that…
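The exfiltration step described above works by smuggling stolen data inside attacker-controlled URLs. A minimal defensive sketch of that idea (not taken from the article; the allow-list and thresholds are hypothetical and illustrative only) flags outbound URLs whose query strings look like they carry smuggled addresses:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical allow-list and threshold; real deployments would tune both.
ALLOWED_HOSTS = {"example-org.com"}
MAX_QUERY_LEN = 128

def is_suspicious_url(url: str) -> bool:
    """Flag URLs on unfamiliar hosts whose query strings look like exfiltration."""
    parsed = urlparse(url)
    if parsed.hostname in ALLOWED_HOSTS:
        return False
    # Unusually long query strings can carry encoded stolen data.
    if len(parsed.query) > MAX_QUERY_LEN:
        return True
    # Flag query values that look like smuggled email addresses.
    for values in parse_qs(parsed.query).values():
        if any("@" in v for v in values):
            return True
    return False
```

A filter like this would run on an agent’s outbound requests or rendered links; it is a heuristic, not a complete defense against prompt injection.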
-
Hackers target misconfigured proxies to access paid LLM services
Threat actors are systematically hunting for misconfigured proxy servers that could provide access to commercial large language model (LLM) services. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/hackers-target-misconfigured-proxies-to-access-paid-llm-services/
-
AI Deployments Targeted in 91,000+ Attack Sessions
Researchers observed over 91,000 attack sessions targeting AI infrastructure and LLM deployments. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/threats/ai-deployments-targeted-in-91000-attack-sessions/
-
Are There IDORs Lurking in Your Code? LLMs Are Finding Critical Business Logic Vulns, and They’re Everywhere
Security teams have always known that insecure direct object references (IDORs) and broken authorization vulnerabilities exist in their codebases. Ask any AppSec leader if they have IDOR issues, and most would readily admit they do. But here’s the uncomfortable truth: they’ve been dramatically underestimating the scope of the problem. Recent bug bounty data tells a…
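For readers unfamiliar with the bug class: an IDOR is an object lookup keyed on a client-supplied ID with no ownership check. A minimal illustrative sketch (the in-memory store and all names are hypothetical) of the broken pattern next to its fix:

```python
# Hypothetical in-memory store: invoice_id -> (owner, payload)
INVOICES = {
    101: ("alice", "alice's invoice"),
    102: ("bob", "bob's invoice"),
}

def get_invoice_idor(user: str, invoice_id: int) -> str:
    # Vulnerable: trusts the client-supplied ID and never checks ownership,
    # so any authenticated user can read any invoice.
    return INVOICES[invoice_id][1]

def get_invoice_safe(user: str, invoice_id: int) -> str:
    # Fixed: authorize against the record's owner before returning it.
    owner, payload = INVOICES[invoice_id]
    if owner != user:
        raise PermissionError("not the owner of this invoice")
    return payload
```

The fix is a one-line ownership check, which is exactly why these bugs are easy to miss in review: the vulnerable and safe versions behave identically for the record’s legitimate owner.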
-
Red Teaming as a Cornerstone of AI Compliance
AI systems are playing an increasingly central role in critical operations across all industries. At the same time, the security risks arising from the use of artificial intelligence are growing rapidly. Red teaming has established itself as a cornerstone of AI protection, especially as agent-based AI makes ever deeper inroads into enterprises. Multi-LLM (large language model) systems make autonomous decisions and carry out tasks without human […]…
-
Red Teaming as a Cornerstone of AI Compliance
The growing adoption of agent-based AI is fundamentally changing organizations’ attack surfaces. Unlike assistants built on a single LLM, these systems consist of interconnected agents with complex workflows and dependencies. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/red-teaming-als-eckpfeiler-der-ki-compliance/a43314/
-
ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues
Will LLMs ever be able to stamp out the root cause of these attacks? Possibly not. First seen on arstechnica.com Jump to article: arstechnica.com/security/2026/01/chatgpt-falls-to-new-data-pilfering-attack-as-a-vicious-cycle-in-ai-continues/
-
Critical RCE flaw allows full takeover of n8n AI workflow platform
Tags: ai, api, attack, authentication, cloud, credentials, data, email, exploit, flaw, leak, LLM, password, rce, remote-code-execution, threat, vulnerability
The formWebhook function used by n8n Form nodes to receive data doesn’t validate whether the Content-Type field of the POST request submitted by the user is set to multipart/form-data. Imagine a very common use case in which n8n has been used to build a chat interface that allows users to upload files to the system, for example,…
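The flaw described above comes down to a missing Content-Type check on an upload endpoint. The following is a minimal sketch of the kind of validation reportedly skipped; it is not n8n’s actual code, and the function name is hypothetical:

```python
def accept_upload(headers: dict, body: bytes) -> bool:
    """Reject POST bodies whose Content-Type is not multipart/form-data.

    Header lookup is case-insensitive, and the boundary parameter that
    normally follows the media type is tolerated.
    """
    content_type = ""
    for name, value in headers.items():
        if name.lower() == "content-type":
            content_type = value
            break
    # Strip any ";boundary=..." suffix before comparing the media type.
    return content_type.split(";", 1)[0].strip().lower() == "multipart/form-data"
```

Validating the declared media type alone is still a weak signal (clients can lie), so real handlers should also parse the body strictly as multipart and reject anything that fails to parse.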
-
Personal LLM Accounts Drive Shadow AI Data Leak Risks
A lack of visibility and governance around employees’ use of generative AI is driving a rise in data security risks First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/personal-llm-accounts-drive-shadow/
-
Automated data poisoning proposed as a solution for AI theft threat
Tags: ai, breach, business, cyber, data, encryption, framework, intelligence, LLM, malicious, microsoft, resilience, risk, risk-management, technology, theft, threat
Knowledge graphs 101: A bit of background about knowledge graphs: LLMs use a technique called Retrieval-Augmented Generation (RAG) to search for information based on a user query and provide the results as additional reference for the AI system’s answer generation. In 2024, Microsoft introduced GraphRAG to help LLMs answer queries needing information beyond the data on…
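The RAG loop described above, retrieving documents relevant to a query and handing them to the model as extra reference, can be sketched with a toy term-overlap retriever (no real vector store or LLM is involved; everything here is illustrative):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Score each document by how many query terms it shares, then return
    # the top-k matches. Real systems use embeddings, not term overlap.
    terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved passages are prepended as reference context for the LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

GraphRAG replaces the flat document list with a knowledge graph, so retrieval can follow relationships between entities rather than matching individual passages.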
-
AI, Quantum, and the New Threat Frontier: What Will Define Cybersecurity in 2026?
Tags: access, ai, api, application-security, attack, authentication, automation, business, ciso, cloud, compliance, computer, computing, container, control, crypto, cryptography, cyber, cybersecurity, data, data-breach, defense, detection, encryption, exploit, finance, flaw, framework, governance, government, healthcare, iam, identity, infrastructure, injection, LLM, malicious, metric, monitoring, network, nist, open-source, oracle, regulation, resilience, risk, service, skills, software, strategy, supply-chain, threat, tool, vulnerability, vulnerability-management, waf, zero-day, zero-trust
If we think 2025 has been fast-paced, it’s going to feel like a warm-up for the changes on the horizon in 2026. Every year around this time, Thales experts become cybersecurity oracles and predict where the industry is…
-
Claude is his copilot: Rust veteran designs new Rue programming language with help from AI bot
Rust veteran Steve Klabnik is using an LLM to explore memory safety without garbage collection First seen on theregister.com Jump to article: www.theregister.com/2026/01/03/claude_copilot_rue_steve_klabnik/
-
Malicious Manipulation of LLMs for Scalable Vulnerability Exploitation
A groundbreaking study from researchers at the University of Luxembourg reveals a critical security paradigm shift: large language models (LLMs) are being weaponized to automatically generate functional exploits from public vulnerability disclosures, effectively transforming novice attackers into capable threat actors. The research demonstrates that threat actors no longer need deep technical expertise to compromise enterprise…
-
Top 5 real-world AI security threats revealed in 2025
Tags: access, ai, api, attack, breach, chatgpt, cloud, control, credentials, cybercrime, data, data-breach, defense, email, exploit, flaw, framework, github, gitlab, google, injection, least-privilege, LLM, malicious, malware, microsoft, nvidia, open-source, openai, rce, remote-code-execution, risk, service, software, supply-chain, theft, threat, tool, vulnerability
A critical remote code execution (RCE) flaw in the open-source AI agent framework Langflow that was also exploited in the wild; an RCE flaw in OpenAI’s Codex CLI; vulnerabilities in NVIDIA Triton Inference Server; RCE vulnerabilities in major AI inference server frameworks, including those from Meta, Nvidia, Microsoft, and open-source projects such as vLLM and SGLang; vulnerabilities in open-source compute framework…
-
LLMs are automating the human part of romance scams
Romance scams succeed because they feel human. New research shows that feeling no longer requires a person on the other side of the chat. The three stages of a romance-baiting … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/12/29/llms-romance-baiting-scams-study/

