Tag: LLM
-
APT28’s Toolkit: AI, Wi-Fi Intrusions, Cloud C2
APT28’s new “LameHug” malware uses LLMs to generate basic commands, a strikingly clumsy move from an otherwise advanced threat group. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/threats/apt28s-toolkit-ai-wi-fi-intrusions-cloud-c2/
-
LLMs are everywhere in your stack and every layer brings new risk
LLMs are moving deeper into enterprise products and workflows, and that shift is creating new pressure on security leaders. A new guide from DryRun Security outlines how these … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/12/10/enterprise-llm-security-risks-analysis/
-
Gemini for Chrome gets a second AI agent to watch over it
Google’s two-model defense: To address these risks, Google’s solution splits the work between two AI models. The main Gemini model reads web content and decides what actions to take. The user alignment critic sees only metadata about proposed actions, not the web content that might contain malicious instructions.”This component is architected to see only metadata…
-
Malicious Models im Untergrund – Unit-42 zeigt wachsenden Schwarzmarkt für Dark-LLMs
First seen on security-insider.de Jump to article: www.security-insider.de/unit-42-zeigt-wachsenden-schwarzmarkt-fuer-dark-llms-a-d2a5e7bd2dadf1936b14944af8c3670b/
-
Hacking as a Prompt: Malicious LLMs Find Users
WormGPT 4 Sells for $50 Monthly, While KawaiiGPT Goes Open Source. The cybercrime-as-a-service model has a new product line, with malicious large language models built without ethical guardrails selling on Telegram for $50 monthly or distributed free on GitHub. Others groups are taking the open-source route. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/hacking-as-prompt-malicious-llms-find-users-a-30224
-
UK cyber agency warns LLMs will always be vulnerable to prompt injection
The comments echo many in the research community who have said the flaw is an inherent trait of generative AI technology. First seen on cyberscoop.com Jump to article: cyberscoop.com/uk-warns-ai-prompt-injection-unfixable-security-flaw/
-
How Agentic BAS AI Turns Threat Headlines Into Defense Strategies
Picus Security explains why relying on LLM-generated attack scripts is risky and how an agentic approach maps real threat intel to safe, validated TTPs. Their breakdown shows how teams can turn headline threats into reliable defense checks without unsafe automation. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/how-agentic-bas-ai-turns-threat-headlines-into-defense-strategies/
-
KI schafft neue Sicherheitsrisiken für OT-Netzwerke
Sicherheitsbehörden sehen in der vermehrten Nutzung von KI eine Gefahr für die Sicherheit von OT-Systemen.Die Sicherheit der Betriebstechnik (Operational Technology OT) in kritischen Infrastrukturen ist seit Jahren ein immer wiederkehrendes Thema. Nach Ansicht von Sicherheitsorganisationen könnte die vermehrte Nutzung von KI in der OT die Lage noch verschlimmern.Die US-Cybersicherheitsbehörde CISA hat deshalb vor kurzem gemeinsam…
-
LLM-Sicherheit – Cisco-Studie: Multi-Turn-Angriffe knacken Open-Weight-LLMs
First seen on security-insider.de Jump to article: www.security-insider.de/cisco-studie-multi-turn-angriffe-knacken-open-weight-llms-a-a206993ac451107393a3e25f98163544/
-
MIT-Studie: ChatGPT- bzw. LLM-Nutzung und Gehirn-Aktivitäten
Es ist eine Studie des Massachussets Institutes of Technology (MIT) zur Frage “wie beeinflusst die LLM-Nutzung unsere Gehirn-Aktivitäten, die im Sommer erschienen ist und Debatten auslöste. Das Ergebnis in einem Satz: “ChatGPT & Co. machen das Gehirn fauler und lassen … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/12/07/mit-studie-chatgpt-bzw-llm-nutzung-und-gehirn-aktivitaeten/
-
AI creates new security risks for OT networks, warns NSA
Tags: ai, cisa, compliance, control, cyber, data, data-breach, government, healthcare, infrastructure, injection, intelligence, LLM, network, risk, technology, trainingPrinciples for the Secure Integration of Artificial Intelligence in Operational Technology, authored by the NSA in conjunction with the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) and a global alliance of national security agencies.While the use of AI in critical infrastructure OT is in its early days, the guidance reads like an attempt…
-
AI creates new security risks for OT networks, warns NSA
Tags: ai, cisa, compliance, control, cyber, data, data-breach, government, healthcare, infrastructure, injection, intelligence, LLM, network, risk, technology, trainingPrinciples for the Secure Integration of Artificial Intelligence in Operational Technology, authored by the NSA in conjunction with the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) and a global alliance of national security agencies.While the use of AI in critical infrastructure OT is in its early days, the guidance reads like an attempt…
-
OAuth Isn’t Enough For Agents
OAuth is a broadly accepted standard. It’s used all over the internet. But as the usage of LLM agents continues to expand, OAuth isn’t going to be enough. In fact, relying on OAuth will be dangerous. We won’t be able to set permissions at an appropriate granularity, giving LLMs access to far too much. More..…
-
OAuth Isn’t Enough For Agents
OAuth is a broadly accepted standard. It’s used all over the internet. But as the usage of LLM agents continues to expand, OAuth isn’t going to be enough. In fact, relying on OAuth will be dangerous. We won’t be able to set permissions at an appropriate granularity, giving LLMs access to far too much. More..…
-
Key questions CISOs must ask before adopting AI-enabled cyber solutions
Questions to ask vendors about their AI security offerings: There are several areas where CISOs will want to focus their attention when considering AI-powered cyber solutions, including the following:Shadow AI: Uncovering and addressing shadow AI throughout the organization is a key issue for security leaders today. But so too is ensuring that sanctioned AI-enabled solutions…
-
Key questions CISOs must ask before adopting AI-enabled cyber solutions
Questions to ask vendors about their AI security offerings: There are several areas where CISOs will want to focus their attention when considering AI-powered cyber solutions, including the following:Shadow AI: Uncovering and addressing shadow AI throughout the organization is a key issue for security leaders today. But so too is ensuring that sanctioned AI-enabled solutions…
-
Key questions CISOs must ask before adopting AI-enabled cyber solutions
Questions to ask vendors about their AI security offerings: There are several areas where CISOs will want to focus their attention when considering AI-powered cyber solutions, including the following:Shadow AI: Uncovering and addressing shadow AI throughout the organization is a key issue for security leaders today. But so too is ensuring that sanctioned AI-enabled solutions…
-
Malicious LLMs empower inexperienced hackers with advanced tools
Unrestricted large language models (LLMs) like WormGPT 4 and KawaiiGPT are improving their capabilities to generate malicious code, delivering functional scripts for ransomware encryptors and lateral movement. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/malicious-llms-empower-inexperienced-hackers-with-advanced-tools/
-
Von LLM generierte Malware wird immer besser
Forscher tricksen Chatbots aus, stoßen aber auf unzuverlässige Ergebnisse.Cyberkriminelle versuchen bereits seit geraumer Zeit, mit Hilfe von Large Language Models (LLM) ihre dunklen Machenschaften zu automatisieren. Aber können sie schon bösartigen Code generieren, der ‘marktreif” und bereit für den operativen Einsatz ist? Das wollten die Forschenden von Netskope Threat Labs herausfinden, indem sie Chatbots dazu…
-
How Malware Authors Are Incorporating LLMs to Evade Detection
Cyberattackers are integrating large language models (LLMs) into the malware, running prompts at runtime to evade detection and augment their code on demand. First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/malware-authors-incorporate-llms-evade-detection
-
How Malware Authors Are Incorporating LLMs to Evade Detection
Cyberattackers are integrating large language models (LLMs) into the malware, running prompts at runtime to evade detection and augment their code on demand. First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/malware-authors-incorporate-llms-evade-detection
-
‘Dark LLMs’ Aid Petty Criminals, But Underwhelm Technically
As in the wider world, AI is not quite living up to the hype in the cyber underground. But it’s definitely helping low-level cybercriminals do competent work. First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/dark-llms-petty-criminals
-
‘Dark LLMs’ Aid Petty Criminals, But Underwhelm Technically
As in the wider world, AI is not quite living up to the hype in the cyber underground. But it’s definitely helping low-level cybercriminals do competent work. First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/dark-llms-petty-criminals
-
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/11/26/deepteam-open-source-llm-red-teaming-framework/
-
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/11/26/deepteam-open-source-llm-red-teaming-framework/
-
Radware Adds Firewall for LLMs to Security Portfolio
Radware has developed a firewall for large language models (LLMs) that ensures governance and security policies are enforced in real time. Provided as an add-on to the company’s Cloud Application Protection Services, Radware LLM Firewall addresses the top 10 risks and mitigations for LLMs and generative artificial intelligence (AI) applications defined by the OWASP GenAI..…

