Tag: LLM
-
Hackers Use LLM to Create React2Shell Malware, the Latest Example of AI-Generated Threat
Darktrace researchers caught a sample of malware that was created by AI and LLMs to exploit the high-profile React2Shell vulnerability, putting defenders on notice that the technology lets even lesser-skilled hackers create malicious code and build complex exploit frameworks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/02/hackers-use-llm-to-create-react2shell-malware-the-latest-example-of-ai-generated-threat/
-
North Korea’s UNC1069 Hammers Crypto Firms With AI
In moving away from traditional banks to focus on Web3 companies, the threat actor is leveraging LLMs, deepfakes, legitimate platforms, and ClickFix. First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/north-koreas-unc1069-hammers-crypto-firms
-
AI-Generated Malware Exploits React2Shell for Tiny Profit
LLM-Built Toolkit Hit 91 Hosts, Mined Funds in Monero. Security researchers detected artificial intelligence-generated malware exploiting the React2Shell vulnerability, allowing attackers with no coding expertise to build functional exploits. The attacker may have circumvented an AI model’s safeguards by framing the malicious coding request as homework. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/ai-generated-malware-exploits-react2shell-for-tiny-profit-a-30734
-
Microsoft boffins figured out how to break LLM safety guardrails with one simple prompt
Chaos-inciting fake news right this way First seen on theregister.com Jump to article: www.theregister.com/2026/02/09/microsoft_one_prompt_attack/
-
Single prompt breaks AI safety in 15 major language models
Fundamental changes to safety mechanisms: The research went beyond measuring attack success rates to examine how the technique alters models’ internal safety mechanisms. When Microsoft tested Gemma3-12B-It on 100 diverse prompts, asking the model to rate their harmfulness on a 0-9 scale, the unaligned version systematically assigned lower scores, with mean ratings dropping from 7.97…
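The evaluation described above can be sketched in a few lines: rate each prompt's harmfulness on a 0-9 scale before and after the jailbreak, then compare means. The ratings below are hypothetical placeholders, not Microsoft's data.

```python
def mean_rating(ratings):
    """Average a list of 0-9 harmfulness ratings."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings for the same five prompts, aligned vs. unaligned model.
aligned = [8, 9, 7, 8, 8]     # placeholder scores, not Microsoft's data
unaligned = [3, 4, 2, 3, 3]   # placeholder scores, not Microsoft's data

drop = mean_rating(aligned) - mean_rating(unaligned)
print(f"aligned mean={mean_rating(aligned):.2f}, "
      f"unaligned mean={mean_rating(unaligned):.2f}, drop={drop:.2f}")
```

A systematic drop in self-assessed harmfulness, as reported for Gemma3-12B-It, would show up here as a large positive `drop`.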
-
VoidLink Linux C2 Uses LLM-Generated Malware with Kernel-Level Stealth
VoidLink represents a concerning evolution in malware development: a sophisticated Linux command-and-control framework that shows clear signs of being built with AI assistance. This Linux malware operates as a modular implant designed for long-term access to compromised systems. It doesn’t discriminate between cloud providers, actively harvesting credentials from AWS, Google Cloud Platform, Microsoft Azure, Alibaba Cloud, and…
-
Anthropic’s DXT poses “critical RCE vulnerability” by running with full system privileges
Differences are ‘stark’: Principal AI Security Researcher at LayerX Security Roy Paz said that he tested DXT against Perplexity’s Comet, OpenAI’s Atlas, and Microsoft’s Copilot, and the differences were stark. “When you ask Copilot, Atlas, or Perplexity to use a tool, then it will use that tool for you. But Claude DXT allows tools to talk…
-
⚡ Weekly Recap: AI Skill Malware, 31Tbps DDoS, Notepad++ Hack, LLM Backdoors and More
Cyber threats are no longer coming from just malware or exploits. They’re showing up inside the tools, platforms, and ecosystems organizations use every day. As companies connect AI, cloud apps, developer tools, and communication systems, attackers are following those same paths. A clear pattern this week: attackers are abusing trust. Trusted updates, trusted marketplaces, trusted apps,…
-
Bug Hunting With LLMs: Expert Tool Seeks More ‘True’ Flaws
Open Source ‘Vulnhalla’ Promises ‘Up to 96% Reduction in False Positives’. Using large language models to automatically identify only real code vulnerabilities – not false positives – remains a holy grail. Eschewing a moonshot approach, a tool called Vulnhalla helps senior researchers use guided questioning with LLMs to more rapidly triage actual vulnerabilities. First seen…
-
Attackers Used AI to Breach an AWS Environment in 8 Minutes
Threat actors using LLMs needed only eight minutes to move from initial access to full admin privileges in an attack on a company’s AWS cloud environment in the latest example of cybercriminals expanding their use of AI in their operations, Sysdig researchers said. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/02/attackers-used-ai-to-breach-an-aws-environment-in-8-minutes/
-
AI as an AWS Attack Turbocharger
Criminal hackers have used AI to accelerate their attacks on AWS environments. Researchers at security vendor Sysdig uncovered an attack in which the attackers were able to fully compromise an AWS environment in under eight minutes. According to the threat specialists, the threat actors exploited a cloud misconfiguration with the help of large language models (LLMs) to compress the entire attack lifecycle…
-
Varonis Acquires AllTrue to Strengthen AI Security Capabilities
The deal underscores a broader industry shift as security vendors race to address the risks introduced by LLMs, copilots, and autonomous AI agents. The post Varonis Acquires AllTrue to Strengthen AI Security Capabilities appeared first on TechRepublic. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-varonis-buys-alltrue/
-
Microsoft develops a new scanner to detect hidden backdoors in LLMs
Effectiveness of the scanner: Microsoft said the scanner does not require retraining models or prior knowledge of backdoor behavior and operates using forward passes only, avoiding gradient calculations or backpropagation to keep computing costs low. The company also said it works with most causal, GPT-style language models and can be used across a wide range of…
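A conceptual sketch of the forward-pass-only idea (not Microsoft's published scanner): a backdoored model behaves sharply differently when a trigger is present, so candidate tokens can be screened by comparing outputs with and without them, using inference alone. The toy model and the `@@deploy` trigger below are invented for illustration.

```python
def toy_model(prompt):
    """Stand-in for a forward pass; '@@deploy' is a planted, hypothetical trigger."""
    return "MALICIOUS" if "@@deploy" in prompt else "benign"

def scan_for_trigger(model, base_prompt, candidate_tokens):
    """Flag candidate tokens that flip the model's output vs. the baseline.
    Only inference calls are made: no gradients, no retraining."""
    baseline = model(base_prompt)
    return [t for t in candidate_tokens
            if model(base_prompt + " " + t) != baseline]

suspects = scan_for_trigger(toy_model, "summarize this report",
                            ["please", "@@deploy", "now"])
print(suspects)  # → ['@@deploy']
```

A real scanner would compare logits or hidden activations rather than exact strings, but the cost profile is the same: one forward pass per candidate.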
-
Three clues that your LLM may be poisoned with a sleeper-agent back door
It’s a threat straight out of sci-fi, and fiendishly hard to detect First seen on theregister.com Jump to article: www.theregister.com/2026/02/05/llm_poisoned_how_to_tell/
-
NDSS 2025 Beyond Classification
Session 11B: Binary Analysis Authors, Creators & Presenters: Linxi Jiang (The Ohio State University), Xin Jin (The Ohio State University), Zhiqiang Lin (The Ohio State University) PAPER Beyond Classification: Inferring Function Names in Stripped Binaries via Domain Adapted LLMs Function name inference in stripped binaries is an important yet challenging task for many security applications,…
-
From credentials to cloud admin in 8 minutes: AI supercharges AWS attack chain
Tags: access, ai, attack, ciso, cloud, credentials, detection, framework, group, iam, least-privilege, LLM, monitoring, training
Lateral movement, LLMjacking, and GPU abuse: Once administrative access was obtained, the attacker moved laterally across 19 distinct AWS principals, assuming multiple roles and creating new users to spread activity across identities. This approach enabled persistence and complicated detection, the researchers noted. The attackers then shifted focus to Amazon Bedrock, enumerating available models and confirming that…
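One defensive takeaway can be sketched as a simple heuristic over AssumeRole audit records (an assumption for illustration, not Sysdig's actual detection logic): flag any principal that assumes many distinct roles within a short window, as in the lateral movement across 19 principals described above.

```python
from collections import defaultdict
from datetime import datetime, timedelta

ROLE_THRESHOLD = 5             # distinct roles in one window; tune per environment
WINDOW = timedelta(minutes=8)  # matches the 8-minute attack timeline reported

def flag_rapid_role_assumption(events):
    """events: dicts with 'time' (datetime), 'actor', 'role' taken from
    AssumeRole audit records. Returns the set of suspicious actor names."""
    by_actor = defaultdict(list)
    for ev in sorted(events, key=lambda e: e["time"]):
        by_actor[ev["actor"]].append(ev)
    flagged = set()
    for actor, evs in by_actor.items():
        for i, start in enumerate(evs):
            # distinct roles this actor assumed within WINDOW of this event
            roles = {e["role"] for e in evs[i:]
                     if e["time"] - start["time"] <= WINDOW}
            if len(roles) >= ROLE_THRESHOLD:
                flagged.add(actor)
                break
    return flagged
```

Spreading activity across identities, as the attackers did here, is exactly what a per-actor threshold like this can miss, which argues for correlating by source session or IP as well.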
-
Analysis of the Attack Surface in the Agent SKILL Architecture: Case Studies and Ecosystem Research
Background: As LLMs and intelligent agents expand from dialogue to task execution, the encapsulation, reuse and orchestration of LLM capabilities have become key issues. As a capability abstraction mechanism, SKILL encapsulates reasoning logic, tool calls and execution processes into reusable skill units, enabling the model to achieve stable, consistent and manageable operations when performing complex…
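The capability abstraction described above can be sketched as follows. This is a hypothetical illustration, not the actual Agent SKILL implementation: a skill unit bundles instructions, an allowlist of tool calls, and an execution step into one reusable object, and that allowlist is part of the attack surface the research examines.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Skill:
    """Hypothetical skill unit: reasoning logic plus permitted tool calls."""
    name: str
    instructions: str                                         # prompt given to the model
    tools: dict[str, Callable] = field(default_factory=dict)  # allowed tool calls

    def run(self, tool_name, *args):
        """Execute one tool call, refusing anything outside the allowlist."""
        if tool_name not in self.tools:
            raise PermissionError(f"{self.name!r} does not expose {tool_name!r}")
        return self.tools[tool_name](*args)

summarize = Skill("summarize", "Condense text to one line.",
                  {"word_count": lambda text: len(text.split())})
print(summarize.run("word_count", "a b c"))  # → 3
```

Case studies like the ones in this research typically probe what happens when a skill's instructions or tool bindings are attacker-supplied, which is why the allowlist check matters.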
-
NDSS 2025 PropertyGPT
Tags: blockchain, bug-bounty, conference, crypto, guide, Internet, LLM, network, oracle, strategy, tool, vulnerability, zero-day
Session 11A: Blockchain Security 2 Authors, Creators & Presenters: Ye Liu (Singapore Management University), Yue Xue (MetaTrust Labs), Daoyuan Wu (The Hong Kong University of Science and Technology), Yuqiang Sun (Nanyang Technological University), Yi Li (Nanyang Technological University), Miaolei Shi (MetaTrust Labs), Yang Liu (Nanyang Technological University) PAPER PropertyGPT: LLM-driven Formal Verification of Smart Contracts…

