Tag: LLM
-
Attackers Used AI to Breach an AWS Environment in 8 Minutes
Threat actors using LLMs needed only eight minutes to move from initial access to full admin privileges in an attack on a company’s AWS cloud environment in the latest example of cybercriminals expanding their use of AI in their operations, Sysdig researchers said. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/02/attackers-used-ai-to-breach-an-aws-environment-in-8-minutes/
-
KI als AWS-Angriffsturbo
Kriminelle Hacker haben ihre Angriffe auf AWS-Umgebungen mit KI beschleunigt.Forscher des Sicherheitsanbieters Sysdig haben einen Angriff aufgedeckt, bei dem kriminelle Angreifer eine AWS-Umgebung in weniger als acht Minuten vollständig kompromittieren konnten. Laut den Threat-Spezialisten nutzten die Bedrohungsakteure dabei eine Cloud-Fehlkonfiguration mit der Hilfe von Large Language Models (LLMs) aus, um den gesamten Angriffs-Lebenszyklus zu komprimieren…
-
Varonis Acquires AllTrue to Strengthen AI Security Capabilities
The deal underscores a broader industry shift as security vendors race to address the risks introduced by LLMs, copilots, and autonomous AI agents. The post Varonis Acquires AllTrue to Strengthen AI Security Capabilities appeared first on TechRepublic. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-varonis-buys-alltrue/
-
Microsoft develops a new scanner to detect hidden backdoors in LLMs
Effectiveness of the scanner: Microsoft said the scanner does not require retraining models or prior knowledge of backdoor behavior and operates using forward passes only, avoiding gradient calculations or backpropagation to keep computing costs low.The company also said it works with most causal, GPT-style language models and can be used across a wide range of…
-
Three clues that your LLM may be poisoned with a sleeper-agent back door
It’s a threat straight out of sci-fi, and fiendishly hard to detect First seen on theregister.com Jump to article: www.theregister.com/2026/02/05/llm_poisoned_how_to_tell/
-
NDSS 2025 Beyond Classification
Session 11B: Binary Analysis Authors, Creators & Presenters: Linxi Jiang (The Ohio State University), Xin Jin (The Ohio State University), Zhiqiang Lin (The Ohio State University) PAPER Beyond Classification: Inferring Function Names in Stripped Binaries via Domain Adapted LLMs Function name inference in stripped binaries is an important yet challenging task for many security applications,…
-
From credentials to cloud admin in 8 minutes: AI supercharges AWS attack chain
Tags: access, ai, attack, ciso, cloud, credentials, detection, framework, group, iam, least-privilege, LLM, monitoring, trainingLateral movement, LLMjacking, and GPU abuse: Once administrative access was obtained, the attacker moved laterally across 19 distinct AWS principals, assuming multiple roles and creating new users to spread activity across identities. This approach enabled persistence and complicated detection, the researchers noted.The attackers then shifted focus to Amazon Bedrock, enumerating available models and confirming that…
-
Analysis of the Attack Surface in the Agent SKILL Architecture: Case Studies and Ecosystem Research
Background As LLMs and intelligent agents expand from dialogue to task execution, the encapsulation, reuse and orchestration of LLM capabilities have become key issues. As a capability abstraction mechanism, SKILL encapsulates reasoning logic, tool calls and execution processes into reusable skill units, enabling the model to achieve stable, consistent and manageable operations when performing complex…The…
-
NDSS 2025 PropertyGPT
Tags: blockchain, bug-bounty, conference, crypto, guide, Internet, LLM, network, oracle, strategy, tool, vulnerability, zero-daySession 11A: Blockchain Security 2 Authors, Creators & Presenters: Ye Liu (Singapore Management University), Yue Xue (MetaTrust Labs), Daoyuan Wu (The Hong Kong University of Science and Technology), Yuqiang Sun (Nanyang Technological University), Yi Li (Nanyang Technological University), Miaolei Shi (MetaTrust Labs), Yang Liu (Nanyang Technological University) PAPER PropertyGPT: LLM-driven Formal Verification of Smart Contracts…
-
Roughly half of employees are using unsanctioned AI tools, and enterprise leaders are major culprits
51% have connected AI tools to work systems or apps without the approval or knowledge of IT;63% believe it’s acceptable to use AI when there is no corporate-approved option or IT oversight;60% say speed is worth the security risk;21% think employers will simply “turn a blind eye” as long as they’re getting their work done.And…
-
NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance
SANTA CLARA, Calif., Jan 29, 2026 Security is a prerequisite for the application and development of LLM technology. Only by addressing security risks when integrating LLMs can businesses ensure healthy and sustainable growth. NSFOCUS first proposed the AI LLM Risk Threat Matrix in 2024. The Matrix addresses security from multiple perspectives: foundational security, data security,…The…
-
Crooks are hijacking and reselling AI infrastructure: Report
Tags: access, ai, api, attack, authentication, business, cloud, communications, control, credentials, cybersecurity, data, data-breach, endpoint, exploit, firewall, group, infosec, infrastructure, intelligence, Internet, LLM, malicious, marketplace, risk, service, skills, technology, theft, threat, training, vulnerabilityexposed endpoints on default ports of common LLM inference services;unauthenticated API access without proper access controls;development/staging environments with public IP addresses;MCP servers connecting LLMs to file systems, databases and internal APIs.Common misconfigurations leveraged by these threat actors include:Ollama running on port 11434 without authentication;OpenAI-compatible APIs on port 8000 exposed to the internet;MCP servers accessible without…
-
Multi-Agent-Systeme werden zum neuen Betriebsmodell für Unternehmen
Der Databricks-Bericht ‘State of AI Agents” zeigt: Model-Flexibility (oder Flexible Modellauswahl) ist die neue KI-Strategie, wobei 78 Prozent der Unternehmen zwei oder mehr LLM-Modellfamilien verwenden. Der Mehrwert von KI-Agenten wird sich 2026 unter Beweis stellen. Beobachtungen in der gesamten Branche zeigen, dass KI sich bereits zu einem festen Bestandteil kritischer Arbeitsabläufe entwickelt hat. Einer der…
-
AI & the Death of Accuracy: What It Means for Zero-Trust
AI model collapse, where LLMs over time train on more and more AI-generated data and become degraded as a result, can introduce inaccuracies, promulgate malicious activity, and impact PII protections. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/ai-death-accuracy-zero-trust
-
Anthropic writes 23,000-word ‘constitution’ for Claude, suggests it may have feelings
Tags: LLMDescribes its LLMs as an ‘entity’ that probably has something like emotions First seen on theregister.com Jump to article: www.theregister.com/2026/01/22/anthropic_claude_constitution/
-
Overrun with AI slop, cURL scraps bug bounties to ensure intact mental health
The onslaught includes LLMs finding bogus vulnerabilities and code that won’t compile. First seen on arstechnica.com Jump to article: arstechnica.com/security/2026/01/overrun-with-ai-slop-curl-scraps-bug-bounties-to-ensure-intact-mental-health/
-
Securing Generative AI: A Technical Guide to Protecting Your LLM Infrastructure
The GenAI Gold Rush: Why Network infrastructure Security Is Paramount Generative AI (GenAI) and Large Language Models (LLMs) are rapidly reshaping enterprise IT, powering everything from developer copilots and customer support automation to advanced analytics and decision-making. As adoption accelerates, GenAI is quickly becoming embedded in business”‘critical workflows. However, this rapid innovation creates a double”‘edged……
-
Three vulnerabilities in Anthropic Git MCP Server could let attackers tamper with LLMs
mcp-server-git versions prior to 2025-12.18.The three vulnerabilities are·CVE-2025-68143, an unrestricted git_init.·CVE-2025-68145, a path validation bypass.·CVE-2025-68144, an argument injection in git_diff.Unlike other vulnerabilities in MCP servers that required specific configurations, these work on any configuration of Anthropic’s official server, out of the box, Cyata says.Model Context Protocol (MCP) is an open standard introduced by Anthropic in 2024 to…
-
Flaws in Chainlit AI dev framework expose servers to compromise
/proc/self/environ file is used to store environment variables, and these can contain API keys, credentials, internal file paths, database paths, tokens for AWS and other cloud services, and even CHAINLIT_AUTH_SECRET, a secret that’s used to sign authentication tokens when authentication is enabled.On top of that, if LangChain is used as the orchestration layer behind Chainlit…
-
The LimaCharlie Manifesto: Security for an Autonomous Future
Tags: access, advisory, ai, api, automation, cloud, control, cybersecurity, data, infrastructure, LLM, technology, threat, toolCybersecurity is standing at an inflection point. The proliferation of agentic AI and LLMs does not signal a gradual shift, but a radical transformation. The security tools, assumptions, and architectures of the last twenty years can no longer keep pace with the challenges and threats of today. AI changed the rules. Attackers have quickly adapted. …
-
When Language Becomes the Attack Surface: Inside the Google Gemini Calendar Exploit
Tags: ai, attack, cybersecurity, data-breach, exploit, flaw, google, LLM, malicious, software, vulnerabilitySecurity teams have spent decades hardening software against malicious input, yet a recent vulnerability involving Google Gemini demonstrates how those assumptions begin to fracture when language itself becomes executable. The issue, disclosed by cybersecurity researchers at Miggo Security, exposed a subtle but powerful flaw in how natural language interfaces like AI LLMs interact with privileged…

