Tag: ai
-
SentinelOne autonomous detection blocks trojaned LiteLLM triggered by Claude Code
SentinelOne’s AI-based security detected and blocked a supply chain attack involving a compromised LiteLLM package. The company’s macOS agent stopped a malicious process chain triggered by Claude Code after the tool unknowingly installed the trojaned package, blocking the malicious code in seconds and without human intervention. The…
-
AI Startup Mercor Hit by Supply Chain Attack Linked to LiteLLM
Tags: ai, attack, breach, cyberattack, data, data-breach, malicious, open-source, risk, software, startup, supply-chain
A recent Mercor cyberattack has brought renewed attention to the risks associated with open-source software dependencies, after the AI recruiting startup confirmed it was impacted by a broader supply chain compromise. The Mercor data breach, which is still under investigation, has been linked to a malicious incident involving the widely used LiteLLM project. First seen…
-
Thales Digital Trust Index Shows AI’s Trust Boundary: Assistance Yes – Autonomy No
Tags: ai
93 percent of IT leaders deploy GenAI, yet only 23 percent of consumers trust companies that use AI to process their data. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/digital-trust-index-von-thales-zeigt-die-vertrauensgrenze-der-ki-assistenz-ja-autonomie-nein/a44467/
-
When AI Becomes the Punchline
Tags: ai
An April Fools’ Reflection After RSAC The RSAC Reality Check We just got back from RSAC, and if you spent any time on the floor, one thing was impossible to… First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/04/when-ai-becomes-the-punchline/
-
CrewAI Hit by Critical Vulnerabilities Enabling Sandbox Escape and Host Compromise
CrewAI, a prominent tool used by developers to orchestrate multi-agent AI systems, is currently vulnerable to a chain of critical security flaws. By using direct or indirect prompt injection, attackers can manipulate AI agents to escape secure sandboxes and compromise the host machine. The primary threat stems from insecure fallback behaviors and configuration settings within…
-
Google Drive ransomware detection now on by default for paying users
Google announced that the AI-powered Google Drive ransomware detection feature has reached general availability and is now enabled by default for all paying users. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/google-drive-ransomware-detection-now-on-by-default-for-paying-users/
-
Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms
Anthropic on Tuesday confirmed that internal code for its popular artificial intelligence (AI) coding assistant, Claude Code, had been inadvertently released due to a human error. “No sensitive customer data or credentials were involved or exposed,” an Anthropic spokesperson said in a statement shared with CNBC News. “This was a release packaging issue caused by human…
-
Google Cloud’s Vertex AI Hit by Vulnerability Enabling Sensitive Data Access
Artificial intelligence agents are transforming enterprise workflows, but they also introduce dangerous new attack vectors. Security researchers from Palo Alto Networks’ Unit 42 recently uncovered a significant vulnerability in Google Cloud Platform’s (GCP) Vertex AI Agent Engine. By exploiting overly broad default permissions, attackers can deploy a malicious “double agent” to secretly exfiltrate sensitive data…
-
Financial groups lay out a plan to fight AI identity attacks
Generative AI tools have brought the cost of deepfake production low enough that criminals and state-sponsored actors now use them routinely against financial institutions. A … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2026/04/01/fight-ai-identity-fraud/
-
Low Instead of Critical – AI Patching Service Fatally Misclassifies a Vulnerability
First seen on security-insider.de Jump to article: www.security-insider.de/llm-falsche-priorisierung-rce-ticket-low-critical-a-c5de65fa74f6df41233b76be4ed05e85/
-
Granular Policy Enforcement Engines for Post-Quantum MCP Governance
Learn how to secure Model Context Protocol (MCP) deployments using granular policy engines and post-quantum cryptography to prevent AI tool poisoning and puppet attacks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/03/granular-policy-enforcement-engines-for-post-quantum-mcp-governance/
-
Why be optimistic about the future of Agentic AI?
How Do Non-Human Identities Revolutionize Cloud Security? Have you ever wondered about the hidden complexities lurking behind cloud security? Organizations are increasingly reliant on cloud-based solutions, and one of the most innovative strategies to bolster security is through effective management of Non-Human Identities (NHIs). These NHIs are crucial players in cybersecurity, particularly when dealing with…
-
What makes Agentic AI a powerful ally in cybersecurity?
How Do Non-Human Identities Elevate Cybersecurity Strategies? Evolving cybersecurity demands innovative approaches to safeguard digital assets, and Non-Human Identities (NHIs) are at the forefront of this transformation. But what exactly are NHIs, and how do they fit into the broader context of cybersecurity, particularly in cloud environments? NHIs represent machine identities used within cybersecurity frameworks…
-
Anthropic employee error exposes Claude Code source
Tags: access, ai, computer, control, credentials, cybercrime, data, data-breach, malicious, open-source, service, technology, tool, vulnerability
CSO, “no sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again.” But it wasn’t the first time this had happened; according to Fortune and other news sources, the same thing happened last…
-
Mercor says it was hit by cyberattack tied to compromise of open-source LiteLLM project
The AI recruiting startup confirmed a security incident after an extortion hacking crew took credit for stealing data from the company’s systems. First seen on techcrunch.com Jump to article: techcrunch.com/2026/03/31/mercor-says-it-was-hit-by-cyberattack-tied-to-compromise-of-open-source-litellm-project/
-
Asking AI for personal advice is a bad idea, Stanford study shows
AI chatbots, including ChatGPT, Claude, and Gemini, were all too willing to validate and hype up their users, a new Stanford study showed. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/03/asking-ai-for-personal-advice-is-a-bad-idea-stanford-study-shows/
-
New North Korean AI Hiring Scheme Targets US Companies
North Korean operatives are using AI-generated resumes and stolen identities to infiltrate US companies, turning hiring pipelines into a new attack vector. The post New North Korean AI Hiring Scheme Targets US Companies appeared first on TechRepublic. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-north-korean-ai-hiring-scheme-us-companies/
-
AI SOC Firm Tenex Raises $250M to Drive Faster Response
Founder and CEO Eric Foster Wants to Reduce Dwell Time and Scale Engineering Teams. Tenex plans to use its $250 million Series B funding to expand its AI-driven SOC platform and hire hundreds of engineers. The company aims to improve alert coverage, automate response and reduce attacker dwell time while maintaining human oversight for complex…
-
Cloud Security Alliance Wins 2026 SC Award for AI Security Certification
CSA won a 2026 SC Award for its AI security certification, reflecting rising demand for AI risk and governance training. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/news/cloud-security-alliance-wins-2026-sc-award-for-ai-security-certification/
-
Claude AI finds Vim, Emacs RCE bugs that trigger on file open
Vulnerabilities in the Vim and GNU Emacs text editors, discovered using simple prompts with the Claude assistant, allow remote code execution simply by opening a file. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/claude-ai-finds-vim-emacs-rce-bugs-that-trigger-on-file-open/
-
Bridging the Gap: CSA’s AI Security Initiatives at RSAC
Alan Shimel sits down with longtime friend and cybersecurity veteran Rich Mogull to discuss his new role as chief analyst at the Cloud Security Alliance. The conversation covers a lot of ground, from the rapid rise of agentic AI to how CSA is working to bridge the gap between high-level security frameworks and the practitioners..…
-
When AI Writes the Code, What Changes for Security?
Tags: ai
First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/03/when-ai-writes-the-code-what-changes-for-security/
-
Attackers trojanize Axios HTTP library in highest-impact npm supply chain attack
Tags: ai, attack, breach, cloud, control, credentials, crypto, github, incident response, linux, LLM, macOS, malicious, malware, monitoring, open-source, openai, powershell, pypi, rat, spam, supply-chain, tool, windows
…postinstall hook that would execute a dropper script when it was pulled in by a different package as a dependency. Shortly after midnight UTC on March 31, a new version of the Axios package, axios@1.14.1, was published on npm, followed by axios@0.30.4 39 minutes later. Both listed plain-crypto-js@4.2.1 as a dependency in their package.json files, but…
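The mechanism described above relies on npm lifecycle scripts: a package can declare a `postinstall` command in its package.json that npm runs automatically at install time, so a trojaned dependency executes code the moment it is pulled in. Below is a minimal, hypothetical sketch (the manifest contents and `dropper.js` name are illustrative, not the actual payload) of how such a hook is declared and how a simple audit could flag install-time scripts in a manifest:

```python
import json

# npm runs lifecycle scripts declared under "scripts" in package.json at
# install time; preinstall/install/postinstall all execute automatically
# when the package is installed as a dependency.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def find_install_hooks(package_json_text: str) -> dict:
    """Return any install-time lifecycle scripts declared in a package.json."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

# Hypothetical manifest resembling the reported pattern (contents illustrative).
example = json.dumps({
    "name": "plain-crypto-js",
    "version": "4.2.1",
    "scripts": {"postinstall": "node dropper.js", "test": "jest"},
})

print(find_install_hooks(example))  # flags the postinstall hook
```

Scanning the manifests under node_modules for such hooks (or installing with npm's `--ignore-scripts` flag) is one common mitigation for this class of attack.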
-
Agentic AI Uncertainty Dominates Dialog at RSAC Conference
A Disorienting Future: Rapid Pace of Change and AI Agents in the Hands of Attackers Reflecting the current state of cybersecurity, uncertainty dominated at this year’s annual RSAC Conference in San Francisco, as advances in artificial intelligence, including agentic artificial intelligence, now pose risks experts never saw coming. It’s a disorienting state of affairs for…
-
Google’s Vertex AI Has an Over-Privileged Problem
Palo Alto researchers show how attackers could exploit AI agents on Google’s Vertex AI to steal data and break into restricted cloud infrastructure. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/googles-vertex-ai-over-privilege-problem
-
Cybersecurity risks shape AI adoption, but investment accelerates nonetheless
Companies see cybersecurity as a top investment priority within their AI budgets, according to KPMG. First seen on cybersecuritydive.com Jump to article: www.cybersecuritydive.com/news/ai-cybersecurity-concerns-adoption-agentic-investment/816262/
-
Uncertainty Dominates Discussions at RSAC Conference 2026
Rapid Pace of Change – Now Featuring Agentic AI – Poses Struggle and Opportunity Reflecting the current state of cybersecurity, uncertainty dominated at this year’s annual RSAC Conference in San Francisco, as advances in artificial intelligence, including agentic AI, now pose risks experts never saw coming. This is a disorienting state of affairs for all…
-
RSAC 2026 News: RSA Security and Microsoft Advance Identity Security for AI Era
I sat down with RSA Security at RSAC 2026 to discuss identity security. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/cybersecurity/rsac-2026-news-rsa-security-and-microsoft-advance-identity-security-for-ai-era/

