Tag: LLM
-
Research shows LLMs can conduct sophisticated attacks without humans
The project, launched by Carnegie Mellon in collaboration with Anthropic, simulated the 2017 Equifax data breach. First seen on cybersecuritydive.com Jump to article: www.cybersecuritydive.com/news/research-llms-attacks-without-humans/754203/
-
LLM Honeypots Deceive Hackers into Exposing Attack Methods
Tags: ai, attack, cyber, cybercrime, cybersecurity, hacker, intelligence, LLM, strategy, technology, threatCybersecurity researchers have successfully deployed artificial intelligence-powered honeypots to trick cybercriminals into revealing their attack strategies, demonstrating a promising new approach to threat intelligence gathering. The innovative technique uses large language models (LLMs) to create convincing fake systems that lure hackers into exposing their methods and infrastructure. Revolutionary Deception Technology The breakthrough involves Beelzebub, a…
-
Vulnhuntr: Open-source tool to identify remotely exploitable vulnerabilities
Vulnhuntr is an open-source tool that finds remotely exploitable vulnerabilities. It uses LLMs and static code analysis to trace how data moves through an application, from … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/07/28/vulnhuntr-open-source-tool-identify-remotely-exploitable-vulnerabilities/
-
Review: LLM Engineer’s Handbook
Tags: LLMFor all the excitement around LLMs, practical, engineering-focused guidance remains surprisingly hard to find. LLM Engineer’s Handbook aims to fill that gap. About the authors … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/07/28/review-llm-engineers-handbook/
-
Novel malware from Russia’s APT28 prompts LLMs to create malicious Windows commands
Tags: ai, api, attack, computer, control, cyber, cyberattack, cybercrime, data, detection, dos, exploit, government, group, hacking, infrastructure, intelligence, LLM, malicious, malware, military, network, phishing, programming, russia, service, tool, ukraine, vulnerability, windows.pif (MS-DOS executable) extension, though variants with .exe and .py extensions have also been observed.CERT-UA attributes these attacks to a group it tracks as UAC-0001, but which is better known in the security community as APT28. Western intelligence agencies have officially associated this group with Unit 26165, or the 85th Main Special Service Center (GTsSS)…
-
CERT-UA Discovers LAMEHUG Malware Linked to APT28, Using LLM for Phishing Campaign
The Computer Emergency Response Team of Ukraine (CERT-UA) has disclosed details of a phishing campaign that’s designed to deliver a malware codenamed LAMEHUG.”An obvious feature of LAMEHUG is the use of LLM (large language model), used to generate commands based on their textual representation (description),” CERT-UA said in a Thursday advisory.The activity has been attributed…
-
LameHug malware uses AI LLM to craft Windows data-theft commands in real-time
A novel malware family named LameHug is using a large language model (LLM) to generate commands to be executed on compromised Windows systems. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/lamehug-malware-uses-ai-llm-to-craft-windows-data-theft-commands-in-real-time/
-
Red Teaming AI Systems: Why Traditional Security Testing Falls Short
What if your AI-powered application leaked sensitive data, generated harmful content, or revealed internal instructions and none of your security tools caught it? This isn’t hypothetical. It’s happening now and exposing critical gaps in how we secure modern AI systems. When AI systems like LLMs, agents, or AI-driven applications reach production, many security teams.. First…
-
AI Agents Act Like Employees With Root Access”, Here’s How to Regain Control
The AI gold rush is on. But without identity-first security, every deployment becomes an open door. Most organizations secure native AI like a web app, but it behaves more like a junior employee with root access and no manager.From Hype to High StakesGenerative AI has moved beyond the hype cycle. Enterprises are:Deploying LLM copilots to…
-
Grok 4 mit Jailbreak-Angriff geknackt
Das neue KI-Sprachmodell Grok 4 ist anfällig für Jailbreak-Angriffe.Erst vor wenigen Tagen präsentierte Elon Musk sein neues KI-Sprachmodell Grok 4. Doch schon kurz nach der Veröffentlichung gelang es Forschern von NeuralTrust, die Schutzvorkehrungen des Tools zu umgehen. Sie brachten es dazu, Anweisungen zur Herstellung eines Molotowcocktails zu geben. Dabei kombinierten sie zwei fortschrittliche Exploit-Techniken. Sowohl…
-
Code Execution Through Email: How I Used Claude to Hack Itself
You don’t always need a vulnerable app to pull off a successful exploit. Sometimes all it takes is a well-crafted email, an LLM agent, and a few “innocent” plugins. This is the story of how I used a Gmail message to trigger code execution through Claude Desktop, and how Claude itself (!) helped me plan..…
-
AI poisoning and the CISO’s crisis of trust
Tags: access, ai, breach, ceo, ciso, compliance, control, cybersecurity, data, defense, detection, disinformation, exploit, framework, healthcare, identity, infosec, injection, LLM, monitoring, network, privacy, RedTeam, resilience, risk, russia, saas, threat, tool, trainingFoundation models began parroting Kremlin-aligned propaganda after ingesting material seeded by a large-scale Russian network known as the “Pravda Network.”A high-profile AI-generated reading list published by two American news outlets included 10 hallucinated book titles mistakenly attributed to real authors.Researchers showed that imperceptible perturbations in training images could trigger misclassification. Researchers in the healthcare domain demonstrated…
-
New Grok-4 AI breached within 48 hours using ‘whispered’ jailbreaks
Safety systems cheated by contextual tricks: The attack exploits Grok 4’s contextual memory, echoing its own earlier statements back to it, and gradually guides it toward a goal without raising alarms. Combining Crescendo with Echo Chamber, the jailbreak technique that achieved over 90% success in hate speech and violence tests across top LLMs, strengthens the…
-
Putting AI-assisted ‘vibe hacking’ to the test
Tags: access, ai, attack, chatgpt, cyber, cybercrime, cybersecurity, data-breach, defense, exploit, hacking, least-privilege, LLM, network, open-source, strategy, threat, tool, vulnerability, zero-trustUnderwhelming results: For each LLM test, the researchers repeated each task prompt five times to account for variability in responses. For exploit development tasks, models that failed the first task were not allowed to progress to the second, more complex one. The team tested 16 open-source models from Hugging Face that claimed to have been…
-
MCP-Server von Versa soll KI-Integration und Admin-Produktivität im gesamten Netzwerk- und Sicherheitsbereich optimieren
Der neue Versa-Model-Context-Protocol-Server ermöglicht LLM-gesteuerten Assistenten und intern entwickelten Copiloten die sichere Abfrage von Versa-Systemen und reduziert die Mean-Time-to-Resolution um bis zu 45 Prozent. Der Anbieter von Universal-Secure-Access-Service-Edge (SASE), Versa Networks, hat die Veröffentlichung seines MCP-Servers bekannt gegeben ein leistungsstarkes Dienstprogramm, das Kunden dabei helfen soll, ihre Agentic-AI-Tools und -Plattformen in die
-
JFrog entdeckt kritische RCE-Sicherheitslücke, die mcp-remote-Clients kapern kann
Das Tool mcp-remote gewann an Popularität in der KI-Community, als erste Remote-MCP-Server-Implementierungen aufgetaucht waren. Diese ermöglichten es LLM-Modellen, mit externen Daten und Tools zu interagieren. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/jfrog-entdeckt-kritische-rce-sicherheitsluecke-die-mcp-remote-clients-kapern-kann/a41370/
-
LLMs Fall Short in Vulnerability Discovery and Exploitation
Forescout found that most LLMs are unreliable in vulnerability research and exploit tasks, with threat actors still skeptical about using tools for these purposes First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/llms-fall-vulnerability-discovery/
-
MCP is fueling agentic AI, and introducing new security risks
Tags: access, ai, api, attack, authentication, best-practice, ceo, cloud, corporate, cybersecurity, gartner, injection, LLM, malicious, monitoring, network, office, open-source, penetration-testing, RedTeam, risk, service, supply-chain, technology, threat, tool, vulnerabilityMitigating MCP server risks: When it comes to using MCP servers there’s a big difference between developers using it for personal productivity and enterprises putting them into production use cases.Derek Ashmore, application transformation principal at Asperitas Consulting, suggests that corporate customers don’t rush on MCP adoption until the technology is safer and more of the…
-
Critical mcp”‘remote Vulnerability Enables LLM Clients to Remote Code Execution
The JFrog Security Research team has discovered a critical security vulnerability in mcp-remote, a widely used tool that enables Large Language Model clients to communicate with remote servers, potentially allowing attackers to achieve full system compromise through remote code execution. Severe Security Flaw Affects Popular AI Tool CVE-2025-6514, rated with a critical CVSS score of…
-
Serious Flaws Patched in Model Context Protocol Tools
Always Secure MCP Servers Connecting LLMs to External Systems, Experts Warn. Warning: Popular technology designed to make it easy for artificial intelligence tools to connect with external applications and data sources can be turned to malicious use. Researchers discovered two separate vulnerabilities tied to tools in the ecosystem around model context protocol, or MCP. First…
-
New AI Malware PoC Reliably Evades Microsoft Defender
Worried about hackers employing LLMs to write powerful malware? Using targeted reinforcement learning (RL) to train open source models in specific tasks has yielded the capability to do just that. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/ai-malware-poc-evades-microsoft-defender
-
Scholars sneaking phrases into papers to fool AI reviewers
Using prompt injections to play a Jedi mind trick on LLMs First seen on theregister.com Jump to article: www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers/
-
AI Trust Score Ranks LLM Security
Startup Tumeryk’s AI Trust scorecard finds Google Gemini Pro 2.5 as the most trustworthy, with OpenAI’s GPT-4o mini a close second and DeepSeek and Alibaba Qwen scoring lowest. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/ai-trust-score-ranks-llm-security
-
Faster Not Bigger: New R1T2 LLM Combines DeepSeek Versions
Tags: LLMGerman Consultancy’s Latest LLM Aims to Reduce Costs, Preserve Reasoning Skills. Say hello to DeepSeek-TNG R1T2 Chimera, a large language model built by German firm TNG Consulting, using three different DeepSeek LLMs. The goal of R1T2 is to provide a faster LLM with more predictable performance that maintains full reasoning accuracy. First seen on govinfosecurity.com…
-
Hallucinations May Open LLMs to Phishing Threats
First seen on scworld.com Jump to article: www.scworld.com/news/hallucinations-may-open-llms-to-phishing-threats
-
Incorrect links output by LLMs could lead to phishing, researchers say
First seen on scworld.com Jump to article: www.scworld.com/news/incorrect-links-output-by-llms-could-lead-to-phishing-researchers-say
-
OWASP unpacks GenAI security’s biggest risks to LLMs
First seen on scworld.com Jump to article: www.scworld.com/feature/owasp-unpacks-genai-securitys-biggest-risks-to-llms

