Tag: LLM
-
LLMs’ AI-Generated Code Remains Wildly Insecure
Security debt ahoy: only about half of the code that the latest large language models (LLMs) create is cybersecure, and more and more of it is being created all the time. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/llms-ai-generated-code-wildly-insecure
-
LLMs Boost Offensive RD by Identifying and Exploiting Trapped COM Objects
Outflank is pioneering the integration of large language models (LLMs) to expedite research and development workflows while maintaining rigorous quality standards. This approach allows teams to focus on refining and testing techniques for their Outflank Security Tooling (OST) suite, which delivers evasive capabilities for complex operations. A recent case study exemplifies this by demonstrating how…
-
How bright are AI agents? Not very, recent reports suggest
CSOs should ‘skip the fluff’: Meghu’s advice to CSOs: Stop reading the marketing and betting too much of your business on AI/LLM technology as it exists today. Start small and always have a human operator to guide it.”If you skip the fluff and get to the practical application, we have a new technology that could…
-
Getting a Cybersecurity Vibe Check on Vibe Coding
Following a number of high-profile security and development issues surrounding the use of LLMs and GenAI to code and create applications, it’s worth taking a temperature check to ask: Is this technology ready for prime time? First seen on darkreading.com Jump to article: www.darkreading.com/application-security/cybersecurity-vibe-check-vibe-coding
-
Using LLMs as a reverse engineering sidekick
LLMs may serve as powerful assistants to malware analysts to streamline workflows, enhance efficiency, and provide actionable insights during malware analysis. First seen on blog.talosintelligence.com Jump to article: blog.talosintelligence.com/using-llm-as-a-reverse-engineering-sidekick/
-
From LLM scrapers to AI agents: mapping the AI bot landscape for detection teams
AI bots, AI scrapers, AI agents”, you’ve seen these terms thrown around in product announcements, Hacker News posts, and marketing decks. But behind the hype, what do these bots actually do? And more importantly, how are they changing the fraud and bot detection landscape? This article introduces First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/07/from-llm-scrapers-to-ai-agents-mapping-the-ai-bot-landscape-for-detection-teams/
-
LLM Honeypots Can Deceive Threat Actors into Exposing Binaries and Known Exploits
Large language model (LLM)-powered honeypots are becoming increasingly complex instruments for luring and examining threat actors in the rapidly changing field of cybersecurity. A recent deployment using Beelzebub, a low-code honeypot framework, demonstrated how such systems can simulate vulnerable SSH services to capture malicious activities in real-time. By configuring a single YAML file, defenders can…
-
Enterprise LLMs Vulnerable to Prompt-Based Attacks Leading to Data Breaches
Security researchers have discovered alarming vulnerabilities in enterprise Large Language Model (LLM) applications that could allow attackers to bypass authentication systems and access sensitive corporate data through sophisticated prompt injection techniques. The findings reveal that many organizations deploying AI-powered chatbots and automated systems may be inadvertently exposing critical information to malicious actors. The vulnerability stems…
-
New Microsoft Guidance Targets Defense Against Indirect Prompt Injection
Microsoft has unveiled new guidance addressing one of the most pressing security challenges facing enterprise AI deployments: indirect prompt injection attacks. This emerging threat vector has become the top entry in the OWASP Top 10 for LLM Applications & Generative AI 2025, prompting the tech giant to develop a multi-layered defense strategy spanning prevention, detection,…
-
MCP”‘Sicherheit: Das Rückgrat von Agentic AI sichern
Tags: access, ai, api, authentication, ciso, credentials, cyberattack, cyersecurity, firewall, infrastructure, LLM, mfa, risk, toolIm Zuge von Agentic AI sollten sich CISOs mit MCP-Sicherheit auseinandersetzen. Das Model Context Protocol (MCP) wurde erst Ende 2024 vorgestellt, dennoch sind die technologischen Folgen in vielen Architekturen bereits deutlich spürbar. Damit Entwickler nicht jede Schnittstelle mühsam von Hand programmieren müssen, stellt MCP eine einheitliche ‘Sprache” für LL-Agenten bereit. Dadurch können sie Tools, Datenbanken und SaaS”‘Dienste…
-
Research shows LLMs can conduct sophisticated attacks without humans
The project, launched by Carnegie Mellon in collaboration with Anthropic, simulated the 2017 Equifax data breach. First seen on cybersecuritydive.com Jump to article: www.cybersecuritydive.com/news/research-llms-attacks-without-humans/754203/
-
LLM Honeypots Deceive Hackers into Exposing Attack Methods
Tags: ai, attack, cyber, cybercrime, cybersecurity, hacker, intelligence, LLM, strategy, technology, threatCybersecurity researchers have successfully deployed artificial intelligence-powered honeypots to trick cybercriminals into revealing their attack strategies, demonstrating a promising new approach to threat intelligence gathering. The innovative technique uses large language models (LLMs) to create convincing fake systems that lure hackers into exposing their methods and infrastructure. Revolutionary Deception Technology The breakthrough involves Beelzebub, a…
-
Vulnhuntr: Open-source tool to identify remotely exploitable vulnerabilities
Vulnhuntr is an open-source tool that finds remotely exploitable vulnerabilities. It uses LLMs and static code analysis to trace how data moves through an application, from … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/07/28/vulnhuntr-open-source-tool-identify-remotely-exploitable-vulnerabilities/
-
Review: LLM Engineer’s Handbook
Tags: LLMFor all the excitement around LLMs, practical, engineering-focused guidance remains surprisingly hard to find. LLM Engineer’s Handbook aims to fill that gap. About the authors … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/07/28/review-llm-engineers-handbook/
-
Novel malware from Russia’s APT28 prompts LLMs to create malicious Windows commands
Tags: ai, api, attack, computer, control, cyber, cyberattack, cybercrime, data, detection, dos, exploit, government, group, hacking, infrastructure, intelligence, LLM, malicious, malware, military, network, phishing, programming, russia, service, tool, ukraine, vulnerability, windows.pif (MS-DOS executable) extension, though variants with .exe and .py extensions have also been observed.CERT-UA attributes these attacks to a group it tracks as UAC-0001, but which is better known in the security community as APT28. Western intelligence agencies have officially associated this group with Unit 26165, or the 85th Main Special Service Center (GTsSS)…
-
CERT-UA Discovers LAMEHUG Malware Linked to APT28, Using LLM for Phishing Campaign
The Computer Emergency Response Team of Ukraine (CERT-UA) has disclosed details of a phishing campaign that’s designed to deliver a malware codenamed LAMEHUG.”An obvious feature of LAMEHUG is the use of LLM (large language model), used to generate commands based on their textual representation (description),” CERT-UA said in a Thursday advisory.The activity has been attributed…
-
LameHug malware uses AI LLM to craft Windows data-theft commands in real-time
A novel malware family named LameHug is using a large language model (LLM) to generate commands to be executed on compromised Windows systems. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/lamehug-malware-uses-ai-llm-to-craft-windows-data-theft-commands-in-real-time/
-
Red Teaming AI Systems: Why Traditional Security Testing Falls Short
What if your AI-powered application leaked sensitive data, generated harmful content, or revealed internal instructions and none of your security tools caught it? This isn’t hypothetical. It’s happening now and exposing critical gaps in how we secure modern AI systems. When AI systems like LLMs, agents, or AI-driven applications reach production, many security teams.. First…
-
AI Agents Act Like Employees With Root Access”, Here’s How to Regain Control
The AI gold rush is on. But without identity-first security, every deployment becomes an open door. Most organizations secure native AI like a web app, but it behaves more like a junior employee with root access and no manager.From Hype to High StakesGenerative AI has moved beyond the hype cycle. Enterprises are:Deploying LLM copilots to…
-
Grok 4 mit Jailbreak-Angriff geknackt
Das neue KI-Sprachmodell Grok 4 ist anfällig für Jailbreak-Angriffe.Erst vor wenigen Tagen präsentierte Elon Musk sein neues KI-Sprachmodell Grok 4. Doch schon kurz nach der Veröffentlichung gelang es Forschern von NeuralTrust, die Schutzvorkehrungen des Tools zu umgehen. Sie brachten es dazu, Anweisungen zur Herstellung eines Molotowcocktails zu geben. Dabei kombinierten sie zwei fortschrittliche Exploit-Techniken. Sowohl…
-
Code Execution Through Email: How I Used Claude to Hack Itself
You don’t always need a vulnerable app to pull off a successful exploit. Sometimes all it takes is a well-crafted email, an LLM agent, and a few “innocent” plugins. This is the story of how I used a Gmail message to trigger code execution through Claude Desktop, and how Claude itself (!) helped me plan..…
-
AI poisoning and the CISO’s crisis of trust
Tags: access, ai, breach, ceo, ciso, compliance, control, cybersecurity, data, defense, detection, disinformation, exploit, framework, healthcare, identity, infosec, injection, LLM, monitoring, network, privacy, RedTeam, resilience, risk, russia, saas, threat, tool, trainingFoundation models began parroting Kremlin-aligned propaganda after ingesting material seeded by a large-scale Russian network known as the “Pravda Network.”A high-profile AI-generated reading list published by two American news outlets included 10 hallucinated book titles mistakenly attributed to real authors.Researchers showed that imperceptible perturbations in training images could trigger misclassification. Researchers in the healthcare domain demonstrated…
-
New Grok-4 AI breached within 48 hours using ‘whispered’ jailbreaks
Safety systems cheated by contextual tricks: The attack exploits Grok 4’s contextual memory, echoing its own earlier statements back to it, and gradually guides it toward a goal without raising alarms. Combining Crescendo with Echo Chamber, the jailbreak technique that achieved over 90% success in hate speech and violence tests across top LLMs, strengthens the…
-
Putting AI-assisted ‘vibe hacking’ to the test
Tags: access, ai, attack, chatgpt, cyber, cybercrime, cybersecurity, data-breach, defense, exploit, hacking, least-privilege, LLM, network, open-source, strategy, threat, tool, vulnerability, zero-trustUnderwhelming results: For each LLM test, the researchers repeated each task prompt five times to account for variability in responses. For exploit development tasks, models that failed the first task were not allowed to progress to the second, more complex one. The team tested 16 open-source models from Hugging Face that claimed to have been…
-
MCP-Server von Versa soll KI-Integration und Admin-Produktivität im gesamten Netzwerk- und Sicherheitsbereich optimieren
Der neue Versa-Model-Context-Protocol-Server ermöglicht LLM-gesteuerten Assistenten und intern entwickelten Copiloten die sichere Abfrage von Versa-Systemen und reduziert die Mean-Time-to-Resolution um bis zu 45 Prozent. Der Anbieter von Universal-Secure-Access-Service-Edge (SASE), Versa Networks, hat die Veröffentlichung seines MCP-Servers bekannt gegeben ein leistungsstarkes Dienstprogramm, das Kunden dabei helfen soll, ihre Agentic-AI-Tools und -Plattformen in die
-
JFrog entdeckt kritische RCE-Sicherheitslücke, die mcp-remote-Clients kapern kann
Das Tool mcp-remote gewann an Popularität in der KI-Community, als erste Remote-MCP-Server-Implementierungen aufgetaucht waren. Diese ermöglichten es LLM-Modellen, mit externen Daten und Tools zu interagieren. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/jfrog-entdeckt-kritische-rce-sicherheitsluecke-die-mcp-remote-clients-kapern-kann/a41370/

