Tag: LLM
-
LLM-Powered MalTerminal Malware Uses OpenAI GPT-4 to Create Ransomware Code
LLM-enabled malware poses new challenges for detection and threat hunting as malicious logic can be generated at runtime rather than embedded in code. Our research discovered hitherto unknown samples, and what may be the earliest example known to date of an LLM-enabled malware we dubbed “MalTerminal.” Our methodology also uncovered other offensive LLM applications, including…
-
USENIX 2025: PEPR ’25 OneShield Privacy Guard: Deployable Privacy Solutions for LLMs
Creator, Author and Presenter: Shubhi Asthana, IBM Research Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-oneshield-privacy-guard-deployable-privacy-solutions-for-llms/
-
It’s trivially easy to poison LLMs into spitting out gibberish, says Anthropic
Just 250 malicious training documents can poison a 13B parameter model – that’s 0.00016% of a whole dataset First seen on theregister.com Jump to article: www.theregister.com/2025/10/09/its_trivially_easy_to_poison/
-
AI Security Goes Mainstream as Vendors Spend Heavily on M&A
Platform Vendors Target Runtime Defense, Prompt Flow, Agent Identity and Output As autonomous AI grows, so does the security risk. Prompt injection, identity control and AI observability are at the center of a dozen recent acquisitions, as vendors including Cisco, CrowdStrike, Palo Alto Networks and SentinelOne try to adapt to the autonomy and unpredictability of…
-
USENIX 2025: PEPR ’25 When Privacy Guarantees Meet Pre-Trained LLMs: A Case Study In Synthetic Data
Creators, Authors and Presenters: Yash Maurya and Aman Priyanshu, Carnegie Mellon University Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-when-privacy-guarantees-meet-pre-trained-llms-a-case-study-in-synthetic-data/
-
USENIX 2025: PEPR ’25 When Privacy Guarantees Meet Pre-Trained LLMs: A Case Study In Synthetic Data
Creators, Authors and Presenters: Yash Maurya and Aman Priyanshu, Carnegie Mellon University Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-when-privacy-guarantees-meet-pre-trained-llms-a-case-study-in-synthetic-data/
-
USENIX 2025: PEPR ’25 When Privacy Guarantees Meet Pre-Trained LLMs: A Case Study In Synthetic Data
Creators, Authors and Presenters: Yash Maurya and Aman Priyanshu, Carnegie Mellon University Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-when-privacy-guarantees-meet-pre-trained-llms-a-case-study-in-synthetic-data/
-
Why Enterprises Continue to Stick With Traditional AI
Explainability, Cost, Compliance Drive AI Choices in Enterprises. LLMs may dominate headlines, but enterprises are taking a more measured approach. Sujatha S Iyer, AI security head at ManageEngine, says the future of AI for many businesses lies not in deploying massive models but in explainable, efficient and compliant systems designed to solve specific problems. First…
-
Ghosts in the Machine: ASCII Smuggling across Various LLMs FireTail Blog
Oct 06, 2025 – Alan Fagan – Operationalizing Defense The key to catching ASCII Smuggling is monitoring the raw input payload, the exact string the LLM tokenization engine receives, not just the visible text. Ingestion: FireTail continuously records LLM activity logs from all your integrated platforms. Analysis: Our platform analyzes the raw payload data for…
-
USENIX 2025: PEPR ’25 Harnessing LLMs for Scalable Data Minimization
Creators, Authors and Presenters: Charles de Bourcy, OpenAI Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-harnessing-llms-for-scalable-data-minimization/
-
GPT needs to be rewired for security
LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/02/llms-soc-automation/
-
GPT needs to be rewired for security
LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/02/llms-soc-automation/
-
LLM07: System Prompt Leakage FireTail Blog
Sep 30, 2025 – Lina Romero – In 2025, AI is everywhere, and so are AI vulnerabilities. OWASP’s Top Ten Risks for LLMs provides developers and security researchers with a comprehensive resource for breaking down the most common risks to AI models. In previous blogs, we’ve covered the first 6 items on the list, and…
-
The Web’s Bot Problem Isn’t Getting Better: Insights From the 2025 Global Bot Security Report
Over 60% of websites remain unprotected against basic bots in 2025. Explore key findings from DataDome’s Global Bot Security Report to see how LLM crawlers and sophisticated automation are reshaping online threat landscapes and what businesses can do to defend themselves. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/09/the-webs-bot-problem-isnt-getting-better-insights-from-the-2025-global-bot-security-report/
-
Microsoft Flags AI Phishing Attack Hiding in SVG Files
Microsoft Threat Intelligence detected a new AI-powered phishing campaign using LLMs to hide malicious code inside SVG files disguised as business dashboards. First seen on hackread.com Jump to article: hackread.com/microsoft-ai-phishing-attack-hiding-svg-files/
-
Microsoft Flags AI Phishing Attack Hiding in SVG Files
Microsoft Threat Intelligence detected a new AI-powered phishing campaign using LLMs to hide malicious code inside SVG files disguised as business dashboards. First seen on hackread.com Jump to article: hackread.com/microsoft-ai-phishing-attack-hiding-svg-files/
-
Evolving Enterprise Defense to Secure the Modern AI Supply Chain
The world of enterprise technology is undergoing a dramatic shift. Gen-AI adoption is accelerating at an unprecedented pace, and SaaS vendors are embedding powerful LLMs directly into their platforms. Organizations are embracing AI-powered applications across every function, from marketing and development to finance and HR. This transformation unlocks innovation and efficiency, but it also First…
-
Evolving Enterprise Defense to Secure the Modern AI Supply Chain
The world of enterprise technology is undergoing a dramatic shift. Gen-AI adoption is accelerating at an unprecedented pace, and SaaS vendors are embedding powerful LLMs directly into their platforms. Organizations are embracing AI-powered applications across every function, from marketing and development to finance and HR. This transformation unlocks innovation and efficiency, but it also First…
-
Cloudian launches object storage AI platform at corporate LLM
Object storage specialist teams up with Nvidia to provide RAG-based chatbot capability for organisations that want to mine in-house information in an air-gapped large language model First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366632045/Cloudian-launches-object-storage-AI-platform-at-corporate-LLM
-
Cloudian launches object storage AI platform at corporate LLM
Object storage specialist teams up with Nvidia to provide RAG-based chatbot capability for organisations that want to mine in-house information in an air-gapped large language model First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366632045/Cloudian-launches-object-storage-AI-platform-at-corporate-LLM
-
Risk of Prompt Injection in LLM-Integrated Apps
Large Language Models (LLMs) are at the core of today’s AI revolution, powering advanced tools and other intelligent chatbots. These sophisticated neural networks are trained on vast amounts of text data, enabling them to understand context, language nuances, and complex patterns. As a result, LLMs can perform a wide array of tasks”, from generating coherent…
-
KI-Gefahren rücken Integritätsschutz in den Mittelpunkt
Tags: ai, ciso, cloud, compliance, cyberattack, data, data-breach, DSGVO, exploit, governance, injection, LLM, ml, risk, tool, training, updateData Poisoning gefährdet die Integrität von KI-Modellen.Für CISOs reduziert KI selten die Komplexität, sondern füllt vielmehr ihre ohnehin schon volle Agenda. Neben den traditionellen Sicherheitsprioritäten müssen sie sich nun auch mit neuen KI-bedingten Risiken auseinandersetzen, etwa wenn KI-Lösungen unkontrolliert für geschäftliche Zwecke genutzt, Modelle manipuliert und neue Vorschriften nicht eingehalten werden. Eine der drängendsten Herausforderungen…
-
Microsoft Sniffs Out AI-Based Phishing Campaign Using Its AI-Based Tools
Microsoft used AI-based tools in Defender for Office 365 to detect and block a phishing campaign in which Security Copilot determined the malicious code was likely written by a LLM, marking the latest incident in which AI security tools were used to combat an AI-based cyberattack. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/09/microsoft-sniffs-out-ai-based-phishing-campaign-using-its-ai-based-tools/
-
Abusing Notion’s AI Agent for Data Theft
Notion just released version 3.0, complete with AI agents. Because the system contains Simon Willson’s lethal trifecta, it’s vulnerable to data theft though prompt injection. First, the trifecta: The lethal trifecta of capabilities is: Access to your private data”, one of the most common purposes of tools in the first place! Exposure to untrusted content”,…
-
Microsoft Flags AI-Driven Phishing: LLM-Crafted SVG Files Outsmart Email Security
Microsoft is calling attention to a new phishing campaign primarily aimed at U.S.-based organizations that has likely utilized code generated using large language models (LLMs) to obfuscate payloads and evade security defenses.”Appearing to be aided by a large language model (LLM), the activity obfuscated its behavior within an SVG file, leveraging business terminology and a…
-
SECURITY AFFAIRS MALWARE NEWSLETTER ROUND 64
Security Affairs Malware newsletter includes a collection of the best articles and research on malware in the international landscape Malware Newsletter Brewing Trouble, Dissecting a macOS Malware Campaign Large-Scale Attack Targeting Macs via GitHub Pages Impersonating Companies to Attempt to Deliver Stealer Malware Prompts as Code & Embedded Keys – The Hunt for LLM-Enabled […]…
-
How to Protect Monetize Your Content in The Age of AI
Discover how publishers and e-commerce platforms can protect content from AI scraping, regain visibility into LLM traffic, and unlock new monetization opportunities with DataDome’s real-time AI detection and monetization tools. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/09/how-to-protect-monetize-your-content-in-the-age-of-ai/
-
LAMEHUG: An LLM-Driven Malware for Dynamic Reconnaissance and Data Exfiltration
A novel AI-driven threat leverages LLMs on Hugging Face to execute adaptive reconnaissance and data exfiltration in real time. Rather than relying on static scripts or prewritten payloads, LAMEHUG dynamically queries a Qwen 2.5-Coder-32B-Instruct model via the Hugging Face API to generate Windows command-shell instructions tailored to its current environment. This capability enables on-the-fly reconnaissance,…
-
LAMEHUG: An LLM-Driven Malware for Dynamic Reconnaissance and Data Exfiltration
A novel AI-driven threat leverages LLMs on Hugging Face to execute adaptive reconnaissance and data exfiltration in real time. Rather than relying on static scripts or prewritten payloads, LAMEHUG dynamically queries a Qwen 2.5-Coder-32B-Instruct model via the Hugging Face API to generate Windows command-shell instructions tailored to its current environment. This capability enables on-the-fly reconnaissance,…
-
Microsoft spots LLM-obfuscated phishing attack
Cybercriminals are increasingly using AI-powered tools and (malicious) large language models to create convincing, error-free emails, deepfakes, online personas, … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/09/25/microsoft-spots-llm-obfuscated-phishing-attack/

