Tag: LLM
-
Novel TokenBreak Attack Method Can Bypass LLM Security Features
Researchers at HiddenLayer have uncovered a new vulnerability in LLMs called TokenBreak, which can let an attacker bypass the content moderation features of many models simply by adding a few characters to words in a prompt. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/06/novel-tokenbreak-attack-method-can-bypass-llm-security-features/
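The core idea can be illustrated with a toy sketch (this is not HiddenLayer's actual code, and the blocklist filter below is a hypothetical stand-in for a real moderation layer): a filter that matches exact keyword tokens misses a word once a single character is inserted, because the perturbed word tokenizes differently, while a capable LLM may still recover the intended meaning.

```python
# Toy illustration of the TokenBreak idea: a naive token-matching
# moderation filter is defeated by inserting one character per keyword.
BLOCKLIST = {"ignore", "instructions"}  # hypothetical moderation keywords

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    tokens = prompt.lower().split()
    return any(t.strip(".,!?") in BLOCKLIST for t in tokens)

clean = "ignore previous instructions"
perturbed = "xignore previous xinstructions"  # one added character per word

print(naive_filter(clean))      # True: blocked
print(naive_filter(perturbed))  # False: slips past the keyword filter
```

The sketch compresses the reported attack to its essence: the defense and the model tokenize text differently, and the attacker exploits that gap.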
-
From LLMs to Cloud Infrastructure: F5 Aims to Secure the New AI Attack Surface
Accelerate human-led innovation, automate the grunt work and make sure AI delivers real value without proliferating new security risks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/06/from-llms-to-cloud-infrastructure-f5-aims-to-secure-the-new-ai-attack-surface/
-
Before scaling GenAI, map your LLM usage and risk zones
In this Help Net Security interview, Paolo del Mundo, Director of Application and Cloud Security at The Motley Fool, discusses how organizations can scale their AI usage by … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/06/17/paolo-del-mundo-the-motley-fool-ai-usage-guardrails/
-
What are the best practices for MCP security?
Introduction Modern applications are increasingly powered by large language models (LLMs) that don't just generate text; they can call live APIs, query databases, and even trigger automated workflows. The Model Context Protocol (MCP) makes this possible by standardizing how LLMs interface with external tools, turning your AI assistant into a fully programmable agent. With great…
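One widely recommended MCP hardening practice is to validate every model-requested tool call against an explicit allowlist before execution. A minimal sketch, assuming a hypothetical tool registry (the tool names and argument schema below are illustrative, not part of the MCP spec):

```python
# Reject tool calls the server did not explicitly allow, and reject
# unexpected arguments even for allowed tools.
ALLOWED_TOOLS = {
    "search_docs": {"query"},         # tool name -> permitted argument keys
    "get_ticket":  {"ticket_id"},
}

def validate_tool_call(name: str, args: dict) -> None:
    """Raise if the call targets an unknown tool or passes extra arguments."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowed: {name}")
    extra = set(args) - ALLOWED_TOOLS[name]
    if extra:
        raise ValueError(f"unexpected arguments for {name}: {sorted(extra)}")

validate_tool_call("search_docs", {"query": "MCP hardening"})  # passes
try:
    validate_tool_call("run_shell", {"cmd": "rm -rf /"})       # rejected
except PermissionError as err:
    print(err)
```

Short-circuiting on unknown tools and unexpected arguments keeps a prompt-injected model from invoking capabilities the operator never intended to expose.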
-
First Zero-Click Attack on Microsoft 365 Copilot
A flaw in Microsoft 365 Copilot makes it possible to steal sensitive data. Imagine an attack so stealthy that it requires no clicks, no downloads, and no warnings; a single email landing in your inbox is enough. That is the case with EchoLeak, a critical vulnerability in Microsoft 365 Copilot. It…
-
Salesforce study finds LLM agents flunk CRM and confidentiality tests
Tags: LLM. A 6-in-10 success rate for single-step tasks. First seen on theregister.com Jump to article: www.theregister.com/2025/06/16/salesforce_llm_agents_benchmark/
-
CrowdStrike and Nvidia Add LLM Security, Offer New Service for MSSPs
First seen on scworld.com Jump to article: www.scworld.com/news/crowdstrike-and-nvidia-add-llm-security-offer-new-service-for-mssps
-
New GenAI Tool Aims to Strengthen Open-Source Security
Tags: ai, bug, chatgpt, cvss, exploit, github, incident response, linux, LLM, open-source, tool, update, vulnerability. A newly developed GenAI tool is meant to help detect and patch vulnerabilities in large open-source repositories. Dutch and Iranian security researchers have built a new tool based on generative AI (GenAI) that is intended to enable platforms such as ChatGPT to detect and patch bugs in code repositories. The application was tested by searching GitHub for a specific vulnerability through…
-
86% of all LLM usage is driven by ChatGPT
ChatGPT remains the most widely used LLM among New Relic customers, making up over 86% of all tokens processed. Developers and enterprises are shifting to OpenAI’s latest … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/06/11/chatgpt-usage-2025/
-
Seraphic Security Unveils BrowserTotal Free AI-Powered Browser Security Assessment for Enterprises
Cyber NewsWire: Powered by AI, BrowserTotal offers CISOs and security teams a comprehensive, hands-on environment to test browser security defenses against today's most sophisticated threats. Key features of the platform include: Posture…
-
LLM04: Data & Model Poisoning FireTail Blog
Jun 06, 2025 – Lina Romero – LLM04: Data & Model Poisoning Excerpt: In this blog series, we're breaking down the OWASP Top 10 risks for LLMs and explaining how each one manifests and can be mitigated. Today's risk is #4 on the list: Data and Model Poisoning. Read on to learn more… Summary: Data…
-
What the Arc Browser Story Reveals About the Future of Browser Security
By Dakshitaa Babu, Security Researcher, SquareX In a candid letter that Joshua Miller, CEO of Arc Browser, wrote to the community, he revealed a truth the tech industry has been dancing around: "the dominant operating system on desktop wasn't Windows or macOS anymore, it was the browser." The evidence is everywhere: cloud revenue surging year…
-
When AI Turns Against Us FireTail Blog
Jun 04, 2025 – Lina Romero – Artificial Intelligence is the biggest development in tech of the 21st century. But although AI is continuing to develop at a breakneck pace, many of us still don’t understand all the risks and implications for cybersecurity. And this issue is only growing more complicated and critical. Now more…
-
6 ways CISOs can leverage data and AI to better secure the enterprise
Tags: advisory, ai, antivirus, attack, automation, breach, business, ciso, cloud, compliance, computer, corporate, cyber, cyberattack, cybersecurity, data, detection, firewall, framework, governance, guide, infrastructure, LLM, login, ml, network, programming, risk, risk-analysis, service, siem, soc, software, technology, threat, tool, training. Emphasize the 'learning' part of ML: to be truly effective, models need to be retrained with new data to keep up with changing threat vectors and shifting cybercriminal behavior. "Machine learning models get smarter with your help," Riboldi says. "Make sure to have feedback loops. Letting analysts label events and adjust settings constantly improves their…
-
The hidden risks of LLM autonomy
Large language models (LLMs) have come a long way from the once passive and simple chatbots that could respond to basic user prompts or look up the internet to generate … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/06/04/llm-agency/
-
Open-Weight Chinese AI Models Drive Privacy Innovation in LLMs
Edge computing and stricter regulations may usher in a new era of AI privacy. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/open-weight-chinese-ai-models-drive-privacy-innovation-llm
-
New Research Uncovers Strengths and Vulnerabilities in Cloud-Based LLM Guardrails
Cybersecurity researchers have shed light on the intricate balance of strengths and vulnerabilities inherent in cloud-based Large Language Model (LLM) guardrails. These safety mechanisms, designed to mitigate risks such as data leakage, biased outputs, and malicious exploitation, are critical to the secure deployment of AI models in enterprise environments. Exposing the Dual Nature of AI…
-
The Sequential Kill Chain for AI FireTail Blog
May 30, 2025 – Timo Rüppell – The Sequential Kill Chain for AI-Powered Attacks Excerpt: We've talked before about Mean Time To Attack, or MTTA, which has grown alarmingly short for new vulnerabilities across the cyber landscape. In this blog, we'll dive into the "how" and "why" of this… Summary: In our current cyber landscape,…
-
Linux Zero-Day Vulnerability Discovered Using Frontier AI
Vulnerability Researchers: Start Tracking LLM Capabilities, Says Veteran Bug Hunter. Large language models have taken a big step forward in their ability to help chase down code flaws, said a vulnerability researcher who used OpenAI's o3 to review Linux kernel code, leading to the LLM – in an apparent first – discovering a new…
-
Most LLMs don’t pass the security sniff test
Advice to CSOs: Lee said that CSOs should consider the following before approving any LLM. Training data: figure out where the model got its info; random web grabs expose your secrets. Prompt history: if your questions stick around on their servers, they'll turn up in the next breach bulletin. Credentials: stolen API keys and weak passwords keep attackers…
-
Risk assessment vital when choosing an AI model, say experts
-
Mistral Launches Devstral: Open-Source LLM for Coding Agents
Discover Mistral’s Devstral, an open-source LLM revolutionizing software engineering automation. Explore its features and download today! First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/05/mistral-launches-devstral-open-source-llm-for-coding-agents/
-
Building a Secure LLM Gateway (and an MCP Server) with GitGuardian AWS Lambda
How I wrapped large-language-model power in a safety blanket of secrets-detection, chunking, and serverless scale. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/05/building-a-secure-llm-gateway-and-an-mcp-server-with-gitguardian-aws-lambda/
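The secrets-detection step such a gateway performs can be sketched as a pre-flight scan that redacts credential-shaped strings before the prompt ever leaves for the model. The patterns below are illustrative stand-ins, not GitGuardian's detection engine:

```python
# Redact credential-shaped substrings from an outbound prompt before
# forwarding it to the LLM.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key id shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # generic "api_key=..." pairs
]

def redact_secrets(prompt: str) -> str:
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact_secrets("debug this: api_key = sk-abc123 fails on AKIAABCDEFGHIJKLMNOP"))
```

Running the scan at the gateway, rather than in each client, gives one chokepoint where detection rules can be updated without touching application code.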
-
LlamaFirewall: Open-source framework to detect and mitigate AI centric security risks
LlamaFirewall is a system-level security framework for LLM-powered applications, built with a modular design to support layered, adaptive defense. It is designed to mitigate a … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/05/26/llamafirewall-open-source-framework-detect-mitigate-ai-centric-security-risks/
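The "modular, layered defense" idea can be sketched as an ordered chain of independent guard functions that short-circuits on the first failure. These toy guards are illustrative, not LlamaFirewall's own scanners:

```python
# Run input through an ordered chain of guard layers; any layer can veto.
def no_override_phrases(text: str) -> bool:
    return "ignore previous instructions" not in text.lower()

def length_limit(text: str) -> bool:
    return len(text) <= 4000

GUARDS = [no_override_phrases, length_limit]  # layers run in order

def allowed(text: str) -> bool:
    return all(guard(text) for guard in GUARDS)

print(allowed("Summarize this report."))                # True
print(allowed("Please IGNORE previous instructions."))  # False
```

Keeping each layer a plain predicate is what makes the design adaptive: new detectors can be appended to the chain without modifying existing ones.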
-
Attackers Abuse TikTok and Instagram APIs
It must be the season for API security incidents. Hot on the heels of a developer leaking an API key for private Tesla and SpaceX LLMs, researchers have now discovered a set of tools for validating account information via API abuse, leveraging undocumented TikTok and Instagram APIs. The tools, and assumed exploitation, involve malicious Python…
-
Keeping LLMs on the Rails Poses Design, Engineering Challenges
Despite adding alignment training, guardrails, and filters, large language models continue to jump their imposed rails and give up secrets, make unfiltered statements, and provide dangerous information. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/llms-on-rails-design-engineering-challenges
-
AI Chatbot Jailbreaking Security Threat is ‘Immediate, Tangible, and Deeply Concerning’
Dark LLMs like WormGPT bypass safety limits to aid scams and hacking. Researchers warn AI jailbreaks remain active, with weak response from tech firms. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-ai-chatbot-jailbreak-vulnerabilities/

