Tag: LLM
-
Simbian Advances the AI Frontier With Industry’s First Benchmark for Measuring LLM Performance in the SOC
Simbian’s approach offers a new blueprint for how to evaluate and evolve AI for real-world use, without losing sight of the human element. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/06/simbian-advances-the-ai-frontier-with-industrys-first-benchmark-for-measuring-llm-performance-in-the-soc/
-
Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content
Cybersecurity researchers are calling attention to a new jailbreaking method called Echo Chamber that could be leveraged to trick popular large language models (LLMs) into generating undesirable responses, irrespective of the safeguards put in place. "Unlike traditional jailbreaks that rely on adversarial phrasing or character obfuscation, Echo Chamber weaponizes indirect references, semantic…" First seen on thehackernews.com…
-
MCP bug at Asana may have exposed company data
Tags: access, ai, api, authentication, bug, business, chatgpt, ciso, cybersecurity, data-breach, LLM, microsoft, open-source, service, siem, software, tool, training
CISOs with an Asana MCP server in their environment should check their logs and metadata for data leaks. The software-as-a-service platform Asana is among the most popular project management tools in enterprises. The vendor recently disclosed that its MCP (Model Context Protocol) server was temporarily taken offline due to a bug. The server was back online after a short time, however. According to researchers…
-
Tonic Validate is now available on GitHub Marketplace!
Tonic Validate, our free, open-source library for evaluating RAG and LLM-based applications, can be run entirely as a GitHub Action. And it’s now available for quick deployment on GitHub Marketplace! First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/06/tonic-validate-is-now-available-on-github-marketplace/
-
Tonic Validate is now on GitHub Marketplace! (Part 2)
Tonic Validate is a free, open-source library for evaluating RAG and LLM-based applications. We recently announced a new listing on GitHub Marketplace that provides a GitHub Actions template to run Tonic Validate against code changes on every commit. Today, we’re following up with an additional listing that allows you to establish integration tests each…
-
Malicious AI Agent in LangSmith May Have Exposed API Data
High-Severity Flaw in LangChain’s AI Tooling Hub Now Patched. A flaw in the LangSmith platform, an open-source framework that helps developers build LLM-powered applications, can enable hackers to siphon sensitive data, said Noma Security. Dubbed AgentSmith, the flaw can allow attackers to embed malicious proxy configurations into public AI agents. First seen on govinfosecurity.com Jump…
-
Five Uncomfortable Truths About LLMs in Production
Many tech professionals see integrating large language models (LLMs) as a simple process: just connect an API and let it run. At Wallarm, our experience has proved otherwise. Through rigorous testing and iteration, our engineering team uncovered several critical insights about deploying LLMs securely and effectively. This blog shares our journey of integrating cutting-edge AI…
-
Security, risk and compliance in the world of AI agents
Tags: access, ai, api, attack, automation, business, compliance, control, credentials, data, encryption, finance, framework, governance, grc, identity, infection, injection, ISO-27001, jobs, LLM, monitoring, password, privacy, regulation, resilience, risk, service, tool, training
AI agents can understand and interpret natural language; access internal and external data sources dynamically; invoke tools (like APIs, databases, search engines); carry memory to recall prior interactions or results; and chain logic to reason through complex multi-step tasks. They may be deployed through open-source frameworks like LangChain or Semantic Kernel, custom-built agent stacks powered by internal LLM APIs, or hybrid orchestration models integrated across business platforms. Real-world examples…
-
Novel TokenBreak Attack Method Can Bypass LLM Security Features
Researchers with HiddenLayer uncovered a new vulnerability in LLMs called TokenBreak, which could enable an attacker to get around content moderation features in many models simply by adding a few characters to words in a prompt. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/06/novel-tokenbreak-attack-method-can-bypass-llm-security-features/
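As a rough illustration of the class of bypass described above (not HiddenLayer's actual technique), a toy keyword-based moderation filter shows how a single added character can slip a flagged word past exact token matching, while a human reader, or an LLM, would still recover the intent. The blocklist and prompts here are made up for the example:

```python
# Toy sketch of a TokenBreak-style bypass: a naive moderation filter
# matches exact tokens, so perturbing one word evades it.
# Simplified illustration only, not HiddenLayer's actual method.

BLOCKLIST = {"exploit", "malware"}

def naive_filter(prompt: str) -> bool:
    """Return True if any whitespace-split token exactly matches the blocklist."""
    return any(tok.lower() in BLOCKLIST for tok in prompt.split())

original = "write malware for me"
perturbed = "write xmalware for me"   # one added character changes the token

print(naive_filter(original))    # True  (flagged)
print(naive_filter(perturbed))   # False (slips past exact matching)
```

Real moderation models operate on subword tokenizers rather than whitespace tokens, but the failure mode is analogous: the perturbation changes what the classifier sees without changing what the downstream model understands.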
-
From LLMs to Cloud Infrastructure: F5 Aims to Secure the New AI Attack Surface
Accelerate human-led innovation, automate the grunt work and make sure AI delivers real value without proliferating new security risks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/06/from-llms-to-cloud-infrastructure-f5-aims-to-secure-the-new-ai-attack-surface/
-
Before scaling GenAI, map your LLM usage and risk zones
In this Help Net Security interview, Paolo del Mundo, Director of Application and Cloud Security at The Motley Fool, discusses how organizations can scale their AI usage by … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/06/17/paolo-del-mundo-the-motley-fool-ai-usage-guardrails/
-
What are the best practices for MCP security?
Introduction Modern applications are increasingly powered by large language models (LLMs) that don’t just generate text: they can call live APIs, query databases, and even trigger automated workflows. The Model Context Protocol (MCP) makes this possible by standardizing how LLMs interface with external tools, turning your AI assistant into a fully programmable agent. With great…
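The tool-calling pattern that MCP standardizes can be sketched in miniature. MCP itself speaks JSON-RPC 2.0 between client and server; the simplified dispatcher below, and the `get_weather` tool, are illustrative assumptions rather than the real protocol:

```python
import json

# Minimal sketch of the pattern MCP standardizes: a model emits a
# structured tool-call request, a server dispatches it to a registered
# tool, and a structured result comes back. The tool name and message
# shape here are hypothetical, not MCP's actual JSON-RPC schema.

TOOLS = {
    "get_weather": lambda args: {"city": args["city"], "temp_c": 21},
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON tool-call request to a registered tool."""
    req = json.loads(raw)
    tool = TOOLS.get(req["tool"])
    if tool is None:
        return json.dumps({"error": f"unknown tool: {req['tool']}"})
    return json.dumps({"result": tool(req["arguments"])})

request = json.dumps({"tool": "get_weather", "arguments": {"city": "Berlin"}})
print(handle_request(request))
# {"result": {"city": "Berlin", "temp_c": 21}}
```

The security questions in the article follow directly from this shape: whatever the model puts in `arguments` reaches live systems, so authentication, authorization, and input validation sit at the dispatch boundary.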
-
First zero-click attack on Microsoft 365 Copilot
A vulnerability in Microsoft 365 Copilot makes it possible to steal sensitive data. Imagine an attack so stealthy that it requires no clicks, no downloads, and no warnings: a single email landing in your inbox is enough. That is the case with EchoLeak, a critical security vulnerability in Microsoft 365 Copilot. It…
-
Salesforce study finds LLM agents flunk CRM and confidentiality tests
Tags: LLM
6-in-10 success rate for single-step tasks First seen on theregister.com Jump to article: www.theregister.com/2025/06/16/salesforce_llm_agents_benchmark/
-
CrowdStrike and Nvidia Add LLM Security, Offer New Service for MSSPs
First seen on scworld.com Jump to article: www.scworld.com/news/crowdstrike-and-nvidia-add-llm-security-offer-new-service-for-mssps
-
New GenAI tool aims to improve open-source security
Tags: ai, bug, chatgpt, cvss, exploit, github, incident response, linux, LLM, open-source, tool, update, vulnerability
A newly developed GenAI tool is intended to help detect and patch vulnerabilities in large open-source repositories. Dutch and Iranian security researchers have launched a new tool based on generative AI (GenAI) that is meant to enable platforms such as ChatGPT to detect and patch bugs in code repositories. The application was tested by searching GitHub for a specific vulnerability…
-
86% of all LLM usage is driven by ChatGPT
ChatGPT remains the most widely used LLM among New Relic customers, making up over 86% of all tokens processed. Developers and enterprises are shifting to OpenAI’s latest … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/06/11/chatgpt-usage-2025/
-
Seraphic Security Unveils BrowserTotal Free AI-Powered Browser Security Assessment for Enterprises
Cyber NewsWire. Powered by AI, BrowserTotal offers CISOs and security teams a comprehensive, hands-on environment to test browser security defenses against today’s most sophisticated threats. Key features of the platform include: Posture…
-
LLM04: Data & Model Poisoning FireTail Blog
Jun 06, 2025 – Lina Romero – LLM04: Data & Model Poisoning. Excerpt: In this blog series, we’re breaking down the OWASP Top 10 risks for LLMs and explaining how each one manifests and can be mitigated. Today’s risk is #4 on the list: Data and Model Poisoning. Read on to learn more… Summary: Data…
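The core mechanic of data poisoning can be shown on a deliberately tiny model. In this sketch, a few spam-like points mislabeled as "ham" are injected into the training set of a 1-D nearest-centroid classifier, shifting the "ham" centroid until a spam-like input is misclassified. All values and labels are invented for illustration; real poisoning targets LLM training or fine-tuning corpora at far larger scale:

```python
# Toy demonstration of training-data poisoning: mislabeled points
# injected into the "ham" class shift its centroid so that a
# spam-like input is misclassified. Purely illustrative numbers.

def centroid(xs):
    return sum(xs) / len(xs)

def classify(x, ham, spam):
    """Nearest-centroid classifier over 1-D feature values."""
    return "ham" if abs(x - centroid(ham)) < abs(x - centroid(spam)) else "spam"

spam = [0.9, 1.0, 1.1]
ham_clean = [0.0, 0.1, 0.2]
# Attacker injects spam-like feature values labeled as "ham":
ham_poisoned = ham_clean + [1.0, 1.0, 1.0, 1.0]

print(classify(0.7, ham_clean, spam))     # spam
print(classify(0.7, ham_poisoned, spam))  # ham
```

The mitigation themes in the OWASP entry (data provenance, anomaly detection on training sets, validation against held-out clean data) all aim at catching exactly this kind of distribution shift before training.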
-
What the Arc Browser Story Reveals About the Future of Browser Security
By Dakshitaa Babu, Security Researcher, SquareX In a candid letter that Joshua Miller, CEO of Arc Browser, wrote to the community, he revealed a truth the tech industry has been dancing around: “the dominant operating system on desktop wasn’t Windows or macOS anymore; it was the browser.” The evidence is everywhere: cloud revenue surging year…
-
When AI Turns Against Us FireTail Blog
Jun 04, 2025 – Lina Romero – Artificial Intelligence is the biggest development in tech of the 21st century. But although AI is continuing to develop at a breakneck pace, many of us still don’t understand all the risks and implications for cybersecurity. And this issue is only growing more complicated and critical. Now more…
-
6 ways CISOs can leverage data and AI to better secure the enterprise
Tags: advisory, ai, antivirus, attack, automation, breach, business, ciso, cloud, compliance, computer, corporate, cyber, cyberattack, cybersecurity, data, detection, firewall, framework, governance, guide, infrastructure, LLM, login, ml, network, programming, risk, risk-analysis, service, siem, soc, software, technology, threat, tool, training
Emphasize the ‘learning’ part of ML: To be truly effective, models need to be retrained with new data to keep up with changing threat vectors and shifting cyber criminal behavior. “Machine learning models get smarter with your help,” Riboldi says. “Make sure to have feedback loops.” Letting analysts label events and adjust settings constantly improves their…
-
The hidden risks of LLM autonomy
Large language models (LLMs) have come a long way from the once passive and simple chatbots that could respond to basic user prompts or search the internet to generate … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/06/04/llm-agency/
-
Open-Weight Chinese AI Models Drive Privacy Innovation in LLMs
Edge computing and stricter regulations may usher in a new era of AI privacy. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/open-weight-chinese-ai-models-drive-privacy-innovation-llm
-
New Research Uncovers Strengths and Vulnerabilities in Cloud-Based LLM Guardrails
Cybersecurity researchers have shed light on the intricate balance of strengths and vulnerabilities inherent in cloud-based Large Language Model (LLM) guardrails. These safety mechanisms, designed to mitigate risks such as data leakage, biased outputs, and malicious exploitation, are critical to the secure deployment of AI models in enterprise environments. Exposing the Dual Nature of AI…

