Tag: LLM
-
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/11/26/deepteam-open-source-llm-red-teaming-framework/
-
DeepTeam: Open-source LLM red teaming framework
Security teams are pushing large language models into products faster than they can test them, which makes any new red teaming method worth paying attention to. DeepTeam is an … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/11/26/deepteam-open-source-llm-red-teaming-framework/
-
Radware Adds Firewall for LLMs to Security Portfolio
Radware has developed a firewall for large language models (LLMs) that ensures governance and security policies are enforced in real time. Provided as an add-on to the company’s Cloud Application Protection Services, Radware LLM Firewall addresses the top 10 risks and mitigations for LLMs and generative artificial intelligence (AI) applications defined by the OWASP GenAI..…
-
Radware Adds Firewall for LLMs to Security Portfolio
Radware has developed a firewall for large language models (LLMs) that ensures governance and security policies are enforced in real time. Provided as an add-on to the company’s Cloud Application Protection Services, Radware LLM Firewall addresses the top 10 risks and mitigations for LLMs and generative artificial intelligence (AI) applications defined by the OWASP GenAI..…
-
Find the Invisible: Salt MCP Finder Technology for Proactive MCP Discovery
The conversation about AI security has shifted. For the past year, the focus has been on the model itself: poisoning data, prompt injection, and protecting intellectual property. These are critical concerns, but they miss the bigger picture of how AI is actually being operationalized in the enterprise. We are entering the era of Agentic AI.…
-
What keeps CISOs awake at night, and why Zurich might hold the cure
Tags: access, ai, api, attack, breach, ciso, conference, control, cve, cyber, cybersecurity, deep-fake, detection, endpoint, exploit, finance, firmware, framework, group, incident response, injection, LLM, malware, mandiant, microsoft, mitre, network, phishing, phone, ransomware, resilience, risk, soc, strategy, supply-chain, threat, tool, training, update, zero-dayA safe space in the Alps: Over two days at Zurich’s stunning Dolder Grand, hosted by the Swiss Cyber Institute, I witnessed something I’ve seldom seen at cybersecurity events: real vulnerability. In a closed, attribution-free environment, leaders shared not just strategies, but doubts. And that made this event stand out, not as another conference, but…
-
What keeps CISOs awake at night, and why Zurich might hold the cure
Tags: access, ai, api, attack, breach, ciso, conference, control, cve, cyber, cybersecurity, deep-fake, detection, endpoint, exploit, finance, firmware, framework, group, incident response, injection, LLM, malware, mandiant, microsoft, mitre, network, phishing, phone, ransomware, resilience, risk, soc, strategy, supply-chain, threat, tool, training, update, zero-dayA safe space in the Alps: Over two days at Zurich’s stunning Dolder Grand, hosted by the Swiss Cyber Institute, I witnessed something I’ve seldom seen at cybersecurity events: real vulnerability. In a closed, attribution-free environment, leaders shared not just strategies, but doubts. And that made this event stand out, not as another conference, but…
-
LLMs Tools Like GPT-3.5-Turbo and GPT-4 Fuel the Development of Fully Autonomous Malware
The rapid proliferation of large language models has transformed how organizations approach automation, coding, and research. Yet this technological advancement presents a double-edged sword: threat actors are increasingly exploring how to weaponize these tools for creating next-generation, autonomously operating malware. Recent research from Netskope Threat Labs reveals that GPT-3.5-Turbo and GPT-4 can be manipulated to…
-
Pravda-Netzwerk vergiftet mit LLM-Grooming ChatGPT Co.
Forscher sind auf ein mit Russland verbündete Pravda-Netzwerk gestoßen, welches “LLM-Grooming” betreibt. Das Netzwerk überschwemmt das Internet mit Desinformationen, um Suchmaschinen, und nun auch Chatbots wie ChatGPT zu beeinflussen. Quasi der nächste Sargnagel in die Glaubwürdigkeit von Chatbots. Chatbots wie … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/11/23/pravda-netzwerk-vergiftet-mit-llm-grooming-chatgpt-co/
-
LLM09: Misinformation FireTail Blog
Tags: ai, api, awareness, breach, cybersecurity, data, defense, healthcare, intelligence, LLM, mitigation, risk, training, vulnerabilityNov 21, 2025 – Lina Romero – In 2025, Artificial Intelligence is everywhere, and so are AI vulnerabilities. In fact, according to our research, these vulnerabilities are up across the board. The OWASP Top 10 list of Risks to LLMs can help teams track the biggest challenges facing AI security in our current landscape. Misinformation…
-
From code to boardroom: A GenAI GRC approach to supply chain risk
Tags: ai, blockchain, business, ciso, compliance, dark-web, data, defense, finance, framework, gartner, grc, intelligence, LLM, metric, open-source, regulation, resilience, risk, strategy, supply-chain, threat, vulnerabilityThe GenAI GRC mandate: From reporting to prediction: To counter a threat that moves at the speed of computation, our GRC must also become generative and predictive. The GenAI GRC mandate is to shift the focus from documenting compliance to predicting systemic failure.Current GRC methods are designed for documentation. They verify that a policy exists.…
-
From code to boardroom: A GenAI GRC approach to supply chain risk
Tags: ai, blockchain, business, ciso, compliance, dark-web, data, defense, finance, framework, gartner, grc, intelligence, LLM, metric, open-source, regulation, resilience, risk, strategy, supply-chain, threat, vulnerabilityThe GenAI GRC mandate: From reporting to prediction: To counter a threat that moves at the speed of computation, our GRC must also become generative and predictive. The GenAI GRC mandate is to shift the focus from documenting compliance to predicting systemic failure.Current GRC methods are designed for documentation. They verify that a policy exists.…
-
LLM-generated malware is improving, but don’t expect autonomous attacks tomorrow
Researchers tried to get ChatGPT to do evil, but it didn’t do a good job First seen on theregister.com Jump to article: www.theregister.com/2025/11/20/llmgenerated_malware_improving/
-
New Technique Shows Gaps in LLM Safety Screening
Attackers Can Flip Safety Filters Using Short Token Sequences. A few stray characters, sometimes as small as oz or generic as =coffee may be all it takes to steer past an AI system’s safety checks. HiddenLayer researchers have found a way to identify short token sequences that can cause guardrail models to misclassify malicious prompts…
-
New Technique Shows Gaps in LLM Safety Screening
Attackers Can Flip Safety Filters Using Short Token Sequences. A few stray characters, sometimes as small as oz or generic as =coffee may be all it takes to steer past an AI system’s safety checks. HiddenLayer researchers have found a way to identify short token sequences that can cause guardrail models to misclassify malicious prompts…
-
Rethinking identity for the AI era: CISOs must build trust at machine speed
Tags: access, ai, api, attack, authentication, business, ciso, cloud, control, cybersecurity, data, data-breach, google, governance, group, identity, infrastructure, injection, Internet, LLM, malicious, mitigation, network, risk, theft, threat, tool, training, vulnerabilityIdentity as a trust fabric: Most organizations currently rely on a welter of identity and access management systems for a variety of reasons. Some systems might be tied to a specific vendor’s technology; some might be legacy systems from mergers or acquisitions; some might be in place due to legal or regulatory requirements.”What happens even…
-
EchoGram Flaw Bypasses Guardrails in Major LLMs
HiddenLayer reveals the EchoGram vulnerability, which bypasses safety guardrails on GPT-5.1 and other major LLMs, giving security teams just a 3-month head start. First seen on hackread.com Jump to article: hackread.com/echogram-flaw-bypass-guardrails-major-llms/
-
Level up your Solidity LLM tooling with Slither-MCP
We’re releasing Slither-MCP, a new tool that augments LLMs with Slither’s unmatched static analysis engine. Slither-MCP benefits virtually every use case for LLMs by exposing Slither’s static analysis API via tools, allowing LLMs to find critical code faster, navigate codebases more efficiently, and ultimately improve smart contract authoring and auditing performance. How Slither-MCP works Slither-MCP…
-
Copy-paste vulnerability hits AI inference frameworks at Meta, Nvidia, and Microsoft
Tags: ai, authentication, cloud, data, data-breach, exploit, framework, google, infrastructure, Internet, linkedin, LLM, microsoft, nvidia, oracle, risk, vulnerabilityWhy this matters for AI infrastructure: The vulnerable inference servers form the backbone of many enterprise-grade AI stacks, processing sensitive prompts, model weights, and customer data. Oligo reported identifying thousands of exposed ZeroMQ sockets on the public internet, some tied to these inference clusters.If exploited, an attacker could execute arbitrary code on GPU clusters, escalate…
-
Copy-paste vulnerability hits AI inference frameworks at Meta, Nvidia, and Microsoft
Tags: ai, authentication, cloud, data, data-breach, exploit, framework, google, infrastructure, Internet, linkedin, LLM, microsoft, nvidia, oracle, risk, vulnerabilityWhy this matters for AI infrastructure: The vulnerable inference servers form the backbone of many enterprise-grade AI stacks, processing sensitive prompts, model weights, and customer data. Oligo reported identifying thousands of exposed ZeroMQ sockets on the public internet, some tied to these inference clusters.If exploited, an attacker could execute arbitrary code on GPU clusters, escalate…
-
Security Degradation in AI-Generated Code: A Threat Vector CISOs Can’t Ignore
A new study shows LLMs introduce more vulnerabilities with each code iteration, highlighting critical risks for CISOs and the need for skilled human oversight. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/11/security-degradation-in-ai-generated-code-a-threat-vector-cisos-cant-ignore/
-
Germany’s BSI issues guidelines to counter evasion attacks targeting LLMs
Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems. Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems and mitigate related risks. A significant and evolving threat to AI systems based on large…
-
Security Degradation in AI-Generated Code: A Threat Vector CISOs Can’t Ignore
A new study shows LLMs introduce more vulnerabilities with each code iteration, highlighting critical risks for CISOs and the need for skilled human oversight. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/11/security-degradation-in-ai-generated-code-a-threat-vector-cisos-cant-ignore/
-
Germany’s BSI issues guidelines to counter evasion attacks targeting LLMs
Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems. Germany’s BSI warns of rising evasion attacks on LLMs, issuing guidance to help developers and IT managers secure AI systems and mitigate related risks. A significant and evolving threat to AI systems based on large…
-
Survey Surfaces Sharp Rise in Cybersecurity Incidents Involving AI
A survey of 500 security practitioners and decision-makers across the United States and Europe published today finds cyberattacks aimed at artificial intelligence (AI) applications are rising, with prompt injections involving large language models (LLMs) at the top of the list (76%), followed by vulnerable LLM code (66%) and LLM jailbreaking (65%). Conducted by Traceable by..…
-
Wie ChatGPT sich selbst eine Prompt Injection zufügt
Forscher haben neue Methoden für Angriffe über ChatGPT aufgedeckt.Forscher des Sicherheitsunternehmens Tenable haben sieben neue Möglichkeiten entdeckt, wie Angreifer ChatGPT dazu bringen können, private Informationen aus den Chat-Verläufen der Nutzer preiszugeben. Bei den meisten dieser Angriffe handelt es sich um indirekte Prompt Injections, die die Standard-Tools und -funktionen von ChatGPT ausnutzen. Etwa die Fähigkeit, den…
-
AI-Sicherheit: Fast alle LLMs leaken private API-Keys auf Github; ChatGPT hat Schwachstellen
AI bzw. der Gebrauch von Sprachmodellen (LLMs) wie ChatGPT ist ja Dauerthema angeblich so wichtig wie geschnitten Brot. Insidern ist klar, dass diese Technologie die größte Gefahr für Daten-und Unternehmenssicherheit seit Einführung des Internet ist. Mir sind zwei Informationssplitter … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/11/12/ai-llms-die-grossen-leaker-der-zukunft/
-
Faster Than Real-Time: Why Your Security Fails and What to Do Next
Tags: access, ai, apple, attack, breach, business, ceo, cio, cloud, control, cybersecurity, data, defense, detection, dns, endpoint, fintech, framework, identity, infrastructure, Internet, iot, jobs, LLM, malware, network, nist, privacy, resilience, siem, soc, technology, threat, tool, vpn, zero-day, zero-trust“Security systems fail. When it fails, what do you do?” This critical question from Spire Connect’s Pankaj Sharma set the stage at Gitex 2025 for a conversation with Francois Driessen, the “Human Ambassador” of ADAMnetworks. His core message is blunt: in cybersecurity, even real-time is not fast enough. By the time a threat is detected,…

