Tag: LLM
-
The Sequential Kill Chain for AI FireTail Blog
May 30, 2025 – Timo Rüppell – The Sequential Kill Chain for AI-Powered Attacks Excerpt: We’ve talked before about Mean Time To Attack, or MTTA, which has grown alarmingly short for new vulnerabilities across the cyber landscape. In this blog, we’ll dive into the “how” and “why” of this”¦ Summary: In our current cyber landscape,…
-
Linux Zero-Day Vulnerability Discovered Using Frontier AI
Vulnerability Researchers: Start Tracking LLM Capabilities, Says Veteran Bug Hunter. Large language models have taken a big step forward in their ability to help chase down code flaws, said a vulnerability researcher who successfully trained OpenAI’s o3 to review Linux kernel code, leading to the LLM – in an apparent first – discovering a new…
-
Most LLMs don’t pass the security sniff test
Advice to CSOs: Lee said that CSOs should consider the following before approving any LLM:Training data: figure out where the model got its info. Random web grabs expose your secrets;Prompt history: if your questions stick around on their servers, they’ll turn up in the next breach bulletin;Credentials: stolen API keys and weak passwords keep attackers…
-
Risk assessment vital when choosing an AI model, say experts
Advice to CSOs: Lee said that CSOs should consider the following before approving any LLM:Training data: figure out where the model got its info. Random web grabs expose your secrets;Prompt history: if your questions stick around on their servers, they’ll turn up in the next breach bulletin;Credentials: stolen API keys and weak passwords keep attackers…
-
Mistral Launches Devstral: Open-Source LLM for Coding Agents
Discover Mistral’s Devstral, an open-source LLM revolutionizing software engineering automation. Explore its features and download today! First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/05/mistral-launches-devstral-open-source-llm-for-coding-agents/
-
Building a Secure LLM Gateway (and an MCP Server) with GitGuardian AWS Lambda
How I wrapped large-language-model power in a safety blanket of secrets-detection, chunking, and serverless scale. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/05/building-a-secure-llm-gateway-and-an-mcp-server-with-gitguardian-aws-lambda/
-
LlamaFirewall: Open-source framework to detect and mitigate AI centric security risks
LlamaFirewall is a system-level security framework for LLM-powered applications, built with a modular design to support layered, adaptive defense. It is designed to mitigate a … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/05/26/llamafirewall-open-source-framework-detect-mitigate-ai-centric-security-risks/
-
Attackers Abuse TikTok and Instagram APIs
It must be the season for API security incidents. Hot on the heels of a developer leaking an API key for private Tesla and SpaceX LLMs, researchers have now discovered a set of tools for validating account information via API abuse, leveraging undocumented TikTok and Instagram APIs. The tools, and assumed exploitation, involve malicious Python…
-
Keeping LLMs on the Rails Poses Design, Engineering Challenges
Despite adding alignment training, guardrails, and filters, large language models continue to jump their imposed rails and give up secrets, make unfiltered statements, and provide dangerous information. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/llms-on-rails-design-engineering-challenges
-
AI Chatbot Jailbreaking Security Threat is ‘Immediate, Tangible, and Deeply Concerning’
Dark LLMs like WormGPT bypass safety limits to aid scams and hacking. Researchers warn AI jailbreaks remain active, with weak response from tech firms. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-ai-chatbot-jailbreak-vulnerabilities/
-
LLM03: Supply Chain FireTail Blog
Tags: ai, compliance, cyber, data, encryption, exploit, LLM, malicious, mitigation, monitoring, open-source, organized, privacy, risk, service, software, strategy, supply-chain, training, update, vulnerabilityMay 21, 2025 – Lina Romero – LLM03: Supply Chain 20/5/2025 Excerpt The OWASP Top 10 List of Risks for LLMs helps developers and security teams determine where the biggest risk factors lay. In this blog series from FireTail, we are exploring each risk one by one, how it manifests, and mitigation strategies. This week,…
-
8 KI-Sicherheitsrisiken, die Unternehmen übersehen
Tags: access, ai, api, application-security, authentication, cisco, ciso, compliance, cyber, cyberattack, cybersecurity, data, data-breach, framework, governance, hacker, injection, LLM, RedTeam, risk, risk-management, security-incident, software, threat, tool, vulnerabilityIn ihrem Wettlauf um Produktivitätssteigerungen durch generative KI übersehen die meisten Unternehmen die damit verbundenen Sicherheitsrisiken.Laut einer Studie des Weltwirtschaftsforums, die in Zusammenarbeit mit Accenture durchgeführt wurde, versäumen es 63 Prozent der Unternehmen, die Sicherheit von KI-Tools vor deren Einsatz zu überprüfen. Dadurch gehen sie eine Reihe von Risiken für ihr Unternehmen ein.Dies gilt sowohl…
-
Cyber! Take your dadgum Medicine!
Learn the Bitter Lesson Bitter Lesson, an essay by one of the creators of reinforcement learning, first published back in 2019, recently made the rounds again now that its author, Professor Richard Sutton, was named a winner of this year’s ACM Turing Award. In it, he points out that general methods have won, again and again,…
-
Securing LLM Applications in 2025
Tags: LLMFirst seen on thesecurityblogger.com Jump to article: www.thesecurityblogger.com/securing-llm-applications-in-2025/
-
Threat Actors Exploit AI and LLM Tools for Offensive Cyber Operations
A recent report from the S2W Threat Intelligence Center, TALON, sheds light on the escalating misuse of generative AI and large language models (LLMs) by threat actors on the dark web for malicious cyber operations. As LLMs like ChatGPT, Claude, and DeepSeek grow in capability, they are increasingly weaponized as offensive tools for exploit generation,…
-
12 AI terms you (and your flirty chatbot) should know by now
1. Artificial general intelligence (AGI) The ultimate manifestation of AI has already played a featured role in dozens of apocalyptic movies. AGI is the point at which machines become capable of original thought and either a) save us from our worst impulses or b) decide they’ve had enough of us puny humans. While some AI…
-
GenAI’s New Attack Surface: Why MCP Agents Demand a Rethink in Cybersecurity Strategy
Anthropic’s Model Context Protocol (MCP) is a breakthrough standard that allows LLM models to interact with external tools and data systems with unprecedented flexibility. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/05/genais-new-attack-surface-why-mcp-agents-demand-a-rethink-in-cybersecurity-strategy/
-
Encrypt AI, Protect Your IP: DataKrypto Tackles the LLM Security Crisis While Redefining What Encryption Should Be!
Talking to Luigi Caramico, Founder, CTO, and Chairman of DataKrypto, a company that’s fundamentally reshaping how we think about encryption. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/05/encrypt-ai-protect-your-ip-datakrypto-tackles-the-llm-security-crisis-while-redefining-what-encryption-should-be/
-
Indirect Prompt Injection Exploits LLMs’ Lack of Informational Context
A new wave of cyber threats targeting large language models (LLMs) has emerged, exploiting their inherent inability to differentiate between informational content and actionable instructions. Termed >>indirect prompt injection attacks,
-
Google Deploys On-Device AI to Thwart Scams on Chrome and Android
The tech giant plans to leverage its Gemini Nano LLM on-device to enhance scam detection on Chrome First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/google-ai-gemini-nano-scams-chrome/
-
LLM02: Sensitive Information Disclosure FireTail Blog
May 08, 2025 – Lina Romero – In 2025, AI security is a relevant issue. With the landscape changing so rapidly and new risks emerging every day, it is difficult for developers and security teams to stay on top of AI security. The OWASP Top 10 Risks for LLM attempts to break down the most prevalent…
-
Even the best safeguards can’t stop LLMs from being fooled
In this Help Net Security interview, Michael Pound, Associate Professor at the University of Nottingham shares his insights on the cybersecurity risks associated with LLMs. He … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/05/08/michael-pound-university-of-nottingham-llms-prompts-risks/
-
Uncovering the Security Risks of Data Exposure in AI-Powered Tools like Snowflake’s CORTEX
As artificial intelligence continues to reshape the technological landscape, tools like Snowflake’s CORTEX Search Service are revolutionizing data retrieval with advanced fuzzy search and LLM-driven Retrieval Augmented Generation (RAG) capabilities. However, beneath the promise of efficiency lies a critical security concern: unintended data exposure. A recent analysis highlights how even tightly configured access and masking…
-
GenAI- und LLM-Risiken von der Entwicklung bis zur Bereitstellung eliminieren
Künstliche Intelligenz verändert Unternehmen grundlegend. Von der Automatisierung des Kundenservice bis zur Beschleunigung der Codegenerierung große Sprachmodelle (LLMs) werden immer schneller in die Geschäftsabläufe und Wettbewerbsstrategien von Unternehmen integriert. Doch während Unternehmen diese Innovation begrüßen, öffnen sie damit auch Tür und Tor für neue, schwer zu erkennende Risiken. Laut einer aktuellen Studie sind 72 % […]…
-
xAI Developer Accidentally Leaks API Key Granting Access to SpaceX, Tesla, and X LLMs
An employee at Elon Musk’s artificial intelligence venture, xAI, inadvertently disclosed a sensitive API key on GitHub, potentially exposing proprietary large language models (LLMs) linked to SpaceX, Tesla, and Twitter/X. Cybersecurity specialists estimate the leak remained active for two months, offering outsiders the capability to access and query highly confidential AI systems engineered with internal…

