Tag: LLM
-
LLM03: Supply Chain FireTail Blog
Tags: ai, compliance, cyber, data, encryption, exploit, LLM, malicious, mitigation, monitoring, open-source, organized, privacy, risk, service, software, strategy, supply-chain, training, update, vulnerabilityMay 21, 2025 – Lina Romero – LLM03: Supply Chain 20/5/2025 Excerpt The OWASP Top 10 List of Risks for LLMs helps developers and security teams determine where the biggest risk factors lay. In this blog series from FireTail, we are exploring each risk one by one, how it manifests, and mitigation strategies. This week,…
-
8 KI-Sicherheitsrisiken, die Unternehmen übersehen
Tags: access, ai, api, application-security, authentication, cisco, ciso, compliance, cyber, cyberattack, cybersecurity, data, data-breach, framework, governance, hacker, injection, LLM, RedTeam, risk, risk-management, security-incident, software, threat, tool, vulnerabilityIn ihrem Wettlauf um Produktivitätssteigerungen durch generative KI übersehen die meisten Unternehmen die damit verbundenen Sicherheitsrisiken.Laut einer Studie des Weltwirtschaftsforums, die in Zusammenarbeit mit Accenture durchgeführt wurde, versäumen es 63 Prozent der Unternehmen, die Sicherheit von KI-Tools vor deren Einsatz zu überprüfen. Dadurch gehen sie eine Reihe von Risiken für ihr Unternehmen ein.Dies gilt sowohl…
-
Cyber! Take your dadgum Medicine!
Learn the Bitter Lesson Bitter Lesson, an essay by one of the creators of reinforcement learning, first published back in 2019, recently made the rounds again now that its author, Professor Richard Sutton, was named a winner of this year’s ACM Turing Award. In it, he points out that general methods have won, again and again,…
-
Securing LLM Applications in 2025
Tags: LLMFirst seen on thesecurityblogger.com Jump to article: www.thesecurityblogger.com/securing-llm-applications-in-2025/
-
Threat Actors Exploit AI and LLM Tools for Offensive Cyber Operations
A recent report from the S2W Threat Intelligence Center, TALON, sheds light on the escalating misuse of generative AI and large language models (LLMs) by threat actors on the dark web for malicious cyber operations. As LLMs like ChatGPT, Claude, and DeepSeek grow in capability, they are increasingly weaponized as offensive tools for exploit generation,…
-
12 AI terms you (and your flirty chatbot) should know by now
1. Artificial general intelligence (AGI) The ultimate manifestation of AI has already played a featured role in dozens of apocalyptic movies. AGI is the point at which machines become capable of original thought and either a) save us from our worst impulses or b) decide they’ve had enough of us puny humans. While some AI…
-
GenAI’s New Attack Surface: Why MCP Agents Demand a Rethink in Cybersecurity Strategy
Anthropic’s Model Context Protocol (MCP) is a breakthrough standard that allows LLM models to interact with external tools and data systems with unprecedented flexibility. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/05/genais-new-attack-surface-why-mcp-agents-demand-a-rethink-in-cybersecurity-strategy/
-
Encrypt AI, Protect Your IP: DataKrypto Tackles the LLM Security Crisis While Redefining What Encryption Should Be!
Talking to Luigi Caramico, Founder, CTO, and Chairman of DataKrypto, a company that’s fundamentally reshaping how we think about encryption. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/05/encrypt-ai-protect-your-ip-datakrypto-tackles-the-llm-security-crisis-while-redefining-what-encryption-should-be/
-
Indirect Prompt Injection Exploits LLMs’ Lack of Informational Context
A new wave of cyber threats targeting large language models (LLMs) has emerged, exploiting their inherent inability to differentiate between informational content and actionable instructions. Termed >>indirect prompt injection attacks,
-
Google Deploys On-Device AI to Thwart Scams on Chrome and Android
The tech giant plans to leverage its Gemini Nano LLM on-device to enhance scam detection on Chrome First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/google-ai-gemini-nano-scams-chrome/
-
LLM02: Sensitive Information Disclosure FireTail Blog
May 08, 2025 – Lina Romero – In 2025, AI security is a relevant issue. With the landscape changing so rapidly and new risks emerging every day, it is difficult for developers and security teams to stay on top of AI security. The OWASP Top 10 Risks for LLM attempts to break down the most prevalent…
-
Even the best safeguards can’t stop LLMs from being fooled
In this Help Net Security interview, Michael Pound, Associate Professor at the University of Nottingham shares his insights on the cybersecurity risks associated with LLMs. He … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/05/08/michael-pound-university-of-nottingham-llms-prompts-risks/
-
Uncovering the Security Risks of Data Exposure in AI-Powered Tools like Snowflake’s CORTEX
As artificial intelligence continues to reshape the technological landscape, tools like Snowflake’s CORTEX Search Service are revolutionizing data retrieval with advanced fuzzy search and LLM-driven Retrieval Augmented Generation (RAG) capabilities. However, beneath the promise of efficiency lies a critical security concern: unintended data exposure. A recent analysis highlights how even tightly configured access and masking…
-
GenAI- und LLM-Risiken von der Entwicklung bis zur Bereitstellung eliminieren
Künstliche Intelligenz verändert Unternehmen grundlegend. Von der Automatisierung des Kundenservice bis zur Beschleunigung der Codegenerierung große Sprachmodelle (LLMs) werden immer schneller in die Geschäftsabläufe und Wettbewerbsstrategien von Unternehmen integriert. Doch während Unternehmen diese Innovation begrüßen, öffnen sie damit auch Tür und Tor für neue, schwer zu erkennende Risiken. Laut einer aktuellen Studie sind 72 % […]…
-
xAI Developer Accidentally Leaks API Key Granting Access to SpaceX, Tesla, and X LLMs
An employee at Elon Musk’s artificial intelligence venture, xAI, inadvertently disclosed a sensitive API key on GitHub, potentially exposing proprietary large language models (LLMs) linked to SpaceX, Tesla, and Twitter/X. Cybersecurity specialists estimate the leak remained active for two months, offering outsiders the capability to access and query highly confidential AI systems engineered with internal…
-
LLM-Risiken verstehen und reduzieren
Es ist grundlegend zu verstehen, dass KI-Assistenten in aller Regel immer die gleichen Zugriffsrechte haben wie die jeweiligen Nutzer. Und diese sind in aller Regel viel zu weit gefasst. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/llm-risiken-verstehen-und-reduzieren/a40655/
-
Open source AI hiring bots favor men, leave women hanging by the phone
Easy fix: Telling LLMs to cosplay Lenin makes ’em more gender blind First seen on theregister.com Jump to article: www.theregister.com/2025/05/02/open_source_ai_models_gender_bias/
-
AI models routinely lie when honesty conflicts with their goals
Keep plugging those LLMs into your apps, folks. This neural network told me it’ll be fine First seen on theregister.com Jump to article: www.theregister.com/2025/05/01/ai_models_lie_research/
-
NVIDIA TensorRT-LLM Vulnerability Let Hackers Run Malicious Code
NVIDIA has issued an urgent security advisory after discovering a significant vulnerability (CVE-2025-23254) in its popular TensorRT-LLM framework, urging all users to update to the latest version (0.18.2) to safeguard their systems against potential attacks. Overview of the Vulnerability The vulnerability, identified as CVE-2025-23254, affects all versions of the NVIDIA TensorRT-LLM framework before 0.18.2 across…
-
30 percent of some Microsoft code now written by AI – especially the new stuff
Satya Nadella reveals attempts to merge Word, PowerPoint, Excel, which may now happen with LLMs First seen on theregister.com Jump to article: www.theregister.com/2025/04/30/microsoft_meta_autocoding/
-
Cisco Boosts XDR Platform, Splunk With Agentic AI
Cisco joins the agentic AI wave with the introduction of advanced LLMs to autonomously verify and investigate attacks. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/cisco-boosts-xdr-platform-splunk-agentic-ai
-
RSAC 2025: Being realistic about fixing code with LLMs
Tags: LLMFirst seen on scworld.com Jump to article: www.scworld.com/news/rsac-2025-being-realistic-about-fixing-code-with-llms
-
🚀 Agentic Runtime Protection Rules Makes Us the First Truly Self-Writing Security System – Impart Security
Agentic Runtime Rules: The First Self-Writing Security System for Runtime The End of Manual Security Management Is Here Say goodbye to regex repositories and ticket fatigue”, Impart delivers instant detections and autonomous investigations for security teams. For years, security teams have been trapped in reactive mode. Every investigation, detection rule update, or WAF configuration change…
-
AI-generated code could be a disaster for the software supply chain. Here’s why.
LLM-produced code could make us much more vulnerable to supply-chain attacks. First seen on arstechnica.com Jump to article: arstechnica.com/security/2025/04/ai-generated-code-could-be-a-disaster-for-the-software-supply-chain-heres-why/
-
RSAC 2025: Using an ‘MRI’ for neural networks to understand LLM jailbreaks
First seen on scworld.com Jump to article: www.scworld.com/news/rsac-2025-using-an-mri-for-neural-networks-to-understand-llm-jailbreaks

