Tag: LLM
-
NDSS 2025 Symposium On Usable Security And Privacy (USEC) 2025, Paper Session 1
Tags: conference, cyber, cybersecurity, defense, international, LLM, network, password, phishing, privacy, technologyAuthors, Creators & Presenters: PAPERS On-demand RFID: Improving Privacy, Security, and User Trust in RFID Activation through Physically-Intuitive Design Youngwook Do (JPMorganChase and Georgia Institute of Technology), Tingyu Cheng (Georgia Institute of Technology and University of Notre Dame), Yuxi Wu (Georgia Institute of Technology and Northeastern University), HyunJoo Oh(Georgia Institute of Technology), Daniel J. Wilson…
-
NDSS 2025 Workshop On Security And Privacy Of Next-Generation Networks (FutureG) 2025, Session 1. Panelists Papers SESSION Opening Remarks, Panel And FutureG 2025 Session 1: AI-Assisted NextG
Tags: 5G, ai, conference, detection, government, Internet, LLM, network, open-source, privacy, vulnerabilityPanelists: Ted K. Woodward, Ph.D. Technical Director for FutureG, OUSD (R&E) Phillip Porras, Program Director, Internet Security Research, SRI Donald McBride, Senior Security Researcher, Bell Laboratories, Nokia This panel aims to bring together various participants and stakeholders from government, industry, and academia to present and discuss recent innovations and explore options to enable recent 5G…
-
NDSS 2025 Workshop On Security And Privacy Of Next-Generation Networks (FutureG) 2025, Session 3 Session 3: Novel Threats In Decentralized NextG And Securing Open RAN
PAPERS Feedback-Guided API Fuzzing of 5G Network Tianchang Yang (Pennsylvania State University), Sathiyajith K S (Pennsylvania State University), Ashwin Senthil Arumugam (Pennsylvania State University), Syed Rafiul Hussain (Pennsylvania State University) Trust or Bust: A Survey of Threats in Decentralized Wireless Networks Hetvi Shastri (University of Massachusetts Amherst), Akanksha Atrey (Nokia Bell Labs), Andre Beck (Nokia…
-
Simple to Ask: Is Your SOC AI Ready? Not Simple to Answer!
Gemini made blog illustration In early 1900s, factory owners bolted the new electric dynamo onto their old, central-shaft-and-pulley systems. They thought they were modernizing, but they were just doing a “retrofit.” The massive productivity boom didn’t arrive until they completely re-architected the factory around the new unit-drive motor (metaphor source). Today’s AI agent slapped onto…
-
Most AI privacy research looks the wrong way
Most research on LLM privacy has focused on the wrong problem, according to a new paper by researchers from Carnegie Mellon University and Northeastern University. The authors … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/20/llm-ai-data-privacy-research/
-
Cybersecurity Snapshot: F5 Breach Prompts Urgent U.S. Gov’t Warning, as OpenAI Details Disrupted ChatGPT Abuses
Tags: ai, attack, awareness, backdoor, breach, business, chatgpt, china, cisa, cloud, control, corporate, cve, cyber, cybersecurity, data, data-breach, defense, detection, exploit, framework, fraud, governance, government, group, hacker, incident, infrastructure, Internet, iran, law, LLM, malicious, malware, mitigation, monitoring, network, openai, organized, phishing, privacy, resilience, risk, russia, scam, security-incident, service, software, strategy, supply-chain, technology, threat, training, update, vulnerabilityF5’s breach triggers a CISA emergency directive, as Tenable calls it “a five-alarm fire” that requires urgent action. Meanwhile, OpenAI details how attackers try to misuse ChatGPT. Plus, boards are increasing AI and cyber disclosures. And much more! Key takeaways A critical breach at cybersecurity firm F5, attributed to a nation-state, has triggered an urgent…
-
When trusted AI connections turn hostile
Researchers have revealed a new security blind spot in how LLM applications connect to external systems. Their study shows that malicious Model Context Protocol (MCP) servers … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/16/research-mcp-server-attacks/
-
When trusted AI connections turn hostile
Researchers have revealed a new security blind spot in how LLM applications connect to external systems. Their study shows that malicious Model Context Protocol (MCP) servers … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/16/research-mcp-server-attacks/
-
A View from the C-suite: Aligning AI security to the NIST RMF FireTail Blog
Tags: access, ai, attack, breach, csf, cybersecurity, data, data-breach, defense, detection, framework, governance, grc, guide, incident response, infrastructure, injection, jobs, LLM, malicious, nist, RedTeam, risk, risk-management, strategy, supply-chain, theft, tool, vulnerabilityOct 15, 2025 – Jeremy Snyder – In 2025, the AI race is surging ahead and the pressure to innovate is intense. For years, the NIST Cybersecurity Framework (CSF) has been our trusted guide for managing risk. It consists of five principles: identify, protect, detect, respond, and recover. But with the rise of AI revolutionizing…
-
Simple Prompt Injection Lets Hackers Bypass OpenAI Guardrails Framework
Security researchers have discovered a fundamental vulnerability in OpenAI’s newly released Guardrails framework that can be exploited using basic prompt injection techniques. The vulnerability enables attackers to circumvent the system’s safety mechanisms and generate malicious content without triggering any security alerts, raising serious concerns about the effectiveness of AI self-regulation approaches. Critical Flaw in LLM-Based…
-
Simple Prompt Injection Lets Hackers Bypass OpenAI Guardrails Framework
Security researchers have discovered a fundamental vulnerability in OpenAI’s newly released Guardrails framework that can be exploited using basic prompt injection techniques. The vulnerability enables attackers to circumvent the system’s safety mechanisms and generate malicious content without triggering any security alerts, raising serious concerns about the effectiveness of AI self-regulation approaches. Critical Flaw in LLM-Based…
-
Sovereign Data, Sovereign Access: Introducing Modern FIDO Authentication for SAS PCE
Sovereign Data, Sovereign Access: Introducing Modern FIDO Authentication for SAS PCE andrew.gertz@t“¦ Mon, 10/13/2025 – 14:53 Discover how Thales empowers enterprises with sovereign access through FIDO authentication in SAS PCE”, ensuring secure, phishing-resistant identity control for hybrid environments. Identity & Access Management Access Control Guido Gerrits – Field Channel Director, EMEA More About This Author…
-
Absicherung von LLMs, GenAI und KI-Agenten – Check Point übernimmt Lakera
First seen on security-insider.de Jump to article: www.security-insider.de/check-point-uebernimmt-lakera-a-0fca6f6efd9e64b5f713ff37dde94af3/
-
LLM-Powered MalTerminal Malware Uses OpenAI GPT-4 to Create Ransomware Code
LLM-enabled malware poses new challenges for detection and threat hunting as malicious logic can be generated at runtime rather than embedded in code. Our research discovered hitherto unknown samples, and what may be the earliest example known to date of an LLM-enabled malware we dubbed “MalTerminal.” Our methodology also uncovered other offensive LLM applications, including…
-
LLM-Powered MalTerminal Malware Uses OpenAI GPT-4 to Create Ransomware Code
LLM-enabled malware poses new challenges for detection and threat hunting as malicious logic can be generated at runtime rather than embedded in code. Our research discovered hitherto unknown samples, and what may be the earliest example known to date of an LLM-enabled malware we dubbed “MalTerminal.” Our methodology also uncovered other offensive LLM applications, including…
-
It’s trivially easy to poison LLMs into spitting out gibberish, says Anthropic
Just 250 malicious training documents can poison a 13B parameter model – that’s 0.00016% of a whole dataset First seen on theregister.com Jump to article: www.theregister.com/2025/10/09/its_trivially_easy_to_poison/
-
USENIX 2025: PEPR ’25 OneShield Privacy Guard: Deployable Privacy Solutions for LLMs
Creator, Author and Presenter: Shubhi Asthana, IBM Research Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-oneshield-privacy-guard-deployable-privacy-solutions-for-llms/
-
USENIX 2025: PEPR ’25 When Privacy Guarantees Meet Pre-Trained LLMs: A Case Study In Synthetic Data
Creators, Authors and Presenters: Yash Maurya and Aman Priyanshu, Carnegie Mellon University Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-when-privacy-guarantees-meet-pre-trained-llms-a-case-study-in-synthetic-data/
-
USENIX 2025: PEPR ’25 When Privacy Guarantees Meet Pre-Trained LLMs: A Case Study In Synthetic Data
Creators, Authors and Presenters: Yash Maurya and Aman Priyanshu, Carnegie Mellon University Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-when-privacy-guarantees-meet-pre-trained-llms-a-case-study-in-synthetic-data/
-
USENIX 2025: PEPR ’25 When Privacy Guarantees Meet Pre-Trained LLMs: A Case Study In Synthetic Data
Creators, Authors and Presenters: Yash Maurya and Aman Priyanshu, Carnegie Mellon University Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-when-privacy-guarantees-meet-pre-trained-llms-a-case-study-in-synthetic-data/
-
AI Security Goes Mainstream as Vendors Spend Heavily on M&A
Platform Vendors Target Runtime Defense, Prompt Flow, Agent Identity and Output As autonomous AI grows, so does the security risk. Prompt injection, identity control and AI observability are at the center of a dozen recent acquisitions, as vendors including Cisco, CrowdStrike, Palo Alto Networks and SentinelOne try to adapt to the autonomy and unpredictability of…
-
Why Enterprises Continue to Stick With Traditional AI
Explainability, Cost, Compliance Drive AI Choices in Enterprises. LLMs may dominate headlines, but enterprises are taking a more measured approach. Sujatha S Iyer, AI security head at ManageEngine, says the future of AI for many businesses lies not in deploying massive models but in explainable, efficient and compliant systems designed to solve specific problems. First…
-
Ghosts in the Machine: ASCII Smuggling across Various LLMs FireTail Blog
Oct 06, 2025 – Alan Fagan – Operationalizing Defense The key to catching ASCII Smuggling is monitoring the raw input payload, the exact string the LLM tokenization engine receives, not just the visible text. Ingestion: FireTail continuously records LLM activity logs from all your integrated platforms. Analysis: Our platform analyzes the raw payload data for…
-
USENIX 2025: PEPR ’25 Harnessing LLMs for Scalable Data Minimization
Creators, Authors and Presenters: Charles de Bourcy, OpenAI Our thanks to USENIX for publishing their Presenter’s outstanding USENIX Enigma ’23 Conference content on the organization’s’ YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-harnessing-llms-for-scalable-data-minimization/
-
GPT needs to be rewired for security
LLMs and agentic systems already shine at everyday productivity, including transcribing and summarizing meetings, extracting action items, prioritizing critical emails, and … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/02/llms-soc-automation/

