Tag: ml
-
Lattice-Based Identity and Access Management for AI Agents
Secure your AI agents with lattice-based IAM. Learn how ML-KEM and ML-DSA protect Model Context Protocol (MCP) from quantum threats and puppet attacks. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/03/lattice-based-identity-and-access-management-for-ai-agents/
-
GlassWorm Attack Uses Stolen GitHub Tokens to Force-Push Malware Into Python Repos
The GlassWorm malware campaign is fueling an ongoing attack that leverages stolen GitHub tokens to inject malware into hundreds of Python repositories. “The attack targets Python projects, including Django apps, ML research code, Streamlit dashboards, and PyPI packages, by appending obfuscated code to files like setup.py, main.py, and app.py,” StepSecurity said. “Anyone…
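The appended-payload pattern described above is easy to screen for defensively. Below is a minimal, hypothetical detection sketch (not StepSecurity's tooling): it scans the file names the campaign reportedly targets for long base64-like blobs and dynamic `exec`/decode constructs, which are common signatures of appended obfuscated code.

```python
import re
from pathlib import Path

# Heuristics for appended, obfuscated payloads, e.g.
# exec(__import__("base64").b64decode("...")). Thresholds are illustrative.
SUSPICIOUS = [
    re.compile(r"[A-Za-z0-9+/=]{200,}"),    # very long base64-ish literal
    re.compile(r"exec\s*\(\s*__import__"),  # exec of a dynamically imported module
    re.compile(r"base64\.b64decode\s*\("),  # inline decode, often fed to exec
]

def scan_file(path: Path) -> list[str]:
    """Return the patterns of every heuristic that fires on one file."""
    text = path.read_text(errors="ignore")
    return [p.pattern for p in SUSPICIOUS if p.search(text)]

def scan_repo(root: str, names=("setup.py", "main.py", "app.py")) -> dict:
    """Scan only the file names the campaign is reported to target."""
    hits = {}
    for path in Path(root).rglob("*.py"):
        if path.name in names:
            found = scan_file(path)
            if found:
                hits[str(path)] = found
    return hits
```

Run it from CI on every push; a hit on `setup.py` in particular deserves a manual diff review, since that file executes at install time.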
-
Post-Quantum Cryptography for Authentication: The Enterprise Migration Guide 2026
NIST finalized the first three PQC standards in August 2024. NSS compliance deadlines start January 2027. Learn what ML-KEM, ML-DSA, and SLH-DSA mean for authentication, why the migration cannot wait, and how to build a quantum-safe infrastructure today. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/03/post-quantum-cryptography-for-authentication-the-enterprise-migration-guide-2026/
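For readers new to the ML-KEM terminology above: a KEM exposes exactly three operations, keygen, encapsulate, and decapsulate. The sketch below is a deliberately insecure toy that mimics only that interface shape with stdlib hashing, so the call flow is visible without any PQC dependency; it is NOT ML-KEM and must never be used for real keys (anyone holding the public key can derive the secret). A real migration would call an actual FIPS 203 implementation such as liboqs.

```python
import hashlib
import os

# TOY ONLY: reproduces the keygen / encaps / decaps call flow that
# ML-KEM (FIPS 203) standardizes, with none of its security.

def keygen():
    sk = os.urandom(32)
    pk = hashlib.sha256(b"pk|" + sk).digest()  # pk is derivable from sk
    return pk, sk

def encaps(pk):
    """Sender: produce a ciphertext and a shared secret from pk alone."""
    nonce = os.urandom(32)
    ss = hashlib.sha256(b"ss|" + pk + nonce).digest()
    return nonce, ss  # (ciphertext, shared secret)

def decaps(sk, ct):
    """Receiver: recover the same shared secret using sk."""
    pk = hashlib.sha256(b"pk|" + sk).digest()
    return hashlib.sha256(b"ss|" + pk + ct).digest()

pk, sk = keygen()
ct, ss_sender = encaps(pk)
assert decaps(sk, ct) == ss_sender  # both sides now share a key
```

The point of the exercise: because the KEM interface is this narrow, swapping a classical key exchange for ML-KEM (or a hybrid) is largely a plumbing change, which is why migration guides focus on inventorying call sites rather than redesigning protocols.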
-
Hardware Security Module Integration for Post-Quantum Key Encapsulation
Learn how to integrate HSMs for Post-Quantum Key Encapsulation in MCP environments. Protect AI infrastructure with ML-KEM and quantum-resistant hardware. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/03/hardware-security-module-integration-for-post-quantum-key-encapsulation/
-
AI Shocks the Cybersecurity Market
Tags: ai, business, compliance, crowdstrike, cybersecurity, data, defense, detection, governance, identity, incident response, intelligence, ml, okta, risk, service, software, threat, tool, update, vulnerability
The cybersecurity market was jolted last week after Anthropic dropped a bombshell announcement. The company’s new AI Claude model identified 500 previously unknown high-risk vulnerabilities hidden in widely used software. That is not a minor milestone. It is a technically significant achievement and a clear demonstration of how quickly AI capabilities are advancing. What came…
-
NDSS 2025 BARBIE: Robust Backdoor Detection Based On Latent Separability
Session 12D: ML Backdoors Authors, Creators & Presenters: Hanlei Zhang (Zhejiang University), Yijie Bai (Zhejiang University), Yanjiao Chen (Zhejiang University), Zhongming Ma (Zhejiang University), Wenyuan Xu (Zhejiang University) PAPER BARBIE: Robust Backdoor Detection Based On Latent Separability Backdoor attacks are an essential risk to deep learning model sharing. Fundamentally, backdoored models are different from benign…
-
NDSS 2025 Defending Against Backdoor Attacks On Graph Neural Networks Via Discrepancy Learning
Tags: attack, backdoor, conference, defense, framework, Internet, ml, network, risk, technology, threat, vulnerability
Session 12D: ML Backdoors Authors, Creators & Presenters: Hao Yu (National University of Defense Technology), Chuan Ma (Chongqing University), Xinhang Wan (National University of Defense Technology), Jun Wang (National University of Defense Technology), Tao Xiang (Chongqing University), Meng Shen (Beijing Institute of Technology, Beijing, China), Xinwang Liu (National University of Defense Technology) PAPER DShield: Defending…
-
NDSS 2025 Try to Poison My Deep Learning Data? Nowhere To Hide Your Trajectory Spectrum!
Session 12D: ML Backdoors Authors, Creators & Presenters: Yansong Gao (The University of Western Australia), Huaibing Peng (Nanjing University of Science and Technology), Hua Ma (CSIRO’s Data61), Zhi Zhang (The University of Western Australia), Shuo Wang (Shanghai Jiao Tong University), Rayne Holland (CSIRO’s Data61), Anmin Fu (Nanjing University of Science and Technology), Minhui Xue (CSIRO’s…
-
NDSS 2025 CLIBE: Detecting Dynamic Backdoors In Transformer-based NLP Models
Session 12D: ML Backdoors Authors, Creators & Presenters: Rui Zeng (Zhejiang University), Xi Chen (Zhejiang University), Yuwen Pu (Zhejiang University), Xuhong Zhang (Zhejiang University), Tianyu Du (Zhejiang University), Shouling Ji (Zhejiang University) PAPER CLIBE: Detecting Dynamic Backdoors in Transformer-based NLP Models Backdoors can be injected into NLP models to induce misbehavior when the input text…
-
NDSS 2025 LADDER: Multi-Objective Backdoor Attack Via Evolutionary Algorithm
Session 12D: ML Backdoors Authors, Creators & Presenters: Dazhuang Liu (Delft University of Technology), Yanqi Qiao (Delft University of Technology), Rui Wang (Delft University of Technology), Kaitai Liang (Delft University of Technology), Georgios Smaragdakis (Delft University of Technology) PAPER LADDER: Multi-Objective Backdoor Attack via Evolutionary Algorithm Current black-box backdoor attacks in convolutional neural networks formulate…
-
NDSS 2025 A Method To Facilitate Membership Inference Attacks In Deep Learning Models
Session 12C: Membership Inference Authors, Creators & Presenters: Zitao Chen (University of British Columbia), Karthik Pattabiraman (University of British Columbia) PAPER A Method to Facilitate Membership Inference Attacks in Deep Learning Models Modern machine learning (ML) ecosystems offer a surging number of ML frameworks and code repositories that can greatly facilitate the development of ML…
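For context on what a membership inference attack actually computes, here is a toy sketch (with hypothetical confidence values, not the paper's method) of the classic loss-threshold test: training members tend to have lower loss than non-members, so a single threshold on per-example loss already leaks membership.

```python
import math

def cross_entropy(p_true: float) -> float:
    """Loss of one example given the model's probability on the true label."""
    return -math.log(max(p_true, 1e-12))

def infer_membership(losses, threshold):
    """Predict 'member' whenever the loss is below the threshold."""
    return [loss < threshold for loss in losses]

# Hypothetical confidences: members are fit tightly, non-members are not.
member_conf     = [0.99, 0.97, 0.95, 0.98]
non_member_conf = [0.60, 0.40, 0.70, 0.55]

losses = [cross_entropy(p) for p in member_conf + non_member_conf]
preds = infer_membership(losses, threshold=cross_entropy(0.90))
# All members fall below the threshold, all non-members above it.
```

Work like the paper above studies how training choices can widen this loss gap, i.e. make exactly this simple test more reliable.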
-
NDSS 2025 PBP: Post-Training Backdoor Purification For Malware Classifiers
Session 12B: Malware Authors, Creators & Presenters: Dung Thuy Nguyen (Vanderbilt University), Ngoc N. Tran (Vanderbilt University), Taylor T. Johnson (Vanderbilt University), Kevin Leach (Vanderbilt University) PAPER PBP: Post-Training Backdoor Purification for Malware Classifiers In recent years, the rise of machine learning (ML) in cybersecurity has brought new challenges, including the increasing threat of backdoor…
-
NDSS 2025 Secure Data Analytics
Session 10A: Confidential Computing 2 Authors, Creators & Presenters: Byeongwook Kim (Seoul National University), Jaewon Hur (Seoul National University), Adil Ahmad (Arizona State University), Byoungyoung Lee (Seoul National University) PAPER Secure Data Analytics in Apache Spark with Fine-grained Policy Enforcement and Isolated Execution Cloud based Spark platform is a tempting approach for sharing data, as…
-
Machine learning-powered Android trojans bypass script-based ad-click detection
A new Android click-fraud trojan family uses TensorFlow.js ML models to visually detect and tap ads, bypassing traditional script-based click techniques. Researchers at cybersecurity firm Dr.Web discovered the family; the malware is distributed via Xiaomi’s GetApps, it…
-
New AI-Powered Android Malware Automatically Clicks Ads on Infected Devices
A sophisticated new Android malware family dubbed “Android.Phantom” […] “phantom” […] “signaling” controlled from the hxxps://dllpgd[.]click command server. The ML model downloads from hxxps://app-download[.]cn-wlcb[.]ufileos[.]com and analyzes screenshots of virtual screens to identify and automatically click ad […] The post New AI-Powered Android Malware Automatically Clicks Ads on Infected Devices appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform. First…
-
Python libraries for Hugging Face models poisoned
Tags: ai, apple, cve, exploit, intelligence, malware, ml, network, nvidia, rce, remote-code-execution, tool, vulnerability
Python libraries are infected via manipulated metadata in AI models and can execute malicious code when loaded. NeMo, Uni2TS, and FlexTok, Python libraries for artificial intelligence (AI) and machine learning (ML) used in Hugging Face models, have serious weaknesses. As researchers from Palo Alto Networks’ Unit 42 discovered, criminals can exploit these to hide malicious code in metadata. Once…
-
For application security: SCA, SAST, DAST and MAST. What next?
Tags: advisory, ai, application-security, automation, best-practice, business, cisa, cisco, cloud, compliance, container, control, cve, data, exploit, flaw, framework, gartner, government, guide, ibm, incident response, infrastructure, injection, kubernetes, least-privilege, ml, mobile, network, nist, resilience, risk, sbom, service, software, sql, supply-chain, threat, tool, training, update, vulnerability, waf
[Chart: Posture, provenance and proof.]
Sunil Gentyala: Over the past year the community has admitted the obvious: the battleground is the software supply chain and…
-
NDSS 2025 Probe-Me-Not: Protecting Pre-trained Encoders From Malicious Probing
Session 7D: ML Security Authors, Creators & Presenters: Ruyi Ding (Northeastern University), Tong Zhou (Northeastern University), Lili Su (Northeastern University), Aidong Adam Ding (Northeastern University), Xiaolin Xu (Northeastern University), Yunsi Fei (Northeastern University) PAPER Probe-Me-Not: Protecting Pre-Trained Encoders From Malicious Probing Adapting pre-trained deep learning models to customized tasks has become a popular choice for…
-
NDSS 2025 A New PPML Paradigm For Quantized Models
Session 7D: ML Security Authors, Creators & Presenters: Tianpei Lu (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Bingsheng Zhang (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Xiaoyuan Zhang (The State Key Laboratory of Blockchain and Data Security, Zhejiang University), Kui Ren (The State Key Laboratory of Blockchain…
-
NDSS 2025 DLBox: New Model Training Framework For Protecting Training Data
Session 7D: ML Security Authors, Creators & Presenters: Jaewon Hur (Seoul National University), Juheon Yi (Nokia Bell Labs, Cambridge, UK), Cheolwoo Myung (Seoul National University), Sangyun Kim (Seoul National University), Youngki Lee (Seoul National University), Byoungyoung Lee (Seoul National University) PAPER DLBox: New Model Training Framework For Protecting Training Data Sharing training data for deep…
-
NDSS 2025 Understanding Data Importance In Machine Learning Attacks
Session 7D: ML Security Authors, Creators & Presenters: Rui Wen (CISPA Helmholtz Center for Information Security), Michael Backes (CISPA Helmholtz Center for Information Security), Yang Zhang (CISPA Helmholtz Center for Information Security) PAPER Understanding Data Importance in Machine Learning Attacks: Does Valuable Data Pose Greater Harm? Machine learning has revolutionized numerous domains, playing a crucial…
-
NDSS 2025 AlphaDog: No-Box Camouflage Attacks Via Alpha Channel Oversight
Session 7D: ML Security Authors, Creators & Presenters: Qi Xia (University of Texas at San Antonio), Qian Chen (University of Texas at San Antonio) PAPER AlphaDog: No-Box Camouflage Attacks via Alpha Channel Oversight Traditional black-box adversarial attacks on computer vision models face significant limitations, including intensive querying requirements, time-consuming iterative processes, a lack of universality,…
-
Demystifying risk in AI
Tags: access, ai, best-practice, bsi, business, ciso, cloud, compliance, control, corporate, csf, cyber, cybersecurity, data, framework, google, governance, group, infrastructure, intelligence, ISO-27001, LLM, mitre, ml, monitoring, nist, PCI, risk, risk-management, strategy, technology, threat, training, vulnerability
The data that is inserted in a request. This data is evaluated by a training model that involves an entire architecture. The result of the information that will be delivered. From an information security point of view, that is the point that we, information security professionals, must judge in the scope of evaluation from the perspective of…

