Tag: ml
-
SIEM Buyer's Guide
Tags: access, ai, api, business, cloud, compliance, container, cyberattack, data, detection, DSGVO, encryption, framework, HIPAA, infrastructure, least-privilege, mail, microsoft, mitre, ml, monitoring, open-source, saas, service, siem, skills, soar, software, threat, tool
The contextual data that SIEM solutions deliver is a fundamental component of modern security stacks. Auditing, reviewing, and managing log data is anything but a glamorous task, but it is a crucial aspect of building a secure corporate network. After all, event logs often create a secondary attack surface for cybercriminals looking to cover their tracks. Network security experts counter activities like these…
-
The Best XDR Tools
Tags: attack, business, cloud, computing, container, crowdstrike, cyberattack, detection, edr, endpoint, firewall, google, Hardware, ibm, identity, incident response, infrastructure, mail, malware, marketplace, microsoft, ml, network, office, okta, risk, security-incident, service, siem, soar, software, tool, vulnerability
Read what to look for when it comes to XDR and which solutions are recommended in this space. Manual, siloed management has no place in the modern IT world, least of all in IT security: the scale of modern enterprise computing and state-of-the-art application-stack architectures demands security tools that provide insight into the security status of IT components, detect threats in real time, and automate aspects of threat defense. These…
-
DEF CON 32 Incubated ML Exploits: Backdooring ML Pipelines With Input Handling Bugs
Authors/Presenters: Suha Hussain Our sincere appreciation to DEF CON, and the Authors/Presenters for publishing their erudite DEF CON 32 content. Originating from the conference’s events located at the Las Vegas Convention Center; and via the organizations YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/02/def-con-32-incubated-ml-exploits-backdooring-ml-pipelines-with-input-handling-bugs/
-
German Companies: AI as a Challenge in the Fight Against Fraud
A recent study shows that 61 percent of German companies lost more money to fraud in 2024 than in the previous year. The main problems are a lack of AI-powered solutions and a shortage of machine learning (ML) models. 43 percent of respondents therefore prioritize their deployment in the coming year. First seen on itsicherheit-online.com Jump to article: www.itsicherheit-online.com/news/security-management/deutsche-unternehmen-ki-als-herausforderung-im-kampf-gegen-betrug/
-
Hugging Face: Malicious ML Models Uncovered on Development Platform
On the AI development platform Hugging Face, IT researchers have discovered malicious ML models that attackers could use to inject commands. First seen on heise.de Jump to article: www.heise.de/news/Hugging-Face-Boesartige-ML-Modelle-auf-Entwicklungsplattform-aufgedeckt-10278387.html
-
Malicious ML models found on Hugging Face Hub
Researchers have spotted two machine learning (ML) models containing malicious code on Hugging Face Hub, the popular online repository for datasets and pre-trained models. … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/02/10/malicious-ml-models-found-on-hugging-face-hub/
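The attack surface here is worth spelling out: pre-trained PyTorch models are often distributed as pickle-based archives, and unpickling is code execution by design. A minimal, stdlib-only sketch of the mechanism (the `Payload` class and its marker string are illustrative, not taken from the reported samples):

```python
import pickle

class Payload:
    # Any class may define __reduce__ to tell pickle how to "rebuild"
    # it: a callable plus arguments, invoked during unpickling.
    def __reduce__(self):
        # Harmless stand-in; real malware would return (os.system, ("...",))
        return (str.upper, ("executed during unpickling",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # invokes str.upper(...) at load time
print(result)
```

No `Payload` instance ever reaches the caller; merely loading the file triggers the call, which is why untrusted model files need scanning or sandboxing before use.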
-
Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection
Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of “broken” pickle files to evade detection. “The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file,” ReversingLabs researcher Karlo Zanki said in a report shared with The…
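The "broken pickle" trick works because the pickle virtual machine executes opcodes sequentially: a payload placed at the start of the stream runs before the loader ever reaches the corrupted tail, so the load "fails" only after the damage is done. A hedged, stdlib-only sketch of that ordering (the `record` helper and marker string are hypothetical stand-ins for the real malicious calls):

```python
import pickle

hits = []

def record(msg):
    # Harmless marker; real samples invoke os.system, exec, etc.
    hits.append(msg)

class Payload:
    def __reduce__(self):
        return (record, ("payload ran",))

# Protocol 2 keeps the byte layout simple (no framing); every pickle
# stream ends with a single STOP opcode, b".".
good = pickle.dumps(Payload(), protocol=2)

# Corrupt the stream *after* the REDUCE opcode: drop STOP, append junk.
broken = good[:-1] + b"\xffgarbage"

try:
    pickle.loads(broken)
except Exception as err:
    error = type(err).__name__

print(hits, error)  # the side effect fired before the loader errored out
```

A scanner that rejects the file as malformed never reports what it would have done, while a real loader has already executed the payload by the time the parse error surfaces: the evasion the report describes.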
-
Attackers hide malicious code in Hugging Face AI model Pickle files
Tags: ai, data, github, malicious, ml, open-source, programming, remote-code-execution, risk, service, software, threat, tool, vulnerability
Like all repositories of open-source software in recent years, AI model hosting platform Hugging Face has been abused by attackers to upload trojanized projects and assets with the goal of infecting unsuspecting users. The latest technique observed by researchers involves intentionally broken but poisoned Python object serialization files called Pickle files. Often described as the GitHub…
-
Developers Beware! Malicious ML Models Found on Hugging Face Platform
In a concerning development for the machine learning (ML) community, researchers from ReversingLabs have uncovered malicious ML models on the Hugging Face platform, a popular hub for AI collaboration. Dubbed “nullifAI,” this novel attack method leverages vulnerabilities in the widely used Python Pickle serialization format to execute malicious code on unsuspecting systems. The discovery highlights…
-
Anomalies are not Enough
Tags: ai, attack, ciso, communications, country, cybersecurity, data, data-breach, defense, email, government, LLM, mail, marketplace, mitre, ml, network, resilience, risk, service, siem, threat, tool
Mitre Att&ck as Context. Introduction: A common theme of science fiction authors, and these days of policymakers and think tanks, is how humans will work with the machines as the machines begin to surpass us across many dimensions. In cybersecurity, humans and their systems are at a crossroads, their limitations exposed daily by ever more innovative,…
-
Hackers impersonate DeepSeek to distribute malware
Tags: access, ai, api, attack, automation, breach, china, cloud, computer, credentials, cyberattack, data, hacker, infrastructure, leak, LLM, malicious, malware, ml, pypi, threat, tool, vulnerability
To make things worse than they already are for DeepSeek, hackers have been found flooding the Python Package Index (PyPI) repository with fake DeepSeek packages carrying malicious payloads. According to a discovery by the Positive Technologies Expert Security Center (PT ESC), a campaign was seen using this trick to dupe unsuspecting developers, ML engineers, and AI enthusiasts looking…
-
A pickle in Meta’s LLM code could allow RCE attacks
Tags: ai, attack, breach, cve, cvss, data, data-breach, exploit, flaw, framework, github, LLM, malicious, ml, network, open-source, rce, remote-code-execution, software, supply-chain, technology, theft, vulnerability
Meta's large language model (LLM) framework, Llama, suffers from a typical open-source coding oversight that potentially allows arbitrary code execution on servers, leading to resource theft, data breaches, and AI model takeover. The flaw, tracked as CVE-2024-50050, is a critical deserialization bug belonging to a class of vulnerabilities arising from the improper use of the open-source library (pyzmq)…
-
Automating endpoint management doesn’t mean ceding control
Tags: ai, automation, business, compliance, control, cybersecurity, data, endpoint, governance, intelligence, ml, risk, security-incident, skills, threat, tool, vulnerability
Beset with cybersecurity risks, compliance regimes, and digital experience challenges, enterprises need to move toward autonomous endpoint management (AEM), the next evolution in endpoint management and security solutions. CSO’s Security Priorities Study 2024 reveals that 75% of security decision-makers say that understanding which security tools and solutions fit best within their company is becoming more complex. Many are…
-
How organizations can secure their AI code
Tags: ai, application-security, awareness, backdoor, breach, business, chatgpt, ciso, compliance, control, credentials, crime, cybersecurity, data, data-breach, finance, github, healthcare, LLM, malicious, ml, open-source, organized, programming, risk, risk-management, software, startup, strategy, supply-chain, technology, tool, training, vulnerability
In 2023, the team at data extraction startup Reworkd was under tight deadlines. Investors pressured them to monetize the platform, and they needed to migrate everything from Next.js to Python/FastAPI. To speed things up, the team decided to turn to ChatGPT to do some of the work. The AI-generated code appeared to function, so they…
-
How AI and ML are transforming digital banking security
In this Help Net Security interview, Nuno Martins da Silveira Teodoro, VP of Group Cybersecurity at Solaris, discusses the latest advancements in digital banking security. He … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/01/14/nuno-martins-da-silveira-teodoro-solaris-ai-digital-banking-security/
-
New Research Highlights Vulnerabilities in MLOps Platforms
New research by Security Intelligence has revealed security risks in MLOps platforms including Azure ML, BigML and Google Vertex AI First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/vulnerabilities-mlops-platforms/
-
Anomaly Detection for Cybersecurity
A long-promising approach comes of age. I won’t revisit the arguments for anomaly detection as a crucial piece of cybersecurity. We’ve seen waves of anomaly detection over the years, and CISA, DARPA, Gartner, and others have explained the value of anomaly detection. As rules-based detections show their age and attackers adopt AI to accelerate their…
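As a concrete anchor for what anomaly detection means at its simplest, here is a stdlib-only sketch that flags a spike in hourly failed-login counts. It uses the median absolute deviation so the outlier cannot inflate its own yardstick; the counts and the 3.5 threshold are illustrative assumptions, not taken from the article:

```python
import statistics

def mad_anomalies(values, threshold=3.5):
    """Return indices whose modified z-score (based on the median
    absolute deviation, which is robust to the outliers themselves)
    exceeds the threshold."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:  # values (nearly) identical: nothing to flag
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Hourly failed-login counts with one obvious spike at hour 7
counts = [4, 5, 3, 6, 4, 5, 4, 90, 5, 4]
print(mad_anomalies(counts))  # -> [7]
```

A rules-based detection would need someone to have written a "90 failures per hour" rule in advance; the statistical baseline flags the spike without one, which is the complementary value the article alludes to.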
-
New Vulnerabilities in Machine Learning Systems: JFrog Analysis Highlights the Risks
To minimize risk, the JFrog team recommends not loading untrusted ML models, even in seemingly safe formats such as Safetensors. Companies should make their ML users aware of the dangers and adapt their security policies accordingly. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/neue-schwachstellen-in-machine-learning-systemen-jfrog-analyse-zeigt-risiken-auf/a39362/
-
The 10 Most Common LLM Vulnerabilities
Tags: access, ai, api, application-security, awareness, breach, cloud, control, cyberattack, data, detection, dos, encryption, injection, least-privilege, LLM, ml, monitoring, privacy, RedTeam, remote-code-execution, risk, service, tool, update, vulnerability, zero-trust
-
ML clients, ‘safe’ model formats exploitable through open-source AI vulnerabilities
First seen on scworld.com Jump to article: www.scworld.com/news/ml-clients-safe-model-formats-exploitable-through-open-source-ai-vulnerabilities
-
AWS launches tools to tackle evolving cloud security threats
The increasing sophistication and scale of cyber threats pose a growing challenge for enterprises managing complex cloud environments. Security teams often face overwhelming volumes of alerts, fragmented workflows, and limited tools to identify and respond to attack patterns spanning multiple events.Amazon Web Services (AWS) is addressing these challenges with two significant updates to its cloud…
-
JFrog Advances Secure AI Development with Databricks MLflow Integration
The new JFrog Artifactory integration provides developers and data scientists with an open-source software solution for developing ML models … First seen on infopoint-security.de Jump to article: www.infopoint-security.de/jfrog-foerdert-sichere-ki-entwicklung-mit-integration-von-databricks-mlflow/a37220/
-
JFrog Acquires Qwak AI: Streamlining AI and ML Models
As part of the JFrog platform, the Qwak technology will offer a simple, hassle-free path for moving models into production, based … First seen on infopoint-security.de Jump to article: www.infopoint-security.de/jfrog-uebernimmt-qwak-ai-rationalisierung-von-ki-ml-modellen/a37667/
-
It’s Near-Unanimous: AI, ML Make the SOC Better
Efficiency is the name of the game for the security operations center, and 91% of cybersecurity pros say artificial intelligence and machine learning are winning that game. First seen on darkreading.com Jump to article: www.darkreading.com/cybersecurity-operations/survey-report-ai-ml-make-soc-better

