Tag: LLM
-
News alert: Hunters announces ‘Pathfinder AI’ to enhance detection and response in SOC workflows
Boston and Tel Aviv, Mar. 4, 2025, CyberNewswire, Hunters, the leader in next-generation SIEM, today announced Pathfinder AI, a major step toward a more AI-driven SOC. Building on Copilot AI, which is already transforming SOC workflows with LLM-powered… (more”¦) First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/news-alert-hunters-announces-pathfinder-ai-to-enhance-detection-and-response-in-soc-workflows/
-
Pathfinder AI Hunters Announces New AI Capabilities for Smarter SOC Automation
Pathfinder AI expands Hunters’ vision for AI-driven SOCs, introducing Agentic AI for autonomous investigation and response. Hunters, the leader in next-generation SIEM, today announced Pathfinder AI, a major step toward a more AI-driven SOC. Building on Copilot AI, which is already transforming SOC workflows with LLM-powered investigation guidance, Hunters is introducing its Agentic AI vision,…
-
it’s.BB e.V. lädt zu Online-Veranstaltung ein: Sicherheitsfragen im Kontext der LLM-Nutzung
Tags: LLMFirst seen on datensicherheit.de Jump to article: www.datensicherheit.de/its-bb-e-v-einladung-online-veranstaltung-sicherheitsfragen-kontext-llm-nutzung
-
LLMjacking Hackers Abuse GenAI With AWS NHIs to Hijack Cloud LLMs
In a concerning development, cybercriminals are increasingly targeting cloud-based generative AI (GenAI) services in a new attack vector dubbed >>LLMjacking.
-
LLMs Are Posing a Threat to Content Security
With the wide application of large language models (LLM) in various fields, their potential risks and threats have gradually become prominent. “Content security” caused by inaccurate or misleading information is becoming a security concern that cannot be ignored. Unfairness and bias, adversarial attacks, malicious code generation, and exploitation of security vulnerabilities continue to raise risk…The…
-
Forscher entdecken LLM-Sicherheitsrisiko
Forscher haben Anmeldeinformationen in den Trainingsdaten von Large Language Models entdeckt.Beliebte LLMs wie DeepSeek werden mit Common Crawl trainiert, einem riesigen Datensatz mit Website-Informationen. Forscher von Truffle Security haben kürzlich einen Datensatz des Webarchives analysiert, der über 250 Milliarden Seiten umfasst und Daten von 47,5 Millionen Hosts enthält. Dabei stellten sie fest, dass rund 12.000…
-
Microsoft files lawsuit against LLMjacking gang that bypassed AI safeguards
LLMjacking can cost organizations a lot of money: LLMjacking is a continuation of the cybercriminal practice of abusing stolen cloud account credentials for various illegal operations, such as cryptojacking, abusing hacked cloud computing resources to mine cryptocurrency. The difference is that large quantities of API calls to LLMs can quickly rack up huge costs, with…
-
ISMG Editors: Black Basta Falls, Is Ransomware on the Ropes?
Also: U.S. Health Data Privacy Crackdowns, Reality vs. Hype of LLMs in Security. In this week’s update, four editors with ISMG explore the crumbling state of ransomware group Black Basta and implications for other cybercrime gangs, the expanding impact of U.S. health data privacy laws, and whether large language models are truly what they seem.…
-
12,000+ API Keys and Passwords Found in Public Datasets Used for LLM Training
A dataset used to train large language models (LLMs) has been found to contain nearly 12,000 live secrets, which allow for successful authentication.The findings once again highlight how hard-coded credentials pose a severe security risk to users and organizations alike, not to mention compounding the problem when LLMs end up suggesting insecure coding practices to…
-
Integration with Gloo Gateway – Impart Security
Securing Web apps, APIs, & LLMs Just Got Easier: Impart’s Native Integration with Gloo Gateway APIs are the backbone of modern applications, but they’re also one of the biggest attack surfaces. As API threats evolve and Large Language Model (LLM) security becomes a pressing concern, organizations need fast, efficient, and easy-to-deploy solutions to protect their…
-
More Research Showing AI Breaking the Rules
These researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating. Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any human, or any of the AI models…
-
DEF CON 32 Decoding Galah, An LLM Powered Web Honeypot
Authors/Presenters: Adel Karimi Our sincere appreciation to DEF CON, and the Authors/Presenters for publishing their erudite DEF CON 32 content. Originating from the conference’s events located at the Las Vegas Convention Center; and via the organizations YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/02/def-con-32-decoding-galah-an-llm-powered-web-honeypot/
-
What is SIEM? Improving security posture through event log data
Tags: access, ai, api, automation, ciso, cloud, compliance, data, defense, detection, edr, endpoint, firewall, fortinet, gartner, google, guide, ibm, infrastructure, intelligence, kubernetes, LLM, microsoft, mitigation, mobile, monitoring, network, openai, regulation, risk, router, security-incident, service, siem, soar, soc, software, threat, toolAt its core, a SIEM is designed to parse and analyze various log files, including firewalls, servers, routers and so forth. This means that SIEMs can become the central “nerve center” of a security operations center, driving other monitoring functions to resolve the various daily alerts.Added to this data are various threat intelligence feeds that…
-
New LLM Vulnerability Exposes AI Models Like ChatGPT to Exploitation
A significant vulnerability has been identified in large language models (LLMs) such as ChatGPT, raising concerns over their susceptibility to adversarial attacks. Researchers have highlighted how these models can be manipulated through techniques like prompt injection, which exploit their text-generation capabilities to produce harmful outputs or compromise sensitive information. Prompt Injection: A Growing Cybersecurity Challenge…
-
DarkMind: A Novel Backdoor Attack Exploiting Customized LLMs’ Reasoning Capabilities
The rise of customized large language models (LLMs) has revolutionized artificial intelligence applications, enabling businesses and individuals to leverage advanced reasoning capabilities for complex tasks. However, this rapid adoption has also exposed critical vulnerabilities. A groundbreaking study by Zhen Guo and Reza Tourani introduces DarkMind, a novel backdoor attack targeting the reasoning processes of customized…
-
CISOs Brace for LLM-Powered Attacks: Key Strategies to Stay Ahead
For chief information security officers (CISOs), understanding and mitigating the security risks associated with LLMs is paramount. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/02/cisos-brace-for-llm-powered-attacks-key-strategies-to-stay-ahead/
-
Top 5 ways attackers use generative AI to exploit your systems
Tags: access, ai, attack, authentication, awareness, banking, captcha, chatgpt, china, control, cyber, cybercrime, cybersecurity, defense, detection, exploit, extortion, finance, flaw, fraud, group, hacker, intelligence, LLM, malicious, malware, network, phishing, ransomware, resilience, service, spam, tactics, theft, threat, tool, vulnerability, zero-dayFacilitating malware development: Artificial intelligence can also be used to generate more sophisticated or at least less labour-intensive malware.For example, cybercriminals are using gen AI to create malicious HTML documents. The XWorm attack, initiated by HTML smuggling, which contains malicious code that downloads and runs the malware, bears the hallmarks of development via AI.”The loader’s…
-
Datenleck durch GenAI-Nutzung
Tags: ai, chatgpt, ciso, compliance, data-breach, gartner, LLM, risk, strategy, tool, training, vulnerabilityViele Mitarbeiter teilen sensible Unternehmensdaten, wenn sie generative KI-Apps anwenden.Laut einem aktuellen Bericht über Gen-AI-Datenlecks von Harmonic enthielten 8,5 Prozent der Mitarbeiteranfragen an beliebte LLMs sensible Daten, was zu Sicherheits-, Compliance-, Datenschutz- und rechtlichen Bedenken führte.Der Security-Spezialist hat im vierten Quartal 2024 Zehntausende von Eingabeaufforderungen an ChatGPT, Copilot, Gemini, Claude und Perplexity analysiert. Dabei stellte…
-
LLMJacking: Sysdig entdeckt neue Angriffe auf DeepSeek
Mit der steigenden Nachfrage nach leistungsfähigen LLMs wächst auch der Missbrauch durch LLMjacking. Schwarzmarktplätze für gestohlene API-Zugänge florieren, und Untergrund-Anbieter passen ihre Dienste kontinuierlich an. Angreifer haben ihre Techniken verfeinert und implementieren neue Modelle wie DeepSeek in kürzester Zeit. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/llmjacking-sysdig-entdeckt-neue-angriffe-auf-deepseek/a39732/
-
Could you Spot a Digital Twin at Work? Get Ready for Hyper-Personalized Attacks
The world is worried about deepfakes. Research conducted in the U.S. and Australia finds that nearly three-quarters of respondents feel negatively about them, associating the AI-generated phenomenon with fraud and misinformation. But in the workplace, we’re more likely to let our guard down. That’s bad news for businesses as the prospect of LLM-trained malicious digital..…
-
LLMjacking-Angriffe auf KI-Modelle wie DeepSeek
Benutzt ihr KI-Modelle wie DeepSeek? Dann findet jetzt heraus, wie LLMjacking-Angriffe diese LLMs und eure Daten bedrohen. First seen on tarnkappe.info Jump to article: tarnkappe.info/artikel/kuenstliche-intelligenz/llmjacking-angriffe-auf-ki-modelle-wie-deepseek-309920.html
-
Hackers Monetize LLMjacking, Selling Stolen AI Access for $30 per Month
LLMjacking attacks target DeepSeek, racking up huge cloud costs. Sysdig reveals a black market for LLM access has… First seen on hackread.com Jump to article: hackread.com/hackers-monetize-llmjacking-selling-stolen-ai-access/
-
Autonomous LLMs Reshaping Pen Testing: Real-World AD Breaches and the Future of Cybersecurity
Large Language Models (LLMs) are transforming penetration testing (pen testing), leveraging their advanced reasoning and automation capabilities to simulate sophisticated cyberattacks. Recent research demonstrates how autonomous LLM-driven systems can effectively perform assumed breach simulations in enterprise environments, particularly targeting Microsoft Active Directory (AD) networks. These advancements mark a significant departure from traditional pen testing methods,…
-
DeepSeek-R1 LLM Fails Over Half of Jailbreak Attacks in Security Analysis
DeepSeek-R1 LLM fails 58% of jailbreak attacks in Qualys security analysis. Learn about the vulnerabilities, compliance concerns, and risks for enterprise adoption. First seen on hackread.com Jump to article: hackread.com/deepseek-r1-llm-fail-jailbreak-attack-security-analysis/

