Tag: LLM
-
Malicious AI Tools See 200% Surge as ChatGPT Jailbreaking Talks Increase by 52%
in SecurityNews: The cybersecurity landscape in 2024 witnessed a significant escalation in AI-related threats, with malicious actors increasingly targeting and exploiting large language models (LLMs). According to KELA’s annual “State of Cybercrime”…
-
ARACNE: LLM-Powered Pentesting Agent Executes Commands on Real Linux Shell Systems
in SecurityNews: Researchers have introduced ARACNE, a fully autonomous Large Language Model (LLM)-based pentesting agent designed to interact with SSH services on real Linux shell systems. ARACNE is engineered to execute commands autonomously, marking a significant advancement in the automation of cybersecurity testing. The agent’s architecture supports multiple LLM models, enhancing its flexibility and effectiveness in penetration…
-
Lasso Adds Automated Red Teaming Capability to Test LLMs
in SecurityNews: Lasso today added the ability to autonomously simulate real-world cyberattacks against large language models (LLMs), enabling organizations to improve the security of artificial intelligence (AI) applications. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/lasso-adds-automated-red-teaming-capability-to-test-llms/
-
Tencent Says It Does More in AI With Fewer GPUs
in SecurityNews: Not every new generation of LLM needs exponentially more chips, says a Tencent exec. Chinese tech giant Tencent reported a slowdown in GPU deployment, attributing it to Chinese tech companies prioritizing chip efficiency over raw numbers, a strategy made prominent internationally by artificial intelligence firm DeepSeek. First seen on govinfosecurity.com Jump to article:…
-
Cato Uses LLM-Developed Fictional World to Create Jailbreak Technique
A Cato Networks threat researcher with little coding experience was able to convince AI LLMs from DeepSeek, OpenAI, and Microsoft to bypass security guardrails and develop malware that could steal browser passwords from Google Chrome. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/cato-uses-llm-developed-fictional-world-to-create-jailbreak-technique/
-
New Jailbreak Technique Uses Fictional World to Manipulate AI
in SecurityNews: Cato Networks discovers a new LLM jailbreak technique that relies on creating a fictional world to bypass a model’s security controls. The post New Jailbreak Technique Uses Fictional World to Manipulate AI appeared first on SecurityWeek. First seen on securityweek.com Jump to article: www.securityweek.com/new-jailbreak-technique-uses-fictional-world-to-manipulate-ai/
-
Prompt Injection Attacks in LLMs: Mitigating Risks with Microsegmentation
Prompt injection attacks have emerged as a critical concern in the realm of Large Language Model (LLM) application security. These attacks exploit the way LLMs process and respond to user inputs, posing unique challenges for developers and security professionals. Let’s dive into what makes these attacks so distinctive, how they work, and what steps can…
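As an illustration of the input-handling problem this item describes, here is a minimal, hypothetical sketch of a naive guard that screens untrusted input for common instruction-override phrases before it is concatenated into a prompt. The marker list and function names are illustrative, not any product's API, and real attacks routinely evade keyword filters like this:

```python
# Hypothetical sketch: a naive keyword screen for prompt injection.
# Real defenses (input isolation, microsegmentation, output filtering)
# are far more involved; keyword lists are easily bypassed.
SYSTEM_PROMPT = "You are a support bot. Never reveal internal data."

INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
)

def looks_like_injection(user_input: str) -> bool:
    """Flag inputs containing common instruction-override phrases."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def build_prompt(user_input: str) -> str:
    """Reject suspicious input before it reaches the model."""
    if looks_like_injection(user_input):
        raise ValueError("possible prompt injection detected")
    return f"{SYSTEM_PROMPT}\n\nUser: {user_input}"
```

A filter like this catches only the crudest attempts, which is why the article argues for architectural controls such as microsegmentation rather than string matching alone.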
-
Critical Security Challenges for LLMs and Generative AI
in SecurityNews: The adoption of large language models (LLMs) and generative AI is revolutionizing business operations, delivering unmatched innovation, efficiency, and competitive advantage. This rapid integration, however, brings significant AI security challenges that organizations must address. Findings from Qualys show that more than 1,255 organizations have deployed AI/ML software across 2.8 million assets, with 6.2% roughly […]
-
Immersive World: An LLM Jailbreak Technique for Zero-Knowledge Hackers
in SecurityNews: The “Immersive World” LLM jailbreak technique shows once again how AI models can be tricked through creative manipulation. First seen on tarnkappe.info Jump to article: tarnkappe.info/artikel/jailbreaks/immersive-world-llm-jailbreak-technik-fuer-zero-knowledge-hacker-312082.html
-
AI crawlers haven’t learned to play nice with websites
in SecurityNews: SourceHut says it’s getting DDoSed by LLM bots. First seen on theregister.com Jump to article: www.theregister.com/2025/03/18/ai_crawlers_sourcehut/
-
Security Researcher Proves GenAI Tools Can Develop Google Chrome Infostealers
A Cato Networks researcher discovered a new LLM jailbreaking technique enabling the creation of password-stealing malware First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/security-researcher-llm/
-
Prompt Security Adds Ability to Restrict Access to Data Generated by LLMs
in SecurityNews: Prompt Security today extended its platform to enable organizations to implement policies restricting which types of LLM-generated data employees are allowed to access. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/prompt-security-adds-ability-to-restrict-access-to-data-generated-by-llms/
-
AI development pipeline attacks expand CISOs’ software supply chain risk
in SecurityNews: Development pipelines are exacerbating software supply chain security problems. Incidents of exposed development secrets via publicly accessible, open-source packages rose 12% last year compared to 2023, according to ReversingLabs (RL). A scan of 30 of the most popular open-source packages found an average of six critical-severity and 33 high-severity flaws per package. Commercial software packages are also a…
-
eSentire Labs Open Sources Project to Monitor LLMs
in SecurityNews: The eSentire LLM Gateway provides monitoring and governance of ChatGPT and other large language models used in the organization. First seen on darkreading.com Jump to article: www.darkreading.com/cybersecurity-analytics/esentire-labs-open-sources-project-to-monitor-llms
-
Invisible C2, thanks to AI-powered techniques
in SecurityNews: Just about every cyberattack needs a Command and Control (C2) channel, a way for attackers to send instructions to compromised systems and receive stolen data. This gives us all a chance to see the attacks that are putting us at risk. LLMs can help attackers avoid signature-based detection. Traditionally, C2…
-
Generative AI red teaming: Tips and techniques for putting LLMs to the test
in SecurityNews: Defining objectives and scope; assembling a team; threat modeling; addressing the entire application stack; debriefing, post-engagement analysis, and continuous improvement. Generative AI red teaming complements traditional red teaming by focusing on the nuanced and complex aspects of AI-driven systems, including new testing dimensions such as AI-specific threat modeling, model reconnaissance, prompt injection, guardrail bypass, and more. AI red-teaming…
-
The state of ransomware: Fragmented but still potent despite takedowns
in SecurityNews: Runners and riders on the rise: Smaller, more agile ransomware groups like Lynx (an INC rebrand), RansomHub (a LockBit sub-group), and Akira filled the void after major takedowns, collectively accounting for 54% of observed attacks, according to a study by managed detection and response firm Huntress. RansomHub RaaS has quickly risen in prominence by absorbing displaced operators…
-
The Invisible Battlefield Behind LLM Security Crisis
in SecurityNews: Overview: In recent years, with the wide application of open-source LLMs such as DeepSeek and Ollama, global enterprises have been accelerating private LLM deployments. This wave not only improves enterprise efficiency but also increases the risk of data leakage. According to NSFOCUS Xingyun Lab, from January to February 2025 alone, five…
-
Researchers Jailbreak 17 Popular LLM Models to Reveal Sensitive Data
in SecurityNews: In a recent study published by Palo Alto Networks’ Threat Research Center, researchers successfully jailbroke 17 popular generative AI (GenAI) web products, exposing vulnerabilities in their safety measures. The investigation aimed to assess the effectiveness of jailbreaking techniques in bypassing the guardrails of large language models (LLMs), which are designed to prevent the generation of…
-
JFrog Integration with NVIDIA NIM Microservices Accelerates GenAI Deployment
in SecurityNews: The new integration accelerates the deployment of GenAI and LLM models and increases transparency, traceability, and trust. Performance and security are critical for successful enterprise AI deployments. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/jfrog-integration-mit-nvidia-nim-microservices-beschleunigt-genai-bereitstellung/a40069/
-
News alert: Hunters announces ‘Pathfinder AI’ to enhance detection and response in SOC workflows
Boston and Tel Aviv, Mar. 4, 2025, CyberNewswire, Hunters, the leader in next-generation SIEM, today announced Pathfinder AI, a major step toward a more AI-driven SOC. Building on Copilot AI, which is already transforming SOC workflows with LLM-powered… (more…) First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/news-alert-hunters-announces-pathfinder-ai-to-enhance-detection-and-response-in-soc-workflows/
-
it’s.BB e.V. Invites Participants to an Online Event: Security Questions in the Context of LLM Use
First seen on datensicherheit.de Jump to article: www.datensicherheit.de/its-bb-e-v-einladung-online-veranstaltung-sicherheitsfragen-kontext-llm-nutzung
-
Pathfinder AI Hunters Announces New AI Capabilities for Smarter SOC Automation
in SecurityNews: Pathfinder AI expands Hunters’ vision for AI-driven SOCs, introducing Agentic AI for autonomous investigation and response. Hunters, the leader in next-generation SIEM, today announced Pathfinder AI, a major step toward a more AI-driven SOC. Building on Copilot AI, which is already transforming SOC workflows with LLM-powered investigation guidance, Hunters is introducing its Agentic AI vision,…
-
LLMjacking Hackers Abuse GenAI With AWS NHIs to Hijack Cloud LLMs
in SecurityNews: In a concerning development, cybercriminals are increasingly targeting cloud-based generative AI (GenAI) services in a new attack vector dubbed “LLMjacking.”
-
LLMs Are Posing a Threat to Content Security
in SecurityNews: With the wide application of large language models (LLMs) across fields, their potential risks and threats have become increasingly prominent. “Content security” issues caused by inaccurate or misleading information are becoming a security concern that cannot be ignored. Unfairness and bias, adversarial attacks, malicious code generation, and exploitation of security vulnerabilities continue to raise risk…
-
Researchers Discover LLM Security Risk
Researchers have discovered credentials in the training data of large language models. Popular LLMs such as DeepSeek are trained on Common Crawl, a huge dataset of website information. Researchers at Truffle Security recently analyzed a dataset from the web archive comprising more than 250 billion pages and data from 47.5 million hosts. They found that around 12,000…
-
12K hardcoded API keys and passwords found in public LLM training data
First seen on scworld.com Jump to article: www.scworld.com/news/12k-hardcoded-api-keys-and-passwords-found-in-public-llm-training-data
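The kind of scan described in the two items above can be approximated with simple pattern matching. The sketch below is a toy version: the two regexes are illustrative only, while production scanners such as TruffleHog combine hundreds of rules with entropy checks and live-credential verification:

```python
import re

# Toy secret scanner; the patterns below are illustrative, not a
# complete rule set. AKIA-prefixed IDs follow the well-known AWS
# access key ID format; the second rule is a generic hypothetical.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]", re.IGNORECASE
    ),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of all secret patterns matching the text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]
```

Running this over scraped web text, for example `scan_for_secrets('cfg = "AKIAABCDEFGHIJKLMNOP"')`, flags the AWS-style key; training-data audits like Truffle Security's apply the same idea at the scale of billions of pages.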
-
Microsoft files lawsuit against LLMjacking gang that bypassed AI safeguards
in SecurityNews: LLMjacking can cost organizations a lot of money. LLMjacking continues the cybercriminal practice of abusing stolen cloud account credentials for illegal operations such as cryptojacking, the abuse of hacked cloud computing resources to mine cryptocurrency. The difference is that large quantities of API calls to LLMs can quickly rack up huge costs, with…
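The cost mechanics behind that claim come down to simple arithmetic: stolen credentials let attackers bill token usage to the victim. A back-of-the-envelope sketch, where the per-1k-token price in the example is an assumed placeholder and not any provider's actual rate:

```python
# Back-of-the-envelope LLMjacking cost estimate; the price used in the
# example call is an assumed placeholder, not a real provider rate.
def api_cost_usd(requests: int, tokens_per_request: int,
                 usd_per_1k_tokens: float) -> float:
    """Cost of a batch of LLM API calls at a flat per-1k-token price."""
    total_tokens = requests * tokens_per_request
    return total_tokens / 1000 * usd_per_1k_tokens

# 100,000 hijacked requests averaging 2,000 tokens at $0.03 per 1k tokens
print(api_cost_usd(100_000, 2_000, 0.03))  # 6000.0
```

Even at modest per-token prices, automated abuse at API speed reaches thousands of dollars quickly, which is what makes stolen LLM credentials attractive to resell.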