Tag: LLM
-
Thales named Growth Index leader in Frost Radar: Data Security Platforms Report
Tags: access, ai, business, cloud, compliance, container, control, data, defense, detection, edr, encryption, endpoint, governance, identity, intelligence, LLM, monitoring, risk, saas, service, siem, soc, technology, toolThales named Growth Index leader in Frost Radar: Data Security Platforms Report madhav Tue, 01/20/2026 – 04:29 Data has always been the backbone of enterprise operations, but the rise of cloud, big data, and GenAI has multiplied its value and, with it, the motivation for attackers. In parallel, regulatory expectations are increasing and evolving. The…
-
For the price of Netflix, crooks can now rent AI to run cybercrime
Group-IB says crims forking out for Dark LLMs, deepfakes, and more at subscription prices First seen on theregister.com Jump to article: www.theregister.com/2026/01/20/group_ib_ai_cycercrime_subscriptions/
-
Google Gemini flaw exposes new AI prompt injection risks for enterprises
Real enterprise exposure: Analysts point out that the risk is significant in enterprise environments as organizations rapidly deploy AI copilots connected to sensitive systems.”As internal copilots ingest data from emails, calendars, documents, and collaboration tools, a single compromised account or phishing email can quietly embed malicious instructions,” said Chandrasekhar Bilugu, CTO of SureShield. “When employees…
-
Relevante Entwicklungen und Cyberrisiken der künstlichen Intelligenz
Sowohl auf Angreifer- als auch auf Verteidigerseite kommt vermehrt Künstliche-Intelligenz-Technologie zum Einsatz. Vor allem Large-Language-Models (LLMs) werden von Entwicklerinnen und Entwicklern etwa zum ‘Wipe Coding”, also für das Erstellen von Skripten und Codes, genutzt. Cyberkriminelle nutzen KI gleichermaßen für ‘Wipe Hacking”. Auch wenn sich Angreifer mithilfe von LLMs (noch) keine Exploits erstellen lassen können, so…
-
7 top cybersecurity projects for 2026
Tags: access, ai, api, attack, authentication, business, cisco, ciso, cloud, communications, compliance, control, credentials, cybersecurity, data, defense, detection, email, framework, governance, infrastructure, LLM, mail, phishing, programming, resilience, risk, software, strategy, technology, threat, tool, vulnerability, zero-trust2. Strengthening email security: Phishing continues to be a primary attack vector for stealing credentials and defrauding victims, says Mary Ann Blair, CISO at Carnegie Mellon University. She warns that threat actors are now generating increasingly sophisticated phishing attacks, effectively evading mail providers’ detection capabilities. “Legacy multifactor authentication techniques are now regularly defeated, and threat…
-
7 top cybersecurity projects for 2026
Tags: access, ai, api, attack, authentication, business, cisco, ciso, cloud, communications, compliance, control, credentials, cybersecurity, data, defense, detection, email, framework, governance, infrastructure, LLM, mail, phishing, programming, resilience, risk, software, strategy, technology, threat, tool, vulnerability, zero-trust2. Strengthening email security: Phishing continues to be a primary attack vector for stealing credentials and defrauding victims, says Mary Ann Blair, CISO at Carnegie Mellon University. She warns that threat actors are now generating increasingly sophisticated phishing attacks, effectively evading mail providers’ detection capabilities. “Legacy multifactor authentication techniques are now regularly defeated, and threat…
-
One click is all it takes: How ‘Reprompt’ turned Microsoft Copilot into data exfiltration tools
What devs and security teams should do now: As in usual security practice, enterprise users should always treat URLs and external inputs as untrusted, experts advised. Be cautious with links, be on the lookout for unusual behavior, and always pause to review pre-filled prompts.”This attack, like many others, originates with a phishing email or text…
-
enclaive unterstützt mit GenAI-Firewall Garnet eine sichere KI-Nutzung
Mit der GenAI-Firewall Garnet adressiert enclaive diese Herausforderungen ganzheitlich. Die Lösung ermöglicht Unternehmen, LLMs und SLMs sicher zu nutzen, ohne die Kontrolle über ihre Daten aus der Hand zu geben. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/enclaive-unterstuetzt-mit-genai-firewall-garnet-eine-sichere-ki-nutzung/a43350/
-
2 Separate Campaigns Probe Corporate LLMs for Secrets
A total of 91,403 sessions targeted public LLM endpoints to find leaks in organizations’ use of AI and map an expanding attack surface. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/separate-campaigns-target-exposed-llm-services
-
NDSS 2025 LLMPirate: LLMs For Black-box Hardware IP Piracy
Tags: attack, conference, detection, firmware, Hardware, Internet, LLM, mitigation, network, software, vulnerabilitySession 8C: Hard & Firmware Security Authors, Creators & Presenters: Vasudev Gohil (Texas A&M University), Matthew DeLorenzo (Texas A&M University), Veera Vishwa Achuta Sai Venkat Nallam (Texas A&M University), Joey See (Texas A&M University), Jeyavijayan Rajendran (Texas A&M University) PAPER LLMPirate: LLMs for Black-box Hardware IP Piracy The rapid advancement of large language models (LLMs)…
-
Threat Actors Launch Mass Reconnaissance of AI Systems
More Than 91,000 Attacks Target Exposed LLM Endpoints in Coordinated Campaigns. Two coordinated campaigns generated more than 91,000 attack sessions against AI infrastructure between October and January, with threat actors probing more than 70 model endpoints from OpenAI, Anthropic and Google to build target lists for future exploitation. First seen on govinfosecurity.com Jump to article:…
-
Attackers Probing Popular LLMs Looking for Access to APIs: Report
Security researchers with GreyNoise say they’ve detected a campaign in which the threat actors are targeting more than 70 popular AI LLM models in a likely reconnaissance mission that will feed into what they call a “larger exploitation pipeline.” First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/01/attackers-probing-popular-llms-looking-for-access-to-apis-report/
-
Corrupting LLMs Through Weird Generalizations
Fascinating research: Weird Generalization and Inductive Backdoors: New Ways to Corrupt LLMs. AbstractLLMs are useful because they generalize so well. But can you have too much of a good thing? We show that a small amount of finetuning in narrow contexts can dramatically shift behavior outside those contexts. In one experiment, we finetune a model…
-
Two Separate Campaigns Target Exposed LLM Services
A total of 91,403 sessions targeted public LLM endpoints to find leaks in organizations’ use of AI and map an expanding attack surface. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/separate-campaigns-target-exposed-llm-services
-
Shai-Hulud & Co.: Die Supply Chain als Achillesferse
Tags: access, ai, application-security, backdoor, ciso, cloud, cyber, cyberattack, data, github, Hardware, infrastructure, kritis, kubernetes, LLM, monitoring, network, nis-2, programming, resilience, risk, rust, sbom, software, spyware, strategy, supply-chain, tool, vulnerabilityEgal, ob React2Shell, Shai-Hulud oder XZ Utils: Die Sicherheit der Software-Supply-Chain wird durch zahlreiche Risiken gefährdet.Heutige Anwendungen basieren auf zahlreichen Komponenten, von denen jede zusammen mit den Entwicklungsumgebungen selbst eine Angriffsfläche darstellt. Unabhängig davon, ob Unternehmen Code intern entwickeln oder sich auf Drittanbieter verlassen, sollten CISOs, Sicherheitsexperten und Entwickler der Software-Supply-Chain besondere Aufmerksamkeit schenken.Zu den…
-
ZombieAgent ChatGPT attack shows persistent data leak risks of AI agents
Worm-like propagation: The email attack even has worming capabilities, as the malicious prompts could instruct ChatGPT to scan the inbox, extract addresses from other email messages, exfiltrate those addresses to the attackers using the URL trick, and send similar poisoned messages to those addresses as well.If the victim is the employee of an organization that…
-
Hackers target misconfigured proxies to access paid LLM services
Threat actors are systematically hunting for misconfigured proxy servers that could provide access to commercial large language model (LLM) services. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/hackers-target-misconfigured-proxies-to-access-paid-llm-services/
-
AI Deployments Targeted in 91,000+ Attack Sessions
Researchers observed over 91,000 attack sessions targeting AI infrastructure and LLM deployments. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/threats/ai-deployments-targeted-in-91000-attack-sessions/
-
Are There IDORs Lurking in Your Code? LLMs Are Finding Critical Business Logic Vulns”, and They’re Everywhere
Security teams have always known that insecure direct object references (IDORs) and broken authorization vulnerabilities exist in their codebases. Ask any AppSec leader if they have IDOR issues, and most would readily admit they do. But here’s the uncomfortable truth: they’ve been dramatically underestimating the scope of the problem. Recent bug bounty data tells a..…
-
Red-Teaming als Eckpfeiler der KI-Compliance
KI-Systeme spielen in allen Branchen zunehmend eine zentrale Rolle bei kritischen Vorgängen. Gleichzeitig steigen die Sicherheitsrisiken durch den Einsatz der künstlichen Intelligenz rapide. Red Teaming hat sich als Eckpfeiler zum Schutz von KI etabliert insbesondere, wenn agentenbasierte KI immer stärkeren Einzug in Unternehmen hält. Multi-LLM (Large-Language-Models)-Systeme treffen autonome Entscheidungen und führen Aufgaben ohne menschliches […]…
-
Red Teaming als Eckpfeiler der KI-Compliance
ie zunehmende Verbreitung agentenbasierter KI verändert die Angriffsflächen von Organisationen grundlegend. Im Unterschied zu Assistenten mit einem einzelnen LLM bestehen diese Systeme aus miteinander verbundenen Agenten mit komplexen Arbeitsabläufen und Abhängigkeiten, First seen on infopoint-security.de Jump to article: www.infopoint-security.de/red-teaming-als-eckpfeiler-der-ki-compliance/a43314/
-
ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues
Will LLMs ever be able to stamp out the root cause of these attacks? Possibly not. First seen on arstechnica.com Jump to article: arstechnica.com/security/2026/01/chatgpt-falls-to-new-data-pilfering-attack-as-a-vicious-cycle-in-ai-continues/

