Tag: LLM
-
NSFOCUS Unveils Enhanced AI LLM Risk Threat Matrix for Holistic AI Security Governance
SANTA CLARA, Calif., Jan 29, 2026 – Security is a prerequisite for the application and development of LLM technology. Only by addressing security risks when integrating LLMs can businesses ensure healthy and sustainable growth. NSFOCUS first proposed the AI LLM Risk Threat Matrix in 2024. The Matrix addresses security from multiple perspectives, including foundational security and data security. …
-
Crooks are hijacking and reselling AI infrastructure: Report
Tags: access, ai, api, attack, authentication, business, cloud, communications, control, credentials, cybersecurity, data, data-breach, endpoint, exploit, firewall, group, infosec, infrastructure, intelligence, Internet, LLM, malicious, marketplace, risk, service, skills, technology, theft, threat, training, vulnerability
Exposed assets probed by the attackers include:
- exposed endpoints on default ports of common LLM inference services;
- unauthenticated API access without proper access controls;
- development/staging environments with public IP addresses;
- MCP servers connecting LLMs to file systems, databases and internal APIs.
Common misconfigurations leveraged by these threat actors include:
- Ollama running on port 11434 without authentication;
- OpenAI-compatible APIs on port 8000 exposed to the internet;
- MCP servers accessible without…
-
Multi-agent systems become the new operating model for enterprises
The Databricks report "State of AI Agents" shows that model flexibility (a flexible choice of models) is the new AI strategy, with 78 percent of companies using two or more LLM model families. The value of AI agents will prove itself in 2026. Observations across the industry show that AI has already become an integral part of critical workflows. One of the…
-
AI & the Death of Accuracy: What It Means for Zero-Trust
AI model collapse, where LLMs over time train on more and more AI-generated data and become degraded as a result, can introduce inaccuracies, promulgate malicious activity, and impact PII protections. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/ai-death-accuracy-zero-trust
-
Anthropic writes 23,000-word ‘constitution’ for Claude, suggests it may have feelings
Tags: LLM
Describes its LLMs as an ‘entity’ that probably has something like emotions. First seen on theregister.com Jump to article: www.theregister.com/2026/01/22/anthropic_claude_constitution/
-
Overrun with AI slop, cURL scraps bug bounties to ensure intact mental health
The onslaught includes LLMs finding bogus vulnerabilities and code that won’t compile. First seen on arstechnica.com Jump to article: arstechnica.com/security/2026/01/overrun-with-ai-slop-curl-scraps-bug-bounties-to-ensure-intact-mental-health/
-
Securing Generative AI: A Technical Guide to Protecting Your LLM Infrastructure
The GenAI Gold Rush: Why Network Infrastructure Security Is Paramount
Generative AI (GenAI) and Large Language Models (LLMs) are rapidly reshaping enterprise IT, powering everything from developer copilots and customer support automation to advanced analytics and decision-making. As adoption accelerates, GenAI is quickly becoming embedded in business-critical workflows. However, this rapid innovation creates a double-edged…
-
Three vulnerabilities in Anthropic Git MCP Server could let attackers tamper with LLMs
The flaws affect mcp-server-git versions prior to 2025-12.18. The three vulnerabilities are:
- CVE-2025-68143, an unrestricted git_init;
- CVE-2025-68145, a path validation bypass;
- CVE-2025-68144, an argument injection in git_diff.
Unlike other vulnerabilities in MCP servers that required specific configurations, these work on any configuration of Anthropic’s official server, out of the box, Cyata says. Model Context Protocol (MCP) is an open standard introduced by Anthropic in 2024 to…
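The git_diff issue is an argument-injection class of bug. The sketch below is not Anthropic's actual fix; the function name and allow-list are assumptions, but the pattern shown — rejecting option-like, leading-dash arguments before they reach a git subprocess — is the standard mitigation for this bug class.

```python
import re

# Illustrative allow-list for revision arguments; real git refs permit
# more characters, but a conservative pattern is safer for a demo.
SAFE_REF = re.compile(r"[A-Za-z0-9._/-]+")

def build_git_diff_argv(ref: str) -> list[str]:
    # A leading dash would let git parse the "ref" as an option
    # (e.g. --output=/some/path), which is the injection class here.
    if ref.startswith("-") or not SAFE_REF.fullmatch(ref):
        raise ValueError(f"refusing suspicious ref: {ref!r}")
    # Passing an argv list (never a shell string) avoids shell injection too
    return ["git", "diff", ref]
```

A caller would hand the returned list to `subprocess.run(argv)`; the validation step is the part the vulnerable code path reportedly lacked.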
-
Flaws in Chainlit AI dev framework expose servers to compromise
The /proc/self/environ file is used to store environment variables, and these can contain API keys, credentials, internal file paths, database paths, tokens for AWS and other cloud services, and even CHAINLIT_AUTH_SECRET, a secret that’s used to sign authentication tokens when authentication is enabled. On top of that, if LangChain is used as the orchestration layer behind Chainlit…
-
The LimaCharlie Manifesto: Security for an Autonomous Future
Tags: access, advisory, ai, api, automation, cloud, control, cybersecurity, data, infrastructure, LLM, technology, threat, tool
Cybersecurity is standing at an inflection point. The proliferation of agentic AI and LLMs does not signal a gradual shift, but a radical transformation. The security tools, assumptions, and architectures of the last twenty years can no longer keep pace with the challenges and threats of today. AI changed the rules. Attackers have quickly adapted. …
-
When Language Becomes the Attack Surface: Inside the Google Gemini Calendar Exploit
Tags: ai, attack, cybersecurity, data-breach, exploit, flaw, google, LLM, malicious, software, vulnerability
Security teams have spent decades hardening software against malicious input, yet a recent vulnerability involving Google Gemini demonstrates how those assumptions begin to fracture when language itself becomes executable. The issue, disclosed by cybersecurity researchers at Miggo Security, exposed a subtle but powerful flaw in how natural language interfaces like AI LLMs interact with privileged…
-
Thales named Growth Index leader in Frost Radar: Data Security Platforms Report
Tags: access, ai, business, cloud, compliance, container, control, data, defense, detection, edr, encryption, endpoint, governance, identity, intelligence, LLM, monitoring, risk, saas, service, siem, soc, technology, tool
Posted by madhav, Tue 01/20/2026 – 04:29. Data has always been the backbone of enterprise operations, but the rise of cloud, big data, and GenAI has multiplied its value and, with it, the motivation for attackers. In parallel, regulatory expectations are increasing and evolving. The…
-
For the price of Netflix, crooks can now rent AI to run cybercrime
Group-IB says crims forking out for Dark LLMs, deepfakes, and more at subscription prices First seen on theregister.com Jump to article: www.theregister.com/2026/01/20/group_ib_ai_cycercrime_subscriptions/
-
Google Gemini flaw exposes new AI prompt injection risks for enterprises
Real enterprise exposure: Analysts point out that the risk is significant in enterprise environments as organizations rapidly deploy AI copilots connected to sensitive systems. “As internal copilots ingest data from emails, calendars, documents, and collaboration tools, a single compromised account or phishing email can quietly embed malicious instructions,” said Chandrasekhar Bilugu, CTO of SureShield. “When employees…
-
Relevant developments and cyber risks of artificial intelligence
Artificial intelligence technology is increasingly being used on both the attacker and the defender side. Large language models (LLMs) in particular are used by developers for ‘vibe coding’, i.e. the creation of scripts and code. Cybercriminals likewise use AI for ‘vibe hacking’. Even though attackers cannot (yet) use LLMs to create exploits for them, …
-
7 top cybersecurity projects for 2026
Tags: access, ai, api, attack, authentication, business, cisco, ciso, cloud, communications, compliance, control, credentials, cybersecurity, data, defense, detection, email, framework, governance, infrastructure, LLM, mail, phishing, programming, resilience, risk, software, strategy, technology, threat, tool, vulnerability, zero-trust
2. Strengthening email security: Phishing continues to be a primary attack vector for stealing credentials and defrauding victims, says Mary Ann Blair, CISO at Carnegie Mellon University. She warns that threat actors are now generating increasingly sophisticated phishing attacks, effectively evading mail providers’ detection capabilities. “Legacy multifactor authentication techniques are now regularly defeated, and threat…
-
One click is all it takes: How ‘Reprompt’ turned Microsoft Copilot into a data exfiltration tool
What devs and security teams should do now: In line with standard security practice, enterprise users should always treat URLs and external inputs as untrusted, experts advised. Be cautious with links, be on the lookout for unusual behavior, and always pause to review pre-filled prompts. “This attack, like many others, originates with a phishing email or text…
-
enclaive enables secure AI use with its GenAI firewall Garnet
With the GenAI firewall Garnet, enclaive addresses these challenges holistically. The solution enables companies to use LLMs and SLMs securely without giving up control over their data. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/enclaive-unterstuetzt-mit-genai-firewall-garnet-eine-sichere-ki-nutzung/a43350/
-
2 Separate Campaigns Probe Corporate LLMs for Secrets
A total of 91,403 sessions targeted public LLM endpoints to find leaks in organizations’ use of AI and map an expanding attack surface. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/separate-campaigns-target-exposed-llm-services
-
NDSS 2025 – LLMPirate: LLMs for Black-box Hardware IP Piracy
Tags: attack, conference, detection, firmware, Hardware, Internet, LLM, mitigation, network, software, vulnerability
Session 8C: Hardware & Firmware Security. Authors, Creators & Presenters: Vasudev Gohil (Texas A&M University), Matthew DeLorenzo (Texas A&M University), Veera Vishwa Achuta Sai Venkat Nallam (Texas A&M University), Joey See (Texas A&M University), Jeyavijayan Rajendran (Texas A&M University)
PAPER: LLMPirate: LLMs for Black-box Hardware IP Piracy. The rapid advancement of large language models (LLMs)…
-
Threat Actors Launch Mass Reconnaissance of AI Systems
More Than 91,000 Attacks Target Exposed LLM Endpoints in Coordinated Campaigns. Two coordinated campaigns generated more than 91,000 attack sessions against AI infrastructure between October and January, with threat actors probing more than 70 model endpoints from OpenAI, Anthropic and Google to build target lists for future exploitation. First seen on govinfosecurity.com Jump to article:…
-
Attackers Probing Popular LLMs Looking for Access to APIs: Report
Security researchers with GreyNoise say they’ve detected a campaign in which the threat actors are targeting more than 70 popular LLMs in a likely reconnaissance mission that will feed into what they call a “larger exploitation pipeline.” First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/01/attackers-probing-popular-llms-looking-for-access-to-apis-report/

