Tag: LLM
-
LLMs Fall Short in Vulnerability Discovery and Exploitation
Forescout found that most LLMs are unreliable in vulnerability research and exploitation tasks, and that threat actors remain skeptical about using these tools for such purposes. First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/llms-fall-vulnerability-discovery/
-
MCP is fueling agentic AI, and introducing new security risks
Tags: access, ai, api, attack, authentication, best-practice, ceo, cloud, corporate, cybersecurity, gartner, injection, LLM, malicious, monitoring, network, office, open-source, penetration-testing, RedTeam, risk, service, supply-chain, technology, threat, tool, vulnerability
Mitigating MCP server risks: When it comes to using MCP servers, there’s a big difference between developers using them for personal productivity and enterprises putting them into production use cases. Derek Ashmore, application transformation principal at Asperitas Consulting, suggests that corporate customers not rush MCP adoption until the technology is safer and more of the…
-
Critical mcp-remote Vulnerability Exposes LLM Clients to Remote Code Execution
The JFrog Security Research team has discovered a critical security vulnerability in mcp-remote, a widely used tool that enables Large Language Model clients to communicate with remote servers, potentially allowing attackers to achieve full system compromise through remote code execution. Severe Security Flaw Affects Popular AI Tool CVE-2025-6514, rated with a critical CVSS score of…
-
Serious Flaws Patched in Model Context Protocol Tools
Always Secure MCP Servers Connecting LLMs to External Systems, Experts Warn. Warning: Popular technology designed to make it easy for artificial intelligence tools to connect with external applications and data sources can be turned to malicious use. Researchers discovered two separate vulnerabilities tied to tools in the ecosystem around model context protocol, or MCP. First…
-
New AI Malware PoC Reliably Evades Microsoft Defender
Worried about hackers employing LLMs to write powerful malware? Using targeted reinforcement learning (RL) to train open source models in specific tasks has yielded the capability to do just that. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/ai-malware-poc-evades-microsoft-defender
-
AI Trust Score Ranks LLM Security
Startup Tumeryk’s AI Trust scorecard finds Google Gemini Pro 2.5 as the most trustworthy, with OpenAI’s GPT-4o mini a close second and DeepSeek and Alibaba Qwen scoring lowest. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/ai-trust-score-ranks-llm-security
-
Scholars sneaking phrases into papers to fool AI reviewers
Using prompt injections to play a Jedi mind trick on LLMs First seen on theregister.com Jump to article: www.theregister.com/2025/07/07/scholars_try_to_fool_llm_reviewers/
-
Faster Not Bigger: New R1T2 LLM Combines DeepSeek Versions
Tags: LLM
German Consultancy’s Latest LLM Aims to Reduce Costs, Preserve Reasoning Skills. Say hello to DeepSeek-TNG R1T2 Chimera, a large language model built by German firm TNG Consulting using three different DeepSeek LLMs. The goal of R1T2 is to provide a faster LLM with more predictable performance that maintains full reasoning accuracy. First seen on govinfosecurity.com…
-
Incorrect links output by LLMs could lead to phishing, researchers say
First seen on scworld.com Jump to article: www.scworld.com/news/incorrect-links-output-by-llms-could-lead-to-phishing-researchers-say
-
OWASP unpacks GenAI security’s biggest risks to LLMs
First seen on scworld.com Jump to article: www.scworld.com/feature/owasp-unpacks-genai-securitys-biggest-risks-to-llms
-
Hallucinations May Open LLMs to Phishing Threats
First seen on scworld.com Jump to article: www.scworld.com/news/hallucinations-may-open-llms-to-phishing-threats
-
Analysis Surfaces Increased Usage of LLMs to Craft BEC Attacks
A Barracuda Networks analysis of unsolicited and malicious emails sent between February 2022 and April 2025 indicates that 14% of the business email compromise (BEC) attacks identified were created using a large language model (LLM). First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/07/analysis-surfaces-increased-usage-of-llms-to-craft-bec-attacks/
-
Report Finds LLMs Are Prone to Be Exploited by Phishing Campaigns
A report published this week by Netcraft, a provider of a platform for combating phishing attacks, finds that large language models (LLMs) might not be a reliable source when it comes to identifying where to log in to various websites. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/07/report-finds-llms-are-prone-to-be-exploited-by-phishing-campaigns/
-
How cybersecurity leaders can defend against the spur of AI-driven NHI
Tags: access, ai, attack, automation, breach, business, ciso, cloud, credentials, cybersecurity, data, data-breach, email, exploit, framework, gartner, governance, group, guide, identity, infrastructure, least-privilege, LLM, login, monitoring, password, phishing, RedTeam, risk, sans, service, software, technology, tool, vulnerability
Visibility: Yageo Group had so many problematic machine identities that information security operations manager Terrick Taylor says he is almost embarrassed to say this, even though the group has now automated the monitoring of both human and non-human identities and has a process for managing identity lifecycles. “Last time I looked at the portal, there…
-
Like SEO, LLMs May Soon Fall Prey to Phishing Scams
Just as attackers have used SEO techniques to poison search engine results, they could rinse and repeat with artificial intelligence and the responses LLMs generate from user prompts. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/seo-llms-fall-prey-phishing-scams
-
LLMs are guessing login URLs, and it’s a cybersecurity time bomb
Tags: ai, api, blockchain, cybersecurity, data, github, LLM, login, malicious, monitoring, office, risk, supply-chain, training
GitHub poisoning for AI training: Not all hallucinated URLs were unintentional. In unrelated research, Netcraft found evidence of attackers deliberately poisoning AI systems by seeding GitHub with malicious code repositories. “Multiple fake GitHub accounts shared a project called Moonshot-Volume-Bot, seeded across accounts with rich bios, profile images, social media accounts and credible coding activity,” researchers…
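One practical mitigation for hallucinated or attacker-seeded login URLs is to never surface an LLM-suggested URL directly, but to resolve it against a curated allowlist of known-good domains first. The sketch below is a minimal, hypothetical illustration of that idea; the domain names and the `safe_login_url` helper are invented for this example and are not from any of the articles above.

```python
# Hypothetical defensive sketch: validate LLM-suggested login URLs
# against a curated allowlist before showing them to users.
# Domain names here are purely illustrative.
from urllib.parse import urlparse

KNOWN_LOGIN_DOMAINS = {
    "example-bank.com": "https://example-bank.com/login",
}

def safe_login_url(llm_suggested_url: str):
    """Return a vetted canonical login URL, or None if the domain is unknown."""
    host = urlparse(llm_suggested_url).hostname or ""
    # Strip a leading "www." so www.example-bank.com still matches.
    host = host.removeprefix("www.")
    return KNOWN_LOGIN_DOMAINS.get(host)

# A hallucinated or typosquatted domain fails the lookup:
assert safe_login_url("https://examp1e-bank-login.net/signin") is None
# A known domain maps to its canonical login page:
assert safe_login_url("https://www.example-bank.com/acct") == \
    "https://example-bank.com/login"
```

The design choice is deliberate: rather than trying to detect malicious URLs heuristically, unknown domains simply return nothing, so the model's output can never introduce a login destination that was not vetted in advance.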
-
The rise of the compliance super soldier: A new human-AI paradigm in GRC
Tags: ai, automation, awareness, compliance, control, governance, grc, jobs, law, LLM, metric, regulation, risk, skills, strategy, threat, tool, training, update
Regulatory acceleration: Global AI laws are evolving but remain fragmented and volatile. Toolchain convergence: Risk, compliance and engineering workflows are merging into unified platforms. Maturity asymmetry: Few organizations have robust genAI governance strategies, and even fewer have built dedicated AI risk teams. These forces create a scenario where GRC teams must evolve rapidly, from policy monitors to strategic…
-
We know GenAI is risky, so why aren’t we fixing its flaws?
Even though GenAI threats are a top concern for both security teams and leadership, the current level of testing and remediation for LLM and AI-powered applications isn’t … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/06/27/cobalt-research-llm-security-vulnerabilities/
-
Cybercriminals Exploit LLM Models to Enhance Hacking Activities
Cybercriminals are increasingly leveraging large language models (LLMs) to amplify their hacking operations, utilizing both uncensored versions of these AI systems and custom-built criminal variants. LLMs, known for their ability to generate human-like text, write code, and solve complex problems, have become integral to various industries. However, their potential for misuse is evident as malicious…
-
How to make your multicloud security more effective
Tags: ai, automation, ciso, cloud, container, control, data, infrastructure, LLM, risk, risk-analysis, software, technology, threat, tool
Is it time to repatriate to the data center?: Perhaps. Some organizations, such as Zoom, have moved workloads to on-premises because it provides more predictable performance for the real-time needs of their apps. John Qian, who once worked there and is now the CISO for security vendor Aviatrix, tells CSO that Zoom uses all three of…
-
Misconfigured MCP servers expose AI agent systems to compromise
Tags: access, ai, api, attack, authentication, control, credentials, data, data-breach, exploit, firewall, injection, Internet, leak, LLM, login, malicious, network, openai, risk, risk-assessment, service, tool, vulnerability
‘NeighborJack’: Opening MCP servers to the internet: Many MCP servers lack strong authentication by default. When one is deployed locally on a system, anyone with access to its communication interface can potentially issue commands through the protocol to access its functionality. This is not necessarily a problem when the MCP server listens only on the local address 127.0.0.1,…
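The exposure the researchers describe comes down to the bind address: a server bound to 127.0.0.1 is reachable only by local processes, while one bound to 0.0.0.0 accepts connections from anywhere on the network. The snippet below is a bare-sockets sketch of that distinction, not a real MCP server implementation; the `open_listener` helper is invented for illustration.

```python
# Minimal sketch (not a real MCP implementation): the bind address
# decides who can reach a listening service.
import socket

def open_listener(bind_addr: str, port: int = 0) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((bind_addr, port))  # port 0 = let the OS pick a free port
    s.listen()
    return s

# Loopback-only: reachable solely from processes on this machine.
loopback_only = open_listener("127.0.0.1")
assert loopback_only.getsockname()[0] == "127.0.0.1"
loopback_only.close()

# The risky variant: bound to all interfaces, so any host on the
# network (or the internet, absent a firewall) can connect.
exposed = open_listener("0.0.0.0")
assert exposed.getsockname()[0] == "0.0.0.0"
exposed.close()
```

Without authentication on top, the second listener effectively hands the MCP server's tool-invocation surface to anyone who can route packets to it, which is the core of the misconfiguration described above.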
-
What LLMs Know About Their Users
Simon Willison talks about ChatGPT’s new memory dossier feature. In his explanation, he illustrates how much the LLM, and the company, knows about its users. It’s a big quote, but I want you to read it all. Here’s a prompt you can use to give you a solid idea of what’s in that summary. I…
-
Cybercriminal abuse of large language models
Cybercriminals are increasingly gravitating towards uncensored LLMs, cybercriminal-designed LLMs and jailbreaking legitimate LLMs. First seen on blog.talosintelligence.com Jump to article: blog.talosintelligence.com/cybercriminal-abuse-of-large-language-models/
-
LLMs hype versus reality: What CISOs should focus on
Tags: ai, attack, backdoor, breach, business, chatgpt, ciso, cloud, control, corporate, cyber, cybercrime, cybersecurity, data, finance, governance, LLM, malware, monitoring, network, open-source, risk, risk-management, sans, service, software, supply-chain, technology, threat, tool, vulnerability
…not using AI even though there is a lot of over-hype and promise about its capability. That said, organizations that don’t use AI will get left behind. The risk of using AI is where all the FUD is. “In terms of applying controls, rinse, wash, and repeat the processes you followed when adopting cloud, BYOD, and…
-
AI jailbreak method tricks LLMs into poisoning their own context
First seen on scworld.com Jump to article: www.scworld.com/news/ai-jailbreak-method-tricks-llms-into-poisoning-their-own-context
-
LLMs Tricked by ‘Echo Chamber’ Attack in Jailbreak Tactic
Researcher Details Stealthy Multi-Turn Prompt Exploit Bypassing AI Safety. Well-timed nudges are enough to derail a large language model and use it for nefarious purposes, researchers have found. Dubbed Echo Chamber, the exploit uses a chain of subtle prompts to bypass existing safety guardrails by manipulating the model’s emotional tone and contextual assumptions. First seen…
-
DataKrypto and Tumeryk Join Forces to Deliver World’s First Secure Encrypted Guardrails for AI LLMs and SLMs
DataKrypto and Tumeryk join forces to deliver world’s first secure encrypted guardrails for AI LLMs and SLMs. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/06/datakrypto-and-tumeryk-join-forces-to-deliver-worlds-first-secure-encrypted-guardrails-for-ai-llms-and-slms/
-
New ‘Echo Chamber’ attack can trick GPT, Gemini into breaking safety rules
“Early planted prompts influence the model’s responses, which are then leveraged in later turns to reinforce the original objective,” the post on Echo Chamber noted. “This creates a feedback loop where the model begins to amplify the harmful subtext embedded in the conversation, gradually eroding its own safety resistances.” The attack works by the attacker starting…

