Tag: LLM
-
LegalPwn: Tricking LLMs by burying badness in lawyerly fine print
Trust and believe AI models trained to see ‘legal’ doc as super legit First seen on theregister.com Jump to article: www.theregister.com/2025/09/01/legalpwn_ai_jailbreak/
-
How AI Agents Are Creating a New Class of Identity Risk
5 min read. AI agents require broad API access across multiple domains simultaneously (LLM providers, enterprise APIs, cloud services, and data stores), creating identity management complexity that traditional workload security never anticipated. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/08/how-ai-agents-are-creating-a-new-class-of-identity-risk/
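One way to reason about the scoping problem described above is to give each agent a separate short-lived, narrowly scoped credential per downstream domain rather than one broad token. The sketch below is purely illustrative (the `ScopedToken` class and domain names are invented for this example, not any vendor's API):

```python
from dataclasses import dataclass
import time

# Hypothetical credential model: one narrowly scoped, short-lived token
# per downstream domain, instead of a single broad agent credential.
@dataclass(frozen=True)
class ScopedToken:
    domain: str        # e.g. "llm-provider", "cloud-storage"
    scopes: tuple      # explicit allow-list of actions
    expires_at: float  # Unix timestamp after which the token is dead

    def allows(self, domain: str, action: str) -> bool:
        # An action is permitted only if the domain matches, the action
        # is on the allow-list, and the token has not yet expired.
        return (domain == self.domain
                and action in self.scopes
                and time.time() < self.expires_at)
```

The point of the sketch is that a stolen token for one domain buys an attacker nothing in any other domain, and nothing at all after expiry.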
-
2025 CSO Hall of Fame: George Finney on decryption risks, AI, and the CISO’s growing clout
Tags: ai, attack, automation, breach, business, ciso, computing, conference, cyber, cybersecurity, data, encryption, intelligence, jobs, LLM, microsoft, risk, soc, threat, tool, zero-trust
What do you see as the biggest cybersecurity challenges for the next generation of CISOs, and how should they prepare? George Finney: One major challenge is the threat of attackers saving encrypted data today with the intention of decrypting it later. With quantum computing, we know that in five to 10 years, older encryption…
-
New Research and PoC Reveal Security Risks in LLM-Based Coding
A recent investigation has uncovered that relying solely on large language models (LLMs) to generate application code can introduce critical security vulnerabilities, according to a detailed blog post published on August 22, 2025. The research underscores that LLMs, which are trained on broad internet data (much of it insecure example code), often replicate unsafe patterns…
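A classic instance of the unsafe pattern LLMs tend to replicate from training data is SQL built by string formatting. A minimal sketch (hypothetical `users` table, using Python's built-in sqlite3):

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Anti-pattern often reproduced from insecure training examples:
    # string formatting builds the SQL, so input like
    # "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Calling the unsafe version with the payload `x' OR '1'='1` returns every row in the table; the parameterized version treats the same input as a literal (non-matching) name.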
-
We Are Still Unable to Secure LLMs from Malicious Inputs
Nice indirect prompt injection attack: Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) It looks like an official document on company meeting policies. But inside the document, Bargury hid a 300-word malicious…
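Defenses against this class of attack remain immature, but one common mitigation layer is scanning untrusted documents for instruction-like phrases before they ever reach the model. The patterns below are a deliberately naive, hypothetical heuristic for illustration, not a reliable defense:

```python
import re

# Hypothetical, deliberately naive phrase list: real defenses need far
# more than keyword matching, but this shows where a pre-model check sits.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"send .* to http",
]

def flag_indirect_injection(document_text: str) -> list:
    """Return suspicious phrases found in an untrusted document."""
    hits = []
    for pattern in SUSPECT_PATTERNS:
        hits.extend(
            m.group(0)
            for m in re.finditer(pattern, document_text, re.IGNORECASE)
        )
    return hits
```

A document that trips the filter can be quarantined or surfaced to a human before the model summarizes it; an attacker can still evade keyword lists, which is why this is a layer, not a fix.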
-
One long sentence is all it takes to make LLMs misbehave
Tags: LLM
Chatbots ignore their guardrails when your grammar sucks, researchers find First seen on theregister.com Jump to article: www.theregister.com/2025/08/26/breaking_llms_for_fun/
-
The Role of AI Pentesting in Securing LLM Applications
The rapid adoption of Large Language Models (LLMs) has reshaped the digital ecosystem, powering everything from customer service chatbots to advanced data analysis systems. But with this growth comes a wave of new security challenges. Traditional application vulnerabilities still exist, but LLM applications introduce risks such as prompt injection, data poisoning, model leakage, and misuse……
-
LLMs at the edge: Rethinking how IoT devices talk and act
Anyone who has set up a smart home knows the routine: one app to dim the lights, another to adjust the thermostat, and a voice assistant that only understands exact phrasing. … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/08/26/llm-iot-integration/
-
BSI and ANSSI: Recommendations for the Secure Integration of LLMs
The German Federal Office for Information Security (BSI) and the French cybersecurity agency ANSSI have updated their recommendations for the secure integration of language models (LLMs). They explicitly advise against fully autonomous AI operation without human oversight. I came across this via the following post from … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/08/23/bsi-ansi-empfehlungen-zur-sicheren-integration-von-llm/
-
Cybersecurity Snapshot: Industrial Systems in Crosshairs of Russian Hackers, FBI Warns, as MITRE Updates List of Top Hardware Weaknesses
Tags: access, ai, attack, automation, cisa, cisco, cloud, conference, control, credentials, cve, cyber, cybersecurity, data, data-breach, deep-fake, detection, docker, espionage, exploit, flaw, framework, fraud, google, government, group, guide, hacker, hacking, Hardware, identity, infrastructure, intelligence, Internet, iot, LLM, microsoft, mitigation, mitre, mobile, network, nist, risk, russia, scam, service, side-channel, software, strategy, switch, technology, threat, tool, update, vulnerability, vulnerability-management, windows
Check out the FBI’s alert on Russia-backed hackers infiltrating critical infrastructure networks via an old Cisco bug. Plus, MITRE dropped a revamped list of the most important critical security flaws. Meanwhile, NIST rolled out a battle plan against face-morphing deepfakes. And get the latest on the CIS Benchmarks and on vulnerability prioritization strategies! Here are…
-
Tree of AST: A Bug-Hunting Framework Powered by LLMs
Teenaged security researchers Sasha Zyuzin and Ruikai Peng discuss how their new vulnerability discovery framework leverages LLMs to address limitations of the past. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/tree-ast-bug-hunting-framework-llms
-
Why AI Agents and MCP Servers Just Became a CISO’s Most Urgent Priority
Over the last year, I’ve spent countless hours with CISOs, CTOs, and security architects talking about a new wave of technology that’s changing the game faster than anything we’ve seen before: Agentic AI and Model Context Protocol (MCP) servers. If you think AI is still in the “cool demos and pilot projects” stage, think again.…
-
Lenovo Chatbot Flaw Spotlights AI Security Risks
Researchers managed to inject malicious code through a vulnerability in Lenovo’s customer-support chatbot. Lenovo’s chatbot “Lena” is based on OpenAI’s GPT-4 and is used for customer support. Security researchers at Cybernews found that the AI tool was vulnerable to cross-site scripting (XSS) attacks. The experts discovered a flaw that allowed them to generate malicious HTML content…
-
Using lightweight LLMs to cut incident response times and reduce hallucinations
Researchers from the University of Melbourne and Imperial College London have developed a method for using LLMs to improve incident response planning with a focus on reducing … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/08/21/lightweight-llm-incident-response/
-
Cybercriminals Abuse Vibe Coding Service to Create Malicious Sites
Some LLM-created scripts and emails can lower the barrier of entry for low-skill attackers, who can use services like Lovable to create convincing, effective websites in minutes. First seen on darkreading.com Jump to article: www.darkreading.com/cyberattacks-data-breaches/cybercriminals-abuse-vibe-coding-service-malicious-sites
-
Example of LLM Chatbots Weaponized for Data Theft
First seen on thesecurityblogger.com Jump to article: www.thesecurityblogger.com/example-of-llm-chatbots-weaponize-for-data-theft/
-
The New Frontier: Why You Can’t Secure AI Without Securing APIs
Tags: ai, api, attack, automation, business, cybersecurity, data, exploit, injection, intelligence, LLM, risk, strategy, threat, vulnerability
The release of a new KuppingerCole Leadership Compass is always a significant event for the cybersecurity industry, offering a vendor-neutral view of the market’s current state. The 2025 edition, focusing on API Security and Management, is critical as it arrives at a pivotal moment for technology. It clearly presents a fact many organizations are just…
-
Lenovo chatbot breach highlights AI security blind spots in customer-facing systems
Enterprise-wide implications: While the immediate impact involved session cookie theft, the vulnerability’s implications extended far beyond data exfiltration. The researchers warned that the same vulnerability could enable attackers to alter support interfaces, deploy keyloggers, launch phishing attacks, and execute system commands that could install backdoors and enable lateral movement across network infrastructure. “Using the stolen support agent’s…
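A basic mitigation for the XSS class of bug described here is to treat model output as untrusted text and escape it before it reaches the browser. A minimal sketch using only Python's standard library (the `render_chat_reply` helper is hypothetical, for illustration):

```python
import html

def render_chat_reply(llm_output: str) -> str:
    # Escape the model's output before embedding it in HTML, so any
    # injected <script> or <img onerror=...> renders as inert text
    # instead of executing in the support agent's browser.
    return f'<div class="chat-reply">{html.escape(llm_output)}</div>'
```

With this in place, a reply containing `<script>document.cookie</script>` is displayed verbatim to the user rather than executed; richer formatting then requires an explicit allow-list sanitizer instead of raw HTML passthrough.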
-
DataDome Enhances Visibility of AI Agents and LLM Crawlers in Your Dashboard
DataDome’s enhanced dashboard gives businesses the visibility and control they need over rapidly growing AI agent and LLM crawler traffic, helping protect revenue, SEO, and security. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/08/datadome-enhances-visibility-of-ai-agents-llm-crawlers-in-your-dashboard/
-
What happens when penetration testing goes virtual and gets an AI coach
Cybersecurity training often struggles to match the complexity of threats. A new approach combining digital twins and LLMs aims to close that gap. Researchers from the … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/08/19/digital-twins-cybersecurity-training/
-
How CISOs Can Benefit from Blockchain
Tags: access, ai, api, blockchain, ciso, compliance, framework, governance, identity, LLM, network, saas, sbom, software, tool, zero-trust
Blockchain makes trust verifiable. Security incidents are not caused solely by the compromise of internal systems. They are regularly also linked to missing privileged-access logs, untrustworthy SaaS audit trails, or compromised supply chains. Blockchain can help solve these real problems and provide tamper resistance, data integrity, and trust. At its core, a blockchain is a system of records that…
-
Vulnerabilities in Vibe Coding
In experiments probing the risks of vibe coding with Claude and ChatGPT, security researchers at Databricks found critical vulnerabilities and describe how they closed them again. The findings highlight the risks of vibe coding when human review is taken out of the loop. In one experiment, they had the LLM build a third-person Snake battle arena, in which…
-
The AI-Powered Trojan Horse Returns: How LLMs Revive Classic Cyber Threats
In an era where users rely on vigilance against shady websites and file hashing via platforms like VirusTotal, a new wave of trojan horses is challenging traditional defenses. These threats masquerade as legitimate desktop applications, such as recipe savers, AI-powered image enhancers, and virtual assistants, all while embedding malicious capabilities. For instance, the JustAskJacky app,…
-
Agentic AI promises a cybersecurity revolution, with asterisks
Tags: ai, api, authentication, ceo, ciso, cloud, control, cybersecurity, data, endpoint, infrastructure, jobs, LLM, open-source, openai, risk, service, soc, software, supply-chain, technology, tool, update, vulnerability
Trust, transparency, and moving slowly are crucial: Like all technologies, and perhaps more dramatically than most, agentic AI carries both risks and benefits. One obvious risk of AI agents is that, like most LLMs, they will hallucinate or make errors that could cause problems. “If you want to remove or give agency to a platform…
-
And Now, LLMs Don’t Need Human Intervention to Plan and Execute Large, Complex Attacks
Researchers just proved LLMs can autonomously plan and execute full-scale cyberattacks, turning AI from a tool into an active threat actor. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/08/and-now-llms-dont-need-human-intervention-to-plan-and-executive-large-complex-attacks/

