Tag: LLM
-
When AI nukes your database: The dark side of vibe coding
Tags: ai, application-security, attack, authentication, automation, ciso, computer, control, corporate, data, data-breach, defense, dos, email, flaw, governance, incident response, injection, jobs, LLM, microsoft, open-source, password, risk, saas, skills, supply-chain, threat, tool, training, zero-trustprivate paths, on another instance.Worthington warns this is one of the most frequent red flags in threat intel. When vibe-coded applications reach incident response, she says, “You’ll often see absence of logging, lack of source control, or weak authentication alongside hardcoded secrets. Rather than a single fingerprint, it’s a collection of sloppy behaviors that point…
-
BSidesSF 2025: Everyday AI: Leveraging LLMs For Simple, Effective Security Automation
Creator, Author and Presenter: Matthew Sullivan, Dominic Zanardi Our deep appreciation to Security BSides – San Francisco and the Creators, Authors and Presenters for publishing their BSidesSF 2025 video content on YouTube. Originating from the conference’s events held at the lauded CityView / AMC Metreon – certainly a venue like no other; and via the…
-
Hackers Turn Red Team AI Tool Into Citrix Exploit Engine
HexStrike-AI Connects LLMs to Over 150 Existing Security Tools. A red-team framework released for penetration testing has become a weapon in the wild, repurposed by hackers to accelerate exploitation of newly disclosed Citrix vulnerabilities. Check Point Research observed chatter suggesting n-day attacks may unfold in minutes, shrinking defender response time. First seen on govinfosecurity.com Jump…
-
LLM06: Excessive Agency FireTail Blog
Tags: access, ai, application-security, best-practice, breach, data, finance, flaw, jobs, LLM, risk, vulnerabilitySep 05, 2025 – Lina Romero – In 2025, we are seeing an unprecedented rise in the volume and scale of AI attacks. Since AI is still a relatively new beast, developers and security teams alike are struggling to keep up with the changing landscape. The OWASP Top 10 Risks for LLMs is a great…
-
Avnet unlocks vendor lock-in and reinvents security data management
Tags: ai, attack, business, cio, ciso, cloud, compliance, conference, control, cybersecurity, data, LLM, microsoft, PCI, siem, strategy, technology, toolOwn and manage its data directly rather than leaving it siloed in vendor systems.Start large-scale extract, transform, and load (ETL) operations, allowing engineers to run analytics and AI-based use cases like retrieval-augmented generation (RAG).Reduce costs associated with rigid SIEM licensing and storage tiers.Improve compliance with new PCI DSS v4.0 requirements for automated log review in…
-
NYU Scientists Develop, ESET Detects First AI-Powered Ransomware
Scientists at NYU developed a ransomware prototype that uses LLMs to autonomously to plan, adapt, and execute ransomware attacks. ESET researchers, not knowing about the NYU project, apparently detected the ransomware, saying it appeared to be a proof-of-concept and a harbinger of what’s to come. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/09/nyu-scientists-develop-eset-detects-first-ai-powered-ransomware/
-
Exposed LLM Servers Expose Ollama Risks
Over 1,100 Ollama Servers Leave Enterprise Models Vulnerable: Cisco Talos. More than a thousand servers running Ollama, a tool that can deploy artificial intelligence models locally, are exposed to the open internet, leaving many of them vulnerable to misuse and potential attacks. The bulk are dormant, but could be exploited through misconfiguration, Cisco Talos said.…
-
Indirect Prompt Injection Attacks Against LLM Assistants
Tags: attack, automation, control, data, disinformation, email, framework, google, injection, LLM, malicious, mitigation, mobile, phishing, risk, risk-assessment, threat, toolReally good research on practical attacks against LLM agents. “Invitation Is All You Need! Promptware Attacks Against LLM-Powered Assistants in Production Are Practical and Dangerous” Abstract: The growing integration of LLMs into applications has introduced new security risks, notably known as Promptware”, maliciously engineered prompts designed to manipulate LLMs to compromise the CIA triad of…
-
BruteForceAI: Free AI-powered login brute force tool
BruteForceAI is a penetration testing tool that uses LLMs to improve the way brute-force attacks are carried out. Instead of relying on manual setup, the tool can analyze HTML … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/09/03/bruteforceai-free-ai-powered-login-brute-force-tool/
-
Shadow AI Discovery: A Critical Part of Enterprise AI Governance
The Harsh Truths of AI AdoptionMITs State of AI in Business report revealed that while 40% of organizations have purchased enterprise LLM subscriptions, over 90% of employees are actively using AI tools in their daily work. Similarly, research from Harmonic Security found that 45.4% of sensitive AI interactions are coming from personal email accounts, where…
-
LegalPwn: Tricking LLMs by burying badness in lawyerly fine print
Trust and believe AI models trained to see ‘legal’ doc as super legit First seen on theregister.com Jump to article: www.theregister.com/2025/09/01/legalpwn_ai_jailbreak/
-
LegalPwn: Tricking LLMs by burying badness in lawyerly fine print
Trust and believe AI models trained to see ‘legal’ doc as super legit First seen on theregister.com Jump to article: www.theregister.com/2025/09/01/legalpwn_ai_jailbreak/
-
How AI Agents Are Creating a New Class of Identity Risk
5 min readAI agents require broad API access across multiple domains simultaneously”, LLM providers, enterprise APIs, cloud services, and data stores”, creating identity management complexity that traditional workload security never anticipated. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/08/how-ai-agents-are-creating-a-new-class-of-identity-risk/
-
2025 CSO Hall of Fame: George Finney on decryption risks, AI, and the CISO’s growing clout
Tags: ai, attack, automation, breach, business, ciso, computing, conference, cyber, cybersecurity, data, encryption, intelligence, jobs, LLM, microsoft, risk, soc, threat, tool, zero-trustWhat do you see as the biggest cybersecurity challenges for the next generation of CISOs, and how should they prepare? : George Finney: One major challenge is the threat of attackers saving encrypted data today with the intention of decrypting it later. With quantum computing, we know that in five to 10 years, older encryption…
-
New Research and PoC Reveal Security Risks in LLM-Based Coding
A recent investigation has uncovered that relying solely on large language models (LLMs) to generate application code can introduce critical security vulnerabilities, according to a detailed blog post published on August 22, 2025. The research underscores that LLMs, which are trained on broad internet data”, much of it insecure example code”, often replicate unsafe patterns…
-
We Are Still Unable to Secure LLMs from Malicious Inputs
Nice indirect prompt injection attack: Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) It looks like an official document on company meeting policies. But inside the document, Bargury hid a 300-word malicious…
-
One long sentence is all it takes to make LLMs misbehave
Tags: LLMChatbots ignore their guardrails when your grammar sucks, researchers find First seen on theregister.com Jump to article: www.theregister.com/2025/08/26/breaking_llms_for_fun/
-
The Role of AI Pentesting in Securing LLM Applications
The rapid adoption of Large Language Models (LLMs) has reshaped the digital ecosystem, powering everything from customer service chatbots to advanced data analysis systems. But with this growth comes a wave of new security challenges. Traditional application vulnerabilities still exist, but LLM applications introduce risks such as prompt injection, data poisoning, model leakage, and misuse……
-
The Role of AI Pentesting in Securing LLM Applications
The rapid adoption of Large Language Models (LLMs) has reshaped the digital ecosystem, powering everything from customer service chatbots to advanced data analysis systems. But with this growth comes a wave of new security challenges. Traditional application vulnerabilities still exist, but LLM applications introduce risks such as prompt injection, data poisoning, model leakage, and misuse……
-
The Role of AI Pentesting in Securing LLM Applications
The rapid adoption of Large Language Models (LLMs) has reshaped the digital ecosystem, powering everything from customer service chatbots to advanced data analysis systems. But with this growth comes a wave of new security challenges. Traditional application vulnerabilities still exist, but LLM applications introduce risks such as prompt injection, data poisoning, model leakage, and misuse……
-
LLMs at the edge: Rethinking how IoT devices talk and act
Anyone who has set up a smart home knows the routine: one app to dim the lights, another to adjust the thermostat, and a voice assistant that only understands exact phrasing. … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/08/26/llm-iot-integration/
-
BSI ANSI: Empfehlungen zur sicheren Integration von LLM
Das Bundesamt für Sicherheit in der Informationstechnik (BSI) und die französische Cybersecurity-Behörde ANSSI haben die Empfehlungen zur sicheren Integration von Sprachmodellen (LLM) aktualisiert. Es sollte ausdrücklich kein vollständig autonomer KI-Betrieb ohne menschliche Kontrolle stattfinden. Ich bin über nachfolgenden Post von … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/08/23/bsi-ansi-empfehlungen-zur-sicheren-integration-von-llm/
-
Cybersecurity Snapshot: Industrial Systems in Crosshairs of Russian Hackers, FBI Warns, as MITRE Updates List of Top Hardware Weaknesses
Tags: access, ai, attack, automation, cisa, cisco, cloud, conference, control, credentials, cve, cyber, cybersecurity, data, data-breach, deep-fake, detection, docker, espionage, exploit, flaw, framework, fraud, google, government, group, guide, hacker, hacking, Hardware, identity, infrastructure, intelligence, Internet, iot, LLM, microsoft, mitigation, mitre, mobile, network, nist, risk, russia, scam, service, side-channel, software, strategy, switch, technology, threat, tool, update, vulnerability, vulnerability-management, windowsCheck out the FBI’s alert on Russia-backed hackers infiltrating critical infrastructure networks via an old Cisco bug. Plus, MITRE dropped a revamped list of the most important critical security flaws. Meanwhile, NIST rolled out a battle plan against face-morphing deepfakes. And get the latest on the CIS Benchmarks and on vulnerability prioritization strategies! Here are…
-
Tree of AST: A Bug-Hunting Framework Powered by LLMs
Teenaged security researchers Sasha Zyuzin and Ruikai Peng discuss how their new vulnerability discovery framework leverages LLMs to address limitations of the past. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/tree-ast-bug-hunting-framework-llms
-
Why AI Agents and MCP Servers Just Became a CISO’s Most Urgent Priority
Over the last year, I’ve spent countless hours with CISOs, CTOs, and security architects talking about a new wave of technology that’s changing the game faster than anything we’ve seen before: Agentic AI and Model Context Protocol (MCP) servers. If you think AI is still in the “cool demos and pilot projects” stage, think again.…
-
Lenovo-Chatbot-Lücke wirft Schlaglicht auf KI-Sicherheitsrisiken
Über eine Schwachstelle in Lenovos Chatbot für den Kundensupport ist es Forschern gelungen, Schadcode einzuschleusen.Der Chatbot ‘Lena” von Lenovo basiert auf GPT-4 von OpenAI und wird für den Kundensupport verwendet. Sicherheitsforscher von Cybernews fanden heraus, dass das KI-Tool anfällig für Cross-Site-Scripting-Angriffe (XSS) war. Die Experten haben eine Schwachstelle entdeckt, über die sie schädliche HTML-Inhalte generieren…
-
Using lightweight LLMs to cut incident response times and reduce hallucinations
Researchers from the University of Melbourne and Imperial College London have developed a method for using LLMs to improve incident response planning with a focus on reducing … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/08/21/lightweight-llm-incident-response/
-
Cybercriminals Abuse Vibe Coding Service to Create Malicious Sites
Some LLM-created scripts and emails can lower the barrier of entry for low-skill attackers, who can use services like Lovable to create convincing, effective websites in minutes. First seen on darkreading.com Jump to article: www.darkreading.com/cyberattacks-data-breaches/cybercriminals-abuse-vibe-coding-service-malicious-sites

