Tag: LLM
-
Echo Chamber, Prompts Used to Jailbreak GPT-5 in 24 Hours
Researchers paired the jailbreaking technique with storytelling in an attack flow that used no inappropriate language to guide the LLM into producing directions for making a Molotov cocktail. First seen on darkreading.com Jump to article: www.darkreading.com/cyberattacks-data-breaches/echo-chamber-prompts-jailbreak-gpt-5-24-hours
-
13 Produkt-Highlights der Black Hat USA
Tags: access, ai, api, application-security, business, chatgpt, cisco, cloud, compliance, credentials, crowdstrike, cybersecurity, data, detection, google, governance, Hardware, identity, leak, LLM, malware, marketplace, microsoft, monitoring, network, openai, phishing, risk, saas, service, soc, threat, tool, usa, vulnerability, zero-trustDas Mandalay Bay Convention Center wird zur Black Hat USA zum Cybersecurity-Hub 2025 lag der Fokus dabei insbesondere auf Agentic und Generative AI.Zur Black-Hat-Konferenz haben sich auch 2025 Tausende von Sicherheitsexperten in Las Vegas zusammengefunden, um sich über die neuesten Entwicklungen im Bereich Cybersecurity zu informieren und auszutauschen. Der thematische Fokus lag dabei in erster…
-
Black Hat 2025 Recap: A look at new offerings announced at the show
Tags: access, ai, api, application-security, automation, chatgpt, cisco, cloud, compliance, control, crowdstrike, dark-web, data, detection, google, governance, group, identity, intelligence, LLM, malware, microsoft, monitoring, network, openai, password, risk, saas, service, soc, software, threat, tool, vulnerability, zero-trustSnyk secures AI from inception: Snyk’s new platform capability, Secure at Inception, includes real-time security scanning that begins at the moment of code generation or execution. It offers visibility into generative AI, agentic, and model context protocol (MCP) components in software, and also features a new, experimental scanner for detecting AI-specific MCP vulnerabilities.Secure AI Inception…
-
AI wrote my code and all I got was this broken prototype
Can AI really write safer code? Martin dusts off his software engineer skills to put it it to the test. Find out what AI code failed at, and what it was surprisingly good at. Also, we discuss new research on how AI LLM models can be used to assist in the reverse engineering of malware.…
-
Project Ire: Microsoft’s autonomous AI agent that can reverse engineer malware
Tags: ai, attack, ceo, cloud, compliance, computing, control, cybersecurity, defense, detection, exploit, finance, governance, government, healthcare, infrastructure, LLM, malicious, malware, microsoft, programming, risk, service, siem, soar, soc, software, threat, tool, trainingReal-world testing: In real-world tests on 4,000 “hard-target” files that had stumped automated tools, Project Ire flagged 9 malicious files out of 10 files correctly, and a low 4% false positive rate.This makes Project Ire suitable for organizations that operate in high-risk, high-volume, and time-sensitive environments where traditional human-based threat triage is insufficient.Rawat added that…
-
Microsoft unveils Project Ire: AI that autonomously detects malware
Microsoft’s Project Ire uses AI to autonomously reverse engineer and classify software as malicious or benign. Microsoft announced Project Ire, an autonomous artificial intelligence (AI) system that can autonomously reverse engineer and classify software. Project Ire is an LLM-powered autonomous malware classification system that uses decompilers and other tools, reviews their output, and determines the…
-
Microsoft unveils Project Ire: AI that autonomously detects malware
Microsoft’s Project Ire uses AI to autonomously reverse engineer and classify software as malicious or benign. Microsoft announced Project Ire, an autonomous artificial intelligence (AI) system that can autonomously reverse engineer and classify software. Project Ire is an LLM-powered autonomous malware classification system that uses decompilers and other tools, reviews their output, and determines the…
-
Microsoft unveils Project Ire: AI that autonomously detects malware
Microsoft’s Project Ire uses AI to autonomously reverse engineer and classify software as malicious or benign. Microsoft announced Project Ire, an autonomous artificial intelligence (AI) system that can autonomously reverse engineer and classify software. Project Ire is an LLM-powered autonomous malware classification system that uses decompilers and other tools, reviews their output, and determines the…
-
Beef up AI security with zero trust principles
Tags: access, ai, attack, control, data, data-breach, defense, intelligence, LLM, mitigation, mitre, monitoring, risk, strategy, tactics, threat, update, vulnerability, zero-trustStrategies for CSOs: Brauchler offered three AI threat modelling strategies CSOs should consider:Trust flow tracking, the tracking of the movement of data throughout an application, and monitoring the level of trust that is associated with that data. It’s a defense against an attacker who is able to get untrusted data into an application to control…
-
Full Stack Development in the Age of LLMs: What CTOs and Product Leaders Must Know
In 2025, code isn’t just written it’s generated, interpreted, and augmented by AI. GitHub Copilot is already writing 46% of code in supported languages, and…Read More First seen on securityboulevard.com Jump to article: https://securityboulevard.com/2025/08/full-stack-development-in-the-age-of-llms-what-ctos-and-product-leaders-must-know/
-
Microsoft researchers bullish on AI security agent even though it let 74% of malware slip through
Project Ire promises to use LLMs to detect whether code is malicious or benign First seen on theregister.com Jump to article: www.theregister.com/2025/08/06/microsofts_ai_agent_malware_detecting/
-
Cisco Talos Researcher Reveals Method That Causes LLMs to Expose Training Data
In this TechRepublic interview, researcher Amy Chang details the decomposition method and shares how organizations can protect themselves from LLM data extraction. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-cisco-talos-generative-ai-llm-decomposition/
-
OWASP LLM Risk #5: Improper Output Handling FireTail Blog
Tags: ai, application-security, attack, awareness, cyber, detection, email, injection, LLM, mitigation, monitoring, phishing, remote-code-execution, risk, sql, strategy, threat, vulnerabilityAug 04, 2025 – Lina Romero – 2025 is seeing an unprecedented surge of cyber attacks and breaches. AI, in particular, has introduced a whole new set of risks to the landscape and researchers are struggling to keep up. The OWASP Top 10 Risks for LLMs goes into detail about the ten most prevalent risks…
-
Cisco Talos Researcher Reveals Method That Causes LLMs to Reveal Training Data
In this TechRepublic interview, researcher Amy Chang details the decomposition method and shares how organizations can protect themselves from LLM data extraction. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-cisco-talos-generative-ai-llm-decomposition/
-
Week in review: Food sector cybersecurity risks, cyber threats to space infrastructure
Here’s an overview of some of last week’s most interesting news, articles, interviews and videos: Review: LLM Engineer’s Handbook For all the excitement around LLMs, … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/08/03/week-in-review-food-sector-cybersecurity-risks-cyber-threats-to-space-infrastructure/
-
‘Man in the Prompt’Attacke auf LLMs
Large Language Modelle (LLMs) lassen sich über Prompts angreifen, um den Modellen unbefugt Daten zu entlocken. Dabei könnten auch ‘Man in the Prompt’-Browser-Angriffe benutzt werden, um AI-Anfragen von Benutzern zu manipulieren und für kriminelle Aktivitäten zu benutzen. Mit dem Einzug … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/08/03/man-in-the-prompt-browser-attacke-auf-llms/
-
Tonic.ai product updates: May 2024
Textual is the first secure data lakehouse for LLMs, subsetting has arrived for Db2 LUW, Ephemeral now supports Oracle, + Avro is on Structural! Learn more about all the latest releases from Tonic.ai. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/08/tonic-ai-product-updates-may-2024/
-
Black Hat 2025: Latest news and insights
Tags: access, ai, api, attack, ciso, cloud, conference, crowdstrike, cvss, cyber, cybersecurity, data, defense, email, exploit, finance, firmware, flaw, group, hacker, hacking, identity, Internet, LLM, malicious, malware, reverse-engineering, sap, service, threat, tool, training, update, usa, vulnerability, windowsBlack Hat USAAugust 2-7, 2025Las Vegas, NVBlack Hat USA 2025 returns to the Mandalay Bay Convention Center in Las Vegas on August 2-7. The annual event is a perennial magnet for cybersecurity professionals, researchers, vendors and othersThe week kicks off on August 2 with four days of cybersecurity training courses. The courses cover a range…
-
LLMs’ AI-Generated Code Remains Wildly Insecure
Security debt ahoy: only about half of the code that the latest large language models (LLMs) create is cybersecure, and more and more of it is being created all the time. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/llms-ai-generated-code-wildly-insecure
-
LLMs Boost Offensive RD by Identifying and Exploiting Trapped COM Objects
Outflank is pioneering the integration of large language models (LLMs) to expedite research and development workflows while maintaining rigorous quality standards. This approach allows teams to focus on refining and testing techniques for their Outflank Security Tooling (OST) suite, which delivers evasive capabilities for complex operations. A recent case study exemplifies this by demonstrating how…
-
How bright are AI agents? Not very, recent reports suggest
CSOs should ‘skip the fluff’: Meghu’s advice to CSOs: Stop reading the marketing and betting too much of your business on AI/LLM technology as it exists today. Start small and always have a human operator to guide it.”If you skip the fluff and get to the practical application, we have a new technology that could…
-
Getting a Cybersecurity Vibe Check on Vibe Coding
Following a number of high-profile security and development issues surrounding the use of LLMs and GenAI to code and create applications, it’s worth taking a temperature check to ask: Is this technology ready for prime time? First seen on darkreading.com Jump to article: www.darkreading.com/application-security/cybersecurity-vibe-check-vibe-coding
-
Using LLMs as a reverse engineering sidekick
LLMs may serve as powerful assistants to malware analysts to streamline workflows, enhance efficiency, and provide actionable insights during malware analysis. First seen on blog.talosintelligence.com Jump to article: blog.talosintelligence.com/using-llm-as-a-reverse-engineering-sidekick/
-
From LLM scrapers to AI agents: mapping the AI bot landscape for detection teams
AI bots, AI scrapers, AI agents”, you’ve seen these terms thrown around in product announcements, Hacker News posts, and marketing decks. But behind the hype, what do these bots actually do? And more importantly, how are they changing the fraud and bot detection landscape? This article introduces First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/07/from-llm-scrapers-to-ai-agents-mapping-the-ai-bot-landscape-for-detection-teams/
-
LLM Honeypots Can Deceive Threat Actors into Exposing Binaries and Known Exploits
Large language model (LLM)-powered honeypots are becoming increasingly complex instruments for luring and examining threat actors in the rapidly changing field of cybersecurity. A recent deployment using Beelzebub, a low-code honeypot framework, demonstrated how such systems can simulate vulnerable SSH services to capture malicious activities in real-time. By configuring a single YAML file, defenders can…
-
Enterprise LLMs Vulnerable to Prompt-Based Attacks Leading to Data Breaches
Security researchers have discovered alarming vulnerabilities in enterprise Large Language Model (LLM) applications that could allow attackers to bypass authentication systems and access sensitive corporate data through sophisticated prompt injection techniques. The findings reveal that many organizations deploying AI-powered chatbots and automated systems may be inadvertently exposing critical information to malicious actors. The vulnerability stems…
-
New Microsoft Guidance Targets Defense Against Indirect Prompt Injection
Microsoft has unveiled new guidance addressing one of the most pressing security challenges facing enterprise AI deployments: indirect prompt injection attacks. This emerging threat vector has become the top entry in the OWASP Top 10 for LLM Applications & Generative AI 2025, prompting the tech giant to develop a multi-layered defense strategy spanning prevention, detection,…
-
MCP”‘Sicherheit: Das Rückgrat von Agentic AI sichern
Tags: access, ai, api, authentication, ciso, credentials, cyberattack, cyersecurity, firewall, infrastructure, LLM, mfa, risk, toolIm Zuge von Agentic AI sollten sich CISOs mit MCP-Sicherheit auseinandersetzen. Das Model Context Protocol (MCP) wurde erst Ende 2024 vorgestellt, dennoch sind die technologischen Folgen in vielen Architekturen bereits deutlich spürbar. Damit Entwickler nicht jede Schnittstelle mühsam von Hand programmieren müssen, stellt MCP eine einheitliche ‘Sprache” für LL-Agenten bereit. Dadurch können sie Tools, Datenbanken und SaaS”‘Dienste…

