Tag: LLM
-
Understanding and Reducing LLM Risks
It is fundamental to understand that AI assistants generally have the same access rights as the users they act for, and those rights are usually far too broadly scoped. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/llm-risiken-verstehen-und-reduzieren/a40655/
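For illustration, a minimal sketch of the least-privilege idea the article argues for: give the assistant its own narrow tool allowlist instead of inheriting the user's full rights. All names here (ASSISTANT_ALLOWLIST, run_tool, the tools themselves) are hypothetical:

```python
# Sketch: scope an AI assistant to an explicit tool allowlist instead of
# letting it inherit every permission of the calling user.
from typing import Callable

ASSISTANT_ALLOWLIST = {"search_docs", "read_ticket"}  # deliberately narrow

TOOLS: dict[str, Callable[[str], str]] = {
    "search_docs": lambda q: f"results for {q!r}",
    "read_ticket": lambda t: f"ticket {t} contents",
    "delete_user": lambda u: f"deleted {u}",  # the user may hold this right; the assistant must not
}

def run_tool(name: str, arg: str) -> str:
    """Execute a tool only if it is on the assistant's allowlist."""
    if name not in ASSISTANT_ALLOWLIST:
        raise PermissionError(f"assistant may not call {name!r}")
    return TOOLS[name](arg)

print(run_tool("search_docs", "vacation policy"))   # allowed
# run_tool("delete_user", "alice") -> PermissionError, even though the user could
```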
-
Open source AI hiring bots favor men, leave women hanging by the phone
Easy fix: Telling LLMs to cosplay Lenin makes ’em more gender blind First seen on theregister.com Jump to article: www.theregister.com/2025/05/02/open_source_ai_models_gender_bias/
-
AI models routinely lie when honesty conflicts with their goals
Keep plugging those LLMs into your apps, folks. This neural network told me it’ll be fine First seen on theregister.com Jump to article: www.theregister.com/2025/05/01/ai_models_lie_research/
-
NVIDIA TensorRT-LLM Vulnerability Let Hackers Run Malicious Code
NVIDIA has issued an urgent security advisory after discovering a significant vulnerability (CVE-2025-23254) in its popular TensorRT-LLM framework, urging all users to update to the latest version (0.18.2) to safeguard their systems against potential attacks. The vulnerability affects all versions of the NVIDIA TensorRT-LLM framework before 0.18.2 across…
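A quick way to check exposure (a sketch; it assumes the framework is installed under the pip distribution name "tensorrt-llm", which may differ per environment):

```python
# Audit sketch for CVE-2025-23254: flag installs of NVIDIA TensorRT-LLM
# older than the patched 0.18.2 release.
from importlib import metadata

try:
    # Distribution name assumed to be "tensorrt-llm"; adjust if yours differs.
    installed = metadata.version("tensorrt-llm")
except metadata.PackageNotFoundError:
    print("TensorRT-LLM not installed in this environment")
else:
    def as_tuple(v: str) -> tuple[int, ...]:
        # Compare release numbers numerically, not as strings.
        return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())

    if as_tuple(installed) < (0, 18, 2):
        print(f"VULNERABLE: {installed} < 0.18.2 -- upgrade required")
    else:
        print(f"OK: {installed}")
```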
-
30 percent of some Microsoft code now written by AI – especially the new stuff
Satya Nadella reveals attempts to merge Word, PowerPoint, Excel, which may now happen with LLMs First seen on theregister.com Jump to article: www.theregister.com/2025/04/30/microsoft_meta_autocoding/
-
Cisco Boosts XDR Platform, Splunk With Agentic AI
Cisco joins the agentic AI wave with the introduction of advanced LLMs to autonomously verify and investigate attacks. First seen on darkreading.com Jump to article: www.darkreading.com/endpoint-security/cisco-boosts-xdr-platform-splunk-agentic-ai
-
RSAC 2025: Being realistic about fixing code with LLMs
Tags: LLM. First seen on scworld.com Jump to article: www.scworld.com/news/rsac-2025-being-realistic-about-fixing-code-with-llms
-
🚀 Agentic Runtime Protection Rules Make Us the First Truly Self-Writing Security System – Impart Security
Agentic Runtime Rules: The First Self-Writing Security System for Runtime. The End of Manual Security Management Is Here. Say goodbye to regex repositories and ticket fatigue: Impart delivers instant detections and autonomous investigations for security teams. For years, security teams have been trapped in reactive mode. Every investigation, detection rule update, or WAF configuration change…
-
AI-generated code could be a disaster for the software supply chain. Here’s why.
LLM-produced code could make us much more vulnerable to supply-chain attacks. First seen on arstechnica.com Jump to article: arstechnica.com/security/2025/04/ai-generated-code-could-be-a-disaster-for-the-software-supply-chain-heres-why/
-
RSAC 2025: Using an ‘MRI’ for neural networks to understand LLM jailbreaks
First seen on scworld.com Jump to article: www.scworld.com/news/rsac-2025-using-an-mri-for-neural-networks-to-understand-llm-jailbreaks
-
Cisco, former Google, Meta experts train cybersecurity LLM
Cisco’s new Foundation AI group, which includes engineers from multiple companies, has open-sourced a compact AI reasoning model for cybersecurity based on Llama 3. First seen on techtarget.com Jump to article: www.techtarget.com/searchitoperations/news/366623089/Cisco-Google-Meta-collab-trains-cybersecurity-LLM
-
Popular LLMs Found to Produce Vulnerable Code by Default
Backslash Security found that naïve prompts resulted in code vulnerable to at least four of the 10 most common vulnerabilities across popular LLMs. First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/llms-vulnerable-code-default/
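The finding maps onto familiar weakness classes. A hedged illustration of one of them, SQL injection, showing the string-built query LLMs tend to emit by default next to the parameterized fix (table and data are made up):

```python
# Naive-prompt output vs. the secure form, using an in-memory SQLite table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"

# What LLMs tend to emit by default -- injectable string interpolation:
query = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(query).fetchall())   # returns rows it should not

# The fix: bound parameters, so the input is treated as data, not SQL:
safe = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
print(safe.fetchall())                  # empty result
```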
-
Dems fret over DOGE feeding sensitive data into random AI
Using LLMs to pick programs, people, contracts to cut is bad enough but doing it with Musk’s Grok? Yikes First seen on theregister.com Jump to article: www.theregister.com/2025/04/18/house_democrats_doge/
-
Cybersecurity Snapshot: NIST Aligns Its Privacy and Cyber Frameworks, While Researchers Warn About Hallucination Risks from GenAI Code Generators
Tags: access, advisory, ai, attack, breach, china, cisa, cisco, ciso, cloud, computer, control, csf, cve, cyber, cyberattack, cybersecurity, data, defense, encryption, espionage, exploit, firmware, framework, governance, government, group, hacker, hacking, healthcare, identity, infrastructure, Internet, LLM, malicious, mfa, mitigation, mitre, network, nist, open-source, password, phishing, privacy, risk, risk-assessment, router, service, software, strategy, supply-chain, technology, threat, tool, update, vulnerability. Check out NIST’s effort to further mesh its privacy and cyber frameworks. Plus, learn why code-writing GenAI tools can put developers at risk of package-confusion attacks. Also, find out what Tenable webinar attendees said about identity security. And get the latest on the MITRE CVE program and on attacks against edge routers. Dive into five…
-
ATLSecCon 2025: Security Readiness Means Human Readiness
LLMs won’t fix a broken SOC, but apprenticeship might. ATLSecCon 2025 revealed how outdated hiring and cultural gatekeeping are breaking cybersecurity from the inside out. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/04/atlseccon-2025-security-readiness-means-human-readiness/
-
Agentic AI is both boon and bane for security pros
Recent agentic security signposts: We have seen numerous examples of how quickly building your own autonomous AI agents has taken root. Last month Microsoft demonstrated six new AI agents that work with its Copilot software and talk directly to its various security tools to identify vulnerabilities and flag identity and asset compromises. Simbian is hosting…
-
AI Awful at Fixing Buggy Code
LLMs Falter on Real-World Bugs, Even With Debugger Access: Microsoft. Artificial intelligence can code, but it can't debug, says Microsoft, after observing how large language models performed on a series of real-world software programming tests. Most LLMs struggle to resolve software bugs, even when given access to traditional developer tools such as debuggers.…
-
Criticism of OpenAI: Experts Warn About Shortened Safety Testing
OpenAI has apparently shortened its safety testing. OpenAI is known for AI projects such as the GPT series, Codex, DALL-E, and Whisper. Experts now fear that the AI research company could be shipping its AI offerings without adequate safeguards. According to a report in the Financial Times (FT), the maker of ChatGPT now gives its employees and external groups only a few days to assess the risks…
-
AI Hallucinations Create a New Software Supply Chain Threat
Researchers uncover new software supply chain threat from LLM-generated package hallucinations. The post AI Hallucinations Create a New Software Supply Chain Threat appeared first on SecurityWeek. First seen on securityweek.com Jump to article: www.securityweek.com/ai-hallucinations-create-a-new-software-supply-chain-threat/
-
Package hallucination: LLMs may deliver malicious code to careless devs
LLMs’ tendency to “hallucinate”…
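One pragmatic guardrail, sketched below: before installing anything an LLM suggests, confirm the name actually resolves on PyPI and eyeball basic signals. This cannot prove safety (an attacker may have registered a once-hallucinated name), and the second package name in the example is deliberately invented:

```python
# Check an LLM-suggested dependency against PyPI before installing it.
import json
import urllib.error
import urllib.request

def pypi_info(name: str) -> dict | None:
    """Return PyPI metadata for a package, or None if it does not exist."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except urllib.error.HTTPError:
        return None  # 404: nothing registered under this name

for suggested in ["requests", "requets-toolbelt-pro"]:  # second name is made up
    info = pypi_info(suggested)
    if info is None:
        print(f"{suggested}: NOT on PyPI -- likely hallucinated, do not install")
    else:
        releases = len(info.get("releases", {}))
        print(f"{suggested}: exists, {releases} releases -- review before use")
```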
-
Frequently Asked Questions About Model Context Protocol (MCP) and Integrating with AI for Agentic Applications
The emergence of the Model Context Protocol (MCP) for AI is gaining significant interest due to its standardization of connecting external data sources to large language models (LLMs). While these updates are good news for AI developers, they raise some security concerns. In this blog we address FAQs about MCP. Tenable Research has compiled this blog…
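For intuition, MCP is built on JSON-RPC 2.0: clients discover a server's tools via tools/list and invoke them via tools/call. The hand-rolled handler below sketches only that message shape; the lookup_cve tool and its schema are hypothetical, and real integrations should use an official MCP SDK. Note the security angle: every exposed tool widens what a connected model can do.

```python
# Illustrative sketch of the MCP message shape (JSON-RPC 2.0), not an SDK.
import json

TOOLS = {
    "lookup_cve": {  # hypothetical example tool
        "description": "Return a summary for a CVE identifier",
        "inputSchema": {"type": "object",
                        "properties": {"id": {"type": "string"}},
                        "required": ["id"]},
    }
}

def handle(request: dict) -> dict:
    """Dispatch the two tool-related MCP methods."""
    if request["method"] == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif request["method"] == "tools/call":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text",
                               "text": f"stub summary for {args['id']}"}]}
    else:
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

print(json.dumps(handle({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})))
```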
-
BSidesLV24 Breaking Ground BOLABuster: Harnessing LLMs For Automating BOLA Detection
Authors/Presenters: Jay Chen, Ravid Mazon. Our sincere appreciation to BSidesLV and the presenters/authors for publishing their erudite Security BSidesLV24 content, originating from the conference’s events at the Tuscany Suites & Casino and via the organization’s YouTube channel. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/04/bsideslv24-breaking-ground-bolabuster-harnessing-llms-for-automating-bola-detection/
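For readers new to the term, BOLA (Broken Object-Level Authorization) is the bug class the talk automates hunting for. A minimal hypothetical example of the flaw and its fix:

```python
# BOLA in miniature: the handler trusts the object ID in the request and
# never checks that the authenticated caller owns that object.
INVOICES = {1: {"owner": "alice", "total": 120},
            2: {"owner": "bob",   "total": 999}}

def get_invoice_vulnerable(invoice_id: int, current_user: str) -> dict:
    # BOLA: any authenticated user can fetch any invoice by guessing IDs.
    return INVOICES[invoice_id]

def get_invoice_fixed(invoice_id: int, current_user: str) -> dict:
    invoice = INVOICES[invoice_id]
    if invoice["owner"] != current_user:   # object-level ownership check
        raise PermissionError("not your invoice")
    return invoice

print(get_invoice_vulnerable(2, "alice"))  # bob's data leaks
# get_invoice_fixed(2, "alice") -> PermissionError
```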
-
Why Palo Alto Networks Is Eyeing a $700M Buy of Protect AI
Largest Palo Alto Purchase Since 2020 Would Aid AI Model Security and Governance Palo Alto Networks is eyeing its largest startup deal since December 2020, with the platform giant targeting Protect AI, a startup that offers AI scanning, LLM security and Gen AI red teaming. Palo Alto Networks is prepared to pay between $650 million…
-
Report: Weaponized LLMs escalating cybersecurity risks
First seen on scworld.com Jump to article: www.scworld.com/brief/report-weaponized-llms-escalating-cybersecurity-risks
-
Excessive agency in LLMs: The growing risk of unchecked autonomy
For an AI agent to “think” and act autonomously, it must be granted agency; that is, it must be allowed to integrate with other systems, read and analyze data, and have … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/04/08/llm-excessive-agency-risk/
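One common mitigation, sketched with hypothetical action names: default-deny the agent's action space and gate irreversible actions behind explicit human approval:

```python
# Sketch of bounding agency: read-only actions run autonomously,
# high-impact actions need a human in the loop, everything else is denied.
from typing import Callable

READ_ONLY = {"fetch_logs", "summarize_alerts"}
HIGH_IMPACT = {"rotate_credentials", "delete_resource", "send_email"}

def execute_agent_action(action: str, approve: Callable[[str], bool]) -> str:
    if action in READ_ONLY:
        return f"ran {action} autonomously"
    if action in HIGH_IMPACT:
        if not approve(action):                      # human sign-off required
            return f"blocked {action}: approval denied"
        return f"ran {action} with human approval"
    return f"blocked {action}: not on any allowlist"  # default deny

# Simulated approver that denies everything:
print(execute_agent_action("fetch_logs", lambda a: False))
print(execute_agent_action("delete_resource", lambda a: False))
```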
-
Google Pushing ‘Sec-Gemini’ AI Model for Threat-Intel Workflows
Experimental Sec-Gemini v1 touts a combination of Google’s Gemini LLM capabilities with real-time security data and tooling from Mandiant. The post Google Pushing ‘Sec-Gemini’ AI Model for Threat-Intel Workflows appeared first on SecurityWeek. First seen on securityweek.com Jump to article: www.securityweek.com/google-pushing-sec-gemini-ai-model-for-threat-intel-workflows/
-
The rise of compromised LLM attacks
In this Help Net Security video, Sohrob Kazerounian, Distinguished AI Researcher at Vectra AI, discusses how the ongoing rapid adoption of LLM-based applications has already … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/04/07/compromised-llm-attacks-video/
-
AI programming copilots are worsening code security and leaking more secrets
Tags: access, ai, api, application-security, attack, authentication, best-practice, breach, ceo, ciso, container, control, credentials, cybersecurity, data, data-breach, github, government, incident response, injection, least-privilege, LLM, monitoring, open-source, openai, password, programming, risk, skills, software, strategy, tool, training, vulnerability. Overlooked security controls: Ellen Benaim, CISO at enterprise content management firm Templafy, said AI coding assistants often fail to adhere to the robust secret management practices typically observed in traditional systems. “For example, they may insert sensitive information in plain text within source code or configuration files,” Benaim said. “Furthermore, because large portions of code are…
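The anti-pattern Benaim describes, next to the standard fix (key and variable names are invented for illustration):

```python
# Secret handling: what assistants often emit vs. the conventional fix.
import os

# Anti-pattern -- a live-looking credential committed to source control:
API_KEY = "sk-live-51Hx..."  # DO NOT do this; value here is a fake placeholder

# Fix: keep the secret out of the repo and read it from the environment.
api_key = os.environ.get("PAYMENTS_API_KEY")  # hypothetical variable name
if api_key is None:
    raise RuntimeError("PAYMENTS_API_KEY not set; refusing to start")
```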

