Tag: LLM
-
Cisco, former Google, Meta experts train cybersecurity LLM
Cisco’s new Foundation AI group, which includes engineers from multiple companies, has open-sourced a compact AI reasoning model for cybersecurity based on Llama 3. First seen on techtarget.com Jump to article: www.techtarget.com/searchitoperations/news/366623089/Cisco-Google-Meta-collab-trains-cybersecurity-LLM
-
Popular LLMs Found to Produce Vulnerable Code by Default
Backslash Security found that naïve prompts resulted in code vulnerable to at least four of the 10 most common vulnerabilities across popular LLMs. First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/llms-vulnerable-code-default/
-
Dems fret over DOGE feeding sensitive data into random AI
Using LLMs to pick programs, people, contracts to cut is bad enough but doing it with Musk’s Grok? Yikes First seen on theregister.com Jump to article: www.theregister.com/2025/04/18/house_democrats_doge/
-
Cybersecurity Snapshot: NIST Aligns Its Privacy and Cyber Frameworks, While Researchers Warn About Hallucination Risks from GenAI Code Generators
Tags: access, advisory, ai, attack, breach, china, cisa, cisco, ciso, cloud, computer, control, csf, cve, cyber, cyberattack, cybersecurity, data, defense, encryption, espionage, exploit, firmware, framework, governance, government, group, hacker, hacking, healthcare, identity, infrastructure, Internet, LLM, malicious, mfa, mitigation, mitre, network, nist, open-source, password, phishing, privacy, risk, risk-assessment, router, service, software, strategy, supply-chain, technology, threat, tool, update, vulnerability
Check out NIST’s effort to further mesh its privacy and cyber frameworks. Plus, learn why code-writing GenAI tools can put developers at risk of package-confusion attacks. Also, find out what Tenable webinar attendees said about identity security. And get the latest on the MITRE CVE program and on attacks against edge routers. Dive into five…
-
ATLSecCon 2025: Security Readiness Means Human Readiness
LLMs won’t fix a broken SOC, but apprenticeship might. ATLSecCon 2025 revealed how outdated hiring and cultural gatekeeping are breaking cybersecurity from the inside out. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/04/atlseccon-2025-security-readiness-means-human-readiness/
-
Agentic AI is both boon and bane for security pros
Recent agentic security signposts: We have recently seen numerous examples of how quickly building autonomous AI agents has taken root. Last month, Microsoft demonstrated six new AI agents that work with its Copilot software and talk directly to its various security tools to identify vulnerabilities and flag identity and asset compromises. Simbian is hosting…
-
AI Awful at Fixing Buggy Code
LLMs Falter on Real-World Bugs, Even With Debugger Access: Microsoft. Artificial intelligence can code but it can’t debug, says Microsoft after observing how large language models performed on a series of real-world software programming tests. Most LLMs struggle to resolve software bugs, even when given access to traditional developer tools such as debuggers.…
-
Criticism of OpenAI: Experts Warn of Shortened Safety Tests
OpenAI has apparently shortened its safety testing. OpenAI is known for AI projects such as the GPT series, Codex, DALL-E, and Whisper. Experts now fear that the AI research company could release its AI offerings without adequate safeguards. According to a report in the Financial Times (FT), the ChatGPT maker now gives its employees and external groups only a few days to assess the risks…
-
AI Hallucinations Create a New Software Supply Chain Threat
Researchers uncover new software supply chain threat from LLM-generated package hallucinations. The post AI Hallucinations Create a New Software Supply Chain Threat appeared first on SecurityWeek. First seen on securityweek.com Jump to article: www.securityweek.com/ai-hallucinations-create-a-new-software-supply-chain-threat/
-
Package hallucination: LLMs may deliver malicious code to careless devs
LLMs’ tendency to “hallucinate”…
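One common defense against package hallucination is to vet LLM-suggested dependencies against an approved list before anything reaches `pip install`. A minimal sketch, not the researchers’ method; the allowlist and package names below are hypothetical:

```python
"""Guard against LLM package hallucination: only install dependencies
that appear on a vetted internal allowlist (e.g. built from a lockfile).
All package names here are hypothetical examples."""

VETTED = {"requests", "flask", "numpy"}  # hypothetical internal allowlist

def filter_suggestions(suggested):
    """Split LLM-suggested package names into approved and flagged lists."""
    approved = [p for p in suggested if p.lower() in VETTED]
    flagged = [p for p in suggested if p.lower() not in VETTED]
    return approved, flagged

# "flask-securtiy-pro" mimics a typo-squatted/hallucinated name
approved, flagged = filter_suggestions(["requests", "flask-securtiy-pro"])
print("approved:", approved)  # ['requests']
print("flagged:", flagged)    # ['flask-securtiy-pro'] - verify manually before install
```

Flagged names should be checked against the registry by a human, since attackers can pre-register hallucinated names with malicious payloads.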
-
Frequently Asked Questions About Model Context Protocol (MCP) and Integrating with AI for Agentic Applications
The emergence of Model Context Protocol for AI is gaining significant interest due to its standardization of connecting external data sources to large language models (LLMs). While these updates are good news for AI developers, they raise some security concerns. In this blog we address FAQs about MCP. Background Tenable Research has compiled this blog…
-
BSidesLV24 Breaking Ground BOLABuster: Harnessing LLMs For Automating BOLA Detection
Authors/Presenters: Jay Chen, Ravid Mazon Our sincere appreciation to BSidesLV, and the Presenters/Authors for publishing their erudite Security BSidesLV24 content. Originating from the conference’s events located at the Tuscany Suites & Casino; and via the organization’s YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/04/bsideslv24-breaking-ground-bolabuster-harnessing-llms-for-automating-bola-detection/
-
Why Palo Alto Networks Is Eyeing a $700M Buy of Protect AI
Largest Palo Alto Purchase Since 2020 Would Aid AI Model Security and Governance Palo Alto Networks is eyeing its largest startup deal since December 2020, with the platform giant targeting Protect AI, a startup that offers AI scanning, LLM security and Gen AI red teaming. Palo Alto Networks is prepared to pay between $650 million…
-
Report: Weaponized LLMs escalating cybersecurity risks
First seen on scworld.com Jump to article: www.scworld.com/brief/report-weaponized-llms-escalating-cybersecurity-risks
-
Excessive agency in LLMs: The growing risk of unchecked autonomy
For an AI agent to “think” and act autonomously, it must be granted agency; that is, it must be allowed to integrate with other systems, read and analyze data, and have … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/04/08/llm-excessive-agency-risk/
-
Google Pushing ‘Sec-Gemini’ AI Model for Threat-Intel Workflows
Experimental Sec-Gemini v1 touts a combination of Google’s Gemini LLM capabilities with real-time security data and tooling from Mandiant. The post Google Pushing ‘Sec-Gemini’ AI Model for Threat-Intel Workflows appeared first on SecurityWeek. First seen on securityweek.com Jump to article: www.securityweek.com/google-pushing-sec-gemini-ai-model-for-threat-intel-workflows/
-
The rise of compromised LLM attacks
In this Help Net Security video, Sohrob Kazerounian, Distinguished AI Researcher at Vectra AI, discusses how the ongoing rapid adoption of LLM-based applications has already … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/04/07/compromised-llm-attacks-video/
-
AI programming copilots are worsening code security and leaking more secrets
Tags: access, ai, api, application-security, attack, authentication, best-practice, breach, ceo, ciso, container, control, credentials, cybersecurity, data, data-breach, github, government, incident response, injection, least-privilege, LLM, monitoring, open-source, openai, password, programming, risk, skills, software, strategy, tool, training, vulnerability
Overlooked security controls: Ellen Benaim, CISO at enterprise content management firm Templafy, said AI coding assistants often fail to adhere to the robust secret management practices typically observed in traditional systems. “For example, they may insert sensitive information in plain text within source code or configuration files,” Benaim said. “Furthermore, because large portions of code are…
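The plaintext-secrets problem described above is why many teams run pattern-based scans over generated code before it is committed. A minimal sketch assuming generic key formats; the patterns are illustrative, not a complete secret-detection ruleset:

```python
"""Scan source text for hardcoded secrets of the kind AI coding
assistants may emit in plain text. Patterns below are illustrative
examples, not an exhaustive ruleset."""
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_secrets(source: str):
    """Return every substring matching any secret pattern."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pat.finditer(source))
    return hits

sample = 'api_key = "sk-hypothetical-1234567890"\nname = "demo"'
print(find_secrets(sample))
```

In practice this kind of check runs as a pre-commit hook or CI gate, so a leaked credential is caught before it ever reaches the repository history.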
-
Are LLM firewalls the future of AI security?
As large language models permeate industries, experts at Black Hat Asia 2025 debate the need for LLM firewalls and explore their role in fending off emerging AI threats First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366621934/Are-LLM-firewalls-the-future-of-AI-security
-
Hackers Use DeepSeek and Remote Desktop Apps to Deploy TookPS Malware
A recent investigation by cybersecurity researchers has uncovered a large-scale malware campaign leveraging the DeepSeek LLM and popular remote desktop applications to distribute the Trojan-Downloader.Win32.TookPS malware. The attackers targeted both individual users and organizations by disguising malicious software as legitimate business tools, including UltraViewer, AutoCAD, and SketchUp. Malicious Infrastructure and Infection Chain The TookPS malware…
-
LLMs are now available in snack size but digest with care
Passed down wisdom can distort reality: Rather than developing their own contextual understanding, student models rely heavily on their teacher models’ pre-learned conclusions. Whether this limitation can lead to model hallucination is highly debated by experts. Brauchler is of the opinion that the efficiency of the student models is tied to that of their teachers, irrespective…
-
LLM providers on the cusp of an ‘extinction’ phase as capex realities bite
Tags: LLM
Only the strong will survive, but analyst says cull will not be as rapid as during dotcom era First seen on theregister.com Jump to article: www.theregister.com/2025/03/31/llm_providers_extinction/
-
Gemini hackers can deliver more potent attacks with a helping hand from… Gemini
Hacking LLMs has always been more art than science. A new attack on Gemini could change that. First seen on arstechnica.com Jump to article: arstechnica.com/security/2025/03/gemini-hackers-can-deliver-more-potent-attacks-with-a-helping-hand-from-gemini/
-
Nir Zuk: Google’s Multi-Cloud Security Strategy Won’t Work
Palo Alto Networks CTO Nir Zuk predicts Google’s security push through its $32 billion buy of Wiz won’t succeed, as customers are reluctant to buy multi-cloud tools from cloud vendors. Zuk details how adversaries use LLMs at scale and how Palo Alto is unifying SOC tools under its Cortex platform. First seen on govinfosecurity.com Jump…
-
Rising attack exposure, threat sophistication spur interest in detection engineering
Tags: access, ai, attack, automation, banking, ceo, ciso, cloud, compliance, cyber, cybersecurity, data, detection, endpoint, exploit, finance, framework, healthcare, infrastructure, insurance, intelligence, LLM, malware, mitre, network, programming, ransomware, RedTeam, risk, sans, siem, software, supply-chain, tactics, technology, threat, tool, update, vulnerability, zero-day
More than the usual threat detection practices: Proponents argue that detection engineering differs from traditional threat detection practices in approach, methodology, and integration with the development lifecycle. Threat detection processes are typically more reactive and rely on pre-built rules and signatures from vendors that offer limited customization for the organizations using them. In contrast, detection…
-
Malicious AI Tools See 200% Surge as ChatGPT Jailbreaking Talks Increase by 52%
The cybersecurity landscape in 2024 witnessed a significant escalation in AI-related threats, with malicious actors increasingly targeting and exploiting large language models (LLMs). According to KELA’s annual “State of Cybercrime”…
-
ARACNE: LLM-Powered Pentesting Agent Executes Commands on Real Linux Shell Systems
Researchers have introduced ARACNE, a fully autonomous Large Language Model (LLM)-based pentesting agent designed to interact with SSH services on real Linux shell systems. ARACNE is engineered to execute commands autonomously, marking a significant advancement in the automation of cybersecurity testing. The agent’s architecture supports multiple LLM models, enhancing its flexibility and effectiveness in penetration…
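The core of an agent like this is a plan-execute-observe loop: a model proposes a shell command, the agent runs it, and the output is fed back to inform the next step. An illustrative sketch, not the ARACNE authors’ code; the planner here is a hard-coded stub standing in for an LLM call, and it runs locally rather than over SSH:

```python
"""Sketch of an autonomous agent loop: a planner proposes shell
commands, the agent executes them and feeds the output back.
stub_planner is a hypothetical stand-in for a real LLM call."""
import subprocess

def stub_planner(history):
    """Return the next command given (command, output) history, or None to stop."""
    plan = ["whoami", "uname -s"]  # a real agent would query an LLM here
    return plan[len(history)] if len(history) < len(plan) else None

def run_agent(planner, max_steps=5):
    history = []  # (command, output) pairs the planner can condition on
    for _ in range(max_steps):
        cmd = planner(history)
        if cmd is None:
            break
        out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
        history.append((cmd, out.strip()))
    return history

for cmd, out in run_agent(stub_planner):
    print(f"$ {cmd}\n{out}")
```

A production agent would swap the stub for a model call, execute over an SSH channel against the target host, and enforce guardrails (command allowlists, step budgets) around the loop.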
-
Lasso Adds Automated Red Teaming Capability to Test LLMs
Lasso today added an ability to autonomously simulate real-world cyberattacks against large language models (LLMs) to enable organizations to improve the security of artificial intelligence (AI) applications. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/03/lasso-adds-automated-red-teaming-capability-to-test-llms/

