Tag: LLM
-
SophosAI-Team erstellt neue Benchmarks im Bereich Maschinelles Lernen
Tags: LLMei der Zusammenfassung von Vorfallinformationen aus Rohdaten erbringen die meisten LLMs eine ausreichende Leistung, es gibt jedoch Raum für Verbesseru… First seen on infopoint-security.de Jump to article: www.infopoint-security.de/sophosai-team-erstellt-neue-benchmarks-im-bereich-maschinelles-lernen/a36923/
-
WithSecure bringt GenAITool Luminen auf den Markt
WithSecure™ Luminen nutzt fortschrittliche LLM-Funktionen (Large Language Models) sowie andere KI-Techniken, um die Produktivität von IT-Sicher… First seen on infopoint-security.de Jump to article: www.infopoint-security.de/withsecure-bringt-genai-cybersecurity-tool-luminen-auf-den-markt/a37443/
-
Unternehmen können von innovativen Datenquellen für generative KI, LLMs, FinOps und Nachhaltigkeit profitieren
Der Datenfluss in den Unternehmen wird nach wie vor durch zahlreiche Herausforderungen beeinträchtigt, darunter solche, die mit Menschen, Prozessen un… First seen on infopoint-security.de Jump to article: www.infopoint-security.de/unternehmen-koennen-von-innovativen-datenquellen-fuer-generative-ki-llms-finops-und-nachhaltigkeit-profitieren/a38048/
-
How LLMs could help defenders write better and faster detection
First seen on blog.talosintelligence.com Jump to article: blog.talosintelligence.com/how-llms-could-help-defenders-write-better-and-faster-detection/
-
Careful Where You Code: Multiple Vulnerabilities in AI-Powered PR-Agent
Introduction There is a push to use LLMs in all aspects of software engineering, far beyond merely generating code snippets. This push includes integr… First seen on research.kudelskisecurity.com Jump to article: research.kudelskisecurity.com/2024/08/29/careful-where-you-code-multiple-vulnerabilities-in-ai-powered-pr-agent/
-
Black Friday Fake Stores Surge 110%: How LLMs and Cheap Domains Empower Cybercrime
The 2024 holiday shopping season is witnessing an alarming rise in fraudulent e-commerce activity. According to Netcraft, fake online stores have surged by 110% between August and October, capitalizing on... First seen on securityonline.info Jump to article: securityonline.info/black-friday-fake-stores-surge-110-how-llms-and-cheap-domains-empower-cybercrime/
-
How a 2-Hour Interview With an LLM Makes a Digital Twin
Scientists Devise Technique to Make AI Models Mimic Specific People. Researchers have devised a technique to train artificial intelligence models to impersonate people’s behavior based on just two hours of interviews, creating a virtual replica that can mimic an individual’s values and preferences. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/how-2-hour-interview-llm-makes-digital-twin-a-26910
-
Cybersecurity Snapshot: Prompt Injection and Data Disclosure Top OWASP’s List of Cyber Risks for GenAI LLM Apps
Tags: access, advisory, ai, application-security, attack, backup, best-practice, breach, cisa, cloud, computer, cve, cyber, cyberattack, cybercrime, cybersecurity, data, exploit, extortion, firewall, framework, governance, government, group, guide, Hardware, incident, incident response, infrastructure, injection, intelligence, Internet, LLM, malicious, microsoft, mitigation, mitre, monitoring, network, nist, office, open-source, powershell, privacy, ransomware, regulation, risk, risk-management, russia, service, skills, software, sql, strategy, supply-chain, tactics, technology, theft, threat, tool, update, vulnerability, vulnerability-management, windowsDon’t miss OWASP’s update to its “Top 10 Risks for LLMs” list. Plus, the ranking of the most harmful software weaknesses is out. Meanwhile, critical infrastructure orgs have a new framework for using AI securely. And get the latest on the BianLian ransomware gang and on the challenges of protecting water and transportation systems against…
-
Google OSS-Fuzz Harnesses AI to Expose 26 Hidden Security Vulnerabilities
One of these flaws detected using LLMs was in the widely used OpenSSL library First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/google-oss-fuzz-ai-expose-26/
-
Google’s AI bug hunters sniff out two dozen-plus code gremlins that humans missed
OSS-Fuzz is making a strong argument for LLMs in security research First seen on theregister.com Jump to article: www.theregister.com/2024/11/20/google_ossfuzz/
-
OWASP Warns of Growing Data Exposure Risk from AI in New Top 10 List for LLMs
OWASP has updated its Top 10 list of risks for LLMs and GenAI, upgrading several areas and introducing new categories First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/owasp-data-exposure-risk-ai/
-
AI About-Face: ‘Mantis’ Turns LLM Attackers Into Prey
Experimental counter-offensive system responds to malicious AI probes with their own surreptitious prompt-injection commands. First seen on darkreading.com Jump to article: www.darkreading.com/cybersecurity-operations/deceptive-framework-defense-mislead-attacking-ai
-
It’s ‘Alarmingly Easy’ to Jailbreak LLM-Controlled Robots
Researchers Manipulate LLM-Driven Robots into Detonating Bombs in Sandbox. Robots controlled by large language models can be jailbroken alarmingly easily, found researchers who manipulated machines into detonating bombs. Jailbreaking attacks are applicable and arguably, significantly more effective on AI-powered robots, researchers said. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/its-alarmingly-easy-to-jailbreak-llm-controlled-robots-a-26837
-
Letting chatbots run robots ends as badly as you’d expect
Tags: LLMLLM-controlled droids easily jailbroken to perform mayhem, researchers warn First seen on theregister.com Jump to article: www.theregister.com/2024/11/16/chatbots_run_robots/
-
Open source LLM tool primed to sniff out Python zero-days
First seen on theregister.com Jump to article: www.theregister.com/2024/10/20/python_zero_day_tool/
-
Google AI Platform Bugs Leak Proprietary Enterprise LLMs
The tech giant fixed privilege-escalation and model-exfiltration vulnerabilities in Vertex AI that could have allowed attackers to steal or poison custom-built AI models. First seen on darkreading.com Jump to article: www.darkreading.com/cloud-security/google-ai-platform-bugs-proprietary-enterprise-llms
-
Big Sleep AI Agent Puts SQLite Software Bug to Bed
A research tool by the company found a vulnerability in the SQLite open source database, demonstrating the defensive potential for using LLMs to find … First seen on darkreading.com Jump to article: www.darkreading.com/application-security/google-big-sleep-ai-agent-sqlite-software-bug
-
AI & LLMs Show Promise in Squashing Software Bugs
Large language models (LLMs) can help app security firms find and fix software vulnerabilities. Malicious actors are on to them, too, but here’s why defenders may retain the edge. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/ai-llms-show-promise-squashing-software-bugs
-
Google’s Big Sleep LLM agent discovers exploitable bug in SQLite
First seen on scworld.com Jump to article: www.scworld.com/news/googles-big-sleep-llm-agent-discovers-exploitable-bug-in-sqlite
-
Subverting LLM Coders
Really interesting research: “An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities against Strong Detection“: Abstract: Large Language Models (LLMs) have transformed code com- pletion tasks, providing context-based suggestions to boost developer productivity in software engineering. As users often fine-tune these models for specific applications, poisoning and backdoor attacks can covertly alter…
-
Google Says Its AI Found SQLite Vulnerability That Fuzzing Missed
Google has showcased the capabilities of its Big Sleep LLM agent, which found a previously unknown exploitable memory safety issue in SQLite. The post… First seen on securityweek.com Jump to article: www.securityweek.com/google-says-its-ai-found-sqlite-vulnerability-that-fuzzing-missed/
-
Google Uses Its Big Sleep AI Agent to Find SQLite Security Flaw
Google researchers behind the vendor’s Big Sleep project used the LLM-based AI agent to detect a security flaw in SQLite, illustrating the value the e… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/11/google-uses-its-big-sleep-ai-agent-to-find-sqlite-security-flaw/
-
Strategien für den Einsatz von Large Language Models – LLMs für Cybersecurity-Aufgaben nutzen
First seen on security-insider.de Jump to article: www.security-insider.de/large-language-models-cybersicherheit-a-886bd13a853e6c2639c6cc39de5fdc41/
-
ChatGPT-4o can be used for autonomous voice-based scams
Researchers have shown that it’s possible to abuse OpenAI’s real-time voice API for ChatGPT-4o, an advanced LLM chatbot, to conduct financial scams wi… First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/chatgpt-4o-can-be-used-for-autonomous-voice-based-scams/
-
Mozilla: ChatGPT Can Be Manipulated Using Hex Code
LLMs tend to miss the forest for the trees, understanding specific instructions but not their broader context. Bad actors can take advantage of this m… First seen on darkreading.com Jump to article: www.darkreading.com/application-security/chatgpt-manipulated-hex-code
-
Open Source LLM Tool Sniffs Out Python Zero-Days
First seen on darkreading.com Jump to article: www.darkreading.com/application-security/open-source-llm-tool-finds-python-zero-days
-
dope.security Embeds LLM in CASB to Improve Data Security
dope.security this week added a cloud access security broker (CASB) to its portfolio that identifies any externally shared file and leverages a large … First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/10/dope-security-embeds-llm-in-casb-to-improve-data-security/

