Tag: LLM
-
RSA Conference 2024: AI and the Future Of Security
RSA 2024 explored AI’s impact on security, featuring sessions on AI governance, LLMs, cloud security, and CISO roles. Here are just a few of the exper… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/05/rsa-conference-2024-ai-and-the-future-of-security/
-
Reality Defender Wins RSAC Innovation Sandbox Competition
In a field thick with cybersecurity startups showing off how they use AI and LLMs, Reality Defender stood out for its tool for detecting and labeling … First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/reality-defender-wins-rsac-innovation-sandbox
-
LLMs & Malicious Code Injections: ‘We Have to Assume It’s Coming’
First seen on darkreading.com Jump to article: www.darkreading.com/application-security/llms-malicious-code-injections-we-have-to-assume-its-coming-
-
Looking closer at Microsoft’s investment in UAE AI vendor G42
The tech giant will own a minor stake, and G42’s LLM will be on Azure. The move helps the cloud provider expand globally and helps the U.S. court the … First seen on techtarget.com Jump to article: www.techtarget.com/searchenterpriseai/news/366581197/Looking-closer-at-Microsoft-investment-in-UAE-AI-vendor-G42
-
Novel LLMjacking Attacks Target Cloud-Based AI Models
It was probably inevitable. Threat researchers detected bad actors using stolen credentials to target LLMs, with the eventual goal of selling the acce… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/05/novel-llmjacking-attacks-target-cloud-based-ai-models/
-
New LLMjacking Used Stolen Cloud Credentials to Attack Cloud LLM Servers
Researchers have identified a new form of cyberattack termed >>LLMjacking,
-
More than ChatGPT: Privacy and Confidentiality in the Age of LLMs
Much has been made about the privacy and confidentiality issues with ChatGPT. Just take a look at the press for a list of companies prohibiting ChatGP… First seen on modernciso.com Jump to article: modernciso.com/2023/06/01/more-than-chatgpt-privacy-and-confidentiality-in-the-age-of-llms/
-
Dear Stack Overflow denizens, thanks for helping train OpenAI’s billion-dollar LLMs
First seen on theregister.com Jump to article: www.theregister.com/2024/05/07/stack_overflow_openai/
-
Prompt Fuzzer: Open-source tool for strengthening GenAI apps
Prompt Fuzzer is an open-source tool that evaluates the security of your GenAI application’s system prompt against dynamic LLM-based threats. Prompt F… First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2024/04/29/prompt-fuzzer-open-source-genai-applications-security/
-
Lessons for CISOs From OWASP’s LLM Top 10
First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/top-lessons-cisos-owasp-llm-top-10
-
Using LLMs to Unredact Text
Tags: LLMFirst seen on thesecurityblogger.com Jump to article: www.thesecurityblogger.com/using-llms-to-unredact-text/
-
AI Watchdog Defends Against New LLM Jailbreak Method
First seen on packetstormsecurity.com Jump to article: packetstormsecurity.com/news/view/35785/AI-Watchdog-Defends-Against-New-LLM-Jailbreak-Method.html
-
Vulnerabilities for AI and ML Applications are Skyrocketing
In their haste to deploy LLM tools, organizations may overlook crucial security practices. The rise in threats like Remote Code Execution indicates an… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/04/vulnerabilities-for-ai-and-ml-applications-are-skyrocketing/
-
Knostic Brings Access Control to LLMs
Led by industry veterans Gadi Evron and Sounil Yu, the new company lets organizations adjust how much information LLMs provide based on the user’s rol… First seen on darkreading.com Jump to article: www.darkreading.com/data-privacy/knostic-brings-access-control-to-llms
-
Microsoft’s ‘AI Watchdog’ defends against new LLM jailbreak method
First seen on scmagazine.com Jump to article: www.scmagazine.com/news/microsofts-ai-watchdog-defends-against-new-llm-jailbreak-method
-
TA547 Uses an LLM-Generated Dropper to Infect German Orgs
It’s finally happening: Rather than just for productivity and research, threat actors are using LLMs to write malware. But companies need not worry ju… First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/ta547-uses-llm-generated-dropper-infect-german-orgs
-
How Do We Integrate LLMs Security Into Application Development?
Tags: LLMFirst seen on darkreading.com Jump to article: www.darkreading.com/application-security/how-do-we-integrate-llm-security-into-application-development-
-
Google Extends Generative AI Reach Deeper into Security
The Google Chronicle cybersecurity platform extensions are based on the Gemini LLM with the addition of cybersecurity data. The post le Chronicle cybe… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/04/google-extends-generative-ai-reach-deeper-into-security/
-
OWASP Top 10 for LLM Applications: A Quick Guide
An overview of the top vulnerabilities affecting large language model (LLM) applications. The post iew of the top vulnerabilities affecting large lang… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/04/owasp-top-10-for-llm-applications-a-quick-guide/
-
Should We Just Accept the Lies We Get From AI Chatbots?
NYC’s New Chatbot, Hallucinating LLMs Just Can’t Be Fixed, Says Linguistics Expert. Employers can now fire an employee who complains about sexual hara… First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/should-we-just-accept-lies-we-get-from-ai-chatbots-a-24821
-
Cybercriminals Weigh Options for Using LLMs: Buy, Build, or Break?
While some cybercriminals have bypassed guardrails to force legitimate AI models to turn bad, building their own malicious chatbot platforms and makin… First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/cybercriminals-options-lms-buy-build-break
-
AI Package Hallucination Hackers Abusing ChatGPT, Gemini to Spread Malware
The research investigates the persistence and scale of AI package hallucination, a technique where LLMs recommend non-existent malicious packages. The… First seen on gbhackers.com Jump to article: gbhackers.com/ai-package-hallucination/
-
Pervasive LLM Hallucinations Expand Code Developer Attack Surface
The tendency of popular AI-based tools to recommend nonexistent code libraries offers a bigger opportunity than thought to distribute malicious packag… First seen on darkreading.com Jump to article: www.darkreading.com/application-security/pervasive-llm-hallucinations-expand-code-developer-attack-surface
-
Picus Security Melds Security Knowledge Graph with Open AI LLM
Picus Security today added an artificial intelligence (AI) capability to enable cybersecurity teams to automate tasks via a natural language interface… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/04/picus-security-melds-security-knowledge-graph-with-open-ai-llm/
-
Generative AI Security – Secure Your Business in a World Powered by LLMs
Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolut… First seen on thehackernews.com Jump to article: thehackernews.com/2024/03/generative-ai-security-secure-your.html
-
New Nvidia, GitHub AI coding assistants expand devs’ options
GitHub Copilot Enterprise and StarCoder2 LLMs, both released this week, will add to an array of AI coding assistants, but caution, especially with sec… First seen on techtarget.com Jump to article: www.techtarget.com/searchsoftwarequality/news/366571641/New-Nvidia-GitHub-AI-coding-assistants-expand-devs-options
-
Cloudflare wants to put a firewall in front of your LLM
First seen on theregister.com Jump to article: www.theregister.com/2024/03/05/cloudflare_firewall_ai/
-
Researchers Highlight Google’s Gemini AI Susceptibility to LLM Threats
Google’s;Gemini;large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content,… First seen on thehackernews.com Jump to article: thehackernews.com/2024/03/researchers-highlight-googles-gemini-ai.html
-
How To Weaponize LLMs To Auto-Hijack Websites
Tags: LLMFirst seen on packetstormsecurity.com Jump to article: packetstormsecurity.com/news/view/35550/How-To-Weaponize-LLMs-To-Auto-Hijack-Websites.html

