Tag: LLM
-
Hallucination Control: Benefits and Risks of Deploying LLMs as Part of Security Processes
LLMs have introduced a greater risk of the unexpected, so their integration, usage, and maintenance protocols should be extensive and closely monitored… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/07/hallucination-control-benefits-and-risks-of-deploying-llms-as-part-of-security-processes/
-
New Features for the Aqua Security Platform: Protecting LLM Applications from Code to Cloud
First seen on security-insider.de Jump to article: www.security-insider.de/aqua-security-cnapp-schuetzt-ki-anwendungen-auf-llm-basis-a-da066fb2029dcff2321a825bd9495949/
-
How companies increase risk exposure with rushed LLM deployments
In this Help Net Security interview, Jake King, Head of Threat Security Intelligence at Elastic, discusses companies’ exposure to new security risks a… First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2024/07/10/jake-king-elastic-llms-security-risks/
-
Monocle: Open-source LLM for binary analysis search
Monocle is open-source tooling backed by a large language model (LLM) for performing natural language searches against compiled target binaries. Monocle… First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2024/07/08/monocle-open-source-llm-binary-analysis-search/
-
Mastering Efficient Data Processing for LLMs, Generative AI, and Semantic Search
Discover cutting-edge techniques for optimizing data processing in LLMs, generative AI, and semantic search. Learn to leverage vector databases, imple… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/07/mastering-efficient-data-processing-for-llms-generative-ai-and-semantic-search/
-
Five strategies for mitigating LLM risks in cybersecurity apps
First seen on scmagazine.com Jump to article: www.scmagazine.com/perspective/five-strategies-for-mitigating-llm-risks-in-cybersecurity-apps
-
New infosec products of the week: June 28, 2024
Here’s a look at the most interesting products from the past week, featuring releases from ARMO, Cofense, Datadog, and eSentire. Datadog LLM Observability… First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2024/06/28/new-infosec-products-of-the-week-june-28-2024/
-
Google Framework Helps LLMs Perform Basic Vuln Research
First seen on packetstormsecurity.com Jump to article: packetstormsecurity.com/news/view/36027/Google-Framework-Helps-LLMs-Perform-Basic-Vuln-Research.html
-
Google framework helps LLMs perform basic vulnerability research
First seen on scmagazine.com Jump to article: www.scmagazine.com/news/google-framework-helps-llms-perform-basic-vulnerability-research
-
Academics Develop Testing Benchmark for LLMs in Cyber Threat Intelligence
Researchers from the Rochester Institute of Technology introduced a benchmark designed to assess large language models’ performance in cyber threat intelligence… First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/testing-benchmark-llm-cyber-threat/
-
Meta Pauses European GenAI Development Over Privacy Concerns
Meta has delayed plans to train its LLMs using public content shared by adults on Facebook and Instagram following a request by Ireland’s data protection… First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/meta-pauses-europe-gen-ai-privacy/
-
Data Poisoning: EmbedAI Bug Endangers LLMs
First seen on csoonline.com Jump to article: www.csoonline.com/de/a/embedai-bug-gefaehrdet-llms
-
OpenAI, Anthropic Research Reveals More About How LLMs Affect Security and Bias
Anthropic opened a window into the ‘black box’ where ‘features’ steer a large language model’s output. OpenAI dug into the same concept two weeks later… First seen on techrepublic.com Jump to article: www.techrepublic.com/article/anthropic-claude-openai-large-language-model-research/
-
Mozilla Launches 0Din Gen-AI Bug Bounty Program
Mozilla has announced a 0Day Investigative Network (0Din) bug bounty program for LLMs and other deep learning tech… First seen on securityweek.com Jump to article: www.securityweek.com/mozilla-launches-0din-gen-ai-bug-bounty-program/
-
Flawed AI Tools Create Worries for Private LLMs, Chatbots
Companies are looking to large language models to help their employees glean information from unstructured data, but vulnerabilities could lead to dis… First seen on darkreading.com Jump to article: www.darkreading.com/application-security/flawed-ai-tools-create-worries-for-private-llms-chatbots
-
How DataDome Protects AI Apps from Prompt Injection Denial of Wallet Attacks
LLM prompt injection and denial of wallet attacks are new ways malicious actors can attack your company through generative AI apps, such as a chatbot…. First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/06/how-datadome-protects-ai-apps-from-prompt-injection-denial-of-wallet-attacks/
-
New Mindset Needed for Large Language Models
With the right mix of caution, creativity, and commitment, we can build a future where LLMs are not just powerful, but also fundamentally trustworthy… First seen on darkreading.com Jump to article: www.darkreading.com/cybersecurity-operations/new-mindset-needed-for-large-language-models
-
Introducing Secure LLM Workload Access from Aembit
First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/05/introducing-secure-llm-workload-access-from-aembit/
-
Training LLMs: Questions Rise Over AI Auto Opt-In by Vendors
Few Restrictions Appear to Exist, Provided Companies Behave Transparently. Can individuals’ personal data and content be used by artificial intelligence… First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/blogs/training-llms-questions-rise-over-ai-auto-opt-in-by-vendors-p-3625
-
Anthropic’s Generative AI Research Reveals More About How LLMs Affect Security and Bias
First seen on techrepublic.com Jump to article: www.techrepublic.com/article/anthropic-claude-large-language-model-research/
-
Leading LLMs Insecure, Highly Vulnerable to Basic Jailbreaks
All tested LLMs remain highly vulnerable to basic jailbreaks, and some will provide harmful outputs even without dedicated attempts to circumvent thei… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/05/leading-llms-insecure-highly-vulnerable-to-basic-jailbreaks/
-
LLM Definition | What Is a Large Language Model (LLM)?
First seen on security-insider.de Jump to article: www.security-insider.de/was-ist-ein-large-language-model-llm-a-35a596711b0a925200a929cc5fdb8ab3/
-
RSA Conference 2024: AI and the Future Of Security
RSA 2024 explored AI’s impact on security, featuring sessions on AI governance, LLMs, cloud security, and CISO roles. Here are just a few of the exper… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/05/rsa-conference-2024-ai-and-the-future-of-security/
-
Reality Defender Wins RSAC Innovation Sandbox Competition
In a field thick with cybersecurity startups showing off how they use AI and LLMs, Reality Defender stood out for its tool for detecting and labeling … First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/reality-defender-wins-rsac-innovation-sandbox
-
LLMs & Malicious Code Injections: ‘We Have to Assume It’s Coming’
First seen on darkreading.com Jump to article: www.darkreading.com/application-security/llms-malicious-code-injections-we-have-to-assume-its-coming-
-
Looking closer at Microsoft’s investment in UAE AI vendor G42
The tech giant will own a minor stake, and G42’s LLM will be on Azure. The move helps the cloud provider expand globally and helps the U.S. court the … First seen on techtarget.com Jump to article: www.techtarget.com/searchenterpriseai/news/366581197/Looking-closer-at-Microsoft-investment-in-UAE-AI-vendor-G42
-
Novel LLMjacking Attacks Target Cloud-Based AI Models
It was probably inevitable. Threat researchers detected bad actors using stolen credentials to target LLMs, with the eventual goal of selling the access… First seen on securityboulevard.com Jump to article: securityboulevard.com/2024/05/novel-llmjacking-attacks-target-cloud-based-ai-models/
-
New LLMjacking Used Stolen Cloud Credentials to Attack Cloud LLM Servers
Researchers have identified a new form of cyberattack termed "LLMjacking"…
-
More than ChatGPT: Privacy and Confidentiality in the Age of LLMs
Much has been made about the privacy and confidentiality issues with ChatGPT. Just take a look at the press for a list of companies prohibiting ChatGP… First seen on modernciso.com Jump to article: modernciso.com/2023/06/01/more-than-chatgpt-privacy-and-confidentiality-in-the-age-of-llms/
-
Dear Stack Overflow denizens, thanks for helping train OpenAI’s billion-dollar LLMs
First seen on theregister.com Jump to article: www.theregister.com/2024/05/07/stack_overflow_openai/

