Tag: chatgpt
-
Atlas Exploit Enables Attack on ChatGPT Memory
Security researchers have discovered a new vulnerability affecting OpenAI's ChatGPT Atlas browser. Just days after cybersecurity analysts had warned against installing OpenAI's new Atlas browser, researchers at LayerX Security uncovered a flaw. The vulnerability reportedly allows attackers to inject malicious commands directly into users' ChatGPT memory and execute remote code. As Or…
-
AI Search Tools Easily Fooled by Fake Content
New research shows AI crawlers like Perplexity, Atlas, and ChatGPT are surprisingly easy to fool. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/ai-search-tools-easily-fooled-by-fake-content
-
New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts
Cybersecurity researchers have flagged a new security issue in agentic web browsers like OpenAI ChatGPT Atlas that exposes underlying artificial intelligence (AI) models to context poisoning attacks. In the attack devised by AI security company SPLX, a bad actor can set up websites that serve different content to browsers and AI crawlers run by ChatGPT and…
-
Ethical Prompt Injection: Fighting Shadow AI with Its Own Weapon
AI language models like ChatGPT, DeepSeek, and Copilot are transforming business operations at lightning speed. They help us generate documents, summarise meetings, and even make decisions faster than ever before. But this rapid adoption comes at a price. Employees often use unapproved AI tools on personal devices, risking sensitive company information leaking into ungoverned spaces.…
-
Security for AI: A Practical Guide to Enforcing Your AI Acceptable Use Policy
Tags: access, ai, awareness, best-practice, business, chatgpt, compliance, control, corporate, data, data-breach, disinformation, finance, governance, government, guide, intelligence, LLM, malicious, monitoring, openai, privacy, regulation, risk, service, strategy, technology, threat, tool, training, update, vulnerability
An AI acceptable use policy can help your organization mitigate the risk of employees accidentally exposing sensitive data to public AI tools. Benchmark your organization’s policy against our best practices and discover how prompt-level visibility from Tenable AI Exposure eases policy enforcement. Key takeaways: An AI acceptable use policy governs the appropriate use of generative…
-
LayerX Exposes Critical Flaw in OpenAI’s ChatGPT Atlas Browser
LayerX found a flaw in ChatGPT’s Atlas browser letting hackers inject malicious code and exploit AI memory for remote access. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/news/openai-atlas-vulnerability/
-
Atlas browser exploit lets attackers hijack ChatGPT memory
Tags: ai, attack, browser, business, ceo, chatgpt, chrome, cloud, credentials, detection, exploit, identity, mitigation, monitoring, phishing, soc, threat, update, vulnerability
How to detect a hit: Detecting a memory-based compromise in ChatGPT Atlas is not like hunting for traditional malware. There are no files, registry keys, or executables to isolate. Instead, security teams need to look for behavioral anomalies such as subtle shifts in how the assistant responds, what it suggests, and when it does so. “There…
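The detection approach described here — watching for shifts in what the assistant suggests rather than scanning for files — can be illustrated with a toy baseline monitor. This is a hypothetical sketch, not a vendor tool; the class and action names are invented for illustration:

```python
from collections import Counter

class AssistantBehaviorMonitor:
    """Toy behavioral-anomaly check (hypothetical): flag assistant
    suggestions that were never seen during normal operation."""

    def __init__(self) -> None:
        self.baseline = Counter()  # action name -> count seen during baselining

    def observe(self, action: str) -> None:
        """Record a suggestion observed during known-good operation."""
        self.baseline[action] += 1

    def is_anomalous(self, action: str) -> bool:
        """An action absent from the baseline warrants a closer look."""
        return self.baseline[action] == 0

# Build a baseline from routine assistant behavior.
monitor = AssistantBehaviorMonitor()
for action in ["summarize_page", "open_tab", "summarize_page"]:
    monitor.observe(action)

print(monitor.is_anomalous("open_tab"))            # routine: not flagged
print(monitor.is_anomalous("export_credentials"))  # never seen: flagged
```

Real detection would need richer signals (timing, phrasing drift, suggested destinations), but the principle — compare against an observed behavioral baseline instead of scanning artifacts — is the one the article describes.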
-
Darknet Operator in the US: Two ChatGPT Prompts as the Basis for a Court Order
US authorities used two ChatGPT prompts as the basis for identifying a suspect. This kind of reverse search is new for AI services. First seen on computerbase.de Jump to article: www.computerbase.de/news/netzpolitik/darknet-betreiber-in-den-usa-zwei-chatgpt-prompts-als-grundlage-fuer-gerichtsbeschluss.94817
-
Zero-Click Exploit Targets MCP and Linked AI Agents to Stealthily Steal Data
Operant AI’s security research team has uncovered Shadow Escape, a dangerous zero-click attack that exploits the Model Context Protocol to steal sensitive data through AI assistants. The attack works with widely used platforms, including ChatGPT, Claude, Gemini, and other AI agents that rely on MCP connections to access organisational systems. Unlike traditional security breaches requiring…
-
OpenAI Atlas Browser Vulnerability Lets Attackers Execute Malicious Scripts in ChatGPT
Cybersecurity firm LayerX has identified a critical vulnerability in OpenAI’s ChatGPT Atlas browser that allows malicious actors to inject harmful instructions into ChatGPT’s memory and execute remote code. This security flaw poses significant risks to users across all browsers but presents particularly severe dangers for those using the new ChatGPT Atlas browser. Cross-Site Request Forgery…
-
‘ChatGPT Tainted Memories’ Exploit Enables Command Injection in Atlas Browser
LayerX Security found a flaw in OpenAI’s ChatGPT Atlas browser that lets attackers inject commands into its memory, posing major security and phishing risks. First seen on hackread.com Jump to article: hackread.com/chatgpt-tainted-memories-atlas-browser/
-
New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands
Tags: access, ai, chatgpt, cybersecurity, exploit, intelligence, malicious, malware, openai, vulnerability
Cybersecurity researchers have discovered a new vulnerability in OpenAI’s ChatGPT Atlas web browser that could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant’s memory and run arbitrary code. “This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware,” LayerX said. First seen on thehackernews.com…
-
ChatGPT’s Atlas Browser Jailbroken to Hide Malicious Prompts Inside URLs
Security researchers at NeuralTrust have uncovered a critical vulnerability in OpenAI’s Atlas browser that allows attackers to bypass safety measures by disguising malicious instructions as innocent-looking web addresses. The flaw exploits how the browser’s omnibox interprets user input, potentially enabling harmful actions without proper security checks. The Omnibox Vulnerability Explained Atlas features an omnibox that…
-
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack in which its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. “The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to…
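The URL-vs-command ambiguity described above can be sketched with a toy classifier. This is hypothetical logic for illustration, not Atlas's actual code: an omnibox must decide whether input is a URL or a natural-language request, and a string that merely resembles a URL can fall through to the command path, where the assistant treats it as an instruction.

```python
from urllib.parse import urlparse

def classify_omnibox_input(text: str) -> str:
    """Toy URL-vs-command heuristic (illustrative only)."""
    parsed = urlparse(text)
    # Treat input as a URL only if it has a scheme and a network location.
    if parsed.scheme in ("http", "https") and parsed.netloc:
        return "navigate"
    # Everything else is handed to the AI assistant as a command.
    return "command"

# A well-formed URL is navigated to, as expected.
print(classify_omnibox_input("https://example.com/page"))  # navigate

# A URL-lookalike (single slash, embedded text) fails strict URL
# parsing and is instead interpreted as a natural-language
# instruction -- the kind of ambiguity the researchers exploited.
print(classify_omnibox_input("https:/example.com please summarize my open tabs"))  # command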
-
Strafverfolgung: OpenAI zur Herausgabe von Nutzerdaten gezwungen
Ein Verdächtiger nannte in einem anonymen Forum Prompts, die er in ChatGPT eingegeben hatte. Diese konnten mithilfe von OpenAI zugeordnet werden. First seen on golem.de Jump to article: www.golem.de/news/strafverfolgung-openai-zur-herausgabe-von-nutzerdaten-gezwungen-2510-201549.html
-
Chatbots Are Pushing Sanctioned Russian Propaganda
ChatGPT, Gemini, DeepSeek, and Grok are serving users propaganda from Russian-backed media when asked about the invasion of Ukraine, new research finds. First seen on wired.com Jump to article: www.wired.com/story/chatbots-are-pushing-sanctioned-russian-propaganda/
-
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.”The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to…
-
Strafverfolgung: OpenAI zur Herausgabe von Nutzerdaten gezwungen
Ein Verdächtiger nannte in einem anonymen Forum Prompts, die er in ChatGPT eingegeben hatte. Diese konnten mithilfe von OpenAI zugeordnet werden. First seen on golem.de Jump to article: www.golem.de/news/strafverfolgung-openai-zur-herausgabe-von-nutzerdaten-gezwungen-2510-201549.html
-
OpenAI goes after Microsoft 365 Copilot’s lunch with ‘company knowledge’ feature
ChatGPT can now rummage through corporate files via connectors, though Redmond still has the deeper hooks First seen on theregister.com Jump to article: www.theregister.com/2025/10/24/openai_chatgpt_company_knowledge/
-
Cybersecurity Snapshot: Top Advice for Detecting and Preventing AI Attacks, and for Securing AI Systems
Tags: access, ai, attack, authentication, awareness, best-practice, breach, business, chatgpt, china, ciso, cloud, computing, container, control, credentials, crime, cve, cyber, cyberattack, cybersecurity, data, defense, detection, email, exploit, extortion, finance, flaw, framework, fraud, google, governance, government, group, guide, hacker, hacking, healthcare, iam, identity, incident response, intelligence, LLM, malicious, malware, mitigation, monitoring, network, open-source, openai, organized, phishing, ransom, risk, risk-management, russia, sans, scam, service, skills, soc, strategy, supply-chain, technology, theft, threat, tool, training, vulnerability, zero-trustAs organizations eagerly adopt AI, cybersecurity teams are racing to protect these new systems. In this special edition of the Cybersecurity Snapshot, we round up some of the best recent guidance on how to fend off AI attacks, and on how to safeguard your AI systems. Key takeaways Developers are getting new playbooks from groups…
-
Shadow Escape 0-Click Attack in AI Assistants Puts Trillions of Records at Risk
Operant AI reveals Shadow Escape, a zero-click attack using the MCP flaw in ChatGPT, Gemini, and Claude to secretly steal trillions of SSNs and financial data. Traditional security is blind to this new AI threat. First seen on hackread.com Jump to article: hackread.com/shadow-escape-0-click-attack-ai-assistants-risk/
-
Shadow Escape 0-Click Attack in AI Assistants Puts Trillions of Records at Risk
Operant AI reveals Shadow Escape, a zero-click attack using the MCP flaw in ChatGPT, Gemini, and Claude to secretly steal trillions of SSNs and financial data. Traditional security is blind to this new AI threat. First seen on hackread.com Jump to article: hackread.com/shadow-escape-0-click-attack-ai-assistants-risk/
-
OpenAI Faces DHS Request to Disclose User’s ChatGPT Prompts in Investigation
Over the past year, federal agents struggled to uncover who operated a notorious child exploitation site on the dark web. Their search took an unexpected turn when the suspect revealed their use of ChatGPT, marking a significant moment in digital investigations. Federal Warrant Seeks ChatGPT Data Last week, in Maine, a federal search warrant was…
-
OpenAI Faces DHS Request to Disclose User’s ChatGPT Prompts in Investigation
Over the past year, federal agents struggled to uncover who operated a notorious child exploitation site on the dark web. Their search took an unexpected turn when the suspect revealed their use of ChatGPT, marking a significant moment in digital investigations. Federal Warrant Seeks ChatGPT Data Last week, in Maine, a federal search warrant was…
-
Cybersecurity Awareness Month Is for Security Leaders, Too
Think you know all there is to know about cybersecurity? Guess again. Shadow AI is challenging security leaders with many of the same issues raised by other “shadow” technologies. Only this time, it’s evolving at breakneck speed. Key takeaways: The vast majority of organizations (89%) are either using AI or piloting it. Shadow AI lurks…

