Tag: openai
-
OpenAI Atlas Browser Vulnerability Lets Attackers Execute Malicious Scripts in ChatGPT
Cybersecurity firm LayerX has identified a critical vulnerability in OpenAI’s ChatGPT Atlas browser that allows malicious actors to inject harmful instructions into ChatGPT’s memory and execute remote code. This security flaw poses significant risks to users across all browsers but presents particularly severe dangers for those using the new ChatGPT Atlas browser. Cross-Site Request Forgery…
-
‘ChatGPT Tainted Memories’ Exploit Enables Command Injection in Atlas Browser
LayerX Security found a flaw in OpenAI’s ChatGPT Atlas browser that lets attackers inject commands into its memory, posing major security and phishing risks. First seen on hackread.com Jump to article: hackread.com/chatgpt-tainted-memories-atlas-browser/
-
New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands
Tags: access, ai, chatgpt, cybersecurity, exploit, intelligence, malicious, malware, openai, vulnerabilityCybersecurity researchers have discovered a new vulnerability in OpenAI’s ChatGPT Atlas web browser that could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant’s memory and run arbitrary code.”This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware,” LayerX First seen on thehackernews.com…
-
Crafted URLs can trick OpenAI Atlas into running dangerous commands
Attackers can trick OpenAI Atlas browser via prompt injection, treating malicious instructions disguised as URLs in the omnibox as trusted commands. Attackers can exploit the OpenAI Atlas browser by disguising malicious instructions as URLs in the omnibox, which Atlas interprets as trusted commands, enabling harmful actions. NeuralTrust researchers warn that agentic browsers fail by not…
-
Researchers exploit OpenAI’s Atlas by disguising prompts as URLs
NeutralTrust shows how agentic browser can interpret bogus links as trusted user commands First seen on theregister.com Jump to article: www.theregister.com/2025/10/27/openai_atlas_prompt_injection/
-
ChatGPT’s Atlas Browser Jailbroken to Hide Malicious Prompts Inside URLs
Security researchers at NeuralTrust have uncovered a critical vulnerability in OpenAI’s Atlas browser that allows attackers to bypass safety measures by disguising malicious instructions as innocent-looking web addresses. The flaw exploits how the browser’s omnibox interprets user input, potentially enabling harmful actions without proper security checks. The Omnibox Vulnerability Explained Atlas features an omnibox that…
-
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.”The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to…
-
Strafverfolgung: OpenAI zur Herausgabe von Nutzerdaten gezwungen
Ein Verdächtiger nannte in einem anonymen Forum Prompts, die er in ChatGPT eingegeben hatte. Diese konnten mithilfe von OpenAI zugeordnet werden. First seen on golem.de Jump to article: www.golem.de/news/strafverfolgung-openai-zur-herausgabe-von-nutzerdaten-gezwungen-2510-201549.html
-
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.”The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to…
-
Strafverfolgung: OpenAI zur Herausgabe von Nutzerdaten gezwungen
Ein Verdächtiger nannte in einem anonymen Forum Prompts, die er in ChatGPT eingegeben hatte. Diese konnten mithilfe von OpenAI zugeordnet werden. First seen on golem.de Jump to article: www.golem.de/news/strafverfolgung-openai-zur-herausgabe-von-nutzerdaten-gezwungen-2510-201549.html
-
OpenAI goes after Microsoft 365 Copilot’s lunch with ‘company knowledge’ feature
ChatGPT can now rummage through corporate files via connectors, though Redmond still has the deeper hooks First seen on theregister.com Jump to article: www.theregister.com/2025/10/24/openai_chatgpt_company_knowledge/
-
The glaring security risks with AI browser agents
New AI browsers from OpenAI and Perplexity promise to increase user productivity, but they also come with increased security risks. First seen on techcrunch.com Jump to article: techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/
-
The glaring security risks with AI browser agents
New AI browsers from OpenAI and Perplexity promise to increase user productivity, but they also come with increased security risks. First seen on techcrunch.com Jump to article: techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/
-
Amazon Explains How Its AWS Outage Took Down the Web
Plus: The Jaguar Land Rover hack sets an expensive new record, OpenAI’s new Atlas browser raises security fears, Starlink cuts off scam compounds, and more. First seen on wired.com Jump to article: www.wired.com/story/amazon-explains-how-its-aws-outage-took-down-the-web/
-
Cybersecurity Snapshot: Top Advice for Detecting and Preventing AI Attacks, and for Securing AI Systems
Tags: access, ai, attack, authentication, awareness, best-practice, breach, business, chatgpt, china, ciso, cloud, computing, container, control, credentials, crime, cve, cyber, cyberattack, cybersecurity, data, defense, detection, email, exploit, extortion, finance, flaw, framework, fraud, google, governance, government, group, guide, hacker, hacking, healthcare, iam, identity, incident response, intelligence, LLM, malicious, malware, mitigation, monitoring, network, open-source, openai, organized, phishing, ransom, risk, risk-management, russia, sans, scam, service, skills, soc, strategy, supply-chain, technology, theft, threat, tool, training, vulnerability, zero-trustAs organizations eagerly adopt AI, cybersecurity teams are racing to protect these new systems. In this special edition of the Cybersecurity Snapshot, we round up some of the best recent guidance on how to fend off AI attacks, and on how to safeguard your AI systems. Key takeaways Developers are getting new playbooks from groups…
-
OpenAI’s Atlas shrugs off inevitability of prompt injection, releases AI browser anyway
‘Trust no AI’ says one researcher First seen on theregister.com Jump to article: www.theregister.com/2025/10/22/openai_defends_atlas_as_prompt/
-
OpenAI’s Atlas shrugs off inevitability of prompt injection, releases AI browser anyway
‘Trust no AI’ says one researcher First seen on theregister.com Jump to article: www.theregister.com/2025/10/22/openai_defends_atlas_as_prompt/
-
Ministry of Justice’s OpenAI deal paves way to sovereign AI
OpenAI has been busy signing deals with the UK government to bolster UK artificial intelligence. It’s now launching data residency for UK customers First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366633421/Ministry-of-Justices-OpenAI-deal-paves-way-to-sovereign-AI
-
Spoofed AI sidebars can trick Atlas, Comet users into dangerous actions
OpenAI’s Atlas and Perplexity’s Comet browsers are vulnerable to AI sidebar spoofing attacks that mislead users into following fake AI-generated instructions. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/spoofed-ai-sidebars-can-trick-atlas-comet-users-into-dangerous-actions/
-
Spoofed AI sidebars can trick Atlas, Comet users into dangerous actions
OpenAI’s Atlas and Perplexity’s Comet browsers are vulnerable to AI sidebar spoofing attacks that mislead users into following fake AI-generated instructions. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/spoofed-ai-sidebars-can-trick-atlas-comet-users-into-dangerous-actions/
-
OpenAI Faces DHS Request to Disclose User’s ChatGPT Prompts in Investigation
Over the past year, federal agents struggled to uncover who operated a notorious child exploitation site on the dark web. Their search took an unexpected turn when the suspect revealed their use of ChatGPT, marking a significant moment in digital investigations. Federal Warrant Seeks ChatGPT Data Last week, in Maine, a federal search warrant was…
-
OpenAI Faces DHS Request to Disclose User’s ChatGPT Prompts in Investigation
Over the past year, federal agents struggled to uncover who operated a notorious child exploitation site on the dark web. Their search took an unexpected turn when the suspect revealed their use of ChatGPT, marking a significant moment in digital investigations. Federal Warrant Seeks ChatGPT Data Last week, in Maine, a federal search warrant was…
-
Is Sora 2 the Future of Video? AI, Copyright, and Privacy Issues
OpenAI’s Sora 2 is here, and it’s not just another AI toy. This episode explores how Sora 2 works, how users can insert almost anything into generated content, and why that’s raising alarms about privacy, identity, and copyright. We walk you through the initial opt-out copyright controversy, the backlash from studios and creators, and… First…
-
OpenAI confirms GPT-6 is not shipping in 2025
Tags: openaiOpenAI is not planning to ship GPT-6 this year, but that doesn’t necessarily mean the company will not release new models. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-confirms-gpt-6-is-not-shipping-in-2025/
-
Cybersecurity Snapshot: F5 Breach Prompts Urgent U.S. Gov’t Warning, as OpenAI Details Disrupted ChatGPT Abuses
Tags: ai, attack, awareness, backdoor, breach, business, chatgpt, china, cisa, cloud, control, corporate, cve, cyber, cybersecurity, data, data-breach, defense, detection, exploit, framework, fraud, governance, government, group, hacker, incident, infrastructure, Internet, iran, law, LLM, malicious, malware, mitigation, monitoring, network, openai, organized, phishing, privacy, resilience, risk, russia, scam, security-incident, service, software, strategy, supply-chain, technology, threat, training, update, vulnerabilityF5’s breach triggers a CISA emergency directive, as Tenable calls it “a five-alarm fire” that requires urgent action. Meanwhile, OpenAI details how attackers try to misuse ChatGPT. Plus, boards are increasing AI and cyber disclosures. And much more! Key takeaways A critical breach at cybersecurity firm F5, attributed to a nation-state, has triggered an urgent…
-
OpenAI’s ChatGPT is so popular that almost no one will pay for it
If you build it, they will come and expect the service to be free First seen on theregister.com Jump to article: www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay/
-
OpenAI’s ChatGPT is so popular that almost no one will pay for it
If you build it, they will come and expect the service to be free First seen on theregister.com Jump to article: www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay/
-
13 cybersecurity myths organizations need to stop believing
Tags: access, ai, attack, authentication, backup, banking, breach, business, ceo, compliance, computer, computing, corporate, credentials, cyber, cybersecurity, data, data-breach, deep-fake, defense, encryption, finance, government, group, identity, incident response, infrastructure, jobs, law, malicious, mfa, monitoring, network, nist, openai, passkey, password, phishing, privacy, regulation, risk, service, skills, strategy, technology, theft, threat, tool, vulnerabilityBig tech platforms have strong verification that prevents impersonation: Some of the largest tech platforms like to talk about their strong identity checks as a way to stop impersonation. But looking good on paper is one thing, and holding up to the promise in the real world is another.”The truth is that even advanced verification…
-
Hackers Mimic as OpenAI and Sora Services to Steal Login Credentials
Hackers have launched a sophisticated phishing campaign impersonating both OpenAI and the recently released Sora 2 AI service. By cloning legitimate-looking landing pages, these actors are duping users into submitting their login credentials, participating in faux “gift” surveys, and even falling victim to cryptocurrency scams. Security researchers note that these deceptive domains are already ensnaring…
-
Simple Prompt Injection Lets Hackers Bypass OpenAI Guardrails Framework
Security researchers have discovered a fundamental vulnerability in OpenAI’s newly released Guardrails framework that can be exploited using basic prompt injection techniques. The vulnerability enables attackers to circumvent the system’s safety mechanisms and generate malicious content without triggering any security alerts, raising serious concerns about the effectiveness of AI self-regulation approaches. Critical Flaw in LLM-Based…

