Tag: openai
-
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
Satya has also delivered Sam most of the cash he promised First seen on theregister.com Jump to article: www.theregister.com/2025/10/29/microsoft_earnings_q1_26_openai_loss/
-
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
Satya has also delivered Sam most of the cash he promised First seen on theregister.com Jump to article: www.theregister.com/2025/10/29/microsoft_earnings_q1_26_openai_loss/
-
AtlasExploit ermöglicht Angriff auf ChatGPT-Speicher
Security-Forscher haben eine neue Schwachstelle entdeckt, die den ChatGPT Atlas-Browser von OpenAI betrifft.Nur wenige Tage, nachdem Cybersicherheitsanalysten davor gewarnt hatten, den neuen Atlas-Browser von OpenAI zu installieren, haben Forscher von LayerX Security eine Schwachstelle entdeckt. Die Lücke soll es Angreifen ermöglichen, bösartige Befehle direkt in den ChatGPT-Speicher der Anwender einzuschleusen und Remote-Code auszuführen. Wie Or…
-
AtlasExploit ermöglicht Angriff auf ChatGPT-Speicher
Security-Forscher haben eine neue Schwachstelle entdeckt, die den ChatGPT Atlas-Browser von OpenAI betrifft.Nur wenige Tage, nachdem Cybersicherheitsanalysten davor gewarnt hatten, den neuen Atlas-Browser von OpenAI zu installieren, haben Forscher von LayerX Security eine Schwachstelle entdeckt. Die Lücke soll es Angreifen ermöglichen, bösartige Befehle direkt in den ChatGPT-Speicher der Anwender einzuschleusen und Remote-Code auszuführen. Wie Or…
-
OpenAI’s gpt-oss-safeguard enables developers to build safer AI
OpenAI is releasing a research preview of gpt-oss-safeguard, a set of open-weight reasoning models for safety classification. The models come in two sizes: … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2025/10/29/openai-gpt-oss-safeguard-safety-models/
-
New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts
Cybersecurity researchers have flagged a new security issue in agentic web browsers like OpenAI ChatGPT Atlas that exposes underlying artificial intelligence (AI) models to context poisoning attacks.In the attack devised by AI security company SPLX, a bad actor can set up websites that serve different content to browsers and AI crawlers run by ChatGPT and…
-
OpenAI Restructures, Nonprofit Foundation Retains Control
Nonprofit Foundation Holds Equity, Oversight Around $130B For-Profit Corporation. The nonprofit OpenAI Foundation now controls a $130 billion for-profit arm after a recapitalization process approved by attorneys general in California and Delaware. The nonprofit retains governance authority and will fund global health and AI risk mitigation programs, backed by regulatory approval. First seen on govinfosecurity.com…
-
Security for AI: A Practical Guide to Enforcing Your AI Acceptable Use Policy
Tags: access, ai, awareness, best-practice, business, chatgpt, compliance, control, corporate, data, data-breach, disinformation, finance, governance, government, guide, intelligence, LLM, malicious, monitoring, openai, privacy, regulation, risk, service, strategy, technology, threat, tool, training, update, vulnerabilityAn AI acceptable use policy can help your organization mitigate the risk of employees accidentally exposing sensitive data to public AI tools. Benchmark your organization’s policy against our best practices and discover how prompt-level visibility from Tenable AI Exposure eases policy enforcement. Key takeaways: An AI acceptable use policy governs the appropriate use of generative…
-
LayerX Exposes Critical Flaw in OpenAI’s ChatGPT Atlas Browser
LayerX found a flaw in ChatGPT’s Atlas browser letting hackers inject malicious code and exploit AI memory for remote access. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/news/openai-atlas-vulnerability/
-
Exclusive: OpenAI’s Atlas browser, and others, can be tricked by manipulated web content
Researchers poke holes in OpenAI’s new browser as standards bodies fear U.S. businesses are “sleepwalking” into an AI governance crisis. First seen on cyberscoop.com Jump to article: cyberscoop.com/openai-atlas-splx-research-cloaking-attacks-browser-agents/
-
OpenAI Atlas Browser Vulnerability Lets Attackers Execute Malicious Scripts in ChatGPT
Cybersecurity firm LayerX has identified a critical vulnerability in OpenAI’s ChatGPT Atlas browser that allows malicious actors to inject harmful instructions into ChatGPT’s memory and execute remote code. This security flaw poses significant risks to users across all browsers but presents particularly severe dangers for those using the new ChatGPT Atlas browser. Cross-Site Request Forgery…
-
‘ChatGPT Tainted Memories’ Exploit Enables Command Injection in Atlas Browser
LayerX Security found a flaw in OpenAI’s ChatGPT Atlas browser that lets attackers inject commands into its memory, posing major security and phishing risks. First seen on hackread.com Jump to article: hackread.com/chatgpt-tainted-memories-atlas-browser/
-
New ChatGPT Atlas Browser Exploit Lets Attackers Plant Persistent Hidden Commands
Tags: access, ai, chatgpt, cybersecurity, exploit, intelligence, malicious, malware, openai, vulnerabilityCybersecurity researchers have discovered a new vulnerability in OpenAI’s ChatGPT Atlas web browser that could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant’s memory and run arbitrary code.”This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware,” LayerX First seen on thehackernews.com…
-
Crafted URLs can trick OpenAI Atlas into running dangerous commands
Attackers can trick OpenAI Atlas browser via prompt injection, treating malicious instructions disguised as URLs in the omnibox as trusted commands. Attackers can exploit the OpenAI Atlas browser by disguising malicious instructions as URLs in the omnibox, which Atlas interprets as trusted commands, enabling harmful actions. NeuralTrust researchers warn that agentic browsers fail by not…
-
Researchers exploit OpenAI’s Atlas by disguising prompts as URLs
NeutralTrust shows how agentic browser can interpret bogus links as trusted user commands First seen on theregister.com Jump to article: www.theregister.com/2025/10/27/openai_atlas_prompt_injection/
-
ChatGPT’s Atlas Browser Jailbroken to Hide Malicious Prompts Inside URLs
Security researchers at NeuralTrust have uncovered a critical vulnerability in OpenAI’s Atlas browser that allows attackers to bypass safety measures by disguising malicious instructions as innocent-looking web addresses. The flaw exploits how the browser’s omnibox interprets user input, potentially enabling harmful actions without proper security checks. The Omnibox Vulnerability Explained Atlas features an omnibox that…
-
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.”The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to…
-
Strafverfolgung: OpenAI zur Herausgabe von Nutzerdaten gezwungen
Ein Verdächtiger nannte in einem anonymen Forum Prompts, die er in ChatGPT eingegeben hatte. Diese konnten mithilfe von OpenAI zugeordnet werden. First seen on golem.de Jump to article: www.golem.de/news/strafverfolgung-openai-zur-herausgabe-von-nutzerdaten-gezwungen-2510-201549.html
-
ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands
The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit.”The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to…
-
Strafverfolgung: OpenAI zur Herausgabe von Nutzerdaten gezwungen
Ein Verdächtiger nannte in einem anonymen Forum Prompts, die er in ChatGPT eingegeben hatte. Diese konnten mithilfe von OpenAI zugeordnet werden. First seen on golem.de Jump to article: www.golem.de/news/strafverfolgung-openai-zur-herausgabe-von-nutzerdaten-gezwungen-2510-201549.html
-
OpenAI goes after Microsoft 365 Copilot’s lunch with ‘company knowledge’ feature
ChatGPT can now rummage through corporate files via connectors, though Redmond still has the deeper hooks First seen on theregister.com Jump to article: www.theregister.com/2025/10/24/openai_chatgpt_company_knowledge/
-
The glaring security risks with AI browser agents
New AI browsers from OpenAI and Perplexity promise to increase user productivity, but they also come with increased security risks. First seen on techcrunch.com Jump to article: techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/
-
The glaring security risks with AI browser agents
New AI browsers from OpenAI and Perplexity promise to increase user productivity, but they also come with increased security risks. First seen on techcrunch.com Jump to article: techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/
-
Amazon Explains How Its AWS Outage Took Down the Web
Plus: The Jaguar Land Rover hack sets an expensive new record, OpenAI’s new Atlas browser raises security fears, Starlink cuts off scam compounds, and more. First seen on wired.com Jump to article: www.wired.com/story/amazon-explains-how-its-aws-outage-took-down-the-web/
-
Cybersecurity Snapshot: Top Advice for Detecting and Preventing AI Attacks, and for Securing AI Systems
Tags: access, ai, attack, authentication, awareness, best-practice, breach, business, chatgpt, china, ciso, cloud, computing, container, control, credentials, crime, cve, cyber, cyberattack, cybersecurity, data, defense, detection, email, exploit, extortion, finance, flaw, framework, fraud, google, governance, government, group, guide, hacker, hacking, healthcare, iam, identity, incident response, intelligence, LLM, malicious, malware, mitigation, monitoring, network, open-source, openai, organized, phishing, ransom, risk, risk-management, russia, sans, scam, service, skills, soc, strategy, supply-chain, technology, theft, threat, tool, training, vulnerability, zero-trustAs organizations eagerly adopt AI, cybersecurity teams are racing to protect these new systems. In this special edition of the Cybersecurity Snapshot, we round up some of the best recent guidance on how to fend off AI attacks, and on how to safeguard your AI systems. Key takeaways Developers are getting new playbooks from groups…
-
OpenAI’s Atlas shrugs off inevitability of prompt injection, releases AI browser anyway
‘Trust no AI’ says one researcher First seen on theregister.com Jump to article: www.theregister.com/2025/10/22/openai_defends_atlas_as_prompt/
-
OpenAI’s Atlas shrugs off inevitability of prompt injection, releases AI browser anyway
‘Trust no AI’ says one researcher First seen on theregister.com Jump to article: www.theregister.com/2025/10/22/openai_defends_atlas_as_prompt/
-
Ministry of Justice’s OpenAI deal paves way to sovereign AI
OpenAI has been busy signing deals with the UK government to bolster UK artificial intelligence. It’s now launching data residency for UK customers First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366633421/Ministry-of-Justices-OpenAI-deal-paves-way-to-sovereign-AI
-
Spoofed AI sidebars can trick Atlas, Comet users into dangerous actions
OpenAI’s Atlas and Perplexity’s Comet browsers are vulnerable to AI sidebar spoofing attacks that mislead users into following fake AI-generated instructions. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/spoofed-ai-sidebars-can-trick-atlas-comet-users-into-dangerous-actions/
-
Spoofed AI sidebars can trick Atlas, Comet users into dangerous actions
OpenAI’s Atlas and Perplexity’s Comet browsers are vulnerable to AI sidebar spoofing attacks that mislead users into following fake AI-generated instructions. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/spoofed-ai-sidebars-can-trick-atlas-comet-users-into-dangerous-actions/

