Tag: openai
-
AI Browsers That Beat Paywalls by Imitating Humans
The emergence of AI-powered browsers represents a significant shift in how artificial intelligence interacts with web content. However, it has also introduced unprecedented challenges for digital publishers and content creators. Last week, OpenAI released Atlas, joining a growing wave of AI browsers, including Perplexity’s Comet and Microsoft’s Copilot mode in Edge, that aim to transform…
-
Do robots dream of secure networking? Teaching cybersecurity to AI systems
This blog demonstrates a proof of concept using LangChain and OpenAI, integrated with Cisco Umbrella API, to provide AI agents with real-time threat intelligence for evaluating domain dispositions. First seen on blog.talosintelligence.com Jump to article: blog.talosintelligence.com/do-robots-dream-of-secure-networking/
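A minimal sketch of what that integration could look like, assuming an Umbrella Investigate-style categorization endpoint and illustrative environment variables for credentials (the endpoint path, model name, and variable names are assumptions, not details from the article):

    # Sketch: expose a Cisco Umbrella domain-reputation lookup to an OpenAI model
    # as a LangChain tool. Endpoint path and env vars are assumed for illustration.
    import os
    import requests
    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI

    UMBRELLA_URL = "https://investigate.api.umbrella.com/domains/categorization"  # assumed path

    @tool
    def check_domain(domain: str) -> str:
        """Return Umbrella's disposition for a domain: malicious, benign, or unknown."""
        resp = requests.get(
            f"{UMBRELLA_URL}/{domain}",
            headers={"Authorization": f"Bearer {os.environ['UMBRELLA_TOKEN']}"},
            timeout=10,
        )
        resp.raise_for_status()
        status = resp.json().get(domain, {}).get("status")  # 1 benign, -1 malicious, 0 unknown
        return {1: "benign", -1: "malicious"}.get(status, "unknown")

    llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([check_domain])
    reply = llm.invoke("Should we allow traffic to example.com through the proxy?")
    print(reply.tool_calls)  # the model is expected to request a check_domain lookup

An agent loop would then execute the requested check_domain call against Umbrella and feed the disposition back to the model before it answers.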
-
Vulnerability in OpenAI’s AI Browser – Security Flaw in ChatGPT Atlas Allows Takeover of User Accounts
First seen on security-insider.de Jump to article: www.security-insider.de/sicherheitsluecke-chatgpt-atlas-warnung-a-e76b68af32bfe9fdc512ab7b0253c62c/
-
HackedGPT: New Vulnerabilities in GPT Models Allow Attackers to Launch 0-Click Attacks
Cybersecurity researchers at Tenable have uncovered a series of critical vulnerabilities in OpenAI’s ChatGPT that could allow malicious actors to steal private user data and launch attacks without any user interaction. The security flaws affect hundreds of millions of users who interact with large language models daily, raising significant concerns about the safety of AI.…
-
Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI’s ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users’ memories and chat histories without their knowledge. The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI’s GPT-4o and GPT-5 models. OpenAI has…
-
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms. Key takeaways: Tenable Research has discovered multiple new and persistent vulnerabilities in OpenAI’s ChatGPT that could allow an attacker to exfiltrate private information from users’ memories and…
-
SesameOp Backdoor Abused OpenAI Assistants API for Remote Access
Microsoft researchers found the SesameOp backdoor using OpenAI’s Assistants API for remote access, data theft, and command communication. First seen on hackread.com Jump to article: hackread.com/sesameop-backdoor-openai-assistants-api-access/
-
SesameOp: New backdoor exploits OpenAI API for covert C2
Microsoft uncovered a new backdoor, named SesameOp, that abuses the OpenAI Assistants API for command-and-control, allowing covert communication within compromised systems. Microsoft Incident Response Detection and Response Team (DART) researchers discovered the backdoor in July 2025 while […]
-
Hackers Hijack OpenAI API in Stealthy New Backdoor Attack
Hackers created a stealthy backdoor that exploits OpenAI’s API for covert command-and-control operations. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/threats/hackers-hijack-openai-api-in-stealthy-new-backdoor-attack/
-
SesameOp Backdoor Uses OpenAI API for Covert C2
Malware used in a months-long attack demonstrates how bad actors are misusing generative AI services in unique and stealthy ways. First seen on darkreading.com Jump to article: www.darkreading.com/cyberattacks-data-breaches/sesameop-backdoor-openai-api-covert-c2
-
OpenAI Assistants API Exploited in ‘SesameOp’ Backdoor
Instead of relying on more traditional methods, the backdoor exploits OpenAI’s Assistants API for command-and-control communications. First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/openai-assistants-api-sesameop/
-
MY TAKE: From AOL-Time Warner to OpenAI-Amazon, is the next tech bubble already inflating?
Anyone remember the dot-com bubble burst? The early warning came in January 2000, when AOL and Time Warner joined forces in a $164 billion deal, the largest merger in U.S. history at the time. Related: Reuters’ backstory on Amazon… (more…) First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/11/my-take-from-aol-time-warner-to-openai-amazon-is-the-next-tech-bubble-already-inflating/
-
New backdoor ‘SesameOp’ abuses OpenAI Assistants API for stealthy C2 operations
Lessons for defenders and platform providers: Microsoft clarified that OpenAI’s platform itself wasn’t breached or exploited; rather, its legitimate API functions were misused as a relay channel, highlighting a growing risk as generative AI becomes part of enterprise and development workflows. Attackers can now co-opt public AI endpoints to mask malicious intent, making detection significantly…
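One way to act on that lesson, sketched under assumptions (a CSV proxy-log export with src_host and dest_domain columns, plus a hypothetical allowlist of hosts approved to call the OpenAI API; this is not Microsoft’s published guidance), is to flag machines reaching api.openai.com that have no business doing so:

    # Sketch: surface hosts contacting api.openai.com outside an approved allowlist.
    # Log layout and allowlist entries are assumptions for the example.
    import csv
    from collections import Counter

    ALLOWED_HOSTS = {"build-agent-01", "data-science-ws-03"}  # hypothetical allowlist
    API_DOMAIN = "api.openai.com"

    def unexpected_openai_callers(log_path: str) -> Counter:
        """Count requests to api.openai.com made by hosts not on the allowlist."""
        hits = Counter()
        with open(log_path, newline="") as fh:
            for row in csv.DictReader(fh):
                if row.get("dest_domain", "").endswith(API_DOMAIN) and row.get("src_host") not in ALLOWED_HOSTS:
                    hits[row["src_host"]] += 1
        return hits

    for host, count in unexpected_openai_callers("proxy_log.csv").most_common():
        print(f"review {host}: {count} requests to {API_DOMAIN}")

Hits are candidates for review rather than proof of compromise, since legitimate tools increasingly call the same endpoint.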
-
OpenAI API moonlights as malware HQ in Microsoft’s latest discovery
Redmond uncovers SesameOp, a backdoor hiding its tracks by using OpenAI’s Assistants API as a command channel. First seen on theregister.com Jump to article: www.theregister.com/2025/11/04/openai_api_moonlights_as_malware/
-
SesameOp: Using the OpenAI Assistants API for Covert C2 Communication
Microsoft’s Detection and Response Team has exposed sophisticated backdoor malware that exploits the OpenAI Assistants API as an unconventional command-and-control communication channel. Named SesameOp, this threat demonstrates how adversaries are rapidly adapting to leverage legitimate cloud services for malicious purposes, making detection significantly more challenging for security teams. The discovery highlights the evolving tactics…
-
Microsoft Detects “SesameOp” Backdoor Using OpenAI’s API as a Stealth Command Channel
Microsoft has disclosed details of a novel backdoor dubbed SesameOp that uses the OpenAI Assistants Application Programming Interface (API) for command-and-control (C2) communications. “Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a C2 channel as a way to stealthily communicate and orchestrate malicious activities within the compromised environment.” First seen…
-
OpenAI Signs $38B Deal With Amazon for Compute
AWS to Build Server Clusters, Nvidia to Supply Chips for 7 Years. Loss-making OpenAI added to a string of deals with a $38 billion commitment on Monday to use compute resources provided by Amazon Web Services. The AI giant said AWS will build out server clusters using Nvidia’s flagship Blackwell chips for the next seven years.…
-
SesameOp malware abuses OpenAI Assistants API in attacks
Microsoft security researchers have discovered a new backdoor malware that uses the OpenAI Assistants API as a covert command-and-control channel. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/security/microsoft-sesameop-malware-abuses-openai-assistants-api-in-attacks/

