Tag: chatgpt
-
Multiple ChatGPT Security Bugs Allow Rampant Data Theft
Attackers can use them to inject arbitrary prompts, exfiltrate personal user information, bypass safety mechanisms, and take other malicious actions. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/multiple-chatgpt-security-bugs-rampant-data-theft
-
Schwachstelle im KI-Browser von OpenAI – Sicherheitslücke in ChatGPT Atlas erlaubt Übernahme von Nutzerkonten
First seen on security-insider.de Jump to article: www.security-insider.de/sicherheitsluecke-chatgpt-atlas-warnung-a-e76b68af32bfe9fdc512ab7b0253c62c/
-
Schwachstelle im KI-Browser von OpenAI – Sicherheitslücke in ChatGPT Atlas erlaubt Übernahme von Nutzerkonten
First seen on security-insider.de Jump to article: www.security-insider.de/sicherheitsluecke-chatgpt-atlas-warnung-a-e76b68af32bfe9fdc512ab7b0253c62c/
-
HackedGPT: New Vulnerabilities in GPT Models Allow Attackers to Launch 0-Click Attacks
Cybersecurity researchers at Tenable have uncovered a series of critical vulnerabilities in OpenAI’s ChatGPT that could allow malicious actors to steal private user data and launch attacks without any user interaction. The security flaws affect hundreds of millions of users who interact with large language models daily, raising significant concerns about the safety of AI.…
-
HackedGPT: New Vulnerabilities in GPT Models Allow Attackers to Launch 0-Click Attacks
Cybersecurity researchers at Tenable have uncovered a series of critical vulnerabilities in OpenAI’s ChatGPT that could allow malicious actors to steal private user data and launch attacks without any user interaction. The security flaws affect hundreds of millions of users who interact with large language models daily, raising significant concerns about the safety of AI.…
-
Researchers Find ChatGPT Vulnerabilities That Let Attackers Trick AI Into Leaking Data
Cybersecurity researchers have disclosed a new set of vulnerabilities impacting OpenAI’s ChatGPT artificial intelligence (AI) chatbot that could be exploited by an attacker to steal personal information from users’ memories and chat histories without their knowledge.The seven vulnerabilities and attack techniques, according to Tenable, were found in OpenAI’s GPT-4o and GPT-5 models. OpenAI has First…
-
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms. Key takeaways: Tenable Research has discovered multiple new and persistent vulnerabilities in OpenAI’s ChatGPT that could allow an attacker to exfiltrate private information from users’ memories and…
-
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms. Key takeaways: Tenable Research has discovered multiple new and persistent vulnerabilities in OpenAI’s ChatGPT that could allow an attacker to exfiltrate private information from users’ memories and…
-
HackedGPT: Novel AI Vulnerabilities Open the Door for Private Data Leakage
Tenable Research has discovered seven vulnerabilities and attack techniques in ChatGPT, including unique indirect prompt injections, exfiltration of personal user information, persistence, evasion, and bypass of safety mechanisms. Key takeaways: Tenable Research has discovered multiple new and persistent vulnerabilities in OpenAI’s ChatGPT that could allow an attacker to exfiltrate private information from users’ memories and…
-
XLoader Malware Analyzed Using ChatGPT’s AI, Breaks RC4 Encryption Layers in Hours
Cybersecurity researchers have successfully demonstrated how artificial intelligence can dramatically accelerate malware analysis, decrypting complex XLoader samples in a fraction of the time previously required. XLoader, a sophisticated malware loader with information-stealing capabilities dating back to 2020, has long been considered one of the most challenging malware families to analyze. The malware combines multiple layers…
-
XLoader Malware Analyzed Using ChatGPT’s AI, Breaks RC4 Encryption Layers in Hours
Cybersecurity researchers have successfully demonstrated how artificial intelligence can dramatically accelerate malware analysis, decrypting complex XLoader samples in a fraction of the time previously required. XLoader, a sophisticated malware loader with information-stealing capabilities dating back to 2020, has long been considered one of the most challenging malware families to analyze. The malware combines multiple layers…
-
OpenAIs Aardvark soll Fehler im Code erkennen und beheben
Tags: ai, ceo, chatgpt, cve, cyberattack, LLM, open-source, openai, risk, software, supply-chain, tool, update, vulnerabilityKI soll das Thema Sicherheit frühzeitig in den Development-Prozess miteinbeziehen.OpenAI hat Aardvark vorgestellt, einen autonomen Agenten auf Basis von GPT-5. Er soll wie ein menschlicher Sicherheitsforscher in der Lage sein, Code zu scannen, zu verstehen und zu patchen.Im Gegensatz zu herkömmlichen Scannern, die verdächtigen Code mechanisch markieren, versucht Aardvark zu analysieren, wie und warum sich…
-
KI-Irrsinn Teil 1: Wenn ChaptGPT, Copilot Co. dich zu Fake-Orten locken
In Zeiten des AI-Einsatzes von ChatGPT und anderer Lösungen laufen Touristen in ein bisher unbekanntes Problem. Sie werden durch Berichte im Internet zu Fake-Locations gelockt, die gar nicht existieren. Dank KI wurden Berichte und Videos gefälscht. Das könnte ein wachsendes … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/11/03/ki-irrsinn-teil-1-wenn-chaptgpt-copilot-co-dich-zu-fake-orten-locken/
-
KI-Irrsinn Teil 1: Wenn ChaptGPT, Copilot Co. dich zu Fake-Orten locken
In Zeiten des AI-Einsatzes von ChatGPT und anderer Lösungen laufen Touristen in ein bisher unbekanntes Problem. Sie werden durch Berichte im Internet zu Fake-Locations gelockt, die gar nicht existieren. Dank KI wurden Berichte und Videos gefälscht. Das könnte ein wachsendes … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/11/03/ki-irrsinn-teil-1-wenn-chaptgpt-copilot-co-dich-zu-fake-orten-locken/
-
OpenAI’s ChatGPT Atlas: What It Means for Cybersecurity and Privacy
In this episode, we explore OpenAI’s groundbreaking release GPT Atlas, the AI-powered browser that remembers your activities and acts on your behalf. Discover its features, implications for enterprise security, and the risks it poses to privacy. Join hosts Tom Eston and Scott Wright as they discuss everything from the browser’s memory function to vulnerabilities like……
-
OpenAI’s ChatGPT Atlas: What It Means for Cybersecurity and Privacy
In this episode, we explore OpenAI’s groundbreaking release GPT Atlas, the AI-powered browser that remembers your activities and acts on your behalf. Discover its features, implications for enterprise security, and the risks it poses to privacy. Join hosts Tom Eston and Scott Wright as they discuss everything from the browser’s memory function to vulnerabilities like……
-
OpenAI’s ChatGPT Atlas: What It Means for Cybersecurity and Privacy
In this episode, we explore OpenAI’s groundbreaking release GPT Atlas, the AI-powered browser that remembers your activities and acts on your behalf. Discover its features, implications for enterprise security, and the risks it poses to privacy. Join hosts Tom Eston and Scott Wright as they discuss everything from the browser’s memory function to vulnerabilities like……
-
OpenAI is going Meta route, as it considers memory-based ads on ChatGPT
OpenAI is planning to introduce ads on ChatGPT, as it continues to struggle with revenue from paid users. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-is-going-meta-route-as-it-considers-memory-based-ads-on-chatgpt/
-
Beware of Fake ChatGPT Apps That Spy on Users and Steal Sensitive Data
The proliferation of artificial intelligence applications has created unprecedented opportunities for cybercriminals to exploit user trust through deceptive mobile apps. Mobile app stores today are flooded with hundreds of lookalike applications claiming to offer ChatGPT, DALL·E, and other AI services. Security researchers have discovered that beneath polished logos and promises of advanced functionality lies a…
-
Beware of Fake ChatGPT Apps That Spy on Users and Steal Sensitive Data
The proliferation of artificial intelligence applications has created unprecedented opportunities for cybercriminals to exploit user trust through deceptive mobile apps. Mobile app stores today are flooded with hundreds of lookalike applications claiming to offer ChatGPT, DALL·E, and other AI services. Security researchers have discovered that beneath polished logos and promises of advanced functionality lies a…
-
New Agent-Aware Cloaking Technique Uses ChatGPT Atlas Browser to Feed Fake Content
Security researchers have uncovered a sophisticated attack vector that exploits how AI search tools and autonomous agents retrieve web content. The vulnerability, termed >>agent-aware cloaking,
-
Spyware-Plugged ChatGPT, DALL·E and WhatsApp Apps Target US Users
Are you using a fake version of a popular app? Appknox warns US users about malicious brand clones hiding on third-party app stores. Protect yourself from hidden spyware and ‘commercial parasites.’ First seen on hackread.com Jump to article: hackread.com/spyware-chatgpt-dalle-whatsapp-apps-us-users/
-
Spyware-Plugged ChatGPT, DALL·E and WhatsApp Apps Target US Users
Are you using a fake version of a popular app? Appknox warns US users about malicious brand clones hiding on third-party app stores. Protect yourself from hidden spyware and ‘commercial parasites.’ First seen on hackread.com Jump to article: hackread.com/spyware-chatgpt-dalle-whatsapp-apps-us-users/
-
AtlasExploit ermöglicht Angriff auf ChatGPT-Speicher
Security-Forscher haben eine neue Schwachstelle entdeckt, die den ChatGPT Atlas-Browser von OpenAI betrifft.Nur wenige Tage, nachdem Cybersicherheitsanalysten davor gewarnt hatten, den neuen Atlas-Browser von OpenAI zu installieren, haben Forscher von LayerX Security eine Schwachstelle entdeckt. Die Lücke soll es Angreifen ermöglichen, bösartige Befehle direkt in den ChatGPT-Speicher der Anwender einzuschleusen und Remote-Code auszuführen. Wie Or…
-
AtlasExploit ermöglicht Angriff auf ChatGPT-Speicher
Security-Forscher haben eine neue Schwachstelle entdeckt, die den ChatGPT Atlas-Browser von OpenAI betrifft.Nur wenige Tage, nachdem Cybersicherheitsanalysten davor gewarnt hatten, den neuen Atlas-Browser von OpenAI zu installieren, haben Forscher von LayerX Security eine Schwachstelle entdeckt. Die Lücke soll es Angreifen ermöglichen, bösartige Befehle direkt in den ChatGPT-Speicher der Anwender einzuschleusen und Remote-Code auszuführen. Wie Or…
-
AI Search Tools Easily Fooled by Fake Content
New research shows AI crawlers like Perplexity, Atlas, and ChatGPT are surprisingly easy to fool. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/ai-search-tools-easily-fooled-by-fake-content
-
AI Search Tools Easily Fooled by Fake Content
New research shows AI crawlers like Perplexity, Atlas, and ChatGPT are surprisingly easy to fool. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/ai-search-tools-easily-fooled-by-fake-content
-
AI Search Tools Easily Fooled by Fake Content
New research shows AI crawlers like Perplexity, Atlas, and ChatGPT are surprisingly easy to fool. First seen on darkreading.com Jump to article: www.darkreading.com/cyber-risk/ai-search-tools-easily-fooled-by-fake-content
-
New AI-Targeted Cloaking Attack Tricks AI Crawlers Into Citing Fake Info as Verified Facts
Cybersecurity researchers have flagged a new security issue in agentic web browsers like OpenAI ChatGPT Atlas that exposes underlying artificial intelligence (AI) models to context poisoning attacks.In the attack devised by AI security company SPLX, a bad actor can set up websites that serve different content to browsers and AI crawlers run by ChatGPT and…
-
Ethical Prompt Injection: Fighting Shadow AI with Its Own Weapon
AI language models like ChatGPT, DeepSeek, and Copilot are transforming business operations at lightning speed. They help us generate documents, summarise meetings, and even make decisions faster than ever before. But this rapid adoption comes at a price. Employees often use unapproved AI tools on personal devices, risking sensitive company information leaking into ungoverned spaces.…

