Tag: openai
-
MalTerminal: New GPT-Powered Malware That Writes Its Own Ransomware
A groundbreaking discovery in cybersecurity research has revealed the emergence of ‘MalTerminal’, potentially the earliest known example of Large Language Model (LLM)-enabled malware. It leverages OpenAI’s GPT-4 API to dynamically generate ransomware code and reverse shells at runtime, a significant evolution in malware sophistication that poses unprecedented challenges for traditional detection methods. SentinelLABS researchers…
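Because LLM-enabled malware must carry its API credentials and prompts with it, one reported hunting approach is to scan binaries for embedded key- and prompt-like strings. A minimal illustration of that idea (not SentinelLABS’ actual tooling; the patterns below are illustrative heuristics):

```python
import re

# Heuristic patterns: an OpenAI-style secret key or an instruction-shaped
# string embedded in a binary is a strong hint of LLM-enabled malware.
API_KEY_RE = re.compile(rb"sk-[A-Za-z0-9_-]{20,}")
PROMPT_RE = re.compile(rb"(?i)(you are a|write (ransomware|a reverse shell)|respond only with code)")

def scan_blob(blob: bytes) -> dict:
    """Return any embedded key- or prompt-like strings found in `blob`."""
    return {
        "api_keys": [m.group(0).decode(errors="replace") for m in API_KEY_RE.finditer(blob)],
        "prompts": [m.group(0).decode(errors="replace") for m in PROMPT_RE.finditer(blob)],
    }
```

Real detection engineering would use far richer signatures (entropy checks, YARA rules, import analysis); this only sketches why embedded prompts make such samples huntable at all.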
-
ShadowLeak Zero-Click Flaw Leaks Gmail Data via OpenAI ChatGPT Deep Research Agent
Cybersecurity researchers have disclosed a zero-click flaw in OpenAI ChatGPT’s Deep Research agent that could allow an attacker to leak sensitive Gmail inbox data with a single crafted email, without any user action. The new class of attack has been codenamed ShadowLeak by Radware. Following responsible disclosure on June 18, 2025, the issue was addressed by…
-
‘ShadowLeak’ ChatGPT Attack Allows Hackers to Invisibly Steal Emails
The loophole allows cyberattackers to exfiltrate company data via OpenAI’s infrastructure, leaving no trace at all on enterprise systems. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/shadowleak-chatgpt-invisibly-steal-emails
-
OpenAI’s $4 GPT Go plan may expand to more regions
Tags: openai. OpenAI released the $4 GPT Go plan in August, but it was limited to just India. Now, OpenAI is expanding GPT Go to include new regions. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openais-4-gpt-go-plan-may-expand-to-more-regions/
-
ChatGPT Search is now smarter as OpenAI takes on Google Search
OpenAI has rolled out a big update to ChatGPT Search, which is an AI-powered search feature, similar to Google AI Mode. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-search-is-now-smarter-as-openai-takes-on-google-search/
-
OpenAI plugs ShadowLeak bug in ChatGPT that let miscreants raid inboxes
Radware says flaw enabled hidden email prompts to trick Deep Research agent into exfiltrating sensitive data First seen on theregister.com Jump to article: www.theregister.com/2025/09/19/openai_shadowleak_bug/
-
ShadowLeak: Radware Uncovers Zero-Click Attack on ChatGPT
Radware discovered a server-side data theft attack, dubbed ShadowLeak, targeting ChatGPT; OpenAI has since patched the zero-click vulnerability. The flaw affected ChatGPT’s Deep Research agent when connected to Gmail and browsing. The researchers explained that a crafted email could trigger the agent to…
-
ChatGPT now gives you greater control over GPT-5 Thinking model
OpenAI is finally rolling out a toggle that allows you to decide how hard the GPT-5-thinking model can think. This feature is rolling out to Plus and Pro subscribers. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-now-gives-you-greater-control-over-gpt-5-thinking-model/
-
OpenAI fixes zero-click ShadowLeak vulnerability affecting ChatGPT Deep Research agent
Cybersecurity firm Radware discovered a vulnerability they call “ShadowLeak” where an attacker could exploit the vulnerability by simply sending an email to the user. First seen on therecord.media Jump to article: therecord.media/openai-fixes-zero-click-shadowleak-vulnerability
-
New attack on ChatGPT research agent pilfers secrets from Gmail inboxes
Unlike most prompt injections, ShadowLeak executes on OpenAI’s cloud-based infrastructure. First seen on arstechnica.com Jump to article: arstechnica.com/information-technology/2025/09/new-attack-on-chatgpt-research-agent-pilfers-secrets-from-gmail-inboxes/
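ShadowLeak reportedly hinged on instructions hidden in email markup that the agent reads but a human recipient never sees. A rough, assumption-laden heuristic for flagging invisible text before handing email HTML to an agent might look like the sketch below (the style markers are illustrative, not an exhaustive or Radware-derived list):

```python
from html.parser import HTMLParser

# Crude heuristic: text styled to be invisible (display:none, zero font size,
# white-on-white) is suspicious when the content will be fed to an LLM agent.
HIDDEN_MARKERS = ("display:none", "font-size:0", "color:#ffffff", "color:white", "opacity:0")

class HiddenTextFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self._hidden_depth = 0   # >0 while inside an invisibly styled element
        self.hidden_text = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "").replace(" ", "").lower()
        if any(marker in style for marker in HIDDEN_MARKERS):
            self._hidden_depth += 1
        elif self._hidden_depth:
            self._hidden_depth += 1  # children of a hidden element stay hidden

    def handle_endtag(self, tag):
        if self._hidden_depth:
            self._hidden_depth -= 1

    def handle_data(self, data):
        if self._hidden_depth and data.strip():
            self.hidden_text.append(data.strip())

def find_hidden_instructions(html: str) -> list[str]:
    """Return text fragments that a human viewer of the email would not see."""
    parser = HiddenTextFinder()
    parser.feed(html)
    return parser.hidden_text
```

A production filter would also need to handle CSS classes, external stylesheets, and off-screen positioning; the point is simply that the injected prompt is ordinary text to the agent even when it is invisible to the user.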
-
OpenAI says models are programmed to make stuff up instead of admitting ignorance
Tags: openai. Even a wrong answer is right some of the time. First seen on theregister.com Jump to article: www.theregister.com/2025/09/17/openai_hallucinations_incentives/
-
OpenAI Adds Age Checks, Parental Controls for Minors
New Safeguards Follow Teen Suicides Linked to ChatGPT and Other AI Chatbots. OpenAI is rolling out new safeguards in ChatGPT to protect younger users by adding age estimation tools and, in some cases, requiring ID verification for those claiming to be over 18. The move follows growing scrutiny over the impact of chatbots on teenagers.…
-
OpenAI to predict ages in bid to stop ChatGPT from discussing self harm with kids
The announcement comes weeks after the parents of a teenager who killed himself sued the tech giant for allegedly helping their son draft a suicide note and giving him tips for how to do so most effectively. First seen on therecord.media Jump to article: therecord.media/openai-age-prediction-chatgpt-children-safety
-
OpenAI’s new GPT-5 Codex model takes on Claude Code
Tags: openai. OpenAI is rolling out the GPT-5 Codex model to all Codex instances, including Terminal, IDE extension, and Codex Web (codex.chatgpt.com). First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openais-new-gpt-5-codex-model-takes-on-claude-code/
-
Top AI companies have spent months working with US, UK governments on model safety
OpenAI and Anthropic said they turned over their models to government researchers, who found an array of previously undiscovered vulnerabilities and attack techniques. First seen on cyberscoop.com Jump to article: cyberscoop.com/openai-anthropic-ai-safety-government-research-us-uk/
-
OpenAI reportedly on the hook for $300B Oracle Cloud bill
Tick tock Sam, just fifteen months before your first bill is due First seen on theregister.com Jump to article: www.theregister.com/2025/09/11/openai_reportedly_on_the_hook/
-
OpenAI eats jobs, then offers to help you find a new one at Walmart
Move over LinkedIn, Altman’s crew wants a piece of the action First seen on theregister.com Jump to article: www.theregister.com/2025/09/05/openai_jobs_board/
-
OpenAI targets India with datacentre push
The AI firm is planning to open a one-gigawatt datacentre in India, which could reduce latency, ensure regulatory compliance and give it an edge over hyperscalers First seen on computerweekly.com Jump to article: www.computerweekly.com/news/366630088/OpenAI-targets-India-with-datacentre-push
-
ChatGPT Security: OpenAI Reports Chat Histories to Law Enforcement
Under certain circumstances, ChatGPT users’ chat histories are reviewed by an OpenAI team and reported. First seen on golem.de Jump to article: www.golem.de/news/sicherheit-in-chatgpt-openai-meldet-chatverlaeufe-an-strafverfolgungsbehoerden-2509-199717.html
-
OpenAI releases big upgrade for ChatGPT Codex for agentic coding
OpenAI has announced a big update for Codex, which is the company’s agentic coding tool. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-releases-big-upgrade-for-chatgpt-codex-for-agentic-coding/
-
AI as a Cybercrime Copilot
Tags: ai, business, ciso, cyberattack, cybercrime, dns, group, injection, intelligence, malware, openai, ransomware, RedTeam, sans, strategy, threat, tool. The AI-powered developer tool Claude Code helped a cybercriminal break into networks. CISOs and security decision-makers have long anticipated that cyberattacks would no longer come from humans using AI tools but from AI systems themselves. That fear has now been confirmed by new research findings: Anthropic’s latest Threat Intelligence Report reveals that the AI-powered developer tool Claude…
-
OpenAI is testing “Thinking effort” for ChatGPT
OpenAI is working on a new feature called the Thinking effort picker for ChatGPT. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-is-testing-thinking-effort-for-chatgpt/
-
AI Attacks Autonomously for the First Time
Tags: ai, business, ciso, cyberattack, cybercrime, dns, group, injection, intelligence, malware, openai, ransomware, RedTeam, sans, strategy, threat, tool. The AI-powered developer tool Claude Code helped a cybercriminal break into networks. CISOs and security decision-makers have long anticipated that cyberattacks would no longer come from humans using AI tools but from AI systems themselves. That fear has now been confirmed by new research findings: Anthropic’s latest Threat Intelligence Report reveals that the AI-powered developer tool Claude…
-
OpenAI, Anthropic Swap Safety Reviews
AI Giants Evaluated Each Other’s Newer Models for Safety Risks. OpenAI and Anthropic evaluated each other’s AI models in the summer, testing for concerning behaviors that could indicate misalignment risks. Both companies released their findings simultaneously: no model was severely problematic, but all showed plenty of troubling behavior in testing scenarios. First seen on govinfosecurity.com…
-
First AI-Powered Ransomware “PromptLock” Uses OpenAI gpt-oss-20b for Encryption
PromptLock, a novel ransomware strain discovered by the ESET Research team, marks the first known instance of malware harnessing a local large language model to generate its malicious payload on the victim’s machine. Rather than carrying pre-compiled attack logic, PromptLock ships with hard-coded prompts that instruct a locally hosted OpenAI gpt-oss:20b model, accessed via the…
-
Someone Created the First AI-Powered Ransomware Using OpenAI’s gpt-oss:20b Model
Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock. Written in Golang, the newly identified strain uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts in real time. The open-weight language model was released by OpenAI earlier this month. “PromptLock… First seen on thehackernews.com…
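The mechanism ESET describes is ordinary local-LLM plumbing: hard-coded prompts sent to a local Ollama server, which returns generated code at runtime. A benign sketch of that request shape, assuming a local Ollama instance serving gpt-oss:20b on its default port (nothing here is PromptLock’s actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "gpt-oss:20b") -> bytes:
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return the text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("Write a Lua script that prints the OS name")  # requires Ollama running
```

From a defender’s perspective, the takeaway is that such malware needs a reachable model endpoint, so outbound traffic to local or remote LLM APIs from unexpected processes is itself a detection signal.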
-
Anthropic detects the inevitable: genAI-only attacks, no humans involved
Tags: ai, attack, business, ciso, control, cybercrime, cybersecurity, defense, dns, infrastructure, injection, intelligence, malicious, malware, open-source, openai, RedTeam, threat, tool, warfare. …not find. “There is potentially a lot of this activity we’re not seeing. Anthropic being open about their platform being used for malicious activities is significant, and OpenAI has recently shared the same as well. But will others open up about what is already likely happening?” Brunkard asked. “Or maybe they haven’t shared because they don’t…
-
First AI-Powered Ransomware PromptLock Targets Windows, Linux and macOS
ESET has identified PromptLock, the first AI-powered ransomware, using OpenAI models to generate scripts that target Windows, Linux… First seen on hackread.com Jump to article: hackread.com/first-ai-promptlock-ransomware-windows-linux-macos/

