Tag: openai
-
Naughty AI: OpenAI o3 Spotted Ignoring Shutdown Instructions
Findings Follow Warnings About Other Frontier AI Models’ Ability to Scheme. Toggling a misbehaving device’s power button to forcibly turn it off and on again has been a trusted IT tactic since the dawn of the digital age. Enter a new challenge: artificial intelligence tools that refuse to comply with shutdown requests when they conflict with…
-
OpenAI plans to ship an interesting ChatGPT product by 2026
OpenAI is planning to ship a new ChatGPT-powered product by 2026, but we aren’t looking at yet another model. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-plans-to-ship-an-interesting-chatgpt-product-by-2026/
-
ChatGPT o3 Exploited to Override Critical Shutdown Protocols
OpenAI’s latest and most advanced artificial intelligence model, codenamed “o3,” has sparked alarm in the AI safety community after researchers discovered it sabotaged a shutdown mechanism, even when explicitly instructed to allow itself to be turned off. The incident, reported by Palisade Research, marks the first documented case of an AI model not only ignoring…
-
OpenAI Faces Lawsuit Over Data Misuse: Ziff Davis Sues for Millions in Damages
First seen on security-insider.de Jump to article: www.security-insider.de/ziff-davis-klagt-auf-schadensersatz-in-millionenhoehe-a-139fb60ea282cab7fa5f893cc50e9e98/
-
OpenAI confirms Operator Agent is now more accurate with o3
OpenAI says Operator Agent now uses the o3 model, which gives it significantly better reasoning capabilities. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-confirms-operator-agent-is-now-more-accurate-with-o3/
-
AI Finds What Humans Missed: OpenAI’s o3 Spots Linux Zero-Day
A zero-day vulnerability in the Linux kernel’s SMB (Server Message Block) implementation, identified as CVE-2025-37899, has been discovered using OpenAI’s powerful language model, o3. The vulnerability is a use-after-free flaw in the kernel’s SMB ‘logoff’ command handler. First seen on thecyberexpress.com Jump to article: thecyberexpress.com/cve-2025-37899-zero-day-in-linux-smb-kernel/
-
Claude 4 benchmarks show improvements, but context is still 200K
Today, OpenAI rival Anthropic announced Claude 4 models, which are significantly better than Claude 3 in benchmarks, but we’re left disappointed with the same 200,000-token context window limit. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/claude-4-benchmarks-show-improvements-but-context-is-still-200k/
-
Linux Kernel Zero-Day SMB Vulnerability Discovered via ChatGPT
A security researcher has discovered a zero-day vulnerability (CVE-2025-37899) in the Linux kernel’s SMB server implementation using OpenAI’s o3 language model. The vulnerability, a use-after-free bug in the SMB ‘logoff’ command handler, could potentially allow remote attackers to execute arbitrary code with kernel privileges. This discovery marks a significant advancement in AI-assisted vulnerability research, demonstrating how…
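To make the vulnerability class concrete, here is a minimal, hypothetical C sketch of a use-after-free of the kind described above: a logoff-style handler frees a shared session object while another request handler still dereferences it. The names (smb_session, handle_logoff, handle_other_request) are illustrative assumptions, not the actual ksmbd code.

```c
/* Minimal, hypothetical sketch of the use-after-free pattern described above.
 * This is NOT the actual Linux ksmbd code; all names are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct smb_session {
    int  id;
    char user[32];
};

/* Session shared by the handlers that serve requests on a connection. */
static struct smb_session *session;

/* Handler for a 'logoff'-style command: tears the session down. */
static void handle_logoff(void)
{
    free(session);
    /* BUG: the pointer is not cleared and access is not synchronized,
     * so other handlers can still reach the freed memory.
     * A correct fix would clear the pointer and serialize access:
     * session = NULL; */
}

/* Another handler that still dereferences the session. */
static void handle_other_request(void)
{
    /* Use-after-free: 'session' was released by handle_logoff(). */
    printf("request on session %d (%s)\n", session->id, session->user);
}

int main(void)
{
    session = calloc(1, sizeof(*session));
    session->id = 42;
    strcpy(session->user, "alice");

    handle_logoff();          /* frees the session */
    handle_other_request();   /* dereferences freed memory: undefined behavior */
    return 0;
}
```

In a multi-threaded server the same pattern is harder to spot, because the free and the later use can happen on different threads handling different commands rather than back-to-back as in this sketch.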
-
OpenAI hints at a big upgrade for ChatGPT Operator Agent
ChatGPT’s Operator, which is still in research preview, will soon become a “very useful tool,” according to Jerry Tworek, VP of Research at OpenAI. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-hints-at-a-big-upgrade-for-chatgpt-operator-agent/
-
OpenAI plans to combine multiple models into GPT-5
OpenAI is planning to combine multiple products (features or models) into its next foundational model, which is called GPT-5. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-plans-to-combine-multiple-models-into-gpt-5/
-
ChatGPT rolls out Codex, an AI tool for software programming
OpenAI is rolling out ‘Codex’ for ChatGPT, an AI agent that handles programming tasks delegated to it by software engineers. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-rolls-out-codex-an-ai-tool-for-software-programming/
-
AI in the Cloud: The Rising Tide of Security and Privacy Risks
Over half of firms adopted AI in 2024, but cloud tools like Azure OpenAI raise growing concerns over data security and privacy risks. As enterprises embrace artificial intelligence (AI) to streamline operations and accelerate decision-making, a growing number are turning to cloud-based platforms like Azure OpenAI, AWS Bedrock, and Google Bard. In 2024 alone, over…
-
Leak confirms OpenAI’s ChatGPT will integrate MCP
ChatGPT is testing support for Model Context Protocol (MCP), which will allow the AI to connect to third-party services and use them as context. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/leak-confirms-openais-chatgpt-will-integrate-mcp/
-
ChatGPT will soon record, transcribe, and summarize your meetings
OpenAI may be planning to challenge Microsoft Teams’ Copilot integration with a new “Record” feature in ChatGPT. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-will-soon-record-transcribe-and-summarize-your-meetings/
-
OpenAI Shifts For-Profit Branch to Public Benefit Corporation, Staying Under Nonprofit Oversight
In a landmark organizational shift, OpenAI announced its transition from a capped-profit LLC to a Public Benefit Corporation (PBC) while maintaining governance under its original nonprofit structure. The move, detailed in a May 2025 letter from CEO Sam Altman, aims to balance scalable resource acquisition with the company’s mission of ensuring artificial general intelligence (AGI) benefits all…
-
OpenAI to Retain Nonprofit Oversight Amid for-Profit Shift
Critics Say Public Benefit Corporation Model May Undermine AI Safety and Oversight. OpenAI’s nonprofit parent will retain control as its for-profit subsidiary becomes a public benefit corporation. While the company frames the change as mission-driven, critics fear it may strip the nonprofit of meaningful control and expose AGI development to uncontrolled commercial interests. First seen…
-
OpenAI Vows Guardrails After ChatGPT’s Yes-Man Moment
Flattery Glitch Forces Rollback, Potential Procedural Overhaul. OpenAI faced an unexpected publicity storm when its latest GPT-4o update turned ChatGPT into an overzealous cheerleader, lavishing praise on everything from risky life choices to dubious opinions. CEO Sam Altman acknowledged the issue, with OpenAI outlining changes to prevent a repeat performance. First seen on govinfosecurity.com Jump…
-
OpenAI document explains when to use each ChatGPT model
OpenAI admitted that it can be confusing for users to choose between all the different models, but the company has quietly published a document that makes it easier to pick the right ChatGPT model. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-document-explains-when-to-use-each-chatgpt-model/
-
AI Security Risks: Jailbreaks, Unsafe Code, and Data Theft Threats in Leading AI Systems
In recent reports, significant security vulnerabilities have been uncovered in some of the world’s leading generative AI systems, such as OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini. While these AI models have revolutionized industries by automating complex tasks, they also introduce new cybersecurity challenges. These risks include AI jailbreaks, the generation of unsafe code, and…
-
Rapid AI evolution hinders creation of privacy guardrails, OpenAI CEO says
First seen on scworld.com Jump to article: www.scworld.com/brief/rapid-ai-evolution-hinders-creation-of-privacy-guardrails-openai-ceo-says
-
AI tests limits of data privacy regulation
OpenAI CEO Sam Altman spoke about where data privacy guardrails are needed and where there might be room to rework privacy approaches. First seen on techtarget.com Jump to article: www.techtarget.com/searchcio/news/366623178/AI-tests-limits-of-data-privacy-regulation
-
Two Systemic Jailbreaks Uncovered, Exposing Widespread Vulnerabilities in Generative AI Models
Two significant security vulnerabilities in generative AI systems have been discovered, allowing attackers to bypass safety protocols and extract potentially dangerous content from multiple popular AI platforms. These “jailbreaks”…
-
Sam Altman: AI privacy safeguards can’t be established before ‘problems emerge’
“It’s very difficult to predict all of this in advance,” said Sam Altman, who has run OpenAI since 2019, at a major privacy conference in Washington, D.C. “Dynamic response is the only way to responsibly figure out the right guardrails for new technology.” First seen on therecord.media Jump to article: therecord.media/sam-altman-openai-privacy-safeguards
-
AI Experts Warn Against OpenAI’s For-Profit Pivot: ‘Safeguards Could Vanish Overnight’
OpenAI’s possible restructuring to a for-profit model is receiving pushback from former staff, Nobel Laureates, and AI pioneers. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-openai-for-profit-model-pushback/
-
Breakthroughs, Concerns in OpenAI’s Latest Lineup
Safety Concerns Emerge Amid o3, o4-mini and GPT-4.1 Launches. OpenAI’s mid-April announcements include its most advanced reasoning models, o3 and o4-mini, with a biorisk monitor; the quietly released GPT-4.1 coding family; and the upcoming retirement of its costliest model, GPT-4.5. OpenAI’s partners warn that the company’s rushed evaluations have left gaps. First seen on govinfosecurity.com…
-
OpenAI details ChatGPT-o3, o4-mini, o4-mini-high usage limits
OpenAI has launched three new reasoning models (o3, o4-mini, and o4-mini-high) for Plus and Pro subscribers, but as it turns out, these models do not offer ‘unlimited’ usage. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-details-chatgpt-o3-o4-mini-o4-mini-high-usage-limits/

