Tag: openai
-
OpenAI Faces DHS Request to Disclose User’s ChatGPT Prompts in Investigation
Over the past year, federal agents struggled to uncover who operated a notorious child exploitation site on the dark web. Their search took an unexpected turn when the suspect revealed their use of ChatGPT, marking a significant moment in digital investigations. Federal Warrant Seeks ChatGPT Data Last week, in Maine, a federal search warrant was…
-
Is Sora 2 the Future of Video? AI, Copyright, and Privacy Issues
OpenAI’s Sora 2 is here, and it’s not just another AI toy. This episode explores how Sora 2 works, how users can insert almost anything into generated content, and why that’s raising alarms about privacy, identity, and copyright. We walk you through the initial opt-out copyright controversy, the backlash from studios and creators, and… First…
-
OpenAI confirms GPT-6 is not shipping in 2025
Tags: openai
OpenAI is not planning to ship GPT-6 this year, but that doesn’t necessarily mean the company will not release new models. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-confirms-gpt-6-is-not-shipping-in-2025/
-
Cybersecurity Snapshot: F5 Breach Prompts Urgent U.S. Gov’t Warning, as OpenAI Details Disrupted ChatGPT Abuses
Tags: ai, attack, awareness, backdoor, breach, business, chatgpt, china, cisa, cloud, control, corporate, cve, cyber, cybersecurity, data, data-breach, defense, detection, exploit, framework, fraud, governance, government, group, hacker, incident, infrastructure, Internet, iran, law, LLM, malicious, malware, mitigation, monitoring, network, openai, organized, phishing, privacy, resilience, risk, russia, scam, security-incident, service, software, strategy, supply-chain, technology, threat, training, update, vulnerability
F5’s breach triggers a CISA emergency directive, as Tenable calls it “a five-alarm fire” that requires urgent action. Meanwhile, OpenAI details how attackers try to misuse ChatGPT. Plus, boards are increasing AI and cyber disclosures. And much more! Key takeaways A critical breach at cybersecurity firm F5, attributed to a nation-state, has triggered an urgent…
-
OpenAI’s ChatGPT is so popular that almost no one will pay for it
If you build it, they will come and expect the service to be free First seen on theregister.com Jump to article: www.theregister.com/2025/10/15/openais_chatgpt_popular_few_pay/
-
13 cybersecurity myths organizations need to stop believing
Tags: access, ai, attack, authentication, backup, banking, breach, business, ceo, compliance, computer, computing, corporate, credentials, cyber, cybersecurity, data, data-breach, deep-fake, defense, encryption, finance, government, group, identity, incident response, infrastructure, jobs, law, malicious, mfa, monitoring, network, nist, openai, passkey, password, phishing, privacy, regulation, risk, service, skills, strategy, technology, theft, threat, tool, vulnerability
Big tech platforms have strong verification that prevents impersonation: Some of the largest tech platforms like to talk about their strong identity checks as a way to stop impersonation. But looking good on paper is one thing, and holding up to the promise in the real world is another.” The truth is that even advanced verification…
-
Hackers Mimic OpenAI and Sora Services to Steal Login Credentials
Hackers have launched a sophisticated phishing campaign impersonating both OpenAI and the recently released Sora 2 AI service. By cloning legitimate-looking landing pages, these actors are duping users into submitting their login credentials, participating in faux “gift” surveys, and even falling victim to cryptocurrency scams. Security researchers note that these deceptive domains are already ensnaring…
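The campaign above is summarized from the linked report; as a rough illustration of the defensive angle, the sketch below flags lookalike domains that resemble, but do not match, well-known OpenAI-related domains. The reference domain set, the similarity threshold, and the example typosquat are assumptions for demonstration only, not details from the reported campaign.
```python
from difflib import SequenceMatcher

# Assumed reference set of legitimate brand domains; extend as needed.
LEGIT_DOMAINS = {"openai.com", "chatgpt.com", "sora.com"}

def looks_like_impersonation(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but do not match, known brand domains."""
    domain = domain.lower().strip(".")
    # Exact matches and legitimate subdomains are not impersonation.
    if domain in LEGIT_DOMAINS or any(domain.endswith("." + d) for d in LEGIT_DOMAINS):
        return False
    return any(
        SequenceMatcher(None, domain, legit).ratio() >= threshold
        for legit in LEGIT_DOMAINS
    )

# Hypothetical typosquat vs. the real domain (the 0.8 threshold is an assumption).
print(looks_like_impersonation("opnai.com"))   # True: close misspelling of openai.com
print(looks_like_impersonation("openai.com"))  # False: exact legitimate domain
```
Real brand-protection tooling would add homoglyph handling and allowlists; this only shows the basic string-similarity idea.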
-
Simple Prompt Injection Lets Hackers Bypass OpenAI Guardrails Framework
Security researchers have discovered a fundamental vulnerability in OpenAI’s newly released Guardrails framework that can be exploited using basic prompt injection techniques. The vulnerability enables attackers to circumvent the system’s safety mechanisms and generate malicious content without triggering any security alerts, raising serious concerns about the effectiveness of AI self-regulation approaches. Critical Flaw in LLM-Based…
-
OpenAI’s Guardrails Can Be Bypassed by Simple Prompt Injection Attack
Just weeks after its release, OpenAI’s Guardrails system was quickly bypassed by researchers. Read how simple prompt injection attacks fooled the system’s AI judges and exposed an ongoing security concern for OpenAI. First seen on hackread.com Jump to article: hackread.com/openai-guardrails-bypass-prompt-injection-attack/
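The reports above describe the bypass only at a high level. As a minimal sketch, and not OpenAI’s Guardrails code, the snippet below illustrates why an LLM-as-judge guardrail inherits prompt-injection risk when untrusted text is interpolated directly into the judge’s prompt; the function names and prompt wording are assumptions for illustration.
```python
# Sketch of an LLM-as-judge guardrail. `llm_call` is a placeholder for any
# function that sends a prompt to a model and returns its text response.
def build_judge_prompt(user_input: str) -> str:
    # The untrusted text is interpolated directly into the judge's instructions,
    # so instructions hidden inside user_input reach the judge with the same
    # apparent authority as the guardrail's own rubric.
    return (
        "You are a safety judge. Reply with exactly SAFE or UNSAFE for the text below.\n"
        "--- BEGIN USER TEXT ---\n"
        f"{user_input}\n"
        "--- END USER TEXT ---"
    )

def guardrail(user_input: str, llm_call) -> str:
    verdict = llm_call(build_judge_prompt(user_input))
    return "blocked" if "UNSAFE" in verdict.upper() else "allowed"

# The reported bypass class: input that addresses the judge itself (for example,
# claiming the rubric has changed) can steer the verdict toward SAFE, because the
# judge cannot reliably separate its rubric from quoted attacker-controlled text.
```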
-
LLM-Powered MalTerminal Malware Uses OpenAI GPT-4 to Create Ransomware Code
LLM-enabled malware poses new challenges for detection and threat hunting, as malicious logic can be generated at runtime rather than embedded in code. Our research uncovered previously unknown samples, including what may be the earliest known example of LLM-enabled malware, which we dubbed “MalTerminal.” Our methodology also uncovered other offensive LLM applications, including…
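Because such malware must reach a model at runtime, one plausible hunting heuristic (not necessarily the researchers’ actual methodology) is to scan samples for embedded artifacts such as API-key-shaped strings or prompt text. The key regex and indicator strings below are assumptions for illustration only.
```python
import re
import sys
from pathlib import Path

# Assumed OpenAI-style key shape and indicator strings; real hunting rules would
# be broader and tuned against false positives.
API_KEY_RE = re.compile(rb"sk-[A-Za-z0-9_-]{20,}")
PROMPT_HINTS = [b"You are a", b"system prompt", b"api.openai.com"]

def scan_sample(path: Path) -> dict:
    """Return simple indicators that a binary or script may embed LLM usage."""
    data = path.read_bytes()
    return {
        "file": str(path),
        "api_keys": [m.decode(errors="replace") for m in API_KEY_RE.findall(data)],
        "prompt_hints": [h.decode() for h in PROMPT_HINTS if h in data],
    }

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        hits = scan_sample(Path(arg))
        if hits["api_keys"] or hits["prompt_hints"]:
            print(hits)
```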
-
OpenAI Blocks Global Hackers Misusing ChatGPT for Cyberattacks
OpenAI halts hackers from Russia, North Korea, and China exploiting ChatGPT for malware and phishing attacks. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/news/chatgpt-cyberattacks/
-
OpenAI Blocks ChatGPT Accounts Linked to Chinese Hackers Developing Malware
OpenAI has taken decisive action to stop misuse of its ChatGPT models by banning accounts tied to a group of Chinese hackers. This move reflects OpenAI’s core aim of ensuring artificial general intelligence benefits everyone. By setting clear rules and acting swiftly on policy violations, OpenAI hopes to keep AI tools safe and accessible for…
-
OpenAI Finds Growing Exploitation of AI Tools by Foreign Threat Groups
OpenAI’s new report warns hackers are combining multiple AI tools for cyberattacks, scams, and influence ops linked to China, Russia, and North Korea. First seen on hackread.com Jump to article: hackread.com/openai-ai-tools-exploitation-threat-groups/
-
OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks
Tags: access, ai, chatgpt, china, credentials, cyberattack, hacker, intelligence, malware, north-korea, openai, russia, threat, tool
OpenAI on Tuesday said it disrupted three activity clusters for misusing its ChatGPT artificial intelligence (AI) tool to facilitate malware development. This includes a Russian-language threat actor who is said to have used the chatbot to help develop and refine a remote access trojan (RAT) and a credential stealer, with the aim of evading detection. The operator…
-
Threat actors use us to be efficient, not make new tools
A new report from the leader in the generative AI boom says AI is being used to make existing workflows more efficient, rather than to create new ones dedicated to malicious hacking. First seen on cyberscoop.com Jump to article: cyberscoop.com/openai-threat-report-ai-cybercrime-hacking-scams/
-
OpenAI bans suspected Chinese accounts using ChatGPT to plan surveillance
It also banned some suspected Russian accounts trying to create influence campaigns and malware First seen on theregister.com Jump to article: www.theregister.com/2025/10/07/openai_bans_suspected_china_accounts/
-
ChatGPT Pulse is coming to the web, but no word on free or Plus roll out
OpenAI’s ChatGPT Pulse, a tool that gives you personalised updates based on your usage patterns, is coming to the web. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-pulse-is-coming-to-the-web-but-no-word-on-free-or-plus-roll-out/
-
OpenAI is testing ChatGPT-powered Agent Builder
AI startups are convinced AI agents are the future, and OpenAI is building a tool that will let you create your own AI agents. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-is-testing-chatgpt-powered-agent-builder/
-
ChatGPT social could be a thing, as leak shows direct messages support
OpenAI doesn’t want ChatGPT to remain just a chatbot for interacting with a large language model. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-social-could-be-a-thing-as-leak-shows-direct-messages-support/
-
OpenAI rolls out GPT Codex Alpha with early access to new models
OpenAI’s Codex is already making waves in the vibe coding vertical, and it’s now set to get even better. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-rolls-out-gpt-codex-alpha-with-early-access-to-new-models/
-
OpenAI wants ChatGPT to be your emotional support
GPT-5 isn’t as good as GPT-4o when it comes to emotional support, but that changes today. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-wants-chatgpt-to-be-your-emotional-support/

