Tag: openai
-
Hackers Mimic OpenAI and Sora Services to Steal Login Credentials
Hackers have launched a sophisticated phishing campaign impersonating both OpenAI and the recently released Sora 2 AI service. By cloning legitimate-looking landing pages, these actors are duping users into submitting their login credentials, participating in faux “gift” surveys, and even falling victim to cryptocurrency scams. Security researchers note that these deceptive domains are already ensnaring…
-
Simple Prompt Injection Lets Hackers Bypass OpenAI Guardrails Framework
Security researchers have discovered a fundamental vulnerability in OpenAI’s newly released Guardrails framework that can be exploited using basic prompt injection techniques. The vulnerability enables attackers to circumvent the system’s safety mechanisms and generate malicious content without triggering any security alerts, raising serious concerns about the effectiveness of AI self-regulation approaches. Critical Flaw in LLM-Based…
-
OpenAI’s Guardrails Can Be Bypassed by Simple Prompt Injection Attack
Just weeks after its release, OpenAI’s Guardrails system was quickly bypassed by researchers. Read how simple prompt injection attacks fooled the system’s AI judges and exposed an ongoing security concern for OpenAI. First seen on hackread.com Jump to article: hackread.com/openai-guardrails-bypass-prompt-injection-attack/
-
LLM-Powered MalTerminal Malware Uses OpenAI GPT-4 to Create Ransomware Code
LLM-enabled malware poses new challenges for detection and threat hunting as malicious logic can be generated at runtime rather than embedded in code. Our research discovered hitherto unknown samples, and what may be the earliest example known to date of an LLM-enabled malware we dubbed “MalTerminal.” Our methodology also uncovered other offensive LLM applications, including…
-
OpenAI Blocks Global Hackers Misusing ChatGPT for Cyberattacks
OpenAI halts hackers from Russia, North Korea, and China exploiting ChatGPT for malware and phishing attacks. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/news/chatgpt-cyberattacks/
-
OpenAI Blocks ChatGPT Accounts Linked to Chinese Hackers Developing Malware
OpenAI has taken decisive action to stop misuse of its ChatGPT models by banning accounts tied to a group of Chinese hackers. This move reflects OpenAI’s core aim of ensuring artificial general intelligence benefits everyone. By setting clear rules and acting swiftly on policy violations, OpenAI hopes to keep AI tools safe and accessible for…
-
OpenAI Finds Growing Exploitation of AI Tools by Foreign Threat Groups
OpenAI’s new report warns hackers are combining multiple AI tools for cyberattacks, scams, and influence ops linked to China, Russia, and North Korea. First seen on hackread.com Jump to article: hackread.com/openai-ai-tools-exploitation-threat-groups/
-
OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks
Tags: access, ai, chatgpt, china, credentials, cyberattack, hacker, intelligence, malware, north-korea, openai, russia, threat, tool
OpenAI on Tuesday said it disrupted three activity clusters for misusing its ChatGPT artificial intelligence (AI) tool to facilitate malware development. This includes a Russian-language threat actor who is said to have used the chatbot to help develop and refine a remote access trojan (RAT) and a credential stealer designed to evade detection. The operator…
-
Threat actors use us to be efficient, not make new tools
A new report from the leader in the generative AI boom says AI is being used in existing workflows, instead of to create new ones dedicated to malicious hacking. First seen on cyberscoop.com Jump to article: cyberscoop.com/openai-threat-report-ai-cybercrime-hacking-scams/
-
OpenAI bans suspected Chinese accounts using ChatGPT to plan surveillance
It also banned some suspected Russian accounts trying to create influence campaigns and malware First seen on theregister.com Jump to article: www.theregister.com/2025/10/07/openai_bans_suspected_china_accounts/
-
ChatGPT Pulse is coming to the web, but no word on free or Plus roll out
OpenAI’s ChatGPT Pulse, which is a tool that gives you personalised updates based on usage patterns, is coming to the web. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-pulse-is-coming-to-the-web-but-no-word-on-free-or-plus-roll-out/
-
OpenAI is testing ChatGPT-powered Agent Builder
AI startups are convinced AI agents are the future, and OpenAI is building a tool that will allow you to create your own AI Agents. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-is-testing-chatgpt-powered-agent-builder/
-
ChatGPT social could be a thing, as leak shows direct messages support
OpenAI doesn’t want ChatGPT to remain just a chatbot for interacting with a large language model. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-social-could-be-a-thing-as-leak-shows-direct-messages-support/
-
OpenAI rolls out GPT Codex Alpha with early access to new models
OpenAI’s Codex is already making waves in the vibe coding vertical, and it’s now set to get even better. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-rolls-out-gpt-codex-alpha-with-early-access-to-new-models/
-
OpenAI wants ChatGPT to be your emotional support
GPT-5 isn’t as good as GPT-4o when it comes to emotional support, but that changes today. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-wants-chatgpt-to-be-your-emotional-support/
-
OpenAI prepares $4 ChatGPT Go for several new countries
OpenAI has been testing a new, cheaper ChatGPT plan called “Go,” and it’s now rolling out to more regions. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-prepares-4-chatgpt-go-for-several-new-countries/
-
USENIX 2025: PEPR ’25 Harnessing LLMs for Scalable Data Minimization
Creators, Authors and Presenters: Charles de Bourcy, OpenAI Our thanks to USENIX for publishing their presenters’ outstanding conference content on the organization’s YouTube channel. Permalink First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/10/usenix-2025-pepr-25-harnessing-llms-for-scalable-data-minimization/
-
ChatGPT tests free trial for paid plans, rolls out cheaper Go in more regions
OpenAI is offering some users a free trial for ChatGPT Plus, which costs $20. In addition, $4 GPT Go is now available in Indonesia. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-tests-free-trial-for-paid-plans-rolls-out-cheaper-go-in-more-regions/
-
OpenAI is routing GPT-4o to safety models when it detects harmful activities
Tags: openai
Over the weekend, some people noticed that GPT-4o is routing requests to an unknown model out of nowhere. Turns out it’s a “safety” feature. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-is-routing-gpt-4o-to-safety-models-when-it-detects-harmful-activities/
-
Lyin’ and Cheatin’, AI Models Playing a Game
OpenAI, Apollo Research Find Models Hide Misalignment; Training Cuts Deception. Frontier artificial intelligence models are learning to hide their true intentions to pursue hidden agendas, said OpenAI and Apollo Research. Researchers say the risk of deception needs to be tackled now, especially as AI systems take on more complex, real-world responsibilities. First seen on govinfosecurity.com…
-
OpenAI Fixes Gmail Data Flaw in ChatGPT Agent
Attackers Could Siphon Gmail Data Unnoticed From Users Who Let AI Tool Access Email. OpenAI patched a flaw in ChatGPT’s Deep Research agent that could have enabled hackers to extract Gmail data without the user’s knowledge. Radware researchers said the flaw affected subscribers who authorized the artificial intelligence tool to access their email accounts. First…
-
ShadowLeak Exploit Exposed Gmail Data Through ChatGPT Agent
Radware researchers revealed a service-side flaw in OpenAI’s ChatGPT. The ShadowLeak attack had used indirect prompt injection to bypass defences and leak sensitive data, but the issue has since been fixed. First seen on hackread.com Jump to article: hackread.com/shadowleak-exploit-exposed-gmail-data-chatgpt-agent/

