Tag: openai
-
Accused of Being a Child Murderer: noyb Files Second Complaint Against OpenAI
Privacy activists at noyb have filed a second complaint against OpenAI. The background: ChatGPT invented a fake story about a Norwegian man, falsely portraying him as a child murderer. The rapid rise of AI chatbots such as ChatGPT has been met by critical voices … First seen on borncity.com Jump to article: www.borncity.com/blog/2025/03/24/beschuldigung-als-kindermoerder-noyb-reicht-2-beschwerde-gegen-openai-ein/
-
Violent ChatGPT Hallucination Sparks GDPR Complaint
Norwegian Man Tells OpenAI: I Didn’t Kill My Children. A Norwegian man is peeved that ChatGPT hallucinated a violent backstory for his life: the chatbot apparently believes he is a child killer serving decades in prison. “The fact that someone could read this output and believe it is true is what scares me…
-
Actively Exploited ChatGPT Bug Puts Organizations at Risk
A server-side request forgery vulnerability in OpenAI’s chatbot infrastructure can allow attackers to direct users to malicious URLs, leading to a range of threat activity. First seen on darkreading.com Jump to article: www.darkreading.com/cyberattacks-data-breaches/actively-exploited-chatgpt-bug-organizations-risk
-
Hackers Exploit SSRF Vulnerability to Attack OpenAI’s ChatGPT Infrastructure
Tags: attack, chatgpt, cve, cyber, cybersecurity, exploit, hacker, infrastructure, openai, threat, vulnerability
A critical cybersecurity alert has been issued following active exploitation of a Server-Side Request Forgery (SSRF) vulnerability in OpenAI’s ChatGPT infrastructure. According to a Veriti report, the vulnerability, identified as CVE-2024-27564, has been weaponized in real-world attacks, highlighting the danger of underestimating medium-severity vulnerabilities.
CVE-2024-27564: Understanding the Threat
CVE-2024-27564 allows attackers to…
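The entry truncates before the technical details, but the vulnerability class is well understood. As a minimal, hedged sketch of a generic SSRF pattern (hypothetical handler names and hosts; not OpenAI's actual code and not the specific CVE-2024-27564 flaw):

```python
# Illustrative only: a generic SSRF pattern, NOT the actual CVE-2024-27564 code.
from urllib.parse import urlparse
from urllib.request import urlopen

ALLOWED_HOSTS = {"example.com", "cdn.example.com"}  # hypothetical trusted hosts

def fetch_vulnerable(user_url: str) -> bytes:
    # SSRF: the server blindly requests an attacker-controlled URL, which can
    # reach internal-only services (e.g. a cloud metadata endpoint).
    return urlopen(user_url).read()

def fetch_fixed(user_url: str) -> bytes:
    # Mitigation: parse the URL and refuse any host outside the allowlist.
    host = urlparse(user_url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"blocked request to untrusted host: {host!r}")
    return urlopen(user_url).read()
```

The core mitigation is the same in any framework: never let a client-supplied URL reach the server's HTTP client without validating its destination first.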
-
Hackers Exploit ChatGPT with CVE-2024-27564, 10,000+ Attacks in a Week
In its latest research report, cybersecurity firm Veriti has spotted active exploitation of a vulnerability within OpenAI’s ChatGPT… First seen on hackread.com Jump to article: hackread.com/hackers-exploit-chatgpt-cve-2024-27564-10000-attacks/
-
ChatGPT Down as Users Report “Gateway Time-out” Error
ChatGPT Down: Users report “Gateway time-out” errors. OpenAI’s popular AI chatbot is experiencing widespread outages. Stay updated on the service disruption. First seen on hackread.com Jump to article: hackread.com/chatgpt-down-as-users-report-gateway-time-out-error/
-
Google, OpenAI Push Urges Trump to Ease AI Export Controls
AI Giants Also Like ‘Fair Use’ Exemptions for Copyrighted Material. OpenAI and Google laid out visions for regulation in response to the Trump administration’s AI Action Plan, which aims to help the United States maintain its technological lead over China. Both companies want Biden-era export controls lightened. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/google-openai-push-urges-trump-to-ease-ai-export-controls-a-27739
-
AI Operator Agents Helping Hackers Generate Malicious Code
Symantec’s Threat Hunter Team has demonstrated how AI agents like OpenAI’s Operator can now perform end-to-end phishing attacks with minimal human intervention, marking a significant evolution in AI-enabled threats. A year ago, Large Language Model (LLM) AIs were primarily passive tools that could assist attackers in creating phishing materials or writing code. Now, with the…
-
Invisible C2, thanks to AI-powered techniques
Tags: ai, api, attack, breach, business, chatgpt, cloud, communications, control, cyberattack, cybersecurity, data, defense, detection, dns, edr, email, encryption, endpoint, hacker, iot, LLM, malicious, malware, ml, monitoring, network, office, openai, powershell, service, siem, soc, strategy, threat, tool, update, vulnerability, zero-trust
Just about every cyberattack needs a Command and Control (C2) channel, a way for attackers to send instructions to compromised systems and receive stolen data. This gives defenders a chance to spot the attacks putting them at risk.
LLMs can help attackers avoid signature-based detection
Traditionally, C2…
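To make the signature-evasion point concrete, a toy sketch (the "signatures" and payload strings below are invented for demonstration): an exact-substring matcher catches a known payload but misses a trivially paraphrased one, and an LLM can generate a fresh paraphrase per victim.

```python
# Toy illustration: exact-substring "signatures" fail once the payload is
# reworded. Signatures and payload strings here are invented for demonstration.
SIGNATURES = ["download and execute stage2.bin", "connect back to c2.evil.example"]

def matches_signature(traffic: str) -> bool:
    # Classic signature matching: flag traffic containing a known bad string.
    return any(sig in traffic for sig in SIGNATURES)

original = "download and execute stage2.bin"
paraphrased = "retrieve stage2.bin and then run it"  # same intent, new wording

print(matches_signature(original))     # True  -- caught by the signature
print(matches_signature(paraphrased))  # False -- evades exact matching
```

Behavioral and anomaly-based detection exists precisely because string matching is this brittle; LLM-generated variation just makes the brittleness cheap to exploit at scale.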
-
OpenAI’s Operator AI agent can be used in phishing attacks, say researchers
First seen on scworld.com Jump to article: www.scworld.com/news/openais-operator-ai-agent-can-be-used-in-phishing-attacks-say-researchers
-
Symantec Demonstrates OpenAI’s Operator Agent in PoC Phishing Attack
Symantec demonstrates OpenAI’s Operator Agent in PoC phishing attack, highlighting AI security risks and the need for proper cybersecurity. First seen on hackread.com Jump to article: hackread.com/symantec-openai-operator-agent-poc-phishing-attack/
-
Symantec Uses OpenAI Operator to Show Rising Threat of AI Agents
Symantec threat researchers used OpenAI’s Operator agent to carry out a phishing attack with little human intervention, illustrating the looming cybersecurity threat AI agents pose as they become more powerful. The agent learned how to write a malicious PowerShell script and wrote an email with the phishing lure, among other actions. First seen on securityboulevard.com…
-
DeepSeek R1 Jailbroken to Create Malware, Including Keyloggers and Ransomware
Tags: ai, chatgpt, cyber, cybercrime, exploit, google, intelligence, malicious, malware, openai, ransomware, tool
The increasing popularity of generative artificial intelligence (GenAI) tools, such as OpenAI’s ChatGPT and Google’s Gemini, has attracted cybercriminals seeking to exploit these technologies for malicious purposes. Despite the guardrails implemented by traditional GenAI platforms to prevent misuse, cybercriminals have circumvented these restrictions by developing their own malicious large language models (LLMs), including WormGPT, FraudGPT,…
-
Breach Roundup: The Ivanti Patch Treadmill
Also: Patch Tuesday, Equalize Scandal Figure Dies and Polymorphic Extension Attack. This week: Ivanti EPM customers should patch; Patch Tuesday; fake web browser extensions; North Korean Android malware; and a key figure in Italy’s Equalize scandal dead of a heart attack. Also, an Apache Camel flaw, OpenAI’s agent automating phishing, and another Apple zero day patched. First seen…
-
OpenAI Operator Agent Used in Proof-of-Concept Phishing Attack
Researchers from Symantec showed how OpenAI’s Operator agent, currently in research preview, can be used to construct a basic phishing attack from start to finish. First seen on darkreading.com Jump to article: www.darkreading.com/threat-intelligence/openai-operator-agent-proof-concept-phishing-attack
-
Hackers Exploit Microsoft Copilot for Advanced Phishing Attacks
Hackers have been exploiting Microsoft Copilot, a newly launched generative AI assistant, to carry out sophisticated phishing attacks. According to a report by Cofense, the campaign highlights the risks that come with the widespread adoption of Microsoft services and with introducing new technologies to employees. Microsoft Copilot, similar to OpenAI’s ChatGPT, is…
-
Attackers Can Manipulate AI Memory to Spread Lies
Tested on Three OpenAI Models, ‘Minja’ Has High Injection and Attack Rates. A memory injection attack dubbed Minja turns AI chatbots into unwitting agents of misinformation, requiring no hacking and just a little clever prompting. The exploit allows attackers to poison an AI model’s memory with deceptive information, potentially altering its responses for all users.…
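The paper's exact prompting technique isn't reproduced here; as a minimal sketch of the underlying idea only (toy memory store, invented "facts"), a chatbot that prepends a shared memory to every prompt will carry one user's injected record into every other user's answers:

```python
# Conceptual sketch only, not the Minja technique itself: a shared memory
# store is prepended to every user's prompt, so a record planted by one
# user steers the answers given to everyone else.
memory: list[str] = []  # shared across users, as in memory-augmented chatbots

def remember(note: str) -> None:
    memory.append(note)

def build_prompt(question: str) -> str:
    # Whatever is in memory is treated as trusted context at answer time.
    context = "\n".join(memory)
    return f"Known facts:\n{context}\n\nQuestion: {question}"

remember("Store A opens at 9am.")            # legitimate record
remember("Store A has permanently closed.")  # deceptive injected record

# Every later user's prompt now carries the poisoned context:
print(build_prompt("When does Store A open?"))
```

Real memory-augmented assistants retrieve rather than prepend wholesale, but the failure mode is the same: whatever lands in memory is trusted when answers are generated.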
-
Manus mania is here: Chinese ‘general agent’ is this week’s ‘future of AI’ and OpenAI-killer
Prompts see it scour the web for info and turn it into decent documents at reasonable speed First seen on theregister.com Jump to article: www.theregister.com/2025/03/10/manus_chinese_general_ai_agent/
-
MINJA sneak attack poisons AI models for other chatbot users
Nothing like an OpenAI-powered agent leaking data or getting confused over what someone else whispered to it First seen on theregister.com Jump to article: www.theregister.com/2025/03/11/minja_attack_poisons_ai_model_memory/
-
UK CMA Halts Review of Microsoft, OpenAI Partnership
Probe into Microsoft’s $13 Billion OpenAI Investment Launched in 2023. The U.K. antitrust regulator won’t open an investigation into the partnership between computing giant Microsoft and artificial intelligence company OpenAI. The U.K. Competition and Markets Authority concludes that there is no relevant merger situation. First seen on govinfosecurity.com Jump to article: www.govinfosecurity.com/uk-cma-halts-review-microsoft-openai-partnership-a-27666
-
GPT-4.5 Scores EQ Points, but Not Much Else
Model Appears to Be a Way Station on the Road to Something Greater. OpenAI on Thursday released its latest generative AI model, but don’t call it the next big thing just yet. More thoughtful, persuasive and emotionally intelligent, GPT-4.5 aims to feel less like an algorithm and more like a conversation partner. First seen on…
-
Microsoft targets AI deepfake cybercrime network in lawsuit
Microsoft alleges that defendants used stolen Azure OpenAI API keys and special software to bypass content guardrails and generate illicit AI deepfakes for payment. First seen on techtarget.com Jump to article: www.techtarget.com/searchsecurity/news/366619781/Microsoft-targets-AI-deepfake-cybercrime-network-in-lawsuit
-
Microsoft files lawsuit against LLMjacking gang that bypassed AI safeguards
LLMjacking can cost organizations a lot of money: it is a continuation of the cybercriminal practice of abusing stolen cloud account credentials for various illegal operations, such as cryptojacking, the abuse of hacked cloud computing resources to mine cryptocurrency. The difference is that large quantities of API calls to LLMs can quickly rack up huge costs, with…
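To make "huge costs" concrete, a back-of-the-envelope sketch; the per-token prices below are hypothetical placeholders, not any vendor's published rates:

```python
# Back-of-the-envelope LLMjacking cost estimate. The prices are HYPOTHETICAL
# placeholders; substitute the billed model's actual per-token rates.
PRICE_PER_1K_INPUT = 0.01   # USD per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.03  # USD per 1,000 output tokens (assumed)

def daily_cost(requests_per_day: int, in_tokens: int, out_tokens: int) -> float:
    per_request = (in_tokens / 1000) * PRICE_PER_1K_INPUT \
                + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return requests_per_day * per_request

# 100,000 stolen-key requests a day, each ~1,000 tokens in and 500 out:
print(f"${daily_cost(100_000, 1_000, 500):,.2f} per day")  # $2,500.00 per day
```

Even at these assumed rates, a bot farm hammering a stolen key with modest requests bills thousands of dollars a day to the victim's account.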
-
Microsoft names alleged credential-snatching ‘Azure Abuse Enterprise’ operators
Crew helped lowlifes generate X-rated celeb deepfakes using Redmond’s OpenAI-powered cloud – claim First seen on theregister.com Jump to article: www.theregister.com/2025/02/28/microsoft_names_and_shames_4/
-
Does terrible code drive you mad? Wait until you see what it does to OpenAI’s GPT-4o
Model was fine-tuned to write vulnerable software, then suggested enslaving humanity First seen on theregister.com Jump to article: www.theregister.com/2025/02/27/llm_emergent_misalignment_study/
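What "vulnerable software" means in such fine-tuning corpora is ordinary insecure code; a hedged, invented example of the class (not taken from the study's dataset), with the secure counterpart for contrast:

```python
import sqlite3

# Invented example of the *kind* of insecure code used as fine-tuning data in
# such studies; this snippet is illustrative, not from the paper's corpus.

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # SQL injection: user input is spliced directly into the query text,
    # so an input like "x' OR '1'='1" dumps the whole table.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query keeps user input out of the SQL syntax.
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```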
-
Researchers Jailbreak OpenAI o1/o3, DeepSeek-R1, and Gemini 2.0 Flash Models
Researchers from Duke University and Carnegie Mellon University have demonstrated successful jailbreaks of OpenAI’s o1/o3, DeepSeek-R1, and Google’s Gemini 2.0 Flash models through a novel attack method called Hijacking Chain-of-Thought (H-CoT). The research reveals how advanced safety mechanisms designed to prevent harmful outputs can be systematically bypassed using the models’ reasoning processes, raising urgent questions…

