Tag: chatgpt
-
The AI Agent Identity Crisis: 80% of Agents Don’t Properly Identify Themselves, 80% of Sites Don’t Verify
AI agent identity verification fails at both ends. DataDome tested 698,000 sites; 80% couldn’t detect spoofed ChatGPT traffic. Here’s why. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/02/the-ai-agent-identity-crisis-80-of-agents-dont-properly-identify-themselves-80-of-sites-dont-verify/
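The site-side verification gap comes down to a missing origin check: traffic that claims an agent user agent should also originate from the agent operator's published IP ranges. A minimal sketch in Python, where the hard-coded CIDR ranges are illustrative stand-ins for whatever list the operator actually publishes:

```python
import ipaddress

# Illustrative allow-list only; a real deployment would fetch the agent
# operator's current published ranges rather than hard-code them.
PUBLISHED_RANGES = [
    ipaddress.ip_network(c) for c in ("23.98.142.176/28", "40.84.180.224/28")
]

def is_plausible_agent(user_agent: str, remote_ip: str) -> bool:
    """Reject traffic that claims an AI-agent UA but comes from outside the published ranges."""
    claims_agent = "ChatGPT-User" in user_agent or "GPTBot" in user_agent
    if not claims_agent:
        return True  # not claiming to be the agent; this check does not apply
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in PUBLISHED_RANGES)
```

This is the cheap half of verification (IP allow-listing); stronger schemes pair it with reverse-DNS confirmation or cryptographic request signing.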
-
Entra ID OAuth Consent Can Grant ChatGPT Access to Emails
OAuth consent in Entra ID can grant apps like ChatGPT email access after approval, exposing hidden risks that may bypass MFA and enable persistent access. First seen on hackread.com Jump to article: hackread.com/entra-id-oauth-consent-chatgpt-emails-access/
-
Fraudsters integrate ChatGPT into global scam campaigns
AI models are being folded into fraud and influence operations that follow long-standing tactics. A February 2026 update to OpenAI’s Disrupting Malicious Uses of Our Models … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2026/02/26/openai-malicious-chatgpt-use-report/
-
OpenAI Confirms Chinese Hackers Used ChatGPT in Cyberattack Campaign
OpenAI has confirmed that Chinese-linked operators misused ChatGPT as part of a broader campaign that blended cyber operations, online harassment, and covert influence tactics, according to its latest threat report “Disrupting malicious uses of AI.” While the models were not used to write exploits or break into networks directly, they were repeatedly abused to plan…
-
Chinese Police Use ChatGPT to Smear Japan PM Takaichi
A Chinese keyboard warrior inadvertently leaked information about politically motivated influence operations through a ChatGPT account. First seen on darkreading.com Jump to article: www.darkreading.com/cyberattacks-data-breaches/chinese-police-chatgpt-smear-japan-pm-takaichi
-
Understanding RAG Architecture: The Technical Foundation of Effective GEO
RAG powers every AI search engine. Understanding Retrieval Augmented Generation (how it indexes content, retrieves chunks, and cites sources) is essential for GEO. This technical guide reveals optimization strategies for ChatGPT, Perplexity, and Google AI Overviews based on RAG architecture. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/02/understanding-rag-architecture-the-technical-foundation-of-effective-geo/
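The pipeline the article describes (chunk, index, retrieve, cite) reduces to a small loop. A toy sketch, using bag-of-words cosine similarity as a stand-in for the embedding models real engines use:

```python
from collections import Counter
from math import sqrt

def chunk(text: str, size: int = 40) -> list[str]:
    """Split text into fixed-size word windows, a crude stand-in for semantic chunking."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and return the top k."""
    q = Counter(query.lower().split())
    ranked = sorted(chunks, key=lambda c: cosine(q, Counter(c.lower().split())),
                    reverse=True)
    return ranked[:k]
```

The retrieved chunks are what the generator sees and cites, which is why GEO advice centers on making individual chunks self-contained and quotable.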
-
Poisoning AI Training Data
All it takes to poison AI training data is to create a website: I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my…
-
Chinese group’s ChatGPT use reveals worldwide harassment campaign against critics
OpenAI said a Chinese law enforcement agency uploaded reports to ChatGPT that detail a worldwide digital operation to track and silence regime critics at home and abroad. First seen on cyberscoop.com Jump to article: cyberscoop.com/chinese-chatgpt-online-harassment-campaign-against-critics-dissidents/
-
OpenAI says Chinese cops used ChatGPT to plan and track smear ops against opponents
Note to secret agents: ChatGPT is NOT a private diary First seen on theregister.com Jump to article: www.theregister.com/2026/02/25/chinese_law_enforcement_chatgpt_abuse/
-
OAuth Vulnerabilities in Entra ID Could Exploit ChatGPT to Breach User Email Accounts
OAuth consent attacks in Microsoft Entra ID are giving threat actors a stealthy path to cloud email, and even trusted apps like ChatGPT can become a vehicle if permissions are abused. In this hypothetical case, a user in an Entra ID tenant adds the legitimate ChatGPT service principal and grants it Microsoft Graph OAuth permissions,…
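A basic defensive step against this pattern is auditing which delegated scopes each consented app actually holds. A hedged sketch; the record fields below are illustrative rather than the exact Microsoft Graph schema:

```python
# Scopes that grant mail access or long-lived tokens; extend per your policy.
RISKY_SCOPES = {"Mail.Read", "Mail.ReadWrite", "Mail.Send", "offline_access"}

def flag_risky_grants(grants: list[dict]) -> list[dict]:
    """Return consent grants whose space-separated scope string includes risky permissions."""
    flagged = []
    for g in grants:
        scopes = set(g.get("scope", "").split())
        hit = scopes & RISKY_SCOPES
        if hit:
            flagged.append({**g, "risky": sorted(hit)})
    return flagged
```

Even a legitimate app like ChatGPT showing up with Mail.Read plus offline_access is worth review, since that combination is exactly what gives an attacker persistent, MFA-surviving mailbox access.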
-
The Apple-Google AI Deal: What $1 Billion Says About Who’s Really Winning the AI Race
Apple chose Google’s Gemini over ChatGPT for Siri’s AI upgrade. This $1B/year deal reveals who’s actually winning the AI race, and it’s not who you think. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/02/the-apple-google-ai-deal-what-1-billion-says-about-whos-really-winning-the-ai-race/
-
Creating passwords with ChatGPT: why you’re better off not doing it
First seen on t3n.de Jump to article: t3n.de/news/passwoerter-chatgpt-erstellen-unsicher-1730517/
-
ChatGPT Ads Are Coming: What 800 Million Users Need to Know About the New Economics of ‘Free’ AI
OpenAI just announced ads are coming to ChatGPT. For 800M weekly users, this changes everything about how ‘free’ AI actually works. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/02/chatgpt-ads-are-coming-what-800-million-users-need-to-know-about-the-new-economics-of-free-ai/
-
Deepfakes, fraud, and unsafe content: 2025 saw 346 AI incidents, with ChatGPT involved most often
First seen on security-insider.de Jump to article: www.security-insider.de/steigende-vorfaelle-von-ki-missbrauch-2025-a-7188b0190068b40db2b49da6c7830a1f/
-
A new approach for GenAI risk protection
Solution 1: GenAI enterprise model: Implement enterprise licenses for approved GenAI solutions (such as ChatGPT Enterprise or Microsoft CoPilot 365, which is integrated into existing O365 tenants). Enterprise GenAI solutions typically include a robust set of built-in security tools that allow organizations to secure their data and implement DLP controls within the enterprise GenAI solution…
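The DLP controls mentioned above usually start with a pre-filter that scrubs prompts before they leave the tenant. A minimal sketch; the detection patterns are deliberately simplistic placeholders for the far richer detectors enterprise DLP products ship:

```python
import re

# Illustrative patterns only; real DLP uses validated, tuned detectors.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scrub_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact likely-sensitive spans before a prompt is sent to an external GenAI service."""
    hits = []
    for label, pat in PATTERNS.items():
        if pat.search(prompt):
            hits.append(label)
            prompt = pat.sub(f"[REDACTED:{label}]", prompt)
    return prompt, hits
```

In the enterprise model, this kind of check runs inside the approved gateway, so users keep GenAI access while the organization keeps its data.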
-
Side-Channel Attacks Against LLMs
Tags: access, attack, chatgpt, credit-card, data, defense, exploit, LLM, monitoring, network, open-source, openai, phone, side-channel. Here are three papers describing different side-channel attacks against LLMs. “Remote Timing Attacks on Efficient Language Model Inference”: Abstract: Scaling up language models has significantly increased their capabilities. But larger models are slower models, and so there is now an extensive body of work (e.g., speculative sampling or parallel decoding) that improves the (average case)…
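To give a flavor of how an output side channel narrows secrets, here is a toy sketch of a length channel (a related but simpler channel than the timing attacks the papers study), assuming encrypted streaming sends one token per network frame so frame sizes reveal token lengths. The vocabulary is made up:

```python
# Toy vocabulary; an attacker would use a real tokenizer's vocabulary.
VOCAB = ["cat", "house", "password", "hi", "secret", "token"]

def candidates_for_lengths(observed: list[int], vocab: list[str]) -> list[list[str]]:
    """For each observed token length, list the vocabulary words that fit.

    Even without decrypting anything, each observed length prunes the
    candidate set; sequences of lengths prune it multiplicatively.
    """
    return [[w for w in vocab if len(w) == n] for n in observed]
```

Padding responses to uniform frame sizes is the standard mitigation for this class of leak.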
-
ChatGPT gets new security feature to fight prompt injection attacks
OpenAI has introduced Lockdown Mode and Elevated Risk labels in ChatGPT to help users and organizations reduce the risk of prompt injection attacks and other advanced security … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2026/02/16/chatgpt-lockdown-mode-elevated-risk/
-
OpenAI drops “safe” from its mission statement
During its restructuring into a for-profit company, OpenAI removed the safety wording from its mission statement. First seen on golem.de Jump to article: www.golem.de/news/chatgpt-openai-streicht-sicher-aus-seinem-leitbild-2602-205413.html
-
Proofpoint Purchases Startup Acuvity to Bolster AI Security
Deal Targets GenAI Risks, Prompt Injection Attacks and Autonomous Agents. Proofpoint has acquired AI security startup Acuvity to address fast-evolving risks tied to generative AI, prompt injection and autonomous agents. The company says intent-based guardrails and deep AI forensics will help enterprises secure tools such as ChatGPT, Claude and emerging agent frameworks. First seen on…
-
Fake AI Assistants in Google Chrome Web Store Steal Passwords and Spy on Emails
Hundreds of thousands of users have downloaded malicious AI extensions masquerading as ChatGPT, Gemini, Grok and others, warn cybersecurity researchers at LayerX First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/fake-ai-assistants-google-chrome/
-
Malicious Chrome AI Extensions Target 260,000 Users with Injected Iframes
As AI tools like ChatGPT, Claude, Gemini, and Grok gain mainstream adoption, cybercriminals are weaponizing their popularity to distribute malicious browser extensions. Security researchers have uncovered a coordinated campaign involving 30 Chrome extensions that masquerade as legitimate AI assistants while secretly deploying dangerous surveillance capabilities affecting over 260,000 users. The malicious extensions pose as AI-powered…
-
OpenAI released GPT-5.3-Codex-Spark, a real-time coding model
OpenAI has released a research preview of GPT-5.3-Codex-Spark, an ultra-fast model for real-time coding in Codex. It is available to ChatGPT Pro users in the latest versions … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2026/02/13/openai-gpt-5-3-codex-spark/
-
8,000+ ChatGPT API Keys Left Publicly Accessible
The rapid integration of artificial intelligence into mainstream software development has introduced a new category of security risk, one that many organizations are still unprepared to manage. According to research conducted by Cyble Research and Intelligence Labs (CRIL), thousands of exposed … First seen on thecyberexpress.com Jump to article: thecyberexpress.com/exposed-chatgpt-api-keys-github-websites/
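Most of these exposures are catchable with a simple repository scan before code is pushed. A hedged sketch: the “sk-” prefix is the documented shape of OpenAI keys, while the length bound and character class here are assumptions for illustration:

```python
import re

# Heuristic pattern for OpenAI-style secret keys; tune the length bound
# and character class to the key formats you actually need to catch.
KEY_RE = re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b")

def find_exposed_keys(text: str) -> list[str]:
    """Return likely API keys found in a blob of source or config text."""
    return KEY_RE.findall(text)
```

Run over staged diffs in a pre-commit hook, this catches the common failure mode behind the headline: a key pasted into code or config and committed publicly.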

