Tag: chatgpt
-
Breach of Confidence: 24 April 2026
I spent an hour this week explaining to someone that no, ChatGPT cannot reliably fact-check itself, and yes, that’s a problem when your entire business strategy depends on it being right. They looked at me like I’d just told them Father Christmas works part-time at Argos. The Swing That Crosses Borders 40 Times a Minute……
-
OpenAI tackles a bad habit people have when interacting with AI
Since people tend to paste personal data into AI tools such as ChatGPT, OpenAI has released Privacy Filter, an open-weight model designed to detect and redact personally … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2026/04/23/openai-privacy-filter-personally-identifiable-information/
-
OpenAI tackles a bad habit people have when interacting with AI
Since people tend to paste personal data into AI tools such as ChatGPT, OpenAI has released Privacy Filter, an open-weight model designed to detect and redact personally … First seen on helpnetsecurity.com Jump to article: www.helpnetsecurity.com/2026/04/23/openai-privacy-filter-personally-identifiable-information/
-
CISOs reshape their roles as business risk strategists
Tags: ai, business, chatgpt, ciso, compliance, cyber, cybersecurity, data, finance, jobs, mitigation, risk, risk-assessment, skills, strategy, technology, toolEvolving risks require a new CISO leadership profile: The shift to CISO as a risk position, and not one limited to technical and cybersecurity alone, has been years in the making. But it has accelerated since the arrival of ChatGPT in late 2022, as organizations embraced first generative AI and more recently agentic AI. That’s…
-
Kein Word, kein ChatGPT, kein Google: Was passiert, wenn man eine Woche auf europäische Software setzt
First seen on t3n.de Jump to article: t3n.de/news/kein-word-kein-chatgpt-kein-google-1728129/
-
OpenAI expands Trusted Access for Cyber program with new GPT 5.4 Cyber model
A new cybersecurity-focused variant of ChatGPT and an expanded access program put OpenAI in direct competition with Anthropic’s Project Glasswing, and raises fresh questions about who gets to wield the most powerful security AI. First seen on cyberscoop.com Jump to article: cyberscoop.com/openai-expands-trusted-access-for-cyber-to-thousands-for-cybersecurity/
-
KI-Sprachassistent für Bauarbeiter Was er auf der Baustelle kann und was nicht
Auch in der Baubranche werden traditionelle Arbeitsmethoden immer mehr durch KI-gesteuerte Anwendungen ersetzt. Der ChatGPT für Bauarbeiter ist ein innovatives Tool, das die Arbeitsabläufe auf der Baustelle optimiert. Der Beitrag erklärt, wie KI-Tools für Auftragnehmer die Arbeiter auf der Baustelle unterstützen, die Kommunikation verbessern, die Zusammenarbeit stärken, die Produktivität steigern und langfristig gesehen die Wettbewerbsfähigkeit…
-
ChatGPT under scrutiny as Florida investigates campus shooting
New cases and research suggest AI chatbots don’t always shut down dangerous conversations. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/04/chatgpt-under-scrutiny-as-florida-investigates-campus-shooting/
-
Claude and ChatGPT Exploited in Sweeping Cyber Campaign Against Government Agencies
In a groundbreaking technical report released by Gambit Security researcher Eyal Sela, new details have emerged about a massive cyberattack targeting government infrastructure. A single threat actor successfully leveraged artificial intelligence platforms to breach nine Mexican government agencies. The campaign, which operated from late December 2025 through mid-February 2026, resulted in the exfiltration of hundreds…
-
ChatGPT rolls out new $100 Pro subscription to challenge Claude
OpenAI has rolled out a new Pro subscription that costs $100 and is in line with Claude’s pricing, which also has a $100 subscription, in addition to the $200 Max monthly plan. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-rolls-out-new-100-pro-subscription-to-challenge-claude/
-
Florida investigates OpenAI for role ChatGPT may have played in deadly shooting
Last week, the family of one of two victims in the attack announced it plans to sue OpenAI because the gunman allegedly constantly communicated with ChatGPT in the days leading to the shootings. First seen on therecord.media Jump to article: therecord.media/florida-investigates-openai-chatgpt-deadly-shooting
-
ChatGPT, Claude, and Gemini Among 11 AI Models Vulnerable to One-Line Jailbreak
A newly discovered jailbreak technique named >>sockpuppeting<< successfully forces 11 leading artificial intelligence models, including ChatGPT, Claude, and Gemini, to bypass their safety guardrails. By exploiting a standard application programming interface (API) feature with a single line of code, attackers can trick these models into generating malicious outputs without requiring complex mathematical optimisation. When a…
-
What to Know About CyberAv3ngers: The IRGC-Linked Group Targeting Critical Infrastructure
Tags: access, advisory, ai, attack, authentication, automation, backup, cctv, chatgpt, cisa, communications, compliance, control, credentials, crypto, cve, cyber, cybersecurity, data, data-breach, defense, detection, dns, email, exploit, finance, firewall, flaw, government, group, healthcare, infrastructure, intelligence, international, Internet, iot, iran, kev, leak, linux, malicious, malware, mitigation, mitre, monitoring, network, office, openai, password, radius, resilience, risk, router, service, siem, software, strategy, switch, technology, threat, tool, update, vpn, vulnerability, vulnerability-managementAn Iran-affiliated threat group has evolved from defacing water utility displays to deploying custom ICS malware and exploiting Rockwell Automation PLCs across multiple U.S. critical infrastructure sectors. Key takeaways: CyberAv3ngers is a state-directed threat group operating under Iran’s IRGC Cyber-Electronic Command. The U.S. Treasury sanctioned six named officials in February 2024 and the State Department…
-
10 ChatGPT AI Prompts L1 SOC Analysts Can Use in Their Daily Work
Discover 10 practical ChatGPT prompts SOC analysts can use to speed up triage, analyze threats, improve documentation, and enhance incident response workflows. The post 10 ChatGPT AI Prompts L1 SOC Analysts Can Use in Their Daily Work appeared first on TechRepublic. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-chatgpt-prompts-soc-analysts-incident-response/
-
10 ChatGPT Prompts L1 SOC Analysts Can Use in Their Daily Work
10 ChatGPT Prompts L1 SOC Analysts Can Use in Their Daily Work First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/artificial-intelligence/10-chatgpt-prompts-l1-soc-analysts-can-use-in-their-daily-work/
-
LLM-generated passwords are indefensible. Your codebase may already prove it
Temperature is not a remedy: A reflexive objection from practitioners familiar with LLM configuration holds that increasing sampling temperature would attenuate these distributional biases by flattening the probability landscape from which characters are drawn. Irregular’s empirical results are unambiguous in refuting this intuition. Testing conducted at temperature 1.0, the maximum setting on Claude, produces no…
-
The Attack Helix: Praetorian Guard’s AI Architecture for Offensive Security
The Kill Chain models how an attack succeeds. The Attack Helix models how the offensive baseline improves. Tipping Points One person. Two AI subscriptions. Ten government agencies. 150 gigabytes of sovereign data. In December 2025, a single unidentified operator used Anthropic’s Claude and OpenAI’s ChatGPT to breach ten Mexican government agencies and a financial institution….…
-
Why I Cancelled Semrush After 7 Years (And Why GEO Is the Only B2B Growth Strategy That Matters Now)
I cancelled my Semrush subscription last month. Not because it stopped working, but because the metrics it tracks no longer predict revenue. The game changed when 90% of B2B buyers started using ChatGPT to build their vendor shortlists. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/04/why-i-cancelled-semrush-after-7-years-and-why-geo-is-the-only-b2b-growth-strategy-that-matters-now/
-
Fake ChatGPT Ad Blocker Chrome Extension Caught Spying on Users
A fake Chrome browser extension called ‘ChatGPT Ad Blocker’ was harvesting conversations of ChatGPT users in the name of offering an ad-free experience. First seen on hackread.com Jump to article: hackread.com/fake-chatgpt-ad-blocker-chrome-extension-spy-users/
-
Asking AI for personal advice is a bad idea, Stanford study shows
AI chatbots, including ChatGPT, Claude, and Gemini, were all too willing to validate and hype up their users, a new Stanford study showed. First seen on securityboulevard.com Jump to article: securityboulevard.com/2026/03/asking-ai-for-personal-advice-is-a-bad-idea-stanford-study-shows/
-
Check Point Research Reveals ChatGPT Data Exfiltration Flaw
A ChatGPT flaw lets a single prompt silently exfiltrate data via DNS, bypassing safeguards. First seen on esecurityplanet.com Jump to article: www.esecurityplanet.com/artificial-intelligence/check-point-research-reveals-chatgpt-data-exfiltration-flaw/
-
ChatGPT Security Issue Enabled Data Theft via Single Prompt
OpenAI has patched vulnerability, which Check Point said was because of a DNS loophole First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/chatgpt-security-issue-steal-data/
-
How we made Trail of Bits AI-native (so far)
Tags: access, ai, application-security, attack, automation, blockchain, business, ceo, chatgpt, computer, computing, conference, control, data, email, germany, government, identity, injection, jobs, macOS, marketplace, nvidia, open-source, risk, service, skills, strategy, supply-chain, technology, threat, tool, vulnerabilityThis post is adapted from a talk I gave at [un]prompted, the AI security practitioner conference. Thanks to Gadi Evron for inviting me to speak. You can watch the recorded presentation below or download the slides. Most companies hand out ChatGPT licenses and wait for the productivity numbers to move. We built a system instead.…
-
OpenAI patches twin leaks as Codex slips and ChatGPT spills
ChatGPT’s hidden outbound channel leaks user data: OpenAI has reportedly fixed a parallel bug in ChatGPT that goes beyond credential theft. Check Point researchers uncovered a hidden outbound communication path in ChatGPT’s code execution runtime that could be triggered with a single malicious prompt.This channel successfully bypassed the platform’s expected safeguards around external data sharing.…
-
ChatGPT Vulnerability Enabled Silent Leakage of Prompts and Sensitive Information
Artificial intelligence assistants increasingly handle our most sensitive data, operating under the assumption that enclosed environments keep this information secure. However, a newly disclosed vulnerability in ChatGPT shattered this expectation. Discovered by Check Point Research, this flaw exploited the isolated code execution runtime to establish a covert outbound communication channel, effectively turning standard chat sessions…
-
OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability
A previously unknown vulnerability in OpenAI ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point.”A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content,” the cybersecurity company said in First…
-
Schwachstelle bei ChatGPT erlaubte Konversationsdaten auszulesen
Die Sicherheitsforscher von Check Point Research haben eine bislang unbekannte Sicherheitslücke aufgedeckt, die es ermöglichte, sensible ChatGPT-Konversationsdaten unbemerkt ohne Wissen oder Zustimmung der Nutzer abzusaugen. Inzwischen hat OpenAI die Lücke geschlossen. Die entdeckte Schwachstelle zeigt, KI-Plattformen müssen wie Cloud- und Computing-Infrastruktur behandelt werden. Die integrierte Sicherheit beseitigt Risiken nicht. Unternehmen können sich nicht […] First…
-
ChatGPT Data Leakage via a Hidden Outbound Channel in the Code Execution Runtime
ey Takeaways What Happened AI assistants now handle some of the most sensitive data people own. Users discuss symptoms and medical history. They ask questions about taxes, debts, and personal finances, upload PDFs, contracts, lab results, and identity-rich documents that contain names, addresses, account details, and private records. That trust depends on a simple expectation:…

