Tag: chatgpt
-
Nork snoops whip up fake South Korean military ID with help from ChatGPT
Kimsuky gang proves that with the right wording, you can turn generative AI into a counterfeit factory First seen on theregister.com Jump to article: www.theregister.com/2025/09/15/north_korea_chatgpt_fake_id/
-
AI-Forged Military IDs Used in North Korean Phishing Attack
Genians observed the Kimsuky group impersonate a defense institution in a spear-phishing attack, leveraging ChatGPT to create fake military ID cards First seen on infosecurity-magazine.com Jump to article: www.infosecurity-magazine.com/news/ai-military-ids-north-korea/
-
Hackers using generative AI “ChatGPT” to evade anti-virus defenses
The Kimsuky APT group has begun leveraging generative AI ChatGPT to craft deepfake South Korean military agency ID cards. Phishing lures deliver batch files and AutoIt scripts designed to evade anti-virus scanning through sophisticated obfuscation. Organizations must deploy endpoint detection and response (EDR) solutions to unmask hidden scripts and secure endpoints. On July 17, 2025,…
-
ChatGPT makes Projects feature free, adds a toggle to split chat
Tags: chatgpt. ChatGPT’s Projects feature is now free, and a second new feature lets you create new conversations from existing ones. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-makes-projects-feature-free-adds-a-toggle-to-split-chat/
-
How the generative AI boom opens up new privacy and cybersecurity risks
Privacy and cybersecurity risks: Another major problem lies in potential privacy and cybersecurity breaches, both for end users and for the companies themselves. Panda warns that AIs fed with large amounts of personal data can become a gateway to fraud, or be used to create far more sophisticated attacks if they fall into the wrong hands.…
-
Security in ChatGPT: OpenAI reports chat transcripts to law enforcement
Under certain circumstances, chat transcripts of ChatGPT users are reviewed by an OpenAI team and reported to authorities. First seen on golem.de Jump to article: www.golem.de/news/sicherheit-in-chatgpt-openai-meldet-chatverlaeufe-an-strafverfolgungsbehoerden-2509-199717.html
-
Leaked ChatGPT Chats: Users Treat AI as Therapist, Lawyer, Confidant
Leaked ChatGPT chats reveal users sharing sensitive data, resumes, and seeking advice on mental health, exposing risks of… First seen on hackread.com Jump to article: hackread.com/leaked-chatgpt-chats-users-ai-therapist-lawyer-confidant/
-
OpenAI releases big upgrade for ChatGPT Codex for agentic coding
OpenAI has announced a big update for Codex, which is the company’s agentic coding tool. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-releases-big-upgrade-for-chatgpt-codex-for-agentic-coding/
-
Anthropic is testing GPT Codex-like Claude Code web app
Anthropic is planning to bring the famous Claude Code to the web, and it might be similar to ChatGPT Codex, but you’ll need GitHub to get started. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/anthropic-is-testing-gpt-codex-like-claude-code-web-app/
-
ChatGPT can now create flashcard quizzes on any topic
Tags: chatgpt. If you use ChatGPT to learn new topics, you might want to try its new flashcard-based quiz feature, which can help you evaluate your progress. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/chatgpt-can-now-create-flashcards-quiz-on-any-topic/
-
OpenAI is testing “Thinking effort” for ChatGPT
OpenAI is working on a new feature called the Thinking effort picker for ChatGPT. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-is-testing-thinking-effort-for-chatgpt/
-
Can Your Security Stack See ChatGPT? Why Network Visibility Matters
Generative AI platforms like ChatGPT, Gemini, Copilot, and Claude are increasingly common in organizations. While these solutions improve efficiency across tasks, they also present new data-leak prevention challenges for generative AI. Sensitive information may be shared through chat prompts, files uploaded for AI-driven summarization, or browser plugins that bypass familiar security controls. First seen…
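The visibility problem above is, at its core, a log-inspection task. A minimal sketch of the idea, assuming a simplified `user domain bytes` proxy log format and an illustrative (not exhaustive) domain list, neither of which comes from the article:

```python
# Illustrative sketch: flag requests to known generative-AI hosts in proxy logs.
# The domain list and the 'user domain bytes' log format are assumptions.
GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "copilot.microsoft.com", "claude.ai",
}

def flag_genai_requests(log_lines):
    """Return (user, domain) pairs for requests to generative-AI hosts."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in GENAI_DOMAINS:
            hits.append((parts[0], parts[1]))
    return hits

logs = [
    "alice claude.ai 5120",
    "bob example.com 200",
    "carol chatgpt.com 99000",
]
print(flag_genai_requests(logs))  # [('alice', 'claude.ai'), ('carol', 'chatgpt.com')]
```

A production deployment would match on TLS SNI or proxy CONNECT hosts rather than a toy log format, but the core check, membership of the destination host in a curated genAI list, is the same.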
-
ChatGPT hates LA Chargers fans
Tags: chatgpt. Harvard researchers find model guardrails tailor query responses to the user’s inferred politics and other affiliations. First seen on theregister.com Jump to article: www.theregister.com/2025/08/27/chatgpt_has_a_problem_with/
-
Cloudflare brings real-time protection to ChatGPT, Claude, and Google Gemini
To better secure generative AI for enterprises, Cloudflare is partnering with leading AI providers. By embedding directly into the most popular generative tools, Cloudflare supports secure AI use. First seen on infopoint-security.de Jump to article: www.infopoint-security.de/cloudflare-bringt-echtzeit-schutz-in-chatgpt-claude-und-google-gemini/a41804/
-
Shadow AI is surging, getting AI adoption right is your best defense
Why most organizations fail at phase one: Despite the clarity of this progression, many organizations struggle to begin. One of the most common reasons is poor platform selection. Either no tool is made available, or the wrong class of tool is introduced. Sometimes what is offered is too narrow, designed for one function or team.…
-
ChatGPT agents: the more autonomously AI agents act, the more dangerous they become
Generative AI has so far been understood mainly as an assistance tool, but the introduction of so-called “agents” marks an impending paradigm shift. These autonomous systems do not merely execute commands; they act independently, interact with systems, make decisions, and can, in the worst case, cause damage without any human involvement. In a new blog post, Trend Micro warns of the……
-
ChatGPT-5 Downgrade Attack Allows Hackers to Evade AI Defenses With Minimal Prompts
Security researchers from Adversa AI have uncovered a critical vulnerability in ChatGPT-5 and other major AI systems that allows attackers to bypass safety measures using simple prompt modifications. The newly discovered attack, dubbed PROMISQROUTE, exploits AI routing mechanisms that major providers use to save billions of dollars annually by directing user queries to cheaper, less…
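The routing mechanism the researchers describe can be illustrated with a toy sketch. This is not OpenAI’s actual router; the hint phrases, model names, and routing logic below are assumptions chosen only to show how brief, plain prompt cues could flip a cost-saving router onto a cheaper, less-guarded model:

```python
# Toy illustration of a cost-saving model router (hypothetical, not a real API).
# Phrases suggesting a "simple" request steer the query to a cheaper model,
# which, per the article, may carry a weaker safety stack.
CHEAP_HINTS = ("quick answer", "no need to think", "respond fast")

def route(prompt: str) -> str:
    """Pick a backend model based on cheap-looking cues in the prompt."""
    lowered = prompt.lower()
    if any(hint in lowered for hint in CHEAP_HINTS):
        return "cheap-legacy-model"   # assumed weaker guardrails
    return "frontier-model"

print(route("Please give a quick answer: ..."))  # cheap-legacy-model
print(route("Explain transformer attention."))  # frontier-model
```

The point of the sketch is that the routing decision hinges on trivial surface features of the prompt, so an attacker who knows (or guesses) the hint phrases can deliberately downgrade which model answers.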
-
Easy ChatGPT Downgrade Attack Undermines GPT-5 Security
By using brief, plain clues in their prompts that are likely to influence the app to query older models, a user can downgrade ChatGPT for malicious ends. First seen on darkreading.com Jump to article: www.darkreading.com/application-security/chatgpt-downgrade-attack-gpt-5-security
-
Researchers show: a single prepared document is enough to make ChatGPT steal data
Tags: chatgpt. First seen on t3n.de Jump to article: t3n.de/news/chatgpt-datenklau-forscher-1701254/
-
Most AI tools have massive security problems
Cybersecurity experts say that “uncontrolled AI use” creates dangerous blind spots, as research shows that 84% of AI tool vendors have suffered security breaches. Three weeks ago, the ChatGPT “shared conversations” leak exposed thousands of user chats, some containing names, addresses, and other personally identifiable information, in Google search results. The links, which were originally… First seen on…
-
OpenAI releases $4 ChatGPT plan, but it’s not available in the US for now
OpenAI has finally announced the ChatGPT Go subscription, which costs just INR 399 (about $4) in India. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-releases-4-chatgpt-plan-but-its-not-available-in-the-us-for-now/
-
The Hidden Risks of External AI Models and How Businesses can Mitigate Them
As AI adoption accelerates, businesses face hidden risks from third-party models like ChatGPT and Claude, including data leakage and malicious data infiltration. By implementing corporate AI tools and educating employees, companies can harness generative AI’s benefits while safeguarding sensitive data, compliance, and trust. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/08/the-hidden-risks-of-external-ai-models-and-how-businesses-can-mitigate-them/

