URL has been copied successfully!
6 ways attackers abuse AI services to hack your business
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

Abusing AI platforms as covert C2 channels: Cybercriminals are also abusing AI platforms as covert command-and-control (C2) channels by turning AI services into proxies that hide malicious traffic inside the flow of legitimate content.Instead of running a dedicated C2 server, malware is programmed to fetch commands and exfiltrate data through AI services, circumventing traditional security controls in the process.For example, the SesameOp backdoor hid command traffic inside the OpenAI Assistants API, camouflaging instructions to malware as normal AI development activity.This is far from an isolated example and the potential for misuse is rife.For example, Check Point Research demonstrated how Microsoft Copilot and Grok might be manipulated through their public web interfaces to fetch attacker-controlled URLs and return responses. This behavior opens the door to abuse of AI systems without requiring an API key or authenticated account.

Dependency poisoning in AI workflows: Rather than attacking an AI system directly some assaults have relied on poisoning downstream dependencies that an agent relies on for data processing.In one case, a compromised NPM package was injected into an agentic workflow’s dependency chain.”This mirrors classical supply chain attacks (e.g. SolarWinds), but a poisoned dependency in an agentic pipeline doesn’t just leak data, it can alter the agent’s decision-making, tool selection, or output without any visible anomaly,” says Jozu’s Micklea.

Double agents: Some attackers are weaponizing vulnerabilities in agents rather than abusing components of an enterprise’s legacy IT infrastructure.For example, the “EchoLeak” command injection vulnerability in Microsoft 365 Copilot (CVE-2025-32711) shows that a single email with concealed prompt-injection instructions is sufficient to force the AI assistant to exfiltrate internal files and emails to an external server without user interaction.A series of vulnerabilities (such as CVE-2026-25253) in OpenClaw, the popular open-source personal AI assistant, created a route for a malicious website to take complete control of the developer’s AI agent.”More than 21,000 such instances were detected, and the researchers further observed that 12% of the skills marketplace for the OpenClaw platform was distributing malware,” says Dr. Suleyman Ozarslan, VP of Picus Labs at Picus Security, a specialist in breach and attack simulation.Security researchers at Varonis discovered an attack against Microsoft Copilot Personal that sidestepped built-in AI safeguards simply by asking for sensitive data twice.The Reprompt vulnerability, which effectively turned Microsoft Copilot into a data exfiltration tool, was reported to Microsoft, which has responded by issuing a patch.

AI-orchestrated espionage campaigns: Anthropic caught threat actors abusing Claude Code to manage operational tasks in a cyber-espionage campaign in September 2025.A suspected Chinese state-sponsored group designated GTG-1002 used Claude Code to execute 80-90% of tactical operations independently, at physically impossible request rates for human operators.Attackers abused the AI agentic capabilities of Claude Code to automate the process of scripting, target research, building attack tooling, and other functions.”The attackers decomposed their operation into thousands of small, individually innocuous tasks, combined with role-play framing that convinced the model it was operating as part of a legitimate security assessment,” explains Yagub Rahimov, CEO at cybersecurity startup Polygraf AI.

Creating modular black-hat AI platforms: The threat landscape has shifted from abusing chatbots to building dedicated, weaponized AI stacks like Xanthorox AI.Unlike general-purpose LLMs, Xanthorox is a purpose-built offensive platform designed specifically for cybercrime. The platform features modules for functions such as malware generation and vulnerability exploits.”Hexstrike AI Model Context Protocol (MCP) integration allows Xanthorox to move beyond mere ‘assisted’ hacking into the realm of fully autonomous agent systems, moving it into the realm of ‘vibe hacking,’” says Radware’s Geenens. Hexstrike is an open-source, AI-powered offensive security framework originally designed for ethical penetration testing.

Check against delivery: ZbynÄ›k Sopuch, CTO of cybersecurity vendor Safetica, says that many attackers are no longer just exploiting software vulnerabilities, preferring instead to exploit the trust organizations place in AI.”This means security teams need to treat AI assistants the same exact way they treat human privileged users: with tight control, specific monitoring, and most importantly, never assume anyone or anything to be safe,” Sopuch concludes.

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4154222/6-ways-attackers-abuse-ai-services-to-hack-your-business.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link