Crooks are hijacking and reselling AI infrastructure: Report

- exposed endpoints on default ports of common LLM inference services;
- unauthenticated API access without proper access controls;
- development/staging environments with public IP addresses;
- MCP servers connecting LLMs to file systems, databases, and internal APIs.

Common misconfigurations leveraged by these threat actors include:
- Ollama running on port 11434 without authentication;
- OpenAI-compatible APIs on port 8000 exposed to the internet;
- MCP servers accessible without access controls;
- development/staging AI infrastructure with public IPs;
- production chatbot endpoints (customer support, sales bots) without authentication or rate limiting.

George Gerchow, chief security officer at Bedrock Data, said Operation Bizarre Bazaar “is a clear sign that attackers have moved beyond ad hoc LLM abuse and now treat exposed AI infrastructure as a monetizable attack surface. What’s especially concerning isn’t just unauthorized compute use, but the fact that many of these endpoints are now tied to the Model Context Protocol (MCP), the emerging open standard for securely connecting large language models to data sources and tools. MCP is powerful because it enables real-time context and autonomous actions, but without strong controls, those same integration points become pivot vectors into internal systems.”

Defenders need to treat AI services with the same rigor as APIs or databases, he said, starting with authentication, telemetry, and threat modelling early in the development cycle. “As MCP becomes foundational to modern AI integrations, securing those protocol interfaces, not just model access, must be a priority,” he said.

In an interview, Pillar Security report authors Eilon Cohen and Ariel Fogel couldn’t estimate how much revenue threat actors might have pulled in so far. But they warn that CSOs and infosec leaders had better act fast, particularly if an LLM is accessing critical data.

Their report described three components to the Bizarre Bazaar campaign:
- The scanner: a distributed bot infrastructure that systematically probes the internet for exposed AI endpoints. Every exposed Ollama instance, every unauthenticated vLLM server, every accessible MCP endpoint gets cataloged. Once an endpoint appears in scan results, exploitation attempts begin within hours.
- The validator: once scanners identify targets, infrastructure tied to an alleged criminal site validates the endpoints through API testing. During a concentrated operational window, the attacker tested placeholder API keys, enumerated model capabilities, and assessed response quality.
- The marketplace: discounted access to 30+ LLM providers is being sold on a site called The Unified LLM API Gateway. It’s hosted on bulletproof infrastructure in the Netherlands and marketed on Discord and Telegram.

So far, the researchers said, those buying access appear to be people building their own AI infrastructure and trying to save money, as well as people involved in online gaming.

Threat actors may not only be stealing AI access from fully developed applications, the researchers added. A developer prototyping an app who carelessly leaves a server unsecured could be victimized through credential theft as well.

Joseph Steinberg, a US-based AI and cybersecurity expert, said the report is another illustration of how new technology like artificial intelligence creates new risks, and with them the need for new security solutions beyond traditional IT controls. CSOs need to ask themselves whether their organization has the skills needed to safely deploy and protect an AI project, or whether the work should be outsourced to a provider with the needed expertise.
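Teams wondering whether they are already in these scanners’ sights can run the same kind of probe against their own hosts. The sketch below is a minimal, hypothetical check (the host, port, and helper name are illustrative, not from the report): it asks an Ollama-style endpoint for its model list over the unauthenticated `/api/tags` route, and any successful answer means the server is exposed in exactly the way the Bizarre Bazaar scanners catalog.

```python
import json
import urllib.request
import urllib.error

def ollama_exposed(host: str, port: int = 11434, timeout: float = 5.0) -> bool:
    """Return True if an Ollama-style API answers /api/tags without credentials.

    A True result means anyone on the network path can enumerate (and use)
    the models on this server -- the condition the scanners look for.
    """
    url = f"http://{host}:{port}/api/tags"  # Ollama's unauthenticated model-list route
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.load(resp)
            # Ollama returns {"models": [...]}; any parseable answer = exposed.
            return "models" in body
    except (urllib.error.URLError, OSError, ValueError):
        # Connection refused, timeout, or a non-JSON reply: not reachable this way.
        return False

if __name__ == "__main__":
    print(ollama_exposed("127.0.0.1"))
```

Run it from outside the network perimeter: a `True` from the public internet means the endpoint should be firewalled or put behind authentication immediately.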

Mitigation: Pillar Security said CSOs with externally-facing LLMs and MCP servers should:
- Enable authentication on all LLM endpoints. Requiring authentication eliminates opportunistic attacks. Organizations should verify that Ollama, vLLM, and similar services require valid credentials for all requests.
- Audit MCP server exposure. MCP servers must never be directly accessible from the internet. Verify firewall rules, review cloud security groups, and confirm authentication requirements.
- Block known malicious infrastructure. Add the 204.76.203.0/24 subnet to deny lists. For the MCP reconnaissance campaign, block AS135377 ranges.
- Implement rate limiting to stop burst exploitation attempts. Deploy WAF/CDN rules for AI-specific traffic patterns.
- Audit production chatbot exposure. Every customer-facing chatbot, sales assistant, and internal AI agent must implement security controls to prevent abuse.

Don’t give up: Despite the number of news stories in the past year about AI vulnerabilities, Meghu said the answer is not to give up on AI, but to keep strict controls on its usage. “Do not just ban it, bring it into the light and help your users understand the risk, as well as work on ways for them to use AI/LLM in a safe way that benefits the business,” he advised.

“It is probably time to have dedicated training on AI use and risk,” he added. “Make sure you take feedback from users on how they want to interact with an AI service and make sure you support and get ahead of it. Just banning it sends users into a shadow IT realm, and the impact from this is too frightening to risk people hiding it. Embrace and make it part of your communications and planning with your employees.”

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4123806/crooks-are-hijacking-and-reselling-ai-infrastructure-report.html
