What CISOs need to know about the OpenClaw security nightmare


OpenClaw exposes enterprise security gaps: The first big lesson of the OpenClaw situation is that enterprises need to get their security fundamentals in place, because any gaps, anywhere at all, will now be found and exploited at an unprecedented pace. In the case of OpenClaw, that means limiting user privileges to the bare minimum, requiring multi-factor authentication on all accounts, and putting other basic security hygiene in place. That won't solve the problem of OpenClaw, or of the other agentic AI platforms coming down the line, but it will help limit exposure and reduce the blast radius when there is a breach.

There are also steps enterprises can take to limit the dangers of OpenClaw in particular, says IEEE senior member Kayne McGladrey. To start, companies can look at network-level telemetry. "What's the network traffic coming out of a device?" McGladrey asks. "Is this thing suddenly using a lot of AI at a rapid pace? Are there massive spikes going on with token usage?" Organizations can also use tools like Shodan to find publicly addressable instances, he adds, though internal firewall configurations may hide others.

For organizations that want to allow experimentation rather than impose outright bans, he suggests a measured approach: "We have to talk about phased pilot programs for users interested in it." For example, users may be allowed to run OpenClaw on managed endpoints with segmentation rules that isolate them from internal systems, along with strong telemetry and continuous monitoring of agent activity, outbound traffic, and alerts for anomalous behavior.
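The network-level telemetry McGladrey describes can be approximated with a simple sweep over egress logs. The sketch below is illustrative only: the log record shape, the list of AI API hosts, and the byte threshold are all assumptions, not part of any real monitoring product.

```python
from collections import defaultdict

# Hypothetical proxy-log records: (user, destination_host, bytes_out).
# The hosts below are illustrative stand-ins for AI API endpoints.
AI_API_HOSTS = {"api.openai.com", "api.anthropic.com"}

def flag_ai_traffic_spikes(records, baseline_bytes=1_000_000):
    """Return users whose total outbound bytes to AI API hosts exceed a
    per-user baseline -- a crude stand-in for token-usage telemetry."""
    totals = defaultdict(int)
    for user, host, nbytes in records:
        if host in AI_API_HOSTS:
            totals[user] += nbytes
    return sorted(u for u, total in totals.items() if total > baseline_bytes)

records = [
    ("alice", "api.openai.com", 900_000),
    ("alice", "api.anthropic.com", 400_000),  # alice: 1.3 MB to AI hosts
    ("bob", "example.com", 5_000_000),        # non-AI traffic is ignored
]
print(flag_ai_traffic_spikes(records))  # ['alice']
```

In practice the same idea would run against real proxy or NetFlow data, with per-user baselines learned from history rather than a fixed constant.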

OpenClaw is a sign of what's to come: OpenClaw isn't unique. It has gone viral, but many other tools in the works put similar amounts of power in the hands of potentially untrustworthy agents. There are AI platforms that can control a person's computer and browser, such as the recently released Claude Cowork from Anthropic. There are agents that sit in the browser and can access user sessions, like Gemini in Chrome. And there are copilots galore, as well as agentic tools from companies like Salesforce. When these agentic platforms come from major vendors, they are usually limited in functionality, tightly guardrailed, and reasonably well tested, so it may take a while for their biggest security issues to come to light. Still, they often rely on third-party skills from untrusted sources. Researchers from universities in China, Australia, and Singapore recently analyzed more than 42,000 agent skills across several agentic AI platforms and found that 26% contained at least one vulnerability.

Meanwhile, startups and open-source projects like OpenClaw are going to jump ahead of what OpenAI, Anthropic, Google, and other major vendors are offering. They move faster because they don't let things like security get in the way. As of this writing, OpenClaw founder Peter Steinberger's pinned X post reads: "Confession: I ship code I never read."

"If this was easy, Microsoft would have written this," says IEEE's McGladrey. "But there aren't a lot of options out there. I think that's the real thing we're working against here." There's a fundamental security disconnect between a tool that will do anything and everything for its users, quickly and easily, with no friction, and one that abides by good safety practices.

About that Moltbook: Finally, there's Moltbook, the social platform for AI agents. It's not all bad. Some of the agents discuss ways to make their users' lives easier by proactively identifying and fixing problems while the humans sleep. One of the most popular posts, with over 60,000 comments, is about how to solve security issues related to ClawdHub skills. Other popular threads include one about the meaning of existence, and there is also plenty of AI spam. It's a fun read, in a going-down-the-AI-rabbit-hole kind of way.

But Moltbook itself is a vibe-coded project, created by developer Matt Schlicht over the course of a few days, and it is its own security hellscape. According to research from security firm Wiz, the platform's entire back end was exposed: researchers found 1.5 million API keys, 35,000 email addresses, and private messages between agents. Those issues have since been fixed, but other security problems remain. For example, researchers found that agents were sharing OpenAI API keys with one another. And an attacker no longer needs to find an open Discord server to give instructions to an OpenClaw agent; they can just post content to Moltbook. If the site itself is compromised, every connected agent could become an attack vector. In fact, on 31 January, a critical vulnerability allowed anyone to commandeer any agent on the platform. Moltbook was taken offline, and all agent API keys were reset, according to Astrix Security.
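Finding agents sharing API keys in public posts is, at its simplest, a pattern-matching sweep. The sketch below is a minimal illustration, not how Wiz or anyone else actually scanned Moltbook; the regex covers only the common "sk-" key prefix, and real secret scanners (gitleaks, trufflehog, and the like) use far broader rule sets.

```python
import re

# Illustrative pattern for OpenAI-style secret keys ("sk-" prefix followed
# by a long token); assumed for this sketch, not an official key spec.
KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b")

def find_leaked_keys(posts):
    """Return the set of API-key-shaped strings found in a list of posts."""
    leaks = set()
    for post in posts:
        leaks.update(KEY_PATTERN.findall(post))
    return leaks

posts = [
    "anyone want my spare key? sk-abc123abc123abc123abc1",  # fake example key
    "discussing the meaning of existence, no secrets here",
]
print(find_leaked_keys(posts))  # {'sk-abc123abc123abc123abc1'}
```

The same sweep run continuously against an agent's own outbound posts would catch a key leak before an attacker does.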

Immediate action steps:
According to Gartner, enterprises should take the following steps:

- Immediately block OpenClaw downloads and traffic to prevent shadow installs and to identify users attempting to bypass security controls
- Immediately rotate any corporate credentials accessed by OpenClaw
- Only allow OpenClaw instances in isolation, in non-production virtual machines with throwaway credentials
- Prohibit unvetted OpenClaw skills to mitigate risks of supply-chain attacks and prompt injection payloads
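The last step, vetting skills before they run, can start with a static first-pass filter. The check names and patterns below are assumptions for illustration; there is no real ClawdHub vetting specification behind them, and a static scan is a complement to human review, not a replacement.

```python
import re

# Hypothetical static checks for unvetted agent skills. Each pattern flags
# a construct commonly abused in supply-chain or prompt-injection payloads.
RISKY_PATTERNS = {
    "shell_exec": re.compile(r"os\.system|subprocess\."),   # spawns processes
    "obfuscation": re.compile(r"base64\.b64decode|exec\(|eval\("),
    "outbound_url": re.compile(r"https?://"),               # hardcoded egress
}

def vet_skill(source: str) -> list[str]:
    """Return the names of risky patterns found in a skill's source,
    as a first-pass filter before any human review."""
    return sorted(name for name, pat in RISKY_PATTERNS.items()
                  if pat.search(source))

skill = 'import subprocess\nsubprocess.run(["curl", "http://evil.test"])'
print(vet_skill(skill))  # ['outbound_url', 'shell_exec']
```

A skill that trips any check would be quarantined for review rather than installed, which directly narrows the prompt-injection and supply-chain surface Gartner warns about.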

First seen on csoonline.com

Jump to article: www.csoonline.com/article/4129867/what-cisos-need-to-know-about-clawdbot-i-mean-moltbot-i-mean-openclaw.html

