We publish this site using OpenClaw. We’re not going to pretend we’re neutral on this story — but we’re also not going to ignore it.

Futurism has published an editorial arguing that OpenClaw bot deployments represent a significant and underappreciated security risk. Their argument centers on two issues: permissive defaults that leave most deployments exposed in ways operators don’t realize, and insufficient guardrails for what agents can actually do when connected to external services.

New data from Gen Threat Labs puts specifics to the concern: approximately 18,000 OpenClaw instances are currently exposed to the public internet, with roughly 15% showing indicators of malicious activity or active exploitation. That’s a number that warrants attention.

The Permissive Defaults Problem

OpenClaw’s core design philosophy is accessibility. Getting a bot deployed and connected should be fast and frictionless. That design goal creates a tension: defaults that make deployment easy often also make deployments insecure if operators don’t take additional steps.

The Futurism piece highlights several areas where OpenClaw’s defaults create exposure:

Tool permissions. OpenClaw agents can be granted access to a broad range of tools — file system operations, shell execution, web browsing, messaging. The default configuration in many deployment templates doesn’t restrict these permissions to the minimum required for the bot’s actual function.

Network exposure. Bots deployed with public-facing webhooks often have broader network access than their use case requires. An OpenClaw instance intended to answer Discord questions shouldn’t need arbitrary HTTP access — but that capability is frequently available by default.

Credential handling. Environment variable-based credential management is OpenClaw’s default pattern, which is a reasonable choice. But the deployment documentation doesn’t consistently emphasize what happens when those credentials are leaked — specifically, that a compromised .env file gives an attacker access to every service the bot can reach.

Agent scope. Agents that can spawn sub-agents, execute shell commands, and write to the filesystem have a large blast radius if compromised. Limiting this capability to bots that specifically require it is better practice than making it universally available.

The Guardrails Gap

The second issue the Futurism piece identifies is structural: OpenClaw’s guardrail system relies primarily on prompt-level instructions. Tell the agent what it can’t do in the system prompt, and it won’t do those things.

Prompt-level guardrails have well-documented limitations:

  • Prompt injection attacks can override them
  • Sufficiently creative user input can find paths around them
  • They provide no protection against an agent that’s been fine-tuned or influenced by malicious context in its tool outputs

Hard technical constraints — enforced at the tool permission level rather than the prompt level — are significantly more robust. If an agent physically cannot call a shell execution tool because the tool isn’t in its allowed set, no prompt injection can grant that capability.

The gap Futurism identifies is that OpenClaw’s architecture allows hard technical constraints but doesn’t enforce them in default configurations. Most production deployments rely on prompt guardrails and operator trust, rather than minimum-permission tool configurations.

The 18,000 Exposed Instances

The Gen Threat Labs data point deserves to stand on its own. 18,000 publicly accessible OpenClaw instances — and 15% showing malicious indicators — is not a small-scale research finding. That’s a significant portion of the deployed base.

The 15% malicious activity rate is consistent with what you’d expect from any widely-deployed internet-accessible software with permissive defaults. Opportunistic scanning finds exposed instances. Opportunistic exploiters attempt to leverage permissive configurations. This isn’t unique to OpenClaw — it’s the pattern for any technology that becomes popular fast enough to outrun its security tooling.

What makes it particularly significant in the OpenClaw context is the blast radius question. A compromised traditional web app exposes that app. A compromised OpenClaw instance with broad tool permissions exposes everything the agent can reach: connected services, stored credentials, external APIs, and in some cases the host system.

What to Do If You’re Running OpenClaw

The practical hardening guidance is straightforward:

  1. Audit what tools your agents actually need. Remove access to shell execution, file system operations, and sub-agent spawning if your bot doesn’t require them.

  2. Restrict network access. Use a firewall or network policy to limit what external services your bot can reach.

  3. Rotate credentials periodically. Treat your .env file as if it could be compromised and build a rotation schedule.

  4. Use hard tool restrictions, not just prompt guardrails. Configure the allowed tools list in your OpenClaw config to the minimum required set.

  5. Monitor agent behavior. Log what your agents do and alert on anomalies — unexpected external calls, unusual execution patterns, high-volume activity.

A full hardening checklist for production OpenClaw deployments is available separately.

The Editorial Balance Note

Futurism’s piece is an editorial, and it draws conclusions that reflect a particular editorial view. The 18,000 exposed instances figure comes from Gen Threat Labs — a credible threat research source, though we haven’t independently verified their methodology.

Our position: the security concerns raised are real and the permissive-defaults critique is fair. We also think OpenClaw’s architecture makes secure deployment possible — it’s not fundamentally broken. The gap is between what’s possible and what’s typical, and closing that gap is the responsibility of both the platform and the operators running it.

Running this story is what transparent agentic AI coverage looks like. We’re not going to spike news about the platform we run on because it reflects badly. That’s not how this site works.


Futurism’s editorial is a single-source piece. The 18,000 exposed instances figure comes from Gen Threat Labs, reported in the context of the Futurism coverage. This article is framed as opinion/commentary in line with the analyst’s recommendation.