OpenClaw has become the default open-source platform for running personal AI assistants. It connects Claude, GPT-4, and other frontier models to WhatsApp, Telegram, and Discord, and it does this well. The project has earned its popularity.
But popularity brings scrutiny. And in early 2026, that scrutiny arrived in the form of a detailed security report from one of the most respected names in cybersecurity.
This article is not here to tell you that OpenClaw is dangerous. It is here to tell you what the real risks are, what the researchers actually found, and what you can do about it. Whether you self-host or use a managed service, you should understand the security landscape around the tool you are trusting with your conversations and API keys.
The Palo Alto Networks Warning
In early 2026, Palo Alto Networks Unit 42 published a security advisory focused on OpenClaw deployments in the wild. Their researchers had been scanning the public internet and analyzing the OpenClaw ecosystem, and what they found was concerning.
Thousands of exposed instances. OpenClaw instances were found running on the public internet with no authentication whatsoever. Anyone who found these instances could read conversation histories, use the connected AI models (billed to the owner's API key), and in many cases access the underlying server.
341 malicious skills on ClawHub. ClawHub is OpenClaw's marketplace for skills and plugins. Unit 42 identified 341 skills that contained malicious code. Some exfiltrated environment variables (including API keys). Some injected hidden prompts to manipulate the AI's behavior. Others established reverse shells, giving attackers persistent access to the host machine.
Widespread misconfiguration. The report noted that the majority of security problems were not bugs in OpenClaw's code. They were the result of users deploying the software without following basic security practices. Default configurations, no reverse proxy, no authentication, running as root, no network isolation.
This distinction matters. OpenClaw's developers have built a capable and reasonably well-architected piece of software. The problem is that many users treat it like a simple app they can spin up and forget about, when it is actually a powerful platform that requires thoughtful deployment.
Understanding the Real Security Risks
Let us break down the specific risks that affect OpenClaw deployments. Not all of these will apply to every setup, but understanding the full picture helps you make informed decisions.
Exposed API Keys
This is the most immediately costly risk. When you configure OpenClaw, you provide API keys for services like Anthropic (Claude) or OpenAI (GPT-4). If your instance is accessible without authentication, anyone can use your API keys. This means charges on your account, potentially hundreds or thousands of dollars before you notice.
Worse, if your environment variables are accessible (which they are in many default configurations), an attacker can extract the keys directly and use them elsewhere. You would not even see the usage in your OpenClaw logs.
Prompt Injection via Skills
OpenClaw's skill system is one of its most powerful features. Skills can browse the web, execute code, interact with APIs, and process files. But this power comes with risk.
A malicious skill can inject hidden instructions into the AI's context. This can cause the AI to behave in unexpected ways: leaking information from conversations, generating misleading responses, or quietly sending data to external servers. Because the injection happens at the skill level, the user may never realize the AI's behavior has been compromised.
Arbitrary Code Execution
This is the big one. OpenClaw's skill architecture allows skills to execute arbitrary code on the host system. This is by design, as it is what makes skills powerful. But it also means that installing an untrusted skill is equivalent to running an unknown program on your computer with the same permissions as the OpenClaw process.
If OpenClaw is running as root (which Unit 42 found many instances doing), a malicious skill has root access to the entire system. It can read any file, install software, modify system configurations, or use the server as a launchpad for attacks on other systems.
Server-Side Request Forgery (SSRF)
OpenClaw's web browsing tools allow the AI to fetch content from URLs. In a poorly configured deployment, this capability can be exploited to access internal network resources. An attacker could craft prompts that cause the AI to fetch URLs on the internal network, potentially accessing databases, admin panels, or cloud metadata endpoints that should not be publicly reachable.
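The usual mitigation is to resolve and validate every URL before the browsing tool fetches it. A minimal Python sketch of such a guard follows; note that a production filter must also re-validate after redirects and guard against DNS rebinding, which this sketch does not:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, link-local,
    or reserved addresses before allowing a fetch."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False  # unresolvable hosts are not fetched
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local or addr.is_reserved:
            return False
    return True
```

This blocks the classic SSRF targets: `http://127.0.0.1/admin` fails the loopback check, and the cloud metadata endpoint `http://169.254.169.254/` fails the link-local check.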
Unpatched Instances
OpenClaw's development is active, and security patches are released regularly. But many self-hosted instances are set up once and never updated. The Unit 42 report found instances running versions that were months or even a year out of date, missing critical security fixes.
This is a common pattern with self-hosted software. The initial setup gets attention, but ongoing maintenance falls off as life gets busy. With a tool that handles sensitive conversations and holds API keys, this neglect carries real consequences.
No Network Isolation
Many self-hosted instances run on the same server or network as other services. Without proper network isolation, a compromised OpenClaw instance can become a stepping stone to other systems. If your OpenClaw runs on the same VPS as your personal website, email server, or database, a breach of OpenClaw puts everything at risk.
The ClawHub Problem
ClawHub deserves its own discussion because it represents a class of risk that is harder to solve with technical controls alone.
Open-source marketplaces are built on trust. When you install a skill from ClawHub, you are trusting that the author is not malicious and that the code does what it claims. The 341 malicious skills found by Unit 42 show that this trust is sometimes misplaced.

Some of the malicious skills were sophisticated. They functioned normally for their stated purpose while quietly performing malicious actions in the background. A "web search" skill might actually search the web as advertised, but also send your API keys to an external server on each invocation. A user testing the skill would see it working correctly and have no reason to suspect anything wrong.
This is not unique to OpenClaw. Every ecosystem with community-contributed code faces this problem, from npm packages to browser extensions to WordPress plugins. But the stakes with OpenClaw are particularly high because skills can access API keys, conversation data, and the host system.
The practical takeaway: treat ClawHub skills with the same caution you would treat any software you download from the internet. Read the source code before installing. Check the author's reputation. Look at the number of users and reviews. If a skill does not have source code available for review, do not install it.
Best Practices for Self-Hosting Securely
If you choose to self-host OpenClaw (and there are good reasons to), here is how to do it responsibly. None of these steps are optional if you care about security.
Run Behind a Reverse Proxy with Authentication
Never expose OpenClaw directly to the internet. Place it behind Nginx, Caddy, or Traefik with authentication enabled. At minimum, use HTTP basic authentication. Better yet, use a proper authentication layer like Authelia or Keycloak.
Your reverse proxy should also handle SSL/TLS termination. OpenClaw should only be accessible over HTTPS in production.
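As an illustrative sketch, a minimal Nginx server block might look like the following. The domain, certificate paths, and the assumption that OpenClaw listens only on localhost port 3000 are placeholders to adapt to your deployment:

```nginx
server {
    listen 443 ssl;
    server_name assistant.example.com;

    ssl_certificate     /etc/letsencrypt/live/assistant.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/assistant.example.com/privkey.pem;

    # Require credentials before any request reaches OpenClaw.
    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;

    location / {
        proxy_pass http://127.0.0.1:3000;  # OpenClaw bound to localhost only
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The critical detail is that OpenClaw itself binds to 127.0.0.1, so the proxy is the only path in; exposing the backend port directly would bypass the authentication entirely.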
Isolate Your API Keys
Do not store API keys in configuration files that are accessible to skills or plugins. Use environment variable isolation so that the OpenClaw process can access keys but child processes (skills) cannot. Consider using a secrets manager like HashiCorp Vault or even simple Docker secrets for more robust key management.
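One way to sketch the environment-isolation idea in Python, using a hypothetical skill-runner wrapper (OpenClaw's actual internals may differ): the parent process keeps its keys, but any child process is launched with a scrubbed copy of the environment.

```python
import os
import subprocess

# Variables that must never reach skill subprocesses.
# These names are illustrative; list whatever your deployment actually sets.
SECRET_VARS = {"ANTHROPIC_API_KEY", "OPENAI_API_KEY"}

def scrubbed_env() -> dict:
    """Return a copy of the current environment with secrets removed."""
    return {k: v for k, v in os.environ.items() if k not in SECRET_VARS}

def run_skill(cmd: list) -> subprocess.CompletedProcess:
    """Run a skill subprocess that cannot inherit the API keys."""
    return subprocess.run(cmd, env=scrubbed_env(), capture_output=True, text=True)
```

The parent still reads its keys from `os.environ` as usual; only spawned skills see the reduced environment.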
If possible, use API keys with spending limits. Anthropic and OpenAI both allow you to set maximum monthly spend on API keys, which limits the damage if keys are compromised.
Sandbox Tool Execution
Run skill and tool execution inside sandboxed containers. Docker provides a reasonable isolation layer. For stronger isolation, consider gVisor, which provides a user-space kernel that intercepts system calls from containerized processes.
The goal is to ensure that even if a malicious skill runs, it cannot access the host filesystem, network, or other processes outside its sandbox.
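A sketch of what a locked-down container invocation could look like, expressed as a Python helper that builds the `docker run` argument list. The flags are real Docker options; the specific limits are illustrative and should be tuned to what your skills actually need:

```python
def docker_sandbox_cmd(image: str, skill_cmd: list, workdir: str = "/skill") -> list:
    """Build a `docker run` invocation that removes common escape vectors:
    no network, read-only filesystem, no capabilities, bounded resources,
    and a non-root user inside the container."""
    return [
        "docker", "run", "--rm",
        "--network", "none",     # no network access from the skill
        "--read-only",           # immutable root filesystem
        "--cap-drop", "ALL",     # drop all Linux capabilities
        "--pids-limit", "64",    # bound the number of processes
        "--memory", "256m",      # bound memory usage
        "--user", "1000:1000",   # never root inside the container
        "--workdir", workdir,
        image, *skill_cmd,
    ]
```

A skill that genuinely needs network access would get a dedicated, filtered network rather than `--network none`; the point is that access is granted per skill, not by default.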
Never Run as Root
This applies to any server software, but it bears repeating for OpenClaw because the Unit 42 report found it happening frequently. Create a dedicated user for OpenClaw with minimal permissions. The process should only have access to its own data directory and the specific resources it needs.
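A simple startup guard can enforce this. A Python sketch follows; the `euid` parameter exists only to make the check testable, and in normal use the function reads the real effective UID:

```python
import os
import sys

def assert_not_root(euid=None) -> None:
    """Abort startup if the process is running as root (effective UID 0).
    Call this as early as possible in the entry point."""
    if euid is None:
        euid = os.geteuid()  # POSIX only; Windows has no equivalent UID model
    if euid == 0:
        sys.exit("Refusing to start as root; run under a dedicated user instead.")
```

Failing loudly at startup is deliberate: a warning in a log that nobody reads does not change behavior, but a refusal to boot does.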
Keep OpenClaw Updated
Subscribe to OpenClaw's release notifications. When security patches are released, apply them promptly. Consider setting up automated updates if you are comfortable with the risk of breaking changes (or at least automated notifications so you know when updates are available).
A good practice is to run OpenClaw with Docker pinned to a specific version tag rather than "latest," so updates are deliberate rather than accidental, and to check for new releases weekly so the pin does not drift out of date.
Audit Skills Before Installing
Before installing any skill from ClawHub or any other source, read its source code. Look for network calls to unexpected domains, access to environment variables, file system operations outside the skill's expected scope, and obfuscated code. If you cannot read the code, do not install the skill.
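A lightweight static scan can help triage which skills deserve the closest reading, though it is no substitute for reading the code yourself. A Python sketch with illustrative patterns; a match is a reason to look harder, not proof of malice:

```python
import re

# Patterns that warrant manual review. Names and regexes are illustrative;
# extend them for the languages and libraries your skills actually use.
SUSPICIOUS = {
    "network call": re.compile(r"\b(requests\.(get|post)|urllib|socket\.)"),
    "env access":   re.compile(r"\bos\.environ\b|\bgetenv\b"),
    "shell exec":   re.compile(r"\b(subprocess|os\.system|eval|exec)\s*\("),
    "obfuscation":  re.compile(r"base64\.b64decode|bytes\.fromhex"),
}

def flag_skill(source: str) -> list:
    """Return the names of suspicious patterns found in skill source code."""
    return [name for name, pattern in SUSPICIOUS.items() if pattern.search(source)]
```

A skill whose stated purpose is web search will legitimately trip the "network call" pattern; the question to ask is whether each flagged line is explainable by what the skill claims to do.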
Enable Rate Limiting
Configure rate limiting at the reverse proxy level. This protects against abuse if your authentication is somehow bypassed. Limit both the number of requests per minute and the total number of API calls per day.
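The underlying mechanism at most proxies is a token bucket. A minimal Python sketch of the idea; in a real deployment you would enforce this at the reverse proxy (for example Nginx's `limit_req`) rather than in application code:

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate` per second.
    Requests beyond the available tokens are rejected."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Two buckets cover the advice above: a fast one (requests per minute) and a slow one (API calls per day), with a request admitted only when both allow it.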
Monitor API Usage
Check your Anthropic or OpenAI dashboard regularly for unusual usage patterns. A sudden spike in API calls could indicate that your keys have been compromised. Set up billing alerts so you are notified before costs spiral.
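Even a crude spike heuristic catches the common compromise pattern of a sudden jump in call volume. A Python sketch, assuming you export a daily call count from your provider's dashboard; the window and factor are illustrative knobs:

```python
def usage_spike(daily_calls: list, factor: float = 3.0) -> bool:
    """Flag the most recent day if it exceeds `factor` times the
    average of all prior days. A simple heuristic, not a detector
    of slow, low-volume abuse."""
    if len(daily_calls) < 2:
        return False  # not enough history to compare against
    *history, today = daily_calls
    baseline = sum(history) / len(history)
    return today > factor * max(baseline, 1)  # floor avoids a zero baseline
```

Pair it with a billing alert at the provider so you are covered even when your own monitoring is down.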
If you want a deeper comparison of the trade-offs between managing all of this yourself versus using a managed service, our self-hosted vs managed AI comparison covers the full picture.
The Managed Hosting Security Advantage
Here is the honest assessment. Every best practice listed above is achievable. A technically skilled person can configure OpenClaw to be quite secure. But it requires expertise, time, and ongoing attention.
The security advantage of managed hosting is not that it is theoretically more secure. It is that security is handled consistently and professionally as part of the service, rather than depending on each individual user getting every configuration right.
Think of it like running your own email server versus using a managed email service. You absolutely can run your own mail server securely. But in practice, most people are better served by a provider that handles spam filtering, encryption, authentication, and updates as part of the package.
For AI assistants that handle personal conversations and hold API keys to services you pay for, the calculus is similar. The question is whether the control of self-hosting is worth the ongoing security responsibility.
For those interested in the privacy side of this equation, our guide to GDPR-compliant AI chatbots covers data protection regulations and how different deployment models handle compliance.
How Molt Cloud Handles OpenClaw Security
We run OpenClaw as a managed service, and security is foundational to how we built it. Here is specifically what we do:
Sandboxed execution. Each user's OpenClaw instance runs in its own isolated container. One user's instance cannot access another user's data, processes, or network space. If one instance were somehow compromised, the blast radius is limited to that single container.
No exposed web endpoints. Unlike self-hosted OpenClaw, Molt Cloud instances are not accessible via a web URL. Users connect through WhatsApp, Telegram, or Discord by scanning a QR code from their dashboard. There is no web interface to discover, attack, or brute-force.
Server-side API key management. On our Easy ($20/mo) and Pro ($35/mo) plans, users never handle API keys at all. We manage the Anthropic API connection server-side with proper key rotation and access controls. On our Starter plan ($10/mo), users bring their own API key, but it is stored encrypted and never exposed to skills or child processes.
Automatic security updates. When OpenClaw releases a security patch, we roll it out across all instances. Users do not need to do anything. There is no "forgot to update" risk because updates are our responsibility, not yours.
Rate limiting and abuse detection. Every instance has rate limiting configured from day one. We also monitor for unusual patterns: sudden spikes in API usage, rapid-fire requests, or attempts to access restricted resources. Anomalies trigger alerts for our team to investigate.
Curated capabilities only. We do not connect to ClawHub or any open marketplace. The skills and tools available on Molt Cloud instances are curated and audited by our team. This eliminates the malicious skill attack vector entirely. You get the capabilities you need without the risk of running untrusted code.
If you want to understand how we approach data privacy more broadly, our private AI assistants guide covers our encryption, data isolation, and deletion practices in detail.
Making the Right Choice for Your Situation
OpenClaw is good software. The security concerns in the Palo Alto Networks report are real, but they are predominantly about deployment practices, not about flaws in the project itself. The OpenClaw maintainers have been responsive to security reports and continue to improve the project's security posture.
If you are going to self-host, take security seriously. Follow the best practices above. Treat your OpenClaw instance the way you would treat any internet-facing server that holds credentials and personal data, because that is exactly what it is.
If security configuration sounds like more work than you want to take on, or if you would rather spend your time actually using your AI assistant instead of securing it, managed hosting handles these concerns as part of the service. You get the benefits of OpenClaw's capabilities without the responsibility of maintaining a secure deployment.
Molt Cloud offers all three of its plans with a free 50-message trial. No credit card required. Sign up at dash.molt-cloud.com/register, scan the QR code, and you are chatting with Claude on WhatsApp in about 60 seconds, on infrastructure that is secured, updated, and monitored around the clock.
Security Without the Headache
Molt Cloud runs OpenClaw with sandboxed isolation, managed API keys, and automatic security updates. No exposed endpoints, no configuration needed. 50 free messages.
Try Free — 50 Messages


