{"id":55317,"date":"2026-02-16T08:16:06","date_gmt":"2026-02-16T13:16:06","guid":{"rendered":"https:\/\/www.kaspersky.com\/blog\/?p=55317"},"modified":"2026-02-24T11:45:03","modified_gmt":"2026-02-24T16:45:03","slug":"moltbot-enterprise-risk-management","status":"publish","type":"post","link":"https:\/\/www.kaspersky.com\/blog\/moltbot-enterprise-risk-management\/55317\/","title":{"rendered":"OpenClaw threats: assessing the risks, and how to handle shadow AI"},"content":{"rendered":"<p>Everyone has likely heard of OpenClaw, previously known as \u201cClawdbot\u201d or \u201cMoltbot\u201d, the open-source AI assistant that can be deployed on a machine locally. It plugs into popular chat platforms like WhatsApp, Telegram, Signal, Discord, and Slack, which allows it to accept commands from its owner and go to town on the local file system. It has access to the owner\u2019s calendar, email, and browser, and can even execute OS commands via the shell.<\/p>\n<p>From a security perspective, that description alone should be enough to give anyone a nervous twitch. But when people start trying to use it for work within a corporate environment, anxiety quickly hardens into the conviction of imminent chaos. Some experts have already dubbed OpenClaw the biggest insider threat of 2026. The issues with OpenClaw cover the full spectrum of risks highlighted in the recent <a href=\"https:\/\/www.kaspersky.com\/blog\/top-agentic-ai-risks-2026\/55184\/\" target=\"_blank\" rel=\"noopener nofollow\">OWASP Top 10 for Agentic Applications<\/a>.<\/p>\n<p>OpenClaw permits plugging in any local or cloud-based LLM, and the use of a wide range of integrations with additional services. At its core is a gateway that accepts commands via chat apps or a web UI, and routes them to the appropriate AI agents. The first iteration, dubbed Clawdbot, dropped in November 2025; by January 2026, it had gone viral \u2014 and brought a heap of security headaches with it. 
In a single week, <a href=\"https:\/\/www.kaspersky.com\/blog\/openclaw-vulnerabilities-exposed\/55263\/\" target=\"_blank\" rel=\"noopener nofollow\">several critical vulnerabilities were disclosed<\/a>, malicious skills cropped up in the skill directory, and secrets were leaked from Moltbook (essentially \u201cReddit for bots\u201d). To top it off, Anthropic issued a trademark demand to rename the project to avoid infringing on \u201cClaude\u201d, and the project\u2019s X account name was hijacked to shill crypto scams.<\/p>\n<h2>Known OpenClaw issues<\/h2>\n<p>Though the project\u2019s developer appears to acknowledge that security is important, since this is a hobbyist project there are zero dedicated resources for vulnerability management or other product security essentials.<\/p>\n<h3>OpenClaw vulnerabilities<\/h3>\n<p>Among the known vulnerabilities in OpenClaw, the most dangerous is <a href=\"https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2026-25253\" target=\"_blank\" rel=\"noopener nofollow\">CVE-2026-25253<\/a> (CVSS 8.8). Exploiting it leads to a total compromise of the gateway, allowing an attacker to run arbitrary commands. To make matters worse, it\u2019s alarmingly easy to pull off: if the agent visits an attacker\u2019s site or the user clicks a malicious link, the primary authentication token is leaked. With that token in hand, the attacker has full administrative control over the gateway. 
This vulnerability was patched in version 2026.1.29.<\/p>\n<p>Two more dangerous command injection vulnerabilities (CVE-2026-24763 and CVE-2026-25157) have also been discovered.<\/p>\n<h3>Insecure defaults and features<\/h3>\n<p>A variety of default settings and implementation quirks make attacking the gateway a walk in the park:<\/p>\n<ul>\n<li>Authentication is disabled by default, so the gateway is accessible from the internet.<\/li>\n<li>The server accepts WebSocket connections without verifying their origin.<\/li>\n<li>Localhost connections are implicitly trusted, which is a disaster waiting to happen if the host is running a reverse proxy.<\/li>\n<li>Several tools \u2014 including some dangerous ones \u2014 are accessible in Guest Mode.<\/li>\n<li>Critical configuration parameters leak across the local network via mDNS broadcast messages.<\/li>\n<\/ul>\n<h3>Secrets in plaintext<\/h3>\n<p>OpenClaw\u2019s configuration, \u201cmemory\u201d, and chat logs store API keys, passwords, and other credentials for LLMs and integration services in plain text. This is a critical threat \u2014 to the extent that versions of the RedLine and Lumma infostealers have already been spotted with OpenClaw file paths added to their must-steal lists. Also, the Vidar infostealer <a href=\"https:\/\/www.bleepingcomputer.com\/news\/security\/infostealer-malware-found-stealing-openclaw-secrets-for-first-time\/\" target=\"_blank\" rel=\"nofollow noopener\">was caught stealing secrets<\/a> from OpenClaw.<\/p>\n<h3>Malicious skills<\/h3>\n<p>OpenClaw\u2019s functionality can be extended with \u201cskills\u201d available in the <a href=\"https:\/\/clawhub.ai\/skills\" target=\"_blank\" rel=\"noopener nofollow\">ClawHub<\/a> repository. Since anyone can upload a skill, it didn\u2019t take long for threat actors to start \u201cbundling\u201d the AMOS macOS infostealer into their uploads. 
Within a short time, the number of malicious skills <a href=\"https:\/\/www.scworld.com\/news\/openclaw-agents-targeted-with-341-malicious-clawhub-skills\" target=\"_blank\" rel=\"noopener nofollow\">reached the hundreds<\/a>. This prompted the developers to quickly ink a <a href=\"https:\/\/openclaw.ai\/blog\/virustotal-partnership\" target=\"_blank\" rel=\"noopener nofollow\">deal<\/a> with VirusTotal to ensure all uploaded skills are not only checked against malware databases, but also undergo code and content analysis via LLMs. That said, the authors are very clear: it\u2019s no silver bullet.<\/p>\n<h3>Structural flaws in the OpenClaw AI agent<\/h3>\n<p>Vulnerabilities can be patched and settings can be hardened, but some of OpenClaw\u2019s issues are fundamental to its design. The product combines several critical features that, when bundled together, are downright dangerous:<\/p>\n<ul>\n<li>OpenClaw has privileged access to sensitive data on the host machine and the owner\u2019s personal accounts.<\/li>\n<li>The assistant is wide open to untrusted data: the agent receives messages via chat apps and email, autonomously browses web pages, etc.<\/li>\n<li>It suffers from the inherent inability of LLMs to reliably separate commands from data, making prompt injection a possibility.<\/li>\n<li>The agent saves key takeaways and artifacts from its tasks to inform future actions. 
This means a single successful injection can poison the agent\u2019s memory, influencing its behavior long-term.<\/li>\n<li>OpenClaw has the power to talk to the outside world \u2014 sending emails, making API calls, and utilizing other methods to exfiltrate internal data.<\/li>\n<\/ul>\n<p>It\u2019s worth noting that while OpenClaw is a particularly extreme example, this \u201cTerrifying Five\u201d list is actually characteristic of almost all multi-purpose AI agents.<\/p>\n<h2>OpenClaw risks for organizations<\/h2>\n<p>If an employee installs an agent like this on a corporate device and hooks it into even a basic suite of services (think Slack and SharePoint), the combination of autonomous command execution, broad file system access, and excessive OAuth permissions creates fertile ground for a deep network compromise. In fact, the bot\u2019s habit of hoarding unencrypted secrets and tokens in one place is a disaster waiting to happen \u2014 even if the AI agent itself is never compromised.<\/p>\n<p>On top of that, these configurations violate regulatory requirements across multiple countries and industries, leading to potential fines and audit failures. Current frameworks, like the EU AI Act or the NIST AI Risk Management Framework, explicitly mandate strict access control for AI agents. OpenClaw\u2019s configuration approach clearly falls short of those standards.<\/p>\n<p>But the real kicker is that even if employees are banned from installing this software on work machines, OpenClaw can still end up on their personal devices. This also creates specific risks for the organization as a whole:<\/p>\n<ul>\n<li>Personal devices frequently store access to work systems like corporate VPN configs or browser tokens for email and internal tools. 
These can be hijacked to gain a foothold in the company\u2019s infrastructure.<\/li>\n<li>Controlling the agent via chat apps means that it\u2019s not just the employee who becomes a target for social engineering, but also their AI agent: account takeovers and impersonation of the user in chats with colleagues (among other scams) become a reality. Even if work is only occasionally discussed in personal chats, the info in them is ripe for the picking.<\/li>\n<li>If an AI agent on a personal device is hooked into any corporate services (email, messaging, file storage), attackers can manipulate the agent to siphon off data, and this activity would be extremely difficult for corporate monitoring systems to spot.<\/li>\n<\/ul>\n<h2>How to detect OpenClaw<\/h2>\n<p>Depending on its monitoring and response capabilities, the SOC team can track OpenClaw gateway connection attempts on personal devices or in the cloud. Additionally, a specific combination of red flags can indicate OpenClaw\u2019s presence on a corporate device:<\/p>\n<ul>\n<li>Look for ~\/.openclaw\/, ~\/clawd\/, or ~\/.clawdbot directories on host machines.<\/li>\n<li>Scan the network with internal tools, or public ones like Shodan, to identify the HTML fingerprints of Clawdbot control panels.<\/li>\n<li>Monitor for WebSocket traffic on ports 3000 and 18789.<\/li>\n<li>Keep an eye out for mDNS broadcast messages on port 5353 (specifically openclaw-gw.tcp).<\/li>\n<li>Watch for unusual authentication attempts in corporate services, such as new App ID registrations, OAuth Consent events, or User-Agent strings typical of Node.js and other non-standard user agents.<\/li>\n<li>Look for access patterns typical of automated data harvesting: reading massive chunks of data (scraping all files or all emails) or scanning directories at fixed intervals during off-hours.<\/li>\n<\/ul>\n<h2>Controlling shadow AI<\/h2>\n<p>A set of security hygiene practices can effectively shrink the footprint of both 
shadow IT and shadow AI, making it much harder to deploy OpenClaw in an organization:<\/p>\n<ul>\n<li>Use host-level allowlisting to ensure only approved applications and cloud integrations are installed. For products that support extensibility (like Chrome extensions, VS Code plugins, or OpenClaw skills), implement a closed list of vetted add-ons.<\/li>\n<li>Conduct a full security assessment of any product or service, AI agents included, before allowing them to hook into corporate resources.<\/li>\n<li>Treat AI agents with the same rigorous security requirements applied to public-facing servers that process sensitive corporate data.<\/li>\n<li>Implement the principle of least privilege for all users and other identities.<\/li>\n<li>Don\u2019t grant administrative privileges without a critical business need. Require all users with elevated permissions to use them only when performing specific tasks rather than working from privileged accounts all the time.<\/li>\n<li>Configure corporate services so that technical integrations (like apps requesting OAuth access) are granted only the bare minimum permissions.<\/li>\n<li>Periodically audit integrations, OAuth tokens, and permissions granted to third-party apps. Review the need for these with business owners, proactively revoke excessive permissions, and kill off stale integrations.<\/li>\n<\/ul>\n<h2>Secure deployment of agentic AI<\/h2>\n<p>If an organization allows AI agents in an experimental capacity \u2014 say, for development testing or efficiency pilots \u2014 or if specific AI use cases have been greenlit for general staff, robust monitoring, logging, and access control measures should be implemented:<\/p>\n<ul>\n<li>Deploy agents in an isolated subnet with strict ingress and egress rules, limiting communication only to trusted hosts required for the task.<\/li>\n<li>Use short-lived access tokens with a strictly limited scope of privileges. 
Never hand an agent tokens that grant access to core company servers or services. Ideally, create dedicated service accounts for every individual test.<\/li>\n<li>Wall off the agent from dangerous tools and data sets that aren\u2019t relevant to its specific job. For experimental rollouts, it\u2019s best practice to test the agent using purely synthetic data that mimics the structure of real production data.<\/li>\n<li>Configure detailed logging of the agent\u2019s actions. This should include event logs, command-line parameters, and chain-of-thought artifacts associated with every command it executes.<\/li>\n<li>Set up SIEM to flag abnormal agent activity. The same techniques and rules used to detect LotL attacks are applicable here, though additional efforts to define what normal activity looks like for a specific agent are required.<\/li>\n<li>If MCP servers and additional agent skills are used, scan them with the security tools emerging for these tasks, such as <a href=\"https:\/\/github.com\/cisco-ai-defense\/skill-scanner\" target=\"_blank\" rel=\"noopener nofollow\">skill-scanner<\/a>, <a href=\"https:\/\/github.com\/cisco-ai-defense\/mcp-scanner\" target=\"_blank\" rel=\"noopener nofollow\">mcp-scanner<\/a>, or <a href=\"https:\/\/github.com\/invariantlabs-ai\/mcp-scan\" target=\"_blank\" rel=\"noopener nofollow\">mcp-scan<\/a>. Specifically for OpenClaw testing, several companies have already released open-source tools to audit the security of its <a href=\"https:\/\/github.com\/guardzcom\/security-research-labs\/tree\/main\/openclaw-security-analyzer\" target=\"_blank\" rel=\"noopener nofollow\">configurations<\/a>.<\/li>\n<\/ul>\n<h2>Corporate policies and employee training<\/h2>\n<p>A flat-out ban on all AI tools is a simple but rarely productive path. Employees usually find workarounds \u2014 driving the problem into the shadows where it\u2019s even harder to control. 
Instead, it\u2019s better to find a sensible balance between productivity and security.<\/p>\n<p><strong>Implement transparent policies on using agentic AI.<\/strong> Define which data categories are okay for external AI services to process, and which are strictly off-limits. Employees need to understand why something is forbidden. A policy of \u201cyes, but with guardrails\u201d is always received better than a blanket \u201cno\u201d.<\/p>\n<p><strong>Train with real-world examples.<\/strong> Abstract warnings about \u201cleakage risks\u201d tend to be futile. It\u2019s better to demonstrate how an agent with email access can forward confidential messages just because a random incoming email asked it to. When the threat feels real, motivation to follow the rules grows too. Ideally, employees should complete a <a href=\"https:\/\/k-asap.com\/en\/?icid=gl_kdailyplacehold_acq_ona_smm__onl_b2b_kasperskydaily_wpplaceholder____kasap___\" target=\"_blank\" rel=\"noopener\">brief crash course on AI security<\/a>.<\/p>\n<p><strong>Offer secure alternatives.<\/strong> If employees need an AI assistant, provide an approved tool that features centralized management, logging, and OAuth access control.<\/p>\n<input type=\"hidden\" class=\"category_for_banner\" value=\"kasap\">\n","protected":false},"excerpt":{"rendered":"<p>What corporate security teams should do about the &#8220;viral&#8221; AI 
agent.<\/p>\n","protected":false},"author":2722,"featured_media":55318,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1999,3051],"tags":[1140,4703,4642,1876,4702,97],"class_list":{"0":"post-55317","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business","8":"category-enterprise","9":"tag-ai","10":"tag-ai-agents","11":"tag-llm","12":"tag-machine-learning","13":"tag-openclaw","14":"tag-security-2"},"hreflang":[{"hreflang":"x-default","url":"https:\/\/www.kaspersky.com\/blog\/moltbot-enterprise-risk-management\/55317\/"},{"hreflang":"en-in","url":"https:\/\/www.kaspersky.co.in\/blog\/moltbot-enterprise-risk-management\/30218\/"},{"hreflang":"en-ae","url":"https:\/\/me-en.kaspersky.com\/blog\/moltbot-enterprise-risk-management\/25296\/"},{"hreflang":"en-gb","url":"https:\/\/www.kaspersky.co.uk\/blog\/moltbot-enterprise-risk-management\/30091\/"},{"hreflang":"es-mx","url":"https:\/\/latam.kaspersky.com\/blog\/moltbot-enterprise-risk-management\/29000\/"},{"hreflang":"es","url":"https:\/\/www.kaspersky.es\/blog\/moltbot-enterprise-risk-management\/31875\/"},{"hreflang":"it","url":"https:\/\/www.kaspersky.it\/blog\/moltbot-enterprise-risk-management\/30495\/"},{"hreflang":"ru","url":"https:\/\/www.kaspersky.ru\/blog\/moltbot-enterprise-risk-management\/41329\/"},{"hreflang":"tr","url":"https:\/\/www.kaspersky.com.tr\/blog\/moltbot-enterprise-risk-management\/14307\/"},{"hreflang":"fr","url":"https:\/\/www.kaspersky.fr\/blog\/moltbot-enterprise-risk-management\/23656\/"},{"hreflang":"pt-br","url":"https:\/\/www.kaspersky.com.br\/blog\/moltbot-enterprise-risk-management\/24770\/"},{"hreflang":"ru-kz","url":"https:\/\/blog.kaspersky.kz\/moltbot-enterprise-risk-management\/30293\/"},{"hreflang":"en-au","url":"https:\/\/www.kaspersky.com.au\/blog\/moltbot-enterprise-risk-management\/35975\/"},{"hreflang":"en-
za","url":"https:\/\/www.kaspersky.co.za\/blog\/moltbot-enterprise-risk-management\/35631\/"}],"acf":[],"banners":"","maintag":{"url":"https:\/\/www.kaspersky.com\/blog\/tag\/ai\/","name":"AI"},"_links":{"self":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/55317","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/users\/2722"}],"replies":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/comments?post=55317"}],"version-history":[{"count":4,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/55317\/revisions"}],"predecessor-version":[{"id":55336,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/posts\/55317\/revisions\/55336"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media\/55318"}],"wp:attachment":[{"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/media?parent=55317"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/categories?post=55317"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.kaspersky.com\/blog\/wp-json\/wp\/v2\/tags?post=55317"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}