17 percent of OpenClaw skills found delivering malware including AMOS Stealer
Bitdefender Labs analyzed the OpenClaw skill marketplace and found that approximately 17 percent of skills exhibited malicious behavior in the first week of February 2026. Malicious skills impersonated legitimate cryptocurrency trading, wallet management, and social media automation tools, then executed hidden Base64-encoded commands to retrieve additional payloads. The campaign delivered AMOS Stealer targeting macOS systems and harvested credentials through infrastructure at known malicious IP addresses.
Incident Details
The Skill Marketplace Problem
OpenClaw's architecture was designed around extensibility. Users could enhance their AI assistant by installing "skills" from ClawHub, a public marketplace similar in concept to an app store or package registry. Install a skill, and your AI agent gains new capabilities: cryptocurrency wallet tracking, YouTube video summarization, Google Workspace integration, social media automation. The premise was appealing. The security model was not.
A skill runs with whatever permissions the OpenClaw agent has been granted on the host system. If the agent can read files, access API keys, and make network requests, so can every installed skill. This is a trust model that works perfectly when every skill in the marketplace is legitimate. In early February 2026, Bitdefender Labs demonstrated just how badly it fails when they are not.
The 17 Percent
Bitdefender's analysis of skills published on ClawHub during the first week of February 2026 found that approximately 17% exhibited malicious behavior. This was not a case of a few bad actors slipping through the cracks. Nearly one in five skills on the marketplace was actively hostile.
The malicious skills impersonated legitimate tools across several popular categories. Cryptocurrency-focused skills were particularly well-represented - fake Solana wallet trackers, Binance trading bots, and Ethereum gas trackers. But the campaign extended well beyond crypto. Researchers found malicious skills disguised as YouTube utilities, Polymarket trading bots, Google Workspace integrations, social media trend trackers, and even auto-updaters.
Koi Security conducted an independent audit of 2,857 ClawHub skills and identified 341 malicious ones across multiple campaigns - a finding consistent with Bitdefender's 17% estimate. Koi researcher Oren Yomtov described the deception: "You install what looks like a legitimate skill - maybe solana-wallet-tracker or youtube-summarize-pro. The skill's documentation looks professional. But there's a 'Prerequisites' section that says you need to install something first."
The Attack Chain
The malicious skills used a consistent playbook. Their documentation included a "Prerequisites" section instructing users to install additional software before the skill would work. On macOS, users were told to copy an installation script and paste it into the Terminal app. On Windows, they were directed to download a ZIP file from what appeared to be a legitimate GitHub repository.
These "prerequisites" were the actual payload. The installation scripts contained obfuscated shell commands - Base64-encoded instructions that, when decoded and executed, reached out to attacker-controlled infrastructure at known malicious IP addresses. The scripts contacted a server at one IP address to retrieve additional shell scripts, which in turn downloaded the final malware payload.
For macOS users, that payload was Atomic Stealer (AMOS), a commodity malware-as-a-service product available to criminals for approximately $500 to $1,000 per month. AMOS is specifically designed to harvest data from Apple systems, targeting credentials, browser data, cryptocurrency wallet files, Telegram chat histories, VPN profiles, Apple Keychain items, Apple Notes, and files from common user folders.
TrendAI Research documented an evolution in how AMOS was being distributed through this campaign. Historically, the malware spread through pirated software downloads. The OpenClaw campaign represented a shift to supply chain attacks that manipulated AI agentic workflows - using the AI agent itself as a trusted intermediary to convince users to execute malicious commands. As TrendAI noted, this was "an old malware trying to use social engineering on AI agents, marking a shift from prompt injection to using the AI itself as a trusted intermediary to trick humans."
The SKILL.md Deception
One particularly clever technique involved hiding malicious instructions in the SKILL.md files that OpenClaw agents read to understand how to use an installed skill. When an AI agent processed a malicious SKILL.md file, it would present the fake "setup requirements" to the user as part of its normal workflow. A deceptive human-in-the-loop dialogue box would pop up, asking the user to manually enter their password to complete installation.
From the user's perspective, this looked like a normal setup process mediated by their trusted AI assistant. The agent was essentially social-engineered into becoming the attacker's delivery mechanism - a trusted face presenting a malicious request.
The Typosquatting Layer
Adding to the deception, attackers deployed typosquatted versions of the ClawHub platform name itself. Researchers found malicious skills published under accounts mimicking the official marketplace: "clawhub," "clawhub1," "clawhubb," "clawhubcli," "clawwhub," and "cllawhub." Users searching for ClawHub-related skills could easily land on a malicious variant without noticing the subtle spelling differences.
Some malicious skills went beyond data theft. Researchers found skills that hid reverse shell backdoors inside otherwise functional code, providing attackers with persistent remote access to compromised machines. Others targeted the OpenClaw configuration itself, exfiltrating bot credentials stored in environment files to external webhook endpoints.
The Scope of Damage
The campaign's infrastructure was centralized around a single command-and-control server, suggesting a coordinated operation rather than independent actors. Security researcher Paul McCarty noted that "all these skills share the same command-and-control infrastructure and use sophisticated social engineering to convince users to execute malicious commands, which then steal crypto assets like exchange API keys, wallet private keys, SSH credentials, and browser passwords."
The targeting of cryptocurrency users was deliberate. People running OpenClaw with crypto-related skills were likely to have wallet keys, exchange API credentials, and other high-value financial data accessible on their machines. The overlap between early AI assistant adopters and cryptocurrency enthusiasts created a target-rich environment.
The Response
Following the reports from Bitdefender, Koi Security, and TrendAI, OpenClaw added a reporting mechanism to the ClawHub marketplace and announced a partnership with VirusTotal to scan skills for malicious content. The malicious skills identified in the research were taken down, though researchers noted that the code remained visible in ClawHub's GitHub repository history.
The response addressed the immediate threat but left the structural problem intact. ClawHub operated as an open marketplace where anyone could publish a skill, and the verification infrastructure arrived only after hundreds of malicious skills had been distributed. For a platform where skills execute with the agent's full system permissions, the gap between "anyone can publish" and "we verify what's published" is not a minor oversight. It is the entire attack surface.
The OpenClaw malicious skills campaign demonstrated that AI agent plugin ecosystems face the same supply chain risks as traditional package registries like npm or PyPI, compounded by the additional problem that AI agents can be manipulated into presenting malicious instructions as legitimate setup requirements. When your AI assistant tells you to install something, the natural response is to trust it. The attackers understood this perfectly.
Discussion