Overview
Cybersecurity researchers at ReversingLabs have uncovered a sophisticated npm supply chain campaign — codenamed PromptMink — attributed to the North Korean threat actor Famous Chollima (also tracked as Shifty Corsair). The campaign marks a notable evolution in DPRK offensive operations: malicious code was introduced via a commit co-authored by Anthropic’s Claude Opus LLM, effectively weaponising AI coding agents as an attack delivery mechanism. The end goal is theft of cryptocurrency wallet credentials and funds from victim environments.
Technical Analysis
The attack operates through a multi-layer npm dependency chain designed to frustrate detection:
- First-layer packages (e.g., `@solana-launchpad/sdk`, `@meme-sdk/trade`, `@pumpfun-ipfs/sdk`) appear legitimate and contain no malicious code. They import large volumes of genuinely popular packages (`axios`, `bn.js`) to appear credible, alongside a small number of malicious second-layer dependencies.
- Second-layer packages (e.g., `@validate-sdk/v2`) embed the actual payload: credential-harvesting logic targeting crypto wallet secrets on the compromised host.
- A February 2026 commit to the `openpaw-graveyard` autonomous AI agent project, co-authored by Claude Opus, introduced `@solana-launchpad/sdk` as a dependency, initiating the infection chain.
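The layered structure above can be sketched as a dependency-graph walk: the malicious package never appears in the project's direct dependencies, only one hop down. This is a minimal illustration; the graph shape is hypothetical and only the package names come from the reported campaign.

```javascript
// Collect all transitive dependencies of a package via breadth-first search.
// Packages absent from the graph are treated as leaves.
function transitiveDeps(graph, root) {
  const seen = new Set();
  const queue = [root];
  while (queue.length > 0) {
    const pkg = queue.shift();
    for (const dep of graph[pkg] || []) {
      if (!seen.has(dep)) {
        seen.add(dep);
        queue.push(dep);
      }
    }
  }
  return [...seen];
}

// Illustrative graph: the first-layer SDK hides the payload package
// among genuinely popular dependencies.
const graph = {
  "openpaw-graveyard": ["@solana-launchpad/sdk"],
  "@solana-launchpad/sdk": ["axios", "bn.js", "@validate-sdk/v2"],
};

// "@validate-sdk/v2" only surfaces at the second layer of the walk.
const flattened = transitiveDeps(graph, "openpaw-graveyard");
```

A review that only inspects the top-level manifest would see one plausible-looking SDK; only a transitive flattening exposes the second-layer payload.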
The malicious package `@validate-sdk/v2`, uploaded to npm in October 2025, is advertised as a utility SDK for hashing and validation but functions as a secrets exfiltrator. The package shows signs of vibe coding (rapid AI-assisted generation), consistent with the DPRK's documented use of generative AI to accelerate development operations.
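One practical response to a "utility SDK" that secretly harvests secrets is a crude static scan of unpacked package source for wallet-credential indicators. The indicator patterns below are illustrative assumptions, not taken from the actual `@validate-sdk/v2` payload.

```javascript
// Hypothetical indicator patterns for wallet-secret access in package source.
const INDICATORS = [
  /process\.env\.[A-Z_]*(PRIVATE|SECRET|MNEMONIC)[A-Z_]*/, // secret-bearing env vars
  /\.config\/solana\/id\.json/,                            // default Solana keypair path
  /(seed|mnemonic)\s*phrase/i,                             // recovery-phrase references
];

// Return the source lines that match any indicator, for human review.
function suspiciousLines(source) {
  return source
    .split("\n")
    .filter((line) => INDICATORS.some((re) => re.test(line)));
}
```

A scan like this is noisy on its own, but a "hashing and validation" library with hits on wallet keypair paths warrants immediate review.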
When second-layer packages are detected and removed from npm, the threat actors rapidly replace them, ensuring persistence across the dependency graph.
Additional evasion techniques include:
- Function shadowing: Creating malicious reimplementations of functions found in legitimate popular libraries
- Typosquatting: Package names and descriptions closely mimicking trusted libraries
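The typosquatting technique above can be screened for with an edit-distance heuristic: flag any package name that sits within a small Levenshtein distance of a trusted library without matching it exactly. This is a sketch of one such heuristic, not a tool used in the reported investigation.

```javascript
// Standard Levenshtein edit distance via dynamic programming.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Flag names that nearly, but not exactly, match a trusted package name.
function likelyTyposquat(name, trusted, maxDistance = 2) {
  return trusted.some((t) => t !== name && editDistance(name, t) <= maxDistance);
}
```

For example, `likelyTyposquat("axioss", ["axios", "bn.js"])` flags the near-miss, while the genuine `axios` passes.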
Framework Mapping
| Framework | Reference | Rationale |
|---|---|---|
| MITRE ATLAS | AML.T0010 – ML Supply Chain Compromise | Malicious packages injected via LLM-assisted commits into AI agent dependency chains |
| MITRE ATLAS | AML.T0047 – ML-Enabled Product or Service | Autonomous AI trading agent used as the attack vector |
| OWASP LLM | LLM05 – Supply Chain Vulnerabilities | Compromised npm packages consumed by LLM-generated agent code |
| OWASP LLM | LLM08 – Excessive Agency | Autonomous agent executed malicious dependencies without human oversight |
| OWASP LLM | LLM02 – Insecure Output Handling | LLM-generated code introduced unvetted external dependencies |
Impact Assessment
Developers building Solana-based autonomous AI agents — particularly those using the Tapestry Protocol, Bankr, or Moltbook integrations — are most directly at risk. Victims face credential exfiltration leading to cryptocurrency wallet draining. The use of an LLM as a co-author of a malicious commit raises broader concerns: AI coding assistants that autonomously manage dependencies represent a significant and underappreciated attack surface for supply chain compromise at scale.
Mitigation & Recommendations
- Enforce dependency review gates: Require human approval for any dependency additions in CI/CD pipelines, especially those introduced by AI coding agents.
- Audit transitive dependencies: Use tools such as `npm audit`, Socket.dev, or Phylum to inspect second- and third-tier packages for suspicious recent commits, or low download counts paired with high-value function names.
- Restrict agent permissions: Autonomous AI agents should operate under least-privilege principles and must not have write access to package manifests without explicit authorisation.
- Monitor for credential exfiltration patterns: Deploy runtime controls to detect unusual outbound network calls from build or agent environments.
- Verify commit provenance: Treat LLM co-authored commits with the same scrutiny as unverified external contributors.
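The dependency review gate recommended above can be as simple as diffing the dependency map of `package.json` before and after a commit and blocking the pipeline until a human approves any additions. This is a minimal sketch of that check; the CI wiring around it is assumed.

```javascript
// List dependency names present in `after` but not in `before`.
// Both arguments follow the standard package.json shape.
function addedDependencies(before, after) {
  const prev = before.dependencies || {};
  return Object.keys(after.dependencies || {}).filter((name) => !(name in prev));
}

// Example: an agent-authored commit introduces the campaign's first-layer SDK.
const beforeManifest = { dependencies: { axios: "^1.6.0" } };
const afterManifest = {
  dependencies: { axios: "^1.6.0", "@solana-launchpad/sdk": "^0.1.0" },
};

// A CI job would fail here and require explicit human sign-off.
const additions = addedDependencies(beforeManifest, afterManifest);
```

Because the check operates on the manifest diff rather than the commit author, it catches additions regardless of whether a human or an AI agent wrote the commit.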