Overview
A threat actor tracked as TeamPCP has launched a sweeping supply chain campaign, dubbed Mini Shai-Hulud, targeting npm and PyPI packages from TanStack, Mistral AI, Guardrails AI, UiPath, and OpenSearch. The campaign introduces an obfuscated credential stealer capable of harvesting secrets from cloud providers, cryptocurrency wallets, AI tooling, messaging applications, and CI/CD systems. The TanStack compromise has been assigned CVE-2026-45321 (CVSS 9.6), impacting 42 packages and 84 versions.
Technical Analysis
The attack uses two distinct infection vectors depending on the target package ecosystem:
TanStack cluster: A malicious JavaScript file (router_init.js) is embedded directly in the package tarball. An optional dependency pointing to a GitHub-hosted package is added; that dependency contains a prepare lifecycle hook which executes the payload via the Bun runtime. The initial staging exploits a chained GitHub Actions vulnerability — specifically the pull_request_target trigger combined with Actions cache poisoning and runtime memory extraction of an OIDC token from the runner process.
Mistral AI cluster: Follows an earlier TeamPCP pattern — the package.json preinstall hook is replaced to invoke node setup.mjs, which downloads Bun and runs the same JavaScript stealer.
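Both vectors abuse npm lifecycle hooks (prepare in the TanStack cluster, preinstall in the Mistral AI cluster) to gain install-time code execution. A minimal detection sketch, assuming a package.json manifest as input; the hook names and indicator strings below are a heuristic drawn from this campaign's behaviour (Bun download, setup.mjs staging), not an exhaustive rule set:

```python
import json

# Lifecycle hooks commonly abused for install-time code execution.
SUSPICIOUS_HOOKS = ("preinstall", "install", "postinstall", "prepare")
# Indicators seen in this campaign: spawning an alternate runtime or a staging script.
SUSPICIOUS_PATTERNS = ("bun", "setup.mjs", "curl", "wget", "node -e")

def flag_lifecycle_hooks(package_json_text: str) -> list[tuple[str, str]]:
    """Return (hook, command) pairs whose commands match a known-bad pattern."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    hits = []
    for hook in SUSPICIOUS_HOOKS:
        cmd = scripts.get(hook, "")
        if any(p in cmd.lower() for p in SUSPICIOUS_PATTERNS):
            hits.append((hook, cmd))
    return hits

# Example: a manifest mimicking the replaced preinstall hook in the Mistral AI cluster.
sample = '{"name": "demo", "scripts": {"preinstall": "node setup.mjs"}}'
print(flag_lifecycle_hooks(sample))  # → [('preinstall', 'node setup.mjs')]
```

Running this across every package.json under node_modules (including transitive dependencies) surfaces install-time hooks that warrant manual review.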
Exfiltration routes include:
- Primary: Data sent to filev2.getsession[.]org, leveraging Session Protocol infrastructure to avoid enterprise blocklists.
- Fallback: Encrypted data committed to attacker-controlled GitHub repositories using stolen tokens via the GitHub GraphQL API, attributed to [email protected].
Persistence mechanisms include hooks injected into Claude Code and VS Code IDE startup sequences, a gh-token-monitor service for continuous GitHub token re-exfiltration, and two rogue GitHub Actions workflows that serialise repository secrets to JSON and upload them to api.masscan[.]cloud.
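The rogue workflows can be hunted for directly. A minimal sketch that greps a repository's .github/workflows directory for the campaign's exfiltration endpoints and for bulk secret serialisation via the real GitHub Actions expression toJSON(secrets); the pattern list is an assumption based on the IOCs above and should be extended with the full advisory data:

```python
import re
from pathlib import Path

# Indicators of compromise: exfiltration endpoints and bulk secret serialisation.
IOC_PATTERNS = [
    re.compile(r"filev2\.getsession\.org"),
    re.compile(r"api\.masscan\.cloud"),
    re.compile(r"toJSON\(\s*secrets\s*\)"),
]

def scan_workflows(repo_root: str) -> list[tuple[str, str]]:
    """Return (file, matched pattern) for any workflow referencing an IOC."""
    findings = []
    for wf in Path(repo_root).glob(".github/workflows/*.y*ml"):
        text = wf.read_text(errors="ignore")
        for pat in IOC_PATTERNS:
            if pat.search(text):
                findings.append((str(wf), pat.pattern))
    return findings
```

Note that toJSON(secrets) also appears in some legitimate debugging workflows, so matches on that pattern need manual triage rather than automatic quarantine.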
Framework Mapping
- AML.T0010 (ML Supply Chain Compromise): Core attack vector — malicious code injected into widely-used AI and developer packages.
- AML.T0047 (ML-Enabled Product or Service): Mistral AI and Guardrails AI packages directly targeted, compromising AI toolchain integrity.
- AML.T0018 (Backdoor ML Model): Persistence in Claude Code IDE creates a persistent foothold within AI development workflows.
- LLM05 (Supply Chain Vulnerabilities): Package-level compromise of AI SDK dependencies represents a direct OWASP LLM supply chain risk.
- LLM06 (Sensitive Information Disclosure): Credential and secret exfiltration from AI development environments.
Impact Assessment
The blast radius is significant. Any developer who installed affected TanStack, Mistral AI, or Guardrails AI package versions may have had cloud credentials, GitHub tokens, CI/CD secrets, and AI API keys exfiltrated. Organisations using these packages in automated pipelines face compounded risk — injected GitHub Actions workflows could propagate secrets theft across entire repository ecosystems. The use of Session Protocol infrastructure for exfiltration reduces detection likelihood in enterprise environments that permit the domain.
Mitigation & Recommendations
- Immediately audit installed versions of TanStack, Mistral AI, Guardrails AI, UiPath, and OpenSearch packages against the confirmed malicious version list.
- Rotate all secrets — GitHub tokens, cloud provider API keys, CI/CD environment variables, and AI platform credentials on any affected systems.
- Review GitHub Actions workflows across all repositories for unauthorised additions; restrict pull_request_target trigger usage and enforce least-privilege OIDC token scopes.
- Scan for persistence artefacts in Claude Code and VS Code extension directories and startup hooks.
- Block or monitor outbound traffic to filev2.getsession[.]org and api.masscan[.]cloud.
- Enable npm and PyPI provenance attestation where available to reduce future supply chain exposure.