
TanStack Supply Chain Attack Compromises OpenAI Developer Devices and Signing Certificates

TL;DR HIGH
  • What happened: TanStack supply chain attack hit two OpenAI employee devices, exposing code-signing certificates and internal repositories.
  • Who's at risk: macOS users of OpenAI apps and developers relying on TanStack or shared open-source CI/CD tooling are most directly exposed.
  • Act now: Update ChatGPT Desktop, Codex App, Codex CLI, and Atlas on macOS before June 12, 2026 · Audit all dependencies and CI/CD pipelines for TanStack or other TeamPCP-targeted packages · Rotate credentials and code-signing certificates for any repositories exposed to compromised developer environments

Overview

OpenAI has confirmed that two employee devices within its corporate environment were compromised as part of the broader Mini Shai-Hulud supply chain attack targeting TanStack, a widely used open-source library ecosystem. The incident, attributed to threat actor TeamPCP, resulted in unauthorized access to a limited subset of internal source code repositories and the exfiltration of credential material — including code-signing certificates used for OpenAI’s macOS, iOS, and Windows applications.

Although OpenAI states no user data, production systems, or intellectual property were modified or stolen at scale, the exposure of signing certificates represents a meaningful risk vector: a malicious actor in possession of valid certificates could potentially distribute trojanized versions of OpenAI apps that bypass OS-level trust checks.

Technical Analysis

The Mini Shai-Hulud malware, deployed via compromised TanStack packages, exhibited credential-focused exfiltration behaviour after gaining initial access through the developer supply chain. Once installed on the two employee machines, the malware accessed internal source code repositories and extracted limited credential material — consistent with known TeamPCP tactics of harvesting secrets from CI/CD-connected developer environments.
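TeamPCP's harvesting tactic has a straightforward defensive inverse: scan developer machines for token-like strings before an implant can find them. A minimal sketch, assuming a small set of illustrative credential patterns (these regexes are generic examples, not confirmed TeamPCP indicators):

```python
import re

# Illustrative credential patterns (assumptions, not confirmed TeamPCP targets):
# GitHub personal access tokens, PEM private key headers, generic key/secret assignments.
CREDENTIAL_PATTERNS = {
    "github_pat": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_secret": re.compile(
        r"(?i)(?:api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of credential patterns found in the given text."""
    return [name for name, pattern in CREDENTIAL_PATTERNS.items() if pattern.search(text)]
```

Running such a scan over dotfiles and CI configuration on developer machines gives a rough inventory of what a supply chain implant in the same environment could exfiltrate.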

The most operationally significant exposure was the presence of code-signing certificates for OpenAI’s macOS apps (ChatGPT Desktop, Codex App, Codex CLI, Atlas) within the affected repositories. While OpenAI assesses certificate misuse as unlikely, the company proactively revoked the compromised certificates and issued new ones. Existing macOS app versions signed with the revoked certificates will be blocked by Gatekeeper after June 12, 2026.
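The enforcement timeline described above can be modelled as a simple trust decision: apps signed with a revoked certificate keep running until the cutoff date, after which trust evaluation fails. A sketch of that logic (the cutoff date comes from the advisory; the certificate serial and the function itself are illustrative, not Gatekeeper's actual implementation):

```python
from datetime import date

# Cutoff after which apps signed with the revoked certificates are blocked.
REVOCATION_CUTOFF = date(2026, 6, 12)

# Hypothetical serial for a compromised signing certificate (illustrative only).
REVOKED_CERT_SERIALS = {"OLD-MACOS-CERT-2025"}

def is_app_trusted(cert_serial: str, today: date) -> bool:
    """Model of the trust decision: revoked certs are honoured until the cutoff."""
    if cert_serial not in REVOKED_CERT_SERIALS:
        return True
    return today < REVOCATION_CUTOFF
```

The grace window is the reason users are urged to update early: an app that is trusted today becomes untrusted overnight once the cutoff passes.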

This is notably OpenAI’s second certificate rotation in approximately one month. In mid-April 2026, a separate incident involving a compromised Axios library — introduced via a malicious GitHub Actions workflow and linked to North Korean threat group UNC1069 — forced an earlier rotation cycle.

TeamPCP’s campaign has now been confirmed to have impacted packages associated with TanStack, UiPath, Mistral AI, OpenSearch, and Guardrails AI, indicating a broad and sustained offensive against AI-adjacent open-source tooling.

Framework Mapping

  • AML.T0010 – ML Supply Chain Compromise: The attack directly exploited upstream open-source dependencies (TanStack) to reach downstream AI developer environments, a textbook ML supply chain compromise.
  • AML.T0012 – Valid Accounts: Credential material exfiltrated from repositories could enable subsequent access using legitimate identities.
  • AML.T0047 – ML-Enabled Product or Service: End-user AI products (ChatGPT Desktop, Codex) were indirectly affected via the certificate exposure, requiring mandatory updates.
  • LLM05 – Supply Chain Vulnerabilities: The attack propagated through shared open-source libraries and CI/CD infrastructure, directly matching this OWASP category.
  • LLM06 – Sensitive Information Disclosure: Credential and certificate material was exfiltrated from internal repositories.

Impact Assessment

The immediate operational impact is limited but non-trivial. macOS end users of four OpenAI applications must update before June 12, 2026, or Gatekeeper will block the apps. The credential exposure forced a full credential rotation across affected repositories and a temporary suspension of code-deployment workflows, disrupting engineering operations. The broader signal is more concerning: two separate supply chain incidents targeting the same organisation within a single month suggest persistent adversarial focus on AI developer toolchains.

Mitigation & Recommendations

  • macOS users: Update ChatGPT Desktop, Codex App, Codex CLI, and Atlas immediately — do not wait until the June 12 deadline.
  • Developers: Audit all open-source dependencies, particularly TanStack, Axios, and any other packages flagged in TeamPCP advisories, using tools such as Socket.dev or Deps.dev.
  • Security teams: Implement lockfile integrity checks, dependency pinning, and provenance verification (SLSA framework) for CI/CD pipelines.
  • Credential hygiene: Treat any developer machine with broad repository access as a high-value target; enforce short-lived tokens and just-in-time access for signing infrastructure.
  • Detection: Monitor for anomalous outbound connections from CI/CD runners and unexpected credential usage patterns in source code management systems.
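The dependency-audit and pinning steps above can be sketched as a lockfile scan against a denylist of flagged packages. A minimal example for npm v2/v3 `package-lock.json` files (the flagged package names are illustrative placeholders, not a real TeamPCP indicator list; production audits should consume advisory feeds from tools such as Socket.dev):

```python
import json

# Illustrative denylist; real advisories would supply names and affected versions.
FLAGGED_PACKAGES = {"@tanstack/example-pkg", "node-ipc"}

def find_flagged(lockfile_text: str) -> set[str]:
    """Return flagged package names present in an npm v2/v3 package-lock.json."""
    lock = json.loads(lockfile_text)
    found = set()
    for path in lock.get("packages", {}):
        # Keys look like "node_modules/<name>" (possibly nested); "" is the root package.
        name = path.rpartition("node_modules/")[2]
        if name in FLAGGED_PACKAGES:
            found.add(name)
    return found
```

Wiring a check like this into CI, alongside lockfile integrity verification and SLSA provenance checks, turns the manual audit recommended above into a repeatable gate.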
