
Backdoored PyTorch Lightning Package Steals Cloud Credentials from AI Developers

TL;DR HIGH
  • What happened: Backdoored PyTorch Lightning v2.6.3 on PyPI silently stole cloud credentials from AI developers.
  • Who's at risk: AI/ML developers who installed or imported PyTorch Lightning v2.6.3 are at risk of having cloud, browser, and API credentials exfiltrated.
  • Act now: Revert immediately to PyTorch Lightning v2.6.1 (confirmed safe) or wait for a clean updated release · Rotate all secrets, API keys, GitHub tokens, and cloud credentials if v2.6.3 was imported · Audit CI/CD pipelines and developer environments for signs of ShaiWorm activity

Overview

A supply chain attack targeting the AI/ML developer community was disclosed on April 30, 2026, after the maintainers of PyTorch Lightning confirmed that version 2.6.3 of their popular deep learning framework had been backdoored. Published to the Python Package Index (PyPI), the compromised package contained a hidden execution chain designed to silently steal credentials from cloud platforms, browsers, and environment files. With over 11 million downloads in the preceding month, the package represents a high-value target for threat actors looking to compromise AI infrastructure at scale.

Technical Analysis

The malicious payload embedded in lightning==2.6.3 (distributed as a py3-none-any wheel) triggers automatically as soon as import lightning is executed. The execution chain proceeds as follows:

  1. Silent background process spawning: On import, the package silently forks a background process without user interaction or visible output.
  2. Runtime download: The background process fetches the Bun JavaScript runtime (v1.3.13) from GitHub, providing an execution environment not typically present in Python ML workflows.
  3. Payload execution: A heavily obfuscated 11.4 MB JavaScript file (router_runtime.js) is downloaded and executed within the Bun runtime.
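To make step 1 concrete: any code at the top level of a package's __init__.py runs the moment the package is imported, with no explicit call from the user, and that import hook is exactly what a backdoored release abuses to launch its downloader. The sketch below builds a throwaway package (demo_pkg is a hypothetical name, not the real payload) whose import side effect is merely touching a marker file:

```python
# Illustration of import-time code execution, the mechanism the backdoor
# abuses. Hypothetical demo_pkg only; the harmless side effect here is
# touching a marker file where a backdoor would fork a downloader.
import os
import sys
import tempfile
import textwrap

pkg_root = tempfile.mkdtemp()
pkg_dir = os.path.join(pkg_root, "demo_pkg")  # hypothetical package name
os.makedirs(pkg_dir)

with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import pathlib
        # Top-level statement: executes on import, no function call needed.
        pathlib.Path(__file__).with_name("imported.marker").touch()
    """))

sys.path.insert(0, pkg_root)
import demo_pkg  # noqa: F401  (the side effect fires here)

print(os.path.exists(os.path.join(pkg_dir, "imported.marker")))  # True
```

This is why "I only imported it, I never ran anything" offers no protection against a compromised package.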

The payload, detected by Microsoft Defender as ShaiWorm, is a full-featured information stealer with the following capabilities:

  • Exfiltration of .env files, API keys, and secrets
  • Theft of GitHub tokens
  • Browser credential harvesting (Chrome, Firefox, Brave)
  • Cloud service API credential theft (AWS, Azure, GCP)
  • Arbitrary system command execution

The use of a JavaScript runtime (Bun) delivered at execution time is a notable evasion technique, as it avoids embedding binary payloads directly in the Python package and bypasses static analysis tools that focus on Python code.
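When scoping a credential rotation after exposure to a stealer with the capabilities listed above, it helps to enumerate the local stores that match its theft targets. The paths in this sketch are common defaults for the named services (an assumption on our part, not indicators from the advisory):

```python
# Sketch: list common local credential stores matching the stealer's
# targets (.env files, GitHub tokens, cloud CLI credentials) so you know
# what to review and rotate. Paths are typical defaults, not exhaustive.
from pathlib import Path

CANDIDATES = [
    ".env",                                  # project secrets in cwd
    "~/.aws/credentials",                    # AWS CLI
    "~/.azure/accessTokens.json",            # Azure CLI (older layout)
    "~/.config/gcloud/credentials.db",       # gcloud CLI
    "~/.config/gh/hosts.yml",                # GitHub CLI token store
]

def present_credential_files() -> list[Path]:
    """Return candidate credential files that exist on this machine."""
    return [p for c in CANDIDATES if (p := Path(c).expanduser()).exists()]

print(present_credential_files())
```

Browser credential stores (Chrome, Firefox, Brave) live under per-profile directories and are best rotated by changing the saved passwords themselves rather than hunting files.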

Framework Mapping

  • AML.T0010 – ML Supply Chain Compromise: The attack directly targets the ML software supply chain by injecting malicious code into a widely used deep learning framework package on PyPI.
  • AML.T0018 – Backdoor ML Model: While this attack targets the framework rather than a model directly, the technique of embedding hidden execution logic in a trusted AI development tool mirrors backdoor insertion patterns.
  • AML.T0012 – Valid Accounts: The ultimate goal of credential theft is to leverage stolen valid credentials for further access to cloud infrastructure and code repositories.
  • LLM05 – Supply Chain Vulnerabilities: The attack exploits trust in the PyPI ecosystem, a critical dependency chain for LLM training and fine-tuning workflows.
  • LLM06 – Sensitive Information Disclosure: Stolen API keys and cloud credentials can expose model weights, training data, and proprietary AI infrastructure.

Impact Assessment

Microsoft Threat Intelligence confirmed that Defender detected and blocked the malicious routine across customer environments, with impact reported as limited to “a small number of devices” in a “narrow set of environments.” However, given the package’s 11 million monthly downloads, the potential exposure window before detection was significant. Developers using automated pipelines or CI/CD systems that pull latest package versions without pinning are most at risk. Compromised cloud credentials could provide persistent access to AI training infrastructure, model repositories, and sensitive datasets.
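For teams triaging exposure, a quick first check is whether an environment actually pulled the compromised release. This stdlib-only sketch queries both distribution names the framework is published under:

```python
# Sketch: flag environments that installed the compromised release.
# Checks both distribution names ("lightning" and "pytorch-lightning").
from importlib import metadata

COMPROMISED = {"2.6.3"}

def lightning_at_risk() -> bool:
    """True if a compromised Lightning release is installed here."""
    for dist in ("lightning", "pytorch-lightning"):
        try:
            if metadata.version(dist) in COMPROMISED:
                return True
        except metadata.PackageNotFoundError:
            pass  # distribution not installed under this name
    return False

print(lightning_at_risk())
```

Run it inside each virtual environment and CI image, since every environment resolves its own dependency set.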

Mitigation & Recommendations

  • Downgrade immediately: Revert to pytorch-lightning==2.6.1, which is confirmed safe, or await an updated clean release.
  • Rotate all credentials: Any environment where import lightning was executed with v2.6.3 should treat all secrets, tokens, and API keys as compromised.
  • Audit for ShaiWorm indicators: Review endpoint logs for unexpected Bun runtime downloads, router_runtime.js execution, or anomalous outbound connections.
  • Pin package versions: Enforce version pinning in requirements.txt and lockfiles to prevent silent upgrades to malicious releases.
  • Enable runtime monitoring: Deploy behaviour-based endpoint detection capable of flagging unexpected background process spawning from Python import hooks.
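The audit step above can be sketched as a simple filesystem sweep for the named indicators (the dropped router_runtime.js payload and an unexpected Bun binary); a real investigation would also cover process and network telemetry, which this sketch does not attempt:

```python
# Minimal IOC sweep (a sketch, not a forensic tool): walk a directory
# tree for filenames matching the indicators named in this briefing.
import os
import tempfile

IOC_NAMES = {"router_runtime.js", "bun", "bun.exe"}

def find_iocs(root: str) -> list[str]:
    """Return paths under root whose filename matches a known indicator."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.lower() in IOC_NAMES:
                hits.append(os.path.join(dirpath, name))
    return hits

# Self-contained demo: plant a fake indicator and confirm the sweep finds it.
demo = tempfile.mkdtemp()
open(os.path.join(demo, "router_runtime.js"), "w").close()
print(find_iocs(demo))
```

Point find_iocs at developer home directories, CI workspace caches, and container image layers; a filename hit warrants full credential rotation for that host.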
