Overview
A supply chain attack targeting the AI/ML developer community was disclosed on April 30, 2026, after the maintainers of PyTorch Lightning confirmed that version 2.6.3 of their popular deep learning framework had been backdoored. Published to the Python Package Index (PyPI), the compromised package contained a hidden execution chain designed to silently steal credentials from cloud platforms, browsers, and environment files. With over 11 million downloads in the preceding month, the package gave threat actors a high-value distribution channel for compromising AI infrastructure at scale.
Technical Analysis
The malicious payload embedded in lightning==2.6.3 (distributed as a py3-none-any wheel) triggers automatically upon execution of import lightning. The execution chain proceeds as follows:
- Silent background process spawning: On import, the package silently forks a background process without user interaction or visible output.
- Runtime download: The background process fetches the Bun JavaScript runtime (v1.3.13) from GitHub, providing an execution environment not typically present in Python ML workflows.
- Payload execution: A heavily obfuscated 11.4 MB JavaScript file (router_runtime.js) is downloaded and executed within the Bun runtime.
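Because the chain fires as a side effect of a bare import, one practical triage step is to inspect what a package's modules do at import time without ever importing them. The sketch below is illustrative only: the module list is a heuristic assumption, not an indicator from the advisory, and it simply flags top-level imports of process- and network-capable modules in an unpacked wheel.

```python
import ast
from pathlib import Path

# Module roots whose presence at import time warrants a closer look.
# Illustrative, not exhaustive; chosen to match the spawn-and-download
# behaviour described above.
SUSPECT_ROOTS = {"subprocess", "multiprocessing", "urllib", "socket", "ctypes"}

def import_time_red_flags(source: str) -> list[str]:
    """Return suspect modules imported at a module's top level,
    i.e. code paths that run as a side effect of a bare import."""
    flags = []
    for node in ast.parse(source).body:  # top-level statements only
        if isinstance(node, ast.Import):
            flags += [a.name for a in node.names
                      if a.name.split(".")[0] in SUSPECT_ROOTS]
        elif isinstance(node, ast.ImportFrom) and node.module:
            if node.module.split(".")[0] in SUSPECT_ROOTS:
                flags.append(node.module)
    return flags

def scan_wheel_dir(pkg_dir: str) -> dict[str, list[str]]:
    """Scan every .py file under an unpacked wheel for import-time red flags."""
    return {
        str(path): flags
        for path in Path(pkg_dir).rglob("*.py")
        if (flags := import_time_red_flags(path.read_text(errors="ignore")))
    }
```

A hit is only a prompt for manual review: plenty of legitimate packages import subprocess at the top level, but an ML framework fetching binaries on import should stand out.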
The payload, detected by Microsoft Defender as ShaiWorm, is a full-featured information stealer with the following capabilities:
- Exfiltration of .env files, API keys, and secrets
- Theft of GitHub tokens
- Browser credential harvesting (Chrome, Firefox, Brave)
- Cloud service API credential theft (AWS, Azure, GCP)
- Arbitrary system command execution
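Given those capabilities, responders need a fast inventory of what may have been exposed before rotating secrets. The sketch below enumerates common credential-store locations matching the stealer's reported targets; the exact paths are typical defaults and are assumptions that vary by OS and tool version.

```python
from pathlib import Path

# Default locations for the credential stores the stealer reportedly
# targets. These paths are assumptions (typical defaults); adjust for
# your OS and tool versions.
CANDIDATE_PATHS = [
    "~/.aws/credentials",               # AWS shared credentials file
    "~/.config/gcloud/credentials.db",  # gcloud credential store
    "~/.azure/msal_token_cache.json",   # Azure CLI token cache
    "~/.netrc",                         # often holds GitHub tokens
    "~/.git-credentials",               # plaintext Git credentials
]

def at_risk_files(project_dirs=()) -> list[Path]:
    """List credential files present on this machine, plus .env files
    under the given project directories, to scope a secret rotation."""
    hits = [p for raw in CANDIDATE_PATHS
            if (p := Path(raw).expanduser()).is_file()]
    for d in project_dirs:
        hits += sorted(Path(d).rglob(".env"))
    return hits
```

Anything this turns up on a machine that imported the compromised release belongs on the rotation list.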
The use of a JavaScript runtime (Bun) delivered at execution time is a notable evasion technique, as it avoids embedding binary payloads directly in the Python package and bypasses static analysis tools that focus on Python code.
Framework Mapping
- AML.T0010 – ML Supply Chain Compromise: The attack directly targets the ML software supply chain by injecting malicious code into a widely used deep learning framework package on PyPI.
- AML.T0018 – Backdoor ML Model: While this attack targets the framework rather than a model directly, the technique of embedding hidden execution logic in a trusted AI development tool mirrors backdoor insertion patterns.
- AML.T0012 – Valid Accounts: The ultimate goal of credential theft is to leverage stolen valid credentials for further access to cloud infrastructure and code repositories.
- LLM05 – Supply Chain Vulnerabilities: The attack exploits trust in the PyPI ecosystem, a critical dependency chain for LLM training and fine-tuning workflows.
- LLM06 – Sensitive Information Disclosure: Stolen API keys and cloud credentials can expose model weights, training data, and proprietary AI infrastructure.
Impact Assessment
Microsoft Threat Intelligence confirmed that Defender detected and blocked the malicious routine across customer environments, with impact reported as limited to “a small number of devices” in a “narrow set of environments.” However, given the package’s 11 million monthly downloads, the potential exposure window before detection was significant. Developers using automated pipelines or CI/CD systems that pull the latest package versions without pinning are most at risk. Compromised cloud credentials could provide persistent access to AI training infrastructure, model repositories, and sensitive datasets.
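Teams unsure of their exposure can first confirm whether a compromised release is actually present in an environment. A minimal sketch, assuming the backdoored release is lightning 2.6.3 as described above:

```python
from importlib import metadata

# Known-bad releases per this advisory; extend as new IOCs are published.
COMPROMISED = {"lightning": {"2.6.3"}}

def compromised_installs(known_bad=COMPROMISED) -> list[tuple[str, str]]:
    """Return (package, version) pairs for any compromised release
    installed in the current environment."""
    hits = []
    for pkg, bad in known_bad.items():
        try:
            version = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            continue  # package not installed here
        if version in bad:
            hits.append((pkg, version))
    return hits
```

Run as a CI gate, this fails fast without importing the package itself, which matters here because the payload triggers on import.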
Mitigation & Recommendations
- Downgrade immediately: Revert to pytorch-lightning==2.6.1, which is confirmed safe, or await an updated clean release.
- Rotate all credentials: Any environment where import lightning was executed with v2.6.3 should treat all secrets, tokens, and API keys as compromised.
- Audit for ShaiWorm indicators: Review endpoint logs for unexpected Bun runtime downloads, router_runtime.js execution, or anomalous outbound connections.
- Pin package versions: Enforce version pinning in requirements.txt and lockfiles to prevent silent upgrades to malicious releases.
- Enable runtime monitoring: Deploy behaviour-based endpoint detection capable of flagging unexpected background process spawning from Python import hooks.
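Part of the indicator audit can be automated with a simple filesystem sweep. In the sketch below, the search roots and the bun/bun.exe filenames are assumptions; a name match is a lead for endpoint-log review, not proof of compromise.

```python
from pathlib import Path

# Filenames tied to this campaign. Confirm hits via endpoint logs and
# file hashes before declaring compromise.
INDICATOR_NAMES = {"router_runtime.js", "bun", "bun.exe"}

def scan_for_indicators(roots) -> list[Path]:
    """Walk the given directories and collect files whose names match
    known ShaiWorm indicators."""
    return sorted(
        p
        for root in roots
        for p in Path(root).rglob("*")
        if p.is_file() and p.name in INDICATOR_NAMES
    )
```

Sensible roots to pass would be user home directories, temp directories, and CI workspace paths, since the runtime was fetched at execution time rather than shipped in the wheel.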