CRITICAL · Active exploitation · Immediate action required · Relevance 9.2 · Frameworks: MITRE ATLAS, OWASP

Pre-Auth SQLi Flaw in LiteLLM Gateway Actively Exploited to Steal AI Credentials

TL;DR CRITICAL
  • What happened: Unauthenticated SQL injection in LiteLLM proxy actively exploited to steal AI provider credentials.
  • Who's at risk: Developers and organisations running self-hosted LiteLLM instances prior to v1.83.7, particularly those storing live OpenAI, Anthropic, or AWS Bedrock API keys in the proxy database.
  • Act now: Upgrade LiteLLM to version 1.83.7 or later immediately · Rotate all API keys, virtual keys, and provider credentials stored in the LiteLLM database · Audit proxy access logs for crafted Authorization: Bearer headers targeting /chat/completions

Overview

A critical unauthenticated SQL injection vulnerability tracked as CVE-2026-42208 in the popular LiteLLM open-source LLM gateway is under active exploitation. Threat actors are leveraging the flaw to extract sensitive credentials — including API keys for OpenAI, Anthropic, and AWS Bedrock — directly from the proxy’s backend database. Exploitation was confirmed by Sysdig researchers approximately 36 hours after public disclosure on April 24, 2026, underscoring the speed at which AI infrastructure vulnerabilities are now weaponised.

LiteLLM is a widely adopted proxy and SDK layer that provides a unified API interface for calling multiple LLM providers. With 45,000 GitHub stars and 7,600 forks, its compromise represents a significant supply-chain risk across the LLM application ecosystem.

Technical Analysis

The vulnerability exists in LiteLLM’s proxy API key verification logic, where SQL queries were built via string concatenation rather than parameterised queries. An attacker can craft a malicious Authorization: Bearer header and send it to any LLM API route, such as /chat/completions, without any prior authentication.

POST /chat/completions HTTP/1.1
Host: <litellm-host>
Authorization: Bearer ' UNION SELECT api_key, key_alias, spend FROM litellm_verificationtoken--
Content-Type: application/json
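The injection works because the bearer token reaches the SQL layer as raw string input. A minimal, self-contained sketch of the flaw class follows; the table and column names here are illustrative assumptions modelled on the observed payloads, not LiteLLM’s actual code, and sqlite3 stands in for the real backend. It contrasts the vulnerable concatenation with the parameterised fix shipped in v1.83.7:

```python
import sqlite3

# Illustrative schema only -- column names are assumptions, not LiteLLM's real data model.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE litellm_verificationtoken (token TEXT, key_alias TEXT, api_key TEXT)")
conn.execute("INSERT INTO litellm_verificationtoken VALUES ('sk-valid', 'team-a', 'sk-prov-SECRET')")

def verify_token_vulnerable(bearer: str):
    # VULNERABLE: the bearer value is spliced into the SQL string, so a payload
    # like ' UNION SELECT ... -- rewrites the query before it is parsed.
    sql = f"SELECT key_alias FROM litellm_verificationtoken WHERE token = '{bearer}'"
    return conn.execute(sql).fetchall()

def verify_token_fixed(bearer: str):
    # FIXED: a bound parameter keeps the bearer value as data, never as SQL.
    sql = "SELECT key_alias FROM litellm_verificationtoken WHERE token = ?"
    return conn.execute(sql, (bearer,)).fetchall()

payload = "' UNION SELECT api_key FROM litellm_verificationtoken--"
leaked = verify_token_vulnerable(payload)   # leaks the stored provider key
blocked = verify_token_fixed(payload)       # returns no rows
```

With the bound parameter, the same payload is simply looked up as a (non-existent) token, which is why the upstream fix is a complete remediation for this injection path.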

Sysdig’s analysis of observed exploitation revealed a two-phase attack pattern:

  1. Reconnaissance phase: Attackers probed the database schema, querying specific tables storing API keys, provider credentials, environment variables, and configuration secrets. Crucially, no benign tables were queried — indicating prior knowledge of LiteLLM’s data model.
  2. Precision phase: Attackers rotated IP addresses for evasion, then reissued targeted queries against confirmed table names, reducing noise and improving extraction efficiency.

A fix was delivered in LiteLLM v1.83.7 by replacing string concatenation with parameterised queries throughout the affected verification flow.

Framework Mapping

  • AML.T0040 (ML Model Inference API Access): Stolen provider credentials grant direct, unauthenticated access to underlying LLM inference APIs.
  • AML.T0012 (Valid Accounts): Harvested API keys and master keys allow attackers to impersonate legitimate users against AI providers.
  • AML.T0057 (LLM Data Leakage): Sensitive configuration data and secrets are directly exfiltrated from the proxy database.
  • LLM06 (Sensitive Information Disclosure): The vulnerability directly exposes stored secrets, provider tokens, and environment configurations.
  • LLM05 (Supply Chain Vulnerabilities): LiteLLM’s position as a middleware layer means compromise cascades across all connected AI services.

Impact Assessment

Organisations running unpatched, self-hosted LiteLLM instances face immediate credential exposure. Stolen API keys can be used to:

  • Exhaust billing quotas on provider accounts (OpenAI, Anthropic, AWS Bedrock)
  • Access proprietary prompts and pipeline configurations
  • Pivot into connected infrastructure using environment secrets

The attack surface is broad: LiteLLM is embedded in numerous LLM application stacks, MLOps platforms, and enterprise AI gateways. The concurrent supply-chain attack via malicious PyPI packages compounds overall risk for the LiteLLM ecosystem.

Mitigation & Recommendations

  1. Upgrade immediately to LiteLLM v1.83.7 or later — this is the only complete remediation.
  2. Rotate all credentials stored in the LiteLLM database: API keys, virtual keys, master keys, and any provider tokens (OpenAI, Anthropic, Bedrock).
  3. Audit logs for POST requests to /chat/completions or other API routes with anomalous Authorization headers, particularly those containing SQL metacharacters.
  4. Restrict network exposure of LiteLLM proxy instances; place behind authenticated reverse proxies where possible.
  5. Review PyPI dependencies for LiteLLM-adjacent packages given the concurrent supply-chain campaign.
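Step 3 above can be partially automated. The sketch below flags log lines whose bearer value contains SQL metacharacters; the log format and regexes are assumptions for illustration, so adapt them to your reverse proxy’s actual access-log layout:

```python
import re

# SQL metacharacters/keywords that should never appear in a legitimate API key.
SQLI_MARKERS = re.compile(r"('|--|;|\bUNION\b|\bSELECT\b)", re.IGNORECASE)
# Assumed log shape: the Authorization header is logged verbatim on the request line.
BEARER = re.compile(r"Authorization:\s*Bearer\s+(.*)", re.IGNORECASE)

def flag_suspicious(log_lines):
    """Return lines whose Authorization bearer value looks like an injection attempt."""
    hits = []
    for line in log_lines:
        m = BEARER.search(line)
        if m and SQLI_MARKERS.search(m.group(1)):
            hits.append(line)
    return hits

sample = [
    "POST /chat/completions Authorization: Bearer sk-abc123",
    "POST /chat/completions Authorization: Bearer ' UNION SELECT api_key FROM litellm_verificationtoken--",
]
suspicious = flag_suspicious(sample)  # flags only the second line
```

Any hit warrants treating every credential in that LiteLLM database as compromised, per step 2.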
