- A critical pre-authentication SQL injection in LiteLLM lets attackers extract API keys and provider credentials without any login.
- Exploitation began 36 hours after the vulnerability was disclosed — and the attackers knew exactly which tables to query.
- LiteLLM’s 45,000 GitHub stars make it one of the most widely deployed AI infrastructure tools in production today.
Someone is systematically harvesting API keys from LiteLLM proxies, and they barely needed a running start. CVE-2026-42208, a critical pre-authentication SQL injection in the open-source LLM gateway, was published to the GitHub Advisory Database on April 24 at 16:17 UTC. By April 26, attackers had already exploited it in the wild.
LiteLLM is the connective tissue between thousands of enterprise applications and their AI model providers. It routes requests to OpenAI, Anthropic, Amazon Bedrock, and others through a single unified API. The project has 45,000 GitHub stars and 7,600 forks. It also stores every API key, virtual key, master key, and environment secret that flows through it — making its PostgreSQL database one of the highest-value targets in the AI supply chain.
The flaw lives in the proxy’s API key verification step. LiteLLM takes the Authorization: Bearer header value and concatenates it directly into a SQL query — the kind of mistake that was supposed to die in 2005. No parameterization. No sanitization. According to the security advisory, any remote attacker who can reach a LiteLLM proxy can issue arbitrary SELECT statements against its database without credentials. A fix shipped in version 1.83.7, which replaces string concatenation with parameterized queries.
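The difference between the broken and fixed patterns is a one-line change. The sketch below is illustrative only — the table, column, and function names are hypothetical, and it uses sqlite3 rather than LiteLLM's actual Prisma-over-PostgreSQL stack — but it shows how a crafted Bearer value becomes SQL under concatenation and stays inert under parameterization:

```python
import sqlite3

# Illustrative only — schema and names are hypothetical, not LiteLLM's.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE verification_tokens (token TEXT, key_name TEXT)")
conn.execute("INSERT INTO verification_tokens VALUES ('sk-good', 'prod-key')")

def verify_key_vulnerable(bearer: str):
    # Anti-pattern: the header value is spliced into the query string,
    # so SQL metacharacters in the header rewrite the query itself.
    query = f"SELECT key_name FROM verification_tokens WHERE token = '{bearer}'"
    return conn.execute(query).fetchall()

def verify_key_fixed(bearer: str):
    # Parameterized query: the driver treats the header value as data only.
    query = "SELECT key_name FROM verification_tokens WHERE token = ?"
    return conn.execute(query, (bearer,)).fetchall()

# A UNION payload in the Bearer value dumps rows through the vulnerable path...
payload = "' UNION SELECT token FROM verification_tokens --"
print(verify_key_vulnerable(payload))  # leaks stored tokens: [('sk-good',)]
# ...but matches nothing through the parameterized path.
print(verify_key_fixed(payload))       # []
```

The same distinction applies regardless of driver: every mainstream database library exposes placeholder binding, which is why the 1.83.7 fix is small.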
Why the LiteLLM Exploitation Was Surgical, Not Opportunistic
The Sysdig Threat Research Team observed the first exploitation attempt 36 hours and 7 minutes after the advisory hit the global database. That is not record speed — the recent Marimo RCE was weaponized within hours. But unlike the generic SQLmap sprays that make up most SQL injection exploitation, the LiteLLM attacks were targeted: the attacker sent crafted requests to /chat/completions with a malicious Authorization: Bearer header and queried specific tables containing API keys, provider credentials, environment data, and configuration secrets.
“The operator went straight to where the secrets live,” Sysdig reported, calling it a strong indicator that the attacker knew exactly what to target. In the second phase, the threat actor switched IP addresses — likely for evasion — and reran the same injection attempts with fewer, more precise payloads derived from the first round of enumeration. This was reconnaissance followed by exploitation, not a bot swinging blindly.
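Because the injection travels in the Authorization header, defenders can hunt for it in access logs. A minimal detection sketch, assuming you can extract Bearer values from your proxy or load-balancer logs (the field names and key format here are assumptions, not LiteLLM's documented log schema):

```python
import re

# Legitimate virtual keys are typically "sk-" plus URL-safe characters;
# quotes, comment markers, or SQL keywords in a Bearer value are a
# strong signal of an injection probe against the key-verification query.
SUSPICIOUS = re.compile(r"['\";]|--|\b(UNION|SELECT|FROM)\b", re.IGNORECASE)

def flag_bearer(bearer_value: str) -> bool:
    """Return True if a Bearer token value looks like an injection probe."""
    return bool(SUSPICIOUS.search(bearer_value))

# Hypothetical values as they might appear in access logs:
print(flag_bearer("sk-1234abcd"))                                 # False
print(flag_bearer("x' UNION SELECT token FROM verification --"))  # True
```

A pattern like this will not catch every obfuscated payload, but it is cheap to run retroactively over logs when deciding whether an instance was probed before patching.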
Anyone running an exposed LiteLLM instance on a version before 1.83.7 should treat it as potentially compromised. Every virtual API key, master key, and provider credential stored in internet-facing LiteLLM proxies should be rotated immediately. For those who cannot upgrade, the maintainers suggest setting disable_error_logs: true under general_settings to block the path through which malicious inputs reach the vulnerable query. It is a bandage, not a cure.
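The stopgap the maintainers describe is a one-line config change. Assuming the standard config.yaml layout for the proxy, it looks like this:

```yaml
# config.yaml — temporary mitigation for proxies that cannot yet
# upgrade past 1.83.7. Per the advisory, this blocks the path through
# which malicious inputs reach the vulnerable query; it does not fix
# the query itself. Upgrade and rotate credentials as soon as possible.
general_settings:
  disable_error_logs: true
```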
Two Security Incidents in Six Weeks for LiteLLM
This is the second time in six weeks that LiteLLM has been in the crosshairs. In March, a threat actor calling itself TeamPCP compromised the PyPI publishing credentials for LiteLLM and pushed malicious packages that deployed an infostealer, harvesting credentials, tokens, and secrets from infected systems. That incident triggered a full CI/CD pipeline overhaul and a security audit by Veria Labs.
The April audit uncovered additional vulnerabilities: CVE-2026-35030, a critical authentication bypass via OIDC cache collision affecting deployments with JWT auth enabled; CVE-2026-35029, a high-severity privilege escalation via /config/update that let any authenticated user modify the proxy’s runtime configuration; and a pass-the-hash login flaw where passwords were stored as unsalted SHA-256 hashes, with some endpoints returning the hash to any authenticated user. All were patched in version 1.83.0.
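The pass-the-hash finding is easy to see in miniature. A sketch of the difference (illustrative only, not LiteLLM's code): an unsalted SHA-256 digest is deterministic, so a leaked digest is itself a replayable credential and identical passwords collide across users, whereas a salted, memory-hard hash such as hashlib.scrypt ties each digest to a random per-user salt:

```python
import hashlib, hmac, os

password = b"hunter2"

# Unsalted SHA-256: same input, same digest, every time. Whoever obtains
# the stored digest can replay it or look it up in precomputed tables.
weak = hashlib.sha256(password).hexdigest()
assert weak == hashlib.sha256(password).hexdigest()

# Salted scrypt: a random salt makes each stored digest unique and
# defeats precomputation; scrypt is also deliberately slow to brute-force.
salt = os.urandom(16)
strong = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1)

def verify(candidate: bytes, salt: bytes, stored: bytes) -> bool:
    digest = hashlib.scrypt(candidate, salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, stored)  # constant-time comparison

print(verify(b"hunter2", salt, strong))  # True
print(verify(b"wrong", salt, strong))    # False
```

Returning any digest to authenticated users, as some of the audited endpoints did, converts the weak scheme's replayability directly into a login bypass.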
LiteLLM’s maintainers have now launched a bug bounty program, and Veria Labs is continuing its audit. But the pattern is clear: AI infrastructure is being targeted at every layer, and the gateway that consolidates your model credentials is the highest-value target in the stack. The Vercel breach earlier this month exposed environment variables through an AI tool. The Lazarus keychain attack targeted macOS credential storage. LiteLLM is the same playbook at a different layer — find the credential concentration point and drill into it.
CVE-2026-42208 has not yet been added to CISA’s Known Exploited Vulnerabilities catalog.

