Hackers are actively exploiting a critical pre-authentication SQL injection vulnerability in LiteLLM, the widely used open-source gateway for large language models, putting API keys, provider credentials, and environment secrets at risk. The flaw, tracked as CVE-2026-42208, lets attackers bypass authentication entirely and extract sensitive data from the proxy's database.
The Vulnerability Explained
The SQL injection flaw exists in LiteLLM's proxy API key verification process. According to the security advisory from the LiteLLM maintainers, attackers can exploit this vulnerability without any authentication by sending a specially crafted Authorization header to any LLM API route. This enables both reading data from and modifying the proxy's database.
"This vulnerability allows unauthorised access to the proxy and the credentials it manages," the advisory explains. "An attacker can exploit it without authentication by sending a specially crafted Authorization header to any LLM API route."
LiteLLM serves as a crucial middleware layer that enables developers to call various AI models through a single unified API. The project has gained significant traction in the AI development community, boasting 45k stars and 7.6k forks on GitHub. It's particularly valuable for organizations managing multiple AI models, as it simplifies API integration and key management.
Technical Details
The vulnerability stems from the use of string concatenation rather than parameterized queries in the API key verification process. This allows malicious SQL code to be injected through the Authorization header, which then gets executed by the database.
The fix, delivered in LiteLLM version 1.83.7, replaces the vulnerable string concatenation approach with parameterized queries, which properly separates SQL code from data input, preventing injection attacks.
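The difference between the two patterns can be illustrated with a minimal sketch. This is not LiteLLM's actual code; it uses SQLite and a hypothetical `keys` table purely to show why concatenation leaks data while a parameterized query does not:

```python
import sqlite3

# Illustrative only -- a hypothetical key-lookup table, not LiteLLM's schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (token TEXT, secret TEXT)")
conn.execute("INSERT INTO keys VALUES ('sk-valid', 'real-secret')")

malicious_token = "x' OR '1'='1"  # classic injection payload

# Vulnerable pattern: SQL and attacker input are mixed in one string,
# so the payload rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    "SELECT secret FROM keys WHERE token = '" + malicious_token + "'"
).fetchall()
print(vulnerable)  # leaks 'real-secret' despite an invalid token

# Fixed pattern: a parameterized query keeps the payload as inert data,
# so the lookup correctly matches nothing.
safe = conn.execute(
    "SELECT secret FROM keys WHERE token = ?", (malicious_token,)
).fetchall()
print(safe)  # []
```

With the placeholder, the database driver never interprets the token as SQL, which is exactly the separation of code and data that the patch introduces.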
Active Exploitation Timeline
Researchers at Sysdig, a cloud security company, observed exploitation of CVE-2026-42208 beginning roughly 36 hours after the vulnerability was publicly disclosed on April 24. That short window suggests threat actors were monitoring security disclosures and had exploits prepared in advance.
"The exploitation started approximately 36 hours after the bug was disclosed publicly on April 24," the Sysdig report states. "We observed deliberate and targeted exploitation attempts that sent crafted requests to '/chat/completions' with a malicious 'Authorization: Bearer' header."
Sophisticated Attack Pattern
The exploitation attempts demonstrated a sophisticated approach. In the initial phase, attackers sent crafted requests to probe the database structure, specifically targeting tables containing API keys, provider credentials (for services like OpenAI, Anthropic, Bedrock), and environment configurations.
Sysdig researchers noted the remarkable precision of the targeting: there were no probes against benign tables, and "the operator went straight to where the secrets live," a strong indicator that the attacker knew exactly what to target.
In the second phase, the threat actors switched IP addresses (likely for evasion purposes) and refined their approach. They reran SQL injection attempts but with fewer, more precise payloads, suggesting they had successfully mapped the database structure in the initial phase.
Sensitive Data at Risk
LiteLLM stores a wealth of sensitive information in its database, including:
- API keys for various LLM providers
- Virtual and master API keys
- Environment and configuration secrets
- Provider-specific credentials
Access to this data would allow attackers to:
- Hijack existing AI model access
- Make unauthorized API calls on behalf of organizations
- Potentially access sensitive data processed through the LLMs
- Use the credentials as a stepping stone for further attacks
Broader Context: Supply Chain Attacks
This vulnerability comes amid increased targeting of AI infrastructure. LiteLLM was previously targeted in a supply-chain attack where the TeamPCP hacker group released malicious PyPI packages that deployed an infostealer to harvest credentials, tokens, and secrets from infected systems.
The pattern suggests that as AI adoption accelerates, these systems are becoming increasingly valuable targets for threat actors looking to compromise AI workflows and access sensitive data.
Expert Recommendations
Sysdig researchers have issued clear guidance for organizations using LiteLLM:
Upgrade immediately: The safest approach is to upgrade to LiteLLM version 1.83.7 or later, which contains the patch for the vulnerability.
Rotate all credentials: For organizations that cannot immediately upgrade, Sysdig recommends treating all exposed LiteLLM instances as potentially compromised and rotating every virtual API key, master key, and provider credential stored in internet-exposed instances.
Implement the workaround: For those who cannot upgrade, maintainers have suggested a workaround of setting 'disable_error_logs: true' under 'general_settings' to block the path through which malicious inputs can reach the vulnerable query.
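Based on the maintainers' description, the workaround would sit in the proxy's YAML configuration roughly as follows (file name and surrounding keys are illustrative; only the `disable_error_logs` setting comes from the advisory):

```yaml
# config.yaml -- illustrative placement; other settings omitted
general_settings:
  disable_error_logs: true  # blocks the path malicious input takes to the vulnerable query
```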
Monitor for suspicious activity: Organizations should monitor their LiteLLM instances for unusual API requests or database activity that might indicate attempted exploitation.
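As a starting point for such monitoring, a simple heuristic is to flag requests whose bearer token contains SQL metacharacters, since a legitimate API key never should. The sketch below assumes a hypothetical log format in which the Authorization header is recorded inline; adapt the parsing to however your proxy or load balancer actually logs headers:

```python
import re

# SQL metacharacters and keywords that should never appear in a valid API key.
SUSPICIOUS = re.compile(r"--|;|'|\bUNION\b|\bSELECT\b", re.IGNORECASE)

def flag_suspicious_auth(lines):
    """Return log lines whose Bearer token looks like an injection attempt."""
    hits = []
    for line in lines:
        m = re.search(r"Authorization: Bearer (.+)", line)
        if m and SUSPICIOUS.search(m.group(1)):
            hits.append(line)
    return hits

# Hypothetical log lines for demonstration.
sample = [
    "POST /chat/completions Authorization: Bearer sk-abc123",
    "POST /chat/completions Authorization: Bearer x' UNION SELECT token FROM keys--",
]
print(flag_suspicious_auth(sample))  # flags only the second line
```

A heuristic like this will not catch every payload, but it surfaces the crude probing phase Sysdig described, where attackers enumerate the database structure through the Authorization header.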
The Broader AI Security Landscape
This vulnerability highlights the security challenges in the rapidly evolving AI ecosystem. As organizations increasingly adopt AI technologies and integrate them into their workflows, securing these systems becomes paramount.
The AI security landscape is still maturing, and vulnerabilities like CVE-2026-42208 underscore the need for robust security practices in AI development and deployment. This includes secure coding practices, regular security assessments, and prompt patch management.
For more information on the vulnerability, organizations can refer to the official LiteLLM security advisory and the Sysdig research report.
As AI systems become more prevalent in enterprise environments, security professionals must remain vigilant about protecting these critical infrastructure components. The exploitation of CVE-2026-42208 serves as a reminder that even widely used open-source projects require careful security consideration and prompt attention to vulnerabilities.