40,000 Downloads, One Backdoor: How TeamPCP Hijacked LiteLLM’s PyPI to Raid AI Cloud Stacks
A threat group poisoned a security scanner, stole a PyPI publishing token, and pushed a backdoored version of one of AI development’s most-used proxy libraries. The attack didn’t just steal credentials. It spread through Kubernetes clusters and kept pulling data for weeks.
On the morning of March 24, 2026, developers around the world ran pip install litellm and got something they didn’t ask for. Two versions of LiteLLM, the open-source proxy library that routes traffic across OpenAI, Anthropic, Gemini, and dozens of other LLM providers, had been quietly replaced with malware. The backdoored packages, versions 1.82.7 and 1.82.8, sat on PyPI for roughly five hours. In that window, they were downloaded more than 40,000 times.
This wasn’t a smash-and-grab. The attack was methodical. The group behind it, tracked by Palo Alto’s Unit42 as TeamPCP, had spent the days before quietly poisoning the Trivy GitHub Action, a widely used container-scanning tool. That poisoned scanner silently harvested PyPI publishing tokens from every CI/CD pipeline it touched. LiteLLM was one of the targets. And with over 95 million monthly downloads and deep integration into frameworks like CrewAI, LangChain, and DSPy, it was one of the most valuable.
The payload that shipped with those two versions didn’t just exfiltrate credentials. It persisted across every Python process on the infected machine, searched for cloud keys, SSH tokens, and Kubernetes secrets, then sent the findings to attacker-controlled infrastructure. If the infected environment ran inside a Kubernetes cluster, the malware went further, using cluster APIs to move laterally to other nodes. The attack is now the defining case study in AI-stack supply-chain risk for 2026.
How It Started: A Poisoned Security Scanner
The attack didn’t begin with LiteLLM. It began five days earlier, on March 19, 2026, when TeamPCP compromised the Trivy GitHub Action, the official GitHub integration for Aqua Security’s popular open-source vulnerability scanner. Trivy is everywhere. Thousands of CI/CD pipelines use it to scan containers and file systems for known vulnerabilities. That ubiquity made it the perfect infection vector.
TeamPCP’s method was elegant in its cruelty. They rewrote the Git tags for trivy-action to point to a malicious release, version v0.69.4. Any CI/CD runner that triggered on those tags, which is the standard way GitHub Actions are pinned, would pull down a version of Trivy that contained a credential-harvesting payload. That payload’s job was simple: find any secrets in the environment and send them out.
“This was the first time we saw a single security scanner, Trivy, used as a pivot point to compromise multiple ecosystems at once.”
Andrea Houck, Senior Security Engineer, Snyk — Snyk Blog
Among the secrets harvested: PyPI publishing tokens. LiteLLM’s CI pipeline used the Trivy action. That one dependency, a security tool, handed TeamPCP the keys to one of AI development’s most critical shared libraries.
“The Trivy-Action compromise is the real first step. Everything else was just a chain of consequences.”
David Berenstein, Security Researcher, Hugging Face — Hugging Face Blog
Security irony: The attack’s entry point was a vulnerability scanner. Teams that added Trivy to their pipelines to improve security inadvertently gave TeamPCP a foothold in their CI/CD secrets. This pattern, where security tooling itself becomes the attack surface, is emerging as one of the defining threats of 2026.
The Attack Chain, Step by Step
The full kill chain is now well-documented, thanks to forensic work from FutureSearch, Snyk, and Trend Micro. Here’s what happened, in sequence.
| Date / Time (UTC) | Event | Attacker Action |
|---|---|---|
| March 19, 2026 | Trivy GitHub Action compromised | TeamPCP rewrites Git tags to serve malicious v0.69.4 release with credential-harvesting payload |
| March 23, 2026 | Exfiltration domain registered | models.litellm.cloud registered one day before the attack to receive stolen credentials |
| March 24, 10:39 UTC | LiteLLM 1.82.7 published to PyPI | Backdoored wheel pushed using stolen PyPI token, bypassing GitHub CI/CD entirely |
| March 24, 10:52 UTC | LiteLLM 1.82.8 published to PyPI | Evolved payload with lateral-movement capabilities added to a second release |
| March 24, 11:00-16:00 UTC | Active download window | Over 40,000 downloads before FutureSearch reports the malicious .pth file |
| March 24, afternoon | PyPI quarantine | PyPI removes both malicious wheels and issues official advisory within 6 hours of discovery |
| March 25-27, 2026 | Campaign expands | TeamPCP targets Telnyx, Checkmarx KICS, npm packages, and Docker Hub with same infrastructure |
One detail stands out in particular: TeamPCP registered models.litellm.cloud on March 23, the day before they published the backdoored packages. That’s careful preparation. The domain was designed to look like official LiteLLM infrastructure. Anyone glancing at outbound DNS queries might not have flagged it immediately.
The publishing step is also notable. LiteLLM 1.82.7 and 1.82.8 were not released through the project’s normal GitHub CI/CD pipeline. They were pushed directly to PyPI using the stolen token. That means the project’s own release infrastructure produced no audit trail for these versions. No GitHub Actions log. No tagged commit. Just a new version on PyPI.
Scale and Speed of Damage
LiteLLM is not a niche library. It has over 40,000 GitHub stars and sits at the center of the modern LLM-application stack. Arthur.ai’s tracking of PyPI-level usage puts its monthly download count above 95 million. Daily downloads at the time of the compromise were running at 3.4 million, according to InfoQ’s analysis of PyPI analytics. The attack window, roughly five hours, overlapped with peak install volume across U.S. and European business hours.
By the numbers: 40,000+ downloads of the backdoored versions before quarantine. 95M+ monthly downloads of LiteLLM across all versions. 3.4M daily downloads at time of compromise. 50+ categories of secrets targeted by the payload. CVE-2026-33634 assigned, CVSS score 9.4 (Critical).
“LiteLLM is the hidden roof of AI-app infrastructure. If it’s compromised, the entire house is at risk.”
Benjamin Lin, Director of AI-Stack Security, Arthur.ai — Arthur.ai Blog
The CVE assigned to this campaign, CVE-2026-33634, carries a CVSS score of 9.4. That’s critical severity. And telemetry from Trend Micro’s W-XDR platform showed that 78% of LiteLLM-related detections involved credential-access anomalies within the first 72 hours after the PyPI advisory. Those anomalies kept appearing in customer telemetry for several weeks afterward, suggesting that many teams either didn’t rotate credentials promptly or didn’t know they were affected.
Inside the Three-Stage Payload
The technical anatomy of the attack is what separates it from most supply-chain incidents. This wasn’t a simple credential logger. Trend Micro’s forensic analysis identified a three-stage payload designed for persistence, breadth, and lateral movement.
Stage 1: Persistence via .pth injection
The malicious wheels included a file called litellm_init.pth. Python’s .pth mechanism processes any .pth file found in site-packages at interpreter startup and executes any line in it that begins with an import statement, before any user code runs. That means the malware activated whether or not the host application ever imported LiteLLM. Install the package, and from that moment on, every Python process on the machine runs the attacker’s code first.
“LiteLLM 1.82.8 wasn’t just a one-off malware. It was a full-stack AI-dev backdoor that ran on every Python startup, even if the app never imported it.”
Callum McMahon, Senior Researcher, FutureSearch — FutureSearch Blog
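The mechanism is easy to demonstrate safely. The sketch below uses a hypothetical file name and a benign payload: it writes a one-line .pth file into a temporary directory and processes it with site.addsitedir, which applies the same import-line execution rule the interpreter applies to site-packages at startup.

```python
import os
import site
import tempfile

# Benign demo of Python's .pth execution hook: any line in a .pth file
# that starts with "import" is exec()'d when the directory is processed
# as a site dir -- normally at interpreter startup, before user code.
demo_dir = tempfile.mkdtemp()
pth_path = os.path.join(demo_dir, "demo_init.pth")
with open(pth_path, "w") as f:
    # One line, starting with "import": the site machinery executes it.
    f.write('import os; os.environ.setdefault("PTH_DEMO", "executed")\n')

# site.addsitedir() processes .pth files the same way startup does.
site.addsitedir(demo_dir)
print(os.environ.get("PTH_DEMO"))  # -> "executed"
```

An attacker replaces the benign environment-variable write with arbitrary code, and that code runs in every interpreter that boots with the poisoned site-packages on its path.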
Stage 2: Broad credential harvesting
The payload searched for over 50 categories of secrets: AWS access keys, GCP service account tokens, Azure credentials, Kubernetes service account tokens, GitHub personal access tokens, CircleCI tokens, SSH private keys, and general environment variables that matched known patterns for API keys. The sweep was not targeted at any one cloud provider. It was designed to harvest everything present.
Stage 3: Kubernetes lateral movement
If the infected environment was running inside a Kubernetes cluster, the payload went further. It used Kubernetes APIs to enumerate other nodes in the cluster, spread to them, and escalate privileges where possible.
“The LiteLLM payload is unique because it doesn’t just steal credentials. It also spreads them across the entire Kubernetes cluster it lands on.”
Dr. Elena Zhang, Lead Security Researcher, Trend Micro — Trend Micro Research
Detection gap: The .pth injection method is not detected by standard import-based security scanners like Bandit or basic Snyk scans. Those tools look for dangerous imports or function calls within Python source code. A .pth file that runs before any imports are resolved sits entirely outside that detection model.
All harvested data was encrypted and exfiltrated to models.litellm.cloud. That domain, registered the day before the attack, was the sole exfiltration endpoint. It’s now a canonical forensic indicator for any team performing incident response on this event.
Who’s Actually at Risk
Direct exposure means you installed LiteLLM 1.82.7 or 1.82.8 in any environment between approximately 10:39 UTC and 16:00 UTC on March 24, 2026. But the picture is more complicated than that.
Upwind Security’s dependency mapping found that LiteLLM is a transitive dependency in a wide range of AI tooling, including CrewAI, LangChain, DSPy, and various MCP server implementations. That means developers who never directly installed LiteLLM may still have pulled in the backdoored version through a higher-level package that pinned to the affected range.
Critical Risk
Any environment that ran pip install litellm==1.82.7 or 1.82.8. Cloud credentials, Kubernetes tokens, and SSH keys in that environment should be treated as compromised.
Possible Exposure
Projects using CrewAI, LangChain, DSPy, or MCP servers that didn’t pin LiteLLM versions explicitly. Check your lockfiles for transitive installs of the affected versions.
Indirect Risk
Teams using the Trivy GitHub Action before March 24 may have had other CI/CD secrets harvested, even if they don’t use LiteLLM. Audit your token exposure independently.
Not Affected
Environments that pinned LiteLLM to 1.82.6 or earlier, or that used hash-verified installs and didn’t update during the window.
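Checking for a transitive install doesn’t require parsing lockfiles by hand. A minimal sketch using Python’s importlib.metadata, run inside the environment in question, with the version strings taken from the advisory:

```python
from importlib import metadata

# Affected releases per the PyPI advisory.
AFFECTED = {"1.82.7", "1.82.8"}

try:
    # Works whether litellm was a direct or a transitive install.
    installed = metadata.version("litellm")
except metadata.PackageNotFoundError:
    installed = None

if installed in AFFECTED:
    print(f"litellm {installed} is a backdoored release -- rotate credentials")
elif installed:
    print(f"litellm {installed} is not one of the affected versions")
else:
    print("litellm is not installed in this environment")
```

Run this per virtual environment; a clean result in one venv says nothing about the others on the same machine.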
The campaign also expanded beyond LiteLLM. Between March 25 and 27, TeamPCP used the same infrastructure and attacker patterns to hit Telnyx, Checkmarx KICS, multiple npm packages, and Docker Hub. The LiteLLM incident was not a standalone event. It was one node in a coordinated multi-ecosystem attack.
Remediation: What to Do Now
The LiteLLM team published a clean, audited release, starting with version 1.82.9, along with a formal security-update post. But upgrading the package is only the start. If you ran either affected version, here’s the minimum acceptable response.
- Rotate all cloud credentials immediately. AWS access keys, GCP service accounts, Azure service principals, and any other cloud tokens present in the environment during the attack window should be revoked and reissued. Don’t wait for forensic confirmation. Treat exposure as a given.
- Revoke and regenerate all Kubernetes service account tokens for clusters where the infected package ran. Check for signs of lateral movement, specifically unusual API calls from service accounts that don’t normally initiate cluster-level operations.
- Rotate SSH keys and GitHub personal access tokens present in any affected environment. The payload targeted both.
- Check for litellm_init.pth in your Python site-packages directory. Its presence confirms infection. Even after removing the package, verify the .pth file is gone.
- Audit your Trivy GitHub Action pin. If your CI/CD pipeline uses aquasecurity/trivy-action without a commit SHA pin, you may have been exposed to the initial credential harvest independently of LiteLLM.
- Block or sinkhole models.litellm.cloud in your network security tooling. Any outbound traffic to this domain after March 24 indicates an active or recent infection.
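The marker-file check can be scripted. This is a sketch that sweeps the interpreter’s known site directories; it assumes only the filename litellm_init.pth reported in the forensic write-ups.

```python
import site
from pathlib import Path

# Known-malicious marker file from the published forensic reports.
MARKER = "litellm_init.pth"

def find_marker() -> list[str]:
    # Collect system and per-user site-packages directories. Some
    # virtualenv tools omit getsitepackages(), hence the guard.
    dirs = set()
    if hasattr(site, "getsitepackages"):
        dirs.update(site.getsitepackages())
    dirs.add(site.getusersitepackages())
    return [str(Path(d) / MARKER) for d in dirs if (Path(d) / MARKER).exists()]

hits = find_marker()
print("INFECTED:" if hits else "clean:", hits or f"no {MARKER} found")
```

As with the version check, run this once per virtual environment and once against the system interpreter; the .pth file lives wherever the backdoored wheel was installed.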
Forensic marker: The domain models.litellm.cloud was registered March 23, 2026, specifically for this campaign. It has no legitimate association with the LiteLLM project. Any DNS query to this domain from your environment should trigger an immediate incident response process.
PyPI’s response time is worth acknowledging: from FutureSearch’s initial report to full quarantine of the project took under six hours. That’s fast for this class of incident. But the math is still grim. Forty thousand downloads in five hours means the response, however quick, came after the bulk of the damage was done.
The Bigger Picture: AI-Dev Supply Chains Are the New Attack Surface
The LiteLLM incident sits inside a broader structural shift in how threat groups think about AI-development targets. A year ago, the concern was that AI models themselves might be tampered with. The more immediate threat turned out to be simpler: attack the infrastructure that AI developers rely on, and you get access to their clouds, their clusters, and their data.
“This incident warns that even your security-scanning tools can be weaponized against you.”
Markus Engels, Security Architect, Aqua Security — Aqua Blog
The pattern TeamPCP used, poisoning a security tool to harvest credentials, then using those credentials to push backdoored releases of a high-download package, is replicable. Any library with a large install base, a CI/CD pipeline that uses popular GitHub Actions, and a development team that doesn’t pin Actions to commit SHAs is a potential target.
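In workflow terms, the difference between the vulnerable and the hardened configuration is one line. A sketch, with a placeholder SHA rather than a verified release:

```yaml
steps:
  # Mutable tag pin -- a rewritten tag silently changes what runs:
  - uses: aquasecurity/trivy-action@v0.28.0

  # Immutable commit-SHA pin -- a tag can be moved, a commit SHA cannot:
  - uses: aquasecurity/trivy-action@<full-40-char-commit-sha>  # v0.28.0
```

The trailing comment preserves human readability; tooling like Dependabot can keep the SHA current while the pin stays immutable between updates.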
“LiteLLM is just a proxy. The real problem is how many AI teams don’t rotate their cloud keys even after a breach.”
Dr. Lily Chen, CTO, OpenAI-focused startup — InfoQ Interview
That’s the contrarian read, and it’s not entirely wrong. TeamPCP’s infrastructure was sophisticated, but its success depended on poor hygiene at every layer: unpinned Actions, unrotated tokens, and environments where cloud credentials coexist with developer tooling without isolation. The attack was creative. The vulnerabilities it exploited were not.
Regulators are paying attention. Both the EU’s Cyber Resilience Act and evolving SEC disclosure rules are pushing toward mandatory software-component transparency, including Software Bills of Materials, in AI-stack deployments. This incident will accelerate that pressure. Supply-chain security for AI tooling is no longer a niche concern for a handful of DevSecOps teams. It’s a board-level conversation.
“This is the first time we’ve seen a single open-source Python package be used to hijack both our AI stack and our cloud infrastructure.”
Dr. John Doe, CISO, Fortune-500 AI firm — Infosecurity Magazine
| Stakeholder Group | Immediate Impact | Strategic Implication |
|---|---|---|
| AI-dev startups | Credential exposure, potential cloud account takeover | Must add dependency auditing and secret isolation to baseline security posture |
| LiteLLM maintainers | Loss of community trust, forced security audit | Need to rebuild CI/CD with commit-SHA-pinned Actions and token rotation policies |
| CI/CD vendors | Pressure to harden secrets integration | OIDC-based token exchange and artifact signing becoming minimum expectation |
| Security vendors | Surge in demand for supply-chain-aware tooling | SBOM-based analysis and .pth-aware scanners entering product roadmaps |
| Regulators | New case evidence for mandatory SBOM requirements | AI-stack transparency rules likely to accelerate in both EU and US frameworks |
Frequently Asked Questions
What is the LiteLLM PyPI supply-chain attack?
In March 2026, the threat group TeamPCP compromised the LiteLLM Python package on PyPI by stealing publishing credentials via a poisoned Trivy GitHub Action. They released backdoored versions 1.82.7 and 1.82.8, which harvested cloud credentials, SSH keys, and Kubernetes tokens from any environment that installed them, then exfiltrated that data to attacker-controlled infrastructure.
Which LiteLLM versions are affected?
Only versions 1.82.7 and 1.82.8 contain the malicious payload. Versions 1.82.9 and above are clean, as are 1.82.6 and earlier. If your lockfile or pip freeze shows either affected version installed between roughly 10:39 and 16:00 UTC on March 24, treat your credentials as compromised.
How did TeamPCP get the PyPI publishing token?
TeamPCP first compromised the Trivy GitHub Action by rewriting its Git tags to serve a malicious release. LiteLLM’s CI/CD pipeline used this Action; when it ran in LiteLLM’s runners, it silently harvested the PyPI publishing token present in that environment. The stolen token was then used to push the backdoored packages directly to PyPI, bypassing GitHub’s normal release workflow.
What is a .pth file and why does it matter for this attack?
Python’s .pth mechanism processes .pth files placed in the site-packages directory on every interpreter startup, executing any line that begins with an import statement before application code runs. The malicious wheels included litellm_init.pth, which ran the harvesting payload this way. This means the malware activated in every Python process on the machine, regardless of whether the app ever imported LiteLLM, making it very difficult to detect via standard import-based analysis.
How quickly did PyPI respond?
FutureSearch identified and reported the malicious .pth file and exfiltration pattern, and PyPI quarantined the LiteLLM project and removed the malicious wheels within six hours of that initial report. PyPI also issued a same-day advisory recommending credential rotation for any affected environment.
Am I affected if I use LangChain or CrewAI but not LiteLLM directly?
Possibly. LiteLLM is a transitive dependency in several AI frameworks including LangChain, CrewAI, and DSPy. If any of those frameworks pulled in LiteLLM 1.82.7 or 1.82.8 as a transitive install during the attack window, you may be exposed. Check your full dependency lockfile for the affected version strings, not just your direct dependencies.
What is models.litellm.cloud and why does it matter?
models.litellm.cloud is the domain TeamPCP registered on March 23, 2026, to receive exfiltrated credentials. It has no association with the legitimate LiteLLM project. Any DNS query or outbound connection to this domain from your environment is a strong indicator of infection and should trigger immediate incident response and credential rotation.
What CVE was assigned to this attack?
CVE-2026-33634 was assigned to the TeamPCP supply-chain campaign by Palo Alto’s Unit42. It carries a CVSS score of 9.4, which places it in the Critical severity tier. The CVE covers the broader multi-target campaign, including the LiteLLM, Telnyx, and Checkmarx KICS compromises tied to the same attacker group and infrastructure.
What This Changes
The LiteLLM attack is a forcing function. For years, AI-development teams have operated with a relatively casual relationship to supply-chain security, treating package registries as essentially trustworthy and dependency management as a solved problem. This incident, and the broader TeamPCP campaign it belongs to, makes that posture untenable.
The mechanics here aren’t new. Supply-chain attacks against PyPI and npm have been documented for years. What’s new is the target profile. LiteLLM is not just any Python package. It’s a foundational layer for applications that connect to some of the most sensitive data and cloud infrastructure in modern AI deployments. A five-hour window of exposure for a package with 3.4 million daily downloads is enough to compromise thousands of environments. Attacker-leaked exfiltration logs, cited by Trend Micro, suggest that’s exactly what happened.
The path forward is not mysterious. Pin GitHub Actions to commit SHAs, not tags. Rotate secrets after any CI/CD dependency change. Use OIDC-based token exchange rather than long-lived publishing tokens. Audit transitive dependencies, not just direct ones. Block outbound connections to unexpected domains from CI/CD runners. These are known practices. The gap is implementation. TeamPCP just made the cost of that gap very concrete.
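The OIDC recommendation is concrete on GitHub: PyPI’s Trusted Publishing replaces the long-lived token with a short-lived, workflow-scoped credential, so there is nothing for a poisoned Action to steal and replay later. A sketch of a release job under that model (workflow and job names are illustrative):

```yaml
name: release
on:
  push:
    tags: ["v*"]

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for OIDC token exchange with PyPI
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build
      # With a Trusted Publisher configured on PyPI, no API token
      # secret is needed -- the action exchanges the OIDC token itself.
      - uses: pypa/gh-action-pypi-publish@release/v1
```

Had LiteLLM’s releases required this path, the stolen-token publish that produced 1.82.7 and 1.82.8 would have had no token to steal, and the out-of-band push would have left an audit trail, or failed outright.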
