PyTorch Lightning Hijacked: 16M Monthly Downloads Exposed to Credential-Stealing Malware
Two versions of the popular AI framework package were quietly poisoned on PyPI, executing a credential harvester the moment any developer imported them. Here’s what got stolen, how it worked, and what you need to do right now.
At some point on the morning of April 30, 2026, someone published two versions of the lightning package on PyPI that should never have gone live. Versions 2.6.2 and 2.6.3 of PyTorch Lightning, a high-level wrapper used by machine learning engineers around the world to train scalable models, carried hidden malware that kicked off the moment a developer ran import lightning. No extra steps. No warnings. Just a background thread quietly draining credentials.
By the time PyPI quarantined the package, the malicious releases had been available for hours. With over 302,000 downloads recorded in a single day and more than 16 million across the past month, the exposure window was not trivial. Any developer who updated Lightning that morning and then ran a training script could have handed over their GitHub tokens, AWS access keys, and more without realizing it.
This wasn’t an opportunistic smash-and-grab. The attack was carefully engineered, obfuscated behind multiple layers, and tied to a broader supply chain campaign that had already hit SAP-related npm packages the day before. The AI and machine learning community, which has built considerable institutional trust in the PyTorch ecosystem, now has a reason to reconsider how it handles package hygiene.
What Happened on April 30
The malicious packages were pushed to PyPI under the lightning project namespace, almost certainly using a compromised PyPI token belonging to the Lightning-AI maintainer account. That’s the most probable entry point, though the full forensic picture hasn’t been publicly confirmed by Lightning-AI at time of writing.
What followed was a rapid sequence of moves that suggested the attacker had a plan well beyond the initial payload. Within hours, a GitHub account identified as pl-ghost pushed and then quickly deleted six short-lived branches across Lightning-AI repositories, including litAI, utilities, and torchmetrics. The branch names were either random 10-character strings or fake Dependabot labels, both designed to blend into the background noise of an active open source project. Fortunately, branch protections and automated workflows on the Lightning-AI repos blocked any of those branches from merging.
Safe version: PyTorch Lightning 2.6.1, released January 30, 2026, is the last confirmed clean release. If you’re running 2.6.2 or 2.6.3, treat your environment as compromised until you’ve completed a full credential rotation.
Community members noticed quickly. A GitHub issue, numbered #21689 on the Lightning-AI repo, described the hidden execution chain in detail. It was closed without explanation. When Socket Research opened a follow-up issue, it was shut down within one minute by the pl-ghost account, which posted a “SILENCE DEVELOPER” meme before closing it. That behavior strongly suggests the project’s GitHub account had already been taken over at that point.
“The issue was closed within one minute by the pl-ghost account, which then posted a ‘SILENCE DEVELOPER’ meme… strongly indicating that the project’s GitHub account appears to be compromised.”
Socket Research Team, Socket.dev — Socket Research Blog, April 30, 2026
The Lightning-AI maintainers eventually acknowledged the situation with a short statement confirming an active investigation, and a subsequent advisory described the affected versions as containing “functionality consistent with a credential harvesting mechanism.” That’s a careful way of saying the packages were designed to steal developer secrets.
Inside the Malware: A Multi-Stage Credential Harvester
The technical sophistication here is worth understanding, because this wasn’t a simple script that grabbed a few environment variables. Socket Research’s full payload teardown reveals a multi-stage attack chain that starts on import and fans out aggressively.
Stage One: The Launcher
The malware hides inside a directory called _runtime/ within the package. A file named start.py triggers silently when the library is imported. Its first job is downloading the Bun JavaScript runtime directly from GitHub. This is an unusual dependency for a Python machine learning library, which is exactly why it works as a hiding mechanism.
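On the defensive side, one way to see whether an installed copy of the package ships the reported `_runtime/` directory is to walk the distribution's recorded file list with `importlib.metadata`. This is a minimal sketch, not an official detection tool: the directory name comes from Socket's report, the function name is illustrative, and a hit is a reason to investigate rather than proof of compromise.

```python
from importlib import metadata

def files_under_runtime(dist_name: str = "lightning") -> list[str]:
    """List installed files under a _runtime/ directory for a distribution.

    Returns [] if the package is not installed or records no file list.
    """
    try:
        dist = metadata.distribution(dist_name)
    except metadata.PackageNotFoundError:
        return []
    # dist.files is the RECORD-derived file list for the installed wheel
    return [str(f) for f in (dist.files or []) if "_runtime/" in str(f)]
```

An empty result from a clean environment is expected; any non-empty result deserves a closer look at the listed paths.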
Stage Two: The 11 MB Payload
Once Bun is installed, the launcher executes router_runtime.js, an 11-megabyte obfuscated JavaScript file running in a daemon thread. The obfuscation uses string-array rotation combined with AES decryption, consistent with the javascript-obfuscator toolchain. The size and complexity of this file signal that substantial development time went into making it hard to analyze.
- **703 `process.env` references.** The payload systematically scans environment variables for any tokens, secrets, or credentials present in the developer's shell.
- **463+ auth token references.** Targeted scanning for authentication tokens, API keys, and bearer credentials across multiple platforms and services.
- **336 repository references.** Once credentials are harvested, the payload attempts to poison up to 50 branches per stolen token across reachable repositories.
- **npm worm component.** Local npm `.tgz` files get infected via postinstall hooks, enabling the malware to spread laterally through package dependencies.
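Defenders can apply the environment-scanning idea in reverse: before running untrusted code, enumerate which environment variable names in the current shell look credential-bearing. This is a heuristic sketch; the name pattern is an assumption you should tune, and it deliberately reports names only, never values.

```python
import os
import re

# heuristic name pattern for likely secrets; adjust for your environment
_SECRET_NAME = re.compile(r"TOKEN|SECRET|KEY|PASSWORD|CREDENTIAL", re.IGNORECASE)

def likely_secret_env_names(environ=os.environ) -> list[str]:
    """Return env var *names* that look credential-bearing (values untouched)."""
    return sorted(k for k in environ if _SECRET_NAME.search(k))
```

Anything this turns up in a shell that also runs `pip install` is, by definition, within reach of an import-time stealer.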
Stage Three: Credential Validation and Exfiltration
The payload doesn’t blindly dump everything it finds. It validates harvested credentials against live APIs before exfiltrating them, confirming that GitHub tokens, npm tokens, and cloud provider keys (AWS, Azure, GCP) are actually active before sending them out. This validation step is a meaningful refinement over simpler stealers; it signals a mature operation focused on quality over volume of data.
Stage Four: Repository Poisoning
With a valid GitHub token, the malware attempts to inject .claude/router_runtime.js and malicious workflow files into up to 50 branches per token. Commits are impersonated using the email claude@users.noreply.github.com, a deliberate choice to blend in with automated commits from legitimate Claude AI tooling. The npm worm component handles local spread, bumping package versions and inserting postinstall hooks into any .tgz files it can reach.
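Because the impersonated author address is known, a quick way to sweep a local clone for such commits is `git log --all --author=...`. A sketch wrapping that command follows; note that a hit is a signal to inspect, not proof of compromise, since legitimate automation can also commit from noreply addresses.

```python
import subprocess

# author address reportedly used to impersonate automated Claude tooling commits
SUSPECT_AUTHOR = "claude@users.noreply.github.com"

def suspicious_commits(repo_path: str) -> list[str]:
    """Hashes of commits across all refs authored with the impersonated address."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--all",
         f"--author={SUSPECT_AUTHOR}", "--format=%H"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.split()
```

Running this across every clone on a build host is cheap, and an empty result for each is the outcome you want.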
Important dependency: The entire attack chain requires the Bun runtime to be downloadable from GitHub. In environments with strict egress controls or GitHub access restrictions, the payload may not fully execute. That said, any affected version should still be treated as compromised regardless of network configuration.
Detection in 18 Minutes, and the Response That Followed
One of the few things that went right here was speed. Socket’s AI-powered scanner flagged both 2.6.2 and 2.6.3 as potentially malicious just 18 minutes after they were published to PyPI. That’s an impressively short detection window for a supply chain attack, where traditional signature-based tools often lag by hours or days.
“Socket’s AI scanner flagged both versions 2.6.2 and 2.6.3 as potentially malicious eighteen minutes after publication.”
Socket Research Team, Socket.dev — Socket Research Blog, April 30, 2026
PyPI’s own response was also fairly rapid, moving to quarantine the lightning project once the situation was confirmed. Quarantine on PyPI means the affected versions can no longer be installed, though anyone who already pulled them down retains the packages in their local cache.
The maintainer response was more complicated. The GitHub suppression behavior, whether it represents a fully compromised account or something more ambiguous, created a trust problem that a brief advisory statement can’t fully repair. When community members raising legitimate security concerns get silenced by memes within 60 seconds, it damages the project’s credibility in ways that outlast the technical incident itself.
Understanding the Scale of the Risk
PyTorch Lightning isn’t a niche tool. It’s infrastructure for how a meaningful slice of the global AI research and engineering community trains models at scale. The download numbers make that concrete.
| Metric | Figure | Why It Matters |
|---|---|---|
| Daily Downloads (lightning) | 302,431 | Reflects how many installs could occur within a single attack window |
| Weekly Downloads | 3,429,724 | Shows how quickly compromised versions propagate through CI/CD pipelines |
| Monthly Downloads | 16,201,959 | Long-tail exposure risk for teams with infrequent dependency updates |
| GitHub Stars (pytorch-lightning) | 31,100+ | Indicator of broad developer adoption and community reliance |
| Companies using PyTorch | 17,196+ | Enterprise-scale attack surface across industries |
| AI research papers using PyTorch | ~85% | Academic ML pipelines potentially feeding compromised credentials into research infrastructure |
The PyTorch ecosystem is effectively the default substrate for AI research. When something this deeply embedded gets compromised, the blast radius isn’t just individual developers. It extends to corporate training clusters, academic compute environments, and any CI/CD pipeline that automatically pulls the latest compatible version. That last category is particularly dangerous, since many ML projects pin a major version but not a specific patch, meaning an automated update could trigger the malware silently.
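For pipelines that float on a major version, a lightweight guard can refuse to proceed when a known-bad release is present. This sketch hard-codes the two versions identified in this incident; the function names are illustrative, and a lockfile or exact pin remains the real fix.

```python
from importlib import metadata

# releases identified as malicious in this incident
COMPROMISED_LIGHTNING = {"2.6.2", "2.6.3"}

def version_is_compromised(version: str) -> bool:
    return version in COMPROMISED_LIGHTNING

def installed_lightning_is_compromised() -> bool:
    """True if the environment holds one of the known-bad lightning releases."""
    try:
        return version_is_compromised(metadata.version("lightning"))
    except metadata.PackageNotFoundError:
        return False  # not installed at all
```

A CI job that calls this before any training step turns a silent compromise into a loud build failure.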
It’s also worth noting, as Socket Research flags, that PyPI download statistics include CI mirrors and caching infrastructure. The “real” number of human-initiated installs is lower than 16 million, but that caveat doesn’t meaningfully reduce the risk surface for organizations running automated pipelines.
Connecting the Dots: Mini Shai-Hulud and TeamPCP
This attack didn’t emerge in isolation. The Hacker News assessed the Lightning incident as an extension of the Mini Shai-Hulud campaign, which struck SAP-related npm packages on April 29, just one day earlier. The shared patterns are hard to dismiss: similar obfuscation techniques, the same focus on credential harvesting to enable repository poisoning, and an operational tempo that suggests a coordinated actor moving across ecosystems quickly.
“The campaign is assessed to be an extension of the Mini Shai-Hulud supply chain incident that targeted SAP-related npm packages on Wednesday.”
Ravie Lakshmanan, Editor, The Hacker News — The Hacker News, April 30, 2026
A group calling itself TeamPCP has claimed responsibility via a Tor-accessible site, posting a PGP-signed message that references both LAPSUS$ and a group called CipherForce. Those claims should be treated skeptically. Attribution in supply chain attacks is genuinely difficult, and extortion groups have strong incentives to name-drop well-known threat actors to inflate their perceived credibility. Socket Research itself notes that the Lightning payload lacks specific IOCs tied to Mini Shai-Hulud, suggesting it may be a distinct actor mimicking the same playbook rather than the same crew.
What’s not disputed is the sophistication of the operational security. The use of fake Dependabot branch names, commits impersonating Claude AI tooling, and rapid deletion of evidence branches all point to an attacker who has studied how modern DevOps environments look and knows how to hide in plain sight within them.
IOC note: The specific IOC “SHA1HULUD,” associated with the Mini Shai-Hulud npm campaign, was not found in the Lightning payload. Researchers at Aikido Security and OX Security have documented overlapping infrastructure patterns, but the exact actor relationship remains unconfirmed.
What You Should Do Right Now
If there’s any chance your environment pulled Lightning 2.6.2 or 2.6.3, the response isn’t optional. Here’s the practical order of operations.
- Immediately uninstall both affected versions with `pip uninstall lightning`, then reinstall the last clean release with `pip install lightning==2.6.1`.
- Rotate every secret in your environment. GitHub personal access tokens, fine-grained tokens, npm tokens, and cloud provider credentials (AWS, Azure, GCP) should all be treated as compromised. Don't audit first and rotate later; rotate now and audit afterward.
- Review your GitHub repository's branch history for any unexpected branches created around April 30, particularly any with random alphanumeric names or fake Dependabot labels.
- Audit your GitHub Actions workflow files for any unauthorized modifications. The malware attempts to insert malicious workflows; check `.github/workflows/` carefully across all branches.
- Check your local npm cache and any `.tgz` packages in your project directories. The worm component targets these specifically via postinstall hooks.
- If your CI/CD pipeline automatically installs the latest compatible lightning version, add a version pin to 2.6.1 immediately and lock it until Lightning-AI publishes a verified clean release with an explicit security advisory.
- Scan your environment with Socket's security tooling or equivalent software composition analysis (SCA) tools. Look for any `.claude/router_runtime.js` files that shouldn't be there.
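Several of the file-level checks above can be scripted. The sketch below looks for the dropped payload path and for postinstall hooks in local npm tarballs; the `.claude/router_runtime.js` location and the postinstall behavior come from Socket's teardown, and a postinstall hook on its own is not proof of infection, only a flag for review.

```python
import json
import tarfile
from pathlib import Path

def find_dropped_payloads(root: str) -> list[str]:
    """Paths matching the reported .claude/router_runtime.js drop location."""
    return [str(p) for p in Path(root).rglob("router_runtime.js")
            if p.parent.name == ".claude"]

def tgz_has_postinstall(tgz_path: str) -> bool:
    """True if a local npm tarball's package.json declares a postinstall hook.

    Legitimate packages also use postinstall, so treat a hit as a prompt
    to inspect the hook's command, not as a verdict.
    """
    with tarfile.open(tgz_path, "r:gz") as tar:
        for member in tar.getmembers():
            if member.name.endswith("package.json"):
                manifest = json.load(tar.extractfile(member))
                return "postinstall" in manifest.get("scripts", {})
    return False
```

Pointed at a workspace root, the pair gives a fast first pass before a full SCA scan.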
For teams: If anyone on your team ran a training job or imported Lightning on April 30 before the quarantine, assume shared secrets are at risk. Service accounts with broad repository access should be rotated first. Check your GitHub security log for any unusual OAuth activity or API calls originating from unfamiliar IP addresses.
Frequently Asked Questions
Are PyTorch Lightning versions 2.6.2 and 2.6.3 safe to use?
No. Both versions contain credential-stealing malware that executes automatically when you import the library. PyPI has quarantined these releases, so they can no longer be installed fresh. If you already have either version, uninstall immediately and downgrade to 2.6.1, the last verified clean release.
What credentials were targeted in the PyTorch Lightning supply chain attack?
The payload targeted GitHub tokens, npm tokens, and cloud provider credentials including AWS, Azure, and GCP access keys. It also scanned environment variables broadly, referencing over 700 process.env lookups. Credentials were validated against live APIs before exfiltration, so only active secrets were sent out.
How do I remove the compromised PyTorch Lightning package?
Run pip uninstall lightning, then pip install lightning==2.6.1 to restore the last clean version. After uninstalling, rotate all secrets in your environment, audit your GitHub repository for unexpected branches or workflow changes, and scan local npm files for signs of the worm component.
Does this affect pytorch-lightning as well as the lightning package?
The confirmed malicious versions were published under the lightning PyPI namespace. The pytorch-lightning package name was previously used but the project migrated to lightning. If your requirements file references lightning at version 2.6.2 or 2.6.3, you’re affected. Check both package names in your environment to be safe.
What is the Mini Shai-Hulud campaign?
Mini Shai-Hulud is the name researchers applied to a supply chain attack that compromised SAP-related npm packages on April 29, 2026. The Lightning PyPI incident shares similar obfuscation techniques and credential-harvesting patterns, leading researchers to assess them as potentially related. A group called TeamPCP has claimed responsibility for both, though attribution remains unconfirmed.
How quickly was the PyTorch Lightning malware detected?
Socket’s AI-powered scanner flagged versions 2.6.2 and 2.6.3 as potentially malicious within 18 minutes of publication. This rapid detection is faster than traditional signature-based approaches, though the packages were still available for several hours before PyPI completed quarantine.
Was the Lightning-AI GitHub account compromised?
Evidence strongly suggests it was. The pl-ghost account closed a legitimate community security report within one minute while posting a dismissive meme, then pushed and deleted six suspicious branches across multiple Lightning-AI repositories. Socket Research concluded this behavior is consistent with a compromised maintainer account, not normal project management.
What should ML engineering teams do to prevent similar attacks?
Pin exact package versions in production environments rather than floating on minor versions. Integrate software composition analysis tools like Socket into your CI/CD pipeline to catch malicious packages before they deploy. Regularly audit your dependency tree, enable two-factor authentication on all package registry accounts, and implement least-privilege policies for tokens used in automated pipelines.
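The "pin exact versions" advice above can be enforced mechanically. Here is a sketch that compares a pin list against what is actually installed; the helper is illustrative and complements, rather than replaces, lockfiles and `pip install --require-hashes`.

```python
from importlib import metadata

def check_pins(pins: dict[str, str]) -> list[str]:
    """Report packages whose installed version drifts from the pinned one.

    Missing packages are skipped; only version drift is reported.
    """
    drift = []
    for name, pinned in pins.items():
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            continue
        if installed != pinned:
            drift.append(f"{name}: pinned {pinned}, installed {installed}")
    return drift
```

Failing the build on any non-empty result keeps an automated `pip install` from silently floating onto a poisoned patch release.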
What This Means Going Forward
The PyTorch Lightning compromise is a useful case study in how supply chain attacks actually work in practice: not through spectacular zero-days, but through a compromised token, a sophisticated payload, and a brief window before the community noticed. The 18-minute detection by Socket is genuinely impressive. The hours-long exposure window before full quarantine is not.
For ML engineers specifically, this incident highlights a risk profile that the security community has been raising for years. Training infrastructure typically runs with broad cloud permissions and direct access to sensitive model weights, datasets, and API keys. A credential harvester that lands inside a framework as foundational as PyTorch Lightning doesn’t just steal tokens; it can open doors into production model serving environments, data pipelines, and cloud billing accounts. The attack surface for a compromised ML developer is meaningfully wider than for a compromised web developer.
OSS trust is a fragile thing. The speed of the technical response, from Socket’s detection to PyPI’s quarantine, shows the system can work. But the GitHub suppression behavior, whatever its precise explanation, is the kind of thing that makes developers question whether the open source projects they depend on are actually being watched by anyone paying attention. That’s a confidence problem the Lightning-AI team will need to address directly, not just through code patches, but through transparency about how the account was compromised and what access controls have changed since.
The broader lesson isn’t novel, but it’s clearly not yet internalized everywhere: every package in your dependency tree is a potential attack surface. The more foundational the package, the more attractive the target. In an ecosystem where 85% of AI research runs on PyTorch, “foundational” doesn’t get more foundational than this.
