
The LiteLLM Supply Chain Attack: What Every Caribbean Business Using AI Must Know

Adrian Dunkley, March 27, 2026

Three days ago, the AI industry got punched in the mouth. On March 24, 2026, two versions of LiteLLM, one of the most widely used Python packages in the entire AI ecosystem, were discovered to contain malware. Not a bug. Not a vulnerability. Deliberate, sophisticated malware designed to steal every credential on your machine and install a persistent backdoor for future attacks.

If you run AI systems, if you build with AI tools, if you deploy AI agents for your business, you need to understand what happened, why it matters, and what to do about it right now. This is not theoretical. This is not a drill. This happened three days ago and the cleanup is still ongoing.

What Is LiteLLM and Why Does It Matter?

LiteLLM is an open-source Python library that serves as a gateway to AI models. Its entire purpose is to let developers connect to multiple AI providers through a single interface. OpenAI, Anthropic, Google, Azure, Hugging Face, Replicate, Amazon Bedrock, and dozens more. Instead of writing separate code for each AI provider, you write one set of code and LiteLLM handles the routing.

That convenience made it ubiquitous. The numbers tell the story:

  • 95 million monthly downloads on PyPI (the Python package repository)
  • 3 million daily downloads on average
  • Integrated into virtually every major AI agent framework, MCP server, and LLM orchestration tool
  • Used by startups, enterprises, and government agencies worldwide

Here is the critical detail. Because LiteLLM is an API gateway, a typical LiteLLM deployment holds API keys for dozens of AI providers in its environment variables. It is, by design, one of the most credential-rich packages in any AI infrastructure. That made it the perfect target.

What Happened on March 24, 2026

A threat actor group called TeamPCP published two poisoned versions of LiteLLM to PyPI: versions 1.82.7 and 1.82.8. They got access by compromising the maintainer's PyPI credentials through a prior attack on Trivy, an open-source security scanner used in LiteLLM's CI/CD pipeline.

The compromised versions were available on PyPI for at least two hours before being detected and removed. Given three million daily downloads, the exposure window was significant.

The malware embedded in these versions was not amateur work. It functioned as both a credential stealer and a dropper for additional payloads. Here is what it targeted:

  • Environment variables including all API keys and tokens
  • SSH keys from the compromised machine
  • Cloud credentials for AWS, Google Cloud, and Azure
  • Kubernetes configurations and cluster secrets across all namespaces
  • CI/CD secrets from build systems
  • Docker configurations
  • Database credentials
  • Cryptocurrency wallets

It installed a persistent systemd backdoor called sysmon.py that polls for additional payloads from the attacker's command and control infrastructure. Version 1.82.8 went further, using a .pth file that executes on every Python process startup without needing an import. If it found a Kubernetes service account token, it escalated to read all cluster secrets and deploy privileged pods to every node.

Developer laptops, CI runners, and production servers were all equally at risk.

This Was Not an Isolated Incident

The LiteLLM compromise was part of a coordinated campaign by TeamPCP that hit multiple security tools in rapid succession:

  • March 19: Compromised the Trivy vulnerability scanner
  • March 21: Hijacked the Checkmarx/KICS GitHub Action
  • March 22: Defaced 44 Aqua Security internal repositories
  • March 24: Backdoored LiteLLM on PyPI

The pattern is clear. TeamPCP is systematically targeting the most trusted tools in the cloud-native and AI ecosystems. They compromised a security scanner first, then used that access to compromise the tools that depended on it. That is supply chain attack 101, executed at scale against the AI industry.

There are indications linking TeamPCP to the LAPSUS$ threat group, though attribution remains under active investigation.

Why This Matters for Caribbean Businesses

I run four AI labs in Jamaica. I have been building AI systems for Caribbean organizations for fifteen years. Let me be direct about why this matters for our region specifically.

Caribbean businesses are adopting AI at an accelerating pace. BPO companies in Jamaica, fintech startups in Barbados, financial services firms in The Bahamas, energy companies in Trinidad, and government agencies across CARICOM are all deploying AI tools. Many of these deployments use Python. Many use libraries that depend on LiteLLM, even if the developers never installed it directly.

The problem is compounded by three factors specific to our region:

  • Limited cybersecurity staff. Most Caribbean businesses do not have dedicated security teams. A BPO in Montego Bay with 200 employees might have one IT person. That person is now responsible for auditing every Python environment for a compromised package they may never have heard of.
  • Transitive dependency blindness. Many organizations use AI frameworks that pull in LiteLLM automatically. You might have installed an AI agent toolkit or an LLM orchestration library, and LiteLLM came along for the ride. If you do not actively audit your dependency trees, you will not know it is there.
  • API key concentration. Caribbean businesses often use a single set of API keys across development, staging, and production. If those keys were exposed, the blast radius is the entire operation.
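Of these three, transitive dependency blindness is the easiest to fix with tooling. The standard library can answer the question directly — a minimal sketch, using `importlib.metadata` (Python 3.8+), that lists every installed package declaring litellm as a dependency:

```python
import re
from importlib.metadata import distributions

def req_name(requirement):
    """Extract the bare package name from a requirement string
    such as "litellm>=1.80; extra == 'proxy'"."""
    return re.split(r"[<>=!~;\[ ]", requirement, maxsplit=1)[0].strip().lower()

def dependents_of(package):
    """Return names of installed distributions that require `package`."""
    package = package.lower()
    found = {dist.metadata["Name"]
             for dist in distributions()
             if any(req_name(r) == package for r in (dist.requires or []))}
    return sorted(found)

if __name__ == "__main__":
    deps = dependents_of("litellm")
    if deps:
        print("litellm is pulled in by:", ", ".join(deps))
    else:
        print("no installed package declares a litellm dependency")
```

Run this inside each virtual environment, not just once per machine: every environment has its own dependency tree.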

What You Must Do Right Now

This is not optional. If your business uses Python and AI tools, do these things today.

1. Audit Every Python Environment

  • Run pip show litellm on every machine, container, and CI runner
  • Check virtual environments, Docker images, and production servers
  • If you find version 1.82.7 or 1.82.8, treat that machine as fully compromised
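That audit is easier to repeat across machines as a script. A sketch using only the standard library, with the version strings taken from the advisory above:

```python
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"1.82.7", "1.82.8"}  # the two poisoned releases

def classify(installed):
    """Map an installed litellm version (or None) to an audit verdict."""
    if installed is None:
        return "clean: litellm is not installed in this environment"
    if installed in COMPROMISED:
        return f"COMPROMISED: litellm {installed} - treat this machine as breached"
    return f"check: litellm {installed} is not one of the known-bad versions"

if __name__ == "__main__":
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        installed = None
    print(classify(installed))
```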

2. Check for Persistence Artifacts

  • Look for ~/.config/sysmon/sysmon.py on all systems
  • Check systemd for unauthorized services
  • Look for unexpected .pth files in Python site-packages directories
  • Audit Kubernetes clusters for unauthorized pods, especially in kube-system

3. Rotate Everything

  • Rotate all AI provider API keys (OpenAI, Anthropic, Google, etc.)
  • Rotate all cloud credentials (AWS, GCP, Azure)
  • Rotate database passwords
  • Regenerate SSH keys
  • Rotate CI/CD tokens and secrets

4. Pin Your Dependencies

  • Pin LiteLLM to version 1.82.6 (the last known clean version) or the latest verified clean release
  • Use hash verification for all pip installs
  • Never run pip install without version pinning in production or CI
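The pinning rule can be enforced as a CI gate in a few lines of Python — a sketch that fails when any requirement line is not an exact `==` pin (comments and pip options such as `--hash` continuation lines are skipped; adapt the pattern to your own file layout):

```python
import re

# An exact pin: a package name (extras allowed) followed by "==" and a version.
PIN = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._\[\],-]*==\S+")

def unpinned(lines):
    """Return the requirement lines that are not exact '==' pins."""
    offenders = []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith(("#", "-")):
            continue  # comments and pip options (-r, --hash, ...) are not requirements
        if not PIN.match(line):
            offenders.append(line)
    return offenders

# Demonstration: exact pins pass, version ranges fail the gate.
assert unpinned(["litellm==1.82.6", "# comment", "--hash=sha256:abc"]) == []
assert unpinned(["litellm>=1.80"]) == ["litellm>=1.80"]
```

In CI, feed it the requirements file and exit nonzero on any offender. For the hash side, `pip-compile --generate-hashes` (from the third-party pip-tools package) produces a hash-locked file, and `pip install --require-hashes -r requirements.txt` refuses any artifact whose digest does not match.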

5. Monitor Outbound Traffic

  • Check network logs for connections to unfamiliar endpoints
  • The malware's C2 communication should show up as unexpected outbound HTTPS traffic
  • If you find evidence of C2 communication, engage incident response immediately
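On a Linux host, a first-pass snapshot of established outbound connections needs no extra tooling: /proc/net/tcp lists IPv4 sockets, with addresses stored as host-byte-order hex (little-endian on x86/ARM64) and state 01 meaning ESTABLISHED. A sketch that prints unique remote endpoints for review:

```python
import socket
import struct

def established_peers(path="/proc/net/tcp"):
    """Return unique 'ip:port' strings for established IPv4 TCP connections."""
    peers = set()
    with open(path) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            remote, state = fields[2], fields[3]
            if state != "01":  # 01 == TCP_ESTABLISHED
                continue
            hex_ip, hex_port = remote.split(":")
            # /proc/net/tcp stores the IPv4 address as little-endian hex
            ip = socket.inet_ntoa(struct.pack("<I", int(hex_ip, 16)))
            peers.add(f"{ip}:{int(hex_port, 16)}")
    return sorted(peers)

if __name__ == "__main__":
    for peer in established_peers():
        print(peer)
```

Review anything you cannot attribute, especially HTTPS (port 443) to unfamiliar hosts. IPv6 connections live in /proc/net/tcp6 with a longer address format, and this snapshot catches only connections open at that moment, so pair it with network logs.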

The Bigger Lesson: AI Supply Chains Are Fragile

This attack exposed a fundamental problem with how the AI industry builds software. The modern AI stack is a tower of open-source dependencies, each one maintained by a small number of people, each one a potential point of failure. LiteLLM was maintained primarily by a small team. The entire AI ecosystem depended on their PyPI credentials not being compromised.

For businesses building on AI, the lessons are clear:

  • Dependency auditing is not optional. You must know every package in your dependency tree, not just the ones you installed directly. Tools like pip-audit, Safety, and Snyk can automate this.
  • Pin everything. Never use unpinned dependencies in production. A requirement of litellm>=1.80 would have automatically pulled in the compromised version. A requirement of litellm==1.82.6 would not have.
  • Separate your credentials. Do not store API keys for every AI provider in the same environment. Use secret management services (AWS Secrets Manager, HashiCorp Vault, etc.) with least-privilege access.
  • Air-gap your production AI. Production AI systems should not have access to your development API keys, SSH keys, or cloud credentials. Network segmentation is not a nice-to-have. It is a requirement.
  • Monitor your CI/CD pipeline. The attackers got in through a security scanner used in CI. Your build pipeline is a high-value target. Treat it accordingly.

What This Means for AI Adoption in the Caribbean

I do not want this article to scare people away from AI. The Caribbean cannot afford to fall behind in AI adoption. But it also cannot afford to adopt AI carelessly. The LiteLLM attack is a warning that the tools we depend on are only as secure as their weakest link.

For CARICOM governments considering AI deployments, this incident should accelerate three priorities:

  • National software supply chain security guidelines. Every government AI deployment should have dependency auditing, version pinning, and credential isolation as baseline requirements.
  • Regional cybersecurity coordination. When a supply chain attack hits, every CARICOM nation is affected simultaneously. Sharing indicators of compromise and remediation guidance through CARICOM IMPACS or a similar mechanism would reduce the response time for everyone.
  • Mandatory security training for AI deployments. Any organization deploying AI tools should have at least one person who understands dependency management, credential hygiene, and incident response basics.

The LiteLLM Customers Who Were Safe

One detail from this incident is worth highlighting. Customers using LiteLLM Cloud or the official LiteLLM Proxy Docker image were not affected. Why? Because those deployments used strict version locking. They did not pull the latest version from PyPI on every build. They used a verified, locked version.

That is the entire lesson in one sentence. Pin your versions. Lock your dependencies. Do not let your production systems automatically pull the latest version of anything from the public internet.

Looking Forward

The AI industry is going to see more attacks like this. The incentives are too strong. AI systems hold valuable credentials. AI packages have massive install bases. AI development moves fast, which means security often gets deferred. And the open-source ecosystem that powers most AI development operates on trust.

TeamPCP demonstrated that this trust can be exploited. The question for every business, in the Caribbean and globally, is whether you will learn this lesson from someone else's incident or from your own.

I have spent fifteen years building AI systems in Jamaica. I have watched the industry grow from academic curiosity to critical infrastructure. With that growth comes responsibility. The LiteLLM attack is a reminder that building with AI is not just about capability. It is about security, diligence, and the discipline to do boring things like pinning dependency versions and rotating credentials. The exciting part of AI is the intelligence. The essential part is the engineering.

Frequently Asked Questions

What happened to LiteLLM in March 2026?

On March 24, 2026, a threat actor group called TeamPCP compromised the LiteLLM Python package on PyPI. Versions 1.82.7 and 1.82.8 contained malware that stole API keys, cloud credentials, SSH keys, Kubernetes secrets, and cryptocurrency wallets. The malware also installed a persistent backdoor for follow-on payloads.

How many businesses were affected by the LiteLLM attack?

LiteLLM has 95 million monthly PyPI downloads and 3 million daily downloads. The compromised versions were available for at least two hours. Any organization that installed or upgraded LiteLLM during that window, or that pulled it as a transitive dependency through AI frameworks, may have been affected.

Is my Caribbean business at risk from the LiteLLM attack?

If your business uses AI tools built on Python, there is a real chance LiteLLM is somewhere in your dependency tree. Run pip show litellm across all environments. If you see version 1.82.7 or 1.82.8, treat the machine as fully compromised and rotate every credential it had access to.

What should businesses do right now about LiteLLM?

Audit all Python environments for LiteLLM versions. Pin dependencies to version 1.82.6 or the latest clean version. Check for persistence artifacts like ~/.config/sysmon/sysmon.py. Audit Kubernetes clusters for unauthorized pods. Rotate all API keys, cloud credentials, and database passwords that were accessible from any affected machine.

What is a supply chain attack and why does it matter for AI?

A supply chain attack compromises a trusted software dependency rather than attacking the target directly. Because AI systems depend on dozens of open-source libraries, and because AI gateway tools like LiteLLM hold API keys for multiple providers, a single compromised package can expose an entire organization's AI infrastructure.

"The LiteLLM attack compromised the package that holds the keys to every AI provider in your stack. If you run AI systems and you have not audited your Python environments this week, stop reading and go do it now. This is not theoretical. This happened three days ago." - Adrian Dunkley, AI Boss
Adrian Dunkley

Physicist, AI Scientist, and the "AI Boss". Founder of StarApple AI, the Caribbean's First AI Company. Founder of four AI Labs in Jamaica. 15 years building AI systems for the Caribbean. Jamaica's #1 AI Leader.
