The most active piece of enterprise infrastructure in the company is the developer workstation. That laptop is where credentials are created, tested, cached, copied, and reused across services, bots, build tools, and now local AI agents.

In March 2026, the TeamPCP threat actor proved just how valuable developer machines are. Their supply chain attack on LiteLLM, a popular AI development library downloaded millions of times daily, turned developer endpoints into systematic credential harvesting operations. The malware only needed access to the plaintext secrets already sitting on disk.

The LiteLLM Attack: A Case Study in Developer Endpoint Compromise

The attack was straightforward in execution but devastating in scope. TeamPCP compromised LiteLLM packages versions 1.82.7 and 1.82.8 on PyPI, injecting infostealer malware that activated when developers installed or updated the package. The malware systematically harvested SSH keys, cloud credentials for AWS, Azure, and GCP, Docker configurations, and other sensitive data from developer machines.

PyPI removed the malicious packages within hours of detection, but the damage window was significant. GitGuardian's analysis found that 1,705 PyPI packages were configured to automatically pull the compromised LiteLLM versions as dependencies. Popular packages like dspy (5 million monthly downloads), opik (3 million), and crawl4ai (1.4 million) would have triggered malware execution during installation. The cascade effect meant organizations that never directly used LiteLLM could still be compromised through transitive dependencies.

Why Developer Machines Are Attractive Targets

This attack pattern isn't new; it's just more visible. The Shai-Hulud campaigns demonstrated similar tactics at scale. When GitGuardian analyzed 6,943 compromised developer machines from that incident, researchers found 33,185 unique secrets, with at least 3,760 still valid. More striking: each live secret appeared in roughly eight different locations on the same machine, and 59% of compromised systems were CI/CD runners rather than personal laptops.

Adversaries now slip into the toolchain through compromised dependencies, malicious plugins, or poisoned updates. Once there, they harvest local environment data with the same systematic approach security teams use to scan for vulnerabilities, except they're looking for credentials stored in .env files, shell profiles, terminal history, IDE settings, cached tokens, build artifacts, and AI agent memory stores.

Secrets Live Everywhere in Plaintext

The LiteLLM malware succeeded because developer machines are dense concentration points for plaintext credentials. Secrets end up in source trees, local config files, debug output, copied terminal commands, environment variables, and temporary scripts. They accumulate in .env files that were supposed to be local-only but became a permanent part of the codebase. Convenience turns into residue, which becomes opportunity.

Developers are running agents, local MCP servers, CLI tools, IDE extensions, build pipelines, and retrieval workflows, all requiring credentials. Those credentials spread across predictable paths where malware knows to look: ~/.aws/credentials, ~/.config/gh/config.yml, project .env files, shell history, and agent configuration directories.
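Those predictable paths can be audited in a few lines. The sketch below checks which well-known credential files exist on the current machine; the path list comes from the locations named above and is illustrative, not exhaustive:

```python
from pathlib import Path

# Predictable locations where plaintext credentials accumulate --
# the same paths infostealers enumerate first. Illustrative, not exhaustive.
CREDENTIAL_PATHS = [
    "~/.aws/credentials",
    "~/.config/gh/config.yml",
    "~/.ssh/id_rsa",
    "~/.docker/config.json",
]

def audit_credential_files(paths=CREDENTIAL_PATHS):
    """Return the subset of well-known credential files present on this machine."""
    return [p for p in paths if Path(p).expanduser().is_file()]

if __name__ == "__main__":
    for hit in audit_credential_files():
        print(f"plaintext credential file on disk: {hit}")
```

If this script prints anything on your workstation, an infostealer running under your user account would have found the same files.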

Protecting Developer Endpoints at Scale

It’s important to build continuous protection across every developer endpoint where credentials accumulate. GitGuardian approaches this by extending secrets security beyond code repositories to the developer machine itself.

The LiteLLM attack demonstrated what happens when credentials accumulate in plaintext across developer endpoints. Here's what you can do to reduce that exposure.

Understand Your Exposure

Start with visibility. Treat the workstation as the primary environment for secrets scanning, not an afterthought. Use ggshield to scan local repositories for credentials that slipped into code or linger in Git history. Scan filesystem paths where secrets accumulate outside Git: project workspaces, dotfiles, build output, and agent folders where local AI tools generate logs, caches, and "memory" stores.

ggshield detecting a secret in a specific file from a path
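ggshield does this kind of sweep with hundreds of tuned detectors; as an illustration of the underlying idea only, a minimal regex pass over a directory might look like the following. The patterns shown are simplified examples, not real detector logic:

```python
import re
from pathlib import Path

# Simplified example patterns -- real detectors (like ggshield's) are far broader.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic_assignment": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
}

def sweep(root: str):
    """Yield (file, pattern_name) for every secret-like match under root."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                yield str(path), name
```

Pointing even this toy version at a project workspace or an agent's log directory tends to be instructive; the real tooling simply does the same thing with far better precision and coverage.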

Don't assume environment variables are safe just because they're not in files. Shell profiles, IDE settings, and generated artifacts often persist environment values on disk indefinitely. Scan these locations the same way you scan repos.
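A quick way to see what your shell profile persists is to list export statements whose variable names suggest a credential. A minimal sketch, with an illustrative file list and name filter:

```python
import re
from pathlib import Path

# Common shell profile files that persist environment values on disk (illustrative).
PROFILE_FILES = ["~/.bashrc", "~/.zshrc", "~/.profile", "~/.bash_profile"]

EXPORT_RE = re.compile(r"^\s*export\s+([A-Z0-9_]+)=(.+)$")
# Variable names that hint the exported value is a credential.
SENSITIVE_HINT = re.compile(r"(?i)(key|token|secret|password|credential)")

def persisted_env_secrets(files=PROFILE_FILES):
    """Return (file, variable) pairs for suspicious-looking exports stored on disk."""
    hits = []
    for f in files:
        path = Path(f).expanduser()
        if not path.is_file():
            continue
        for line in path.read_text(errors="ignore").splitlines():
            m = EXPORT_RE.match(line)
            if m and SENSITIVE_HINT.search(m.group(1)):
                hits.append((str(path), m.group(1)))
    return hits
```

Every pair this returns is an environment "variable" that is actually a plaintext file entry, readable by anything running as your user.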

Add ggshield pre-commit hooks to stop creating new leaks in commits while cleaning up old ones. This turns secret detection into a default guardrail that catches mistakes before they become incidents.

ggshield pre-commit command catching a secret
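With the pre-commit framework, wiring this in is a few lines of configuration. A sketch of a `.pre-commit-config.yaml`; pin `rev` to the ggshield release you actually use (the tag below is a placeholder):

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitguardian/ggshield
    rev: vX.Y.Z  # placeholder -- pin to a real ggshield release tag
    hooks:
      - id: ggshield
        language_version: python3
        stages: [pre-commit]  # on older pre-commit versions this stage is named "commit"
```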

Move Secrets Into Vaults

Detection without remediation is just noise. When a credential leaks, remediation typically requires coordination across multiple teams: security identifies the exposure, infrastructure owns the service, the original developer may have left the company, and product teams worry about production breaks. Without clear ownership and workflow automation, remediation becomes a manual process that gets deprioritized.

The solution is treating secrets as managed identities with defined ownership, lifecycle policies, and automated remediation paths. Move credentials into a centralized vault infrastructure where security teams can enforce rotation schedules, access policies, and usage monitoring. Integrate incident management with your existing ticketing systems so remediation happens in context rather than requiring constant tool-switching.

GitGuardian Analytics showing the state of secrets being monitored

Treat AI Agents as Credential Risks

Agentic tools can read files, run commands, and move data. With OpenClaw-style agents, "memory" is literally files on disk (SOUL.md, MEMORY.md) stored in predictable locations. Never paste credentials into agent chats, never teach agents secrets "for later," and routinely scan agent memory files as sensitive data stores.
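One defensive pattern is to gate writes to agent memory files behind a secret filter, so credentials never enter them in the first place. A hypothetical sketch; the file names come from the article above, and the filter patterns are simplified illustrations:

```python
import re

# Simplified secret-shaped patterns (AWS key, GitHub token, generic "sk-" key).
SECRET_LIKE = re.compile(r"(AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}|sk-[A-Za-z0-9]{20,})")

def safe_memory_write(memory_file: str, text: str) -> bool:
    """Append text to an agent memory file (e.g. MEMORY.md) only if it
    contains no secret-like strings. Returns False when the write is refused."""
    if SECRET_LIKE.search(text):
        return False  # refuse: credentials must never persist in agent memory
    with open(memory_file, "a") as f:
        f.write(text + "\n")
    return True
```

The same filter, run the other way, doubles as a scanner for memory files that already exist.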

Eliminate Whole Classes of Secrets

The fastest way to reduce secret sprawl is by removing the need for entire categories of shared secrets. On the human side, adopt WebAuthn (passkeys) to replace passwords. On the workload side, migrate to OIDC federation, so pipelines stop relying on stored cloud keys and service account secrets.

Start with the highest-risk paths where leaked credentials hurt most, then expand. Move developer access to passkeys and migrate CI/CD workflows to OIDC-based auth.
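In GitHub Actions, for example, OIDC federation replaces a stored cloud key with a short-lived token issued per job. A sketch assuming an AWS role already configured to trust the repository (the role ARN and account ID are placeholders):

```yaml
# .github/workflows/deploy.yml
permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy  # placeholder role
          aws-region: us-east-1
      # No AWS_SECRET_ACCESS_KEY stored anywhere: the job exchanges its
      # OIDC token for short-lived credentials at runtime.
```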

Use Ephemeral Credentials

If you can't eliminate secrets yet, make them short-lived and automatically replaced. Use SPIFFE to issue cryptographic identity documents (SVIDs) that rotate automatically instead of relying on static API keys.

Start with long-lived cloud keys, deployment tokens, and service credentials that developers keep locally for convenience. Shift to short-lived tokens, automatic rotation, and workload identity patterns. Each migration is one less durable secret that can be stolen and weaponized.

The goal is to reduce the value an attacker can extract from any successful foothold on a developer machine.

Honeytokens as Early Warning Systems

Honeytokens provide interim protection. Place decoy credentials in locations attackers systematically target: developer home directories, common configuration paths, and agent memory stores. When harvested and validated, these tokens generate immediate alerts, compressing detection time from "discovering damage weeks later" to "catching attacks while they unfold." This isn't the end state, but it changes the response window while systematic cleanup continues.
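Deployment can be as simple as dropping a decoy credentials file where infostealers look first. A minimal sketch; the key shown is an obviously fake placeholder, and in practice you would generate the decoy from a honeytoken service (such as GitGuardian's) so that any attempt to use it triggers an alert:

```python
from pathlib import Path

# Placeholder decoy -- replace with a generated honeytoken that alerts on use.
DECOY = """[default]
aws_access_key_id = AKIAFAKEHONEYTOKEN00
aws_secret_access_key = decoy-secret-replace-with-generated-honeytoken
"""

def plant_honeytoken(home: str) -> Path:
    """Write a decoy AWS credentials file into a directory attackers enumerate."""
    target = Path(home) / ".aws" / "credentials"
    if target.exists():
        raise FileExistsError(f"refusing to overwrite existing file at {target}")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(DECOY)
    return target
```

The guard against overwriting matters: a honeytoken should sit alongside a clean environment or a dedicated decoy account, never replace real credentials in use.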

Developer endpoints are now part of your critical infrastructure. They sit at the intersection of privilege, trust, and execution. The LiteLLM incident proved that adversaries understand this better than most security programs. Organizations that treat developer machines with the same governance discipline already applied to production systems will be the ones that survive the next supply chain compromise.

This article is a contributed piece from one of our valued partners.