There's a war raging in the heart of every developer. On one side, you have the id: the impulse-driven creative force that wants to code at the speed of thought and would prefer to deploy first and ask questions later. On the other side, there's the superego, which wants to test every line of code and would push a release back by a month if it meant catching one extra bug.

Experienced developers know how to act as the refereeing ego between these two forces, finding the right balance between speed and security.

But inexperienced or overworked devs often put their id in the driver's seat, which leads (among other things) to accidentally leaking developer secrets. These secrets include things like API and SSH keys, unencrypted credentials, and authentication tokens. Calling developer secrets "the keys to the kingdom" is something of a cliche, but it's tough to think of another phrase that accurately captures the unique power of this data.

Unfortunately, the people who most appreciate the power of developer secrets are often bad actors. Attackers continuously comb the web in search of them, and the hunt appears to be intensifying: Dark Reading reported several recent spikes in threat actors scanning for environment and Git configuration files.

To be clear, developer secrets don't have to be published in publicly accessible code repositories to be compromised; developers constantly share them via Slack DMs and Jira tickets, and I know from experience that unencrypted SSH keys are the first thing attackers look for on a compromised employee laptop. When bad actors get hold of these secrets, they can do all kinds of mischief: exfiltrate data, move laterally, plant ransomware, and alter code.

Exposed developer secrets are among the most dangerous types of credential-based risks, and the problem was an epidemic even before the advent of AI-based code assistants. Now, it's getting worse at a pace that should ring alarm bells everywhere.

In The State of Secrets Sprawl 2025, GitGuardian reported finding nearly 24 million hardcoded secrets in public GitHub repositories – a 25% jump over the previous year. Not all the blame for this leap belongs to AI; more people are learning to code, and they're making more rookie mistakes. Still, AI code assistants have a well-known tendency to expose secrets, and developers who rely heavily on them may be under pressure to ship quickly to show they're taking advantage of AI's productivity benefits. GitGuardian noted that repositories where Copilot is present are 40% more likely to contain leaked secrets than AI-free repositories.

As both a security professional and an AI enthusiast, I am personally invested in addressing this problem, and this is where the whole id/ego/superego thing comes back into play.

In the two-plus years I've been tinkering with LLMs, I've come to understand them as an accelerant of my creative coding id. I can finally work at the speed of my thoughts, and I find that liberating and exhilarating. But I'm lucky: I have years of experience to draw on and am fortunate enough to be in a work culture where I don't feel pressured to churn out commits faster than I'm comfortable with. My team tests everything rigorously, and we encourage transparency about how we're using genAI so we know when to be on the lookout for hallucinations and mistakes.

Many other engineers aren't so fortunate. They haven't developed that watchful superego that tells them to handle AI-generated code with caution, and they may not have senior devs on hand to give them helpful insights, like "the code in your private repository is one misconfiguration away from being public, so act accordingly." Without these safeguards, the steady drip of leaked developer secrets becomes a torrent.

So, how do we stem the tide? By developing a security superego that's just as powerful as the AI-accelerated id.

Part of the answer lies in using automated secrets detection tools already available today; pre-commit testing catches a lot of secrets before they're ever exposed. But many secrets don't conform to any easily identifiable structure. GitGuardian found that in 2024, 58% of the secrets they detected were these so-called "generic secrets," which can evade automated tools.
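As an illustration, a bare-bones pre-commit hook can refuse commits whose staged changes match obvious secret patterns. The sketch below (saved as .git/hooks/pre-commit and made executable) is only a minimal example with a deliberately tiny pattern list; a real deployment would rely on a dedicated scanner with far broader detection, including the entropy-based checks needed to catch those generic secrets.

```python
#!/usr/bin/env python3
"""Illustrative pre-commit hook: abort the commit if the staged diff looks
like it contains a secret. The patterns are deliberately minimal; real
scanners add hundreds of detectors plus entropy checks for generic secrets."""
import re
import subprocess
import sys

# A deliberately tiny, illustrative pattern list.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "generic assignment": re.compile(
        r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""
    ),
}

def staged_added_lines() -> list[str]:
    # Only lines being added in this commit are relevant.
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [l for l in diff.splitlines() if l.startswith("+") and not l.startswith("+++")]

def main() -> int:
    findings = []
    for line in staged_added_lines():
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append(f"possible {name}: {line[:80]}")
    for finding in findings:
        print(finding, file=sys.stderr)
    return 1 if findings else 0  # any non-zero exit code blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```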

To stop those secrets from getting out, the solutions are equal parts cultural and technical. For example, it's vital both to strongly discourage engineers from storing secrets on their hard drives or sharing them via platforms like Slack, and to provide secure, encrypted alternatives for storing and sharing credentials.
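On the technical side, the "secure alternative" usually means credentials injected at runtime by a vault, CI/CD variable, or deployment platform rather than hardcoded in source or pasted into chat. A minimal sketch, with hypothetical variable names:

```python
import os
import sys

def get_secret(name: str) -> str:
    """Fetch a credential injected at runtime (by a vault agent, CI/CD variable,
    or deployment platform) rather than hardcoded or pasted into chat."""
    value = os.environ.get(name)
    if not value:
        # Fail loudly instead of silently falling back to a hardcoded default.
        sys.exit(f"Missing required secret: {name}")
    return value

# Bad:    DB_PASSWORD = "hunter2"   # ends up in the repo, in backups, in Slack
# Better: the value exists only in the running process's environment
DB_PASSWORD = get_secret("DB_PASSWORD")
```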

Likewise, developers must be provided with the best company-owned AI tools, which gives them an incentive to use those tools transparently instead of sneaking off to experiment with unsanctioned shadow AI. When it comes to agentic AI, credential security must be built in: agents are securely provisioned with the secrets they need to operate, but those secrets are obfuscated so the agents never directly interact with them.
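One way to picture that obfuscation is a broker pattern: the agent holds only an opaque handle, and a trusted component resolves the handle and attaches the real credential to the outbound call. The class, handle, and endpoint below are hypothetical, just to show the shape of the idea:

```python
import os
import urllib.request

class CredentialBroker:
    """Hypothetical broker: the agent holds only an opaque handle, and the
    broker resolves that handle and attaches the real credential to the
    outbound request, so the secret never enters the agent's context or logs."""

    def __init__(self) -> None:
        # Assumed to be injected by the runtime environment, never by the agent.
        self._store = {"billing-api": os.environ.get("BILLING_API_TOKEN", "")}

    def call(self, handle: str, url: str) -> bytes:
        token = self._store[handle]  # resolved here, outside the agent's reach
        req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

# The agent's tool call references only the handle "billing-api", never the token:
# broker = CredentialBroker()
# invoices = broker.call("billing-api", "https://billing.example.com/v1/invoices")
```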

Finally, senior devs and engineering leaders must act as the refereeing egos of their organizations and insist on rigorous review of code, including (but not limited to) AI-generated code, to identify exposed credentials.

Taking these measures is how we embrace AI without serving bad actors a cornucopia of exposed credentials. It's also how we train the next generation of developer talent to dream big while paying attention to the details. At the end of the day, developers and organizations need both their ids and their superegos to thrive, so let's develop the tools and the cultures to nurture them both.

About the Author: Jason Meller is a vice president of product at 1Password, the founder of Kolide, and the author of "honest.security." Jason began his security and product career at GE's elite computer incident response team. From there, he moved to Mandiant, quickly working his way up to becoming the chief security strategist in 2015. He later founded and served as the CEO of Kolide until its acquisition by 1Password in 2024.
