I have been a Star Wars fan since the moment I took my seat in the theatre and saw Princess Leia's rebel ship trying to outrun an Imperial Star Destroyer. It's impossible to see that movie (or its greatest successor, Andor) and not take the side of the underdog rebels, who are determined to escape the iron fist of imperial control.

Of course, in my work as a security professional, "control" is the name of the game. I've spent as much of my career trying to stop my own end-users from going outside the lines as I have trying to guard against malicious outsiders. I personally still think I'm the good guy, since my ultimate goal is to protect sensitive data, but I understand why IT and security teams are often seen as the bad guys. After all, we do operate according to something called the "rule of no." It's not great branding, and increasingly, it just isn't working.

Here's the situation in 2025: we have a galaxy's worth of diverse applications, devices, and user identities accessing company data. These disparate systems resist centralized control, in part because it's trivially easy for tech-savvy users to adopt unsanctioned SaaS apps and personal devices for work. These users aren't behaving maliciously any more than I'm Darth Vader; they're just trying to use the best tools they can in order to be productive. However, existing IAM (identity and access management), MDM (mobile device management), and SSO (single sign-on) tools weren't designed for this kind of behavior.

The result is a widening Access-Trust Gap that is undermining the very foundations of security. (For the unfamiliar, the Access-Trust Gap refers to the security risks posed by unfederated identities, unmanaged devices, applications, and AI-powered tools accessing company data without proper governance controls.) If you'll allow me to stick with the Star Wars analogy, you can think of it as that little unguarded vent that proves to be a fatal design flaw for the Death Star.

The proliferation of access outside the visibility and control of security teams has created a crisis of access governance, and we haven't even introduced the most revolutionary change yet. I'm talking, of course, about agentic AI.

AI agents have only just begun to spread in the enterprise, and already they're upending everything we thought we knew about securing access. To broadly sketch out the challenge, these autonomous AI agents require broad permissions to function, pull data, and take actions across multiple systems. In some ways, these agents have the same requirements as any other worker: they need credentials, appropriate access tailored to their role, and for that access to be revocable when they no longer need it. But AI agents also behave in ways that are profoundly different from humans: they make decisions faster and often in opaque ways.
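Those worker-like requirements, credentials, role-tailored access, and revocability, can be made concrete. Here is a minimal, hypothetical sketch (the class and field names are illustrative, not any real product's API) of an AI agent modeled as a first-class identity whose access is scoped to its role and expires or can be revoked by default:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: an AI agent treated like any other worker identity,
# with role-scoped permissions, a built-in expiry, and explicit revocation.
@dataclass
class AgentIdentity:
    agent_id: str
    role: str
    scopes: set = field(default_factory=set)
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(hours=8)
    )
    revoked: bool = False

    def can_access(self, scope: str) -> bool:
        """Access requires an unexpired, unrevoked identity holding the scope."""
        if self.revoked or datetime.now(timezone.utc) >= self.expires_at:
            return False
        return scope in self.scopes

    def revoke(self) -> None:
        self.revoked = True

agent = AgentIdentity("billing-bot", role="finance", scopes={"invoices:read"})
assert agent.can_access("invoices:read")
assert not agent.can_access("invoices:write")  # not granted to this role
agent.revoke()
assert not agent.can_access("invoices:read")   # access dies with the identity
```

The point of the sketch is the default posture: access is narrow and temporary unless someone deliberately extends it, which is the opposite of the broad, standing permissions agents are often granted today.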

Security is rarely part of the equation with these systems: they are designed to increase efficiency and productivity, so they often bypass basic protections like MFA. This creates the risk of granting access without sufficient trust in AI agents, and that trust can itself be exploited to gain access through jailbreaks or prompt injection. Combine the velocity and scale of agent decision-making with its lack of explainability, and traditional control models erode rapidly. The situation harks back to the early days of the internet, when innovation and mass enthusiasm ran far ahead of strong encryption and access control protocols. Now, as then, we risk digging a security hole that will take years to climb out of.

Now is the moment, before the dam of agentic AI has fully burst, to address the Access-Trust Gap. It's also the perfect moment to rethink the heavy-handed "rule of no" approach to security that got us here.

The truth is that end users, like the Star Wars rebels, simply can't be stopped from using the tools that make them most productive. And frankly, the role of security shouldn't be to try to stop them; we should enable them to experiment and innovate safely. So, rather than trying (and failing) to block every personal device, unapproved app, and AI agent, we need to focus on securing them. Allow users to use their personal laptops, but require that they be in a secure and compliant state. Allow users to use their preferred SaaS apps, but require them to use strong authentication. Allow and encourage users to experiment with AI agents, but with tailored permissions, detailed logs, and a human always in the loop.
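That "allow, but require" pattern can be expressed as a simple policy check. The following is a hypothetical sketch (request kinds, field names, and messages are all illustrative assumptions, not a real policy engine): each request is granted once the matching safeguard is in place, and a denial comes back as a fix, not a flat "no."

```python
# Hypothetical "rule of yes" policy: rather than blocking personal devices,
# unapproved SaaS apps, and AI agents outright, each request is allowed once
# it meets its matching safeguard. All names here are illustrative.
def evaluate(request: dict) -> tuple[bool, str]:
    kind = request.get("kind")
    if kind == "personal_device":
        # Personal laptops are fine, if they are in a compliant state.
        ok = request.get("device_compliant", False)
        return ok, "yes: compliant device" if ok else "fix: bring device into compliance"
    if kind == "saas_app":
        # Preferred SaaS apps are fine, behind strong authentication.
        ok = request.get("strong_auth", False)
        return ok, "yes: strong auth in place" if ok else "fix: enable MFA or passkeys"
    if kind == "ai_agent":
        # Agents are fine, with scoped permissions, logging, and human review.
        ok = (request.get("scoped_permissions", False)
              and request.get("audit_logging", False)
              and request.get("human_in_loop", False))
        return ok, ("yes: guardrails present" if ok
                    else "fix: scope permissions, log actions, add human review")
    return False, "unknown request kind"

allowed, reason = evaluate({"kind": "personal_device", "device_compliant": True})
assert allowed
```

The design choice worth noticing is that every denial carries a path to "yes," which is what draws shadow IT into the light instead of pushing it further underground.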

Taking a user-focused, "rule of yes" approach to security is the only way to effectively draw shadow IT and AI into the light. Even better, it allows security and IT teams to be where we've always belonged: on the side of the good guys.

About this Author: Dave is the Global Advisory CISO at 1Password. He brings over 30 years of industry experience in IT security operations and management, at companies such as Akamai, IBM, Duo Security, Cisco, and AMD. He is also the founder of the security site Liquidmatrix Security Digest as well as host of the Liquidmatrix, Plaintext, and Chasing Entropy podcasts. Dave currently serves on the board of directors for BSides Las Vegas and the advisory board for the Black Hat Sector Security Conference. He co-founded the BSides Toronto conference and was a goon on the speaker operations team for DEF CON for over 13 years. He previously held a board position at (ISC)². For fun, Dave loves playing bass guitar, grilling, and spending quality time with his kids. He's also a part owner of a whisky distillery and a soccer team.

Dave Lewis — Global Advisory CISO at 1Password