Most people know the story of Paul Bunyan. A giant lumberjack, a trusted axe, and a challenge from a machine that promised to outpace him. Paul doubled down on his old way of working, swung harder, and still lost by a quarter inch. His mistake was not losing the contest. His mistake was assuming that effort alone could outmatch a new kind of tool.

Security professionals are facing a similar moment. AI is our modern steam-powered saw. It is faster in some areas, unfamiliar in others, and it challenges a lot of long-standing habits. The instinct is to protect what we know instead of learning what the new tool can actually do. But if we follow Paul's approach, we'll find ourselves on the wrong side of a shift that is already underway. The right move is to learn the tool, understand its capabilities, and leverage it for outcomes that make your job easier.

AI's Role in Daily Cybersecurity Work

AI is now embedded in almost every security product we touch. Endpoint protection platforms, mail filtering systems, SIEMs, vulnerability scanners, intrusion detection tools, ticketing systems, and even patch management platforms advertise some form of "intelligent" decision-making. The challenge is that most of this intelligence lives behind a curtain. Vendors protect their models as proprietary IP, so security teams only see the output.

This means models are silently making risk decisions in environments where humans still carry accountability. Those decisions come from statistical reasoning, not an understanding of your organization, its people, or its operational priorities. You cannot inspect an opaque model, and you cannot rely on it to capture nuance or intent.

That is why security professionals should build or tune their own AI-assisted workflows. The goal is not to rebuild commercial tools. The goal is to counterbalance blind spots by building capabilities you control. When you design a small AI utility, you determine what data it learns from, what it considers risky, and how it should behave. You regain influence over the logic shaping your environment.

Removing Friction and Raising Velocity

A large portion of security work is translational. Anyone who has written complex jq filters, SQL queries, or regular expressions just to pull a small piece of information from logs knows how much time that translation step can consume. These steps slow down investigations not because they are difficult, but because they interrupt your flow of thought.

AI can remove much of that translation burden. For example, I have been writing small tools that put AI on the front end and a query language on the back end. Instead of writing the query myself, I can ask for what I want in plain English, and the AI generates the correct syntax to extract it. It becomes a human-to-computer translator that lets me focus on what I am trying to investigate rather than the mechanics of the query language.

In practice, this allows me to:

  • Pull the logs associated with a specific incident without writing the jq filter myself
  • Extract the data I need using AI-generated SQL or regex syntax
  • Build small, AI-assisted utilities that automate these repetitive query steps
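The pattern behind these utilities — plain English on the front end, query syntax on the back end — can be sketched as a thin wrapper around whatever LLM you already use. In this sketch, `ask_model` is a hypothetical stand-in for your provider's completion call (it is not a real API); it returns the kind of regex a model typically produces for the request, so the surrounding plumbing stays runnable.

```python
import re

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call.

    In a real tool this would call your provider's SDK (OpenAI,
    Anthropic, a local model, etc.). Here it returns the kind of
    pattern a model typically generates for this request.
    """
    return r"Failed password for (?:invalid user )?(\S+) from (\S+)"

def extract(question: str, log_lines: list[str]) -> list[tuple[str, ...]]:
    """Ask the model for a regex answering `question`, then apply it to the logs."""
    prompt = (
        "Return only a Python regular expression (no explanation) "
        f"that answers: {question}"
    )
    pattern = re.compile(ask_model(prompt))
    return [m.groups() for line in log_lines if (m := pattern.search(line))]

logs = [
    "sshd[1021]: Failed password for root from 203.0.113.7 port 52144",
    "sshd[1022]: Accepted password for alice from 198.51.100.4 port 40022",
    "sshd[1023]: Failed password for invalid user admin from 203.0.113.7 port 52199",
]
hits = extract("which users and source IPs had failed SSH logins?", logs)
# hits → [("root", "203.0.113.7"), ("admin", "203.0.113.7")]
```

The point of the design is that the investigator only ever states intent; the regex (or jq, or SQL) is generated, applied, and discarded without breaking the flow of the investigation.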

When AI handles the repetitive translation and filtration steps, security teams can direct their attention toward higher-order reasoning — the part of the job that actually moves investigations forward.

It is also important to remember that while AI can store more information than humans, effective security is not about knowing everything. It is about knowing how to apply what matters in the context of an organization's mission and risk tolerance. AI will make decisions that are mathematically sound but contextually wrong. It will approximate nuance, but it cannot truly understand it. It can simulate ethics, but it cannot feel responsibility for an outcome. Statistical reasoning is not moral reasoning, and it never will be.

Our value across offensive, defensive, and investigative roles is not in memorizing information. It is in applying judgment, understanding nuance, and directing tools toward the right outcomes. AI enhances what we do, but the decisions still rest with us.

How Security Professionals Can Begin: Skills to Develop Now

Much of today's AI work happens in Python, a language that has traditionally felt like a barrier to many security practitioners. AI changes that dynamic. You can express your intent in plain English and have the model produce most of the code. The model gets you most of the way there. Your job is to close the remaining gap with judgment and technical literacy.

That requires a baseline level of fluency. You need enough Python to read and refine what the model generates. You need a working sense of how AI systems interpret inputs so you can recognize when the logic drifts. And you need a practical understanding of core machine learning concepts so you know what the tool is doing beneath the surface, even if you are not building full models yourself.

With that foundation, AI becomes a force multiplier. You can build targeted utilities to analyze internal data, use language models to compress information that would take hours to process manually, and automate the routine steps that slow down investigations, offensive testing, and forensic workflows.

Here are concrete ways to start developing those capabilities:

  • Start with a tool audit: Map where AI already operates in your environment and understand what decisions it is making by default.
  • Engage actively with your AI systems: Do not treat outputs as final. Feed models better data, question their results, and tune behaviors where possible.
  • Automate one weekly task: Pick a recurring workflow and use Python plus an AI model to streamline part of it. Small wins build momentum.
  • Build light ML literacy: Learn the basics of how models interpret instructions, where they break, and how to redirect them.
  • Participate in community learning: Share what you build, compare approaches, and learn from others navigating the same transition.
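As one illustration of the "automate one weekly task" habit, this sketch handles a recurring chore — tallying a week of alerts — deterministically in Python, and leaves only the narrative summary to a model. The CSV sample and field names are invented for the example; the prompt string at the end is simply what you would hand to whichever LLM you use.

```python
from collections import Counter
import csv
import io

# Invented sample export; in practice this would come from your
# SIEM or ticketing system's weekly report.
ALERTS_CSV = """severity,rule,host
high,credential-stuffing,web01
low,port-scan,web01
high,credential-stuffing,db02
medium,new-admin-account,dc01
"""

def weekly_counts(csv_text: str) -> dict[str, Counter]:
    """Tally alerts by severity and by triggering rule."""
    by_severity, by_rule = Counter(), Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        by_severity[row["severity"]] += 1
        by_rule[row["rule"]] += 1
    return {"severity": by_severity, "rule": by_rule}

def draft_summary_prompt(counts: dict[str, Counter]) -> str:
    """Build the prompt you would send to an LLM for the narrative summary."""
    lines = [f"{sev}: {n}" for sev, n in counts["severity"].most_common()]
    return "Summarize this week's alert volume for leadership:\n" + "\n".join(lines)

counts = weekly_counts(ALERTS_CSV)
print(draft_summary_prompt(counts))
```

Notice the division of labor: the counting is code you can read, test, and trust, while the model only drafts prose on top of verified numbers — which keeps the judgment, and the accountability, with you.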

These habits compound over time. They turn AI from an opaque feature inside someone else's product into a capability you understand, direct, and use with confidence.

Join Me for a Deeper Dive at SANS 2026

AI is changing how security professionals work, but it does not diminish the need for human judgment, creativity, and strategic thinking. When you understand the tool and guide it with intent, you become more capable, not less necessary.

I will be covering this topic in greater detail during my keynote session at SANS 2026. If you want practical and actionable guidance for strengthening your AI fluency across defensive, offensive, and investigative disciplines, I hope you'll join me in the room.

Register for SANS 2026 here.

Note: This article was expertly authored by Mark Baggett, SANS Fellow.
