AI SOC agents are going through a hype cycle. If we're going by Gartner's Hype Cycle for Security Operations, 2025, the technology is still in the "Innovation Trigger" phase, but it's on the cusp of the "Peak of Inflated Expectations."
Every vendor claims their solution will revolutionize security operations. Every conference features another keynote promising autonomous defense. And every CISO is being asked whether AI will replace their security team.
At Redis, implementing AI in the SOC has been more of a measured journey. The model is a hybrid SOC, combining external service providers with internal resources. In this case, Prophet Security is currently proving itself alongside a more traditional MDR provider.
But let's take a step back.
The Tipping Point for AI Adoption within the SOC
Considering an AI solution for Redis' SOC came down to the confluence of three drivers.
On an individual level, AI tools and platforms were delivering more value. That's a marked contrast from the early days, when ChatGPT's responses were far less useful. There's now much more useful content than hallucination.
At the team level, more vendors were entering the space with compelling stories about applying this technology to specific security problems. There seemed to be real potential to amplify, at team scale, what we were already experiencing with AI individually.
At the enterprise level, Redis has emerged as a key piece of the AI technology stack and continues to power a faster AI landscape. That's a result of the incredible work being done on the products and services we offer to make AI faster, cheaper, and better. Adopting AI across more of the organization is not an initiative unique to us, but in the threat detection and response (TDR) and SecOps space, the ability to act at machine speed and accelerate decision-making and response pays dividends when you're keeping threat actors out of the environment.
When we put those three together, we hit a tipping point: it was time to look more seriously at vendors in the space, how they're approaching AI in the SOC, and what value we could get.
Our Pain Points Over Vendor Promises
Many of the SecOps challenges Redis faces are not unique to us. Recurring questions across security teams include:
- How long does it take to respond to alerts?
- How well are we investigating and responding?
- How do we deal with false positives?
- How do we avoid false negatives?
Beyond those classic security questions, there were a couple of strategic things our team had to keep in mind as well.
First, many SecOps teams are being tasked to do more with the same or less. For Redis, that means scaling our team's capabilities in a way that greatly outpaces scaling of the team itself. We're driven to punch above our weight and deliver positive security impacts to the organization.
Second, transparency is non-negotiable. No one wanted a "magic" AI black box that spat out answers we had to trust blindly. To that end, the platforms willing to address explainability from the outset stood out. We wanted to understand the reasoning, logic, and decision-making – similar to what we'd get with a human analyst – as well as the output itself.
Incremental Implementation
Bringing Prophet Security into the environment was an exercise in building trust. (While we're not supposed to anthropomorphize the AI, we're going to a little; it's a bit like having a highly effective, never-tiring analyst but without the tribal knowledge of your environment.)
At first, we hooked it up, and it started investigating and producing outputs. We reviewed those outputs and provided feedback to fine-tune the AI for our environment. From there, we integrated it into our existing workflows, using Prophet AI dispositions as another data point for our human analysts' decision-making in broader detection and response workflows. Even in that configuration, it really cut down the time required for initial orientation to those alerts.
Now, especially for lower-severity alerts that might otherwise sit at the bottom of a queue, we generally let the AI handle them end-to-end and bubble up anything that is malicious or inconclusive. That allows the team to maximize attention on the things that need it.
The goal is to get to a point where we have enough trust in the AI's determination to make it the primary decision-driving data point, or even a stand-alone decision.
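To make that workflow concrete, here is a minimal sketch of the kind of routing policy described above, assuming a hypothetical disposition payload coming back from the AI platform. The field names, severity labels, and disposition values are illustrative only; they are not Prophet Security's actual API or our production tooling.

```python
# Hypothetical sketch of the triage policy described above: AI-investigated
# alerts are either closed automatically, escalated to a human analyst, or
# queued with the AI's verdict attached as supporting context.
# All field names and values here are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Disposition(Enum):
    BENIGN = "benign"
    MALICIOUS = "malicious"
    INCONCLUSIVE = "inconclusive"


@dataclass
class InvestigatedAlert:
    alert_id: str
    severity: str            # e.g. "low", "medium", "high", "critical"
    disposition: Disposition
    summary: str             # AI-generated reasoning, kept for explainability


def route_alert(alert: InvestigatedAlert) -> str:
    """Decide what happens to an alert after the AI investigation completes."""
    if alert.severity == "low" and alert.disposition is Disposition.BENIGN:
        # Low-severity, benign verdicts are closed end-to-end by the AI.
        return "auto_close"
    if alert.disposition in (Disposition.MALICIOUS, Disposition.INCONCLUSIVE):
        # Anything malicious or inconclusive bubbles up to a human analyst.
        return "escalate_to_analyst"
    # Everything else lands in the analyst queue with the AI disposition
    # attached as one more data point for the human decision.
    return "queue_with_ai_context"


if __name__ == "__main__":
    example = InvestigatedAlert(
        alert_id="ALRT-1234",
        severity="low",
        disposition=Disposition.BENIGN,
        summary="Sign-in from a known corporate IP; matches the user's travel record.",
    )
    print(route_alert(example))  # -> auto_close
```

The design choice worth noting is that the AI's reasoning summary travels with the alert, so a human can always audit why something was closed or escalated, which is exactly the explainability requirement called out above.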
The Results: Tangible and Intangible
Redis' classic SOC metrics moved in the right direction.
For example, the average time to investigate dropped dramatically with the AI SOC solution, down to roughly 10 minutes at most. It would take a human analyst – even one with a solid playbook – hours to do that same level of investigation. Hours to minutes is a fundamental change in how fast we can move.
Coverage and consistency also improved. Those low-severity alerts that never rose to the surface are now investigated with the same depth as critical alerts.
The less tangible benefit? The Redis team can stay focused on strategic work: there's less context switching back to alert triage, analysts aren't spending half a day burning down queues, and we can concentrate on maturing our program and building capabilities.
The Path Forward: What We Would Tell Other Practitioners
While we're still early in our work with Prophet Security, the gains in investigation speed, consistency, and focus have been meaningful enough that I'd recommend any SOC leader evaluating AI in their environment take a serious look at what vendors in this space are building.
First, define your actual problems and non-negotiables before talking to vendors. Identify success metrics that are tightly coupled to the problems you're looking to solve with an AI SOC solution. Evaluate AI SOC providers on how well they can improve your current workflows and functions. (And when you're doing your market research, look into how vendors talk about things like transparency and explainability.)
Start small and measure impact early. Treat implementation like onboarding a new team member. Give the AI time to learn. Review its work. Provide feedback. Build trust gradually before giving it autonomy.
Most importantly, view this as augmentation, not replacement. Platforms like Prophet Security are built to let your team combine human judgment with machine speed; the point is empowering your analysts to handle more, not eliminating them. AI handles the repetitive, time-consuming investigation work. Humans focus on complex analysis, threat hunting, and building defensive capabilities.
That is how you get beyond the hype and actually improve your security operations.
Author Bio: Justin Lachesky is a results-driven security leader with a track record of delivering measurable impact for customers and internal teams. He has deep experience across both operational and strategic functions in cybersecurity, with a focus on information security, network intrusion analysis, and incident response across diverse environments. Justin is passionate about helping organizations strengthen their defensive posture and consistently drives meaningful outcomes, not just activity.
Justin Lachesky, Director of Cyber Resilience at Redis



