Every few years, a breach happens that security teams study for the wrong reasons.

SolarWinds is a good example. When the compromised Orion updates started reaching customer environments in the spring of 2020, the signals were already there: unusual DNS requests, unexpected authentication behavior in Azure AD, odd SAML token activity, and lateral movement from on-premises Active Directory into cloud environments.

None of it looked like an attack. Each signal sat at low or medium severity, and they were scattered across domains. The attackers had close to a year of dwell time before FireEye, itself a victim, discovered the campaign while investigating the theft of its own red-team tools.

We tend to call SolarWinds a one-off. It wasn't.

The real lesson from that breach, and from the ones that have followed it, is structural.

SOCs are designed, staffed, and measured around routine work: phishing, endpoint detections, and user anomalies. The people, processes, dashboards, and tools are all tuned to handle alerts that fit known, repeatable patterns. That's a rational response to the volume of work that shows up in the queue.

Alerts that lead to actual breaches rarely look like the ones you've prepared for.

They're long-tail and appear in systems and feeds beyond the SOC's core visibility.

  • An authentication pattern that looks wrong only when you line up three different systems at once.
  • A single WAF anomaly against a rarely used API.
  • A one-off warning in a vendor's update pipeline.
  • A dark web listing offering VPN or admin access to your environment.

None of those fit a standard queue. In practice, they're often not triggered consistently, and when they are, they rarely surface in time to matter.

You'll hear more often about the alert volume problem. Volume is an operational challenge, and the industry has spent years building tools to address it. What I'm describing is an architecture problem: the way we design SOCs, define success, and buy technology optimizes everything around the volume of normal alerts we process every day.

The few you have never seen before are, in many cases, the ones you haven't built a way to investigate.

Alerts the SOC isn't designed to handle

Long-tail alerts aren't in your top ten detections report. They're low-frequency signals at the edges of what the SOC monitors, and most SOC processes were never designed to handle them.

A single WAF anomaly against a rarely used API endpoint never makes the "most attacked" list because the payload doesn't match a known pattern. It logs at low severity, and nothing in the queue makes it stand out from the hundred other WAF events closed that week.

A one‑off security alert from a major SaaS or software vendor, "we've reset this admin account after detecting suspicious activity," lands in an email inbox or vendor portal instead of the SOC queue. It looks like a routine notification, and there's no SIEM rule wired to it, so it's deemed normal.

A dark web listing offering VPN or admin access to your environment shows up in an external monitoring report. The credentials were stolen and are now for sale, but that report lives in a separate tool and never turns into a triageable alert in the SOC.

An authentication pattern that only looks wrong when you line up three things at once: a rarely used service account, a SaaS app on the periphery of the environment, and an IP range nobody recognizes.

None of these triggers a predefined workflow. There's no clear owner, no existing playbook, and no queue category that captures what they are. So they get logged and often remain untouched.
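To make the authentication example concrete, here is a minimal correlation sketch. The account names, app names, and IP ranges are hypothetical, as is the idea of encoding them as static lists; the point is only that each signal alone is benign and the alert exists only when all three line up:

```python
from ipaddress import ip_address, ip_network

# Hypothetical context data; in practice this would come from an
# identity provider, a SaaS inventory, and network asset records.
RARE_SERVICE_ACCOUNTS = {"svc-legacy-sync"}     # rarely used service accounts
PERIPHERAL_APPS = {"contractor-portal"}         # SaaS apps at the edge of the environment
KNOWN_RANGES = [ip_network("10.0.0.0/8"), ip_network("192.168.0.0/16")]

def is_unknown_ip(ip: str) -> bool:
    """True if the source IP falls outside every known range."""
    return not any(ip_address(ip) in net for net in KNOWN_RANGES)

def long_tail_auth_flag(event: dict) -> bool:
    """Flag only when all three weak signals line up in a single event."""
    return (
        event["account"] in RARE_SERVICE_ACCOUNTS
        and event["app"] in PERIPHERAL_APPS
        and is_unknown_ip(event["source_ip"])
    )

events = [
    # All three signals at once: rare account, peripheral app, unknown IP.
    {"account": "svc-legacy-sync", "app": "contractor-portal", "source_ip": "203.0.113.7"},
    # Only one signal: the same rare account, but a core app and internal IP.
    {"account": "svc-legacy-sync", "app": "payroll", "source_ip": "10.1.2.3"},
]
flags = [long_tail_auth_flag(e) for e in events]  # [True, False]
```

No single-source detection rule fires on either event; only the join across identity, SaaS, and network context separates the first from the second.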

Why in-house SOCs can't close this gap

SOCs are built for volume. The hiring, training, and measurement models all reflect that reality. They face two structural problems:

  • 80% of alerts create fatigue. They're repetitive, high-volume, and routine, and the sheer throughput creates workflow friction.
  • 20% create blind spots. These are the unusual edge cases that SOC AI platforms don't cover, SOARs have no playbooks for, and MSSPs/MDRs escalate back to the team. They're the most complex alerts, hitting where the organization is most exposed and breach risk is highest.

The metrics reinforce it. Every standard KPI (alerts reviewed per analyst, case closure rate, mean time to respond) measures speed and throughput on known alert types. You don't improve any of those numbers by spending three hours on a single long-tail alert. So those cases slide to the bottom of the stack.
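Back-of-the-envelope arithmetic, using hypothetical but plausible numbers, shows why throughput KPIs punish this work:

```python
# Hypothetical shift model: 8 hours, routine alerts take ~6 minutes each.
shift_minutes = 8 * 60          # 480 minutes
routine_minutes = 6

# Scenario A: the analyst works only routine alerts.
all_routine = shift_minutes // routine_minutes          # 80 alerts closed

# Scenario B: the analyst spends 3 hours on one long-tail alert,
# then fills the rest of the shift with routine work.
long_tail_minutes = 3 * 60
with_long_tail = (shift_minutes - long_tail_minutes) // routine_minutes + 1  # 51 closed
```

Under "alerts reviewed per analyst," the analyst who did the riskier, more valuable investigation looks roughly 35% less productive, so the incentive is to never pick up that alert.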

The analysts who can investigate long-tail alerts are usually the most senior people in the room. They're also the ones already working on escalations, mentoring junior staff, and leading projects. There are a significant number of long-tail alerts, so assigning all of them to a senior analyst isn't practical. And pulling the analyst into a cross-domain investigation with no clear playbook competes directly with their priority work.

Most analysts don't have the skills or training for niche alerts, either. You could hire five more analysts and still be unlikely to find someone with the skill set to work every niche type of alert.

Even when an experienced senior analyst picks up one of these cases, the investigation usually doesn't follow a clear path. Long-tail alerts require pulling in signals from systems the SOC doesn't fully own, such as supply chain pipelines, SaaS apps, dark web feeds, and cloud identity logs. Each one requires different skills and tooling, and often involves locating a different stakeholder. It can be challenging to find the right HR person to ask a question or track down the manager of a user with suspicious activity.

Those investigations become projects, not tickets.

The alerts with the highest breach potential are exactly the ones the SOC is least structured to handle. The people are capable, but nothing in the design rewards that work or makes it easy to do. And like most alerts, the majority of these turn out to be false positives, which makes the investment even harder to justify.

AI tools were built for a different problem

AI SOC tools have made real progress on the routine alert problem. They suppress repeat noise, automatically close large categories of known alerts, and free analysts from triage work that doesn't require human judgment. That's the problem these tools were designed to solve.

The limitation shows up in what that design doesn't account for.

Most AI SOC platforms are trained on known, well-understood use cases and high-frequency data, such as phishing campaigns, endpoint detection, and well-documented attack patterns that appear across thousands of environments.

The models learn to recognize those alert types and make confident decisions on familiar signals. They work well for alerts that look like the training data.

Long-tail alerts don't look like the training data.

They sit outside what the model was built to evaluate. The system wasn't designed to have an opinion on them, so it doesn't.

When those signals do make it into the AI pipeline, they tend to land in one of two places.

  1. The system forces them into the closest known pattern, so the alert looks handled in the dashboard even though the underlying signal was never understood.
  2. The system flags them as uncertain and hands them back to a human, which looks like escalation on paper but means the automation stops exactly where the harder work begins.

The same AI that clears your routine queue has almost nothing to say about the alerts that carry the most risk.

Why MSSPs don't solve the problem either

MSSPs reward standardization and repeatability. Their model works best when their customers are similar, and alerts can be handled consistently.

Long-tail investigations don't fit that model. They take hours, require deep context about a single environment, and rarely produce a playbook that the provider can reuse elsewhere.

A handful of low-frequency, high-effort cases can blow up margins on a fixed-fee contract, so they're treated as exceptions instead of core work.

On paper, MSSPs offer investigation and response. In practice, the service is scoped and tiered. Deep forensic work, cross-domain hunting, and custom integration are often available only in higher-priced tiers or as out-of-scope line items. When a long-tail case turns into a project, it's possible you'll run into contractual limits.

Day to day, that usually leads to one of three outcomes for long-tail alerts. They're skipped because they don't match a predefined severity or pattern. They get surface-level triage and a generic conclusion that doesn't reflect the real risk. Or they're escalated back to the customer, who probably assumed the provider was handling them.

MSSPs are optimized for dealing with alert volume and routine alerts, not depth of investigation. For the kinds of alerts we're talking about, outsourcing mostly just moves the problem to another queue. It doesn't make it go away.

What the best SOC teams do

The best SOC teams know they can't treat odd alerts like every other ticket, so they build workarounds. They:

  • Identify blind spots and build internal expertise
  • Look for an AI solution that goes beyond routine use cases: a platform that isn't limited to pretrained patterns and can cover niche and unique alerts.
  • Create ad hoc escalation paths to senior engineers or threat hunters when something doesn't look standard. They operate on the understanding that "if it feels weird, bring it to this person" when there's no playbook.
  • Maintain internal wikis or Confluence pages that document edge cases and "the weird ones" that tools don't understand out of the box. That documentation becomes the closest thing the team has to institutional memory for long-tail alerts, but it's often only useful if the same people are around to interpret it.
  • Rely on a small group of go-to analysts who get pulled into anything unusual. Over time, those people accumulate an internal mental model of how the environment behaves across domains. They see patterns nobody else sees, but that knowledge mostly lives in their heads.

When a truly strange signal appears, the response often turns into a war room. Cloud, identity, application, and network owners all get pulled into a call to piece together a picture of what happened. The work gets done, but it's expensive, disruptive, and hard to repeat when something similar happens again. And it usually happens after the damage is done.

These are all rational responses to a gap in how SOCs are structured and equipped to handle long-tail alerts. Unfortunately, these approaches depend on specific individuals and informal paths.

They don't scale, and they tend to break the moment those key people are on PTO, leave the company, or move into management.

The question SOC leaders need to ask

We still measure SOC performance by volume and speed:

  • Mean time to respond
  • Percentage of alerts closed
  • Tickets processed per analyst

Those numbers say nothing about what happens to alerts that don't fit predefined types or playbooks.

The question SOC leaders need to ask is simple: What is our plan for the alert we've never seen before?

Triage that is bound by fixed categories and static runbooks has a ceiling. Adversaries know exactly where that ceiling is, and they are ready to exploit it. The job now is to design SOCs, processes, and tooling that can follow any alert, across any domain, regardless of how often it appears.

I'm not talking about replacing analysts. We need to make sure the alerts that don't look like anything they've handled before are visible, ownable, and have a path to resolution before they turn into the next breach. That means surfacing them with the enrichment and context any analyst needs to evaluate them quickly.

About Radiant: The new way of doing SOC

Radiant was built to fill these gaps. Its agentic AI analysts triage every alert, suppress routine false positives, and escalate only real threats with full investigation context and one‑click response.

About the Author: Shahar Ben-Hador is the CEO and Co-founder of Radiant Security. He spent nearly a decade at Imperva, where he rose from IT Manager to become the company's first CISO, experiencing the day-to-day challenges of running security operations. Later, as VP of Product Management at Exabeam, he led the building of the products he wished he had as a practitioner.
