From the moment it hits the wire—be it MISP or Mandiant—the value and efficacy of cyber threat intelligence (CTI) begin to decay for the organizations that intend to consume it. The data that was once essential for evaluating and reducing risk becomes dated and less helpful as adversaries constantly adapt their tactics, techniques, and procedures (TTPs).

We refer to this as 'threat intelligence decay.' Meanwhile, the NCSC has reported that threat actors have begun leveraging artificial intelligence, with an expectation that they will soon be using AI to evolve and enhance existing TTPs. The advent of AI is exacerbating the challenge of threat intelligence decay. Information that was once a golden nugget of defense can quickly turn into fool's gold, leaving organizations exposed to new threats.

When we look at one of the most practical applications that threat intelligence has in an organization—the threat management process—it's frightening how much these problems are compounding.

The inefficiencies of the threat management process

When a piece of threat intelligence lands on the radar of a security team, it works its way through multiple teams with the ultimate goal of assessing an organization's defensive readiness and enabling the remediation of any exposures. This is commonly initiated by security leadership or a C-level executive asking, "Are we protected against [insert celebrity CVE]?", or by someone within the detection and response function.

An oversimplified example of this lifecycle would look something like this:

  • The threat intelligence team receives a report, determines relevance, and begins extracting the relevant technical information. Analysts will attempt to fill in the blanks where technical specifics may be missing, discern what components of the attack are actionable, and how to best relay information to teams downstream.
  • Offensive security receives the information and begins developing security tests reflective of the threat. Red teamers spin up testing infrastructure and begin to run tests in a production-adjacent environment—potentially missing the configurations and XDR heuristics of a production environment. Finally, they'll contextualize results and prepare a report for the next team.
  • The detection engineering team will review the report and, using their knowledge of vendor query syntax, begin filling in the coverage gaps. They'll determine telemetry coverage, write detection logic, and stimulate the defensive controls to validate their new detections are working.
  • Finally, the teams will (ideally) work collectively to generate a summary attestation that provides some level of assurance about their new protection capabilities.
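The hand-offs above can be sketched as a simple pipeline, with each team consuming the previous team's artifact. This is a minimal illustration only; the stage names and fields are hypothetical simplifications, not a real tool or schema.

```python
# A minimal sketch of the lifecycle above as a data pipeline. All stage
# names and fields are illustrative, not a real product or format.

def intel_stage(report: str) -> dict:
    # Threat intel team: extract actionable technical details from a report.
    return {"report": report, "ttps": ["T1059.001"], "actionable": True}

def offensive_stage(intel: dict) -> dict:
    # Offensive security: turn each TTP into a test and record the outcome.
    intel["test_results"] = {t: "not_detected" for t in intel["ttps"]}
    return intel

def detection_stage(intel: dict) -> dict:
    # Detection engineering: write new logic for every gap the tests found.
    intel["new_detections"] = [
        t for t, r in intel["test_results"].items() if r == "not_detected"
    ]
    return intel

attestation = detection_stage(offensive_stage(intel_stage("Example advisory")))
print(attestation["new_detections"])
```

Each stage blocking on the one before it is exactly why the end-to-end process stretches into weeks.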

This process can often take weeks or months. If the name of the game for threat intelligence is speed, then this is a losing battle for network defenders.

When you consider the abundance of threats that security teams need to address and the time-intensive threat management process outlined above, it makes a lot of sense why security teams have a significant backlog of threats to work through, a backlog that will only grow with the looming cascade of AI-enabled attacks. All the while, their defensive controls are sitting there, likely capable of stopping threats but in need of augmentation.

Leveraging AI in the threat management process

So, if security teams are already accumulating a backlog of threat intelligence to work through and AI is increasing the volume and velocity of threats, how can security teams take back the momentum from threat actors? More AI, of course.

Here are the three most effective ways we're seeing teams apply AI to move their threat intelligence more quickly through the threat management process:

Processing threat intelligence

Threat intelligence reports are often very dense and lengthy. A good public example of this would be CISA's February 2024 advisory on the People's Republic of China compromise and persistent access to critical infrastructure in the US.

This report is at least forty pages long and riddled with moderately complex technical details. For a threat intelligence analyst to work through this and generate a BLUF (bottom line, upfront) for other team members can often be a slog. AI and large language models (LLMs) can be a home run here.

Enabled by AI, analysts can quickly pull out the need-to-know information, customize the scope and technical depth of the summary to different audiences, and—with more sophistication—potentially improve classification. Teams investing in AI and training bots in the necessary organizational context can dramatically improve the speed and efficiency of their analyst teams.
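Before a report ever reaches an LLM, much of the need-to-know information can be pulled out deterministically. The sketch below shows the kind of pre-processing step an AI-assisted triage pipeline might run to scope a summarization prompt; the sample report text is illustrative, not drawn from the CISA advisory.

```python
import re

# CVE identifiers and MITRE ATT&CK technique IDs, the two structured
# identifiers most threat reports lean on.
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}")
ATTACK_RE = re.compile(r"T\d{4}(?:\.\d{3})?")

def extract_indicators(report_text: str) -> dict:
    """Pull structured identifiers out of a threat report so an LLM
    prompt can be scoped to the need-to-know technical details."""
    return {
        "cves": sorted(set(CVE_RE.findall(report_text))),
        "techniques": sorted(set(ATTACK_RE.findall(report_text))),
    }

report = (
    "The actor exploited CVE-2023-27997 on edge devices, then used "
    "valid accounts (T1078) and PowerShell (T1059.001) for execution."
)
print(extract_indicators(report))
```

Feeding the LLM these extracted identifiers alongside the prose helps keep the generated BLUF grounded in what the report actually says.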

Turning threat intelligence into security tests

Actioning IOCs—such as hashes and IP addresses—from a threat intelligence report is certainly valuable, but a somewhat trivial exercise given the adoption of SOAR and IOC ingestion modules built directly into defensive technologies. There's an argument to be made that testing is unnecessary here.

The bigger hang-up for security teams is the more advanced adversary tradecraft: the behaviors of the threat actor moving across and interacting with the environment. To understand whether defensive controls are capable of seeing, detecting, or preventing these actions, teams must create security tests that simulate these behaviors. These tests need to be structured, safe, and of high enough fidelity to ensure the assessment is of actual value.

As with analyst teams, enabling offensive security teams with autonomous capabilities can yield massive time savings and a faster time to assurance. Trained on proper knowledge of reverse engineering and operating system internals, artificial intelligence can generate tests that represent observed adversary techniques and quickly evaluate defensive responses against them.
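What "structured, safe, high-fidelity" means in practice can be sketched simply. The test below is a hypothetical, benign stand-in for a command-execution behavior (ATT&CK T1059): it spawns a child process the same way tradecraft would, without doing any harm, and returns a structured result the downstream team can act on.

```python
import subprocess
import sys

def run_test(name: str, argv: list) -> dict:
    """Execute a benign behavior simulation and return a structured
    result record for the detection engineering team."""
    proc = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return {
        "test": name,
        "command": " ".join(argv),
        "executed": proc.returncode == 0,
        "stdout": proc.stdout.strip(),
    }

# Benign payload: spawn an interpreter child process, the shape of
# activity a T1059 detection should observe, with harmless content.
result = run_test(
    "T1059-benign-child-process",
    [sys.executable, "-c", "print('simulated payload')"],
)
print(result)
```

The point of the structured return value is that "did the control see this?" becomes a question the next team can answer against concrete telemetry, not a prose claim in a report.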

Generating detections for missing coverage

After testing is complete and the team has identified protection gaps, the baton is passed to the detection engineering team to generate new defensive capabilities. This is somewhat of a trial-and-error phase where the security engineer is leveraging their knowledge of system internals and query syntax to build logic that will alert on the behaviors of the threat with high confidence and low false positives.

A trained model with a strong context of defensive control capabilities and query language is capable of creating detections, or at the very least a 'first draft' of detections, directly out of the initial threat intelligence report. AI affords the missing jumpstart that places security teams in a much more viable position to accurately gauge assurance, clear their backlog, and quickly respond to threats. As the pace of threats continues to accelerate, security teams win when they take advantage of tools their adversaries are already using.
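A 'first draft' detection might look something like the sketch below: behavioral logic targeting encoded PowerShell execution, a behavior frequently called out in threat reporting. The event schema (`process_name`, `command_line`) is a hypothetical, vendor-agnostic simplification; a real draft would be expressed in the defensive control's own query syntax.

```python
# Command-line flags commonly flagged in reporting on malicious
# PowerShell use; an illustrative list, not an exhaustive one.
SUSPICIOUS_FLAGS = ("-enc", "-encodedcommand", "-w hidden")

def detect_encoded_powershell(event: dict) -> bool:
    """First-draft logic: alert when PowerShell is launched with an
    encoded or hidden command line."""
    if event.get("process_name", "").lower() != "powershell.exe":
        return False
    cmdline = event.get("command_line", "").lower()
    return any(flag in cmdline for flag in SUSPICIOUS_FLAGS)

hit = detect_encoded_powershell({
    "process_name": "powershell.exe",
    "command_line": "powershell.exe -NoP -Enc SQBFAFgA",
})
print(hit)
```

Even a draft at this level of fidelity gives the detection engineer something concrete to tune for false positives, rather than starting from a blank page.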

Machine-speed threat management with Prelude Security

Prelude Security offers a way for security teams to force-multiply their threat management program and ensure their defenses are autonomously updated at machine speed. Organizations can turn their threat intelligence into new, validated protections in under five minutes.

Taking advantage of the opportunities available with AI, organizations can simply upload threat intelligence reports and the platform automatically generates detections and accompanying security tests. This affords organizations the ability to know with certainty that they are protected against emerging threats. Organizations no longer have to rely solely on manual testing and remediations that keep them exposed for weeks or more. Experience the impact for yourself by getting in touch.

Harry Hayward — Director of Community at Prelude Security
This article is a contributed piece from one of our valued partners.