Threat mitigation may be, to a degree, a specialist task involving cybersecurity experts, but the day-to-day work of threat mitigation still often comes down to systems administrators. For these sysadmins it's not an easy task: in enterprise IT, sysadmin teams have a wide remit but limited resources.
For systems administrators, finding the time and resources to mitigate a growing and constantly moving threat is challenging. In this article, we outline the difficulties involved in enterprise threat mitigation, and explain why automated, purpose-built mitigation tools are the way forward.
Threat management is an overwhelming task
A range of specialists works within threat management, but the practical implementation of threat management strategies often falls to systems administrators. Whether it's patch management, intrusion detection or remediation after an attack, sysadmins typically bear the brunt of the work.
It's an impossible task, given the growing nature of the threat. In 2021 alone, 28,000 vulnerabilities were disclosed – so many that a large proportion never even got as far as being assigned a CVE. That matters in an industry laser-focused on tracking CVEs, testing for their presence on systems and deploying patches that mention specific CVE numbers. You can't protect against what you don't know you're vulnerable to: if a given vulnerability has no CVE attached, and all your tools, mindset and processes are built around CVEs, something will fail. The reasons for not assigning a CVE to a vulnerability are many and outside the scope of this article, but none of them reduces the work that has to be done in security.
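To make that CVE-centric workflow concrete, here's a minimal sketch of the kind of check many teams rely on, assuming an RPM-based system where `rpm -q --changelog` is available (the package name and CVE number are only illustrative). Notice what it can never do: tell you anything about a fix, or a flaw, that was never given a CVE ID in the first place.

```python
# Minimal sketch of a CVE-centric check, assuming an RPM-based distribution
# where `rpm -q --changelog` is available. Package and CVE IDs are examples.
import subprocess

def package_mentions_cve(package: str, cve_id: str) -> bool:
    """Return True if the installed package's changelog mentions the CVE ID."""
    result = subprocess.run(
        ["rpm", "-q", "--changelog", package],
        capture_output=True, text=True, check=False,
    )
    return cve_id in result.stdout

if __name__ == "__main__":
    if package_mentions_cve("openssl", "CVE-2022-0778"):
        print("Changelog mentions the CVE - the fix appears to be applied.")
    else:
        print("No mention of the CVE - which proves nothing if no CVE was assigned.")
```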
Even if an organization had a three-figure team of sysadmins, it would be hard to keep track of this constantly growing list of vulnerabilities – and that's before considering interactions where a vulnerability affects a secondary system running on your infrastructure in a way that isn't at all obvious.
Over time, the flood just melts into a "background noise" of vulnerabilities. There's an assumption that patching happens methodically – weekly or perhaps even daily – but in reality, the relevant, detailed information within CVE announcements rarely reaches top of mind.
Overwhelmed teams take risks
With security tasks, including patching, becoming such an overwhelming exercise, it's no wonder that sysadmins start taking shortcuts. Perhaps a sysadmin misses the interaction between a new exploit and a secondary system, or neglects to properly test a patch before deploying the latest fix – either of which can end in a network-wide meltdown.
Handled without care, security management tasks such as patching can have lasting consequences: a small change can come back to haunt security teams days, weeks or months down the road by breaking something they weren't expecting.
"Closing holes" is just as much of a problem within this context. For example, take the Log4j vulnerability, where changing the Log4j default configuration could easily provide significant mitigation. It's an obvious, sensible step but the real question is – does the sysadmin team have the resources to complete the task? It's not that it's difficult to perform per se – but it's hard to track down every usage of log4j across a whole system fleet, plus the work needed is in addition to all the other regular activities.
Patching is another case in point: the resources required to do it consistently often aren't there. It's particularly tough because applying a patch traditionally means restarting the underlying service. Restarts are time-consuming and disruptive and, when it comes to critical components, restarting may simply not be realistic.
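To see why the restarts are unavoidable with conventional patching, consider the sketch below, loosely modeled on what tools such as needrestart or checkrestart do (it assumes a Linux host and enough privileges to read /proc/&lt;pid&gt;/maps): it lists processes that still have since-replaced, now-deleted libraries mapped in memory – processes that keep running the pre-patch code until they are restarted.

```python
# Minimal sketch: find processes still running pre-patch code on Linux.
# After a library is updated on disk, running processes keep the old copy
# mapped; /proc/<pid>/maps shows it as "(deleted)". Loosely modeled on what
# needrestart/checkrestart do; needs sufficient privileges to read the maps.
import os

def processes_needing_restart():
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/maps") as maps:
                stale = {
                    line.split()[-2]  # pathname column, before "(deleted)"
                    for line in maps
                    if line.rstrip().endswith("(deleted)")
                    and (".so" in line or "/usr/lib" in line)
                }
        except OSError:
            continue  # process exited, or permission denied
        if stale:
            yield int(pid), stale

if __name__ == "__main__":
    for pid, libs in processes_needing_restart():
        print(pid, sorted(libs))
```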
The net result is that essential security tasks simply don't get done, leaving sysadmins with a nagging feeling that security just isn't what it should be. The same goes for security monitoring, including penetration testing and vulnerability scanning. Yes, some organizations have specialists for these tasks – even going so far as to run dedicated red teams and blue teams.
But in many cases, security monitoring is yet another task for sysadmins, who will inevitably become overloaded and end up taking shortcuts.
And it's getting worse
One might think all that needs to happen is for sysadmins to get ahead of the burden – knuckle down and just get it done. By working through the backlog, perhaps with some extra help, sysadmins could get on top of the workload.
But there's a problem. The number of vulnerabilities is growing rapidly: once the team has dealt with the known problems, it will inevitably face even more. And the pace is accelerating, with more vulnerabilities reported every year.
Keeping up would mean growing teams by, say, 30% year on year – compounding into near-exponential headcount growth. It's simply not a battle that a human team relying on manual approaches can win; clearly, alternatives are needed.
Threat management automation is key
The good thing about computing, of course, is that automation often provides a way out of tight resource constraints – and that's the case with threat management too. If you want any chance of keeping pace with the growing threat environment, deploying automation across vulnerability management is key: from monitoring for new vulnerabilities, through patching, to reporting.
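As one small example of the monitoring piece, a script along the lines of the sketch below could poll NIST's public NVD REST API for CVEs published over the last day that mention a given product keyword. This is a bare-bones sketch: the keyword is a placeholder, the field names follow the NVD 2.0 schema, and a real monitor would add an API key, paging, retries and rate-limit handling.

```python
# Minimal sketch: poll the NVD 2.0 REST API for recent CVEs matching a keyword.
# Endpoint and parameters follow the public NVD API documentation; a real
# monitor would add an API key, paging, retries and rate-limit handling.
import json
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword: str, days: int = 1):
    """Yield (CVE ID, summary) for CVEs published in the last `days` days."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=days)
    params = urllib.parse.urlencode({
        "keywordSearch": keyword,
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    })
    with urllib.request.urlopen(f"{NVD_URL}?{params}") as response:
        data = json.load(response)
    for item in data.get("vulnerabilities", []):
        cve = item["cve"]
        yield cve["id"], cve["descriptions"][0]["value"]

if __name__ == "__main__":
    for cve_id, summary in recent_cves("linux kernel"):
        print(cve_id, "-", summary[:120])
```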
Some tools help with specific aspects, others claim to cover all of them, but efficacy tends to drop as a tool becomes more encompassing: specialized tools are usually better at their specific function than tools that try to do everything in one go. Think of it as the Unix philosophy – do one thing, and do it well.
Patching, for example, can and should be automated. But it's one of those security tasks that calls for a dedicated tool – one that helps sysadmins patch consistently and with minimal disruption.
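To be clear about the baseline, naive patch automation can be as simple as the sketch below (assuming a Debian/Ubuntu host and root privileges). A dedicated patching tool adds staged rollouts, reporting and, crucially, a way around the reboot this script ends up asking for.

```python
# Minimal sketch of naive patch automation on a Debian/Ubuntu host.
# A dedicated patching tool adds staged rollouts, reporting and, in the case
# of live patching, avoids the reboot this script ends up requesting.
import os
import subprocess

def apply_pending_updates() -> None:
    """Refresh package metadata and apply all pending updates non-interactively."""
    subprocess.run(["apt-get", "update"], check=True)
    subprocess.run(
        ["apt-get", "-y", "upgrade"],
        check=True,
        env={**os.environ, "DEBIAN_FRONTEND": "noninteractive"},
    )

if __name__ == "__main__":
    apply_pending_updates()
    # Debian/Ubuntu create this flag file when an update requires a reboot.
    if os.path.exists("/var/run/reboot-required"):
        print("Kernel or core library updated - a reboot (maintenance window) is needed.")
```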
A half-hearted approach won't work, because patching would still be tied to maintenance windows – robbing IT teams of the flexibility to respond to new threats in near real time without affecting the organization's business operations. A perfect fit for these requirements is live patching, through tools like TuxCare's KernelCare Enterprise, which delivers automatic, non-disruptive live patching for Linux distributions.
It's not just patching that needs to be automated, of course. Just as cybercriminals use automation to probe for vulnerabilities, tech teams should rely on automated, continuous vulnerability scanning and penetration testing. The same sphere of automation should also take in firewalls, advanced threat protection, endpoint protection, and so on.
There's nowhere safe to hide
Clearly, the threat problem is getting worse – and much faster than organizations could possibly hope to grow their security teams if they wanted to tackle it manually. Retreating to a particular corner of the technology landscape offers no solace either: solutions are now so interconnected, with code shared across so many platforms, that a single vulnerability can have an almost universal impact.
Besides, as recent research found, the top-ten list of the most vulnerable products now excludes some notable names. Microsoft Windows, previously seen as one of the most vulnerable operating systems, isn't even in the top ten – which is instead dominated by Linux-based operating systems. Relying on what are thought to be safer alternatives isn't a good idea.
All of which underlines that the only real safety lies in security automation. From vulnerability scanning through to patching, automation is the only route that gives overwhelmed sysadmins a degree of control over an exploding situation – and, realistically, the only manageable one.