There's something about craftsmanship. It's personal, it's artistic, and it can be incredibly effective at achieving its goals. On the other hand, mass-market production can be effective in other ways: through speed, efficiency, and cost savings.
The story of data centers is one of going from craftsmanship – where every individual machine is a pet project, maintained with great care – to mass production with big server farms where individual units are completely disposable.
In this article, we take a look at how data centers have changed shape over the decades. We examine the implications for data center workloads, and for the people who run them – who have now lost their pet systems. We'll also review the cybersecurity implications of the new data center landscape.
Pet systems with a big purpose
For any sysadmin who started their career before the advent of virtualization and other cloud and automation technologies, systems were finely crafted pieces of hardware – and treated with the same love as a pet.
It starts with the emergence of computer rooms in the 1940s, where enormous machines, connected by hand with miles of wiring, were nothing short of a labor of love. These computer rooms housed the steam engines of the computing age, soon to be replaced by more sophisticated equipment thanks to the silicon revolution. As for security? A big lock on the door was all that was needed.
Mainframes, the precursors to today's data centers, were finely crafted solutions too, with a single machine taking up an entire room and needing continuous, expert craftsmanship to keep operating. That involved both hardware and coding skills, with mainframe operators coding on the fly to keep their workloads running.
From a security perspective, mainframes were reasonably easy to manage. It was (way) before the dawn of the internet age, and IT managers' pet systems faced fairly limited risk of breach. The first computer viruses emerged in the 1970s, but these were hardly a risk to mainframe operations.
Prefab computing power with unique management requirements
Bring on the 1990s and the emergence of data centers. Individual, mass-produced machines offered off-the-shelf computing power that was much more affordable than mainframe units. A data center simply consisted of a collection of these computers – all hooked up to each other. Later in the decade, data centers were also connected to the internet.
Though the individual machines required minimal physical maintenance, the software that drove their workloads required continuous attention. The 1990s data center was still very much composed of pet systems: every machine was an act of server management craftsmanship.
From manual software updates to running backups and maintaining the network, IT admins had their work cut out for them – if not in physically maintaining machines, then certainly in managing the software that supported their workloads.
It's also the era that first exposed corporate workloads to external security threats. With data centers now linked to the internet, there was suddenly a doorway for attackers to walk through. That put IT admins' pet systems at risk – the risk of data theft, of equipment misuse, and more.
So, security became a major concern. Firewalls, threat detection, and regular patching against vulnerabilities were the sort of security measures that IT admins had to adopt to protect their pet systems through the turn of the millennium.
Server farms – mass-produced, mass-managed
The 2000s saw a major change in the way that workloads were handled in the data center. The core drivers behind this change were efficiency and flexibility. Given the huge demand for computing capacity, solutions such as virtualization – and, a little later, containerization – quickly gained ground.
By loosening the strict link between hardware and operating system, virtualization meant that workloads became, relatively speaking, independent of the machines that run them. The net result was a wide range of benefits. Load balancing, for example, ensures that demanding workloads always have access to computing power, without the need for excessive financial investment in hardware. High availability, in turn, is designed to eliminate downtime.
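As a simple illustration of those two ideas – with made-up instance names rather than any particular product – the Python sketch below spreads incoming requests across a pool of interchangeable instances and routes around any instance that has failed.

```python
import itertools

# Hypothetical pool of interchangeable instances backing a single service.
INSTANCES = [f"webserver-{n:03d}" for n in range(1, 5)]

# Health status kept as a simple in-memory map; a real load balancer would
# probe each instance over the network instead.
healthy = {name: True for name in INSTANCES}

_round_robin = itertools.cycle(INSTANCES)

def pick_instance() -> str:
    """Return the next healthy instance, round-robin style."""
    for _ in range(len(INSTANCES)):
        candidate = next(_round_robin)
        if healthy[candidate]:
            return candidate
    raise RuntimeError("no healthy instances left")

# Any single instance can fail without taking the service down.
healthy["webserver-002"] = False
for _ in range(4):
    print("routing request to", pick_instance())
```

No single machine matters here: the service only cares that enough healthy instances exist somewhere in the pool.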
As for individual machines – well, these are now completely disposable. The technologies in use in modern data centers mean that individual machines have essentially no meaning – they're just cogs in a much larger operation.
These machines no longer had nice individual names; they simply became instances – e.g., the webserver service was no longer provided by the incredibly powerful "Aldebaran" server, but rather by a cadre of "webserver-001" to "webserver-032". Tech teams could no longer afford to spend the time to tune each one as precisely as before, but the sheer numbers involved and the efficiency gained through virtualization meant that the overall computing power in the room still surpassed what pet systems could deliver.
Limited opportunity for craftsmanship
Container technologies like Docker – and, more recently, Kubernetes – have taken this process even further. You no longer need to dedicate full systems to perform a given task; you just need the basic infrastructure provided by the container to run a service or application. It's even faster and more efficient to have countless containers underpinning a service rather than specific, dedicated systems for each task.
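As an illustration only – assuming a host with Docker installed and access to the public nginx image, with container names and port numbers chosen arbitrarily – the sketch below starts several identical containers behind a single service instead of dedicating a full system to each.

```python
import subprocess

# Illustrative only: assumes Docker is installed and the public "nginx"
# image can be pulled. Each container is an interchangeable copy of the
# same service.
IMAGE = "nginx"
REPLICAS = 3

for n in range(1, REPLICAS + 1):
    name = f"webserver-{n:03d}"
    host_port = 8080 + n
    subprocess.run(
        [
            "docker", "run",
            "-d",                      # run detached in the background
            "--rm",                    # remove the container when it stops
            "--name", name,            # predictable, disposable name
            "-p", f"{host_port}:80",   # map a host port to the container
            IMAGE,
        ],
        check=True,
    )
    print(f"started {name} on port {host_port}")
```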
Deploying a new system no longer requires the manual installation of an operating system or a labor-intensive configuration and service deployment process. Everything now resides in "recipe" files, simple text-based documents that describe how a system should behave, using tools like Ansible, Puppet or Chef.
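The exact syntax differs from tool to tool, but the underlying idea is the same: describe the desired state, then let an idempotent engine converge every machine toward it. The Python sketch below mimics that pattern with a made-up, heavily simplified recipe – it is not real Ansible, Puppet, or Chef syntax.

```python
import copy

# A simplified, made-up "recipe": it describes desired state, not the
# step-by-step commands needed to reach it.
RECIPE = {
    "packages": ["nginx", "openssl"],
    "services": {"nginx": "running"},
    "files": {"/etc/motd": "Managed automatically - do not edit by hand.\n"},
}

def apply_recipe(host_state: dict, recipe: dict) -> dict:
    """Converge a (simulated) host's state toward the recipe, idempotently."""
    packages = host_state.setdefault("packages", set())
    packages.update(recipe["packages"])                    # "install" packages
    host_state.setdefault("services", {}).update(recipe["services"])
    host_state.setdefault("files", {}).update(recipe["files"])
    return host_state

# Running the same recipe twice changes nothing - the property that makes
# mass deployment safe to repeat across any number of identical systems.
host = apply_recipe({}, RECIPE)
snapshot = copy.deepcopy(host)
assert apply_recipe(host, RECIPE) == snapshot
print(host)
```

Because the recipe is just data, spinning up another dozen identical systems is simply a matter of applying the same recipe a dozen more times.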
IT admins could still include some tweaks or optimizations in these deployments but, because each server is no longer unique, and because there are so many of them supporting each service, it hardly makes sense to spend the effort. Admins who need more performance can always reuse the recipe to fire up a few more systems.
While a few core services, like identity management servers or other systems storing critical information, would still remain pets, the majority were now regarded as cattle – sure, you didn't want any of them to fail, but if one did, it could quickly be replaced with another, equally unremarkable system performing the same task.
Take into account the fact that workloads increasingly run on rented computing resources residing in large cloud facilities, and it's clear that the days of running servers as pet systems are over. It's now about mass production – in an almost extreme way. Is that a good thing?
Mass production is great – but there are new risks
The flexibility and efficiency brought by mass production are good things. In the computing environment, little is lost by no longer needing to "handcraft" and "nurture" computing environments. It's a much sleeker, faster way to make workloads go live – and to make sure that they stay live.
But there are a number of security implications. While security could be "crafted" into pet systems, cattle environments require a slightly different approach – and certainly still require a strong focus on security. For example, cattle systems are spawned from the same recipe files, so any intrinsic flaws in the base images used for them will also be deployed at scale. This directly translates into a larger attack surface when a vulnerability surfaces, simply because there are many more possible targets. In this situation, it doesn't really matter that you can fire up a new system within minutes or even seconds – do that across thousands of servers at once and your workloads will be impacted regardless of the time it takes, and that will hit your bottom line.
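To make that concrete, imagine a (purely hypothetical) inventory that records which base image each instance was spawned from: a single flawed image immediately translates into a long list of exposed systems.

```python
# Hypothetical inventory: instance name -> base image it was spawned from.
INVENTORY = {f"webserver-{n:03d}": "internal/web-base:1.4" for n in range(1, 33)}
INVENTORY.update({f"worker-{n:03d}": "internal/batch-base:2.1" for n in range(1, 17)})

def exposed_instances(inventory: dict, vulnerable_image: str) -> list:
    """List every instance built from a base image with a known flaw."""
    return [name for name, image in inventory.items() if image == vulnerable_image]

# One flawed base image immediately exposes every instance built from it.
affected = exposed_instances(INVENTORY, "internal/web-base:1.4")
print(f"{len(affected)} instances share the vulnerable base image")
```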
To a large degree, automation is now the answer to security in server farms. Think of tools like automated penetration scanning and automated live patching. They provide more airtight security against an equally automated threat, and they reduce the administrative overhead of managing these systems.
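What that automation looks like depends on the tooling, but even a small script conveys the shape of it. The sketch below – with hypothetical host names, and assuming key-based SSH access to Debian-style systems – asks every server in a fleet how many packages are awaiting an update, so that humans only need to look at the outliers.

```python
import subprocess

# Hypothetical fleet; in practice the list would come from an inventory system.
HOSTS = [f"webserver-{n:03d}.example.internal" for n in range(1, 33)]

def pending_updates(host: str) -> int:
    """Count upgradable packages on a Debian-style host over SSH."""
    result = subprocess.run(
        ["ssh", host, "apt list --upgradable 2>/dev/null | tail -n +2 | wc -l"],
        capture_output=True, text=True, check=True,
    )
    return int(result.stdout.strip())

for host in HOSTS:
    try:
        count = pending_updates(host)
    except Exception as exc:  # unreachable hosts are reported, not fatal
        print(f"{host}: check failed ({exc})")
        continue
    if count:
        print(f"{host}: {count} packages awaiting an update")
```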
A changed computing landscape
The evolving IT environment has changed the architecture of the data center, and the approach of the people who make data centers work. It's simply not feasible to rely on old practices and expect the best results – and that's a tough challenge, because it demands considerable effort from sysadmins and other IT practitioners. It's a significant mindset change, and it takes a conscious effort to alter the way you reason about system administration. Some underlying principles, like security, still apply, though. Given that vulnerability numbers don't seem to be going down – quite the opposite, in fact – they will continue to apply for the foreseeable future, regardless of other evolutionary changes affecting your data center.
Rather than resisting the change, IT admins should accept that their pet systems are now, for all intents and purposes, gone – replaced by mass-production delivery. That also means accepting that the security challenges are still here – just in a changed shape.
In making server workloads run efficiently, IT admins rely on a new toolset, with adapted methods that automate the tasks that can no longer be performed manually. Similarly, in running server farm security operations, IT admins should take a look at patching automation tools like TuxCare's KernelCare Enterprise and see how they fit into that new toolset.