Patch Management is reviled, impossible, and critical.

It’s technically difficult if not impossible, it’s prone to issues that can lead to disruption, and it’s absolutely required from a security and compliance standpoint. Let’s look at why each of these statements is true and what we can do about it.

Technical Challenges

First of all, why is patching so technically difficult? According to the Microsoft Security Intelligence Report, 5,000 to 6,000 new vulnerabilities surface each year. That works out to an average of roughly 15 per day, and many of these require a patch. Also, what are we patching? Operating systems and some applications come to mind immediately, but what about hypervisors, devices such as switches and routers, web CMS platforms, and the dreaded database applications, such as Microsoft SQL Server or Oracle?

How do we get the patches? Each vendor has its own method, so someone or something needs to stay up to date to know a patch is available, know how to get it, and then know how to deploy it. What is our maintenance window? What is our policy or mandate? Are we required to patch within 14 days? 30 days? Also, how complex is our network? Can we download from a central location? Doesn't that open a serious attack surface? What if we have a network that contains a zero-trust zone? These are just some of the potential technical pitfalls that make IT departments' heads spin.
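As a small illustration of the policy questions above, a mandate like "critical patches within 14 days" can be turned into concrete deadlines that a tracking tool or script can check. This is a minimal sketch; the severity names and SLA values are assumptions, not a standard, so substitute your own policy:

```python
from datetime import date, timedelta

# Hypothetical policy: days allowed from vendor release to deployment,
# keyed by patch severity. Adjust these to match your own mandate.
SLA_DAYS = {"critical": 14, "important": 30, "moderate": 60}

def patch_deadline(released: date, severity: str) -> date:
    """Return the date by which a patch must be deployed under the policy."""
    return released + timedelta(days=SLA_DAYS[severity])

def is_overdue(released: date, severity: str, today: date) -> bool:
    """True if the patch has exceeded its deployment window."""
    return today > patch_deadline(released, severity)

# Example: a critical patch released on 1 March is due by 15 March.
print(patch_deadline(date(2023, 3, 1), "critical"))                 # 2023-03-15
print(is_overdue(date(2023, 3, 1), "critical", date(2023, 3, 20)))  # True
```

Even a simple check like this makes the maintenance-window conversation concrete: the calendar, not opinion, says which patches are out of compliance.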

Possible Disruption

Next there is the issue of disruption. Why is this an issue? Patches can break things. Microsoft and Intel rushed patches for the Meltdown/Spectre vulnerabilities and effectively broke many networks. But what about the systems unique to our organization that can't even be tested by the vendor prior to the patch release? If we don't have an exact test environment to which we can replicate production workloads before releasing a patch (and most people don't), an unknown issue can easily occur and disrupt the business. Devices and servers often need to be rebooted following updates; if you are a sprawling enterprise, what do you do if a reboot goes awry? Do you have an out-of-band method to remediate, or are you putting boots on the ground?

End users have come to dread update day; for example, many Windows 10 users have found that each patch rearranges their desktop icons. As a result, users often attempt to put off updates or socially engineer their way around them. Browser updates are becoming more and more insidious as well: internal apps often break after an update, and users become frustrated.

What can we do?

First, let's start with a couple of undeniable truths. Truth one: not patching is not an option today. Some forward-thinkers are counting on a day when we can protect layers 7 and 8 (the end user) without patching; while we will get there eventually, in the meantime, truth two is that we will never be 100% patched. That being the case, how do we manage patching in a sustainable and adequate manner?

3 Key steps

  1. Understand where you are most vulnerable and make patching those systems a priority. Do you know your risks? For most organizations, outward-facing technologies and browsers that go out to the world are the riskiest assets. However, that may not be your case; the real question is, do you know? So the first step is classifying your assets. Once we know what is critical, we make that our priority. Much like we may say 'critical' patches need to be deployed within 14 days, we can further delineate: 'critical patches on critical systems' must be deployed within …, and so forth. Do not forget to evaluate which applications are most often exploited; remember that discerning a risk level involves not just the consequence but the likelihood. Of the 5,000-6,000 vulnerabilities a year, only perhaps 100 are widely exploited.
  2. Architect your systems with patch management in mind. Building in active/active redundancy wherever you can will make patching much easier. Can you put in hypervisor, storage, database, or application redundancy? If you plan ahead for it, this will make the process much easier. The actual patch deployment should follow a three-tiered approach. Tier one: build as accurate a test environment as possible and deploy to it first. Tier two: recruit a pilot group across our various departments, favoring users who are adaptable and technology-friendly, and deploy to them second. Tier three: a full deployment.
  3. There will always be emergency patches, and our recommendation for those is to set expectations. Senior management should be educated to understand and support the patch management team by empowering them to perform emergency patches without fear of recourse if there are issues. This of course means having a patch plan that specifies what an emergency patch is and how it is approved (involving senior management).
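Step 1 above, weighing consequence (how critical the asset is, how severe the vulnerability is) against likelihood (whether the flaw is actually being exploited), can be sketched as a simple scoring function. The weights, field names, and asset names here are illustrative assumptions, not a standard scoring scheme:

```python
# Rank (asset, patch) pairs by risk so the riskiest combinations are
# patched first. All weights below are hypothetical examples.
ASSET_WEIGHT = {"critical": 3, "important": 2, "standard": 1}
SEVERITY_WEIGHT = {"critical": 3, "important": 2, "moderate": 1}

def risk_score(asset_class: str, severity: str, actively_exploited: bool) -> int:
    # Consequence: asset criticality times patch severity.
    score = ASSET_WEIGHT[asset_class] * SEVERITY_WEIGHT[severity]
    # Likelihood: double the score if the flaw is exploited in the wild.
    return score * 2 if actively_exploited else score

backlog = [
    {"asset": "public-web-01", "class": "critical",  "severity": "critical", "exploited": True},
    {"asset": "file-server",   "class": "standard",  "severity": "critical", "exploited": False},
    {"asset": "hr-app",        "class": "important", "severity": "moderate", "exploited": False},
]

# Sort the patch backlog, highest risk first.
backlog.sort(key=lambda p: risk_score(p["class"], p["severity"], p["exploited"]),
             reverse=True)
for p in backlog:
    print(p["asset"], risk_score(p["class"], p["severity"], p["exploited"]))
# public-web-01 18
# file-server 3
# hr-app 2
```

Note how the exploited, outward-facing web server jumps to the top even though the file server carries a patch of the same severity; that is the "consequence times likelihood" idea from step 1 in miniature.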

Patching isn't going away anytime soon: it touches all levels of our organization, and it is technically and operationally difficult. For those who need help, our patching service can provide additional resources.