Controlling change starts with controlling configuration and patching

Okay, another blog, right? There are only so many hours I can spend perusing so-called experts providing so-called best practices… Don’t you realize my users are clicking on junk they shouldn’t right now, modifying registries without knowing it, and downloading some darn app that flings flame-balls at cartoon penguins and whatnot on their company smartphones?

Yes, I do. I see the same things every day.

What I find to be the overriding problem is not that users do these things, but that IT is either ill-prepared or slow to respond. How long does some squirrelly malware spend grabbing information before it is discovered? Can remediation be done remotely over the web, or does it require a desk-side visit? And how loudly is the CIO screaming about budget management, and about how we always seem to be in “hair-on-fire mode” rather than actually advancing revenue-enhancing business goals?

As the title of this blog indicates, we all recognize that the only constant in the IT world is change. And I’m sure you’ve read enough white papers on the process of change management; that’s not what I will try to accomplish here. In the coming weeks and months, I will provide some insight, opinion, and industry resources on best practices and policies to control lifecycle, strategic, and operational changes through proven processes and automation. This covers a wide variety of subjects, from security to configuration to ITIL to service management to budgets, priorities, self-service, and everything in between.

The first change I will address is timely. On Wednesday, a serious security flaw was disclosed affecting almost all versions of Internet Explorer. It is a significant vulnerability, exploited by way of a plug-in, that could allow an outside attacker to take remote control of a device. The issue was serious enough for Homeland Security’s Computer Emergency Readiness Team to recommend that users avoid IE, particularly versions 9, 10, and 11. Initially, Microsoft said it would be a month before a patch was deployed. To their credit, Microsoft developed a patch in seemingly record time and will have it ready late next week.

Depending on the size and scope of your network (and whether your policies mandate or permit use of IE), applying the patch across the entire enterprise could be a Herculean task: setup/configuration, testing, re-testing, troubleshooting, manual adjustments, desk-side visits, and so on.

It’s not just the patch that changes the configuration. It is highly likely that you will need to reimage machines so the patch becomes part of the ideal state. So as not to gum up the works and further distract from high-priority and revenue-generating tasks, the key (and the best practice) is to automate the patching, reimaging, and distribution process.

If you have dealt with Microsoft patches before, you know they can be temperamental. Sometimes the success rate of a first-attempt update is less than 75%. And as Microsoft rushes a fix into existence, the speed with which it is deployed may mean its construction is not as refined as usual. This is not to say the patch is poorly developed, but the likelihood of an issue with its application (considering variable configurations) is well within the realm of probability. This is why automation is so important: it replicates exactly against a desired state across an entire enterprise, so the result is the same on the first machine as on the 10,000th. The possibility that you have multiple configurations depending on user roles is also a consideration. If an organization has a homogeneous environment, the variables are considerably more manageable and the patch deployment less intrusive. However, with the advent of BYOD, multiple operating systems, shadow IT applications, and other individual differentiators, a smooth, single deployment becomes less likely and more costly in complexity and personnel hours.
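To make the desired-state idea concrete, here is a minimal Python sketch of idempotent patch enforcement. The device inventory, the compliance check, and the install step are all placeholders standing in for whatever endpoint-management tooling you actually run (and the patch ID is hypothetical); the point is simply that the same logic runs identically against the first machine and the 10,000th.

```python
# Minimal sketch of desired-state patch enforcement.
# The inventory and the install step are placeholders for real tooling.

from dataclasses import dataclass

DESIRED_PATCH = "IE-OOB-FIX"  # hypothetical ID for the out-of-band IE patch


@dataclass
class Device:
    name: str
    installed_patches: set


def is_compliant(device: Device) -> bool:
    """A device is at the desired state when the patch is already present."""
    return DESIRED_PATCH in device.installed_patches


def enforce(device: Device) -> str:
    """Apply the patch only if needed; the same call is safe on every device."""
    if is_compliant(device):
        return f"{device.name}: already at desired state"
    device.installed_patches.add(DESIRED_PATCH)  # placeholder for the real install/reimage step
    return f"{device.name}: patch applied"


if __name__ == "__main__":
    fleet = [
        Device("sales-laptop-01", {"OLDER-FIX"}),
        Device("hq-desktop-4211", {"OLDER-FIX", "IE-OOB-FIX"}),
    ]
    for device in fleet:
        print(enforce(device))
```

Because the enforcement step is skipped whenever a device already matches the desired state, re-running the job across the whole fleet is harmless, which is exactly what makes large-scale, repeated deployment practical.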

The goal is consistent, consolidated enforcement of the update, followed by having the resources to reimage the newly updated machine as the latest ideal state. The fact that this can be done 100% remotely, and that the newly patched image supersedes the previous one at the next reboot, makes the process significantly more versatile. It allows you to concentrate on troubleshooting rogue problems rather than re-configuring workarounds for an entire update.
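In the same spirit, here is a short companion sketch showing what “concentrate on the rogue problems” looks like in practice: summarize the rollout and surface only the devices that still need hands-on attention. The rollout report below is hypothetical; in reality the status data would come from your management tool’s reporting.

```python
# Summarize a patch rollout and isolate the machines that still need help.

from collections import Counter

# Hypothetical rollout report; in practice this comes from your
# endpoint-management or reporting API.
rollout_report = [
    ("sales-laptop-01", "patched"),
    ("hq-desktop-4211", "patched"),
    ("lab-kiosk-07", "failed"),
    ("exec-tablet-02", "pending-reboot"),
]


def rogue_devices(report):
    """Return only the devices that did not reach the desired state."""
    return [name for name, status in report if status == "failed"]


if __name__ == "__main__":
    summary = Counter(status for _, status in rollout_report)
    print("Rollout summary:", dict(summary))
    print("Needs manual attention:", rogue_devices(rollout_report))
```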

If an automation process can improve the success rate of this deployment to 95% or better, the time and dollars recovered are significant. For example, if you manage 500 devices and the IE patch fails on 25% of them, that leaves 125 devices in need of manual attention. Typically, we are talking about an hour per unit for a service tech. At a conservative $30/hr, a company can save more than $3,000 (based on negative-impact avoidance from this patch instance alone). The return on investment grows if the tech can be reassigned to an alternate task that potentially generates revenue.
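For those who like to see the arithmetic spelled out, here it is as a quick script. The 500 devices, 25% manual failure rate, one tech-hour per fix, and $30/hr rate come straight from the example above; the 95% automated success rate is the assumption being tested.

```python
# Back-of-the-envelope savings math for the IE patch example.

devices = 500
manual_failure_rate = 0.25      # patch fails on 25% of devices when pushed without automation
automated_failure_rate = 0.05   # assumes automation reaches at least a 95% success rate
hours_per_fix = 1               # one tech-hour per failed device
tech_rate_per_hour = 30         # conservative cost for a service tech

manual_cost = devices * manual_failure_rate * hours_per_fix * tech_rate_per_hour
automated_cost = devices * automated_failure_rate * hours_per_fix * tech_rate_per_hour

print(f"Manual remediation cost:    ${manual_cost:,.0f}")                  # $3,750
print(f"Automated remediation cost: ${automated_cost:,.0f}")               # $750
print(f"Savings on this one patch:  ${manual_cost - automated_cost:,.0f}") # $3,000
```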

The ability to successfully and efficiently apply a patch is crucial, but the capacity to do it against an ideal image and have it distributed across the network is the change best practice. Whether we are discussing a high-profile vulnerability like Internet Explorer or a less publicized issue like Java, asserting control over the state of your IT environment is not only a compliance and best-practice necessity, but something embedded in the very DNA of your job description.
