Don’t forget to wipe! The keys to data sanitization and hard disk erasure

IT teams supporting a modest-sized enterprise (2,500 devices) will retire about 23% of those devices each year. That's 575 machines a year containing sensitive information. Since many companies like to re-purpose these machines, the machines first must go through an end-of-lifecycle transition: from storage of data to reassignment, resale or donation. If a device is being reassigned from one department to another, it might require a new image, so the previous image, with its specific rights and application selection, needs a fresh tableau on which to build. If the device is leaving the organization, there can't be any trace of its prior usage left. NIST agrees:

NIST Special Publication 800-88, Guidelines for Media Sanitization, mandates that "in order for organizations to have appropriate controls on the information they are responsible for safeguarding, they must properly safeguard used media." Taking control of old electronic media means disposing of it in a safe, secure, and compliant fashion.

The decommission process can be lengthy and, with all the daily fires requiring attention, is often considered a lower priority. This is why many companies either have a stack of old devices waiting for retirement in some storage room or outsource to companies that specialize in data sanitization and hard disk destruction.

This year, IT teams will potentially be inundated with retiring devices, given the sunsetting of Windows XP last April. Rather than face the battle of OS migration, many companies have simply opted to invest in brand new machines with Windows 7 preinstalled. That leaves them with the problem of decommissioning their old PCs in a way that prevents any significant leakage of sensitive information.

As noted, many companies use outside organizations to handle this aspect of their business. Using our modest-sized enterprise as a model, decommissioning 575 devices can be expensive. Based on industry research, the service costs between $30 and $50 per device; for our example company, that is a budget line item in excess of $23,000 for the year. Unfortunately for this company, an additional 12% of its machines, still within their industry-accepted four-year lifecycle, were XP machines, and it opted for new units rather than an upgrade. That's another 300 machines and an additional $12,000. According to Microsoft (The Enterprise PC Lifecycle: Seeing the Big Picture for PC Fleet Management), the breakdown of the service is basically $46 per PC (and can run as high as $375), including $12 for archiving data, $12 for sanitizing the hard drive, $8 for reloading the operating system, and $12 to test the PC. Granted, some of this cost is offset by the potential resale of these units; with older, unsupported OSes, however, donation is more likely.
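For readers who want to sanity-check the math, here is a quick back-of-the-envelope model in Python. The fleet size, retirement rates and $30-$50 range are simply the example figures above; the $40 midpoint is my own assumption:

```python
# Back-of-the-envelope decommission cost model using the figures above.
FLEET_SIZE = 2500        # devices in the modest-sized enterprise
RETIRE_RATE = 0.23       # ~23% retired per year
XP_EXTRA_RATE = 0.12     # additional early XP replacements
COST_PER_DEVICE = 40     # assumed midpoint of the $30-$50 industry range

retired = round(FLEET_SIZE * RETIRE_RATE)       # 575 machines
xp_extra = round(FLEET_SIZE * XP_EXTRA_RATE)    # 300 machines

print(f"Routine retirement: {retired} devices, ${retired * COST_PER_DEVICE:,}")
print(f"Early XP swap-outs: {xp_extra} devices, ${xp_extra * COST_PER_DEVICE:,}")
# Routine retirement: 575 devices, $23,000
# Early XP swap-outs: 300 devices, $12,000
```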

To validate these numbers, I spoke with the VP of IT of a well-known health care plan provider. They routinely spent $25,000, on top of the cost to recycle decommissioned machines, to ensure any sensitive data still residing on hard drives was removed. This company is bound by very strict HIPAA compliance requirements in addition to the mandates of a dozen or more accreditation agencies.

If the cost is prohibitive, the other option is to do it yourself. Without getting into soft costs and personnel time, there are two other potential hurdles that make this option complicated. First, it can be a fairly lengthy process, which means a resource has been reassigned from higher value tasks, not to mention the aforementioned daily emergencies. Second, it requires a degree of expertise. Every IT pro worth their salt knows simple file deletion or repartitioning is insufficient. Companies must take action that leaves no trace of the previous image or data on a device.

Okay, one last thorn. Your company has the will and bandwidth to re-purpose or decommission end-of-lifecycle devices. Now you must invest in a unique software license to run the shredding/removal process. Besides becoming another SLA to manage, does the product actually make the process easier? Does it use recognized best practices to remove data, sanitize drives and replace old images with an approved, "clean" version? Can it accommodate multiple drives simultaneously (such as in a RAID) without having to break the array apart first? And does it allow you to provide certified evidence of data destruction?

It’s almost enough, as one IT pro wrote in a tech forum, “to take a sledge hammer, thermite, and go Office Space on 200 old hard drives. But I have other things to do.”

Whether re-purposing for use in another department, donating, reselling or smashing it to bits with a baseball bat, “wiping” the hard drive is a definitive part of the PC lifecycle. For companies that maintain any sensitive data on the drives (that’s most of them!), it rises to the level of necessity. Companies can reduce the financial impact if their sanitization process is included as a part of another indispensable infrastructure maintenance solution such as configuration or change management. For example, deploy one central solution that handles your entire automated configuration initiative: self-healing restoration, recovery, imaging and patching/updating.

But to make the whole thing effective and worth unifying sanitization with other configuration functions, it has to be fast (no more than 10 seconds per gigabyte). It has to be thorough. It must use one of the two recognized destruction techniques: degaussing, or overwriting every shred of data so it is permanently unreadable. For repurposing and donation, you can then apply a proper, approved clean image to the "wiped" machine with confidence.
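To make the overwrite technique concrete, here is a minimal sketch in Python, assuming a file-backed target: one pass of random data followed by a pass of zeros. This is illustrative only; certified sanitization tools follow NIST 800-88 procedures, reach hidden areas such as HPA/DCO and remapped sectors, and produce the verification evidence mentioned above.

```python
import os

def overwrite_device(path: str, passes: int = 2, block_size: int = 1 << 20) -> None:
    """Illustrative overwrite wipe: random data, then zeros. NOT production-grade.

    Certified tools follow NIST 800-88, reach HPA/DCO and remapped sectors,
    handle SSD wear-leveling, and verify and certify the result.
    """
    size = os.path.getsize(path)  # for a raw block device, query its size instead
    for p in range(passes):
        final_pass = (p == passes - 1)
        with open(path, "r+b") as f:
            written = 0
            while written < size:
                chunk = min(block_size, size - written)
                data = (b"\x00" * chunk) if final_pass else os.urandom(chunk)
                f.write(data)
                written += chunk
            f.flush()
            os.fsync(f.fileno())  # push the data all the way to the media
```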

Unification makes a great deal of sense since it leverages other components important to compliance and security. The ability to image or reimage a re-purposed machine without having to expend any more capital is a huge boon. It goes back to that oft-repeated CIO mantra: do more with less.

Persystent Suite, which currently facilitates restoration, recovery, imaging and patch/update migration capabilities in a single centralized solution, recently added “wipe” functionality to its suite in order to help larger enterprises fulfill compliance mandates related to data security and device control. See it here.


WinXP…beyond the sunset

The end has come and gone. Despite the warnings, despite the lack of support and despite the realization that the operating system is now stagnant, thousands of businesses still have not made the leap from Windows XP. We've heard a great many reasons, including: the cost to upgrade is too great; older applications don't have an update that will work in the new paradigm; we're planning on it in the nebulous future; and our current configuration works just fine (if it ain't broke, why fix it!).

Regardless of the excuses, every IT department is eventually going to have to make the move, so organizations are going to have to commit budget to upgrade or replace systems. The following are the top 5 reasons you need to make the move sooner rather than later…

#1. Be prepared to lose your security compliance and deal with the pain and suffering sure to follow.

#2. Migrating now will ensure continued access to the vital and vast third-party ecosystem of Windows partners and support organizations.

#3. Pervasive mobility — BYOD, consumerization of IT, always-on computing — is nearly unachievable without the move to Windows 7 or, especially, Windows 8.

#4. If your organization is migrating key applications and services to the cloud, staying on XP much longer will be a huge impediment.

#5. Moving to Windows 7 or 8 now is a far better economic proposition than putting off the inevitable until early 2014.

These aren’t scare tactics or a hard sell to push product, just some friendly advice!

Becoming a trusted adviser: helping clients prevent unforeseen expenses

As a service provider of any kind, the ultimate compliment is to be considered a "trusted adviser" by your client. But this status is more than simply getting a good reference or getting a customer to renew their annual contract. By its very title, a trusted adviser is an outside insider for a company: a consultant depended upon by an organization to provide valuable insight into how the company can best achieve its stated and latent goals.

For managed service providers, whose very purpose is to ensure various IT infrastructure and applications provide the expected results and value for the client, becoming a trusted adviser means taking on the responsibility to continually identify and implement ways to improve performance, anticipate challenges and constantly adapt to the transformative nature of technology.

Sounds easy enough. That is what you do, right? But providing network support, security, help desk or a variety of other key services doesn't immediately raise you to the level of trusted adviser. It simply means you provide an important service…and we assume you provide it very well.

Part of the trusted adviser's job description is not only to improve performance, but to do so at the maximum level for minimal cost. The transition from service provider to trusted adviser means you are looking out for your client's best interest, not just a service they can buy. To accomplish this, MSPs must address one of the biggest cost burdens that can affect the relationship: break/fix issues.

The labor required to manage this portion of the relationship is the biggest drain on margin. Regardless of whether a client purchased full coverage for a monthly fee, uses a capped block of hours or pays out of pocket for each issue, somebody's margin is affected when things go sideways. It's either money (margin) out of the MSP's pocket or out of the client's.

It's not that issues arise; it's that the labor required to address problems is unpredictable. It could be a five-minute fix or something that takes an application or network offline for an extended period while troubleshooting, fix planning and the solution are applied.

Nothing erodes trusted adviser status faster than money. This is not to say an MSP should operate as a non-profit, but there are ways to proactively and automatically confront break/fix issues without either side having to dig deep into profit margin. More importantly, they provide a reliable means to attack unforeseen issues that eat time, upset productivity, and force reprioritization of potentially revenue-generating services. This is the road to trusted adviser status.

The ability to break out of "firefighter mode" is the first step to creating lasting value for clients. The less time spent with your hair on fire, the more you can concentrate on tasks that support the client's business (and add to an MSP partner's credibility and differentiation). For many MSPs, services fall into six general areas of coverage:

  1. Network Support
  2. Backup and Recovery
  3. Security
  4. End User Support/Help Desk
  5. Compliance
  6. Extra consulting services

The one constant through each of these services is the likelihood that break/fix issues will occur sooner or later. The risk associated with these problems, and the labor required to properly diagnose and repair them, can be mitigated by automated configuration.

This doesn’t suggest a simple recovery tool. Instead of applying hours diagnosing and repairing, systems can self-heal upon reboot. It takes the client’s ideal image and removes the service issue. It’s simple. It’s automatic. And it removes problems that would otherwise require manual intervention and desk side visits.
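Conceptually, the self-heal step is a compare-and-restore against a stored baseline. Here is a toy file-level sketch in Python (the paths and manifest format are invented for illustration; products in this space work at the disk and registry level, well below this example):

```python
import hashlib
import json
import shutil
from pathlib import Path

BASELINE_DIR = Path("/ideal_state")           # pristine copies (hypothetical path)
MANIFEST = BASELINE_DIR / "manifest.json"     # {"relative/path": "sha256hex", ...}

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def self_heal(target_root: Path = Path("/")) -> int:
    """Restore any file that no longer matches the ideal-state manifest."""
    manifest = json.loads(MANIFEST.read_text())
    repaired = 0
    for rel, expected in manifest.items():
        live = target_root / rel
        if not live.exists() or sha256(live) != expected:
            live.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(BASELINE_DIR / rel, live)   # revert to the ideal state
            repaired += 1
    return repaired
```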

Of course this doesn't solve every problem. But if it can remove 60-70% of user-inflicted issues (changing critical settings, downloading viruses, making unauthorized application changes, deleting necessary DLL files, disabling BITS, and a thousand other actions that compromise infrastructure integrity), then not only are significant dollars saved and uptime and asset availability increased, but expensive personnel time is freed for higher value tasks.

There are several other benefits an MSP achieves by including automated self-healing as part of an overall package.

Scheduled versus variable labor: Labor costs take a huge bite out of the scope of service, especially when it comes to break/fix issues. An MSP and their client can create a more fiscally stable relationship through precision budgeting: the client knows how much it is going to pay each and every month, and the MSP gains stable recurring revenue. By using configuration automation and optimization, MSPs can reduce the specter of additional pass-along costs to the client and avoid absorbing expensive additional labor costs. Now the conversation can move from "how much" to "how to improve" (from reactive to proactive).

Expand geographic reach: Many MSPs operate as regional entities because they do not have the personnel or the budget to adequately cover a larger (or even national) territory. From a cost perspective, self-healing eliminates a great many client visits. Typical on-site services like device restoration no longer require a warm body in the room. This, in turn, reduces travel, out-of-pocket time and costs. Without having to hop in a car or on a plane, you can provide effective service to a wider circle of clientele. Now when you visit a client, it is to provide proactive intellectual value and consulting expertise…or simply take them to dinner to thank them for the business.

Help Desk reduction: Research shows that self-healing and rebooting to an ideal state eliminate more than 34% of all inbound help desk issues without manual intervention. Consider that every time the help desk phone rings, it costs $20 (based on the national average). For more serious issues such as catastrophic device failure, infected operating systems/applications and unauthorized downloads, the cost is obviously greater, and not just in terms of tech/admin intervention, but lost productivity and potential loss of client trust. This doesn't include scheduled maintenance tasks such as patching, updating and migration, which in themselves require a significant time and resource commitment. Adding a self-healing component to your existing slate of offerings reduces the number of help desk calls and, more importantly, allows an MSP's help desk pros to uncover root causes rather than continually fix the symptoms.
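The arithmetic behind that claim is simple enough to sketch. In the snippet below, the annual call volume is a hypothetical input; the $20-per-call figure is the national average cited above:

```python
CALLS_PER_YEAR = 10_000   # hypothetical inbound help desk volume
COST_PER_CALL = 20        # national average cost per call
ELIMINATED = 0.34         # share removed by self-healing reboots

avoided = CALLS_PER_YEAR * ELIMINATED
print(f"Calls avoided: {avoided:,.0f}, saving ${avoided * COST_PER_CALL:,.0f}/yr")
# Calls avoided: 3,400, saving $68,000/yr
```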

Removal of malicious changes: Through maliciousness or carelessness, your client's network is under constant attack from botnets, malware, viruses and a variety of other negative influences. Although automatic configuration and reimaging can't prevent Stan from sales from downloading a suspect app, or prevent an organized element in Eastern Europe from worming into a system, the continuous maintenance and reapplication of an ideal state can prevent lingering damage. Any time an unauthorized outside influence tries to change a registry, attach itself to a file, or embed itself in a supported application, the system rejects these modifications in favor of the ideal state…in real time. From an MSP perspective, this avoids the downtime needed to cleanse a network and helps preserve the continuity of critical information.
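In spirit, the real-time rejection is a watchdog loop around the same compare-and-restore logic sketched earlier. A toy polling version in Python (real products hook filesystem and registry writes at the driver level, so the revert is effectively instantaneous rather than periodic):

```python
import time

def watchdog(interval_sec: float = 5.0) -> None:
    """Toy real-time guard: re-run the self_heal() check from the earlier sketch.

    Production tools intercept changes via filter drivers instead of polling.
    """
    while True:
        repaired = self_heal()
        if repaired:
            print(f"Reverted {repaired} unauthorized change(s) to the ideal state")
        time.sleep(interval_sec)
```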

Of the six general service areas mentioned, it is obvious how configuration/recovery/repair/reimage automation can help with issues related to the network, backup and end users; however, some question the value to those who provide security and compliance services. The answer is simple. Although not a traditional security solution, it not only demonstrates control over network assets (as required in SANS, HIPAA, PCI and others), but keeps the operating environment running smoothly over the course of the lifecycle.

Because a trusted adviser is more interested in a long-term relationship than any short-term gains, it is imperative that MSPs find and propose new and innovative solutions to include within their base services. If clients consider an MSP's service a commodity, it is very simple for them to find another provider.

The difference between an expert and a trusted adviser really comes down to a single attribute: an expert provides good answers. A trusted adviser asks good questions. Can you reduce costs while increasing your quality of service?

Learn how to answer that question at www.utopicsoftware.com

Utopic presents Top 5 benefits for self-healing IT

A video blog entry!

Utopic presents its Top 5 benefits for adopting a self-configuring, self-optimizing, self-protecting and, of course, self-healing process for an enterprise IT landscape. Self-healing describes the ability to perceive that an IT system or device is not operating correctly and, without human intervention, make the necessary adjustments to restore it to normal operation.

How schools can combine academic success, admin effectiveness and efficient technology without breaking the bank

Years ago, to get rid of "data" all a teacher needed to do was erase the blackboard. Now, as most schools depend on multi-user computer labs, provide a multitude of devices, and maintain highly integrated technology networks, the need to preserve a relatively homogeneous environment not only supports effective teaching platforms, but is critical to maintaining a compliant and secure landscape.

In many ways, educational institutions are like large corporations which support many employees, partners, vendors, and clients. Students, teachers, administrators, even parents require access to certain assets, use a variety of devices and, therefore, create a degree of chaos if not properly managed.

The most blatant source of breaches and other problematic issues is the computer lab. A single device is typically used by multiple people throughout a single day. Despite stated log-in/out protocols and usage restrictions, the risk of unauthorized changes that might affect the entire network is heightened. Multiply this by the number of devices in a single lab, and again by the number of labs across a campus or campuses. That's a lot of fingerprints at the crime scene, Sherlock!

However, many campus IT professionals are finding that maintaining a "working state" for each classroom and lab is an effective way to battle user carelessness, intentional damage, and lingering static entries. This "working state" is a controlled and uncorrupted alpha version of the ideal image. By simply rebooting a machine, any changes made during the session are reverted to the ideal state.

For the past several years, a large central Florida state college (four campuses and 30,000 enrolled students) has applied this strategy to the majority of its controlled devices. After surviving a prolific virus, the college's IT staff recognized that the labs that used ideal-state imaging self-healed after a single reboot, as if the problem never existed. The other devices across the campuses (those without the Persystent protection) required a very time-consuming and invasive reimaging process. The cost was in the tens of thousands.

More than a layer of security, the "working state" also wipes the slate clean after every class. Several of the college's Computer Science courses are hands-on. The curriculum requires students to install or manipulate applications, adjust registry settings, and re-engineer portions of operating systems. Once class is dismissed, another class arrives for a similar lesson. If the system cannot be returned to a "working state," each subsequent class builds on the work of the previous one, creating a Gordian knot of code and chaos. The same strategy that keeps the device free from intrusion is also responsible for maintaining the ideal state on which to build curriculum. A simple reboot gets the device ready for the next class. And when it comes time to update, upgrade or patch certain applications, the same time-saving process applies.

Schools are not only centers of learning; they also employ large and diverse staffs. In fact, many universities and school districts are the largest employers in their counties and regions. In this respect, their IT environment must perform like that of a corporation. Each role, from the cafeteria manager ordering more French fries, to financial aid officers running credit checks, to professors accessing a content portal for submitted homework, requires different applications and access points. Therefore a variety of images needs to be maintained in service of administrative efficacy as well as state and federal compliance. The overarching issue here is not the continuous maintenance of an image, but the cost-effective way it needs to be propagated.

If an organization the size of the central Florida college can expect to experience more than 100,000 issues per year, the annual projected costs would be north of half a million dollars in technology and personnel expenses. Yet by applying an automatic, self-healing reboot to an ideal state, time-consuming issues like troubleshooting, break/fix work, updates/upgrades and migration can be greatly mitigated. This is not to imply technology and administrative problems disappear, but the savings on their remediation are noteworthy and immediately impactful.

For instance, educational institutions are as open to abuse as any corporation; sometimes even more so. With so many access points and devices, managing the landscape is like trying to herd cats. Beyond the administrative issues, there are those who, for one reason or another, engage in what we'll call "mischief." Again, the aforementioned Florida school serves as our example. Last year, the college experienced an internal breach. Someone hacked the local administrative passwords on several devices in the campus library and changed them. This went on for weeks as the culprits moved from machine to machine. Obviously this made the affected devices impossible to use and opened a precarious crack in the security of the network. Each discovery required a technician to be dispatched to apply a restored image with the original configuration. Once automated configuration was applied, the damage was extremely limited and the personnel cost dropped to almost zero.

In terms of refreshing configuration, laptops, tablets and desktops are not the only devices that require monitoring. Many professors are using audience response systems like iClicker to interact with large auditorium classes. These items are like mini handheld computers that use the password-protected network to connect and respond to a professor's presentation. With potentially hundreds of devices in the hands of students requiring direct access, the issues with passwords, privileges, settings and other technical matters are compounded.

Most education institutions are cash-strapped or, at the very least, conservatively cost-conscious. It is the successful schools that find and incorporate improved ways to meld academic success, administrative effectiveness and an efficient technology backbone; all without breaking the bank. The easiest way to accomplish this lofty goal is to reduce the instances that potentially impact productivity, drain budgets, and force reassessment of IT priorities. The self-healing properties of automated configuration management allow such instances to be reduced by more than 75%. If that is truly the case (and it is!), there are more resources and more time to devote to higher value tasks.

The issues facing the college in our example are the same ones that vex Harvard, USC, Palomar Community College and John F. Kennedy High School. Technology, with all of its peaks and pitfalls, is increasingly integrated into managing, applying and interacting with curriculum. Keeping this space optimally operational is not just a best practice; it is Job One.

Addressing configuration management in such a way is like boats on a lake after a rain: the rising water lifts everything in equal measure. An automated backend reduces help desk calls, satisfies administrative compliance through the demonstration of certain controls, extends the lifecycle of hardware and software, and facilitates student learning.

Though each school may have unique applications and approaches, the need to maintain a continuous ideal state is a common way to fulfill the promise of ensuring the curriculum relies on the teacher, not the technology.

Learn more at www.utopicsoftware.com 

If anti-virus is dead…then what?

How configuration automation fills the vulnerability gap.

Earlier this month, the progenitors of anti-virus software declared that "anti-virus is dead" (Wall Street Journal, May 4, 2014). According to Symantec and other industry-leading sources, the software designed to prevent malware, spyware and other intrusive tactics is doomed to failure. They say that anti-virus catches only 45% of threats.

The battle is being lost because prevention and protection are always two steps behind. As fast as someone comes up with a preventive signature, six more even nastier bugs are developed and released on unsuspecting networks. It is said that 95% of all networks (source: FireEye and ThreatSTOP) have some sort of active infection.

To add fuel to the fire, IT security thought guru Eugene Kaspersky recently said: “The single-layer signature-based virus scanning is nowhere near a sufficient degree of protection – not for individuals, not for organizations large or small.”

The barbarians may be at the gates, but it's not all doom and gloom. Many IT pros, particularly those associated with mid-tier and larger enterprises, recognize that security best practices are not singularly tied to firewall protection, but rather to an interoperable combination of key functions.

The defenses may be in place, but the war is still not being won. An organization may be continuously monitoring, correlating, provisioning, authenticating and blocking, but too many companies are not taking advantage of what makes security more effective and more prolific across a wider enterprise expanse. What is missing is automation.

Let's return to the company that depends heavily on anti-virus to prevent breaches and other negative impact events. If Symantec is a credible source, then this company needs a new and innovative way of maintaining a safe and secure environment. Let's also assume that even with a stack of other security tools, phishing, botnets and malware will always find a way to breach the network. If multinationals like Citibank, eBay, Target and Sony struggle with breaches, then the likelihood is you do as well (sources say 78% of companies experienced a breach in the past two years). What needs to happen is automatic protection.

In the absence of, or more likely in support of, anti-virus protection, initiating some sort of automated repair/recovery program is a progressive alternative growing in acceptance and popularity. It is based on the continuous maintenance of an ideal state. Any time an unauthorized outside influence tries to change a registry, attach itself to a file, or embed itself in a supported application, the system rejects these modifications in favor of the ideal state. After every PXE reboot of a workstation or device, the automated system reapplies the latest approved image.

Within this scenario, any infection introduced after the last boot-up is eliminated. Case in point: an inside salesperson uses your network and internet connection to reach their personal email account. They see a new email from a friend: "U should see this." Thinking the friend is a trustworthy source, they open the email and click the link within. The website redirect seems harmless enough, a picture of puppies or a video of a skateboarder miserably miscalculating an airborne trick. On the next click, however, a Secure Shield dialog box appears and lets the user know their network is in danger. Believing they are a good steward of the company, they click the link to load the "security update." And as fast as that, the ransomware makes their device a paperweight.

Without automation, a help desk tech will probably spend several hours diagnosing and then manually restoring the hacked registry. Even if a fresh image is available, there is still the necessary manual work of reapplying specific user settings, applications and privileges based on business need, corporate policies and organizational role. Then there is an even greater time commitment in investigating whether the issue has spread beyond the single device or evolved into a greater threat. One moment of carelessness creates hours and hours of IT involvement, QA/testing, and re-verifying compliance requirements. This doesn't include the lost productivity, potential risk and cost this threat poses to the entire network.

The same scenario using repair/recovery automation doesn't prevent the recklessness, but it prevents the mistake from spreading further. All the user needs to do is turn the machine off and back on. This applies a fresh ideal state. The ransomware and any other unauthorized changes are gone…automatically, without IT intervention. More importantly, the ideal image is configured for the individual (or their role). The image maintains their applications, settings, latest updates and other unique components, so the system lifecycle is perpetuated, uninterrupted and remains firmly under IT's control.

Real-time configuration management also supports compliance, considering that several of the SANS critical controls (which serve as the basis for more than three dozen regulatory compliance agency mandates) are maintained through proper configuration and demonstrated control. For example, PCI DSS requires: "2.2 Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards." SANS simplifies this to mean "Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers." The ability to continuously maintain an ideal state for a variety of roles is the key to ensuring assets are only available to appropriate users. If each device is covered under a repair/recovery/reimage configuration protocol, then (as HIPAA 11.0 demands) you are demonstrating control over data. The system cannot accept unauthorized changes (as detailed in your organization's standards and policies) to the registry, applications or files. This is not to say an organization can forgo provisioning, log archiving, firewall reinforcement or authentication, but automating configuration puts another proverbial brick in your defense wall.
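As a flavor of what "demonstrated control" can look like in practice, here is a hedged sketch of a configuration audit: compare a device's reported settings against an approved baseline and flag any drift. The setting names and values are invented for illustration, not drawn from PCI DSS or SANS:

```python
# Approved configuration baseline (illustrative values, not a real standard).
BASELINE = {
    "firewall_enabled": True,
    "autorun_disabled": True,
    "password_min_length": 12,
}

def audit(device_config: dict) -> list[str]:
    """Return the settings on a device that deviate from the approved baseline."""
    return [
        f"{key}: expected {expected!r}, found {device_config.get(key)!r}"
        for key, expected in BASELINE.items()
        if device_config.get(key) != expected
    ]

for finding in audit({"firewall_enabled": True,
                      "autorun_disabled": False,
                      "password_min_length": 8}):
    print("NON-COMPLIANT:", finding)
```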

Security requires attention 24/7…

"If I can cut that in half, we're talking a staggering amount of money," says Bruce Perrin, CIO for Florida-based Phenix Energy Group. "Seventy percent of what security professionals do could be done completely automatically, giving them more time to do things that are more important."

In a recent IDG Research survey, 62 percent of respondents indicated they automate less than 30 percent of their security functions. For most companies, that translates into a great many manual personnel hours. Hackers don't sleep, so why should your security? Unless an IT department is staffed around the clock, there is a certain amount of time when users are on their own. And the most blatant issues (the ones that grab headlines) don't start as brute force attacks; they are sneaky and insidious, and can lie dormant for days or months (like Heartbleed), so that middle-of-the-night emergency call may never come until it's too late. By automating the configuration break/fix process, organizations remove a significant burden.

For example, a unified school district in Central Florida manages student computer labs with more than 2,000 PCs. They conservatively estimated that each PC experiences some sort of break/fix incident every 90 days (and that 5% experience a catastrophic failure each year), with each incident requiring about an hour of manual intervention. This equated to approximately 7,450 man hours over the course of the year. Also worth considering for the ROI: the average downtime of each machine was at least four hours from report to resolution. When they applied an automated process, break/fix issues were reduced by 90%, saving thousands of man hours and slightly less than $16,000 a month ($191,121 a year).
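A rough model of that calculation, using the district's stated inputs (the labor rate is my own assumption, and the post's totals differ slightly, so treat this as a sketch of the method rather than an audit of the figures):

```python
PCS = 2000
INCIDENTS_PER_PC = 365 / 90   # one break/fix incident every 90 days
CATASTROPHIC_RATE = 0.05      # 5% of PCs fail hard each year
HOURS_PER_INCIDENT = 1
RATE_PER_HOUR = 24            # hypothetical blended labor rate
REDUCTION = 0.90              # automation removes 90% of incidents

incidents = PCS * INCIDENTS_PER_PC + PCS * CATASTROPHIC_RATE
hours_saved = incidents * HOURS_PER_INCIDENT * REDUCTION
print(f"~{incidents:,.0f} incidents/yr, ~{hours_saved:,.0f} hours saved, "
      f"~${hours_saved * RATE_PER_HOUR:,.0f}/yr recovered")
```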


Automation also promotes the ability to respond to higher value threats in a shorter amount of time. And if you can reduce the number of security incidents through automation, you reduce the risk of data loss, which again can be worth staggering amounts of money given the potential cost of a single breach.

Configuration (repair/recovery/reimage) may not be a traditional security solution, but as an automated component in a larger initiative it enables key security features that are not only compliance requirements but also keep the operating environment running smoothly over the course of the lifecycle. For that reason alone, it should be included as part of any organization's next-generation security arsenal.

Learn more at www.utopicsoftware.com

Controlling change starts with controlling configuration, patching

Okay, another blog, right? There are only so many hours I can spend perusing so-called experts providing so-called best practices…don't you realize my users are clicking on junk they shouldn't right now; making modifications to registries without knowing; downloading some darn app that flings flame-balls at cartoon penguins and whatnot on their company smartphone?

Yes, I do. I see the same things every day.

What I find to be the overriding problem is not that users do these things, but that IT is either ill-prepared or slow to respond. How long does some squirrelly malware grab information before it is discovered? Can remediation be done manually over the web, or does it require a desk-side visit? And how loud is the CIO screaming about budget management and that we always seem to be in "hair-on-fire mode" rather than actually advancing revenue-enhancing business goals?

As the title of this blog indicates, we all recognize that the only constant in the IT world is change. And I'm sure you've read enough white papers on the process of change management; that's not what I will try to accomplish here. In the coming weeks and months, I will provide some insight, opinion, and industry resources on best practices and policies for controlling lifecycle, strategic and operational changes through proven process and automation. This covers a wide variety of subjects, from security to configuration to ITIL to service management to budgets and priorities, self-service and everything in between.

The first change I will address is timely. On Wednesday, a serious security flaw was detected in almost all instances of Internet Explorer. It was a significant vulnerability gap, by way of a plug-in, that could allow an outside influence to take remote control of a device. The issue was serious enough for Homeland Security's Computer Emergency Readiness Team to recommend that users avoid using IE, particularly versions 9, 10 and 11. Initially, Microsoft said it would be a month before a patch was deployed. To their credit, Microsoft developed a patch in seemingly record time and will have it ready late next week.

Depending on the size and scope of your network (and whether your policies mandate or permit use of IE), applying the patch across the entire enterprise could be a Herculean task: setup/configuration, testing, re-testing, troubleshooting, manual adjustments, desk-side visits, etc.

It's not just the patch that changes the configuration. It is highly likely that you will need to reimage machines so the patch becomes part of the ideal state. So as not to gum up the works and further distract from high-priority and revenue-generating tasks, the key (and best practice) is to automate the patching, reimaging and distribution process.

If you have dealt with Microsoft patches before, you know them to be temperamental. Sometimes the success rate of a first-attempt update is less than 75%. And as Microsoft rushes a fix into existence, the speed with which it is deployed may indicate its construction is not as refined as others. This is not to say the patch is poorly developed, but the likelihood of an issue with its application (considering variable configurations) is in the realm of probability. This is why automation is so important: it replicates exactly against a desired state across an entire enterprise, meaning the result is the same on the first machine as it is on the 10,000th. The possibility that you have multiple configurations depending on user roles is also a consideration. If an organization has a homogeneous environment, the variables are considerably more manageable and the patch deployment less intrusive. However, with the advent of BYOD, multiple operating systems, shadow IT applications, and other individual differentiators, a smooth, single deployment grows in complexity and personnel hours.

The goal is consistent and consolidated enforcement of the update, and then having the resources to reimage the newly updated machine as the latest ideal state. The fact that this can be done 100% remotely, with the new patched image superseding the previous one at the next reboot, makes the process significantly versatile. It allows you to concentrate on troubleshooting rogue problems rather than having to re-configure workarounds for an entire update.
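One way to picture that "consistent and consolidated enforcement" is a desired-state check every machine must pass before the rollout is considered done. A minimal sketch in Python (the KB identifier and the inventory feed are assumptions for illustration):

```python
# Desired-state verification: every device must report the required patch.
REQUIRED = {"KB2964358"}   # illustrative stand-in for the out-of-band IE fix

def unpatched(inventory: dict[str, set[str]]) -> list[str]:
    """Given {hostname: installed_patches}, list hosts missing required patches."""
    return [host for host, patches in inventory.items()
            if not REQUIRED <= patches]

fleet = {
    "pc-001": {"KB2964358", "KB2919355"},
    "pc-002": {"KB2919355"},              # still missing the IE fix
}
for host in unpatched(fleet):
    print(f"{host}: redeploy the patched ideal-state image")
```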

If an automation process can raise the success rate of this deployment to 95% or better, the time and dollars recovered are significant. For example, if you manage 500 devices and the IE patch fails on 25% of them, that leaves 125 devices in need of manual attention. Typically, we are talking about an hour per unit by a service tech. At a conservative $30/hr, companies can save more than $3,000 (based on negative impact avoidance from this patch instance alone). The return on investment grows if the tech can be reassigned to an alternate task that potentially generates revenue.

The ability to successfully and efficiently apply a patch is crucial, but the capacity to do it against an ideal image and have it distributed across the network is the change best practice. Whether we are discussing a high-profile vulnerability issue like Internet Explorer or the less publicized issues with Java, asserting control over the state of your IT environment is not only a compliance and best-practice necessity, but embedded in the very DNA of your job description.