Maintaining control of the repair/recovery process

One of the hallmarks of a self-healing process is the control applied to the breadth and depth of which parts of the operating system are automatically corrected back to an approved ideal state upon reboot. Driven by requirements such as corporate policies, regulatory compliance, multi-user access, and best practices, this setting varies from company to company and can also be easily modified for various sub-groups within each organization.

To maintain proper control, an automated system must provide multiple levels of repair point control (for example, High, Medium, and Low). A device assigned a Low, Medium, or High repair policy will repair on every boot. However, an individual device (or group) can be assigned a “No Repair” setting to support an on-demand repair policy. This way, IT administration can control if and when a device is returned to the last repair point. In fact, best practices suggest that repair be implemented on demand rather than upon every reboot in order to maintain the continuous integrity of the permitted updates and allowable changes to the image assigned to that specific device.

When engaged to self-heal, Persystent always repairs the registry files (except for keys excluded in filters). During boot-up (after the BIOS loads), the process reapplies the approved image and the Repair Exempt filter. This way, specific files and settings, such as virus definition files, can be appropriately preserved. One of the chief benefits of the Persystent self-healing process is that the device is not reset to a zero-day state, but rather to the last approved repair point. Additionally, the repair process only affects operating system and application files; the user’s data and files are not touched. A user profile is only impacted at the highest level of repair.

Further control of the repair process comes from the flexibility to change settings. The centralized WebUI allows IT administrators to change the repair level for any individual device or group at any time, simply by adjusting the policy setting. This includes adding or modifying excluded files or other policies that can be applied to a named group of devices (e.g., identified by characteristics such as location, department, function, or permission) or defined by an event (e.g., updates, or daily public usage by multiple users). Policies can also govern when returns to the ideal state are scheduled and enforced.

The three levels of Repair:

Low Level Repair

  • Repairs any operating system or application files that are either modified or deleted back to the repair point state.
  • Deletes any new files/folders added in operating system and application folders.
  • User profiles are left intact. All changes in the user’s profile are preserved and not repaired.
  • Any new files/folders created at the root of C:\ will be left intact.

Medium Level Repair

  • Repairs any operating system and application files that are either modified or deleted back to the repair point state.
  • Deletes any new files/folders added in operating system or application folders.
  • User profiles are left intact. All changes in the user’s profile are preserved and not repaired.
  • Any new files/folders created at the root of C:\ are deleted.

High Level Repair

  • Repairs any operating system or application files that are either modified or deleted back to the repair point state.
  • Deletes any new files/folders added in operating system or application folders.
  • User profiles are deleted so that new user profiles will be created when a user logs on.
  • Any new files/folders created at the root of C:\ will be deleted.
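
The three levels above can be summarized in a small decision sketch. This is illustrative only; the function, path lists, and return values below are hypothetical and are not part of Persystent's actual interface.

```python
# Illustrative sketch of the three repair levels described above.
# Names and return values are hypothetical, not Persystent's actual API.

OS_APP_ROOTS = ("C:/Windows", "C:/Program Files", "C:/ProgramData")
PROFILE_ROOT = "C:/Users"

def repair_action(path, change, level):
    """Decide what a repair pass does to a changed path.

    change: 'modified', 'deleted', or 'new'
    level:  'low', 'medium', or 'high'
    """
    if path.startswith(PROFILE_ROOT):
        # Profiles survive Low/Medium repair; High deletes them.
        return "delete profile" if level == "high" else "preserve"
    if path.startswith(OS_APP_ROOTS):
        # OS and application areas are enforced at every level.
        return "delete" if change == "new" else "restore"
    if path.count("/") == 1:  # item directly at the root of C:\
        # New root items survive only a Low-level repair.
        return "preserve" if level == "low" else "delete"
    return "preserve"
```

For example, a new folder created at `C:\` is preserved under a Low-level policy but deleted under Medium or High, while a modified file under `C:\Windows` is restored at every level.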

Which repair setting is best?

Each company has unique compliance, security, device performance, and administrative needs, which is why settings can be adjusted to meet specific requirements. IT administrators can add various policies that control the ability to add or manipulate certain registry entries, files, and services. Many companies enforce a variety of direct and unique policies that apply to a selection of their diverse user profiles. Most organizations use Low and Medium settings for individual PCs based on the considerations noted above. High-level repair settings are typically reserved for publicly accessed devices: classroom, kiosk, and other multi-user machines.

Who chooses the repair point?

You do. Persystent’s imaging capabilities facilitate the creation and management of an image. A snapshot of this image is reapplied during the reboot process. When the repair is initiated, the self-healing (automatic corrective action) follows the repair level rules, exemptions, and filters associated with the individual device or group of devices and applies the last approved ideal state (image).

How often should a new repair point be created?

Best practices dictate that a new repair point be taken immediately after authorized changes are made to the system; this can be automatically scheduled and executed with Persystent Suite. Such changes include Windows Updates, key application updates, installation of new applications, installation of new devices, and so on. This preserves the authorized changes and ensures the integrity of the repair process. Many companies schedule updated images weekly, typically after the application of “Patch Tuesday” or a similar coordinated event. With Persystent, the process is considerably faster in that an entire image is not recreated; the process only identifies and incorporates the changes made since the last approved repair point.

What exactly is repaired?

Depending on the level of control, the self-healing process applies corrective action against operating system and application files. However, this can optionally be expanded to include other files and folders that are not automatically part of the repair point by using the “Repair Point Include Filter” feature. The following is Persystent’s default repair point listing:

Default on Windows Vista, Windows 7/8/8.1
C:\Bootmgr
C:\Bootsect.bak

Captured on Windows Vista/Windows 7/8/8.1
C:\Windows (Excluding C:\Windows\CSC)
C:\Program Files
C:\Program Files (x86)\
C:\ProgramData
C:\Users\Public
C:\Users\Default
C:\Boot
C:\inetpub
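
Conceptually, an include filter extends this default capture list with additional patterns. The sketch below is purely illustrative; the patterns and helper function are hypothetical and do not reflect Persystent's actual filter syntax.

```python
# Illustrative only: how an include filter could extend the default
# capture list above. Patterns and helper are hypothetical.
from fnmatch import fnmatch

DEFAULT_CAPTURE = [
    "C:/Windows/*", "C:/Program Files*/*", "C:/ProgramData/*",
    "C:/Users/Public/*", "C:/Users/Default/*", "C:/Boot/*", "C:/inetpub/*",
]

def is_captured(path, include_filters=()):
    """True if a path would fall inside the repair point."""
    return any(fnmatch(path, pat)
               for pat in list(DEFAULT_CAPTURE) + list(include_filters))
```

With no extra filters, a file such as `C:/LabData/results.csv` would sit outside the repair point; adding an include pattern like `C:/LabData/*` pulls it in.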

The driving force behind Persystent’s multiple levels of repair is to allow for the maximum amount of control by IT while maintaining corporate standards of performance integrity. The flexibility of Persystent provides the right amount of protection, lifecycle expediency and compliance support for every machine under the enterprise umbrella.

With so many potential issues affecting critical systems, from user errors to malware infections to catastrophic failures (the “blue screen of death”), IT departments constantly need to reimage machines from scratch or spend countless hours troubleshooting and repairing. The benefits of self-healing are obvious: it reduces helpdesk calls, promotes faster resolution of issues, and eliminates the need for lengthy manual intervention, but most importantly it maintains a standard of performance through Persystent’s levels of repair.



Don’t forget to wipe! The keys to data sanitization and hard disk erasure

IT teams supporting a modest-sized enterprise (2,500 devices) will retire about 23% of their devices each year. That’s 575 machines a year containing sensitive information. As many companies like to re-purpose these machines, the devices first must go through an end-of-lifecycle transition: from storage of data to reassignment, resale, or donation. If a device is being reassigned from one department to another, it might require a new image, so the previous image with its specific rights and application selection needs a fresh tableau on which to build. If the device is leaving the organization, there can’t be any trace of its prior usage left. NIST agrees:

NIST Special Publication 800-88 Guidelines for Media Sanitization mandates that “in order for organizations to have appropriate controls on the information they are responsible for safeguarding, they must properly safeguard used media.” Taking control of old electronic media means disposing of it in a safe, secure, and compliant fashion.

The decommission process can be lengthy and, with all the daily fires requiring attention, is often considered a lower priority. This is why many companies either have a stack of old devices waiting for retirement in some storage room or outsource to companies that specialize in data sanitization and hard disk destruction.

This year, IT teams will potentially be inundated with retiring devices, considering the sunsetting of Windows XP last April. Because of the cost, many companies have simply opted to invest in brand-new machines with Windows 7 preinstalled rather than face the battle of OS migration. This leaves them to face the problem of decommissioning their old PCs in a way that prevents any significant leakage of sensitive information.

As noted, many companies use outside organizations to handle this aspect of their business. Using our modest-sized enterprise as a model, decommissioning 575 devices can be expensive. Based on industry research, this costs between $30 and $50 per device; for our example company, that is a budget line item in excess of $23,000 for the year. Unfortunately for this company, an additional 12% of their machines, still within their industry-accepted four-year lifecycle, were XP machines. They opted for new units rather than upgrades: another 300 machines, and an additional $12,000. According to Microsoft (The Enterprise PC Lifecycle: Seeing the Big Picture for PC Fleet Management), the breakdown of the service is basically $46 (or as high as $375) per PC, including $12 for archiving data, $12 for sanitizing the hard drive, $8 for reloading the operating system, and $12 to test the PC. Granted, some of this cost is deferred by the potential resale of these units. However, with older, unsupported OSes, donation is more likely.
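
The arithmetic behind these figures is easy to verify; the fleet size and percentages are taken directly from the example above, and the $40 midpoint is inferred from the $30-$50 range.

```python
# Verifying the decommissioning figures for the example enterprise.
fleet = 2500
retired = round(fleet * 0.23)        # devices retired per year
xp_extra = round(fleet * 0.12)       # additional XP-era machines

cost_range = (retired * 30, retired * 50)   # $30-$50 per device
midpoint = retired * 40                      # the "$23,000+" line item
xp_cost = xp_extra * 40                      # the extra $12,000

print(retired, cost_range, midpoint, xp_cost)
# 575 (17250, 28750) 23000 12000
```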

To validate these numbers, I spoke with the VP of IT of a well-known health care plan provider. They routinely spent $25,000, on top of the cost to recycle decommissioned machines, to ensure the sensitive data that may still reside on hard drives was removed. This company is bound by very strict HIPAA compliance requirements in addition to the mandates of a dozen or more accreditation agencies.

If cost is prohibitive, the other option is to do it yourself. Without getting into soft costs and personnel time, there are two other potential hurdles that make this option complicated. First, it can be a fairly lengthy process, which means a resource has been reassigned from higher-value tasks, not to mention the aforementioned daily emergencies. Second, it requires a degree of expertise. Every IT pro worth their salt knows simple file deletion or partitioning is insufficient. Companies must take action that will leave no trace of the previous image or data on a device.

Okay, one last thorn. Your company has the will and bandwidth to re-purpose or decommission end-of-lifecycle devices. Now you must invest in a unique software license to run the shredding/removal process. Besides being another SLA to manage, does the product actually make the process easier? Does it use recognized best practices to remove data, sanitize drives, and replace old images with an approved, “clean” version? Can it accommodate multiple drives simultaneously (such as in a RAID) without having to break the array apart first? And does it allow you to provide certified evidence of data destruction?

It’s almost enough, as one IT pro wrote in a tech forum, “to take a sledge hammer, thermite, and go Office Space on 200 old hard drives. But I have other things to do.”

Whether re-purposing for use in another department, donating, reselling or smashing it to bits with a baseball bat, “wiping” the hard drive is a definitive part of the PC lifecycle. For companies that maintain any sensitive data on the drives (that’s most of them!), it rises to the level of necessity. Companies can reduce the financial impact if their sanitization process is included as a part of another indispensable infrastructure maintenance solution such as configuration or change management. For example, deploy one central solution that handles your entire automated configuration initiative: self-healing restoration, recovery, imaging and patching/updating.

But to make the whole thing effective and worth unifying sanitization with other configuration functions, it has to be fast (on the order of 10 seconds per gigabyte or better). It has to be thorough. It must use one of the two recognized destruction techniques: degaussing, or making every shred of data permanently unreadable by overwriting it. In terms of repurposing and donation, you can then apply a proper, clean, approved image to the “wiped” machine with confidence.
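
The overwriting technique can be sketched in a few lines. This is an illustration against an in-memory buffer, not a production wiping tool; real sanitization software verifies each pass and handles hardware-remapped sectors.

```python
import io

def overwrite(device, size_bytes, passes=1, chunk=1 << 20):
    """Overwrite `size_bytes` of a seekable device with zeros.

    A sketch of the 'make it unreadable by overwriting' technique;
    chunked writes keep memory use flat regardless of drive size.
    """
    for _ in range(passes):
        device.seek(0)
        remaining = size_bytes
        while remaining > 0:
            n = min(chunk, remaining)
            device.write(b"\x00" * n)
            remaining -= n
    device.flush()

# Demonstrate on an in-memory stand-in for a disk.
disk = io.BytesIO(b"sensitive data " * 1000)
overwrite(disk, len(disk.getvalue()))
assert set(disk.getvalue()) == {0}   # nothing recoverable
```

At the 10-seconds-per-gigabyte benchmark mentioned above, a single pass over a 500 GB drive works out to roughly 83 minutes.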

Unification makes a great deal of sense since it leverages other components important to compliance and security. The ability to image/reimage a re-purposed machine without having to expend any more capital is a huge boon. It goes back to that often repeated CIO mantra, try to do more for less.

Persystent Suite, which currently facilitates restoration, recovery, imaging and patch/update migration capabilities in a single centralized solution, recently added “wipe” functionality to its suite in order to help larger enterprises fulfill compliance mandates related to data security and device control. See it here.

WinXP…beyond the sunset

The end has come and gone. Despite the warnings, despite the lack of support, and despite the realization that the operating system is now stagnant, thousands of businesses still have not made the leap from Windows XP. We’ve heard a great many reasons, including: the cost to upgrade is too great; older applications don’t have an update that will work in the new paradigm; we’re planning on it in the nebulous future; and our current configuration works just fine (if it ain’t broke, why fix it!).

Regardless of the excuses, every IT department is eventually going to have to make the move, so organizations are going to have to commit budget to upgrade or replace systems. The following are the top five reasons to make the move sooner rather than later…

#1. Be prepared to lose your security compliance and deal with the pain and suffering sure to follow.

#2. Migrating now will ensure continued access to the vital and vast third-party ecosystem of Windows partners and support organizations.

#3. Pervasive mobility — BYOD, consumerization of IT, always-on computing — is nearly unachievable without the move to Windows 7 or, especially, Windows 8.

#4. If your organization is migrating key applications and services to the cloud, staying on XP much longer will be a huge impediment.

#5. Moving to Windows 7 or 8 now is a far better economic proposition than putting off the inevitable until early 2014.

These aren’t scare tactics or a hard sell to push product, just some friendly advice!

Becoming a trusted adviser: helping clients prevent unforeseen expenses

For a service provider of any kind, the ultimate compliment is to be considered a “trusted adviser” by a client. But this status is more than simply getting a good reference or getting a customer to renew their annual contract. By its very title, a trusted adviser is an outside insider for a company: a consultant an organization depends upon to provide valuable insight on how it can best achieve its stated and latent goals.

For managed service providers, whose very purpose is to ensure various IT infrastructure and applications provide the expected results and value for the client, to become a trusted adviser means you have the responsibility to continually identify and implement ways to improve performance, anticipate challenges and constantly adapt to the transformative nature of technology.

Sounds easy enough. That is what you do, right? But whether you provide network support, security, help desk, or a variety of other key services doesn’t immediately raise you to the level of trusted adviser. It simply means you provide an important service…and we assume you provide it very well.

Part of the trusted adviser’s job description is not only to improve performance, but to do so at the maximum level for minimal cost. The transition from service provider to trusted adviser means you are looking out for your client’s best interest, not just the services they can buy. To accomplish this, MSPs must address one of the biggest cost burdens that can affect the relationship: break/fix issues.

The labor required to manage this portion of the relationship is the biggest drain on margin. Regardless of whether a client purchased full coverage for a monthly fee, uses a capped block of hours, or pays out of pocket for each issue, somebody’s margin is affected when things go sideways. It’s either money (margin) out of the MSP’s pocket or out of the client’s.

It’s not that issues arise; it’s that the labor required to address problems is unpredictable. It could be a five-minute fix or something that takes an application or network offline for an extended period while troubleshooting, fix planning, and the solution are applied.

Nothing erodes trusted adviser status faster than money. This is not to say an MSP should operate as a non-profit, but there are ways to proactively and automatically confront the break/fix issue without either side having to dig deep into profit margin. And, more importantly, provide a reliable means to attack unforeseen issues that eat time, upset productivity, and force reprioritization of potential revenue generating services. This is the road to trusted adviser status.

The ability to break out of “firefighter mode” is the first step to creating lasting value for clients. The less time spent with your hair on fire, the more you can concentrate on tasks that support client business (and add to an MSP partner’s credibility and differentiation). For many MSPs, services fall into six general areas of coverage:

  1. Network Support
  2. Backup and Recovery
  3. Security
  4. End User Support/Help Desk
  5. Compliance
  6. Extra consulting services

The one constant through each of these services is the likelihood that break/fix issues will occur sooner or later. The risk associated with these problems, and the labor required to properly diagnose and repair them, can be mitigated by automated configuration.

This doesn’t suggest a simple recovery tool. Instead of spending hours diagnosing and repairing, systems can self-heal upon reboot. The process takes the client’s ideal image and removes the service issue. It’s simple. It’s automatic. And it removes problems that would otherwise require manual intervention and desk-side visits.

Of course this doesn’t solve every problem, but if it can remove 60-70% of user-inflicted issues (changing critical settings, downloading malicious viruses, making unauthorized application changes, deleting necessary DLL files, disabling BITS, and a thousand other actions that compromise infrastructure integrity), then not only are significant dollars saved and uptime and asset availability increased, but expensive personnel time is freed for higher-value tasks.

There are several other benefits an MSP achieves by including automated self-healing as part of an overall package.

Scheduled versus variable labor: Labor costs take a huge bite out of the scope of service, especially when it comes to break/fix issues. An MSP and its client can create a more fiscally stable relationship through precision budgeting: the client knows how much it is going to pay each and every month, and the MSP gains stable recurring revenue. By using configuration automation and optimization, MSPs can reduce the specter of additional pass-along costs to the client or avoid absorbing the additional expensive labor costs. Now the conversation can move from “how much” to “how to improve” (from reactive to proactive).

Expand geographic reach: Many MSPs operate as regional entities because they do not have the personnel or the budget to adequately cover a larger (or even national) territory. From a cost perspective, self-healing eliminates a great many client visits. Typical on-site services, like device restoration, no longer require a warm body in the room. This, in turn, reduces the need to travel and the associated out-of-pocket time and costs. Without having to hop in a car or on a plane, you can provide effective service to a wider circle of clientele. Now when you visit a client, it is to provide proactive intellectual value and consulting expertise…or simply to take them to dinner to thank them for the business.

Help desk reduction: Research shows that self-healing and rebooting to an ideal state eliminate more than 34% of all inbound help desk issues without manual intervention. Consider that every time the help desk phone rings, it costs $20 (based on the national average). For more serious issues, such as catastrophic device failure, infected operating systems or applications, and unauthorized downloads, the cost is obviously greater, and not just in terms of tech/admin intervention, but in lost productivity and potential loss of client trust. This doesn’t include scheduled maintenance tasks such as patching, updating, and migration, which in themselves require a significant time and resource commitment. By adding a self-healing component to your existing slate of offerings, you reduce the number of help desk calls and, more importantly, allow an MSP’s help desk pros to uncover root causes rather than continually fix symptoms.

Removal of malicious changes: Through maliciousness or carelessness, your client’s network is under constant attack from botnets, malware, viruses, and a variety of other negative influences. Although automatic configuration and reimaging can’t prevent Stan from sales from downloading a suspect app, or stop an organized element in Eastern Europe from worming into a system, the continuous maintenance and reapplication of an ideal state can prevent lingering damage. Any time an unauthorized outside influence tries to change a registry entry, attach itself to a file, or embed itself in a supported application, the system rejects these modifications in favor of the ideal state…in real time. From an MSP perspective, this avoids the downtime needed to cleanse a network and helps preserve the continuity of critical information.

Of the six general service areas mentioned, it is obvious how configuration/recovery/repair/reimage automation can help with issues related to the network, backup, and end users; however, some question the value to those who provide security and compliance services. The answer is simple. Although not a traditional security solution, it not only demonstrates control over network assets (as required by SANS, HIPAA, PCI, and others), but keeps the operating environment running smoothly over the course of the lifecycle.

Because a trusted adviser is more interested in a long-term relationship than in any short-term gains, it is imperative that MSPs find and propose new and innovative solutions to include within their base services. If clients consider an MSP’s service a commodity, then it is very simple to find another provider.

The difference between an expert and a trusted adviser really comes down to a single attribute: an expert provides good answers. A trusted adviser asks good questions. Can you reduce costs while increasing your quality of service?

Learn how to answer that question at www.utopicsoftware.com

Utopic presents Top 5 benefits for self-healing IT

A video blog entry!

Utopic presents its Top 5 benefits for adopting a self-configuring, self-optimizing, self-protecting and, of course, self-healing process for an enterprise IT landscape. Self-healing describes the ability to perceive that an IT system or device is not operating correctly and, without human intervention, make the necessary adjustments to restore itself to normal operation.

How schools can combine academic success, admin effectiveness and efficient technology without breaking the bank

Years ago, to get rid of “data,” all a teacher needed to do was erase the blackboard. Now, as most schools depend on multi-user computer labs, provide a multitude of devices, and maintain highly integrated technology networks, the need to preserve a relatively homogeneous environment not only supports effective teaching platforms but is critical to maintaining a compliant and secure landscape.

In many ways, educational institutions are like large corporations which support many employees, partners, vendors, and clients. Students, teachers, administrators, even parents require access to certain assets, use a variety of devices and, therefore, create a degree of chaos if not properly managed.

The most blatant source of breaches and other problematic issues is the computer lab. A single device is typically used by multiple people throughout a single day. Despite stated log-in/out protocols and usage restrictions, the ability to make unauthorized changes that might affect the entire network is heightened. Multiply this exponentially by the number of devices in a single lab, and the number of labs across a campus or campuses. That’s a lot of fingerprints at the crime scene, Sherlock!

However, many campus IT professionals are finding that maintaining a “working state” for each classroom and lab is an effective way to battle user carelessness, intentional damage, and lingering static entries. This “working state” is a controlled and uncorrupted alpha version of the ideal image. By simply rebooting a machine, any changes made during the session are reverted to the ideal state.

For the past several years, a large central Florida state college (four campuses and 30,000 enrolled students) has applied this strategy to the majority of its controlled devices. After surviving a prolific virus, the college’s IT staff recognized that the labs using ideal-state imaging self-healed after a single reboot, as if the problem never existed. The other devices across the campuses (those without Persystent protection) required a very time-consuming and invasive reimaging process. The cost was in the tens of thousands.

More than a layer of security, the “working state” also wipes the slate clean after every class. Several of the college’s Computer Science courses are hands-on.  Course curriculum requires students to install or manipulate applications, adjust registry settings, and re-engineer portions of operating systems. Once class is dismissed, another class arrives for a similar lesson. If the system cannot be returned to a “working state,” each subsequent class is building on the work of previous and creating a Gordian Knot of code and chaos. The same strategy that keeps the device free from intrusion is also responsible for maintaining the ideal state on which to build curriculum. A simple reboot gets the device ready for the next class. And, when it comes time to update, upgrade or patch certain applications, the same time-saving process applies.

Schools are not only centers of learning but also employ large and diverse staffs. In fact, many universities and school districts are the largest employers in their counties and regions. In this respect, their IT environment must perform like that of a corporation. Each role, from the cafeteria manager ordering more French fries, to the financial aid officer running credit checks, to the professor accessing a content portal for submitted homework, requires different applications and access points. Therefore, a variety of images need to be maintained toward the goal of administrative efficacy as well as state and federal compliance. The overarching issue here is not the continuous maintenance of an image, but the cost-effective way it needs to be propagated.

If an organization the size of the central Florida college can expect to experience more than 100,000 issues per year, the annual projected costs would be north of half a million dollars in technology and personnel expenses. Yet by applying an automatic, self-healing reboot to an ideal state, time-consuming issues such as troubleshooting, break/fix application, updates/upgrades, and migration can be greatly mitigated. This is not to imply that technology and administrative problems disappear, but the savings toward their remediation are noteworthy and immediately impactful.
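
Those projections can be sanity-checked with a quick model. The blended per-issue cost below is an assumption chosen to match the "north of half a million" figure; the issue count and the 75% reduction cited later in this article are from the text.

```python
# Sanity-checking the college's projected issue costs.
issues_per_year = 100_000
blended_cost = 5.50        # assumed blended $/issue (hypothetical)
reduction = 0.75           # share of incidents self-healing removes

annual_cost = issues_per_year * blended_cost     # "north of half a million"
remaining_cost = annual_cost * (1 - reduction)

print(annual_cost, remaining_cost)   # 550000.0 137500.0
```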

For instance, educational institutions are as open to abuse as any corporation; sometimes even more so. With so many access points and devices, managing the landscape is like trying to herd cats. Beyond the administrative issues, there are those who, for one reason or another, engage in what we’ll call “mischief.” Again, the aforementioned Florida school serves as our example. Last year, they experienced an internal breach: someone hacked the local administrative passwords on several devices in their campus library and changed them. This went on for weeks as the culprits moved from machine to machine. Obviously, this made it impossible for anyone to use the affected devices and opened a precarious crack in the security of the network. When discovered, it required a technician to be dispatched to apply a restored image with the original configuration. With automated configuration in place, the damage was extremely limited and the personnel cost dropped to almost zero.

In terms of refreshing configuration, laptops, tablets, and desktops are not the only uniquely educational devices that require monitoring. Many professors are using audience response systems like iClicker to interact with large auditorium classes. These items are like mini handheld computers that use the password-protected network to connect and respond to a professor’s presentation. With potentially hundreds of devices in the hands of students requiring direct access, the issues with passwords, privileges, settings, and other technical matters are compounded.

Most educational institutions are cash-strapped or, at the very least, conservatively cost-conscious. It is the successful schools that find and incorporate improved ways to meld academic success, administrative effectiveness, and an efficient technology backbone, all without breaking the bank. The easiest way to accomplish this lofty goal is to reduce the incidents that potentially impact productivity, drain budgets, and force reassessment of IT priorities. The self-healing properties of automated configuration management allow such incidents to be reduced by more than 75%. If that is truly the case (and it is!), there are more resources and more time to devote to higher-value tasks.

The issues facing the college in our example are the same ones that vex Harvard, USC, Palomar Community College, and John F. Kennedy High School. Technology, with all of its peaks and pitfalls, is increasingly integrated into managing, applying, and interacting with curriculum. Keeping this space optimally operational is not a best practice; it is Job One.

Addressing configuration management in such a way is like boats on a lake after a rain: the water causes everything to rise in equal measure. An automated backend reduces help desk calls, satisfies administrative compliance through the demonstration of certain controls, extends the lifecycle of hardware and software, and facilitates student learning.

Though each school may have unique applications and approaches, the need to maintain a continuous ideal state is a common way to fulfill the promise of ensuring the curriculum relies on the teacher, not the technology.

Learn more at www.utopicsoftware.com 

If anti-virus is dead…then what?

How configuration automation fills the vulnerability gap.

Earlier this month, the progenitors of anti-virus software declared that “anti-virus is dead” (Wall Street Journal, May 4, 2014). According to Symantec and other industry-leading statistics, the software designed to prevent malware, spyware, and other intrusive tactics is doomed to failure. They say that anti-virus catches only 45% of threats.

The battle is being lost because prevention and protection are always two steps behind. As fast as someone comes up with a preventive signature, six more even nastier bugs are developed and released on unsuspecting networks. It is said that 95% of all networks (source: FireEye and ThreatSTOP) have some sort of active infection.

To add fuel to the fire, IT security thought guru Eugene Kaspersky recently said: “The single-layer signature-based virus scanning is nowhere near a sufficient degree of protection – not for individuals, not for organizations large or small.”

The barbarians may be at the gates, but it’s not all doom and gloom. Many IT pros, particularly those at mid-tier and larger enterprises, recognize that security best practices are not tied solely to firewall protection, but rather to an interoperable combination of key functions.

The defenses may be in place, but the war is still not being won. An organization may be continuously monitoring, correlating, provisioning, authenticating and blocking, but too many companies are not taking advantage of what makes security more effective and more pervasive across the enterprise. What is missing is automation.

Let’s return to the company that depends heavily on anti-virus to prevent breaches and other negative-impact events. If Symantec is a credible source, then this company needs a new and innovative way of maintaining a safe and secure environment. Let’s also assume that even with a stack of other security tools, phishing, botnets and malware will always find a way to breach the network. If multinationals like Citibank, eBay, Target and Sony struggle with breaches, then the likelihood is that you do as well (sources say 78% of organizations experienced a breach in the past two years). What needs to happen is automatic protection.

In the absence of anti-virus protection, or more likely in support of it, initiating some sort of automated repair/recovery program is a progressive alternative growing in acceptance and popularity. It is based on the continuous maintenance of an ideal state. This way, any time an unauthorized outside influence tries to change a registry, attach itself to a file, or embed itself in a supported application, the system rejects these modifications in favor of the ideal state. After every PXE reboot of a workstation or device, the automated system reapplies the latest approved image.
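The reject-and-reapply cycle described above can be illustrated with a minimal sketch (assumed logic for illustration only, not Persystent’s actual implementation): snapshot an approved repair point, then on each boot restore any managed file whose hash has drifted, skipping paths on a Repair Exempt list (e.g. virus definition files).

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_repair_point(managed: Path, store: Path) -> dict:
    """Snapshot the approved state: copy each managed file into the
    repair-point store and record its hash in a manifest."""
    store.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for f in managed.rglob("*"):
        if f.is_file():
            rel = f.relative_to(managed)
            dest = store / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)
            manifest[str(rel)] = file_hash(f)
    return manifest

def repair(managed: Path, store: Path, manifest: dict, exempt=()) -> list:
    """On 'boot', restore any file whose hash has drifted from the
    approved state, skipping Repair Exempt paths. Returns the list
    of repaired relative paths."""
    repaired = []
    for rel, approved in manifest.items():
        if rel in exempt:
            continue  # preserved across repairs (e.g. virus definitions)
        target = managed / rel
        if not target.exists() or file_hash(target) != approved:
            shutil.copy2(store / rel, target)
            repaired.append(rel)
    return repaired
```

An unauthorized edit to a managed file is reverted on the next pass, while exempted files keep their newer contents, mirroring how the Repair Exempt filter preserves approved updates.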

Within this scenario, any infection introduced after the last boot-up is eliminated. Case in point: an inside salesperson uses your network and internet connection to reach their personal email account. They see a new email from a friend: “U should see this.” Thinking the friend is a trustworthy source, they open the email and click the link within. The website redirect seems harmless enough, a picture of puppies or a video of a skateboarder miserably miscalculating an airborne trick. However, on the next click a Secure Shield dialog box appears and warns the user that their network is in danger. Believing they are a good steward of the company, they click the link to load the “security update.” And as fast as that, the ransomware makes their device a paperweight.

Without automation, a help desk tech will probably spend several hours diagnosing and then manually restoring the hacked registry. Even if a fresh image is available, there is still the manual intervention of reapplying specific user settings, applications and privileges based on the business need, corporate policies and organizational role. Then there is an even greater time commitment spent investigating whether the issue has spread beyond the single device or evolved into a greater threat. One moment of carelessness creates hours and hours of IT involvement, QA/testing and re-verifying compliance requirements. This doesn’t include the lost productivity, potential risk and cost this threat poses to the entire network.

The same scenario using repair/recovery automation doesn’t prevent the recklessness, but it does prevent the mistake from spreading further. All the user needs to do is turn the machine off and back on. This applies a fresh ideal state. The ransomware and any other unauthorized changes are gone… automatically, without IT intervention. More importantly, the ideal image is configured for the individual (or their role). The image maintains their applications, settings, latest updates and other unique components, so the system lifecycle is perpetuated, uninterrupted and remains firmly under IT’s control.

Real-time configuration management security also supports compliance, considering that several of the SANS critical controls (which serve as the basis for more than three dozen regulatory compliance mandates) are maintained through proper configuration and demonstrated control. For example, PCI DSS requires: “2.2 Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.” SANS simplifies this to mean “Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers.” The ability to continuously maintain an ideal state for a variety of roles is the key to ensuring assets are only available to appropriate users. If each device is covered under a Repair/Recovery/Reimage configuration protocol, then (as HIPAA 11.0 demands) you are demonstrating control over data. The system cannot accept unauthorized changes (as detailed in your organization’s standards and policies) to the registry, applications or files. This is not to say an organization can forgo provisioning, log archiving, firewall reinforcement or authentication, but automating configuration puts another proverbial brick in your defense wall.
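Demonstrating control over a configuration standard amounts to continuously comparing each device against an approved baseline and reporting every deviation. A minimal sketch of that audit step follows; the setting names and values are illustrative placeholders, not drawn from PCI DSS, SANS or any specific hardening guide.

```python
# Hypothetical hardening baseline: the approved value for each
# tracked setting (names and values are illustrative only).
BASELINE = {
    "password_min_length": 12,
    "firewall_enabled": True,
    "autorun_disabled": True,
    "rdp_enabled": False,
}

def audit(device_settings: dict) -> dict:
    """Compare a device's reported settings against the baseline.
    Returns {setting: (expected, actual)} for every deviation,
    so an empty dict means the device matches the standard."""
    return {
        key: (expected, device_settings.get(key))
        for key, expected in BASELINE.items()
        if device_settings.get(key) != expected
    }
```

A compliant device audits clean, while a device with, say, RDP unexpectedly enabled produces a deviation record that can drive the repair policy for that device or group.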

Security requires attention 24/7…

“If I can cut that in half, we’re talking a staggering amount of money,” says Bruce Perrin, CIO for Florida-based Phenix Energy Group. “Seventy percent of what security professionals do could be done completely automatically, giving them more time to do things that are more important.”

In a recent IDG Research survey, 62 percent of respondents indicated they automate less than 30 percent of their security functions. For most companies, that turns out to be a great deal of manual personnel hours. Hackers don’t sleep, so why should your security? Unless an IT department is staffed around the clock, there is a certain amount of time when users are on their own. And the most blatant issues (the ones that make headlines) don’t start as brute-force attacks; they are sneaky, insidious threats that can lie dormant for days or months (like Heartbleed), so that middle-of-the-night emergency call may never come until it’s too late. By automating the configuration break/fix process, organizations remove a significant burden.

For example, a unified school district in Central Florida manages student computer labs with more than 2,000 PCs. They conservatively estimated that each PC experiences some sort of break/fix incident every 90 days (and that 5% experience a catastrophic failure each year), with each incident requiring about one hour of manual intervention. This equated to approximately 7,450 man-hours over the course of the year. Also relevant to the ROI: the average downtime of each machine was at least four hours from report to resolution. When they applied an automated process, break/fix issues were reduced by 90%. This saved 9,450 hours and slightly less than $16,000 per month in costs ($191,121 per year).
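As a rough sketch of how such an estimate is assembled: the inputs below are the figures quoted above, but the hourly labor rate is an assumption (the article does not state one), and the article’s own published totals do not reconcile exactly, so treat the arithmetic as illustrative only.

```python
# Back-of-envelope break/fix ROI using the district's quoted figures.
# NOTE: hourly_cost is an assumed loaded labor rate, not a figure
# from the article, so the dollar result is illustrative only.
pcs = 2000
incidents_per_pc_per_year = 365 / 90   # one incident every 90 days
hours_per_incident = 1
reduction = 0.90                       # automation cuts break/fix by 90%
hourly_cost = 20.0                     # assumption

annual_hours = pcs * incidents_per_pc_per_year * hours_per_incident
hours_saved = annual_hours * reduction
annual_savings = hours_saved * hourly_cost
```

With these inputs the model yields roughly 8,100 incident hours per year and about 7,300 hours saved, in the same ballpark as (though not identical to) the figures the district reported.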

Automation also promotes the ability to respond to higher-value threats in a shorter amount of time. And if you can reduce the number of security incidents through automation, you reduce the risk of data loss, which can amount to staggering sums given the potential cost of a single breach.

Configuration management (Repair/Recovery/Reimage) may not be a traditional security solution, but as an automated component of a larger initiative, it enables key security features that not only satisfy compliance requirements but keep the operating environment running smoothly over the course of the lifecycle. For that reason alone, it should be included in any organization’s next-generation security arsenal.

Learn more at www.utopicsoftware.com