VMware View Price Increase

The price of VMware View will increase by 10% on September 28, 2012. Therefore, if you have been looking to deliver virtualized desktop services from the cloud but have not yet pulled the trigger, now is the time to do so. If you would like to learn more about VMware View or other IT management products, contact NMGI at (620) 664-6000.


Ensure the Availability of Critical Applications in a Virtual Environment


A recent high-profile cloud computing outage that temporarily knocked out a number of popular websites served as a reminder that, while cloud outages are rare, they can happen. Although service was restored for many of the sites later the same day, the incident “sent a chill” through the cloud community, according to one analyst.¹


The outage also underscored many of the findings of the most recent Symantec Disaster Recovery Study, which found that, for today’s organizations, the growing challenge of managing disparate physical, virtual, and cloud resources is adding more complexity to their environments and leaving business-critical applications and data unprotected.

Continue reading to learn about the specific management challenges posed by virtualization and the cloud and the steps your organization can take to help reduce downtime.

Too many tools, not enough protection
The Symantec survey, which polled more than 1,700 IT managers in large organizations across 18 countries, provides ample evidence that virtual systems are not being properly protected. And this comes at a time when respondents reported that between one-fourth and one-third of all applications are in virtual environments.

For example, the survey found that nearly half of the data on virtual systems is not regularly backed up, and only one in five respondents use replication and failover technologies to protect their virtual environments. Respondents also indicated that 60% of virtual servers are not covered in current disaster recovery plans. That’s up significantly from 45% reported by respondents in 2009.

Another key finding: Using multiple tools to manage and protect applications and data that reside in virtual environments causes major headaches for data center managers. In particular, nearly 60% of respondents who encountered problems protecting business-critical applications in physical and virtual environments said this was a major challenge for their organization. As one data center manager for an automotive company put it: “If I knew of a tool that would do everything for us, I’d be happy to take a look at it.”

Approximately two-thirds of the respondents said security was their main concern about putting applications in the cloud. However, the biggest challenge respondents face when implementing cloud computing or cloud storage is the ability to control failovers and make resources highly available.

Best practices to reduce downtime
Symantec believes data center managers should simplify and standardize as much as possible so they can focus on fundamental best practices that help protect critical applications and reduce downtime:

  • Treat all environments the same. Ensure that business-critical data and applications are treated the same across environments (virtual, cloud, physical) in terms of DR assessments and planning.

  • Use integrated tool sets. Using fewer tools to manage physical, virtual, and cloud environments will help organizations save time and training costs and help them to better automate processes.

  • Simplify data protection processes. Embrace low-impact backup methods and deduplication to ensure that business-critical data in virtual environments is backed up and efficiently replicated off campus.

  • Plan and automate to minimize downtime. Prioritize planning activities and tools that automate and perform processes that minimize downtime during system upgrades.

  • Identify issues earlier. Implement solutions that detect issues early, reduce downtime, and speed recovery so that results stay in line with business expectations.

  • Don’t cut corners. Organizations should implement basic technologies and processes that protect in case of an outage, and not take shortcuts that will have disastrous consequences.

Why you need to monitor the health of an application
When it comes to ensuring the high availability of business-critical applications, today’s IT organizations have little margin for error. Recent research illuminates the extremely tight parameters that businesses are working with. According to a report by the Enterprise Strategy Group, respondents said their organizations would suffer “significant revenue loss or other adverse business impact” if their business-critical applications were unavailable for an hour or less; some could tolerate no downtime at all.²

Of course, ensuring the availability of business-critical applications means more than just ensuring that the virtual machine is running. Just because the virtual machine is available doesn’t mean the application is running properly. While VMware HA provides a robust mechanism to detect failures of infrastructure components, there’s still the question of monitoring the health of an application running within a virtual machine.
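To make the distinction concrete, a minimal sketch of in-guest application health monitoring might look like the following. The health-check URL, probe logic, and restart hook here are hypothetical illustrations of the general technique, not Symantec’s ApplicationHA implementation:

```python
import urllib.request

def probe_http(url, timeout=2):
    """Return True if the application answers its health endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def monitor_once(probe, restart, max_failures=3):
    """One monitoring cycle: retry the probe; restart after repeated failure.

    probe   -- callable returning True when the application responds
    restart -- callable invoked once max_failures probes in a row fail
    Returns "healthy" if any probe succeeded, "restarted" otherwise.
    """
    for _ in range(max_failures):
        if probe():
            return "healthy"
    restart()
    return "restarted"

# Hypothetical usage inside the guest OS:
#   monitor_once(lambda: probe_http("http://localhost:8080/health"),
#                restart=restart_app_service)
```

The key point the sketch illustrates is that the probe targets the application itself, so a hung or crashed application is detected even while the virtual machine, which is all hypervisor-level HA sees, still appears healthy.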

Symantec has extensive experience monitoring an application’s state and reacting accordingly in the event of an application failure. ApplicationHA, Symantec’s high availability solution for VMware virtual environments, provides application visibility and control while monitoring the health of an application running within a virtual machine.

The latest release of ApplicationHA enables administrators to monitor the health of hundreds of applications at once across their VMware environment via a dashboard.

At the same time, ApplicationHA’s deep integration with VMware vCenter Site Recovery Manager helps organizations address the challenges of traditional disaster recovery so that they can meet their Recovery Time Objectives, Recovery Point Objectives, and compliance requirements. With ApplicationHA and Site Recovery Manager, organizations can quickly manage failover from their production data centers to disaster recovery sites and ensure their applications are running in the event of a disaster.

As more and more IT organizations adopt new technologies such as virtualization and cloud computing to reduce costs and enhance disaster recovery efforts, they’re adding more complexity to their environments and leaving business-critical applications unprotected. These organizations should strongly consider adopting tools that provide a holistic solution across all environments. Data center managers could then focus on fundamental best practices to help reduce downtime.

To learn more, view the Symantec webcast, “Virtualize Business Critical Applications with Confidence Using ApplicationHA.”

Used with permission from Symantec

¹ “Amazon gets ‘black eye’ from cloud outage,” Computerworld, April 21, 2011
² “2010 Data Protection Trends,” Enterprise Strategy Group, April 2010


Five Challenges You Need to Address About “Going Virtual”


Has any topic attracted more attention than virtualization in IT departments these days? It’s not likely. Research bellwether Gartner Inc. has dubbed virtualization “the highest-impact issue changing infrastructure and operations through 2012.”¹


With adoption that widespread, it’s not surprising that virtualization introduces new challenges as IT environments become a mix of physical and virtual systems. This Tech Brief looks at five key challenges that you need to be aware of as you deploy – or continue to deploy – virtualization.

Challenge #1: Can you manage both physical and virtual platforms efficiently?
Management software and processes that work in the physical server environment don’t always work in a virtual environment. That can lead to a number of complexities and inefficiencies, such as higher costs for training, software, and operations. According to Symantec’s most recent IT Disaster Recovery Research Report, 35% of respondents cited “too many different tools” as the biggest challenge in protecting mission-critical data and applications in physical and virtual environments. Managing both environments on one platform with a single set of tools reduces “sprawl.”

Challenge #2: Are you maximizing your storage investments as you deploy storage for virtual environments?
Gartner is on the record as stating that “server virtualization solutions and projects often reduced storage visibility, management, utilization and subsequently increased storage costs.”² The fact is, storage management in virtual environments is more challenging. Make sure your storage management strategy spans both environments and can provide end-to-end visibility, monitoring, analysis, and active testing.

Challenge #3: How confident are you about the processes for backing up and recovering your virtual machines and their data?
As effective as virtualization is for maximizing server utilization, it can create problems for data protection. The proliferation of servers can make backup configuration more time-consuming, increase storage requirements, and complicate backups and restores. Support for advanced technologies such as off-host backup or block-level incremental backup becomes critical to overcoming the performance and bandwidth constraints associated with virtual environments. Data deduplication can reduce the storage required for backups and disaster recovery.
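The storage savings from deduplication come from storing each unique block of data only once. The following toy sketch illustrates the idea with fixed-size blocks and content hashing; it is a simplified illustration of the technique, not any particular vendor’s implementation:

```python
import hashlib

def dedupe_blocks(data, block_size=4096):
    """Split data into fixed-size blocks, storing each unique block once.

    Returns (store, recipe): store maps block hash -> block bytes,
    and recipe is the ordered list of hashes needed to rebuild the data.
    """
    store, recipe = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # identical blocks share one entry
        recipe.append(digest)
    return store, recipe

def reassemble(store, recipe):
    """Rebuild the original data from the block store and the recipe."""
    return b"".join(store[d] for d in recipe)
```

Because many virtual machines are cloned from the same templates, their disk images share a large proportion of identical blocks, which is why deduplication tends to pay off especially well in virtual environments.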

Challenge #4: When applications running in your virtual machines fail, are you alerted?
You don’t hear much about it, but virtualization can decrease application visibility and recoverability. That’s because native server virtualization HA tools usually lack the ability to monitor the health of the applications running inside virtual machines. So if there’s an application failure, no action is taken to remediate the problem. Also, native disaster recovery features don’t completely automate recovery at the DR site, and they don’t make failback to the production site easy. This can mean longer downtime, which is unacceptable if these happen to be mission-critical apps. Make sure your HA/DR solution can detect and automate the failover of applications on both virtual and physical servers.

Challenge #5: Do you know what needs to be considered when moving from physical to virtual endpoints?
Multiple configurations and computing models are the norm in today’s enterprise. Desktops and laptops, rich clients and thin clients, physical desktops and virtual desktops, shared systems and dedicated systems all have their place. To be successful, endpoint virtualization, the next big wave after server virtualization, must be approached as part of an overall strategy to decrease PC total cost of ownership and increase end-user productivity.

Virtualization can provide a host of benefits – if you do the proper up-front planning. And that means making sure your virtualization strategy doesn’t stint on storage management, data protection, High Availability/Disaster Recovery, and endpoint virtualization.

¹ “How to Reduce Your PC TCO 30% in 2011,” Troni, Gammage, Silver, March 20, 2009
² “Gartner Magic Quadrant for Storage Resource Management and SAN Management Software,” Filks, Passmore, June 22, 2009


Used with permission from Symantec