Security Best Practices

In light of the recent security breach of the LinkedIn website and passwords, NMGI would like to remind clients to take every measure possible to ensure the safety of their information.

If you are not sure where to start, we have listed some best practices to ensure account security and privacy:

Changing Your Password:

  • Never change your password by following a link in an email that you did not request, since those links might be compromised and redirect you to the wrong place.
  • If you don’t remember your password, you can often get password help by clicking on the Forgot password link on the Sign in page of most websites.
  • To keep your passwords effective, update your online account passwords regularly, at least once a quarter.

Creating a Strong Password:

  • Use encrypted password management software to keep track of all of your passwords.
  • Variety – Don’t use the same password on all the sites you visit.
  • Don’t use a word from the dictionary.
  • Length – Use 10 or more characters; longer passwords are much harder to guess.
  • Think of a meaningful phrase, song or quote and turn it into a complex password using the first letter of each word (a small example follows this list).
  • Complexity – Randomly add capital letters, punctuation or symbols.
  • Substitute numbers for letters that look similar (for example, substitute “0” for “o” or “3” for “E”).
  • Never give your password to others or write it down.
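
For readers who want to see these tips combined, here is a minimal sketch that turns a memorable phrase into a complex password using the first letter of each word, look-alike substitutions, and added punctuation. The sample phrase, the substitution table, and the minimum length are illustrative assumptions, not a prescribed standard.

```python
# Sketch: derive a complex password from a memorable phrase, following the
# tips above (first letters, number/symbol substitutions, added punctuation).
# The substitution table and sample phrase are illustrative assumptions.
import secrets
import string

SUBSTITUTIONS = {"o": "0", "e": "3", "i": "1", "a": "@", "s": "$"}

def phrase_to_password(phrase: str, min_length: int = 10) -> str:
    # Take the first letter of each word, preserving its original case.
    letters = [word[0] for word in phrase.split()]
    # Swap in look-alike digits and symbols for some letters.
    chars = [SUBSTITUTIONS.get(c.lower(), c) for c in letters]
    password = "".join(chars)
    # Pad with random punctuation until the minimum length is reached.
    while len(password) < min_length:
        password += secrets.choice(string.punctuation)
    return password

if __name__ == "__main__":
    # Prints a derived password such as "3gbdf03$T@3"
    print(phrase_to_password("Every good boy does fine on every single Tuesday at 3"))
```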

A few other account security and privacy best practices to keep in mind are:

  • Sign out of your account after you use a publicly shared computer.
  • Keep your antivirus software up to date.
  • Don’t put your email address, home address or phone number on public profiles.
  • Only connect to people you know and trust.
  • Report any privacy issues to Customer Service.

*Modified from LinkedIn.com


FREE Data Protection Webinar with Randy Johnston

50% of businesses that lose data file for bankruptcy within 10 days!

How much revenue would your business lose if you lost access to your network data? Many small businesses without data protection services never recover from a catastrophic data loss, even when the loss lasts only a day. From accounting records to company files, your business relies on continual access to its data. According to research by K2 Enterprises, 50% of companies without a data recovery and system recovery plan that experience a data loss of 10 days or more file for bankruptcy immediately, and 93% file for bankruptcy within one year. Even with operations at a sudden halt and revenue lost, your operating expenses continue. A comprehensive system recovery plan is a vital asset that ensures your business is not stopped in its tracks in the event of a disaster.
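
To put the revenue question in concrete terms, here is a back-of-the-envelope calculation. The annual revenue, operating hours, and outage length are purely hypothetical assumptions for illustration, not figures from K2 Enterprises or NMGI.

```python
# Back-of-the-envelope downtime cost estimate. All figures below are
# hypothetical assumptions, not numbers from the article.
annual_revenue = 2_000_000          # assumed yearly revenue in dollars
business_hours_per_year = 250 * 8   # assumed 250 working days of 8 hours
revenue_per_hour = annual_revenue / business_hours_per_year

outage_hours = 3 * 8                # an assumed three-day loss of access
lost_revenue = revenue_per_hour * outage_hours

print(f"Revenue at risk per hour of downtime: ${revenue_per_hour:,.0f}")
print(f"Estimated revenue lost in a 3-day outage: ${lost_revenue:,.0f}")
```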

Network Management Group Inc. (NMGI) develops and hosts a wide variety of data protection services. In the event of a catastrophic server failure or natural disaster, our remote servers provide a backup of your server and can reproduce your data in as little as 10 minutes for files, 30 minutes for individual servers, and 24-48 hours for entire systems. Compare this to conventional backups, which may take several days for a full system recovery. This rapid recovery helps you resume business operations as quickly as possible and minimizes lost revenue.
Remote backup is one of the easiest ways to protect your company’s data.

Don’t take chances with your company’s data. A fully integrated disaster recovery solution will give your company peace of mind. Network Management Group Inc. (NMGI) will host a FREE webinar that you are cordially invited to attend. In this FREE, informative data protection webinar, Randy Johnston, a leading industry expert, will discuss topics essential for maintaining the success of your company and avoiding a potential disaster, including data protection, server backup, system recovery, and backup and recovery planning.

Don’t leave your company’s future to chance!
Register for your free, no obligation data protection webinar today! http://www.nmgi.com/backup

You can also call (620) 664-6000 for immediate help!


Is your business prepared for an emergency?

Disasters happen. Is your business ready?

Preparing for an emergency is a key factor in business continuity after a disaster. Wherever the threat comes from – whether it’s physical, virtual, a network failure or cybercrime – it’s important that your business is equipped to deal with the problem.

In fact, the U.S. Department of Labor estimates that over 40 percent of businesses never reopen following a disaster. Given these potential consequences, it’s important to have a disaster preparedness plan ready. In creating such a plan, Cindy Bates, Microsoft US SMB Vice President, recommends you consider the following: [Read more…]


Disruptive Technology


Can you make it to the end of this article without getting distracted?

I admit, in writing an article advocating less technology, I feel as sheepish as Anheuser-Busch must feel posting the “Enjoy Responsibly” tag at the end of one of their million-dollar Super Bowl commercials. But this problem of office distractions has been gnawing at me since Outlook began popping up email notifications some 10+ years ago. Ding! Hey, there’s another one… [Read more…]


Why You Need a Managed Services Provider

Information technology (IT) systems are expected to meet high standards of operation, while offering 24/7 availability, security, and performance. In today’s environment, you have to keep pace with the constant changes in IT, performance demands, and pressure to deliver competitive IT functionality. To meet these challenges, many organizations consider outsourcing their IT activities to be an attractive option.

What is a Managed Services Provider?

A Managed Services Provider (MSP) is a company to which you delegate specific IT operations. The MSP is then responsible for monitoring, managing, and/or resolving problems for your IT systems and functions. [Read more…]


Ensure the Availability of Critical Applications in a Virtual Environment


A recent high-profile cloud computing outage that temporarily knocked out a number of popular websites served as a reminder that, while cloud outages are rare, they can happen. Although service was restored for many of the sites later the same day, the incident “sent a chill” through the cloud community, according to one analyst. ¹

 

The outage also underscored many of the findings of the most recent Symantec Disaster Recovery Study, which found that, for today’s organizations, the growing challenge of managing disparate physical, virtual, and cloud resources is adding more complexity to their environments and leaving business-critical applications and data unprotected.

Continue reading to learn about the specific management challenges posed by virtualization and the cloud and the steps your organization can take to help reduce downtime.

Too many tools, not enough protection
The Symantec survey, which polled more than 1,700 IT managers in large organizations across 18 countries, provides ample evidence that virtual systems are not being properly protected. And this comes at a time when respondents reported that between one-fourth and one-third of all applications are in virtual environments.

For example, the survey found that nearly half of the data on virtual systems is not regularly backed up, and only one in five respondents use replication and failover technologies to protect their virtual environments. Respondents also indicated that 60% of virtual servers are not covered in current disaster recovery plans. That’s up significantly from 45% reported by respondents in 2009.

Another key finding: Using multiple tools to manage and protect applications and data that reside in virtual environments causes major headaches for data center managers. In particular, nearly 60% of respondents who encountered problems protecting business-critical applications in physical and virtual environments said this was a major challenge for their organization. As one data center manager for an automotive company put it: “If I knew of a tool that would do everything for us, I’d be happy to take a look at it.”

Approximately two-thirds of the respondents said security was their main concern about putting applications in the cloud. However, the biggest challenge respondents face when implementing cloud computing or cloud storage is the ability to control failovers and make resources highly available.

Best practices to reduce downtime
Symantec believes data center managers should simplify and standardize as much as possible so they can focus on fundamental best practices that help protect critical applications and reduce downtime:

  • Treat all environments the same. Ensure that business-critical data and applications are treated the same across environments (virtual, cloud, physical) in terms of DR assessments and planning.

  • Use integrated tool sets. Using fewer tools to manage physical, virtual, and cloud environments will help organizations save time and training costs and help them to better automate processes.

  • Simplify data protection processes. Embrace low-impact backup methods and deduplication to ensure that business-critical data in virtual environments is backed up and efficiently replicated off campus (a short sketch of the deduplication idea follows this list).

  • Plan and automate to minimize downtime. Prioritize planning activities and tools that automate and perform processes that minimize downtime during system upgrades.

  • Identify issues earlier. Implement solutions that detect issues, reduce downtime, and recover faster to be more in line with expectations.

  • Don’t cut corners. Organizations should implement basic technologies and processes that protect in case of an outage, and not take shortcuts that will have disastrous consequences.
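
As a rough illustration of the deduplication idea mentioned in the list above, the sketch below splits files into fixed-size blocks, stores each unique block once under its content hash, and keeps a per-file "recipe" of hashes. The block size, in-memory store, and file handling are simplified assumptions, not how any particular backup product works.

```python
# Minimal sketch of block-level deduplication: identical blocks are stored
# once and referenced by their content hash. Block size and the in-memory
# store are illustrative assumptions.
import hashlib

BLOCK_SIZE = 4096          # bytes per block; real products tune this
block_store = {}           # hash -> block contents (the deduplicated store)

def backup_file(path: str) -> list[str]:
    """Split a file into blocks, store only unseen blocks, return the recipe."""
    recipe = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in block_store:       # new data: store it
                block_store[digest] = block
            recipe.append(digest)               # known data: just reference it
    return recipe

def restore_file(recipe: list[str], path: str) -> None:
    """Rebuild a file from its list of block hashes."""
    with open(path, "wb") as f:
        for digest in recipe:
            f.write(block_store[digest])
```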

Why you need to monitor the health of an application
When it comes to ensuring the high availability of business-critical applications, today’s IT organizations have little margin for error. Recent research illuminates the extremely tight parameters that businesses are working with. According to a report by the Enterprise Strategy Group, respondents said their organizations would suffer “significant revenue loss or other adverse business impact” if their business-critical applications were unavailable for anywhere from no time at all up to one hour. ²

Of course, ensuring the availability of business-critical applications means more than just ensuring that the virtual machine is running. Just because the virtual machine is available doesn’t mean the application is running properly. While VMware HA provides a robust mechanism to detect failures of infrastructure components, there’s still the question of monitoring the health of an application running within a virtual machine.
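
To make that distinction concrete, here is a minimal sketch of an application-level health check running inside a guest: the virtual machine can be up while the check still fails because the application itself does not respond. The health endpoint URL, polling interval, and restart command are hypothetical assumptions; this is not ApplicationHA's implementation.

```python
# Sketch of application-level health monitoring inside a VM: the guest OS
# may be up, but the check only passes if the application itself responds.
# The endpoint URL and restart command are hypothetical assumptions.
import subprocess
import time
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # assumed app health endpoint
CHECK_INTERVAL_SECONDS = 30

def app_is_healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
            return response.status == 200
    except (urllib.error.URLError, OSError):
        return False

while True:
    if not app_is_healthy():
        # The VM is reachable, but the application is not: take action.
        print("Application check failed; attempting restart")
        subprocess.run(["systemctl", "restart", "myapp.service"])  # assumed service name
    time.sleep(CHECK_INTERVAL_SECONDS)
```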

Symantec has extensive experience monitoring an application’s state and reacting accordingly in the event of an application failure. ApplicationHA, Symantec’s high availability solution for VMware virtual environments, provides application visibility and control while monitoring the health of an application running within a virtual machine.

The latest release of ApplicationHA enables administrators to monitor the health of hundreds of applications at once across their VMware environment via a dashboard.

At the same time, ApplicationHA’s deep integration with VMware vCenter Site Recovery Manager helps organizations address the challenges of traditional disaster recovery so that they can meet their Recovery Time Objectives, Recovery Point Objectives, and compliance requirements. With ApplicationHA and Site Recovery Manager, organizations can quickly manage failover from their production datacenters to disaster recovery sites and ensure their applications are running in the event of a disaster.

Conclusion
As more and more IT organizations adopt new technologies such as virtualization and cloud computing to reduce costs and enhance disaster recovery efforts, they’re adding more complexity to their environments and leaving business-critical applications unprotected. These organizations should strongly consider adopting tools that provide a holistic solution across all environments. Data center managers could then focus on fundamental best practices to help reduce downtime.

To learn more, view the Symantec webcast, “Virtualize Business Critical Applications with Confidence Using ApplicationHA.”


Used with permission from Symantec

¹ “Amazon gets ‘black eye’ from cloud outage,” Computerworld, April 21, 2011
² “2010 Data Protection Trends,” Enterprise Strategy Group, April 2010


Five Challenges You Need to Address About “Going Virtual”


Has any topic attracted more attention than virtualization in IT departments these days? It’s not likely. Research bellwether Gartner Inc. has dubbed virtualization “the highest-impact issue changing infrastructure and operations through 2012.” ¹

 

That being the case, it’s not surprising that virtualization introduces new challenges as IT environments become an increasing mix of physical and virtual systems. This Tech Brief looks at five key challenges that you need to be aware of as you deploy – or continue to deploy – virtualization.

Challenge #1: Can you manage both physical and virtual platforms efficiently?
Management software and processes that work in the physical server environment don’t always work in a virtual environment. That can lead to a number of complexities and inefficiencies, such as higher costs for training, software, and operations. According to Symantec’s most recent IT Disaster Recovery Research Report, 35% of respondents cited “too many different tools” as the biggest challenge in protecting mission-critical data and applications in physical and virtual environments. Managing both environments on one platform with a single set of tools reduces “sprawl.”

Challenge #2: Are you maximizing your storage investments as you deploy storage for virtual environments?
Gartner is on the record as stating that “server virtualization solutions and projects often reduced storage visibility, management, utilization and subsequently increased storage costs.” ² The fact is, storage management in virtual environments is more challenging. Make sure your storage management strategy spans both environments and can provide end-to-end visibility, monitoring, analysis, and active testing.

Challenge #3: How confident are you about the processes for backing up and recovering your virtual machines and their data?
As effective as virtualization is for maximizing server utilization, it can create problems for data protection. The proliferation of servers can make backup configuration more time-consuming, increase storage requirements, and complicate backups and restores. Support for advanced technologies such as off-host backup or block-level incremental backup becomes critical to overcoming the performance and bandwidth constraints associated with virtual environments. Data deduplication can reduce the storage required for backups and disaster recovery.
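
As a rough sketch of what block-level incremental backup means, the snippet below hashes a file in fixed-size blocks, compares the hashes with a manifest saved from the previous run, and reports only the blocks that changed. The block size and manifest format are simplified assumptions, not a vendor implementation.

```python
# Sketch of block-level incremental backup: compare each block's hash with
# the previous backup's manifest and identify only the blocks that changed.
# Block size, manifest format, and paths are illustrative assumptions.
import hashlib
import json
import os

BLOCK_SIZE = 4096

def block_hashes(path: str) -> list[str]:
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def incremental_backup(path: str, manifest_path: str) -> list[int]:
    """Return the indexes of blocks that changed since the last backup."""
    current = block_hashes(path)
    previous = []
    if os.path.exists(manifest_path):
        with open(manifest_path) as f:
            previous = json.load(f)
    changed = [i for i, h in enumerate(current)
               if i >= len(previous) or previous[i] != h]
    with open(manifest_path, "w") as f:
        json.dump(current, f)                 # record state for the next run
    return changed
```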

Challenge #4: When applications running in your virtual machines fail, are you alerted?
You don’t hear much about it, but virtualization can decrease application visibility and recoverability. That’s because native server virtualization HA tools usually lack the ability to monitor the health of the applications running inside virtual machines. So if there’s an application failure, no action is taken to remediate the problem. Also, native disaster recovery features don’t completely automate recovery at the DR site, and they don’t make failback to the production site easy. This can mean longer downtime, which is unacceptable if these happen to be mission-critical apps. Make sure your HA/DR solution can detect and automate the failover of applications on both virtual and physical servers.

Challenge #5: Do you know what needs to be considered when moving from physical to virtual endpoints?
Multiple configurations and computing models are the norm in today’s enterprise. Desktops and laptops, rich clients and thin clients, physical desktops and virtual desktops, shared systems and dedicated systems all have their place. To be successful, endpoint virtualization, the next big wave after server virtualization, must be approached as part of an overall strategy to decrease PC total cost of ownership and increase end-user productivity.

Conclusion
Virtualization can provide a host of benefits – if you do the proper up-front planning. And that means making sure your virtualization strategy doesn’t stint on storage management, data protection, High Availability/Disaster Recovery, and endpoint virtualization.

¹ “How to Reduce Your PC TCO 30% in 2011,” Troni, Gammage, Silver, March 20, 2009
² “Gartner Magic Quadrant for Storage Resource Management and SAN Management Software,” Filks, Passmore, June 22, 2009

 

——–
Used with permission from Symantec


5 Steps to Create and Execute a Technology Plan


As an owner or business executive, have you ever contemplated your business objectives and realized that your technology is standing in the way of your plans?

 

Have you had a great idea about improving business operations or productivity and found out that you just don’t have the right computer systems, or that it will cost a ton of money to upgrade? Would you like to know what the new trends are and how they could benefit you?

Technology planning helps answer the above questions and many more that you may not think to ask. The primary goal of a technology plan is to support your business plan objectives and to keep productivity and compliance issues front and center. Here are the five steps in creating and executing a Technology Plan.

HAVE A VISION

This is where to focus your mindset: on the vision for your company as a whole.

First, picture the business as you want it to be, paying no mind to technology – just the business. Out of this exercise will come ideas of how your business should perform as a whole.

Do you see what you want your staff to be able to do, which roadblocks or bottlenecks you want eliminated, and which job functions you would like to see automated?

Thinking about it this way takes the actual technology (hardware, software, etc.) out of the picture and gives you an infinite number of possibilities to get where you want to go. This will give you a better idea of what you need to do.

EVALUATE WHAT YOU HAVE

Now get together with your IT provider or your CTO (Chief Technology Officer) and draw out exactly what systems you have. This should be a visual representation of your systems and processes. The document should provide comprehensive details regarding the systems, versions, Internet Service Providers, security, and risk related to hardware and software age, as well as compliance with regulations.
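
One lightweight way to capture that system inventory alongside the visual diagram is a simple machine-readable record per system, sketched below. The fields and the sample entry are illustrative assumptions rather than a prescribed template.

```python
# Sketch of a machine-readable systems inventory for the "evaluate what you
# have" step. Fields and the sample entry are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    name: str
    role: str                      # e.g. "file server", "firewall"
    os_version: str
    hardware_age_years: float
    internet_provider: str = ""
    compliance_notes: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)

inventory = [
    SystemRecord(
        name="FS01",                         # hypothetical server name
        role="file server",
        os_version="Windows Server 2019",
        hardware_age_years=4.5,
        internet_provider="ExampleISP",
        compliance_notes=["stores client financial records"],
        risks=["hardware out of warranty", "no offsite backup"],
    ),
]

for system in inventory:
    print(f"{system.name}: {system.role}, risks: {', '.join(system.risks) or 'none'}")
```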

MAP YOUR ROUTE

There may be many ways to get to your planned destination. Technology changes rapidly, but that gives you options you may not have had even just last year. Considerations for which path you should take need to include your budget, your risk tolerance, legal compliance, anticipated company growth or contraction, and necessity for change.

This is where your CTO or IT provider will present you with all of the technology options for how your business can get from your current position to your vision.

BUILD THE PLAN

Building the plan is a matter of understanding the goal and attaching budgets and dates to each step. A solid technical architect is required to design and compile the appropriate solutions that meet your expectations. The design process should weigh all of the available technical options, including hybrid approaches.

START THE JOURNEY

The actual execution of a technology plan can be challenging to say the least, but it should not be so frustrating that you wish you had not started the process. A good technical team will be able to make the transition relatively painless.

 

—–
Post written by Courtney Kaufman, Marketing Manager of Accent Computer Solutions, Inc.
