Choosing the Right DR Strategy – Highlights from #VMWorld 2018
When it comes to doing disaster recovery (DR) for your environment, there is no easy path forward.
As a leading provider of complex managed data center solutions for over 20 years, Navisite has learned from extensive experience which DR solutions are the most appropriate tools for each business's unique infrastructure and application requirements.
Here are some things that we have learned over the years about doing DR:
- Understand your application requirements – When it comes to DR, there is generally a direct correlation between the performance of a particular tool or technology and its price tag. Don't pay for a premium tool on 50 servers if only 5 of them need to be back up in less than 10 minutes.
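To make the tiering point concrete, here is a minimal sketch of that cost math in Python. The per-server rates and server names are purely hypothetical, chosen only to illustrate how matching each workload's recovery time objective (RTO) to the right tool changes the bill:

```python
# Hypothetical per-server monthly DR pricing -- illustrative numbers only.
PREMIUM_RATE = 250   # e.g. continuous replication, sub-10-minute RTO
STANDARD_RATE = 40   # e.g. nightly backup restore, multi-hour RTO

def tiered_cost(servers):
    """Sum monthly DR cost, matching each server's tool to its RTO need.

    `servers` is a list of (name, rto_minutes) tuples; servers that must
    be up within 10 minutes get the premium tool, the rest get standard.
    """
    return sum(
        PREMIUM_RATE if rto_minutes <= 10 else STANDARD_RATE
        for _, rto_minutes in servers
    )

# 5 critical servers plus 45 that can tolerate a 4-hour recovery.
fleet = [(f"app-{i}", 10) for i in range(5)] + \
        [(f"util-{i}", 240) for i in range(45)]

print(tiered_cost(fleet))         # tiered: 5*250 + 45*40 = 3050
print(PREMIUM_RATE * len(fleet))  # premium everywhere: 50*250 = 12500
```

Even with made-up rates, the shape of the result is the point: protecting everything at the premium tier costs several times more than protecting only the servers whose RTO actually demands it.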
- The Cloud is your friend – The scalability, flexibility, and consumption-based billing of the Cloud make it an ideal DR target. You pay only for the resources you need, and you don't have to maintain a secondary data center that you hope you never have to use.
- Disasters come in all shapes and sizes – We tend to plan for a catastrophe that renders an entire data center unusable, but that is rarely what actually happens. Smaller scale disasters like a single server failure or the accidental (or deliberate) actions of a user may affect only a handful of servers out of many. Be sure that your DR plan allows for recovery of a subset of workloads.
- Make testing easy – Hopefully you will never have to execute your DR plan in response to an actual event, but you will likely need to test it several times a year. Cloud-based DR tools allow for test failovers that don't impact production, but it's good to include an alternate form of connectivity (such as a terminal server) so users can validate applications inside the recovery sandbox. Also be careful to ensure that your failover Active Directory server doesn't update the production environment during a test.
- Replicate as close to the application as you can – While there are plenty of tools that will replicate at the OS, hypervisor, or hardware level, those tools generally lack the insight to know which data matters most – a temp file gets replicated at the same priority as a critical database record. If an application has replication built in, it will ensure that bandwidth is spent on the important bits, not the chaff.
We hope these guidelines will be a useful resource for your organization as you create your DR strategy. There's still no easy answer – but with enough thought and testing, you'll do a great job!