How to Build a Business Continuity Plan in The Cloud: The Ultimate Guide to Disaster Recovery
Welcome to your ultimate business continuity guide for the cloud — allowing you to make the right choices from the start.
We will cover the success factors that separate a quality disaster recovery (DR) plan from a disaster waiting to happen. There is a solution for every budget, level of expertise and data system.
Get ready to learn how to use the cloud for your business continuity plan and continue operations in the face of disaster.
Let’s get started!
You need a business continuity plan for your business. 90% of businesses with no disaster recovery capabilities close after a major failure. 60% of small businesses that lose data will close within 6 months. What, however, is the best way to secure your information? How do you make sure that you never lose access to your business-critical applications?
Many businesses are turning to the cloud in order to meet modern demands for agility and scalability in disaster recovery solutions. This comes with some definite advantages. The cloud can flexibly accommodate BYOD, remote employees and distributed data sets. Public clouds can be scaled up and down to meet dynamic user demands. There are a growing number of firms that deliver cloud-based disaster recovery-as-a-service (DRaaS). Theoretically, this removes the need for businesses to actively concern themselves with DR planning at all.
But if you are going to make the cloud work for you, it’s going to take a little more planning than simply signing up with the first DRaaS provider you lay your eyes on. Essential to a good business continuity plan and DR outcome is avoiding downtime, even in the event of failure, and minimising data loss while keeping costs down. The basics of DR and business continuity are not changed by the cloud. The cloud just delivers another DR tool.
In loose conversation, terms like disaster recovery, backup and business continuity get used interchangeably. To make sure that you invest in the right outcomes we need to go back to basics and define our terms.
Disaster recovery vs business continuity vs backup
Disaster recovery, business continuity and backup are nested concepts, with each a subset of the next. Often, disaster recovery is assumed to be the most comprehensive. This, however, is a mistake. The umbrella term here is business continuity (BC).
The most basic term is ‘backup’. This simply references the act of making duplicate copies of material. Disaster recovery is the next level up. It references a set of protocols for how and when backup will occur, along with how the restoration of those files will occur in the event of failure.
Business continuity is the all-encompassing plan. It includes disaster recovery planning and matches that with processes and procedures that dictate how your business will ensure continued operations in the face of disaster. Ultimately, a simple DR plan won’t cut it. You need to take into account a far wider range of processes. The cloud doesn’t change this. In fact, the fundamental trap you can enter into when planning for disaster recovery is to look at any technology as a complete solution. The cloud is not (itself) a strategy. It’s one tool able to help deliver an outcome that you need.
Key components of a disaster recovery plan
Around 30% of companies with a disaster recovery plan that is tested by failure still suffer data loss. 35% of companies that experience a failure temporarily lose at least one business-critical application. Of those that lost data in 2016, 12% could not recover that information. You might have a system in place that will allow you to get all (or part) of your data back. However, if there is too much downtime before that happens, it can still be the end of your business. Understanding the key components of success is critical to having a plan that will work.
What it takes to succeed: disaster recovery metrics
An effective business continuity solution starts with a proper understanding of disaster recovery metrics. That means understanding not only how you will restore your system, but also how you will continue operations during the failure itself.
The four things every comprehensive disaster recovery plan has to take into account are failover, failback, RTO and RPO.
- Failover is the process of transferring all applications and processes to a redundant IT system, generally in a secondary location. This is undertaken to continue operations while repairs are underway on your main system.
- Failback is the process of returning applications and processes to your original and restored system. This synchronisation is an important step to make sure that all data generated during failover is retained in the return to normal operations.
- RTO (Recovery Time Objective) defines the maximum amount of downtime your business can tolerate in the event of a failure.
- RPO (Recovery Point Objective) defines the maximum amount of data loss your business can tolerate, expressed as a window of time. It essentially sets the update period for your backup system.
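To make these metrics concrete, here is a minimal sketch of how a DR design can be checked against RTO/RPO targets. The function name and the example figures are hypothetical; in practice the restore time should come from a measured failover drill, not an estimate.

```python
from datetime import timedelta

def meets_objectives(backup_interval: timedelta, restore_time: timedelta,
                     rpo: timedelta, rto: timedelta) -> dict:
    """Check a DR design against RPO/RTO targets (illustrative sketch).

    backup_interval: how often backups are taken (worst-case data loss window).
    restore_time: measured time to fail over and restore service.
    """
    return {
        "rpo_met": backup_interval <= rpo,  # worst-case data loss within tolerance
        "rto_met": restore_time <= rto,     # downtime within tolerance
    }

# Example: hourly backups and a 4-hour restore drill,
# checked against a 2-hour RPO and an 8-hour RTO.
result = meets_objectives(timedelta(hours=1), timedelta(hours=4),
                          rpo=timedelta(hours=2), rto=timedelta(hours=8))
```

If either check fails, the fix is structural: back up more often to tighten RPO, or invest in faster failover infrastructure to tighten RTO.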
Disaster recovery methods
There are two main methods for achieving a disaster recovery solution: data centre disaster recovery and cloud-based disaster recovery. Cloud-based solutions are then further subdivided between public, private and hybrid cloud models. Each of these solutions has distinct benefits and costs that we will explore further. Each can be configured to meet different RTO and RPO criteria, and failover/failback procedures. To achieve failover, some form of virtualisation is generally used to duplicate the capabilities of your native system on your DR failover infrastructure.
Turning a disaster recovery plan into a business continuity solution
Business continuity goes beyond technology, creating a set of processes that will be triggered in the event of a failure, allowing your staff and business to continue operations and serve customers as if nothing had occurred.
Business continuity is about planning
Your business continuity plan builds on the capability of your disaster recovery plan. Getting DR metrics right is critical to your success. However, once you have identified those criteria and built a solution that fits, you need to make sure that your business is able to use and maintain your DR system.
Business continuity is about seamless operations
The heart of business continuity is making sure that staff know how to carry on their jobs using a failover system. The next step is to ensure that people know what needs to be done to achieve failback. Getting this right is particularly important when switching business-critical applications from traditional servers to cloud infrastructure in the event of failure.
Ultimately, turning your disaster recovery plan into a business continuity solution requires thinking about what non-technical and process-orientated steps are required to maintain your DR solution and effectively carry on operations in the event of a failure.
Disaster Recovery-as-a-Service (DRaaS)
‘As-a-service’ IT models continue to grow in popularity. They allow for far fewer maintenance and operating responsibilities, and you only pay for what you use. Most importantly, they allow firms with limited in-house IT capabilities to access leading-edge technology without the need to hire in-house experts. Outcomes are streamlined and actions simply delegated to specialists, allowing you to focus on delivering your core value to customers.
DRaaS (disaster recovery-as-a-service) brings this pay-as-you-go, managed IT approach to disaster recovery planning. More often than not, DRaaS solutions are cloud-based. All public cloud DR solutions are DRaaS. However, private clouds can be managed in-house on a CAPEX (capital expense) model, and data centre DR solutions are occasionally run by managed IT service partners.
The popularity of DRaaS is obvious — it promises simple solutions guaranteed by experts. The public cloud is cheap, and when managed by a DRaaS provider, businesses are freed to focus on daily business while remaining confident in positive DR outcomes.
Business Continuity-as-a-Service (BCaaS)
There are a growing number of DRaaS providers using the term ‘business continuity-as-a-service’ to describe what they do. We support this focus on the practical delivery of real business continuity outcomes. However, the term can be a little misleading. Ultimately, business continuity remains in your hands. No service provider can truly guarantee that you achieve continuous delivery of services to your customers in the event of a failure — your staff need to be prepared.
Generally, what is meant by business continuity as a service is a DRaaS package that is accompanied by consulting on how to develop an internal business continuity solution using their DR plan. Do not, however, treat the title ‘business continuity-as-a-service’ as a replacement for continued business continuity planning, or investigating the specific DR metrics that will allow you to get a DR solution fit for business continuity purposes.
The first thing to understand when looking at the cloud for disaster recovery and business continuity planning is that the cloud isn’t just one thing. Different cloud access plans can dramatically change your relationship to the cloud. There is a large split between public and private clouds, with hybrid clouds sitting somewhere in the middle. Looking at all cloud solutions as fundamentally the same will get you into trouble from both an outcome and cost perspective.
The public cloud works like hosted services on the internet, and in most cases that is literally what it is. A third-party hosts both the software and platform on which your data is stored and accessed. Those resources are shared among multiple businesses and virtually partitioned. You rent the space you need when you need it.
This is great for variable storage. You are able to scale up and down on demand. However, that flexibility is enabled by a dependency on common infrastructure. The consequence is that providers are constrained in their ability to guarantee access speeds, reliability and security of service. This can be a serious problem if it is the sole basis of your disaster recovery plan. There are serious questions about how suitable the public cloud is for backup and disaster recovery — particularly as a sole solution.
- Low upfront costs
- Limited and variable access speeds
- Security concerns
- Common infrastructure
Private clouds are the source of most claims concerning robust security and reliability in cloud solutions. Private clouds operate on dedicated computing resources. These can be located physically at your organisation's data centre, or hosted by a third party. The use of dedicated hardware improves security and enables deliverable guarantees around access speeds. The cost is upfront infrastructure investments. This might be done on a CAPEX (capital expense) model, in which you purchase the hardware outright. Equally, you could rent private cloud infrastructure on an OPEX (operating expense) model.
Fundamentally, private clouds are purchased much more like traditional IT infrastructure than public clouds. You will have to make pre-planned decisions on your storage needs, and invest accordingly. The whole system will cost more, and require more planning, but it will deliver better results. You need to think about the importance of easy scalability vs security and reliability. A private cloud can be facilitated by a DRaaS provider, or built and maintained in-house.
- Upfront costs
- More expensive to operate
- No on-demand scaling
Hybrid clouds take the benefits of both private and public cloud offerings and deliver them simultaneously. This term can also be used to describe systems in which cloud resources are used in tandem with traditional hardware. Most commonly, it is a system that leverages public cloud access where scale and low costs matter, while delivering private cloud access when security and speed of access are most important.
For example, a hybrid cloud solution could allow users to silo daily data use and disaster recovery planning between different environments that are best suited to particular needs. You can backup your business-critical applications within a small private cloud. Less sensitive data that can remain inaccessible until failback can be relegated to the public cloud. Keeping some public cloud access also allows you to scale easily or keep temporary data and applications in an environment that can be scaled back when it is no longer needed.
Delivered as a package
Generally, a hybrid cloud is delivered as a whole package by a DRaaS solution provider. The private cloud half could be operated and maintained internally. However, this creates issues with continuity, control and seamless operations.
- Able to scale
- Secure where needed
- The best of all worlds
- Limited upfront costs
- More complex to build
- Requires syncing multiple systems
Cloud service models
Within the categories of public, private and hybrid clouds, there are also choices between SaaS (software-as-a-service), PaaS (platform-as-a-service) and IaaS (infrastructure-as-a-service) models. SaaS provides out-of-the-box results — you get a hosted software solution that is supported by an operating system and infrastructure. Any disaster recovery-as-a-service (DRaaS) offering would be SaaS.
PaaS provides the hosted operating system or development tools, supported by infrastructure, but no end-user software. IaaS cuts the service down to simply the servers and storage. PaaS and IaaS models are two choices you can use to operate a private cloud if you want control over your top layer and do not want to build and maintain the required hardware.
To understand the distinct capabilities of cloud-based disaster recovery, you need to understand the legacy approach — data centre disaster recovery. Building a disaster recovery solution within a data centre is expensive, but it is reliable, fast and secure. The truth is that if you need to maximise access guarantees and have a large budget, a data centre solution might be right for you.
What defines data centre disaster recovery?
Data centre disaster recovery entails building redundant hardware systems within a data centre. Your files are backed up using a DR software interface and compression/automation technologies, just like cloud-based systems. Data centre solutions can be hosted and managed by IT service professionals or built in-house. Data centre disaster recovery is generally done within the data centre used for your business’ other IT components. This is not always true, but remote data centre disaster recovery presents connectivity issues.
What really sets data centre disaster recovery apart is direct access. Rather than routing access through a WAN (wide area network), everything is directly connected via a LAN (local area network). This uses physical infrastructure, improving security and speed. LAN connectivity is what gives data centre disaster recovery its robust access capabilities. It is also what limits its ability to deliver connectivity across large distances.
Accelerate access speeds
Direct access and physical storage make backup and restore faster. Local backup can mean restore times upwards of 300x faster (and backup procedures 1000x as fast) when compared to remote storage. Local networks reduce exposure to security risks, and direct control over hardware eases reliability concerns.
Expensive and vulnerable hardware
More physical servers, however, are costly. Local disaster recovery is also vulnerable to physical damage. If you lose data because of a flood, earthquake or other physical events, having your DR servers in the same data centre as your main system will be of little help. For these reasons, most modern business continuity and disaster recovery plans have at least some cloud element augmenting their data centre capabilities. Many businesses are abandoning the data centre altogether and turning to the cloud as a sole solution.
- The fastest solution
- Increased security capabilities
- Guaranteed access speeds
- Large upfront costs
- Expensive and challenging to maintain
- Vulnerable to physical damage
There is no denying the utility of the cloud as a DR solution. Data centre disaster recovery solutions may be faster and more secure, but the affordability and flexibility of cloud solutions (along with their protection against physical damage) make them a vital part of any business continuity solution. You need a disaster recovery plan able to withstand all contingencies. For many businesses, the cloud is able to deliver for all of their disaster recovery needs.
Challenges of using the cloud for disaster recovery
If you want to use the cloud for DR/BC planning, there are some problems that you need to confront. Without proper planning, your disaster recovery solution can be crippled by access speeds, bottlenecks, hidden costs, security weaknesses or reliability concerns.
You need to think about the different types of cloud models (public, private and hybrid), and make sure that your DRaaS provider is delivering a quality service guaranteed through an SLA (service level agreement). You need to investigate different commercial models, make sure costs will not spiral in the event of a failure and invest in the internal training required to use your plan when it is needed.
Cloud access speeds: leaving you unable to access your data and applications
The fundamental issue with the ‘cloud’ is the tethering of its speed to the network connection through which it is being accessed. This is important to understand when thinking about the cloud at all. The reason this can become a disaster for disaster recovery planning is the giant spike in usage you will encounter when attempting to initiate your plan. For example, you may have no issues with access speeds when it comes to backing up files or using the cloud on a day-to-day basis. However, attempting to restore just 1TB of data from a cloud server over a 20Mbps connection will take more than four and a half days at the theoretical line rate, and longer once real-world overheads are factored in!
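The arithmetic behind that figure is worth having to hand when sizing a DR link. This is a rough back-of-the-envelope sketch; the `efficiency` parameter is an assumption standing in for protocol overhead and contention, which vary by network.

```python
def restore_time_days(data_tb: float, link_mbps: float,
                      efficiency: float = 1.0) -> float:
    """Rough time to pull data over a network link.

    data_tb: data to restore, in terabytes (decimal, 10**12 bytes).
    link_mbps: link speed in megabits per second.
    efficiency: fraction of nominal bandwidth actually achieved.
    """
    bits = data_tb * 10**12 * 8              # convert TB to bits
    seconds = bits / (link_mbps * 10**6 * efficiency)
    return seconds / 86400                   # seconds per day

# 1 TB over a 20 Mbps link at full line rate: about 4.6 days.
full_rate = restore_time_days(1, 20)
# At a more realistic 70% efficiency the same restore stretches past 6.5 days.
realistic = restore_time_days(1, 20, efficiency=0.7)
```

Running the numbers like this for your own data volumes and link speeds, before a failure, tells you whether a bulk restore over the wire is viable at all or whether you need failover capacity while the restore runs.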
Failover: a solution and a problem
The way to avoid delays is to restart business-critical applications outside your primary network while your system is being restored — failover. This can be done in the cloud, but it creates a whole host of problems. Applications that restart in the cloud will route all traffic through your wide area network (WAN) or virtual private network (VPN). Most business applications were not written with non-local user traffic in mind and have no standard measures in place to reduce the amount of traffic they generate. Making this work smoothly will require some preparation and investment in updates to make your most important and data-heavy business applications as cloud compliant as possible.
The longer your failover period persists, the more new data will be generated in the cloud — further increasing the amount of time it will take to restore the data in the cloud to your primary system. It is crucial to make sure that cloud provisions and DRaaS operating agreements provide access speeds high enough to effectively run your business through their network when needed. Simply focusing on day-to-day upload speeds is insufficient.
A lack of public cloud guarantees
A guaranteed RTO and failover outcome is possible with a private cloud. The problem with public clouds is that their providers are ultimately incapable of making genuine guarantees about access speed, because they are offering you a segment of a shared resource. Your public cloud access will be contingent on wider internet usage and other demands on that specific provider's resources, which vary at any given point in time. Prioritisation of particular traffic sets can only go so far. You will likely get SLA commitments from public cloud DR providers, but they are not something that can be meaningfully guaranteed.
An unfortunate reality of web infrastructure is that effective WAN speeds have remained relatively flat, because growth in traffic has kept pace with advances in capacity. WANs were congested long before the development of cloud applications. It is simply important to keep that in mind when placing critical business applications in the hands of public network speeds. At the very least, things might slow down when operating in a public cloud failover scenario.
Cloud commercial models: hidden costs
Just as there are many clouds, there are many ways to pay for the cloud. Hosted services all have some sort of ongoing fee. This will add up over time and should be compared to the in-house maintenance and management cost savings you achieve.
On a basic level, there are subscription-based models, pay-per-user models and consumption-based pricing. Subscription models are the most straightforward — you pay a set amount for a set amount of time. For disaster recovery, this might turn out to be unnecessarily expensive. However, for user and consumption models, you could end up with an unexpectedly large fee.
Variable pricing for different types of usage
User and consumption based cloud-DR models often charge per GB for communication between servers, and levy a second fee per GB when data is sent over the internet. For example, AWS will charge you for the use of a public IP address and then levy an additional fee for every IP address involved in a data transfer, which creates a problem particularly for public-facing websites that encourage downloads. These charges do not always apply but can add up dramatically over time.
Different costs for uploads and failback
In the event of a critical failure, things can get even more expensive. Data transfers out to the internet can cost five times that of the ingest charge — sometimes more. Even if there isn’t a large spike in costs, a disaster will cause a spike in the amount of data you are transferring. It is important to understand what it will cost to get your system up and running again.
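A quick estimate makes the failback bill less surprising. The sketch below is purely illustrative: the per-GB rate and the five-times egress multiplier are assumptions taken from the rough ratio described above, not any provider's actual price list.

```python
def failback_egress_cost(data_gb: float, ingest_per_gb: float,
                         egress_multiplier: float = 5.0) -> float:
    """Estimate the bill for pulling data back out of a cloud after a failure.

    Illustrative only: real per-GB rates vary by provider, region and tier.
    egress_multiplier reflects the rough 'egress costs ~5x ingest' ratio.
    """
    egress_per_gb = ingest_per_gb * egress_multiplier
    return data_gb * egress_per_gb

# Restoring 2,000 GB at a hypothetical $0.01/GB ingest rate:
# the same data that cost $20 to upload costs $100 to bring back.
cost = failback_egress_cost(2000, 0.01)
```

Even at modest rates, a full-system failback moves far more data in one burst than months of incremental backups, so this line item deserves its own row in the DR budget.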
There are disaster recovery options that don't require full data replication prior to failback. This has become widely available as the market has matured, but is not a universal option and creates challenges of its own. You have to make sure that the right data is returned to your primary servers and future backups are correctly synchronised.
Licensing costs that are separate from your general public cloud subscription
Many people assume that when purchasing a public cloud service that it will come with inbuilt disaster recovery capabilities. There are generally default backup features, but this isn’t a comprehensive DR plan, and that isn’t always made clear in marketing material. Office 365, for example, provides an uptime guarantee. That is achieved by replicating your data to ensure that it is available when needed. This will secure you against power outages, along with hardware and software failures. However, accidental deletions or malicious cyber attacks will not be solved by this solution.
‘Back-up’ is simply data stored in more than one place. Disaster recovery necessitates at least some diversity of restore points to enable roll-back in the event of corruption. If files become corrupted on Office 365, that error will simply be duplicated into all of your ‘backed-up’ versions, rendering them useless.
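The difference between replication and genuine restore points can be shown with a toy model. This is a deliberately simplified sketch, not how any real platform stores data: a mirror faithfully copies the latest state, corruption included, while versioned snapshots preserve history you can roll back to.

```python
# Toy model: replication vs point-in-time restore points.
history = []   # versioned snapshots: every written state is retained
mirror = None  # simple replication target: tracks only the latest state

def write(state: str) -> None:
    """Record a new version of the data in both protection schemes."""
    global mirror
    history.append(state)  # snapshot keeps this version forever
    mirror = state         # mirror overwrites whatever was there before

write("v1-clean")
write("v2-clean")
write("v3-CORRUPTED")  # e.g. ransomware or an accidental overwrite

# The mirror is useless: it faithfully replicated the corruption.
# The snapshot history still holds a good restore point to roll back to.
last_good = next(s for s in reversed(history) if "CORRUPTED" not in s)
```

This is why uptime-focused replication (as in the Office 365 example above) is not the same thing as disaster recovery: only retained, versioned restore points let you travel back past the moment of corruption.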
If you are relying on the cloud for disaster recovery, you need to investigate the specifics of what you are buying. This often means buying dedicated disaster recovery services and software in addition to a wider cloud purchase.
In-house costs of cloud-based disaster recovery: more hidden costs
The public cloud, particularly, is advertised as a one-stop shop — you simply purchase a subscription. In reality, you need to think about how that service is going to integrate with your wider IT infrastructure. For disaster recovery, that means a detailed investigation of your network strength and how critical business applications will behave in a cloud environment.
Improved infrastructure requirements
The cloud requires a high-strength connection to the internet and/or your private cloud server. Relying on that connection for both failback and failover means placing even greater scrutiny on your connectivity and bandwidth capabilities. Infrastructure that is merely capable of meeting your average day-to-day demands will not cope with running your entire business through the cloud in the event of a total failover.
It is equally important to invest in network redundancy. Placing your access to business-critical applications on a single network connection creates a significant vulnerability to further failure, in addition to latency issues. Having a truly secure network connection requires essentially duplicating your access network. Fundamentally, relying on the public cloud for disaster recovery means spending more money on bandwidth and network maintenance in order to secure a reliable and sufficient connection to your data and critical applications in the event of a failure.
Cloud compliant applications
Even with redundant bandwidth, programs written for physical storage will struggle to operate efficiently and securely within the cloud. You may need to invest in updating business-critical applications in order to make them cloud compliant, even if you don’t intend on running them in the cloud regularly.
Clouds are distributed systems and are most efficient when applications break processing and storage into separate components. Applications run best in the cloud when data is decoupled from the application. If applications are too ‘chatter heavy’ (sending constant updates between different portions of the program and end-users), it will create latency issues over the cloud. How expensive a revamp will be to maximise the cloud compatibility of your business-critical applications will depend on the state of your existing infrastructure. Ultimately, clouds are most cost-effective when used for applications or systems that have usage peaks due to irregular use. Disaster recovery as a whole fits this description. The problem is whether or not your cloud connection will truly be able to handle the strain of your DR solution if activated.
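The cost of a ‘chatter heavy’ application can be put into rough numbers. The sketch below is illustrative: the round-trip counts and latency figures are assumptions chosen to show the shape of the problem, since each small request pays the full network round-trip time.

```python
def action_latency_ms(round_trips: int, lan_rtt_ms: float,
                      wan_rtt_ms: float) -> tuple:
    """Compare a chatty app's latency on a LAN vs over a WAN/cloud link.

    A 'chatty' app makes many small round trips per user action, and each
    one pays the full round-trip time (RTT). Figures are illustrative.
    """
    return round_trips * lan_rtt_ms, round_trips * wan_rtt_ms

# Hypothetical app making 50 round trips per screen load:
# 0.5 ms RTT on a local network vs 40 ms RTT to a cloud region.
lan, wan = action_latency_ms(50, 0.5, 40)
# A screen that felt instant on the LAN (25 ms) now takes 2 full seconds.
```

The fix is architectural rather than bandwidth-related: batching requests and decoupling data from the application cuts the round-trip count, which matters far more over a WAN than raw throughput does.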
Inherent security risks of cloud computing
Clouds operate over wide area networks (WANs). Public clouds operate over the public WAN: the internet. Security can be improved by using a virtual private network (VPN) that enables connectivity through an end-to-end encrypted system. For hosted services, however, unless your provider stores data in an encrypted format, this does not provide long-term protection. To achieve the conveniences of disaster recovery-as-a-service (DRaaS), you have to entrust your business’ data to a third party. The big providers such as AWS, Azure and Google may have already won that trust. However, the distributed nature of the public cloud means that it is hard to know where your data actually is and who has access to it.
Industry-wide security deficiencies
Fundamentally, the nature of the cloud makes it less secure than traditional options, and more preparation needs to go into the details of delivering security on a software, platform and infrastructure level. The problem is that the industry still has a way to go in terms of best-practice investments.
Recent assessments indicate that only 2% of enterprise cloud applications are currently GDPR compliant. In leveraging third parties for off-site data storage there is an inherent risk of a breach through lax processes being undertaken by that contracted party.
There is a fundamental lack of control and encryption across cloud services. Only 1.2% of cloud providers give users encryption keys, only 2.9% have password policies compliant with GDPR and a barely better 7.2% have proper SAML (Security Assertion Markup Language) integration. If a cloud service includes APIs, the security of that service then hinges on the security of the API (Application Programming Interface) — further escalating risk and decentralisation of security.
The Cloud Security Alliance (CSA) is currently demanding security-focused ‘code reviews’ to bring the entire industry up to a higher standard. According to a survey conducted by the CSA, 73% of questioned IT professionals view security concerns as a top challenge holding back cloud adoption. It is essential for the cloud community to up its security game and incumbent on organisations to do their due diligence when assessing the security protocols of the firm they are entrusting to protect their data. You don’t want to find yourself a victim of cybercrime or summoned to GDPR arbitration just as you recover from a comms room meltdown.
The specifics matter
For a firm with poor IT and security protocols, a good DRaaS model might be an improvement. If your company doesn’t handle particularly sensitive data, operating solely within the public cloud might be a viable solution.
Risks of vendor lock-in when using the cloud
A one time move between cloud providers is achievable with some planning. It is difficult, however, to make data-heavy business-critical applications cloud compliant for more than one cloud platform. Many of the costs of becoming ‘cloud ready’ will reoccur if you change plans. There is a danger of cloud ‘lock-in’.
The challenge is mapping the particular services offered by an individual cloud provider and determining how that will impact your operations in a new environment. This becomes even more complicated when you look at the services and ecosystem that sit around different public cloud providers. Complicated and bespoke applications often lose value when transferred. Seamlessly transferring applications between cloud services and traditional infrastructure on a regular basis is not easily achievable. ‘Container’ technologies may change this. However, containers are hard to retroactively implement within legacy applications.
Making the right cloud choices for your disaster recovery plan
Despite the many flaws, there is a reason the cloud is so popular in disaster recovery — businesses need point-in-time restore capabilities that can match the speed and evolution of digital and data-driven businesses. Organisations simply should not enter into the market thinking it’s an easy catch-all solution to every DR need. The challenge is preparation and choice.
Public vs private clouds
It is important to note that most of the access, security and reliability issues with the cloud apply solely to public cloud options. Private clouds still pose portability issues for legacy applications and bring higher costs, but deliver the kind of guarantees you need to feel confident about continued access to business-critical applications and data.
Hybrid clouds are popular because they pair the benefits of public and private clouds, occasionally also leveraging local data centre capabilities. By creating a distributed and diverse system, you are able to minimise costs while guaranteeing results that fit your business needs.
The challenge with creating DR infrastructure out of multiple components is ease of use. You need a software control system able to seamlessly integrate each segment of your DR system and automate backup, failover and failback to and from all of the right places. Picking the right DR software is important no matter what — with a hybrid model, getting that choice right is vital.
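The kind of orchestration a DR control system automates can be sketched in a few lines. This is a hedged illustration, not any vendor's actual product: the step names are hypothetical, and the key design point is that steps run in order and a failure halts the runbook, so a partial cutover is never reported as success.

```python
from typing import Callable

def run_failover(steps: list) -> list:
    """Run ordered failover steps, stopping at the first failure.

    steps: list of (name, action) pairs, where action() returns True on
    success. Raising on failure prevents a half-completed cutover from
    being mistaken for a successful one.
    """
    completed = []
    for name, action in steps:
        if not action():
            raise RuntimeError(f"failover halted at step: {name}")
        completed.append(name)
    return completed

# Hypothetical runbook; real steps would call provider and DNS APIs.
steps = [
    ("verify-replica-sync", lambda: True),
    ("redirect-dns", lambda: True),
    ("start-standby-apps", lambda: True),
]
completed = run_failover(steps)
```

With a hybrid model the same runbook idea spans multiple environments, which is exactly why the software layer that sequences and verifies these steps matters as much as the infrastructure underneath it.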
Interfacing with your disaster recovery hardware is done through DR software. It’s important that the software you use is reliable and effective — automating tasks but giving you the level of granular control that you need. This will give you peace of mind that your disaster recovery system is in place and your business can continue with minimal interruption.
Many vendors claim that their disaster recovery software is the best on the market. But the best solution for one business is not necessarily the best for every business. When choosing DR software, you need to take into account your specific needs. There is a wide expanse of providers on the market. A few offer both hardware and software, but most are designed to work with third-party infrastructure. Several of these software providers offer DRaaS packages, and other DRaaS providers (or private cloud vendors) will recommend that you use specific software solutions with their system. Take those suggestions seriously. Just like with building your hardware groundwork, consulting with IT specialists will ensure you make the right choices.
Here, we will discuss six of the leading DR software solutions to give you a feeling for the distinctions and similarities that exist between most DR software control systems on the market.
Arcserve UDP: simplicity and functionality
Arcserve is one of the most experienced and established data protection and disaster recovery businesses on the market. Its flagship is Arcserve UDP (Unified Data Protection), which delivers data backup and recovery software on a single platform. According to Arcserve, their solution can protect an entire IT ecosystem, no matter the IT platform used. UDP includes many features without becoming complex to operate. It can also be scaled up or down to fit a user’s needs, letting a business use more or less storage, make more frequent uploads to the cloud and protect its data whether staff are office-based or remote.
Veeam: granular control
Veeam’s goal is to make data “Hyper-Available” for both small businesses and large enterprises. Their scalable solutions blend together data backup and recovery, data protection and data security, with an aim to make data intelligent and self-governing. Their solutions can be tailored to a business’ needs, as they can deliver data loss avoidance, automated disaster recovery, scale-out backup repositories, instant file level recovery, reporting, monitoring, standalone consoles, and capacity planning and forecasting. These features can all be controlled on a granular level to provide the exact service and data you need without overburdening your system, making it a business favourite.
Commvault: streamlined automation
As a well-established backup and recovery vendor, Commvault boasts many impressive features which aim to remove workloads from their customers. Their solution is ideal for businesses in complex environments and businesses that don’t have the internal resources required to manage the IT side of their DR policy. Their solution covers all data protection needs, whilst optimising performance using artificial intelligence and machine learning algorithms. In addition to their complete package, which includes backup and recovery on all files and apps, file and VM archiving, endpoint data and mailbox protection and hardware snapshot management, Commvault also provides hardware to help create a complete data recovery solution.
Barracuda: physical and virtual recovery
Barracuda is a steadily growing backup and restore company. A relatively new firm, their data recovery products include backup and cloud-to-cloud backup which can be deployed simply and flexibly. Perfect for both small and large businesses, Barracuda’s products also offer unlimited cloud service and full cloud-based management. Barracuda’s products protect data across physical devices, virtual environments, Office 365 and SQL data, giving organisations the flexible options they need to ensure their disaster recovery policy suits them. Another of their top selling points is their 24-hour award-winning tech support.
Rubrik: the upstart
Founded in 2014, Rubrik has already become a global leader in the data recovery market, providing a range of data management applications to meet a business’ needs. Their primary offering is a cloud management platform that allows companies to manage and protect their data in various environments, including on-premises and in the cloud, all without requiring you to change your existing software stack. Rubrik’s solution provides features outside of just backup and recovery, such as search, analytics, compliance and archival support. This means companies can search through their data to probe it for valuable insights.
Cohesity: converged disaster recovery
Cohesity is a young enterprise data management and storage company aiming to make storage systems simple and easy to use. Cohesity wants its customers to have a single platform which integrates everything: backup, archiving, file storage, testing and analytics. They want to make the system less complex for IT and enterprise managers. They do this by providing an intuitive UI from which IT professionals can see all their data workloads, achieve global sharing of all data resources, perform analytics, connect to the cloud and file share within the application itself.
Dell EMC Data Domain: a giant in IT
Dell EMC Data Domain is Dell’s flagship data protection product. The platform allows data monitoring from a single console and provides a range of features, such as a top-level view for management to track data protection and more granular tools for analysts and data engineers. Dell offers a stack of both hardware and software products to help firms maintain access to their information. The company wants to make the system easy to use and understand for IT professionals with less experience, providing a range of simple walk-through video tutorials.
Amazon Web Services vs Microsoft Azure
There are many ways to access and use the public cloud, including some of the DR software vendors mentioned above. But there are two big players that everyone should investigate — Amazon Web Services (AWS) and Microsoft Azure. Both can be used in partnership with another software control system, or on their own.
Amazon Web Services (AWS)
Launched in 2006 by Amazon, AWS is a cloud computing platform. It offers a range of services, including IaaS, PaaS and SaaS. It was one of the first companies to offer a pay-as-you-go cloud computing model and it scales for its users as needed. Its range of data protection services offer data management capabilities that can be scaled, the ability to archive data as well as disaster recovery solutions.
Microsoft Azure
Microsoft’s Azure is one of AWS’ main competitors. Its top selling point is its compatible cloud infrastructure. Microsoft has taken its on-premises software — such as Windows Server, Office, System Center and others — and repurposed it for the cloud. This tight integration allows Microsoft users to transition into Azure easily.
So which comes out on top?
Both are leading cloud platforms. In terms of backup and recovery, they both provide varying benefits depending on your business needs. If your business already uses Windows, Azure is an option that can quickly be integrated into your system, meaning you can easily take advantage of its backup and disaster recovery solutions. If you’re looking to use a hybrid cloud to back up and recover your data, Azure makes this simple.
AWS has only recently launched its own fully managed backup service — previously it relied on a range of backup and storage partners. AWS offers many more services than Azure, so depending on what else your business wants to use the cloud for, AWS may be the better option.
Before you pay for cloud services for backup and recovery, ensure you do the research to figure out which platform suits the needs of your business.
Get your instant Backup quote in just under 2 minutes
Use our quote generator today to get the best prices for backup solutions that best fit your specific business needs.
Creating a disaster recovery and business continuity plan that will deliver what you need requires defining your requirements and picking hardware and software components that fit. You need to consider your access requirements, security vulnerabilities and customer expectations.
Managed IT service partners can help you make the right decisions. Professional advice will ensure that your internal assessments are accurate and aligned with solutions that can actually deliver. The specifics matter, but the process undertaken by a professional service firm will start with the following points of assessment:
1. Do you understand your RPO and RTO criteria?
The heart of your disaster recovery plan is your RPO (recovery point objective) and RTO (recovery time objective) criteria. Determining these breaks down into two sub-questions:
a. How much downtime can you tolerate?
Your recovery time objective (RTO) is how long it will take for your system to get up and running again. You need to assess your business needs and determine how long is too long. For some businesses, a few days might be fine. For most businesses, more than a few minutes is a disaster.
b. How much data can you lose?
Your recovery point objective (RPO) reflects how often your DR system ‘backs up’. The more often backups occur, the more taxing your ongoing DR procedure will be on your general IT infrastructure. However, long delays between syncing will create data loss if a failure happens near the end of a cycle.
Think about how damaging it would be to lose a day’s worth of data, vs an hour’s worth, vs a minute’s or a second’s worth. Weigh that against the cost and performance impact of executing the different strategies.
Your disaster recovery plan needs to reflect your answers to these two questions. If your RPO and RTO requirements are not matched by your DR capabilities, your system will let you down in the event of a failure.
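As a rough illustration of how to sanity-check a backup schedule against these two targets, the Python sketch below treats worst-case data loss as one full backup interval. The function names and all the figures are illustrative assumptions, not part of any vendor’s tooling:

```python
# Hypothetical sketch: check whether a backup schedule meets RPO/RTO targets.
# All figures are illustrative assumptions, not vendor guarantees.

def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    """If a failure hits just before the next backup runs, you lose up to
    one full interval of data."""
    return backup_interval_hours

def meets_targets(backup_interval_hours: float, estimated_restore_hours: float,
                  rpo_hours: float, rto_hours: float) -> bool:
    """Return True only if both the RPO and RTO targets are satisfied."""
    return (worst_case_data_loss_hours(backup_interval_hours) <= rpo_hours
            and estimated_restore_hours <= rto_hours)

# Example: hourly backups and a 2-hour restore, against a 1-hour RPO / 4-hour RTO.
print(meets_targets(1, 2, rpo_hours=1, rto_hours=4))   # True
print(meets_targets(24, 2, rpo_hours=1, rto_hours=4))  # False: daily backups miss the RPO
```

The point of the exercise is the second call: a restore process that comfortably meets the RTO can still fail the plan if backups are too infrequent for the RPO.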
2. Do you need/have failover and failback capabilities?
Failover is the process of switching your business-critical applications over to a temporary DR system while your main system is being restored. Failback is the process of returning those applications (along with any new data generated) to your restored system.
If your RTO criteria cannot accommodate substantial delays, you need a failover and failback contingency plan. Even if you estimate that your network speeds would allow for the restoration of your system within an acceptable amount of time, you are still taking risks without failover planning.
The cause of your original system failure may have damaged your main system, requiring maintenance before restore. In this case, failover is your only option. For most businesses, the amount of data they have means some form of failover planning is necessary to hit RTO timeframes even if no repairs to their main system are required.
3. What are your scaling demands?
A large benefit of the public cloud is dynamic, on-demand scaling. If you have a large influx of data, a public cloud will be able to seamlessly accommodate those demands. Equally, you pay for what you use — if you need to scale down operations you will not be stuck maintaining hardware that is no longer needed.
If you have dynamic scaling needs, you need some form of public cloud access. However, if your data requirements are predictable and/or relatively static, the public cloud offers far fewer benefits. It is still the cheapest option but scaling demands are what make the public cloud a requirement for some DR systems, rather than simply an economical choice.
4. How sensitive is your data?
Everyone’s data is sensitive, but some data is more sensitive than others. The use of clouds, and the public cloud in particular, comes with inherent security risks. Your data is transmitted across a WAN (in the case of public clouds, across the public WAN — the internet), meaning it can be intercepted. Encryption should be standard for any business IT system. But you may need to go further.
Private clouds are more secure than the public cloud. If you need secure access, you need an encrypted and authenticated private cloud. A local, data-centre-based disaster recovery system can be even more secure, removing remote access altogether. However, it is generally impractical for modern businesses to avoid cloud access, and the far larger selling point for local DR systems is restore speed. The fact remains: if you need security, you need a private cloud.
5. What are your access speed requirements?
You need to understand how much pressure you will be placing on your DR system, and its ability to deliver access speeds that meet your demands. This requires assessing your internal network capabilities and the guarantees made by any third-parties (DRaaS providers).
There are three sub-categories to access speeds that you need to consider:
a. Regular backup: your system needs to be able to accommodate the regular backup of new data generated throughout the day in timeframes and regularity that match your RPO criteria.
b. Failover: your system needs to be able to accommodate all of the traffic that will be relayed over that network in the event of failover.
c. Restore/failback: your system needs the ability to restore all data and applications within a timely manner. If you do not have a failover system (which is not recommended), this needs to be quick enough to match your RTO criteria.
If failover does occur, your system needs to be able to accommodate continued failover operations while also executing failback — restoring your system. This needs to be fast enough to restore new data being generated during failover, while still maintaining access speeds fast enough to continue operations.
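To get a feel for whether a given link can satisfy these scenarios, a back-of-the-envelope transfer-time calculation is a useful starting point. The Python sketch below uses a hypothetical 100 Mbps uplink and illustrative data volumes; it ignores protocol overhead, compression and deduplication, all of which matter in practice:

```python
# Illustrative sketch: back-of-the-envelope transfer times for the
# access-speed scenarios above. Link speed and data sizes are assumptions.

def transfer_hours(data_gb: float, link_mbps: float) -> float:
    """Hours to move data_gb over a link of link_mbps, ignoring overhead."""
    megabits = data_gb * 8 * 1000   # GB -> megabits
    seconds = megabits / link_mbps
    return seconds / 3600

# a. Regular backup: 50 GB of daily changes over a 100 Mbps uplink.
print(f"Daily incremental backup: {transfer_hours(50, 100):.1f} h")

# c. Full restore/failback: 5 TB over the same link.
print(f"Full restore: {transfer_hours(5000, 100):.1f} h")
```

Even this crude arithmetic shows why WAN-only restores often blow RTO targets: the daily increment moves in about an hour, while a full 5 TB restore over the same link takes more than four days.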
6. Are your business-critical applications cloud compliant?
If you are going to use the cloud for disaster recovery, particularly if you are going to execute failover procedures in the cloud, you need to assess the cloud compatibility of your business-critical applications.
Legacy applications are often ‘chatter’ heavy — sending signals repeatedly back and forth between different components, even when not active. This causes latency issues in the cloud. It can also be challenging to move into a cloud environment any application that tightly couples data processing and storage in a single component.
Depending on the nature of the problem, legacy applications can either put an increased strain on your access speeds or be fundamentally incapable of operating in the cloud. You need to make sure these problems are identified and mitigated from the beginning.
7. Are there clear silos in your data and applications?
Most businesses have data and applications that are critical to their operations, and others that are secondary or only required for compliance purposes. Some applications may have dynamically scaling data storage requirements, while others are far more predictable or static.
This could be a divide across security criteria — with some applications or data requiring far less protection than others. Lastly, the divide could fall between cloud compliant applications and legacy applications.
These kinds of clear divisions can make it far easier to use a hybrid model, subdividing applications and data to different systems that match their particular needs. Unless you believe your entire system could operate in the public cloud, it is recommended that you try to create these distinctions within your business applications and data — allowing you to keep costs down while also providing quality service where it counts.
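One way to make such silos concrete is a simple mapping of workloads to DR tiers. In the Python sketch below, every workload name and tier assignment is purely hypothetical — the point is the exercise of writing the divisions down explicitly:

```python
# Hypothetical mapping of workload silos to DR tiers in a hybrid model.
# Workload names and tier assignments are illustrative only.

DR_TIERS = {
    "public_cloud":  {"marketing_site", "analytics_archive"},  # non-sensitive, scalable
    "private_cloud": {"customer_db", "payment_records"},       # sensitive data
    "local":         {"legacy_erp"},                           # chatter-heavy legacy app
}

def tier_for(workload: str) -> str:
    """Look up which DR tier a workload has been assigned to."""
    for tier, workloads in DR_TIERS.items():
        if workload in workloads:
            return tier
    raise KeyError(f"No DR tier assigned for {workload!r}")

print(tier_for("customer_db"))  # private_cloud
```

A table like this — however it is recorded — forces the question of whether every application has actually been classified, and flags anything that falls through the cracks.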
8. Do you understand your cost model — have you picked the right one?
Cloud disaster recovery plans operate on different cost models. Private clouds might operate on CAPEX (capital expense) or OPEX (operating expense) models. Purchasing the hardware will save you money in the long run, but only if it is used to capacity.
Public clouds are always OPEX, but how that is billed varies. There are subscription models, pay-per-user models and consumption-based pricing. You may be charged based on IP addresses, and there can be separate fees for uploads, downloads, data transfers and more.
You need to know these specifics, know your requirements and make the best choice.
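A minimal sketch of comparing the two models over a fixed horizon, assuming entirely made-up prices (hardware cost, maintenance, subscription fee and egress charges are all hypothetical):

```python
# Illustrative CAPEX vs OPEX comparison: private-cloud hardware versus a
# public-cloud subscription. Every price below is a made-up assumption.

def capex_total(hardware_cost: float, monthly_maintenance: float,
                months: int) -> float:
    """Up-front hardware purchase plus ongoing maintenance."""
    return hardware_cost + monthly_maintenance * months

def opex_total(monthly_fee: float, per_gb_egress: float,
               egress_gb_per_month: float, months: int) -> float:
    """Subscription fee plus consumption-based egress charges."""
    return (monthly_fee + per_gb_egress * egress_gb_per_month) * months

# Break-even check over a 3-year horizon.
months = 36
capex = capex_total(hardware_cost=40_000, monthly_maintenance=500, months=months)
opex = opex_total(monthly_fee=1_200, per_gb_egress=0.05,
                  egress_gb_per_month=2_000, months=months)
print(f"CAPEX: {capex:,.0f}  OPEX: {opex:,.0f}")
```

With these particular (invented) numbers the OPEX route comes out cheaper over three years; double the egress volume or halve the hardware price and the answer flips, which is exactly why the billing specifics matter.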
9. Have you invested in staff training?
To turn your disaster recovery plan into a business continuity solution, you need to make sure that your staff know how to use the system, and that plans are in place to continue operations in the event of a failure.
It is not good enough to simply build the technical components of a disaster recovery solution — you need to invest the time and money required to ensure it is being used properly.
How to build a disaster recovery and business continuity solution
The criteria above will direct the type of solution that will work best for you. The final question to ask yourself is whether your organisation can build and run the system you need on its own.
Public cloud resources have to be delivered ‘as-a-service’. However, you could build and connect a number of other hybrid features to that system using in-house processes. You need to be confident that your IT team will be able to flawlessly maintain your system. You are relying on that system to protect your business. Managed IT service providers can help ensure the quality outcome you need — either through procurement advice or managing and maintaining the system on your behalf.
Avoid rigid disaster recovery solutions and strive for cloud/traditional IT interoperability
The best solution for most businesses is some form of hybrid cloud capabilities — using the public cloud for cloud compliant business applications and non-sensitive data, while keeping traffic heavy applications backed up in-house and using private cloud servers for the most sensitive information. These solutions offer some of the cost savings and flexibility of public clouds while mitigating security and bandwidth concerns. Make sure, however, to plan integration accordingly.
By comparing options and investigating hybrid and private cloud solutions, many businesses will likely be able to improve their backup and disaster recovery capabilities while still making cost savings and improving business continuity. It is simply necessary to plan and invest with open eyes. Identify the necessary RPO and RTO criteria, never underestimate costs and investigate the specifics of the pricing scheme.
There are hidden fees in how exactly you will be charged for uploading and downloading data. Most importantly, you need access to your data when disaster strikes. Scrutinise SLAs and the particularities of your own business before placing your disaster recovery options in the hands of a third-party.
Invest in partnerships
Partners are the key to procuring the right solution. For firms with limited in-house capabilities, partners provide a route to managed disaster recovery services that you can count on. Make sure to take advantage of these opportunities and be honest about your own internal capabilities. Your firm will be counting on your DR solution when it is needed — don’t let yourself down.