Your business has changed. The demand for agility, scalability and availability means everything now moves at a faster pace than ever before. The growing ability to process huge amounts of real-time information, and the vast, continuous trail of data and changes it generates, has fueled the need for new approaches to data storage, backup and recovery.
Traditional backup solutions struggle to address the needs of distributed data, or to keep pace with the digital ‘speed of business’. Work is no longer tied to the office, and just as your business processes and applications have had to accommodate BYOD and off-site employees, so too must your disaster recovery options. What you need are solutions that are distributed and can scale.

Without point-in-time backups, your organisation runs a significant risk of losing vital data in the event of a failure. However, you also require near-immediate failover/failback capabilities to avoid the high costs of missing a single step in the fast-paced digital world of modern business.

The advent of the public Cloud, with its promise of infinite capacity and lower costs, has introduced an answer that supports that ability to scale while making savings and meeting the offsite requirements that now surround backup and disaster recovery plans. It appears to offer both reliability and agility on a budget. But is it all it’s cracked up to be? Hidden fees, security threats, recovery times and access speeds have all called into question the ability of the public Cloud revolution to truly offer progress in this sensitive area of business operations. This article explores the potential shortcomings of the public Cloud as a platform for backup and disaster recovery, and looks at how businesses can reliably secure their information in the event of a system failure.

Backup and Disaster Recovery Priorities
Costs: The Danger of Hidden Fees
In theory, public Cloud server and storage costs are falling. In reality, while pricing continues to become more modular and usage-aligned, this doesn’t necessarily translate into lower bills. Seemingly small initial costs can spiral out of control, so it is essential that your in-house IT team learns the Cloud pricing models and takes steps to mitigate price gouging for essential services. Your first payment will seem minuscule compared to the upfront cost of building on-site infrastructure for backup purposes; it will seem much less so by the 100th payment.

Other factors, such as the bandwidth required to access the public Cloud from both your premises and at the server level, can also have a serious impact on costs. Many public Cloud providers charge per GB for communication between servers and levy a second per-GB fee when data is sent out over the internet. For example, AWS will charge you for the use of a public IP address and then levy an additional fee for every IP address involved in a data transfer. Many of these charges seem insignificant and do not always apply, but, like the continual cost of renting the service, they can add up dramatically over time.

In the event of a critical failure, things can get even more expensive. A poorly constructed solution may require full copies of your data to be replicated back to your primary servers before failback can occur. That will not only be slow, it will be expensive: data transfer out to the internet can cost five times the ingest charge, sometimes more. It can also require your company to invest in an even larger bandwidth package to support the use of external applications, and to incur additional fees to make applications Cloud compliant.
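To see how small recurring charges compound, consider a rough sketch of the arithmetic. The per-GB storage and egress rates below are placeholder figures chosen purely for illustration, not any provider’s actual pricing, and the workload (10 TB stored, 500 GB restored or tested per month) is a hypothetical example:

```python
# Hypothetical illustration of cumulative public-Cloud backup costs.
# The rates are placeholders, NOT real provider pricing; check your
# provider's current price sheet before budgeting.

def monthly_cost(stored_gb, egress_gb, storage_rate=0.023, egress_rate=0.09):
    """One month's storage plus egress cost, in dollars."""
    return stored_gb * storage_rate + egress_gb * egress_rate

# 10 TB stored and 500 GB restored/tested per month, over three years.
total = sum(monthly_cost(10_000, 500) for _ in range(36))
print(f"Three-year spend: ${total:,.2f}")

# A single full failback of all 10 TB, paid at the egress rate.
failback = 10_000 * 0.09
print(f"One full 10 TB restore: ${failback:,.2f}")
```

Note that under these assumed rates, one complete failback costs several months’ worth of ordinary storage fees on its own, which is exactly the “five times the ingest charge” dynamic described above.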
Such a bandwidth investment can not only compound general latency issues, particularly for midsized firms with a single connection; it also requires the purchase of redundancy measures to protect that single point of failure, and increases general internet costs.

Speed of Failback—Speed of Business
Recovery Time Objectives are a critical part of any backup and disaster recovery plan. Without them, all the time, cost and resources invested are wasted, because you have set no guarantees regarding how your business will recover in a disaster scenario.

The use of the public Cloud makes it fundamentally difficult to honour any Recovery Time Objective, because of the limited bandwidth provided to public Cloud servers and the competition for bandwidth across the wider internet. Prioritisation of particular traffic sets can only go so far. For services operating over the internet or on leased infrastructure such as AWS, it is impossible to make meaningful Operational Level and Service Level Agreements regarding restore times. Without these, public Cloud disaster recovery cannot prevent catastrophic delays in restarting business processes after a failure. For example, attempting to restore just 1TB of data from a public Cloud server limited to a 20Mbit/s connection will take over 5 days.

By contrast, a local backup system can mean restores complete upwards of 300x faster, with backup procedures running at as much as 1,000x the speed. It is prudent to keep that in mind before placing your critical business applications in the hands of public network speeds. The bandwidth bottleneck ultimately raises concerns about the longevity and spread of the public Cloud as the replace-everything service it is often proffered to be; public Cloud based disaster recovery is simply one area of concern.
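The restore-time figure quoted above can be checked with back-of-envelope arithmetic. This sketch assumes decimal terabytes and an illustrative 90% link efficiency to account for protocol overhead; the local-link speed is likewise a hypothetical comparison point:

```python
# Back-of-envelope restore-time estimate: pulling 1 TB back from a
# Cloud server over a 20 Mbit/s link. The 90% efficiency figure is an
# illustrative assumption for protocol and contention overhead.

def restore_days(data_tb, link_mbits, efficiency=1.0):
    """Days needed to transfer data_tb terabytes at link_mbits Mbit/s."""
    bits = data_tb * 1e12 * 8                        # decimal TB -> bits
    seconds = bits / (link_mbits * 1e6 * efficiency) # effective throughput
    return seconds / 86_400                          # seconds per day

print(f"1 TB @ 20 Mbit/s (raw):      {restore_days(1, 20):.1f} days")
print(f"1 TB @ 20 Mbit/s (90% eff.): {restore_days(1, 20, 0.9):.1f} days")
print(f"1 TB @ 1 Gbit/s local link:  {restore_days(1, 1_000) * 24:.1f} hours")
```

The raw transfer alone takes more than four and a half days; with realistic overhead it passes five, while the same restore over a local gigabit link completes in a couple of hours.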