Business Continuity: Overcoming Data Loss and Downtime

    In the guest blog below, originally written by The Arcserve Team, we learn how data loss can disrupt day-to-day business and productivity, and explore the challenges that must be overcome to ensure business continuity. You can find The Arcserve Team’s full blog below, along with a link to the original at the bottom.

    Fear. Bewilderment. Despair. It unfolds like a nightmare, except it’s actually happening in your workplace: you’re at the helm when there’s a data loss disaster, and with spirit-crushing agony, you realize you won’t be able to restore your systems quickly enough to meet your company’s most urgent needs. It can be a frightful experience—one that threatens everything you’ve worked for and strived to achieve. Because when you lose data, as all companies will, you can suffer brutal delays in how you do business, lose a fortune in productivity and revenue, and watch helplessly as customers get frustrated by a bad experience and flee elsewhere. In fact, according to a recent ITIC study, 98% of organizations report that a single hour of downtime costs them $100,000 or more, while 81% state that the hourly cost is $300,000 or more. And those are just averages: the actual duration and cost of an outage can be far higher, even for small to medium-sized businesses, often reaching millions of dollars per incident. Watch out—and be sure “disaster recovery” applies to your company, not your career.

    Standard metrics, substandard results.

    Data protection has traditionally focused on two key metrics—RTO (recovery time objective), which measures the time it takes to restore your data, and RPO (recovery point objective), which measures how much data you’re willing to lose in an outage. Over the years, IT professionals have often focused on RTO as the primary way to guarantee a business gets back to normal. Many organizations can now get their data back online lickety-split—doing it in mere minutes, rather than hours…or days. Problem solved? Happy ending? Not necessarily. That’s because the other key element—the age of your data—also plays a vital role in whether you’re able to recover from disaster. Sure, with your impressive RTO, you may be back up and running in the blink of an eye. But what if your last backup was 10 hours ago, and you therefore can’t restore or fulfill any customer orders that were placed during this time span? You’d lose revenue that was already a “done deal,” without ever knowing who placed the lost orders or whether there was a chance of converting them into long-term customers who would provide significant lifetime value. Time to hit the panic button. And tighten up your RPO.
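    To make the point concrete, here is a minimal back-of-the-envelope sketch in Python (the order rate, order value and timings are invented for illustration, not figures from the Arcserve post): even with a 15-minute RTO, a restore point taken 10 hours before the failure leaves 10 hours of orders unrecoverable.

        # A minimal sketch showing why a fast RTO alone isn't enough: everything
        # written since the last backup is still lost. All figures are hypothetical.
        from datetime import datetime, timedelta

        rto = timedelta(minutes=15)                    # time to get systems back online
        last_backup = datetime(2024, 3, 1, 0, 0)       # most recent restore point
        outage_start = datetime(2024, 3, 1, 10, 0)     # failure strikes 10 hours later

        data_loss_window = outage_start - last_backup  # work that cannot be restored
        orders_per_hour = 120                          # hypothetical order rate
        avg_order_value = 85.0                         # hypothetical revenue per order

        lost_orders = orders_per_hour * data_loss_window.total_seconds() / 3600
        print(f"Systems back in {rto}, but {data_loss_window} of data is gone:")
        print(f"about {lost_orders:.0f} orders (~${lost_orders * avg_order_value:,.0f}) unrecoverable")

    Shrinking that window is exactly what a tighter RPO buys you: a restore point every 15 minutes would have capped the loss at 15 minutes of orders, not 10 hours.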

    Budget concerns, a cost-cutting straitjacket.

    Of course, in principle, you know you need to establish the RTOs and RPOs that are right for your systems and applications. But budget is always a factor. In terms of RPO, you simply may not have the necessary funds allocated to your IT department to successfully back up all your data as frequently as needed. Infrastructure and people costs can gobble up dollars quickly, and limit how many resources you can commit to a backup solution capable of supporting RPOs of minutes. Unfortunately, to meet these budget restrictions, many organizations are forced to let performance take a back seat. It’s a case of the old “penny wise and pound foolish” recipe for disaster. Tightening the purse strings may look wise in the short term, often with executive-level backing, but it can later hurt a company badly.

    Increasing complexity, non-sustainable status quo.

    Perhaps the biggest change in data protection over the past few years is the level of complexity in your IT environment. It’s enough to make anyone’s head spin. That’s because there are now so many moving parts that confusion and friction are bound to increase, leading to ugly delays in recovery time. Consider the following:

    Variety is everywhere.

    Today, you’re dealing with on-premises, cloud, hybrid and virtual environments, plus big data, video and photos—all dispersed on mobile devices around the globe. And all of it must be protected, most likely with varying SLAs.

    More backup mechanisms and vendors mean more hassles. 

    Sure, you may have a great local backup system, but it may not be connected to the cloud. Or your cloud backup may be managed by a different vendor than the one that handles your data centers. Mobile backups? Look to the individual app providers. And so on, and so on. It’s to the point where Gartner reports that the average midsize company has three or more backup solutions as part of its decentralized operations, with a quarter of these companies looking to switch vendors as soon as possible.

    Data is siloed and unequal. 

    Given the many environments referenced above, your company’s data is probably siloed in many locations—in your data center, in the public cloud, at a remote location, the list goes on. But data is not only separated by where it’s housed, it’s separated by degrees of importance. Your IT team had better be able to restore Point of Sale (POS) data in a few minutes, while restoring the presentation from a marketing conference two years ago is a far lower priority. You must be ready to execute a first-to-last action plan when downtime and data loss occur. Easier said than done—a lot easier.

    With so much complexity in today’s computing environment, it’s fair to say we’ve entered a new computing era. However, it’s likely your business continuity plans were put in place during the preceding era—when the path to data recovery was clearer and simpler. That puts you in danger. Because if you’re still using an old model that assigns a flat dollar value for recovery, without properly determining the actual financial risk of downtime and data loss, you’re setting yourself up for failure. Remember, as cited earlier, a downtime event can cost a midsize company $300,000 or more per hour in direct costs. And that doesn’t even include the indirect costs: customer turnover, the need for new customer acquisition programs, and decreased customer goodwill.
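    As a rough illustration of what pricing that risk might look like, here is a small sketch (the tiers, outage rates and dollar figures are hypothetical, not taken from the original post) that values each outage per workload tier as downtime plus lost work at that tier’s hourly cost of business, instead of assigning one flat recovery figure:

        # A back-of-the-envelope sketch (all figures hypothetical) of pricing
        # downtime risk per workload tier instead of using one flat recovery value.
        TIERS = {
            #  name             (rto_hrs, rpo_hrs, hourly_cost, outages_per_year)
            "point-of-sale":    (0.25,    0.1,     300_000,     2),
            "e-commerce":       (1.0,     0.5,     100_000,     2),
            "file archive":     (24.0,    24.0,    1_000,       1),
        }

        def annual_risk(rto_hrs, rpo_hrs, hourly_cost, outages_per_year):
            # Direct cost only: downtime while restoring (RTO) plus work lost since
            # the last restore point (RPO), both valued at the tier's hourly cost.
            per_outage = (rto_hrs + rpo_hrs) * hourly_cost
            return per_outage * outages_per_year

        for name, params in TIERS.items():
            print(f"{name:15s} ~${annual_risk(*params):,.0f} per year at risk")

    The tier names, outage rates and costs above would of course come from your own risk assessment; the point is simply that each tier carries its own RTO, RPO and price tag.

    Originally posted by The Arcserve Team here: https://www.arcserve.com/insights/business-continuity-overcoming-data-loss-and-downtime/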

    Rob Townsend

    Rob is a co-founder at Nexstor and has dedicated his career to helping a range of organisations, from SME to enterprise, get ahead of the game when it comes to their compute, storage and data needs.
