3 Questions You Need to Ask When Backing Up Your Systems

One conversation we have here at DCD with just about every client is about the importance of backing up their systems, and on our blog we’ve talked about World Backup Day and the importance of cybersecurity. The top five causes of data loss (system malfunction, human error, software corruption, viruses and malware, and natural disasters) each destroy data in a different way, and thus require different forms of protection.

Sure, you can say human error is a leading cause of data loss, but what kind of errors does that include? This infographic from technologynewsextra.com has some specific examples of common causes of data loss.

The IT industry always touts the importance of keeping and maintaining backups, but when you’re backing up your data, it’s important to ask yourself what you’re actually protecting yourself from. As more and more operations move into the cloud, and companies keep their information on systems operated by Amazon, Microsoft, or Google, justifying the cost of backups can become difficult.

It’s not that backups are expensive. If you look into list prices for suitable backup space today, you’ll find them relatively low: Amazon charges $0.025 per gigabyte-month for Cold HDD volumes, Azure costs around $0.05/GB-month ($0.08 if the data is replicated to an alternate geographical location), and Google offers “Nearline” storage for only $0.01/GB-month, which is almost down to the cost of a tape-backup solution in a datacenter.
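
To put those rates in perspective, here’s a minimal Python sketch that estimates a monthly storage bill at the list prices quoted above. The prices are simply the figures from this article (they will certainly drift over time), not live data from any provider’s API.

```python
# Rough monthly cost estimate for off-site backup storage,
# using the list prices quoted in the article (USD per GB-month).
PRICES_PER_GB_MONTH = {
    "AWS Cold HDD": 0.025,
    "Azure LRS": 0.05,
    "Azure GRS (geo-replicated)": 0.08,
    "Google Nearline": 0.01,
}

def monthly_costs(backup_size_gb: float) -> dict:
    """Return the estimated monthly bill for each provider."""
    return {name: rate * backup_size_gb
            for name, rate in PRICES_PER_GB_MONTH.items()}

if __name__ == "__main__":
    for provider, cost in monthly_costs(500).items():  # a 500 GB backup set
        print(f"{provider}: ${cost:.2f}/month")
```

Even a 500 GB backup set comes out to somewhere between $5 and $40 a month at these rates.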

Cloud services like AWS might give off the impression that because you’re protected from hardware failure, backups are unnecessary. However, hardware failure is generally one of the rarest reasons a system restore is needed. Human error is one of the leading causes of data loss, and a large number of the restores performed by DCD’s experts were originally triggered by an administrator or superuser: while modifying a system, they made a change that either caused irrevocable damage or created an error so arduous to fix that restoring from backup was faster.

The backups used for these sorts of accidents are wildly different from the machine-image snapshots that most non-technical users might imagine backups to be. While services like Amazon’s RDS and Azure SQL Database allow for point-in-time restores and similarly precise recoveries, many businesses still haven’t migrated to those platforms. In our interactions with our clients, DCD encounters a lot of GoDaddy-hosted servers from 2011, pure ASP applications on SQL Server Express, and PHP applications on MySQL.
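
For teams that are on a managed platform, that kind of precise recovery is often a single API call. As a sketch, here’s what a point-in-time restore looks like through AWS’s boto3 SDK for an RDS instance; the instance identifiers and timestamp are hypothetical placeholders, not anything from a real system.

```python
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

# Spin up a new instance from a (hypothetical) production database's
# point-in-time backups, e.g. to just before a bad change went in.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="prod-orders-db",           # hypothetical
    TargetDBInstanceIdentifier="prod-orders-db-restored",  # hypothetical
    RestoreTime=datetime(2016, 8, 4, 13, 30, tzinfo=timezone.utc),
    # Alternatively, pass UseLatestRestorableTime=True instead of
    # RestoreTime to recover to the most recent restorable point.
)
```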

Using these systems is not a bad thing; many companies don’t replace their current apps and underlying technology because doing so is usually far too expensive a project for them to undertake. However, this is also why making sure a robust backup system is in place is paramount.

To help users size their backups, we have them consider a few questions:

  1. If, right at this moment, all of your customer, order, or inventory data disappeared from your production systems, how long would it take you to notice or be informed? That gives you a base time for how far back your backups should go; usually some small multiple of that time will let you react while you still have usable backups.
  2. If you found out right now that your production system was down and your employees couldn’t work on it, how much per hour would that cost you? Would you still be able to take orders? Would it be obvious to your customers that you were down? These questions help measure the costs of an outage.
  3. What are the odds of an outage or a data-loss event happening? Most of our clients have been in business long enough to have had multiple such accidents, and they can give us a number; usually it comes down to once every six to eighteen months. By combining this number with the cost-per-hour of an outage, we can establish a reasonable budget, or at least an upper limit, on how much to spend on data protection.

For instance, if a business makes $100,000 per year, and an outage takes them down for an average of 1 business day per year, that means they are likely to lose:

  • $100,000 per year / 365 days per year = $273.97 lost per day of outage
  • $273.97 expected loss per year / 12 months per year = $22.83 expected loss per month

Factoring in some room for error and provisioning for extra outage time, they probably don’t want to spend more than about $33/month on data protection; beyond that, they’re likely to pay more for backups than they actually stand to lose. If the system they’re looking to implement would keep them down for only half a day (making phone calls, having specialists log in to their systems, run the restore, and hand control back to the users), anything under $15/month would be a reasonable investment.

However, for a business that does $5,000,000 per year, where an outage of their large ERP system takes 3 days to repair under their current setup (restoration time plus re-keying any lost data), as much as $5,000 per month might be justified to protect those systems.
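
To make that arithmetic reusable, here’s a minimal Python sketch of the budgeting logic described above. The 1.5x headroom factor is our reading of the “room for error” margin in these examples, an assumption rather than a hard rule.

```python
def expected_monthly_loss(annual_revenue: float,
                          outage_days_per_year: float) -> float:
    """Expected revenue lost to outages, spread across the year."""
    daily_revenue = annual_revenue / 365
    return daily_revenue * outage_days_per_year / 12

def backup_budget_cap(annual_revenue: float,
                      outage_days_per_year: float,
                      headroom: float = 1.5) -> float:
    """Rough upper limit on monthly data-protection spending.

    Spending much beyond this cap means paying more for backups
    than the business actually stands to lose.
    """
    return expected_monthly_loss(annual_revenue, outage_days_per_year) * headroom

if __name__ == "__main__":
    # The $100,000 business losing one day per year:
    print(f"${expected_monthly_loss(100_000, 1):.2f}/month expected loss")  # ~$22.83
    print(f"${backup_budget_cap(100_000, 1):.2f}/month budget cap")         # ~$34.25
    # The $5,000,000 business with a 3-day ERP outage per year:
    print(f"${backup_budget_cap(5_000_000, 3):.2f}/month budget cap")       # ~$5,136.99
```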

For datacenter users, the conversation takes on a different perspective: how do you protect against risks that would compromise multiple systems at once? The classic case for this discussion is the datacenter burning down: all servers, drives, and on-site backup tapes destroyed in their entirety, beyond hope of recovery. Is that something you’re worried about protecting against, and what’s a reasonable price to pay?

As scary as such a high-profile disaster may be, comprehensive destructive events of that kind are historically rare, and rarer still now that datacenters tend to be placed in free-standing buildings far from urban centers. So let’s say that a datacenter has a 0.1% chance of a catastrophic event (fire, flood, industrial accident, etc.) in any given year. How long would it take to get back up and operational? How much money would be lost in the meantime?

From disaster to a return to normal operations, recovery would likely take at least 14 days: time to provision new hardware, have it shipped and installed, and restore the backups onto the new machines. That’s an enormous downtime window, but given the tiny risk of it happening, the expected loss is small. For our hypothetical $5,000,000 per year company, 0.1% x 14 days x roughly $13,700 per day of revenue works out to about $190 per year, which is all an accountant there could really assign to protecting against it.
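
The same back-of-envelope calculation in Python, using the assumptions above (a 0.1% annual probability and a 14-day recovery; both are illustrative figures, not actuarial data):

```python
def catastrophe_budget(annual_revenue: float,
                       annual_probability: float = 0.001,  # 0.1% per year
                       recovery_days: float = 14) -> float:
    """Expected annual loss from a total-datacenter-loss event.

    This is roughly the most it makes sense to spend per year
    insuring against that event with off-site backups or a
    standby site.
    """
    daily_revenue = annual_revenue / 365
    return annual_probability * recovery_days * daily_revenue

if __name__ == "__main__":
    print(f"${catastrophe_budget(5_000_000):.2f}/year")  # ~$191.78
    print(f"${catastrophe_budget(100_000):.2f}/year")    # ~$3.84
```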

So if catastrophic datacenter loss only merits about $190 per year of protection at our $5,000,000 company, where does that leave our $100,000 company? The same math allots it less than $4 per year, which isn’t enough to buy even a single 500 GB hard disk, let alone implement off-site backups on several hundred gigabytes or a cold or hot site. A key part of this conversation we have with our clients is the concept of a risk not being worth protecting against. As it turns out, when you’re a small business, sometimes “we’ll rebuild from scratch” is actually a viable strategy.

But a full rebuild doesn’t have to be your go-to option if you can’t afford off-site backups. It may cost a little extra in time and effort, but there are low-cost alternatives, such as sending somebody to the datacenter with an external hard drive to copy the most recent backups.
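
Even that manual run can be mostly scripted. Here’s a minimal Python sketch of the copy step; the backup directory, file pattern, and mount point are hypothetical placeholders to adapt to your own setup.

```python
import shutil
from pathlib import Path

# Hypothetical paths: point these at your backup directory
# and the mount point of the external drive.
BACKUP_DIR = Path("/var/backups/db")
EXTERNAL_DRIVE = Path("/mnt/external")

def copy_latest_backups(count: int = 3) -> None:
    """Copy the most recently modified backup files to the drive."""
    backups = sorted(BACKUP_DIR.glob("*.bak"),
                     key=lambda p: p.stat().st_mtime,
                     reverse=True)
    for backup in backups[:count]:
        shutil.copy2(backup, EXTERNAL_DRIVE / backup.name)
        print(f"Copied {backup.name}")

if __name__ == "__main__":
    copy_latest_backups()
```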

If you need help calculating how much your company should be spending on your data protection plan, contact us or leave a comment below.
