A Word About Backup Solutions

Does your company have a defined backup and recovery strategy and system in place? Does having such a system even matter? Unfortunately, most companies haven’t implemented effective backup solutions and, in some cases, haven’t even defined what a proper backup solution should provide. A proper system should be tailored to business needs and allow the business to resume operations after a media failure or other unforeseen event. Let’s expand on what backup systems should provide.

Full Deduplication Across All Systems

Deduplication drastically reduces the space required to store backups by removing redundant files across multiple machines (standard OS files need to be stored only once even though they exist on many systems). Because deduplication also works across time, backups can be retained indefinitely, with the ability to restore to any previous day.
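To make the idea concrete, here is a minimal sketch of content-addressed block storage in Python; the chunk size and the on-disk store layout are illustrative assumptions, not a reference to any particular product.

```python
import hashlib
import os

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative 4 MiB chunks
STORE_DIR = "dedup-store"     # hypothetical on-disk block store

def store_file(path):
    """Split a file into chunks and store each chunk under its SHA-256.

    A chunk shared by many machines (e.g. a standard OS file) lands at
    the same content address, so it is physically stored only once.
    Returns the manifest of chunk hashes needed to reassemble the file.
    """
    os.makedirs(STORE_DIR, exist_ok=True)
    manifest = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            dest = os.path.join(STORE_DIR, digest)
            if not os.path.exists(dest):  # dedup: skip content we already have
                with open(dest, "wb") as out:
                    out.write(chunk)
            manifest.append(digest)
    return manifest
```

Because each backup run is then just a manifest of chunk hashes, keeping another restore point costs only the chunks that are new that day, which is what makes indefinite retention practical.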

Full Client-Side Encryption

Any data leaving the backed-up host should be encrypted with a unique encryption key that is stored only on that system and known exclusively to the backup administrator. Where regulations require separation of administrative duties, the key can be generated by two people who each produce half of it independently and are blind to one another’s half. Each half is then stored in a physical or virtual “break glass*” environment so that the full key can be retrieved in an emergency.
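As a hedged illustration of that two-person rule, the key can be assembled from two independently generated shares so that neither operator ever sees the whole key; XOR combination, sketched here in Python, is one common construction rather than a mandate.

```python
import secrets

KEY_BYTES = 32  # 256-bit backup key

# Each operator generates their share independently, never seeing the other's.
share_a = secrets.token_bytes(KEY_BYTES)  # held by operator A
share_b = secrets.token_bytes(KEY_BYTES)  # held by operator B

# The actual encryption key exists only transiently when both shares are
# combined, e.g. inside the backup client or a "break glass" recovery step.
key = bytes(a ^ b for a, b in zip(share_a, share_b))
```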

Full System Backups

There is an old but valid saying: “a backup is not a backup until you have tried restoring it”. Unfortunately, many people back up only the particular data they consider critical and aren’t concerned with the systems themselves. If you have ever tried restoring a backup at 3 a.m. on a Sunday morning, you can appreciate how short-sighted this is. Imagine having to reverse-engineer the version of every library, database, kernel and patch, scrambling to locate each piece while trying to get everything working as it was, all while totally deprived of sleep. You need the full system.

Databases, VMs & Other Stateful Files

A stateful file is one that relies on the state of another file, or another part of itself, being consistent in order to be useful. A good example is a database file. Imagine a scenario where someone performs a SQL transaction that updates 5 tables; the transaction is required to either succeed as a whole or fail completely. If only some of the tables are updated, the internal state becomes inconsistent and can cause major issues for the application using the database. Applied to backups, this matters when we back these files up in the middle of transactions. If it takes 30 minutes to back up all of the database files, it’s very likely that transactions occurring during the backup leave the stored files internally inconsistent. The same problem applies to TrueCrypt volumes, loop devices, virtual machine disks and any other file that contains a “virtual file system”. These stateful files need to be backed up with knowledge of their state in order to make them consistent and restorable: leverage database tools to “dump” a consistent snapshot, back up the mounted TrueCrypt/loop device volume rather than the underlying file, and back up virtual machines from inside the guest OS. Alternatively, hypervisor snapshots can be used to create an internally consistent restore point before performing the backup.
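For example, a pre-backup hook can ask the database engine itself for a consistent dump instead of copying its live data files. This Python sketch shells out to mysqldump, whose --single-transaction flag produces a snapshot reflecting a single point in time; the database name and paths are placeholders.

```python
import subprocess

def dump_database(name, out_path):
    """Ask the database engine for an internally consistent snapshot.

    --single-transaction makes mysqldump read everything inside one
    transaction, so the dump reflects a single point in time even if
    writes continue during the backup. The resulting file is a plain,
    stateless file that any backup tool can safely pick up.
    """
    with open(out_path, "wb") as out:
        subprocess.run(
            ["mysqldump", "--single-transaction", name],
            stdout=out,
            check=True,
        )

dump_database("app_db", "/var/backups/app_db.sql")  # hypothetical paths
```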

Secure and Verified Transit

Even though all backups should be encrypted on the client side, you still don’t want them falling into malicious hands. So when moving data around, the source and destination should always be verified. Note that relying on IP address filtering is not secure; rather, both clients and servers should use a secure transport such as SSL/SSH with host authentication using public/private keys. Using protocols like SMB or NFS to transfer backups is not recommended. Remember that anyone who can access your backups can most likely compromise not only the data but, by using system data from the backup, the server itself.
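As a minimal sketch of what verified transit means in practice, here is mutually authenticated TLS using Python’s standard ssl module; the host name and certificate paths are placeholders.

```python
import socket
import ssl

# Trust only our own CA, and insist the server's certificate matches its name.
context = ssl.create_default_context(
    ssl.Purpose.SERVER_AUTH, cafile="/etc/backup/ca.pem"  # hypothetical CA bundle
)
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

# Present a client certificate so the server can authenticate us, too.
context.load_cert_chain("/etc/backup/client.pem", "/etc/backup/client.key")

with socket.create_connection(("backup.example.com", 8443)) as sock:
    with context.wrap_socket(sock, server_hostname="backup.example.com") as tls:
        tls.sendall(b"...encrypted backup stream...")
```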

Client-Side Differential/Incremental Algorithms

Any good backup solution should have built-in mechanisms to detect whether particular files, or parts of files, already exist on the backup server, and to avoid transferring or processing them again. This work should happen on the client side, greatly reducing both network traffic and the time taken to back up. Which files need backing up can be determined by one of several mechanisms, or a combination of them. At a minimum, the solution should use a file’s “last modified” timestamp to determine whether it has changed. More sophisticated solutions hash blocks of files to identify the parts of a file that have changed and then transfer only the altered parts.
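Here is a sketch of that two-level check, assuming the client keeps a small local index of what it sent last time; the index layout is purely illustrative.

```python
import hashlib
import os

def changed_blocks(path, index, block_size=1 << 20):
    """Yield (offset, block) pairs that differ from the previous backup.

    'index' maps (path, offset) -> SHA-256 of that block as of the last
    run. The mtime check skips untouched files entirely; the per-block
    hashes narrow a touched file down to the parts that actually changed.
    """
    if os.path.getmtime(path) == index.get((path, "mtime")):
        return  # cheap path: file untouched since the last backup
    with open(path, "rb") as f:
        offset = 0
        while block := f.read(block_size):
            digest = hashlib.sha256(block).hexdigest()
            if index.get((path, offset)) != digest:
                index[(path, offset)] = digest
                yield offset, block  # only this part goes over the wire
            offset += len(block)
    index[(path, "mtime")] = os.path.getmtime(path)
```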

Compression

Compression should be performed on the client side to reduce load on the backup server. Having the server perform compression severely limits the number of hosts that can share the same deduplication pool and reduces efficiency. The compression will typically need to be a streaming algorithm; the best of these is LZMA (7-Zip), followed by bzip2 and finally good ol’ gzip. What makes LZMA ideal is its superior compression ratio combined with very inexpensive decompression. Since each incremental backup includes only a relatively small amount of data, the increased CPU time for compression is irrelevant compared to bzip2 or gzip.
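In Python, streaming LZMA compression on the client is only a few lines; the file names are placeholders.

```python
import lzma
import shutil

# Compress on the client as a stream, so memory use stays flat even for
# large files and the backup server receives ready-to-store data.
with open("backup.tar", "rb") as src, lzma.open("backup.tar.xz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```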

The Final Word

So there you have it: the things to consider when thinking about your backup system. If your backup solution meets all of these requirements, you should have a very scalable and secure way to perform backups, with the ability to restore data to any desired point. Some people may look at the requirements and conclude that they are mutually incompatible, but they can be reconciled through some clever technology.

Convergent encryption allows a file (or part of one) to be encrypted using a key that is a hash of the file itself. The result is an encrypted file that can be deduplicated on the server, because all instances of the file will be binary-identical. The only way to decrypt/restore it is to have had the original at some point in order to generate the hash.
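Here is a minimal convergent-encryption sketch in Python, assuming the third-party cryptography package and using a deterministic nonce so that identical plaintexts always produce identical ciphertexts. A real design would also have to weigh the known confirmation-of-file attacks against convergent encryption.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes) -> bytes:
    """Encrypt data under a key derived from the data itself.

    Same plaintext -> same key -> same ciphertext, so the server can
    deduplicate encrypted chunks without ever learning their contents.
    """
    key = hashlib.sha256(data).digest()        # key is a hash of the data
    nonce = hashlib.sha256(key).digest()[:12]  # deterministic nonce per key
    return AESGCM(key).encrypt(nonce, data, None)

def convergent_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Only someone who once had the plaintext (and thus the key) can decrypt."""
    nonce = hashlib.sha256(key).digest()[:12]
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```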

A second technology worth mentioning is the rolling hash. At its simplest, a hash is a fixed-length string that serves as a “unique” ID for an arbitrary amount of data. If you have read this far, you are probably familiar with hash functions such as MD5 and SHA-1/SHA-256. Their drawback is that they are computationally rather expensive, and changing even one bit of the input requires the hash to be recomputed from scratch. Rolling hashes are a different beast: parts of the input data can be changed and the hash recomputed with minimal work on only the altered parts. Rolling hashes are very weak for cryptographic purposes, but for matching parts of files against other parts of files they are a very efficient solution. Combined with strong hash functions such as SHA-256, they become an excellent tool for deduplication and allow a remote client to perform the deduplication work without having to transfer all of the data.
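To make this concrete, here is a toy Adler-32-style rolling hash in Python, similar in spirit to the weak checksum rsync uses; the modulus and state layout are illustrative.

```python
MOD = 65521  # largest prime below 2**16, as used by Adler-32

def roll_in(data):
    """Compute the initial (a, b) state over a window of bytes."""
    a = b = 0
    for byte in data:
        a = (a + byte) % MOD
        b = (b + a) % MOD
    return a, b

def roll(a, b, out_byte, in_byte, window):
    """Slide the window one byte: an O(1) update instead of re-hashing."""
    a = (a - out_byte + in_byte) % MOD
    b = (b - window * out_byte + a) % MOD
    return a, b
```

The client slides the window one byte at a time in constant work per step; whenever the weak hash matches a block the server already knows, a strong hash such as SHA-256 confirms the match before the block is skipped.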

Typical results of deploying a solution that fulfills these requirements: backups of a standard webserver take less than 5 minutes, with each additional server consuming about 1% of the space it actually uses on its own. Restoring a typical webserver can be done in less than 20 minutes.

If you are implementing or building your own solution by bundling different tools, you should explore these:

Any combination of these tools should serve you well.

* A “break glass” system is a way to store passwords and keys such that they can be retrieved in an emergency, but it is immediately obvious that the key/password has been retrieved, and whoever retrieved it is logged. This style of system should be used to store all superuser credentials that don’t correspond directly to a person: it allows disaster/emergency recovery while providing a full audit trail of system access. Consider using it to store data like root passwords for *nix machines.
