The 3-2-1 rule for ransomware protection

Now that your Veeam server is better protected, let’s move on to the 3-2-1 Rule. You may ask yourself: what is the 3-2-1 Rule, and why does it matter when it comes to ransomware? It is an industry standard for protecting data and your ultimate line of defense in the fight against ransomware.

Breaking down the 3-2-1 rule

We break the 3-2-1 rule down into three parts:

  1. Keep three copies of your data
  2. Store the copies on two different media types
  3. Keep one copy off site

To break this concept down further, the three copies are redundant copies of your data spread across different underlying storage; one copy will remain on your production system. By different media we mean different format types, such as a hard drive, tape or the cloud. Last is keeping one copy off site. This can mean another building reached over the WAN (or by sneakernet), tapes shipped to a storage facility, or a public/private cloud.
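As a rough illustration, the rule can be treated as a checklist against your inventory of backup copies. The sketch below is a minimal Python model; the `BackupCopy` class, media names and copy names are hypothetical and not part of any Veeam API.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    name: str
    media: str       # e.g. "disk", "tape", "object" -- illustrative labels
    offsite: bool

def meets_321(copies: list[BackupCopy]) -> bool:
    """Return True if this set of copies satisfies the 3-2-1 rule."""
    enough_copies = len(copies) >= 3                    # 3: copies of the data
    enough_media = len({c.media for c in copies}) >= 2  # 2: different media types
    has_offsite = any(c.offsite for c in copies)        # 1: copy off site
    return enough_copies and enough_media and has_offsite

copies = [
    BackupCopy("production", "disk", offsite=False),
    BackupCopy("local backup", "disk", offsite=False),
    BackupCopy("tape archive", "tape", offsite=True),
]
print(meets_321(copies))  # True: 3 copies, 2 media types, 1 off site
```

Dropping the tape copy would fail the check twice over: only two copies remain, and none of them is off site.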

Testing your backups

To further this concept, we have what is called the 3-2-1-0 rule. This may seem like we are just adding more numbers to the list for fun, but it is actually a very important part of the process. The 3-2-1 part remains the same, but the 0 means there are zero errors when testing your backup’s recovery. If you are not properly testing that your backups contain the data you need and can actually be restored, you should be updating your resume.
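To make "zero errors" concrete, one simple form of restore testing is comparing checksums of restored files against the originals. The sketch below is plain, hypothetical Python, not a Veeam feature (Veeam's SureBackup automates verification of this kind at a much deeper level).

```python
import hashlib
from pathlib import Path

def sha256sum(path: Path) -> str:
    """Hash a file in chunks so large backup files need not fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(source_dir: Path, restored_dir: Path) -> int:
    """Compare every source file against its restored copy; return the error count."""
    errors = 0
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        restored = restored_dir / src.relative_to(source_dir)
        if not restored.is_file() or sha256sum(src) != sha256sum(restored):
            errors += 1
    return errors  # the "0" in 3-2-1-0: this must come back as zero
```

A scheduled job that restores a sample set and asserts `verify_restore(...) == 0` turns the "0" from a slogan into a pass/fail test.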

Continuing with this concept, this article covers the different options Veeam offers to meet these standards, common mistakes made when trying to follow the rule and some popular configuration combinations.

Veeam storage options

Veeam is designed to be storage agnostic and works with a virtually endless number of media solutions. This means that if we can connect and authenticate to the storage, with authorization to read and write, we can send backups to it. Our storage support covers seven main types: direct attached, network, deduplicating, object storage, tape, storage snapshot orchestration and Backup as a Service (BaaS). Each storage type has its advantages and disadvantages when it comes to performance, cost and security.

Direct attached storage

Direct attached storage is generally one of the fastest storage types. We support both Windows and Linux operating systems, and the machine can be either virtual or physical. There can be some maintenance drawbacks to this repository type. If no one on staff is familiar with Linux, then the cost of a Windows license might need to be factored in. There is also a need to be conscious of operating system updates and how they might affect how data is written or processes are handled: a security flaw could be patched in the same package that changes the way data is saved or optimized in the file tables. Even with these potential drawbacks, you can expect consistent speeds, and this server can also function as a proxy, potentially saving an extra hop of the data over the network.

Network attached storage

Network attached storage (NAS) is generally the most convenient type of repository. The share can be presented to your network and have multiple gateway servers writing directly to it, creating the optimal throughput for your backups. The share can come from many kinds of sources, like a NAS device or a Windows server. One of the drawbacks is the connection to the share: it is subject to the ups and downs of network congestion in the environment. If the network experiences an uptick in user traffic for a period of time, expect your backups to be throttled by the available network throughput. The issue can be mitigated by having dedicated subnets for this traffic with pre-allocated network bandwidth. Backups sent over the network should be encrypted in transit; otherwise, someone sniffing on the line could steal the backup data. In short, this connection type can be made more resilient with multiple gateway servers, but take precautions to ensure that the network is reliable and the data is secure.

Deduplication appliances

Deduplicating appliances are best used as long-term backup storage but are not recommended as the primary backup location. This is for several reasons.

Most deduplication appliances do not handle synthetic processes like merges and synthetic fulls well; the appliance needs to rehydrate the backup files in order to perform these operations. This same rehydration process makes restores of the backup data very slow, and in most cases restores do not meet the business-required service level agreements (SLAs) for recovery time objectives (RTOs). This is why deduplicating appliances are great for storing long-term copies of backup data for audits and legal hold requirements, but not for primary backup data.

These appliances help save space when storing long-term backups that do not need to be accessed frequently. The takeaway here is that deduplicating appliances are great secondary storage for long-term backups thanks to their space-saving features, but in most cases have very slow recovery speeds for that data.

Object storage

Object storage is a very hot topic in cloud conversations. It creates the semblance of infinite storage expandability without the need to continually add hard drives to a physical server. The technology has been popularized by the shift from capital expenditure (CapEx) to operating expenditure (OpEx) that small and enterprise-level businesses alike face today. Object storage offers the flexibility to pay for the storage you need without paying for any hardware or maintenance upfront.

But even with this seemingly ever-expanding space to store backups, there are drawbacks. When restoring data or pulling backups back from object storage, there is often what is called an egress charge: a fee for downloading the data back to on-premises. Generally, the cheaper it is to store the data, the more it tends to cost to pull that data back down. To sum up: object storage is a great way to lower upfront capital expenditures, but make sure to read the fine print on the S3-compatible storage for egress fees.
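A back-of-the-envelope calculation shows why egress fees belong in any recovery plan. The rate and restore size below are purely illustrative assumptions; check your own provider's pricing.

```python
def restore_cost(backup_tb: float, egress_per_gb: float) -> float:
    """Estimate the one-time egress fee to pull a full backup back on-premises."""
    return backup_tb * 1000 * egress_per_gb  # 1 TB = 1,000 GB in provider billing

# Restoring 20 TB at a hypothetical $0.09/GB egress rate
print(f"${restore_cost(20, 0.09):,.2f}")  # $1,800.00
```

The point of running this number before a disaster, not after, is that the fee lands exactly when the business is least prepared for a surprise invoice.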

Tape is not dead

This is my favorite phrase to start a conversation about tape backups, because tape is in fact not dead. Tape is still the lowest-cost storage per byte available ($5-8 per TB) and is one of the most inherently resilient storage types against ransomware attacks. Tapes are also faster now than one might expect: LTO generation 8 has a native capacity of up to 12 TB and uncompressed speeds of 360 MB/s (I would like to stress that this is a big B, not a little b, so 360 megabytes per second). Tape servers are among the components least likely to be attacked by even sophisticated ransomware. With the storage being so cheap, tapes can be rotated offline and air gapped in a local safe, or sent off to a facility like Iron Mountain for compliance. This is not a primary backup solution, but it is a very viable secondary and archival backup solution.
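Those LTO-8 numbers translate into practical scheduling math: how long does it take to fill one tape at native speed? A quick sketch (using decimal units, 1 TB = 1,000,000 MB, as tape vendors do):

```python
def hours_to_fill(capacity_tb: float, speed_mb_s: float) -> float:
    """Time to write a full tape at the drive's native (uncompressed) speed."""
    seconds = capacity_tb * 1_000_000 / speed_mb_s  # TB -> MB, then divide by MB/s
    return seconds / 3600

# LTO-8: 12 TB native capacity at 360 MB/s uncompressed
print(round(hours_to_fill(12, 360), 1))  # ~9.3 hours
```

In other words, a single drive can fill a tape overnight, which is why tape jobs are commonly scheduled as secondary copies after the primary backup window.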

Storage snapshot orchestration

Storage snapshot orchestration is a great feature that taps into your existing production storage capabilities. Veeam can create and manage the storage snapshots, so if ransomware hits, you can easily use our software to roll back to a known working state. Storage snapshot orchestration generally produces the lowest recovery point objective (RPO) due to its reduced overhead. When a storage snapshot is orchestrated, new data is redirected to a different sector of the underlying storage, preserving a point-in-time copy of the volume. Since no existing data needs to be copied to a new location, point-in-time storage snapshots are created in minutes, not hours, and do not consume network bandwidth. But this process will not help in the case of hardware failure, because the production VMs and the storage snapshots live on the same storage, unless the data is being mirrored to a secondary storage array. This option offers the shortest RPO for rollbacks but does not protect you against underlying hardware failures.

Backup as a Service

Last, we have Veeam service providers who offer Backup as a Service (BaaS). A service provider allows a copy of the backups to be sent to another location that is managed and operated by a third party. This can be a great solution for customers who do not have, or cannot afford, a secondary business location to offload their backups to. The biggest potential issue with this option is giving up control of the data to a third party. Encrypting the backups and researching whether the service provider meets the company’s compliance and regulatory requirements for data is a must. This is an excellent way to offload some of the responsibility of meeting the 3-2-1 Rule, but it is also necessary to shop around for the right provider and to make sure the contract meets the company’s legal obligations for its data.

Common mistakes of the 3-2-1 Rule

I am sure that after reading the section above it is easy to see where each of these options can be used in an environment. But there are some commonly overlooked points that can leave your infrastructure vulnerable.

When it comes to three copies of the data, each copy needs to be on different storage. If you have two copies of the data on the same storage device and the hardware fails, then neither backup is going to work. For example, if the same NAS is partitioned into two repositories, one presented over iSCSI and the other over SMB, the backups are still sitting on the same underlying storage. If the backups are stored on the same storage as the primary VMs and that storage fails, both the primary VMs and your backups are lost. When using storage snapshot orchestration on a storage device that is not part of an array, the RPO will be low, but if the underlying storage fails, all rollback points and the VMs will be lost. Being mindful that the storage the backups are sent to does not share a single point of failure is key to the “3” of the 3-2-1 Rule. Another way to look at it is: “If this hardware dies, are there still two copies of my data somewhere else?”
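That question can be turned into a quick sanity check by mapping each copy of the data to the device it actually lives on. The device and copy names below are hypothetical:

```python
def copies_surviving_failure(copies: dict[str, str], failed_device: str) -> int:
    """copies maps each data copy to the underlying storage device it lives on."""
    return sum(1 for device in copies.values() if device != failed_device)

# Two repositories carved from the same NAS share one point of failure.
copies = {
    "primary VMs": "san-01",
    "iSCSI repo": "nas-01",
    "SMB repo": "nas-01",    # same underlying NAS as the iSCSI repo
    "tape copy": "tape-lib",
}
print(copies_surviving_failure(copies, "nas-01"))  # 2
```

Here the NAS failure wipes out two "copies" at once, leaving only the production data and the tape, which is exactly the scenario the iSCSI/SMB partitioning example describes.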

Spreading the data across different media makes it harder for ransomware to infect everything. Different media also offer different forms of credentials that may not all be tied to the domain. Some examples: a Linux or Windows repository using only local credentials; a deduplicating appliance with credentials configured only through the vendor console; object storage presented only through an access key and a secret key.

The data can be sent over the WAN to any one of these repositories if the company has a secondary site to send data to, but there are three options when this is not the case: use removable media, which can be in the form of a tape taken off site or removable hard drives used in a pool of rotating media (this will be covered more in-depth in the next post); add object storage to the scale-out backup repository to send a copy to the cloud; or sign up with a service provider to send a copy of the backups to a third party.

Wrapping up here: none of these options is superior to another. It all comes down to what is best for your business. Understanding the strengths and weaknesses of each option, along with the associated cost, is the best way to determine what your business needs. The next section explores repository configurations that serve as ultimate defenses against ransomware.


Navigation:

  1. First step to protecting your backups from ransomware
  2. The 3-2-1 rule for ransomware protection
  3. 3 storage options against ransomware
  4. How VCSPs help against ransomware
  5. Are you prepared when ransomware does happen?

