
Guidelines for effectiveness of WAN Accelerator

KB ID: 1877
Product: Veeam Backup & Replication
Version: 9.x
Published: 2014-04-21
Last Modified: 2020-08-13


This article explains at which bandwidths the WAN Accelerator feature of Veeam Backup & Replication is most effective, and how to calculate the required cache sizes for both the Source and Target WAN Accelerators.


As a general rule, we recommend the following guidelines for deciding when to use the WAN Accelerator and for setting performance expectations:

Global Cache on Spinning Disk
Link <3Mb/s - WAN likely saturated; processing rate depends on the data reduction ratio (estimated 10x)
Link >3Mb/s and <50Mb/s - WAN will not be fully utilized; expect a processing rate of ~5MB/s while using less bandwidth
Link >50Mb/s - WAN will not be fully utilized; a direct mode copy will use more bandwidth but will likely be faster

These numbers should be considered a baseline; your mileage may vary. The performance of the underlying storage where the global dedupe cache is located can greatly impact the performance of the WAN Accelerator.
Tests show no significant performance difference between spinning disk and flash storage for the target WAN accelerator cache. However, when multiple source WAN accelerators are connected to a single target WAN accelerator (many-to-one deployment), SSD or equivalent storage is recommended for the target cache, because the I/O load is now the sum of all the different sources.
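The link-speed guidance above can be summarized as a simple decision rule. This is an illustrative sketch only; the function name and return strings are not part of the product:

```python
def recommend_copy_mode(link_mbit_per_s: float) -> str:
    """Suggest a copy mode from the link-speed guidelines in this article.

    Thresholds (3 Mb/s and 50 Mb/s) are taken directly from the
    guidelines; the wording of each recommendation is illustrative.
    """
    if link_mbit_per_s < 3:
        return "WAN accelerator: link likely saturated; rate depends on data reduction (est. 10x)"
    elif link_mbit_per_s < 50:
        return "WAN accelerator: expect ~5 MB/s processing; link not fully utilized"
    else:
        return "direct mode: uses more bandwidth but likely faster"

print(recommend_copy_mode(100))
```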


WAN Accelerator Cache/Digest Provisioning

If we assume three VMs, each with a unique OS (for instance, Windows 2008 R2, Windows 2012 R2, and Solaris 10), each OS requires 10GB of cache to be allocated for it, for a total of 30GB. The cache itself is wholly independent of the digests required; that is, the Veeam GUI does not determine how much space you can allocate for a digest. The digest is essentially an index of which cached blocks go where. For digest sizing, each 1TB of VM disk capacity being backed up requires 20GB of disk space. That is, for 10 VMs with a combined capacity of 2TB, you must allocate 40GB for digest data on the Source WAN Accelerator. The same requirement applies to the Target WAN Accelerator.
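As a rough sketch of the digest sizing rule above (20GB per 1TB of protected VM disk capacity, i.e. 2% of the source data size; the helper name is illustrative, not part of the product):

```python
def digest_size_gb(protected_vm_capacity_tb: float) -> float:
    """Digest space per WAN accelerator: 20 GB per 1 TB of
    protected VM disk capacity (a 2% ratio)."""
    return protected_vm_capacity_tb * 20.0

# 10 VMs totalling 2 TB of disk -> 40 GB of digest space,
# required on both the source and the target WAN accelerator.
print(digest_size_gb(2.0))  # 40.0
```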

For a many-to-one setup, the global cache is calculated per source WAN Accelerator working with the Target WAN Accelerator, so the global cache must be increased proportionally. Using the same VMs as in the previous example, a single source requires only 30GB of cache; with 3 Source WAN Accelerators, the cache size must be 90GB. On the Target WAN Accelerator, the number of Source WAN Accelerators dictates not only the cache size but also the digest size: in this example, 120GB of digest space is required (40GB per source), which, added to the 90GB cache, results in a minimum volume size of 210GB.
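Putting the many-to-one rules together, the target volume sizing can be sketched as follows (the function and parameter names are illustrative; the constants come from the rules in this article):

```python
def target_volume_gb(unique_os_count: int,
                     source_accelerators: int,
                     capacity_per_source_tb: float) -> dict:
    """Minimum sizing for the target WAN accelerator volume in a
    many-to-one deployment, per the rules in this article:
      - cache: 10 GB per unique OS, multiplied by the number of
        source WAN accelerators;
      - digest: 20 GB per TB of protected capacity, summed over
        all sources.
    """
    cache_gb = unique_os_count * 10 * source_accelerators
    digest_gb = capacity_per_source_tb * 20 * source_accelerators
    return {"cache_gb": cache_gb,
            "digest_gb": digest_gb,
            "volume_gb": cache_gb + digest_gb}

# Example from this article: 3 unique OSes, 3 source accelerators,
# each protecting 2 TB -> 90 GB cache + 120 GB digest = 210 GB volume.
print(target_volume_gb(3, 3, 2.0))
```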
