
How to Simulate Veeam Backup & Replication Disk I/O

KB ID: 2014
Product:
  • Veeam Backup & Replication
  • Veeam Agent for Microsoft Windows
  • Veeam Agent for Linux
  • Veeam Agent for Mac
Published: 2015-03-10
Last Modified: 2023-10-09

Purpose

This article provides examples of using common workload simulators (diskspd and fio) to simulate Veeam Backup & Replication disk I/O.

Do Not Send Test Output Files to Veeam Support
The write test output files (testfile.dat) do not contain diagnostic data. As such, please do not attach them to support cases.
If files created by the tools are still on disk after testing, delete them.
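
For example, leftover test files could be removed as follows (the paths shown are just the example locations used later in this article):

del D:\testfile.dat
rm /tmp/testfile.dat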

About the Tools

This article provides examples using two common tools for benchmarking and simulating disk workloads:

diskspd

Diskspd is a command-line tool commonly used in Windows-based environments for storage subsystem testing and performance validation.

Note: While DiskSpd is available for Linux, fio is included in this article because it is easier to install (DiskSpd for Linux must be compiled, whereas fio binaries are available for most Linux distributions through their package manager) and more commonly used.

Command Parameter Details
diskspd [options] [target]
Target

Any command that contains the -w switch must not target an actual backup file, as that would overwrite the backup file contents. You should only target a backup file when performing the listed restore performance tests.

Compatible Targets:

  • File on a volume with an assigned letter: D:\testfile.dat
  • File on a CIFS/SMB share: \\nas\share\testfile.dat
  • File on an NFS share, provided you have mounted it to a disk letter with Client for NFS: N:\testfile.dat
  • Disk: #X, where X is the number of the disk in Disk Management. You can use a local disk or one attached by iSCSI, and it does not matter if they are Online or Offline. In this mode, diskspd reads or writes directly from/to the disk ("RAW").

You can specify multiple targets, allowing you to simulate several jobs running at the same time.
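
For example, a single run with two targets might look like the following (a hypothetical sketch; the paths are placeholders, and the options are explained in the sections below):

diskspd.exe -c25G -b512K -w100 -Sh -d600 D:\testfile1.dat E:\testfile2.dat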

Block size

-b specifies the size of a read or write operation.

For Veeam, this size depends on the job settings. The "Local" storage optimization setting is selected by default, corresponding to a 1MB block size in backups. However, every data block is compressed before being written to the backup file, reducing the size. It is safe to assume that blocks compress down to half the size, so in most cases, picking a 512KB block size is a reasonable estimate.

If the job uses a different setting, WAN (256KB), LAN (512KB), or Local+ (4MB), change the -b value accordingly to 128KB, 256KB, or 2MB, respectively. If the Decompress option is enabled, do not halve the values.
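
For example, a write test for a hypothetical job using the WAN setting (256KB blocks, halved to 128KB to account for compression) might look like:

diskspd.exe -c25G -b128K -w100 -Sh -d600 D:\testfile.dat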

File size

-c specifies the file size you need to create for testing. We recommend using sizes equivalent to restore points. If the files are too small, they may be cached by hardware, thus yielding incorrect results.

Duration

-d specifies the duration of the test. By default, it does 5 seconds of warm-up (statistics are not collected), then 10 seconds of the test. This is OK for a short test, but for more conclusive results, run the test for at least 10 minutes (-d600).

Caching

-Sh disables Windows and hardware caching.

This flag should always be set. Veeam agents explicitly disable caching for I/O operations for improved reliability, even though this results in lower speed. For example, Windows Explorer uses the Cache Manager and, in a straightforward copy-paste test, will achieve greater speeds than Veeam due to cached reads and lazy writes. That is why using Explorer is never a valid test.

fio

Fio (Flexible I/O tester) is an open-source workload simulator commonly used with Linux-based environments.

Command Parameter Details
fio [options] [jobfile]
Regarding Job Files

It is possible to configure fio using preconfigured jobfiles that tell it what to do. This article does not use jobfiles; instead, all settings are passed as command-line options to simplify the presentation and demonstrate the parity of settings with the DiskSpd commands.
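
For reference, a minimal jobfile roughly equivalent to the full-write fio command used later in this article might look like the following sketch (illustrative only; each line mirrors the corresponding command-line option):

[full-write-test]
filename=/tmp/testfile.dat
size=25G
bs=512k
rw=write
ioengine=libaio
direct=1
time_based
runtime=600

It could then be executed with: fio jobfile.fio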

Warning Regarding Write Tests

Any command that contains the 'rw=write' or 'rw=randrw' parameters must not target an actual backup file, as that would overwrite the backup file contents. You should only target a backup file when performing the listed restore performance tests.

For a list of compatible targets, please reference fio documentation.

Block size

--bs specifies the size of a read or write operation.

For Veeam, this size depends on the job settings. The "Local" storage optimization setting is selected by default, corresponding to a 1MB block size in backups. However, every data block is compressed (except when compression is disabled at the job level or the repository level) before it is written to the backup file, so the size is reduced. It is safe to assume that blocks compress to half the size, so in most cases, picking a 512KB block size is a reasonable estimate.

If the job uses a different setting, WAN (256KB), LAN (512KB), or Local+ (4MB), change the --bs value accordingly to 128KB, 256KB, or 2MB, respectively. If the Decompress option is enabled, do not halve the values.
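
For example, a write test for a hypothetical job using the WAN setting (256KB blocks, halved to 128KB) might look like:

fio --name=wan-write-test --filename=/tmp/testfile.dat --size=25G --bs=128k --rw=write --ioengine=libaio --direct=1 --time_based --runtime=600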

File size

--size specifies the file size you need to create for testing. We recommend using sizes equivalent to restore points. If the files are too small, they may be cached by hardware, thus yielding incorrect results.

Duration

--time_based specifies that the test should be performed until the specified runtime expires.

--runtime specifies the duration of the test. For more conclusive results, run the test for at least 10 minutes (--runtime=600).

Caching

--direct=1 disables I/O buffering.

This flag should always be set. Veeam agents explicitly disable caching for I/O operations for improved reliability.

Solution

In the sections below, you will find different activities Veeam Backup & Replication performs, and examples of how the I/O load they cause can be simulated using diskspd or fio.

Simulation Examples

Each section below provides a command example of how to simulate performance for equivalent Veeam operations.

Please keep in mind that, as with all synthetic benchmarks, real-world results may differ.

NEVER target a restore point with a write speed test.
Doing so would overwrite the restore point and destroy backup data.

Commands in this article that use the following parameters perform write operations:

  • For DiskSpd: '-w'
  • For fio: '--rw=write' or '--rw=randrw'

You should only target a backup file when performing the listed restore performance tests; those are the only examples that demonstrate targeting a backup file.

Active Full / Forward Incremental

This test simulates the sequential I/O generated when creating an Active Full or Forward Incremental restore point.

diskspd.exe -c25G -b512K -w100 -Sh -d600 D:\testfile.dat
fio --name=full-write-test --filename=/tmp/testfile.dat --size=25G --bs=512k --rw=write --ioengine=libaio --direct=1 --time_based --runtime=600s
Remember: Update the path in the command to have the tool test the location where the backups are stored.

Synthetic Full / Merge Operations

This test simulates the I/O that occurs when creating a Synthetic Full or when a Forever Forward Incremental Merge occurs. Both of these operations within Veeam Backup & Replication involve two files: one is read from while the other is written to.

  • This test is applicable when all restore points involved in either the Synthetic Full or Merge operation are stored on the same storage.
  • This test is not valid when using Fast Clone or when using a Scale-Out Backup Repository in Performance Mode, where the Full and Incremental restore points are kept on different storage.
diskspd.exe -c100G -b512K -w50 -r4K -Sh -d600 D:\testfile.dat
fio --name=merge-test --filename=/tmp/testfile.dat --size=100G --bs=512k --rw=randrw --rwmixwrite=50 --direct=1 --ioengine=libaio --iodepth=4 --runtime=600 --time_based
Remember: Update the path in the command to have the tool test the location where the backups are stored.

After completing the test, add the read and write speeds from the results and divide by 2. For every processed block, Veeam needs to perform two I/O operations (one read and one write), so the effective speed is half.

To estimate an expected time to complete a synthetic operation (in seconds):

  • For a Synthetic Full, divide the expected size (in MB) of the new full backup file (typically the same as previous full backup files) by the calculated effective speed.
  • For a Forever Forward Incremental merge operation, divide the size of the oldest incremental (in MB) by the calculated effective speed.
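
For example (hypothetical numbers): if the test reports 150 MB/s read and 150 MB/s write, the effective speed is (150 + 150) / 2 = 150 MB/s. A 600 GB (614,400 MB) synthetic full would then be expected to take roughly 614,400 / 150 ≈ 4,096 seconds, or about 68 minutes.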
This benchmark cannot reproduce Fast Clone behavior.

Reverse Incremental

This test simulates the I/O that occurs during an Incremental run of a backup job that uses Reverse Incremental.

diskspd.exe -c100G -b512K -w67 -r4K -Sh -d600 D:\testfile.dat
fio --name=reverse-inc-test --filename=/tmp/testfile.dat --size=100G --bs=512k --rw=randrw --rwmixwrite=67 --direct=1 --ioengine=libaio --iodepth=4 --runtime=600 --time_based
Remember: Update the path in the command to have the tool test the location where the backups are stored.
After completing the test, add the read and write speeds from the results and divide by 3. This accounts for the three I/O operations that Veeam performs for each changed block found in the source VM. Before the block in the full backup can be updated, the old block must first be read from the full backup and then written into the rollback file. After this, the full backup file can be updated with the changed data. This process reduces the effective speed to 33% of the maximum possible.
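
For example (hypothetical numbers): if the test reports 200 MB/s read and 100 MB/s write, the combined 300 MB/s divided by 3 yields an effective processing speed of 100 MB/s.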

Restore / Health Check / SureBackup

The read performance of a restore point during a restore task can vary depending on the fragmentation level of the blocks being read within the restore point. The two tests below can provide a lower-bound and upper-bound of the expected restore read speed. In theory, the actual restore speed should fall somewhere in between.

One factor contributing to fragmentation is the use of Forever Forward Incremental or Reverse Incremental. These methods can cause the Full restore point to become fragmented over time as blocks are added or replaced. To reduce fragmentation caused by these retention methods, consider using the 'Defragment and compact full backup file' option.

Remember: Update the path in the command to have the tool test the location where the backups are stored.
Max Fragmentation Read Test (lower-bound)

This test performs a random read to simulate a restore operation as if the blocks being read are fragmented within the restore point.

diskspd.exe -b512K -r4K -Sh -d600 \\nas\share\VeeamBackups\Job\Job2014-01-23T012345.vbk
fio --name=frag-read-test --filename=/VeeamBackups/JobName/VMname.vbk --bs=512k --rw=randread --ioengine=libaio --direct=1 --time_based --runtime=600s
Zero Fragmentation Read Test (upper-bound)

This test performs a sequential read to simulate a restore operation as if the blocks being read are not fragmented within the restore point.

diskspd.exe -b512K -Sh -d600 \\nas\share\VeeamBackups\Job\Job2014-01-23T012345.vbk
fio --name=seq-read-test  --filename=/VeeamBackups/JobName/VMname.vbk --bs=512k --rw=read --ioengine=libaio --direct=1 --time_based --runtime=600

Direct-SAN Disk Read Speed

This additional test is strictly for a Windows-based VMware Backup Proxy that would be engaged in backing up VMs using Direct-SAN Transport. This test can be used with Offline disks and will not overwrite data.

diskspd.exe -Sh -d600 #X

Where X is the number of the disk that you see in Disk Management.

This test lets you simulate and measure the maximum possible read speed in SAN or hot-add modes; however, it does not take any VDDK overhead into account.

Note: The target specified must be in quotes if the command is executed from a PowerShell prompt. (e.g., diskspd.exe -Sh -d600 "#2")

More information

FAQ
  • Can diskspd be used to stress-test NAS boxes for reliability? ("specified network name is no longer available" errors in Veeam)
    Unfortunately, no. If the SMB share disappears, diskspd will ignore that issue. It is better to use Wireshark.
  • I am getting extremely high I/O speed, like 4 GB/s, in any test I try, even though I have set the -Sh flag; what's going on?
    Most likely, you're running diskspd on a Hyper-V VM, testing the performance of a virtualized (.vhdx) disk, so the Hyper-V host caches the data. Run the test on the datastore where that .vhdx is located instead.