SureBackup: Error: Mount with leaseId already activated

KB ID: 1237
Products: Veeam Backup & Replication
Version: 6.x, 7.x, 8.x, 9.x
Last Modified: 2016-09-28

Challenge

When running a SureBackup Job, you receive this error in the summary: "Error: Mount with leaseId 'xxx-xxx-xxx-xxx-xxx-xxx' already activated."

When the performance of the vPower NFS datastore is slower than expected, the SureBackup job fails because the mount lease timeout expires. This timeout controls how long Veeam Backup & Replication waits for the NFS mount operation to complete.

Cause

SureBackup will fail with this message if certain operations time out. Veeam Backup & Replication enforces timeouts on most operations to protect against hangs. However, even when no process is hung, a timeout may be reached due to significant performance problems or an unusual use case.
Typically this error occurs due to slow performance of the vPower NFS datastore. Possible causes of the slow performance include:

  • Slow repository read performance, especially due to deduplication storage "rehydrating" deduped/compressed backup data.
  • Slow network link between host and vPower server due to congestion, setting of 100Mb/s on a NIC, or other infrastructure issues.
  • Poor performance of the vPower NFS server.

Solution

Increase Timeouts

Open Regedit on the Veeam Backup Server and create the following keys and values if they do not already exist. Increase the timeouts to 2x or 3x the default value. Although you can increase the timeouts further, it is usually better to investigate performance first. Make sure no jobs or restores are running, then restart the Veeam Backup Service to apply the changes.

MountLeaseTimeOut:

Key: HKLM\SOFTWARE\Veeam\Veeam Backup and Replication\SureBackup
DWORD: MountLeaseTimeOut
Default value: 600 (seconds)
Suggested value: 1800

remotingTimeout:

Key: HKLM\SOFTWARE\Veeam\Veeam Backup and Replication
DWORD: remotingTimeout
Default value: 900 (seconds)
Suggested value: 1800
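For reference, the two suggested values above can also be captured in a .reg file (a sketch of the same settings; verify the key paths match your installation before importing, and restart the Veeam Backup Service afterwards). Note that 1800 decimal is 0x708 in the hexadecimal notation .reg files use:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication\SureBackup]
"MountLeaseTimeOut"=dword:00000708

[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication]
"remotingTimeout"=dword:00000708
```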


Improve Performance

The most common cause of this error is slow read performance from the backup repository. Deduplicating storage is not recommended as a back end for SureBackup. If you must verify backups from deduplicating storage, it may be necessary to increase the above timeouts to several hours, especially if there are any large (multi-TB) VM disks. Where possible, optimize storage devices for random read I/O of large blocks (typically 256 KB–512 KB with default settings, or 4 MB for backups on deduplicating storage; your use case may vary). A simple benchmark is described in KB2014.
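To get a feel for why deduplicating storage can require multi-hour timeouts, the arithmetic can be sketched as follows. This is an illustrative, deliberately pessimistic estimate (it assumes the full disk must be read, which real mounts do not require), not a Veeam tool; the read rate and disk size are example figures:

```python
def read_time_seconds(disk_size_gb, read_mb_per_s):
    """Seconds needed to read disk_size_gb of backup data at read_mb_per_s."""
    return disk_size_gb * 1024 / read_mb_per_s

# Suggested MountLeaseTimeOut value from this article, in seconds.
MOUNT_LEASE_TIMEOUT = 1800

# Example: a 2 TB disk rehydrated from deduplicating storage at 50 MB/s.
t = read_time_seconds(2048, 50)
print(f"Estimated read time: {t / 3600:.1f} hours")
if t > MOUNT_LEASE_TIMEOUT:
    print("Exceeds the mount lease timeout; raise it or improve storage performance")
```

At 50 MB/s a 2 TB rehydration takes well over eleven hours, which is why simply raising the timeout is often less effective than moving the backup files to faster storage.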

As a workaround, or to verify that storage performance is the cause of the timeout, try temporarily moving the backup files to faster storage.

Additional performance troubleshooting:

  • If performing verification of offsite backups, make sure the virtual lab and vPower NFS server are located in the same site as the repository.
  • Depending on the underlying infrastructure, there can be significant performance differences between running the vPower NFS service from a VM or from a physical machine. For example, try using a VM located on the same ESXi host as the virtual lab.
  • Heavily fragmented full backup files can reduce restore performance. Schedule compact operations to reduce fragmentation.
  • Where applicable, test throughput of the network connections between the repository and vPower NFS server, and between the vPower NFS server and the ESXi host.
  • Investigate CPU and memory usage of the repository and vPower servers.
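As a quick sanity check on repository read speed (a minimal sketch, not the KB2014 procedure), you can time a sequential read of a backup file. The path shown is hypothetical; use a real .vbk on your repository, prefer files larger than the server's RAM, and note that filesystem caching can inflate results on repeat runs:

```python
import time

def measure_read_mbps(path, block_size=512 * 1024):
    """Sequentially read a file in 512 KB blocks (a typical Veeam block size
    per this article) and return throughput in MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.monotonic() - start
    return total / (1024 * 1024) / max(elapsed, 1e-9)

# Example (hypothetical path):
# print(measure_read_mbps(r"D:\Backups\SureBackupJob\backup.vbk"))
```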

More Information

User Guide:
Overview of vPower NFS Service
Configuring vPower server
The remotingTimeout setting affects all processes and services communicating with the Veeam Backup Service. It can cause failures of any vPower NFS mount. In some cases, communication failures will be retried, so an operation may not fail until this timeout has occurred multiple times.
Consider that from a networking and vSphere configuration perspective there is little difference between vPower and any other NFS datastore.
VMware Technical Paper:
Best Practices for Running vSphere on NFS Storage
