When running a SureBackup Job, you receive this error in the summary: "Error: Mount with leaseId 'xxx-xxx-xxx-xxx-xxx-xxx' already activated."
When the performance of the vPower NFS datastore is slower than expected, the SureBackup job fails because the mount lease timeout expires. This timeout controls how long Veeam Backup & Replication waits for the NFS mount operation to complete.
SureBackup fails with this message when certain operations time out. Veeam Backup & Replication applies timeouts to most operations to protect against hangs. However, even when no process is hung, a timeout can occur because of significant performance problems or an unusual use case.
Typically, this error occurs due to slow performance of the vPower NFS datastore. Possible causes of the slow performance include:
- Slow repository read performance, especially due to deduplication storage "rehydrating" deduped/compressed backup data.
- A slow network link between the ESXi host and the vPower NFS server, caused by congestion, a NIC negotiating at 100 Mb/s, or other infrastructure issues.
- Poor performance of the vPower NFS server.
Open the Registry Editor (regedit.exe) with administrative privileges on the mount server and create the following values if they do not already exist. The mount server name can be found in the settings of the backup repository that stores the verified VM. Increase the timeouts to 2x or 3x the default value. Although you can increase the timeouts beyond these values, it is usually better to investigate the performance problem first. Make sure no jobs or restores are running, then restart the Veeam Backup Service to apply the changes.
MountLeaseTimeOut
Key: HKLM\SOFTWARE\Veeam\Veeam Backup and Replication\SureBackup
Value name: MountLeaseTimeOut
Type: DWORD
Default value: 600 (seconds)
Suggested value: 1800

remotingTimeout
Key: HKLM\SOFTWARE\Veeam\Veeam Backup and Replication
Value name: remotingTimeout
Type: DWORD
Default value: 900 (seconds)
Suggested value: 1800
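If you prefer to script the change, the same values can be created from an elevated command prompt with reg.exe. This is a sketch of the commands implied by the key/value descriptions above; the 1800-second values are the suggested starting point, not a requirement:

```shell
:: Run from an elevated command prompt on the mount server.
:: Increase the SureBackup mount lease timeout to 1800 seconds (default 600).
reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication\SureBackup" /v MountLeaseTimeOut /t REG_DWORD /d 1800 /f

:: Increase the general remoting timeout to 1800 seconds (default 900).
reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v remotingTimeout /t REG_DWORD /d 1800 /f
```

As with the manual edit, restart the Veeam Backup Service (with no jobs or restores running) for the new values to take effect.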
The most common cause of this error is slow read performance from the backup repository. Deduplicating storage is not recommended as a back end for SureBackup. If you must verify backups stored on deduplicating storage, it may be necessary to increase the above timeouts to several hours, especially if any VM disks are large (multi-TB). Where possible, optimize storage devices for random read I/O of large blocks (typically 256 KB – 512 KB with default settings, or 4 MB for backups on deduplicating storage; your use case may vary). A simple benchmark is described in KB2014.
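KB2014 remains the supported benchmark procedure. As a rough, hypothetical illustration of what such a test measures, the sketch below reads blocks at random offsets from a file and reports throughput; the default 512 KB block size mirrors the block sizes mentioned above:

```python
import os
import random
import time

def random_read_throughput(path, block_size=512 * 1024, reads=200):
    """Read `reads` random blocks of `block_size` bytes from `path`
    and return the observed throughput in MB/s."""
    size = os.path.getsize(path)
    last_offset = max(size - block_size, 0)
    read_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        for _ in range(reads):
            # Seek to a random block-aligned-ish offset and read one block.
            f.seek(random.randint(0, last_offset))
            read_bytes += len(f.read(block_size))
    elapsed = time.perf_counter() - start
    return read_bytes / elapsed / (1024 * 1024)
```

For example, `random_read_throughput(r"D:\Backups\job.vbk")` (path is a placeholder) gives a quick MB/s figure for the repository. Note that the operating system's file cache inflates results for files smaller than RAM; test against a file larger than memory for a meaningful number.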
As a workaround, or to verify that storage performance is the cause of the timeout, try temporarily moving the backup files to faster storage.
Additional performance troubleshooting:
- If performing verification of offsite backups, make sure the virtual lab and vPower NFS server are located in the same site as the repository.
- Depending on the underlying infrastructure, there can be significant performance differences between running the vPower NFS service on a VM and on a physical machine. For example, try using a VM located on the same ESXi host as the virtual lab.
- Heavily fragmented full backup files can reduce restore performance. Schedule compact operations to reduce fragmentation.
- Where applicable, test throughput of the network connections between the repository and vPower NFS server, and between the vPower NFS server and the ESXi host.
- Investigate CPU and memory usage of the repository and vPower servers.
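For the network throughput checks above, a generic tool such as iperf3 (not Veeam-specific, and assumed to be installed on both machines) can provide a quick baseline between, for example, the repository and the vPower NFS server:

```shell
# On the vPower NFS server: start an iperf3 server (receiver).
iperf3 -s

# On the repository server: run a 30-second test against it.
# vpower-nfs.example.local is a placeholder host name.
iperf3 -c vpower-nfs.example.local -t 30
```

Compare the reported bandwidth with the expected link speed; a result near 100 Mb/s on a gigabit link suggests a NIC negotiation or cabling problem.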
See also:
- Overview of vPower NFS Service
- Configuring vPower server
The remotingTimeout setting affects all processes and services communicating with the Veeam Backup Service. It can cause failures of any vPower NFS mount. In some cases, communication failures will be retried, so an operation may not fail until this timeout has occurred multiple times.
Note that, from a networking and vSphere configuration perspective, there is little difference between a vPower NFS datastore and any other NFS datastore.
VMware Technical Paper: Best Practices for Running vSphere on NFS Storage