During the snapshot removal step of a Veeam job, the source VM loses connectivity temporarily.
Veeam does not remove the snapshot itself; it sends an API call to VMware, which performs the removal.
The snapshot removal process significantly lowers the total IOPS the VM can deliver, both because of additional locks on the VMFS storage caused by the increase in metadata updates, and because of the added I/O load of the snapshot removal process itself. In most environments, if the target storage is already at 30-40% of its IOPS capacity, which is not uncommon with a busy SQL or Exchange server, the snapshot removal process will easily push that past 80%, and often much higher. Most storage arrays incur a significant latency penalty once IOPS utilization passes the 80% mark, which is of course detrimental to application performance.
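To make the headroom math above concrete, here is a small illustrative calculation. The array rating, 35% baseline load, and 2.5x overhead multiplier are assumptions chosen for the example, not measured values:

```python
# Illustrative only: all numbers below are assumptions, not measurements.
array_max_iops = 10_000      # rated IOPS of the target storage array
baseline_load = 0.35         # e.g. a busy SQL/Exchange VM at 35% load
overhead_multiplier = 2.5    # assumed extra I/O work during snapshot consolidation

baseline_iops = array_max_iops * baseline_load          # 3,500 IOPS
during_removal = baseline_iops * overhead_multiplier    # 8,750 IOPS
load_pct = during_removal / array_max_iops * 100
print(f"Load during snapshot removal: {load_pct:.0f}%")  # prints 88% -- past the latency knee
```

With these example numbers, a VM that was comfortably inside its budget is pushed well past the point where most arrays start adding latency.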
The following test should be performed at a time when brief connectivity loss to the VM is acceptable, for instance during off-peak hours.
To isolate this issue to the specific VMware snapshot removal event, Veeam suggests the following isolation test:
1. Create a snapshot on the VM in question.
2. Leave the snapshot on the VM for the same duration as a typical Veeam job run against that VM.
3. Remove the snapshot.
4. Observe the VM during the snapshot removal.
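The observation in step 4 can be scripted. The sketch below is a minimal Python example (the host name is a placeholder, and the `ping` flags assume a Linux machine): it samples reachability once per second and reports any window where replies stopped, which is the connectivity loss this article describes.

```python
import subprocess
import time

def detect_outages(samples, min_len=3):
    """Given a list of (timestamp, ok) ping samples, return (start, end)
    windows covering at least min_len consecutive failed samples."""
    outages, start, run = [], None, 0
    for ts, ok in samples:
        if ok:
            if start is not None and run >= min_len:
                outages.append((start, ts))
            start, run = None, 0
        else:
            if start is None:
                start = ts
            run += 1
    if start is not None and run >= min_len:            # outage still open at the end
        outages.append((start, samples[-1][0]))
    return outages

def ping_once(host):
    # '-c 1 -W 1' is Linux ping syntax; on Windows use '-n 1 -w 1000' instead.
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def monitor(host, seconds=300):
    """Sample the host once per second, then report outage windows."""
    samples = []
    for _ in range(seconds):
        samples.append((time.time(), ping_once(host)))
        time.sleep(1)
    return detect_outages(samples)

# Example usage (placeholder host name) while the snapshot is being removed:
#   for start, end in monitor("vm-under-test.example.com"):
#       print(f"Lost connectivity for {end - start:.0f}s starting at {start:.0f}")
```

Run it just before step 3 so the monitoring window covers the whole snapshot removal.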
If while performing the test above you observe the same connectivity issues as during the Veeam job run, the issue very likely exists within the VMware environment itself. Please review the following list of troubleshooting steps and known issues. If none of them resolve the issue, we advise contacting VMware support directly regarding snapshot removal.
Common Troubleshooting Tasks
- Check the VM for snapshots while no job is running and remove any that are found.
- Check for orphaned snapshots on the VM. (See: http://kb.vmware.com/kb/1005049)
- Reduce the number of concurrent tasks within Veeam; this will in turn reduce the number of active snapshot tasks on the datastores.
- Move the VM to a datastore with more available IOPS, or split the VM's disks across multiple datastores to spread the load more evenly.
- If the VM's CPU usage spikes heavily during snapshot consolidation, consider increasing the CPU reservation for that VM.
- Ensure you are on the latest build of your current version of vSphere, hypervisors, VMware Tools and SAN firmware when applicable.
- Move the VM to a host with more available resources.
- If possible, schedule the VM's backup or replication for the time of day with the least storage activity.
- Use the workingDir VMX parameter to redirect snapshots to a different datastore than the one the VM resides on: http://kb.vmware.com/kb/1002929
- Disable VMware Tools Sync driver on the VM: http://kb.vmware.com/kb/1009886
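As a sketch of the workingDir change from the list above, the .vmx entry takes the form below. The datastore and folder names are placeholders; the change is typically made while the VM is powered off, and the linked VMware KB (1002929) describes the supported procedure.

```
workingDir = "/vmfs/volumes/<alternate-datastore>/<vm-folder>/"
```

Snapshot delta disks are then created on the alternate datastore, keeping the consolidation I/O off the datastore that holds the VM's base disks.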
For environments with NFS Datastores
At the time of writing, there is a known issue with NFS datastores and the Virtual Appliance (Hot-Add) transport mode. The issue is documented in this VMware KB article: http://kb.vmware.com/kb/2010953
Veeam advises that if this issue occurs, one of the following three workarounds can be applied:
1. Switch to Direct NFS access mode if you are running Veeam Backup & Replication version 9 or later.
2. Switch your proxies to the Network transport mode:
   a. Open the settings of the backup proxy.
   b. Click the [Choose] button next to "Transport mode".
   c. Select the radio option for "Network" mode.
3. Force proxy selection with a registry value:
   a. On the server where the Veeam Backup & Replication console is installed, open 'Registry Editor'.
   b. Navigate to the key:
   c. Set one of the following values:
      value: 1 (if a proxy on the same host as the VM is unavailable, Veeam Backup & Replication will fail over to a proxy on a different host and use any available transport mode, which may cause stun)
      value: 2 (if a proxy on the same host as the VM is unavailable, Veeam Backup & Replication will use an available proxy on a different host but force it to use network transport mode so that no stun occurs; this may be preferable when stun is not tolerable)
Note: Both values 1 and 2 enable SameHostHotaddMode, which forces Veeam Backup & Replication to first attempt to use a proxy that is on the same host as the VM being backed up.