You may experience significant performance degradation or, in rare cases, failures when performing Backup or Backup Copy jobs to volumes with Microsoft Windows deduplication enabled, if the resulting backup files are larger than 1 TB.
The issue is caused by Windows deduplication limitations, which can lead to significant performance degradation when reading data from, or appending data to, very large files (more than 1 TB). Microsoft's requirements for Data Deduplication: https://msdn.microsoft.com/en-us/library/hh769303(v=vs.85).aspx
a) Avoid letting backup files grow beyond 1 TB on deduplication storage; split backup jobs where possible (reducing the number of VMs per job).
b) If the size of the VMs does not allow splitting, avoid using Windows deduplicated volumes and move the backup files to another repository without Microsoft Windows deduplication enabled.
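To illustrate workaround a), the sketch below greedily groups VMs into jobs so that each job's estimated full-backup size stays under 1 TB. The VM names and sizes are hypothetical examples, and the estimate deliberately ignores compression, deduplication ratio, and change rate, so treat it as a planning aid rather than an exact calculation:

```python
# Sketch: greedy grouping of VMs into backup jobs so that each job's
# estimated full-backup size stays under 1 TB. VM names/sizes below are
# hypothetical; real sizing should also account for compression and
# change rates.

TB = 1024 ** 4  # 1 TB limit per backup file, per the note above

def split_into_jobs(vm_sizes, limit=TB):
    """Greedily pack (name, size) pairs into jobs under `limit` bytes.

    A VM larger than `limit` on its own still gets a dedicated job; per
    workaround b), such workloads are better placed on a repository
    without Windows deduplication.
    """
    jobs = []
    # Place the largest VMs first to reduce fragmentation of the limit.
    for name, size in sorted(vm_sizes.items(), key=lambda kv: -kv[1]):
        for job in jobs:
            if job["size"] + size <= limit:
                job["vms"].append(name)
                job["size"] += size
                break
        else:
            jobs.append({"vms": [name], "size": size})
    return jobs

# Hypothetical environment: four VMs totalling ~1.5 TB.
vms = {"sql01": 600 * 1024**3, "fs01": 500 * 1024**3,
       "app01": 300 * 1024**3, "dc01": 80 * 1024**3}
for i, job in enumerate(split_into_jobs(vms), 1):
    print(f"Job {i}: {job['vms']} ({job['size'] / 1024**3:.0f} GiB)")
```

With the sample sizes above, the VMs split into two jobs, each well under the 1 TB ceiling.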
1) It is important to format the volume with a large NTFS File Record Segment (FRS) (4096 bytes instead of the default 1024), as you may otherwise run into NTFS limitation errors later. You can verify the FRS size with the following command:
fsutil fsinfo ntfsinfo <volume pathname>
Command to reformat an NTFS volume with a larger FRS (/L):
format <volume pathname> /L
2) Use the maximum NTFS allocation unit size (cluster size) of 64 KB.
Taking into account the option mentioned in pt. 1, the two commands should be combined into one:
format <volume pathname> /A:64K /L
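Once the volume is reformatted, both settings can be confirmed from the output of `fsutil fsinfo ntfsinfo`. Below is a minimal Python sketch that parses that report; the sample text mimics the relevant lines, and exact labels and spacing may differ slightly between Windows versions:

```python
# Sketch: verify cluster size and FRS size by parsing the report printed
# by `fsutil fsinfo ntfsinfo <volume pathname>`. The sample text mimics
# the relevant lines; labels/spacing may vary between Windows versions.
import re

def ntfs_setting(report, label):
    """Return the integer value following `<label> :` in the report."""
    match = re.search(rf"{re.escape(label)}\s*:\s*(\d+)", report)
    return int(match.group(1)) if match else None

# Sample output fragment (what a correctly formatted volume would show).
sample = """\
Bytes Per Sector  :                512
Bytes Per Cluster :                65536
Bytes Per FileRecord Segment    : 4096
"""

cluster = ntfs_setting(sample, "Bytes Per Cluster")
frs = ntfs_setting(sample, "Bytes Per FileRecord Segment")
print("64 KB clusters:", cluster == 65536)  # expected after /A:64K
print("Large FRS:", frs == 4096)            # expected after /L
```

A volume formatted with `/A:64K /L` should report 65536 bytes per cluster and a 4096-byte file record segment.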
3) Avoid letting files grow beyond 1 TB.
4) For backup jobs, the preferred backup method is Forward Incremental with Active Full backups enabled.
Introduction to Data Deduplication in Windows Server 2012: http://blogs.technet.com/b/filecab/archive/2012/05/21/introduction-to-data-deduplication-in-windows-server-2012.aspx
Plan to Deploy Data Deduplication: https://technet.microsoft.com/en-us/library/hh831700.aspx