One of the things I've been working on is rounding up a number of tricks to help Veeam Backup & Replication customers get the best throughput for their backup and replication jobs. The best backup proxy configuration is Direct SAN access mode, and this is the first of a series (which will be summarized when the series is done) on how to get Direct SAN mode working with iSCSI storage in vSphere environments. My preference is Direct SAN because it is generally the fastest data mover in Veeam Backup & Replication for vSphere environments, and most of my recent practice has been with iSCSI storage, so the timing is good to start with this configuration option. iSCSI is probably the easiest way to get Direct SAN access with Veeam Backup & Replication, because the block storage protocol is delivered over Ethernet. If a Veeam backup proxy has a vmnic assigned to both a storage network and a management network, it can move most of the data over the designated storage networks.

Direct SAN access is a very effective data mover

Before I go too far into the specific design elements, let's talk a bit about the Veeam proxy. It is simply a Windows service that does the data-moving work on behalf of the Veeam console, and it can exist as a physical machine, a virtual machine or, in some environments, both. The proxy service itself is stateless, so if a system is dedicated as a Veeam proxy (even a Windows Server Core installation makes a good proxy!), don't worry about any transient data. The repository is the critical part of the infrastructure and is where the backup data resides. There are a few basic components of iSCSI storage networks that the backup proxy needs to be aware of to make Direct SAN access work correctly. The first component is the iSCSI initiator. This is the identifier of the Veeam proxy on the storage network, and it is configured in Windows. The iSCSI initiator is built into Windows Server 2008 and can be added to Windows Server 2003. Each VMware ESX(i) host also has an iSCSI initiator. For Direct SAN access to work successfully, all of the iSCSI initiators need access to the iSCSI targets on the storage controller. Each VMFS volume lives on an iSCSI target that each iSCSI initiator can access; this may require configuration on the storage processor and/or the Ethernet networks. Every iSCSI target and iSCSI initiator is assigned a unique iSCSI qualified name (IQN) that represents it on the storage network. The figure below shows a generic three-host cluster with two VMFS volumes and one Veeam proxy (separated from the Veeam console) configured for Direct SAN access.

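In a layout like the one in the figure, the storage controller has to grant the proxy's initiator IQN access to the same LUNs the ESX(i) hosts see. As a minimal sketch of finding that IQN on the proxy (these are standard Windows components; the example host name in the comment is hypothetical):

    REM Make sure the Microsoft iSCSI Initiator service is running and starts automatically
    sc config MSiSCSI start= auto
    net start MSiSCSI

    REM Running iscsicli with no arguments prints the initiator node name (IQN), for example
    REM iqn.1991-05.com.microsoft:veeamproxy01, then waits for a command (Ctrl+C to exit).
    REM Grant this IQN access to the VMFS LUNs on the storage controller, alongside the
    REM IQNs of the ESX(i) hosts.
    iscsicli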

In the example above, the Veeam proxies are installed on virtual machines; if a physical machine were used instead, the backup CPU and I/O traffic would be kept off the vSphere cluster during backup and replication jobs. There is one Veeam proxy installed by default with the Veeam Backup & Replication installation. This one is referred to as the “VMware Backup Proxy” and is the only proxy in use if no additional proxies are added. If more proxy resources are needed (additional CPU resources, network placement, etc.), any Windows Server system can easily be added as a proxy. The proxy is a stateless service that performs the data mover tasks associated with both backup and replication jobs. Further, each proxy has a setting for which transport mode it should operate in: Direct SAN access, Virtual Appliance or Network mode, or an Automatic option that will choose the best available. If you want granular control of proxy behavior, you can explicitly choose how each proxy will operate, as shown in the figure below.


Strictly speaking, in terms of the storage network, the IQNs of each component (the ESX(i) hosts, the VMFS volumes and the Veeam proxies) are on the same network. In this way, the sequence of the Veeam backup task allows the Veeam proxy to communicate directly with the VMFS volume. This part of the backup job occurs after the virtual machine has been sent the VMware snapshot command, so the proxy can read the .VMDK data (and its changed block tracking information) in an application-consistent, read-only state. The figure below is a representation of the storage network.


In this fashion, the proxy can communicate directly with the VMFS volume to move the data. The iSCSI initiator in the Veeam proxy needs to be configured with the iSCSI targets (usually just the IP address of the SAN controller), much like the configuration on the VMware ESX(i) hosts. At this point, the Veeam proxy can communicate directly with the VMFS volumes for the backup (and replication) process.
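
As a rough sketch of that initiator-side setup (the portal address and target IQN below are placeholders, and the same steps can be done through the iSCSI Initiator control panel, iscsicpl.exe):

    REM Point the proxy's iSCSI initiator at the SAN controller's portal (placeholder address)
    iscsicli QAddTargetPortal 192.168.200.10

    REM List the targets the controller presents to this initiator
    iscsicli ListTargets

    REM Log in to a discovered target (placeholder IQN); repeat for each target backing a VMFS LUN
    iscsicli QLoginTarget iqn.2000-01.com.example:vmfs-lun01

    REM Confirm the LUNs show up as disks, but leave the volumes offline and unmounted;
    REM Veeam reads the VMFS data through the vStorage APIs, not through a Windows mount
    echo list disk | diskpart

Note that QLoginTarget creates a non-persistent session; the iSCSI Initiator control panel can add the targets to the favorite targets list so the logins survive a reboot of the proxy.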

Additional Resources

A number of additional resources exist that showcase specific configuration examples of Veeam Backup & Replication v6 using Direct SAN access mode for backup and replication jobs. Check out these popular Veeam Forum and blog posts for more information:

What tips and tricks have you used for iSCSI storage and Direct SAN access mode? Share your comments below.


  • makotoclimb

    Very useful topics!

  • http://www.rickvanover.com Rick Vanover

    Thank you, Makoto!

  • Arran

    Hi Rick,
    Surely mounting the VMFS LUNs on the Veeam server with R/W access would cause some kind of corruption? Most of the arrays we work with unfortunately do not support setting read-only LUN access for individual initiators.

    Thanks

  • http://www.veeam.com/blog Rick.Vanover

    Hi Arran: Veeam disables automount on install, and accesses the disks via vSphere APIs such as VDDK. This is a supported use of the vSphere API.

  • Boon Hong

    If the proxy is physical, are there any advantages or disadvantages to moving the Veeam B&R server onto that same server, compared with running the Veeam B&R server as a VM?

    By the way, isn't a local drive the fastest possible option, since the additional Ethernet layer of iSCSI is also removed?

  • http://www.rickvanover.com Rick Vanover

    I like having the Veeam console as a virtual machine, and a physical proxy for LAN-free backups (FC) and for offloading the CPU burden of backups from the vSphere cluster.

  • Boon Hong

    Since the proxy server is a physical Windows Server instead of a VM running on top of an ESXi host, how does it access the disks through the vSphere API? Especially when it also has a direct Microsoft iSCSI connection straight into the SAN volume?

    This also brings up another question: how does the proxy read the snapshot and copy the files, since Windows does not handle VMFS volumes at all? Or is the Veeam B&R server acting as the middleman between the vSphere API and the proxy server to make this possible?

  • http://www.rickvanover.com Rick Vanover

    For iSCSI, physical or virtual doesn’t matter.

    The vStorage APIs for Data Protection allow Veeam to read the VMFS volume from a Windows system. Veeam reads the VMDK, not the snapshot.

  • http://dailyVMTech.wordpress.com Hussain Al Sayed

    Hello,
    Attached is my network diagram of the Veeam backup infrastructure:
    http://i45.tinypic.com/dp8kg9.jpg

    The Veeam Backup Enterprise server is connected with two pNICs, one on the vSphere management network and one on the production network, since this server is joined to the domain.

    The proxy servers are each connected with three vNICs, one on the vSphere management network and two on the iSCSI network.

    The Symantec / Veeam repository server is connected with two interfaces, one on the vSphere management network and one on the iSCSI network to access the repository LUNs via the iSCSI initiator.

    The proxies are added to the Veeam Enterprise server via the vSphere management network; to get the best performance, is it possible to add the proxies via the iSCSI interface instead, provided the Veeam Enterprise server is able to reach the iSCSI network?

    The repository is also added to the Veeam Enterprise server via the vSphere management network; again, to get the best performance, is it possible to add the repository server via the iSCSI interface, provided the Veeam Enterprise server is able to reach the iSCSI network?

    My assumption is that doing so would increase backup performance and send all of the backup communication over the iSCSI network. Or is the current setup and configuration already correct?

    Please advise.

  • http://dailyVMTech.wordpress.com Hussain Al Sayed

    No available proxies are running on ESX(i) management interface subnet. Using proxy from a different subnet, performance may be impacted.

  • Ratti3

    Just installed VBR 6.1 on 2008 R2 and automount did not get disabled automatically. Good thing I double-checked!

    • Edgar Morillo

      Veeam Backup & Replication versions 5.x – 6.1 disable automount in Microsoft Windows Server automatically during installation, hence you do not need to do it manually. Veeam Backup & Replication version 6.5 disables automount on Windows versions 2003 and earlier, or will update the SAN policy to “Offline Shared” on Windows Server versions 2008 or newer.

      http://www.veeam.com/kb1446
      https://technet.microsoft.com/en-us/library/Gg252636.aspx
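
      On either version, the resulting state is easy to verify from an elevated command prompt before the proxy's initiator is connected to any VMFS LUNs. A minimal check with diskpart (the san command only exists on Windows Server 2008 and newer):

          REM Display the current automount setting and SAN policy
          echo automount | diskpart
          echo san | diskpart

          REM If needed, set them by hand to match what the installer is expected to do
          echo automount disable | diskpart
          echo san policy=OfflineShared | diskpart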

  • BarnacleBill

    Hi Rick,

    Will direct SAN access work if I’m using an iSCSI cache? I.e., ESX initiator maps iSCSI cache (target), iSCSI cache (initiator) maps SAN array (target). I want my backups to access the SAN array directly, not the cache. Is this possible?

    Thanks.

  • http://www.facebook.com/people/Rick-Vanover/100000762051069 Rick Vanover

    @BarnacleBill: This scenario (while I don’t entirely understand it) would not be Direct SAN access. Direct SAN access would be the iSCSI initiator of the guest VM talking to the iSCSI target (which is VMFS…) directly.

  • http://www.facebook.com/billy.pumphrey Billy Pumphrey

    I am trying to get Direct SAN access configured. My SAN system (192.168.1.200.x/24) is on a different subnet. I have no router on it, since VMware talks to it with the other NICs, etc. My Veeam server (10.1.1.x/16) is a virtual server.
    How do I get the Veeam server talking to the SAN? Add another NIC to the Veeam VM? Put a router in there?

    Thanks for your help.

  • http://www.facebook.com/people/Rick-Vanover/100000762051069 Rick Vanover

    The iSCSI Initiator on the Veeam proxy just needs to get to the iSCSI Target – so a route is all that is needed!
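
    For the virtual-proxy case described above, giving the Veeam VM a second vNIC on the iSCSI port group makes the SAN subnet directly connected and needs no route at all. If there is a gateway that can reach the storage network, a persistent static route on the proxy works too; a small sketch with placeholder addresses (not the subnets from the question):

        REM Persistent route from the Veeam proxy to the SAN subnet via a reachable gateway
        route -p add 192.168.200.0 mask 255.255.255.0 10.1.1.254

        REM Verify the iSCSI portal answers before configuring the initiator
        ping 192.168.200.10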

  • lalit kumar

    Nice post, and thanks for sharing.
    See: http://networkexpert.co/
