One of the things I’ve been working on is rounding up a number of tricks to help Veeam Backup & Replication customers get the best throughput for their backup and replication jobs. The best-performing backup proxy configuration is Direct SAN access mode. This post is the first in a series (which will be summarized when the series is done) on how to get Direct SAN access mode working with iSCSI storage in vSphere environments. Direct SAN access is my preference because it is generally the fastest data mover in Veeam Backup & Replication for vSphere environments, and since most of my recent practice has been with iSCSI storage, the timing is good to start with this configuration option. iSCSI is probably the easiest way to get Direct SAN access working with Veeam Backup & Replication, because the block storage protocol is delivered over Ethernet: a Veeam backup proxy with vmnics assigned to both a storage network and a management network can move most of the data over the designated storage network.

Direct SAN access is a very effective data mover

Before I go too far on the specific design elements, let’s talk a bit about the Veeam proxy. It is simply a Windows service that does the work of the Veeam console, and it can exist easily as a physical machine, a virtual machine or, in some environments, both. The proxy service itself is stateless, so if a system is dedicated as a Veeam proxy (even a Windows Server Core installation makes a good proxy!), don’t worry about any transient data. The repository is the critical part of the infrastructure and is where the backup data resides.

There are a few basic components of iSCSI storage networks that the backup proxy needs to be aware of for Direct SAN access to work correctly. The first component is the iSCSI initiator. This is the identifier of the Veeam proxy on the storage network, and it is configured in Windows. The iSCSI initiator is built into Windows Server 2008 and can be added to Windows Server 2003. Each VMware ESX(i) host also has an iSCSI initiator. For Direct SAN access to work successfully, all of the iSCSI initiators need access to the iSCSI targets on the storage controller; each VMFS volume is an iSCSI target that each iSCSI initiator can access. This may require configuration on the storage processor and/or the Ethernet networks. Every iSCSI target and iSCSI initiator is assigned a unique iSCSI qualified name (IQN) that represents it on the storage network. The figure below shows a generic three-host cluster with two VMFS volumes and one Veeam proxy (separated from the Veeam console) configured for Direct SAN access.

Figure: a generic three-host cluster with two VMFS volumes and one Veeam proxy (separated from the Veeam console)
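On the Windows proxy, the local IQN can be confirmed with the built-in iscsicli utility; a quick sketch (the hostname shown in the comment is illustrative):

```shell
rem Launch the Microsoft iSCSI command-line interface; its banner prints the
rem initiator node name (IQN) that this proxy presents on the storage network,
rem e.g. iqn.1991-05.com.microsoft:veeamproxy01.example.local (hostname is
rem illustrative). Type "exit" to leave the interactive prompt.
iscsicli
```

The same node name is shown in the iSCSI Initiator control panel applet, and it is what most storage processors will expect you to supply when granting the proxy access to the VMFS targets.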

In the example above, the Veeam proxies are installed on virtual machines; if a physical machine were used instead, the backup CPU and I/O traffic would be kept off of the vSphere cluster during the backup and replication jobs. One Veeam proxy is installed by default with the Veeam Backup & Replication installation. This one is referred to as the “VMware Backup Proxy” and is the only proxy in use unless additional proxies are added. If more proxy resources are needed (additional CPU resources, network placement, etc.), any Windows Server system can easily be added as a proxy. The proxy is a stateless service that performs the data mover tasks associated with both backup and replication jobs. Further, each proxy has a configuration for which transport mode it should operate in: Direct SAN access, Virtual Appliance or Network mode, or an Automatic option that will choose the best of the three. If you want granular control of the proxy behavior, you can explicitly choose how each proxy will perform, as shown in the figure below.

Figure: selecting a proxy transport mode (Direct SAN access, Virtual Appliance, Network, or Automatic)

Strictly speaking in terms of the storage network, the IQNs of each component (the ESX(i) hosts, the VMFS volume targets and the Veeam proxies) are on the same network. In this way, the sequence of the Veeam backup task allows the Veeam proxy to communicate directly with the VMFS volume. This part of the backup job occurs after the virtual machine has been sent the VMware snapshot command; the proxy can then read the .VMDK data (and its changed block tracking information) in an application-consistent, read-only state. The figure below is a representation of the storage network.

Figure: a representation of the storage network

In this fashion, the proxy can communicate directly with the VMFS volume to move the data. The iSCSI initiator on the Veeam proxy needs to be configured with the iSCSI targets (usually just the IP address of the SAN controller), much like the initiators of the VMware ESX(i) hosts are. At this point, the Veeam proxy can communicate directly with the VMFS volumes for the backup (and replication) process.
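As a sketch of that initiator configuration from the proxy's command line, using the built-in iscsicli utility (the portal address and target IQN below are placeholders; substitute your own storage processor's iSCSI interface and VMFS LUN target):

```shell
rem Register the SAN controller as a target portal (10.0.10.5 is an example
rem address; use your storage processor's iSCSI interface)
iscsicli QAddTargetPortal 10.0.10.5

rem Discover the iSCSI targets the controller now presents to this initiator
iscsicli ListTargets

rem Log in to a discovered target so its LUN (the VMFS volume) becomes
rem visible to the Veeam proxy (example IQN shown)
iscsicli QLoginTarget iqn.1992-04.com.example:storage.vmfs-lun1
```

The same result can be reached through the iSCSI Initiator control panel applet; the key point is that the proxy logs in to the same targets the ESX(i) hosts use, while Veeam's disabling of Windows automount at install keeps Windows from ever mounting the VMFS LUNs.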

Additional Resources

A number of additional resources exist to help showcase some specific configuration examples of Veeam Backup & Replication v6 and using Direct SAN access mode for backup and replication jobs. Check out these popular Veeam Forum and Blog posts for more information:

What tips and tricks have you used for iSCSI storage and Direct SAN access mode? Share your comments below.

Direct SAN access tips for iSCSI VMFS volumes and backup proxies


    • makotoclimb

      Very useful topics!

    • Rick Vanover (http://www.rickvanover.com)

      Thank you, Makoto!

    • Arran

      Hi Rick,
      Surely mounting the VMFS LUNs to the Veeam server with R/W access would cause some kind of corruption? Most of the arrays we work with unfortunately do not support setting read-only LUN access for some initiators.

      Thanks

    • Rick Vanover (http://www.veeam.com/blog)

      Hi Arran: Veeam disables automount on install, and accesses the disks via vSphere APIs such as VDDK. This is a supported use of the vSphere API.
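For anyone who wants to verify that automount behavior on their own proxy, the setting can be checked with a diskpart script; a minimal sketch (the script filename is just an example):

```shell
rem Build a one-line diskpart script (example filename) that disables
rem automatic mounting of new basic volumes
echo automount disable > check-automount.txt

rem Run it non-interactively; diskpart reports that automatic mounting of
rem new volumes is disabled, meaning Windows will not mount (or offer to
rem initialize) the VMFS LUNs it sees through the iSCSI initiator
diskpart /s check-automount.txt
```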


    Author: Rick Vanover
    Rick Vanover (MVP, vExpert, Cisco Champion) is the director of Technical Product Marketing & Evangelism for Veeam Software, based in Columbus, Ohio. Rick’s IT experience includes system administration and IT management, with virtualization being the central theme of his career in recent years.
    Follow Rick on Twitter... 

    Published: March 16, 2012