Last week, when I outlined a few considerations about whether to install Veeam Backup & Replication as a virtual or physical machine, a follow-up conversation reminded me of an important configuration scenario. If Veeam Backup & Replication is installed on a virtual machine and the production storage for vSphere is presented by an iSCSI storage processor, you can configure the iSCSI initiator within the guest virtual machine. This enables Veeam Backup & Replication to access the production storage for vSphere directly.
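
To make this concrete, here is a minimal sketch using the built-in Microsoft iSCSI initiator from an elevated command prompt. The portal address and target IQN below are placeholders for your environment:

    rem Make sure the Microsoft iSCSI initiator service is running
    sc config msiscsi start= auto
    net start msiscsi

    rem Point the initiator at the storage processor's portal (example address)
    iscsicli QAddTargetPortal 10.140.0.10

    rem List the targets the portal advertises, then log in to the desired IQN
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1992-04.com.example:storage.lun0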

Once you configure the iSCSI initiator, you will see your production VMFS LUNs appear in the Disk Management snap-in of the Veeam Backup & Replication server. Be sure to check out this post from Justin’s IT Blog on the topic. This is shown in the figure below:

[Figure: production VMFS LUNs visible in the Disk Management snap-in of the Veeam Backup & Replication server]

This requires that zoning on the iSCSI target include the iSCSI qualified name (IQN) of the Veeam Backup & Replication server for the storage provisioned to the vSphere (or VI3) environment. In this example, the storage is a 2 TB LUN formatted with VMFS. While the drive path is visible within Windows, do not try to initialize or format these LUNs within the Disk Management snap-in, as this could corrupt or overwrite data stored on the VMFS LUNs. Further, note that the Veeam Backup & Replication v5 (and later) setup automatically disables the automount feature of Windows. Automount automatically mounts and assigns drive letters to newly connected volumes. If a VMFS datastore is presented to the Veeam Backup & Replication server with automount enabled, the operating system may initialize and re-signature the volume, making it unrecognizable to the ESX(i) hosts.
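
Since this protection matters, it is worth verifying the automount state yourself. As a sketch, from an elevated command prompt:

    rem Report the current automount setting (runs the single diskpart
    rem command "automount"; a safely configured server reports it disabled)
    echo automount> dp.txt
    diskpart /s dp.txt

    rem "mountvol /N" disables automounting of new volumes, equivalent to
    rem diskpart's "automount disable"; "mountvol /E" re-enables it
    mountvol /N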

Having these LUNs visible within Disk Management confirms that all of the required LUNs are available to Veeam Backup & Replication, including the target and LUN IDs as presented by the storage processor. Conversely, if not all VMFS LUNs are visible, there may be a zoning issue.

Direct SAN access processing mode allows Veeam Backup & Replication to communicate directly with the storage for the highest backup job performance. Further, if the backup target supports iSCSI or Fibre Channel, direct SAN access mode also enables a completely LAN-free backup implementation.

In the case of iSCSI storage, this usually means that the virtual machine running Veeam Backup & Replication will have a presence on at least two networks (via multiple vNICs), since most iSCSI storage networks are separated from other networks (including production networks), at least at the VLAN level. Should Fibre Channel storage be used as the backup target, NPIV can be leveraged to connect a virtual machine running Veeam Backup & Replication directly to the SAN fabric.

What configuration practices have you done with Veeam Backup & Replication as a virtual machine and iSCSI storage networks? Share your comments below.

  • Hussain

    Hello,
    In my environment, I have Veeam installed on a VM and have followed this practice, but I still don’t see better performance when backing up or restoring a VM. The speed is still 10–15 MB/s.

    What other issues could be slowing down my backup?

    Thanks,

  • Rick Vanover

    Hi Hussain:

    Configuring the iSCSI initiator within a VM ensures that Direct SAN mode will work.

    As for your concern about the rate: are the production storage and backup storage on the same SAN/storage processor?

  • Hussain

    Hi Rick,

    This is a test environment.

    The storage is hosted on a Windows 2003 StarWind server; both the SAN server and the ESX servers are connected to the same physical switches, with a VLAN separating the traffic. The Veeam VM is installed on the same ESX host that the iSCSI initiator targets.

    My configuration:

    ESX01 server connected to a Dell 2650 running StarWind iSCSI.
    Veeam VM running on the same ESX01 server.
    Backup LUN .vmdk added from the StarWind server via ESX01.
    iSCSI initiator configured on the Veeam VM.
    Backups run against VMs on the ESX01 server.
    Backup job tried as both Direct SAN and Virtual Appliance; both are the same speed. 🙁

    Thanks,

  • Rick Vanover

    I don’t have a specific baseline for how a configuration like that would perform, as other factors come into play, such as SCSI/SAS/SATA, the number of drives, and the makeup of the VM.

    Out of curiosity, is StarWind installed as a VM or directly on the Dell 2650?

  • Wesley

    Hi Rick,
    I have Veeam Backup 5.0.2 running on a standalone Windows 2008 R2 64-bit server.
    The server is directly attached to the SAN through SAS cables; there is nothing between the SAN and the server. The VMFS LUNs are visible in Disk Manager. Every time I try to run a backup using Direct SAN Access, it fails over to the network. Any ideas why this happens?

  • Rick Vanover

    “Through SAS cables” concerns me. If it is a SAN, the connectivity must be iSCSI (Ethernet) or Fibre Channel (HBA).

    What is the storage product in use?

  • vmcreator

    Hi wesley,

    SAS cables? Are you using something like an HP P2000 directly attached? If so, check “Explicit Mapping,” etc.

  • Hussain

    Hi,
    I have changed the setup a bit, and now Veeam v6 is installed on a 64-bit Enterprise VM with 2 quad-core vCPUs and 6 GB of memory.

    Veeam is installed on one of the LUNs that comes from an EMC AX4-5i, and all the VMs reside on this SAN storage.

    1 LUN target attached to the Veeam VM via the iSCSI initiator from the EMC SAN.

    2 LUN targets attached to the Veeam VM via the iSCSI initiator from an IBM DS3500 SAN.

    1 LUN target from OpenFiler attached to the Veeam VM via the iSCSI initiator.

    The Veeam VM has 4 vNICs connected to the iSCSI port group on the iSCSI vSwitch in ESXi 5.0.

    All the LUNs where the VMs reside are presented to the Veeam VM, and they show in Disk Management.

    How can I increase the performance? It is really slow. Are there any tricks to make it faster?

    Thanks,
    Hussain

  • Rich

    I too have an EMC AX4-5i on which all the VMs reside. I have configured my Veeam backup server’s iSCSI initiator (2008 R2) to see the VMFS LUNs, but it still cannot connect via Direct SAN Access. Besides the normal EMC steps, do I need to do anything special to make the VMFS LUNs accessible on the Veeam backup server for Direct SAN Access?
    Thanks,
    Rich

  • Rick Vanover

    Hi Rich: can you see the drives in Server Manager? If so, we should be good. You can open a support case at cp.veeam.com!

  • Hussain

    Hi,
    The disks are showing in the Disk Manager of Windows 2008, but some of them show as Unknown and some show as Online. Should they all be Online (blue) disks, or Unknown (black) disks?

    H

  • Rick Vanover

    They should be offline. Further, the properties of each disk will help you determine where it is coming from (SAN-wise).

  • Hussain

    The disks are offline, and Veeam gives the warning “Unable to establish connection to SAN”.

  • Rick Vanover

    Hi Hussain – you may want to check with support to see if they can give you more information on the situation.

  • Neadom

    I think the issue I have with this is that the post does not state the configuration where this applies. This does not work for a host running local storage. I have a client that has an IBM BladeCenter S with local storage. My backup target is my SAN over iSCSI, and I am running ESXi 5. I have to set up my (virtual) Veeam backup server to connect to the SAN with the Microsoft iSCSI initiator and map the drive to a letter so that Veeam will see the drive and I am able to back up to it; see the sketch below. The configuration in this post is ONLY for backing up LUN-based VMs, so that the Veeam backup proxy can READ the guest VM data from the LUN directly.
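
    For reference, a minimal sketch of that target-side preparation as a diskpart script. The disk number and drive letter are examples, and this applies to the backup repository disk only; never initialize or format a VMFS LUN this way:

        rem Save as repo.txt and run: diskpart /s repo.txt
        rem Brings the iSCSI-presented repository disk online and formats it as NTFS
        select disk 2
        online disk
        attributes disk clear readonly
        convert gpt
        create partition primary
        format fs=ntfs label="VeeamRepo" quick
        assign letter=V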

  • Boon Hong

    When you say direct SAN access, do you mean the iSCSI storage is NOT mapped into ESXi as a VMDK and presented to the VM with Veeam installed as local storage, but rather that the Microsoft iSCSI initiator inside the VM maps directly to the iSCSI storage, bypassing ESXi completely?

  • Rick Vanover

    Hi Boon – yes, the iSCSI initiator in this configuration would connect directly to the VMFS volume on the SAN. Here is a related post: http://www.veeam.com/blog/direct-san-access-tips-for-iscsi-vmfs-volumes-and-backup-proxies.html

  • Dan

    I had the same issue with a Dell MD3200. Install the tools from the resource CD and reboot the backup server; that should sort it for you.

  • Boon Hong

    I understand that Windows cannot read a VMFS volume, so how is it going to manage the files with direct access to the VMFS volume?

    And is this direct SAN access meant for reading the source when backing up the VMs on this volume? Or is it meant for writing to a destination formatted as an NTFS volume?

    By the way, someone from Veeam told me that Veeam reads the backup data through the VMware hot-add SCSI capability and not through the iSCSI network, and thus direct SAN access is not necessary.

    I’m getting quite confused.

  • Boon Hong

    I’m very concerned about setting up direct iSCSI access to read the source and risking corruption, because my SAN storage does not allow read-only access based on the connection, only based on the volume.

    Also, should I enable simultaneous iSCSI connections to the SAN volumes? VMFS supports this, but NTFS does not, which could corrupt the SAN volumes as well.

  • Rick Vanover

    You can set the Veeam IQN to be read-only; we are reading via the vStorage APIs. But not all SANs can do this by the IQN of the Veeam proxy’s initiator while the ESX(i) initiators remain read/write.

    Or you can just leverage the virtual appliance hot-add transport mode.

  • Boon Hong

    VMware told me that if I configure an iSCSI initiator inside a VM, it will make use of the VM network (the only available NIC for the VM) to access the SAN volume instead of the ESXi iSCSI network. Could this be the reason why there is no performance improvement, or even poorer performance?

  • Rick Vanover

    Hi Boon – you should probably contact Veeam support.

  • Gerd Meyerink

    Hi Rick
    I have an ESXi 5 environment with 2 servers and EqualLogic storage.
    The Veeam 6.1 software is installed on a physical server and connected to the iSCSI network with 2 NICs, with multipathing enabled in the iSCSI initiator.
    When I do a replication of a VM, the reading network traffic goes fine over the iSCSI network, but the outgoing writing network traffic uses the public LAN.
    When I do a backup, everything works fine; reading and writing network traffic goes over the iSCSI network.
    Is this a bug in Veeam 6.x, or is it functioning as designed?

  • Rick Vanover

    The write must be via virtual appliance hot-add; therefore it must access the VMkernel.

    In that sense, it sounds like it is working as expected.

  • Gerd Meyerink

    Thanks a lot, Rick; that saves me a lot of work.

  • Kristof VM

    Is there also a performance benefit to configuring jumbo frames with SAN mode on a separate NIC? Or does Veeam default to a 1500-byte MTU?

    I am using a physical backup server with 2 NICs, 1 on the LAN and 1 on iSCSI, with jumbo frames enabled on the iSCSI NIC (a Broadcom adapter in an R510 server with local disks for the repository). The whole iSCSI network has jumbo frames enabled, on the switch as well as on my EqualLogic.
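
    As a quick end-to-end check that a 9000-byte MTU actually survives the whole iSCSI path, a don't-fragment ping sized to 8972 bytes (9000 minus the 28-byte IP/ICMP header; the target address is an example) should succeed:

        rem Fails with "Packet needs to be fragmented" if any hop lacks jumbo frames
        ping -f -l 8972 10.140.0.10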

  • Rick Vanover

    Hi Kristof: I’d make the jumbo frames decision with regard to primary storage, rather than backup architecture.

    I’ve not had much experience with jumbo frames myself, but those I’ve dealt with who have say it makes such a minor difference that it is not worth the extra configuration effort.

  • Karl-Heinz Hildebrandt

    I just installed Veeam Backup & Replication 6.5 inside a VM.

    diskpart tells me:

    Automatic mounting of new volumes enabled.

    So I do not see that the automount feature is disabled by the install procedure; please correct.

    TIA,
    Karl-Heinz Hildebrandt

  • Rick Vanover

    Karl – please contact support. It isn’t an install option.

  • Will.R

    Rick,

    I have questions about the configuration of VMware when using a VM as a proxy with Veeam Direct SAN transport mode.

    How should my ESXi vSwitches, NICs, etc. be configured to make sure that the iSCSI traffic from the proxy VM only uses the iSCSI NICs and vSwitches in my hosts?

    Also, should I create a separate vSwitch, NICs, etc. from my production (bound) iSCSI for the virtual proxy to use?

  • Rick Vanover

    Hi Will: I would make a port group on the VMkernel network that is the iSCSI network and put the proxy there, with no IP route on that interface to get elsewhere (meaning it can only reach the iSCSI target there).
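
    On the guest side, a minimal sketch of that isolation: give the proxy's iSCSI-facing vNIC a static address and deliberately omit the default gateway so traffic cannot route beyond the iSCSI segment. The interface name and addresses are hypothetical:

        rem No gateway argument means no route off the iSCSI network
        netsh interface ip set address name="iSCSI" static 10.140.0.50 255.255.255.0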

  • Anthony Montes

    I must be totally missing something, but I don’t know what. I have a Windows 2k8 R2 server with 2 NICs and a lot of local storage. One NIC is a 1G on our 10.139.x.x net, which all servers/PCs/etc. access. The other is a 10G NIC on a 10.140.x.x net, which the storage, the ESX servers, and the Veeam box are on. (The ESXs, Veeam, and the 2k8 box are also on the 10.139 net.)

    Since the 2k8 server and the ESXs are on the 10G backbone, the source VM is on ESX, and the destination for the Veeam backup is on the 2k8 server, I am looking for the fastest avenue of transport from source to target. I thought that if I installed the MS iSCSI Target software on the 2k8 box, created an iSCSI target from its local storage, and then configured the iSCSI initiator software on the Veeam box to access that target, it would improve throughput. On the Veeam box I have added the iSCSI target (the 2k8 box). Now I can see the target under Storage Manager on the Veeam box but cannot figure out how to add it as a repository so that I can back up to it.

    Maybe I am going about this all wrong, but there is probably somebody else out there as clueless as me that needs help. What would be the best way to take advantage of this 10G backbone to back up from ESX to the 2k8 box? Simply adding the 2k8 box as a Windows drive letter to the Veeam box only gives me about 39M throughput. I know I can get faster than that. Thanks for tolerating my newbieness.

    • Rick Vanover

      Hi Anthony – there are a number of things here. I’d recommend giving Veeam support a call; they can take the right look at your options. It looks like you have the pieces right – you just have to get them sorted.

      You can call, go to cp.veeam.com, or go to vee.am/help for info on how to open a case.

      Cheers.

  • Rick Vanover

    Hi DZak, in that situation, I’d put the iSCSI initiator on the guest VM that will be the repository server. The proxies would communicate with the repository server directly.

  • Nate Cartwright

    So, Windows 7 SP1 won’t install on a system with automount disabled. You get the error “The boot configuration data store could not be opened. The system cannot find the file specified.” Microsoft’s recommendation to fix this is to re-enable automount. So, how does one install SP1 on a system with iSCSI and Veeam running without ruining the LUNs? Remove all of them, install SP1, and re-add them?

  • Rick Vanover

    Hey Nate: take the system off the iSCSI network, enable automount, do the update, re-disable automount, and put it back on the iSCSI network…
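
    From an elevated prompt, that sequence might look roughly like this (a sketch; disconnect the iSCSI network first):

        rem 1. Re-enable automount while no SAN LUNs are reachable
        mountvol /E

        rem 2. Install SP1 and reboot as required

        rem 3. Disable automount again before reconnecting the iSCSI network
        mountvol /N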

  • Flux Blocker

    What if the Veeam Backup & Replication server is on a physical Windows server, with a backup proxy residing on a VM that is attached to and using SAN storage? We have a small vSphere deployment with a 3-host cluster, and I am trying Veeam for the first time, but I am almost certain we will buy the license. It is the fully functional trial that I am working with now.

    With this setup, am I better off putting the Veeam server in a VM that uses the SAN storage, or is the setup above (with the Veeam server on a physical Windows server and a backup proxy in a VM) just as good? Is there a “best practices” blog post somewhere you could refer me to? I want to make sure I set this up the most optimal way.

    I plan to also use a single ESXi host with local storage (in a remote DR site) and keep a copy of the backed-up VMs there so that they can be powered on in the event of a loss of access to the production SAN, but right now I just want to make sure I set up the main site with the SAN the best possible way. Many, many thanks!
