vSphere Storage - ESXi 5.1 - Documentation - VMware


vSphere Storage

For information, see the vSphere Compatibility Guide and refer to your vendor documentation.

■ Use HBAs of the same type, either all QLogic or all Emulex. VMware does not support heterogeneous HBAs on the same host accessing the same LUNs.

■ If a host uses multiple physical HBAs as paths to the storage, zone all physical paths to the virtual machine. This is required to support multipathing even though only one path at a time will be active.

■ Make sure that physical HBAs on the host have access to all LUNs that are to be accessed by NPIV-enabled virtual machines running on that host.

■ The switches in the fabric must be NPIV-aware.

■ When configuring a LUN for NPIV access at the storage level, make sure that the NPIV LUN number and NPIV target ID match the physical LUN and target ID.
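The same-vendor HBA requirement above can be checked programmatically. The following is a minimal Python sketch, not a vSphere API: the inventory list is hypothetical sample data that, in practice, you might gather from your management tooling (for example, by parsing the output of esxcli storage core adapter list).

```python
# Minimal sketch: verify that all FC HBAs on a host are of the same type
# (all QLogic or all Emulex), as the NPIV requirement states.
# The inventory dictionaries below are hypothetical sample data.

def hbas_are_homogeneous(hbas):
    """Return True if every HBA in the list reports the same driver,
    i.e. the host does not mix QLogic and Emulex adapters."""
    return len({hba["driver"] for hba in hbas}) <= 1

host_hbas = [
    {"name": "vmhba2", "driver": "qla2xxx"},  # QLogic
    {"name": "vmhba3", "driver": "qla2xxx"},  # QLogic
]

mixed_hbas = [
    {"name": "vmhba2", "driver": "qla2xxx"},  # QLogic
    {"name": "vmhba3", "driver": "lpfc820"},  # Emulex
]

print(hbas_are_homogeneous(host_hbas))   # True: requirement met
print(hbas_are_homogeneous(mixed_hbas))  # False: heterogeneous HBAs
```

A check like this is only a convenience for auditing an inventory export; the authoritative view of installed HBAs is always the host itself.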

NPIV Capabilities and Limitations

Learn about the specific capabilities and limitations of using NPIV with ESXi.

ESXi with NPIV supports the following items:

■ NPIV supports vMotion. When you use vMotion to migrate a virtual machine, it retains the assigned WWN.

  If you migrate an NPIV-enabled virtual machine to a host that does not support NPIV, the VMkernel reverts to using a physical HBA to route the I/O.

■ If your FC SAN environment supports concurrent I/O on the disks from an active-active array, concurrent I/O to two different NPIV ports is also supported.

When you use ESXi with NPIV, the following limitations apply:

■ Because NPIV technology is an extension of the FC protocol, it requires an FC switch and does not work on direct-attached FC disks.

■ When you clone a virtual machine or template with a WWN assigned to it, the clones do not retain the WWN.

■ NPIV does not support Storage vMotion.

■ Disabling and then re-enabling the NPIV capability on an FC switch while virtual machines are running can cause an FC link to fail and I/O to stop.

Assign WWNs to Virtual Machines in the vSphere Web Client

Assign WWN settings to a virtual machine with an RDM disk.

You can create from 1 to 16 WWN pairs, which can be mapped to the first 1 to 16 physical FC HBAs on the host.
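The 1-to-16 pairing rule can be illustrated with a short Python sketch. The helper below is hypothetical and only models the numeric constraint and the in-order mapping; it is not a real vSphere API, and the WWN values are made-up sample data.

```python
def map_wwn_pairs_to_hbas(wwn_pairs, physical_hbas):
    """Model the ESXi rule: the Nth WWN pair maps to the Nth physical
    FC HBA, and a virtual machine may have from 1 to 16 WWN pairs.
    Hypothetical helper for illustration only."""
    if not 1 <= len(wwn_pairs) <= 16:
        raise ValueError("a virtual machine supports 1 to 16 WWN pairs")
    # zip() stops at the shorter list, so pairs beyond the number of
    # physical HBAs are simply left unmapped.
    return list(zip(wwn_pairs, physical_hbas))

# Made-up sample data: two (node WWN, port WWN) tuples and two HBAs.
pairs = [
    ("28:2a:00:0c:29:00:00:01", "28:2a:00:0c:29:00:00:02"),
    ("28:2a:00:0c:29:00:00:03", "28:2a:00:0c:29:00:00:04"),
]
mapping = map_wwn_pairs_to_hbas(pairs, ["vmhba2", "vmhba3"])
print(mapping)  # each WWN pair matched, in order, to one HBA
```

The sketch simply makes the counting rule concrete; the actual assignment is done through the virtual machine's settings, as described in the procedure below.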

Prerequisites

Create a virtual machine with an RDM disk. See “Create Virtual Machines with RDMs in the vSphere Web Client,” on page 185.

Procedure

1 Browse to the virtual machine.

  a Select a datacenter, folder, cluster, resource pool, or host.

  b Click the Related Objects tab and click Virtual Machines.

2 Right-click the virtual machine and select Edit Settings.
