
vSphere Storage - ESXi 5.1 - Documentation - VMware


Because each application has different requirements, you can meet these goals by choosing an appropriate RAID group on the storage system. To achieve performance goals, perform the following tasks:

■ Place each LUN on a RAID group that provides the necessary performance levels. Pay attention to the activities and resource utilization of other LUNs in the assigned RAID group. A high-performance RAID group that has too many applications doing I/O to it might not meet performance goals required by an application running on the ESXi host.

■ Provide each server with a sufficient number of network adapters or iSCSI hardware adapters to allow maximum throughput for all the applications hosted on the server for the peak period. I/O spread across multiple ports provides higher throughput and less latency for each application.

■ To provide redundancy for software iSCSI, make sure the initiator is connected to all network adapters used for iSCSI connectivity.

■ When allocating LUNs or RAID groups for ESXi systems, multiple operating systems use and share that resource. As a result, the performance required from each LUN in the storage subsystem can be much higher if you are working with ESXi systems than if you are using physical machines. For example, if you expect to run four I/O intensive applications, allocate four times the performance capacity for the ESXi LUNs.

■ When using multiple ESXi systems in conjunction with vCenter Server, the performance needed from the storage subsystem increases correspondingly.

■ The number of outstanding I/Os needed by applications running on an ESXi system should match the number of I/Os the SAN can handle.
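The sizing guidance above can be reduced to simple arithmetic: a LUN shared by several virtual machines must be sized for their combined peak demand, and the RAID group hosting it must have enough raw capability to cover every LUN placed on it. The following is a minimal sketch of that calculation; all IOPS figures, disk counts, and function names are illustrative assumptions, not values from this guide.

```python
# Rough LUN/RAID-group sizing sketch. All numbers are illustrative
# assumptions; real sizing must use measured workload figures.

def required_lun_iops(vm_iops_demands):
    """A shared LUN must be sized for the combined peak demand of
    all VMs using it, not for a single physical server's workload."""
    return sum(vm_iops_demands)

def raid_group_can_serve(disk_count, iops_per_disk, lun_demands):
    """Check that a RAID group's raw capability covers every LUN
    placed on it (RAID write penalties ignored for simplicity)."""
    capability = disk_count * iops_per_disk
    return capability >= sum(lun_demands)

# Four I/O-intensive applications, each assumed to need ~2000 IOPS,
# give a LUN demand of four times a single application's capacity:
demand = required_lun_iops([2000, 2000, 2000, 2000])

# A 10-disk RAID group of drives at ~180 IOPS each (~1800 total)
# would be badly oversubscribed by that one LUN:
ok = raid_group_can_serve(10, 180, [demand])
print(demand, ok)  # 8000 False
```

The same check extends naturally to the outstanding-I/O point: summing per-application queue depths and comparing the total against what the SAN can sustain follows the identical pattern.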

Network Performance

A typical SAN consists of a collection of computers connected to a collection of storage systems through a network of switches. Several computers often access the same storage.

Single Ethernet Link Connection to Storage shows several computer systems connected to a storage system through an Ethernet switch. In this configuration, each system is connected through a single Ethernet link to the switch, which is also connected to the storage system through a single Ethernet link. In most configurations, with modern switches and typical traffic, this is not a problem.

Figure 14-1. Single Ethernet Link Connection to Storage

Chapter 14 Best Practices for iSCSI Storage

When systems read data from storage, the maximum response from the storage is to send enough data to fill the link between the storage systems and the Ethernet switch. It is unlikely that any single system or virtual machine gets full use of the network speed, but this situation can be expected when many systems share one storage device.

When writing data to storage, multiple systems or virtual machines might attempt to fill their links. As Dropped Packets shows, when this happens, the switch between the systems and the storage system has to drop data. This happens because, while it has a single connection to the storage device, it has more traffic to send to the storage system than a single link can carry. In this case, the switch drops network packets because the amount of data it can transmit is limited by the speed of the link between it and the storage system.
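The drop behavior described above follows directly from the link arithmetic: when several hosts can each fill their own link but all of their write traffic funnels into a single storage-facing link, the offered load exceeds what the switch can forward, and the excess is eventually dropped once buffers fill. A small sketch of that arithmetic, with assumed link speeds and host counts chosen only for illustration:

```python
# Why the switch must drop packets: offered write traffic from many
# hosts funnels into one storage-facing link. Numbers are illustrative.

def oversubscription(host_link_gbps, host_count, storage_link_gbps):
    """Ratio of worst-case offered load to storage-link capacity."""
    return (host_link_gbps * host_count) / storage_link_gbps

def excess_gbps(host_link_gbps, host_count, storage_link_gbps):
    """Traffic the switch cannot forward once its buffers fill;
    this is what ends up being dropped."""
    offered = host_link_gbps * host_count
    return max(0.0, offered - storage_link_gbps)

# Four hosts, each on a 1 Gbit/s link, writing flat out to a storage
# system that is also attached by a single 1 Gbit/s link:
print(oversubscription(1.0, 4, 1.0))  # 4.0 -> 4:1 oversubscribed
print(excess_gbps(1.0, 4, 1.0))       # 3.0 Gbit/s cannot be forwarded
```

With a single writer the excess is zero and nothing is dropped, which matches the read-side observation earlier: the configuration is only a problem when many systems share the one storage link at once.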

<strong>VMware</strong>, Inc. 129
