vSphere Storage - ESXi 5.1 - Documentation - VMware


Storage Array Performance

Storage array performance is one of the major factors contributing to the performance of the entire SAN environment.

If there are issues with storage array performance, be sure to consult your storage array vendor's documentation for any relevant information.

Follow these general guidelines to improve the array performance in the vSphere environment:

- When assigning LUNs, remember that each LUN is accessed by a number of hosts, and that a number of virtual machines can run on each host. One LUN used by a host can service I/O from many different applications running on different operating systems. Because of this diverse workload, the RAID group containing the ESXi LUNs should not include LUNs used by other servers that are not running ESXi.

- Make sure read/write caching is enabled.

- SAN storage arrays require continual redesign and tuning to ensure that I/O is load balanced across all storage array paths. To meet this requirement, distribute the paths to the LUNs among all the SPs to provide optimal load balancing. Close monitoring indicates when it is necessary to rebalance the LUN distribution.

  Tuning statically balanced storage arrays is a matter of monitoring the specific performance statistics (such as I/O operations per second, blocks per second, and response time) and distributing the LUN workload to spread the workload across all the SPs.

  NOTE: Dynamic load balancing is not currently supported with ESXi.
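The static tuning described above can be sketched as a simple greedy rebalance: measure per-LUN I/O rates, then assign each LUN (heaviest first) to the storage processor with the least accumulated load. This is a minimal illustration, not a VMware tool; the LUN names, SP names, and IOPS figures are hypothetical.

```python
# Illustrative sketch of statically balancing LUN workload across
# storage processors (SPs). All names and numbers are hypothetical
# examples, not data from the vSphere documentation.

def balance_luns(lun_iops, sps):
    """Greedy assignment: give each LUN, heaviest first, to the SP
    with the least accumulated IOPS so far."""
    load = {sp: 0 for sp in sps}
    assignment = {}
    for lun, iops in sorted(lun_iops.items(), key=lambda kv: -kv[1]):
        sp = min(load, key=load.get)   # least-loaded SP at this point
        assignment[lun] = sp
        load[sp] += iops
    return assignment, load

# Four LUNs with measured peak IOPS, two SPs:
luns = {"lun0": 4000, "lun1": 3500, "lun2": 1200, "lun3": 900}
assignment, load = balance_luns(luns, ["SPA", "SPB"])
```

In practice you would feed real statistics (IOPS, blocks per second, response time) from your array's monitoring tools into a calculation like this, and repeat it periodically, since the balance drifts as workloads change.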

Server Performance with Fibre Channel

You must consider several factors to ensure optimal server performance.

Each server application must have access to its designated storage with the following conditions:

- High I/O rate (number of I/O operations per second)

- High throughput (megabytes per second)

- Minimal latency (response times)
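The three conditions above are related by simple arithmetic: throughput is the I/O rate times the block size, and Little's law ties the I/O rate and latency to the average number of I/Os in flight. A back-of-envelope sketch, where the workload numbers are illustrative assumptions rather than measurements:

```python
# Back-of-envelope relationships among I/O rate, throughput, and
# latency. The example workload figures are hypothetical.

def throughput_mbps(iops, block_size_kb):
    """Throughput (MB/s) implied by an I/O rate at a given block size."""
    return iops * block_size_kb / 1024

def outstanding_ios(iops, latency_ms):
    """Little's law: average I/Os in flight = arrival rate x latency."""
    return iops * (latency_ms / 1000.0)

# e.g. 8,000 IOPS of 16 KB I/O at 2 ms average response time:
tp = throughput_mbps(8000, 16)   # 125 MB/s
q = outstanding_ios(8000, 2)     # about 16 I/Os in flight on average
```

Calculations like this help sanity-check whether a RAID group, path, or HBA has the headroom an application's targets imply.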

Chapter 9 Best Practices for Fibre Channel Storage

Because each application has different requirements, you can meet these goals by choosing an appropriate RAID group on the storage array. To achieve performance goals:

- Place each LUN on a RAID group that provides the necessary performance levels. Pay attention to the activities and resource utilization of other LUNs in the assigned RAID group. A high-performance RAID group that has too many applications doing I/O to it might not meet the performance goals required by an application running on the ESXi host.

- Make sure that each server has a sufficient number of HBAs to allow maximum throughput for all the applications hosted on the server for the peak period. I/O spread across multiple HBAs provides higher throughput and lower latency for each application.

- To provide redundancy in the event of HBA failure, make sure the server is connected to a dual redundant fabric.

- When allocating LUNs or RAID groups for ESXi systems, remember that multiple operating systems use and share that resource. As a result, the performance required from each LUN in the storage subsystem can be much higher if you are working with ESXi systems than if you are using physical machines. For example, if you expect to run four I/O-intensive applications, allocate four times the performance capacity for the ESXi LUNs.
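The sizing guideline in the last bullet reduces to summing per-application peak demand and adding headroom. A minimal sketch, assuming a hypothetical per-application peak of 1,500 IOPS and a 20% headroom factor (both numbers are illustrative, not from the documentation):

```python
# Hypothetical sizing sketch: a LUN shared by several ESXi virtual
# machines must supply the sum of their peak demands, not the demand
# of a single physical server. Figures below are assumptions.

def required_lun_iops(app_peak_iops, headroom=0.2):
    """Sum per-application peak IOPS and add a fractional headroom."""
    return sum(app_peak_iops) * (1 + headroom)

# Four I/O-intensive applications at 1,500 peak IOPS each:
needed = required_lun_iops([1500, 1500, 1500, 1500])  # roughly 7200
```

The same aggregation applies to throughput (MB/s): whatever a single physical server would have needed, multiply by the number of consolidated workloads before choosing the RAID group.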

<strong>VMware</strong>, Inc. 69
