HP-UX VxFS tuning and performance

Note that buffer merging was not initially implemented on IA-64 systems with VxFS 3.5 on 11i v2, so using the default max_buf_data_size of 8 KB would result in a maximum physical I/O size of 8 KB. Buffer merging was implemented in 11i v2 0409, released in the fall of 2004.

Page sizes on HP-UX 11i v3 and later

On HP-UX 11i v3 and later, VxFS performs cached I/O through the Unified File Cache (UFC). The UFC is page-based, and the max_buf_data_size tunable has no effect on 11i v3. The default page size is 4 KB; however, when larger reads or read ahead are performed, the UFC attempts to use contiguous 4 KB pages. For example, when performing sequential reads, the read_pref_io value is used to allocate the consecutive 4 KB pages. Note that the “buffer” size is no longer limited to 8 KB or 64 KB with the UFC, which eliminates the buffer merging overhead caused by chaining a number of 8 KB buffers.

Also, when doing small random I/O, VxFS can perform 4 KB page requests instead of 8 KB buffer requests. The page request has the advantage of returning less data, reducing the transfer size. However, if the application still needs to read the other 4 KB, a second I/O request is needed, which decreases performance. So small random I/O performance may be worse on 11i v3 if cache hits would have occurred had 8 KB of data been read instead of 4 KB. The only solution is to increase the base_pagesize(5) tunable from the default value of 4 to 8 or 16. Change base_pagesize from its default with care and test thoroughly, as some applications assume the page size will be the default 4 KB.
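
Since base_pagesize may be raised from its 4 KB default, applications should query the page size at run time rather than hardcoding it. A minimal sketch using the standard sysconf() interface (no VxFS-specific calls are assumed):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Query the kernel page size instead of assuming 4 KB; on
         * 11i v3 this reflects the base_pagesize(5) tunable. */
        long pagesize = sysconf(_SC_PAGE_SIZE);
        if (pagesize == -1) {
            perror("sysconf");
            return 1;
        }
        printf("base page size: %ld bytes\n", pagesize);
        return 0;
    }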

Direct I/O

If OnlineJFS is installed and licensed, direct I/O can be enabled in several ways:

- Mount option “-o mincache=direct”
- Mount option “-o convosync=direct”
- The VX_DIRECT cache advisory of the VX_SETCACHE ioctl() system call (a sketch follows below)
- Discovered direct I/O

Direct I/O has several advantages:

- Data accessed only once does not benefit from being cached
- Direct I/O data does not disturb other data in the cache
- Data integrity is enhanced, since the disk I/O must complete before the read or write system call completes (the I/O is synchronous)
- Direct I/O can perform larger physical I/O than buffered or cached I/O, which reduces the total number of I/Os for large read or write operations

However, direct I/O also has its disadvantages:

- Direct I/O reads cannot benefit from the VxFS read ahead algorithms
- All physical I/O is synchronous, so each write must complete the physical I/O before the system call returns
- Direct I/O performance degrades when the I/O request is not properly aligned
- Mixing buffered/cached I/O and direct I/O can degrade performance

Direct I/O works best for large I/Os that are accessed once, and when data integrity for writes is more important than performance.
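
Unlike the mount options, the ioctl() advisory enables direct I/O per file descriptor. A minimal sketch of the VX_DIRECT advisory; the header path and the file name are illustrative and should be checked against the vxfsio(7) manual page for your VxFS release:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/fs/vx_ioctl.h>  /* VxFS cache advisories; path may vary by release */

    int main(void)
    {
        int fd = open("/vxfs/datafile", O_RDWR);  /* illustrative path */
        if (fd == -1) {
            perror("open");
            return 1;
        }

        /* Request direct I/O on this descriptor: subsequent aligned
         * reads and writes bypass the cache and complete synchronously. */
        if (ioctl(fd, VX_SETCACHE, VX_DIRECT) == -1) {
            perror("ioctl(VX_SETCACHE)");
            close(fd);
            return 1;
        }

        /* ... properly aligned read()/write() calls here ... */

        close(fd);
        return 0;
    }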

Note
The OnlineJFS license is required to perform direct I/O when using VxFS 5.0 or earlier. Beginning with VxFS 5.0.1, direct I/O is available with the Base JFS product.

Discovered direct I/O

With HP OnlineJFS, direct I/O is enabled if the read or write size is greater than or equal to the file system tunable discovered_direct_iosz. The default discovered_direct_iosz is 256 KB. As with direct I/O, all discovered direct I/O is synchronous, and read ahead and flush behind are not performed. If read ahead and flush behind are desired for large reads and writes, or if the data needs to be reused from the buffer/file cache, discovered_direct_iosz can be increased as needed.
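
Discovered direct I/O needs no mount option or ioctl(): any single read or write at or above the threshold is converted automatically. A sketch, assuming the default 256 KB threshold and an illustrative file name (verify the current value with vxtunefs(1M)):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define IOSZ (256 * 1024)  /* assumed default discovered_direct_iosz */

    int main(void)
    {
        int fd = open("/vxfs/bigfile", O_WRONLY | O_CREAT, 0644);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        char *buf = malloc(IOSZ);
        if (buf == NULL) {
            perror("malloc");
            close(fd);
            return 1;
        }
        memset(buf, 'x', IOSZ);

        /* This single 256 KB write meets discovered_direct_iosz, so VxFS
         * performs it as synchronous direct I/O, bypassing the cache. */
        if (write(fd, buf, IOSZ) == -1)
            perror("write");

        free(buf);
        close(fd);
        return 0;
    }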

Direct I/O and unaligned data

When performing direct I/O, alignment is very important. Direct I/O requests that are not properly aligned have the unaligned portions of the request managed through the buffer or file cache. For writes, the unaligned portions must first be read from disk into the cache; the data in the cache is then modified and written out to disk. This read-before-write behavior can dramatically increase write request times.

Note
The best direct I/O performance occurs when the logical requests are properly aligned. Prior to VxFS 5.0 on 11i v3, logical requests need to be aligned on file system block boundaries. Beginning with VxFS 5.0 on 11i v3, logical requests need only be aligned on a device block boundary (1 KB).

Direct I/O on HP-UX 11.23

Unaligned data is still buffered using 8 KB buffers (the default size), although the I/O is still done synchronously. For all VxFS versions on HP-UX 11i v2 and earlier, direct I/O performs well when the request is aligned on a file system block boundary and the entire request can bypass the buffer cache. However, requests smaller than the file system block size, or requests that do not align on a file system block boundary, use the buffer cache for the unaligned portions of the request. For large transfers greater than the block size, part of the data can still be transferred using direct I/O.

For example, consider a 16 KB write request on a file system with an 8 KB block size, where the request does not start on a file system block boundary (the data starts on a 4 KB boundary in this case). Since the first 4 KB is unaligned, that data must be buffered: an 8 KB buffer is allocated and the entire 8 KB is read in, so 4 KB of unnecessary data is read. The same happens for the last 4 KB of data, since it is also not properly aligned. The two 8 KB buffers are modified and written to disk along with the 8 KB direct I/O for the aligned portion of the request. In all, the operation takes five synchronous physical I/Os: two reads and three writes. If the file system is recreated with a 4 KB block size or smaller, the I/O would be aligned and could be performed in a single 16 KB physical I/O.
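
The head/body/tail arithmetic in that example generalizes. A small illustrative sketch (not VxFS code) that splits a request into the portions that would be buffered versus sent as direct I/O, given the alignment unit (the file system block size on 11i v2 and earlier):

    #include <stdio.h>

    /* Split an I/O request into an unaligned head, an aligned middle
     * that can go direct, and an unaligned tail. Illustrative only. */
    static void split_request(long long off, long long len, long long align)
    {
        long long head = (align - off % align) % align;  /* leading unaligned bytes */
        if (head > len)
            head = len;
        long long body = ((len - head) / align) * align; /* aligned middle */
        long long tail = len - head - body;              /* trailing unaligned bytes */

        printf("buffered head: %lld KB, direct: %lld KB, buffered tail: %lld KB\n",
               head / 1024, body / 1024, tail / 1024);
    }

    int main(void)
    {
        /* The example above: a 16 KB write at a 4 KB offset with an
         * 8 KB block size -> 4 KB head, 8 KB direct, 4 KB tail; each
         * buffered portion costs a read-before-write on an 8 KB buffer. */
        split_request(4 * 1024, 16 * 1024, 8 * 1024);
        return 0;
    }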