HP-UX VxFS tuning and performance


Data access methods

Buffered/cached I/O

By default, most access to files in a VxFS file system is through the cache. In HP-UX 11i v2 and earlier, the HP-UX Buffer Cache provided the cache resources for file access. In HP-UX 11i v3 and later, the Unified File Cache is used. Cached I/O allows for many features, including asynchronous prefetching known as read ahead, and asynchronous or delayed writes known as flush behind.

Read ahead

When reads are performed sequentially, VxFS detects the pattern and reads ahead, or prefetches, data into the buffer/file cache. VxFS attempts to maintain 4 read ahead "segments" of data in the buffer/file cache, using the read ahead size as the size of each segment.

Figure 1. VxFS read ahead: a sequential read with a series of 256 KB read ahead segments prefetched in front of it

The segments act as a "pipeline". When the data in the first of the 4 segments is read, asynchronous I/O for a new segment is initiated, so that 4 segments are maintained in the read ahead pipeline.

The initial size of a read ahead segment is specified by the VxFS tunable read_pref_io. As the process continues to read a file sequentially, the read ahead size of the segment approximately doubles, to a maximum of read_pref_io * read_nstream.

For example, consider a file system with read_pref_io set to 64 KB and read_nstream set to 4. When sequential reads are detected, VxFS will initially read ahead 256 KB (4 segments * read_pref_io). As the process continues to read sequentially, the read ahead amount will grow to 1 MB (4 segments * read_pref_io * read_nstream). As the process consumes a 256 KB read ahead segment from the front of the pipeline, asynchronous I/O is started for the 256 KB read ahead segment at the end of the pipeline.
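These values can be set online with vxtunefs(1M). As a sketch (the mount point /mnt/data is hypothetical; substitute your own VxFS mount point):

    # Set the initial read ahead segment size to 64 KB (value in bytes)
    vxtunefs -o read_pref_io=65536 /mnt/data

    # Allow the read ahead size to grow to read_pref_io * 4
    vxtunefs -o read_nstream=4 /mnt/data

    # Print the current tunable values for the file system
    vxtunefs -p /mnt/data

Values set this way typically do not persist across a remount; persistent settings normally go in /etc/vx/tunefstab.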

By reading the data ahead, the disk latency seen by the application can be reduced. The ideal configuration is the minimum amount of read ahead that will reduce the disk latency. If the read ahead size is configured too large, the disk I/O queue may spike when the reads are initiated, causing delays for other, potentially more critical, I/Os. Also, data read into the cache may be evicted before it is needed, forcing it to be re-read into the cache. These cache misses cause the read ahead size to be reduced automatically.
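One way to check for this effect is to watch the disk queues while the workload runs. As a sketch, using sar(1M) on HP-UX (the interval and count are illustrative):

    # Report disk activity every 5 seconds, 12 samples. Spikes in the
    # avque (average queue length) and avwait columns that coincide
    # with read ahead bursts suggest the read ahead size is too large.
    sar -d 5 12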

Note that if another thread is reading the file randomly or sequentially, VxFS has trouble detecting the sequential pattern. The sequential reads may then be treated as random reads, and no read ahead is performed. This problem can be resolved with enhanced read ahead, discussed later. The read ahead policy can be set on a per file system basis by setting the read_ahead tunable using vxtunefs(1M). The default read_ahead value is 1 (normal read ahead).
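As a sketch, again assuming the hypothetical mount point /mnt/data: a read_ahead value of 0 disables read ahead, 1 selects normal read ahead, and 2 selects enhanced read ahead on releases that support it:

    # Enable enhanced read ahead for this file system
    vxtunefs -o read_ahead=2 /mnt/data

    # Return to the default (normal) read ahead detection
    vxtunefs -o read_ahead=1 /mnt/data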
