DB2 UDB for z/OS Version 8 Performance Topics - IBM Redbooks

8.4.2 Conclusion

The values listed show a significant reduction in overall CPU used by the DBM1 address space (-6%), with a modest increase in transaction throughput (+2%). This is a result of DB2 performing fewer list prefetch requests for the given workload, as more pages are eligible to be retrieved by each prefetch I/O request.

Table 8-15 summarizes CPU times from DB2 PE statistics reports for a data sharing environment, running the same IRWW workload.

Table 8-15   DB2 I/O CI limit removal - Data sharing CPU

                        With I/O CI Limit    I/O CI Limit Removed    Delta (Removed / With)
DBM1 (msec / commit)    0.518 / 0.590        0.456 / 0.457           -18%

This table combines the effects of both the locking protocol changes and the removal of the DB2 CI I/O limits on the CPU consumed by the DBM1 address space. Refer to 8.2, “Locking protocol level 2” on page 325, for a discussion of the data sharing locking protocol changes in DB2 V8.

From Table 8-10 on page 330 we can see that the locking protocol changes account for about a 10% improvement in CPU used by the DBM1 address space. Removing the DB2 CI I/O limits in DB2 V8 therefore yields another modest reduction in CPU used by the DBM1 address space, due to fewer list prefetch requests and more efficient castout processing.
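As a rough cross-check of Table 8-15, the combined improvement can be recomputed from the per-commit figures. The sketch below makes two assumptions that are not stated in the table: that the two values in each column are the two data sharing members, and that the roughly 10% locking protocol gain combines multiplicatively with the CI limit effect.

    # Table 8-15 values in msec per commit (assumed: one value per data sharing member)
    with_limit    = 0.518 + 0.590   # total DBM1 CPU with the 180 CI I/O limit in place
    limit_removed = 0.456 + 0.457   # total DBM1 CPU with the limit removed

    combined = limit_removed / with_limit - 1
    print(f"combined DBM1 CPU delta: {combined:.1%}")   # about -18%, matching the table

    # Assumption: locking protocol 2 alone is worth roughly 10% (Table 8-10) and the two
    # effects combine multiplicatively; the remainder reflects the CI limit removal.
    locking_factor = 0.90
    ci_only = (limit_removed / with_limit) / locking_factor - 1
    print(f"approximate share from CI limit removal: {ci_only:.1%}")   # roughly -8%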

The removal of the DB2 CI I/O limits for list prefetch requests and castout I/O has the potential to significantly reduce the CPU used by the DBM1 address space.

Further benefits can be realized in a data sharing environment through more efficient castout processing. In the past, castout processing was generally able to cast out only a few pages on each request, because the pages to be cast out can be quite random. With this enhancement, castout processing can process more pages of a table space with each request, as it is no longer restricted to pages that fall within a range of 180 CIs in the table space.
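The effect of that range restriction on request counts can be illustrated with a simple model. This is only a sketch, not DB2 internals: it assumes a cap of 32 pages per I/O request (an illustrative figure) and, for the old behavior, that every page in a request must lie within a 180-CI range.

    import random

    def count_requests(pages, max_pages=32, max_range=None):
        # Greedily group sorted CI numbers into I/O requests.
        # max_pages: assumed cap on pages per request (illustrative, not a DB2 constant).
        # max_range: if set, every page in a request must lie within this many CIs of the
        #            first page, modeling the old 180 CI restriction; None = limit removed.
        pages = sorted(pages)
        requests = 0
        i = 0
        while i < len(pages):
            start = pages[i]
            j = i
            while (j < len(pages) and j - i < max_pages and
                   (max_range is None or pages[j] - start < max_range)):
                j += 1
            requests += 1
            i = j
        return requests

    # 200 pages to cast out, scattered across a large table space
    random.seed(1)
    pages = random.sample(range(1_000_000), 200)

    print("with 180 CI limit:", count_requests(pages, max_range=180))
    print("limit removed:    ", count_requests(pages, max_range=None))

With pages scattered this widely, the 180-CI rule forces close to one request per page, while the unrestricted grouping needs only a handful of requests; that difference is the source of the castout CPU savings described above.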

The extent of CPU gains from this enhancement will vary significantly, depending on application processing, access paths used, and data organization and placement. Applications which perform a lot of list prefetch with a large distance between pages will benefit the most from this enhancement.

In data sharing, applications that generate a lot of castout processing for non-contiguous pages should also benefit from this enhancement. OLTP performance can definitely see a positive impact from the 180 CI limit removal. One customer reported a significant CPU spike every 15 seconds, caused by excessive unlock castout and delete namelist requests; this problem was eliminated after the 180 CI limit removal in castout.

8.4.3 Recommendations


This enhancement is transparent and is available in compatibility mode (CM) as soon as the PTFs for APARs PQ86071 and PQ89919 have been applied.

