
8.3.2 Conclusion

The impact of IMMEDWRITE YES remains unchanged. Changed pages are written to the group buffer pool as soon as the buffer updates are complete, before committing.

With the IMMEDWRITE NO (or PH1) and YES options, the CPU cost of writing the changed pages to the group buffer pool is charged to the allied TCB. Prior to DB2 V8, this was true only for the PH1 and YES options. For the NO option, the CPU cost of writing the changed pages to the group buffer pool was charged to the MSTR SRB, since the pages were written as part of phase 2 commit processing under the MSTR SRB.

DB2 V8 now charges all writes of changed pages to the allied TCB, not the MSTR address space. This enhancement provides more accurate accounting for all DB2 workloads: DB2 now charges more of the work back to the user who initiated it.

Changing the default IMMEDWRITE BIND option means that with V8, DB2 no longer writes any pages to the group buffer pool during phase 2 of commit processing. This change has very little impact on the Internal Throughput Rate (ITR) of a transaction workload and therefore has little or no performance impact.

However, you may see some increase in CPU used by each allied address space, as the cost of writing any remaining changed pages to the group buffer pool has shifted from the MSTR address space to the allied address space.
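
The IMMEDWRITE option is specified on the BIND or REBIND subcommand for a plan or package. As a minimal sketch, assuming a package PGM1 in collection COLL1 and a plan PLANA (hypothetical names):

   BIND PACKAGE(COLL1) MEMBER(PGM1) ACTION(REPLACE) IMMEDWRITE(NO)
   REBIND PLAN(PLANA) IMMEDWRITE(YES)

With IMMEDWRITE(YES), changed pages are written to the group buffer pool as soon as the buffer updates complete; with IMMEDWRITE(NO), the default, V8 no longer writes any pages to the group buffer pool during phase 2 of commit.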

8.3.3 Recommendations

The IMMEDWRITE enhancements are immediately available when you migrate to DB2 V8; you do not have to wait until new-function mode.

Customers who use the allied TCB time for end-user charge back may see additional CPU cost with this change.

8.4 DB2 I/O CI limit removed

8.4.1 Performance

Prior to DB2 V8, DB2 did not schedule a single I/O for a page set whose CIs spanned a range of more than 180 CIs. This restriction was in place to avoid long I/O response times.

With advances in DASD technology, especially Parallel Access Volume (PAV) support, this restriction can now be lifted.

DB2 V8 removes the CIs-per-I/O limit for list prefetch and castout I/O requests. The limit still remains for deferred write requests, to avoid any potential performance degradation during transaction processing. Recall that a page is not available for update while it is being written to DASD, so we do not want an online transaction to have to wait for a potentially longer deferred write I/O request to complete.

A number of performance studies have been done to understand the benefit of removing the CI limit for DB2 I/O requests. For these studies, the IRWW workload was used in both a data sharing environment and a non-data sharing environment.

Table 8-12 shows extracts from DB2 PE statistics reports for a non-data sharing environment running the IRWW workload.

