
CHAPTER 6 ■ LOCKING AND LATCHING

Now, if we were to run two of these programs simultaneously, we might expect the hard parsing to jump to about 1,600 per second (we have two CPUs available, after all) and the CPU time to double to perhaps 52 CPU seconds. Let’s take a look:

Elapsed:                0.78 (mins)

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Parses:                 1,066.62             16,710.33
             Hard parses:                 1,064.28             16,673.67

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                    % Total
Event                                               Waits    Time (s) Call Time
-------------------------------------------- ------------ ----------- ---------
CPU time                                                            74     97.53
log file parallel write                                53            1      1.27
latch: shared pool                                    406            1       .66
control file parallel write                            21            0       .45
log file sync                                           6            0       .04

What we discover is that the hard parsing goes up a little bit, but the CPU time nearly triples rather than doubles (74 CPU seconds instead of the 52 or so we expected)! How could that be? The answer lies in Oracle’s implementation of latching. On this multi-CPU machine, when we could not immediately get a latch, we “spun.” The act of spinning itself consumes CPU. Process 1 attempted many times to get a latch on the Shared pool, only to discover that process 2 held that latch, so process 1 had to spin and wait for it (consuming CPU). The converse would be true for process 2: many times it would find that process 1 was holding the latch to the resource it needed. So, much of our processing time was spent not doing real work, but waiting for a resource to become available.
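To see why spinning burns CPU without ever registering as a wait, consider a minimal sketch, in C, of the general test-and-set, spin-then-sleep pattern described above. It illustrates the technique only; it is not Oracle’s code. The latch_t structure, the counter names, the 2,000-iteration spin limit, and the one-millisecond sleep are all assumptions made for the example.

#include <stdatomic.h>
#include <stdbool.h>
#include <unistd.h>

#define SPIN_LIMIT 2000          /* assumed spin count, for illustration only */

typedef struct {
    atomic_flag locked;          /* the latch itself: one word, set atomically */
    long        misses;          /* failed first attempts (we had to spin)     */
    long        sleeps;          /* spin loop exhausted, we went to sleep      */
} latch_t;                       /* counters left non-atomic for brevity      */

/* One attempt: a single atomic test-and-set on the latch word. */
static bool latch_try_get(latch_t *l)
{
    return !atomic_flag_test_and_set(&l->locked);
}

static void latch_get(latch_t *l)
{
    for (;;) {
        /* Spin phase: retry in a tight loop. Every iteration here is
           pure CPU time, and none of it shows up as a wait event. */
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (latch_try_get(l))
                return;
            if (i == 0)
                l->misses++;     /* count the miss once per acquisition */
        }
        /* Sleep phase: only now do we yield the CPU, and only this
           is visible as a latch wait in the Statspack report. */
        l->sleeps++;
        usleep(1000);
    }
}

static void latch_release(latch_t *l)
{
    atomic_flag_clear(&l->locked);
}

With two processes hammering latch_get on the same latch, the misses counter climbs into the hundreds of thousands while the sleeps counter stays small, which is exactly the shape of the numbers we are about to see in the report below.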

If we page down through the Statspack report to the “Latch Sleep Breakdown” report, we discover the following:

Latch Name            Requests      Misses   Sleeps   Sleeps 1->3+
---------------- ------------- ----------- -------- --------------
shared pool          1,126,006     229,537      406  229135/398/4/0
library cache        1,108,039      45,582        7  45575/7/0/0

Note how the number 406 appears in the SLEEPS column here? That 406 corresponds to the number of waits reported in the preceding “Top 5 Timed Events” report, and the “Sleeps 1->3+” column breaks it down: of the 229,537 misses on the shared pool latch, 229,135 were resolved by spinning alone, 398 required one sleep, and 4 required two (398 + 2 × 4 = 406 sleeps in total). This report shows us the number of times we tried to get a latch and failed on the first attempt. That means the “Top 5” report is showing us only the tip of the iceberg with regard to latching issues: the 229,537 misses (which means we spun trying to get the latch) are not revealed in the “Top 5” report. After examining the “Top 5” report, we might not be inclined to think “We have a hard parse problem here,” even though we have a very serious one. To perform 2 units of work, we needed to use 3 units of CPU, due entirely to contention for that shared resource, the Shared pool; such is the nature of latching. However, it can be very hard to diagnose a latching-related issue unless we understand the mechanics of how latches are

implemented. A quick glance at a Statspack report, using the “Top 5” section, might cause us to miss the fact that we have a serious latching problem.
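When the “Top 5” section looks this benign, going straight to the latch statistics makes the spinning visible. Here is a minimal sketch of such a check against the V$LATCH view; the two latch names in the WHERE clause are simply the ones from our report:

select name,
       gets,        -- total requests for the latch
       misses,      -- requests not satisfied on the first try (we had to spin)
       spin_gets,   -- misses resolved by spinning alone, with no sleep
       sleeps       -- misses that exhausted the spin loop and slept; only
                    -- these surface as wait events in the "Top 5" section
  from v$latch
 where name in ('shared pool', 'library cache')
 order by misses desc;

A large MISSES count alongside comparatively few SLEEPS is exactly the spin-dominated pattern we just observed: CPU consumed in the spin loop that never appears as a wait.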
