Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions (Apress, September 2005)


CHAPTER 4 ■ MEMORY STRUCTURES

ops$tkyte@ORA10G> show parameter db_cache_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_cache_size                        big integer 1G

ops$tkyte@ORA10G> alter system set db_cache_size = 768m;
System altered.

ops$tkyte@ORA10G> alter system set db_16k_cache_size = 256m;
System altered.

ops$tkyte@ORA10G> create tablespace ts_16k
  2  datafile size 5m
  3  blocksize 16k;
Tablespace created.

So, now I have another buffer cache set up: one to cache any blocks that are 16KB in size. The Default pool, controlled by the db_cache_size parameter, is 768MB in size, and the 16KB cache, controlled by the db_16k_cache_size parameter, is 256MB in size. These two caches are mutually exclusive; if one "fills up," it cannot use space in the other. This gives the DBA a very fine degree of control over memory use, but that control comes at the price of added complexity and management.

These multiple blocksizes were not intended as a performance or tuning feature, but rather came about in support of transportable tablespaces: the ability to take formatted data files from one database and transport or attach them to another database. They were implemented so that data files from a transactional system using an 8KB blocksize could be transported to a data warehouse using a 16KB or 32KB blocksize.

The multiple blocksizes do serve a good purpose, however, in testing theories. If you want to see how your database would operate with a different blocksize (how much space, for example, a certain table would consume if you used a 4KB block instead of an 8KB block), you can now test that easily without having to create an entirely new database instance. You may also be able to use multiple blocksizes as a very finely focused tuning tool for a specific set of segments, by giving them their own private buffer pools.
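To get intuition for that space question without creating any tablespaces, a back-of-the-envelope estimate can be sketched in Python. The 100-byte per-block overhead and 10 percent free-space reservation used here are illustrative assumptions, not Oracle's exact block-header accounting:

```python
import math

def estimate_blocks(row_count, avg_row_bytes, block_size,
                    overhead_bytes=100, pctfree=0.10):
    """Rough estimate of how many blocks a table needs.

    overhead_bytes and pctfree are assumed, illustrative figures;
    real Oracle block accounting is more involved.
    """
    usable = int((block_size - overhead_bytes) * (1 - pctfree))
    rows_per_block = max(1, usable // avg_row_bytes)
    return math.ceil(row_count / rows_per_block)

# 1,000,000 rows of ~200 bytes each, under two candidate blocksizes
small_blocks = estimate_blocks(1_000_000, 200, 4 * 1024)
large_blocks = estimate_blocks(1_000_000, 200, 16 * 1024)
print(small_blocks, large_blocks)
```

Because the fixed per-block overhead is amortized over more rows, the 16KB estimate works out to fewer bytes of total storage, which is the kind of comparison the multiple-blocksize feature lets you verify against a real tablespace.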
Or, in a hybrid system, transactional users could use one set of data while reporting/warehouse users query a separate set of data. The transactional data would benefit from the smaller blocksizes due to less contention on the blocks (fewer rows per block means fewer sessions, in general, going after the same block at the same time) as well as better buffer cache utilization (users read into the cache only the data they are interested in: the single row or small set of rows). The reporting/warehouse data, which might be based on the transactional data, would benefit from the larger blocksizes due in part to lower block overhead (the data takes less storage overall) and perhaps to larger logical I/O sizes. And since reporting/warehouse data does not have the same update contention issues, the fact that there are more rows per block is not a concern, but a benefit. Additionally, the transactional users get, in effect, their own buffer cache; they do not have to worry about the reporting queries overrunning their cache.

But in general, the Default, Keep, and Recycle pools should be sufficient for fine-tuning the block buffer cache, and multiple blocksizes would be used primarily for transporting data from database to database and perhaps for a hybrid reporting/transactional system.
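The contention point can be made concrete with a toy birthday-style model: if row accesses are uniformly random, a smaller blocksize spreads the same rows over more blocks, so the chance that two concurrent sessions land on the same block drops. The rows-per-block figures below are assumed for illustration, and the uniform-access model is a deliberate simplification:

```python
def block_collision_prob(sessions, blocks):
    """Probability that at least two of `sessions` concurrent sessions
    touch the same block, assuming each picks a block uniformly at random.
    A simplification: real access patterns are rarely uniform."""
    p_all_distinct = 1.0
    for i in range(sessions):
        p_all_distinct *= (blocks - i) / blocks
    return 1 - p_all_distinct

# Same 1,000,000-row table; assume ~18 rows fit in a 4KB block
# and ~73 rows fit in a 16KB block.
blocks_4k = 1_000_000 // 18
blocks_16k = 1_000_000 // 73

collide_4k = block_collision_prob(50, blocks_4k)    # 50 active sessions
collide_16k = block_collision_prob(50, blocks_16k)
print(collide_4k, collide_16k)
```

Under these assumptions the 16KB layout shows a noticeably higher collision probability, matching the intuition that transactional workloads favor smaller blocks.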

Shared Pool

The Shared pool is one of the most critical pieces of memory in the SGA, especially with regard to performance and scalability. A Shared pool that is too small can kill performance to the point where the system appears to hang. A Shared pool that is too large can have the same effect. A Shared pool that is used incorrectly will be a disaster as well.

So, what exactly is the Shared pool? The Shared pool is where Oracle caches many bits of "program" data. When we parse a query, the parsed representation is cached there. Before going through the work of parsing an entire query, Oracle searches the Shared pool to see if the work has already been done. PL/SQL code that you run is cached in the Shared pool, so the next time you run it, Oracle doesn't have to read it in from disk again. PL/SQL code is not only cached here, it is shared here as well: if you have 1,000 sessions all executing the same code, only one copy of the code is loaded and shared among all sessions. Oracle stores the system parameters in the Shared pool. The data dictionary cache (cached information about database objects) is stored here. In short, everything but the kitchen sink is stored in the Shared pool.

The Shared pool is characterized by lots of small (generally 4KB or less) chunks of memory. Bear in mind that 4KB is not a hard limit; there will be allocations that exceed that size, but in general the goal is to use small chunks of memory to prevent the fragmentation that would occur if memory chunks were allocated in radically different sizes, from very small to very large. The memory in the Shared pool is managed on an LRU basis. It is similar to the buffer cache in that respect: if you don't use it, you'll lose it. A supplied package called DBMS_SHARED_POOL may be used to change this behavior, forcibly pinning objects in the Shared pool.
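The LRU aging plus pinning behavior just described can be sketched as a toy cache. This is not Oracle's actual algorithm, and the class and names here are invented for illustration; it only captures the idea that unused, unpinned entries age out while pinned ones survive:

```python
from collections import OrderedDict

class SharedPoolSketch:
    """Toy LRU cache with pinning, loosely mimicking how the Shared pool
    ages out unused objects while pinned ones (cf. DBMS_SHARED_POOL.KEEP)
    are exempt. Purely illustrative; not Oracle's implementation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # insertion/recency ordered
        self.pinned = set()

    def keep(self, name):
        """Pin an object so it is never a victim (like DBMS_SHARED_POOL.KEEP)."""
        self.pinned.add(name)

    def access(self, name, obj):
        if name in self.entries:
            self.entries.move_to_end(name)   # mark as recently used
        else:
            self.entries[name] = obj
        while len(self.entries) > self.capacity:
            # Evict the least recently used unpinned entry.
            for victim in self.entries:
                if victim not in self.pinned:
                    del self.entries[victim]
                    break
            else:
                break  # everything is pinned; nothing can age out

pool = SharedPoolSketch(capacity=3)
pool.keep("PKG_CORE")
for name in ["PKG_CORE", "PROC_A", "PROC_B", "PROC_C", "PROC_D"]:
    pool.access(name, object())
```

After these accesses the pinned package is still cached while the oldest unpinned procedures have aged out, which is exactly the behavior the pinning facility exists to provide.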
You can use this package to load your frequently used procedures and packages at database startup time and make them exempt from aging out. Normally, though, if a piece of memory in the Shared pool is not reused over time, it becomes subject to aging out. Even PL/SQL code, which can be rather large, is managed by a paging mechanism, so that when you execute code in a very large package, only the code that is needed is loaded into the Shared pool in small chunks. If you don't use it for an extended period of time, it will be aged out when the Shared pool fills up and space is needed for other objects.

The easiest way to break Oracle's Shared pool is to not use bind variables. As you saw in Chapter 1, not using bind variables can bring a system to its knees for two reasons:

• The system spends an exorbitant amount of CPU time parsing queries.

• The system uses large amounts of resources managing the objects in the Shared pool as a result of never reusing queries.

If every query submitted to Oracle is a unique query with the values hard-coded, the concept of the Shared pool is substantially defeated. The Shared pool was designed so that query plans would be used over and over again. If every query is a brand-new, never-before-seen query, then caching only adds overhead; the Shared pool becomes something that inhibits performance. A common but misguided technique that many use to try to solve this issue is adding more space to the Shared pool, which typically only makes things worse. As the Shared pool inevitably fills up once again, it becomes even more of an overhead than the smaller Shared pool, for the simple reason that managing a big, full Shared pool takes more work than managing a smaller, full one.

The only true solution to this problem is to use shared SQL: to reuse queries. Earlier, in Chapter 1, we briefly looked at the parameter CURSOR_SHARING, which can work as a short-term
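The bind-variable point can be demonstrated outside Oracle with Python's built-in sqlite3 module, which also keys its internal prepared-statement cache on the literal SQL text. Hard-coded values produce a distinct statement text per execution, so no text-keyed cache can ever get a hit; a bind placeholder produces one reusable text:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.executemany("INSERT INTO t (x) VALUES (?)", [(i,) for i in range(100)])

# Hard-coded literals: 100 executions yield 100 unique statement texts,
# defeating any cache keyed on SQL text (the Shared pool's situation).
hard_coded_texts = {f"SELECT x FROM t WHERE x = {i}" for i in range(100)}

# Bind variable: one shared statement text, reused for every execution.
shared_texts = {"SELECT x FROM t WHERE x = ?"}
for i in range(100):
    con.execute("SELECT x FROM t WHERE x = ?", (i,))

print(len(hard_coded_texts), len(shared_texts))
con.close()
```

The mechanics differ from Oracle's (sqlite3 caches per connection, not globally), but the principle is the same: one parse-once, execute-many statement versus a flood of never-reused ones.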

