Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions (Apress, September 2005)


CHAPTER 13 ■ PARTITIONING 561

elsewhere to hold a copy of both indexes, and you'll need a temporary transaction log table to record the changes made against the base table during the time you spend rebuilding the index, and so on. On the other hand, if the index itself had been partitioned into ten 1GB partitions, you could rebuild each index partition individually, one by one. You would now need only 10 percent of the free space you needed previously. Likewise, each individual index rebuild will be much faster (ten times faster, perhaps), far fewer transactional changes occurring during an online index rebuild will need to be merged into the new index, and so on.

Also, consider what happens in the event of a system or software failure just before the rebuild of a 10GB index completes: the entire effort is lost. By breaking the problem down and partitioning the index into 1GB partitions, you would lose at most 10 percent of the total work required to rebuild the index.

Alternatively, it may be that you need to rebuild only 10 percent of the total aggregate index; for example, only the “newest” data (the active data) is subject to this reorganization, and all of the “older” data (which is relatively static) remains unaffected.

Finally, consider the situation in which you discover that 50 percent of the rows in your table are “migrated” rows (see Chapter 10 for details on chained/migrated rows), and you would like to fix this. Having a partitioned table will facilitate the operation. To “fix” migrated rows, you must typically rebuild the object (in this case, a table). If you have one 100GB table, you will need to perform this operation in one very large “chunk,” serially, using ALTER TABLE MOVE. On the other hand, if you have 25 partitions, each 4GB in size, you can rebuild each partition one by one.
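The piecewise rebuild described above might be sketched as follows. This is a hypothetical illustration, not from the text: the index name LOCAL_IDX and partition names P1 through P10 are assumptions standing in for a locally partitioned index with ten 1GB partitions:

```sql
-- Hypothetical: LOCAL_IDX is assumed to be a locally partitioned
-- index with partitions P1 .. P10, each about 1GB in size.
-- Rebuild one partition at a time instead of the whole 10GB index:
alter index local_idx rebuild partition p1;
alter index local_idx rebuild partition p2;
-- ... and so on, through p10.

-- An ONLINE rebuild keeps the partition available for DML
-- while it is being rebuilt:
alter index local_idx rebuild partition p3 online;
```

Each statement needs free space for only one partition's copy of the data, and a failure midway costs only the partition currently being rebuilt rather than the entire index.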
Alternatively, if you are doing this during off-hours and have ample resources, you can even run the ALTER TABLE MOVE statements in parallel, in separate sessions, potentially reducing the amount of time the whole operation will take. Virtually everything you can do to a nonpartitioned object, you can do to an individual partition of a partitioned object. You might even discover that your migrated rows are concentrated in a very small subset of your partitions, in which case you could rebuild one or two partitions instead of the entire table.

Here is a quick example demonstrating the rebuild of a table with many migrated rows. Both BIG_TABLE1 and BIG_TABLE2 were created from a 10,000,000-row instance of BIG_TABLE (see the “Setup” section for the BIG_TABLE creation script). BIG_TABLE1 is a regular, nonpartitioned table, whereas BIG_TABLE2 is a hash partitioned table with eight partitions (we'll cover hash partitioning in a subsequent section; suffice it to say, it distributed the data rather evenly into eight partitions):

ops$tkyte@ORA10GR1> create table big_table1
  2  ( ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
  3    OBJECT_ID, DATA_OBJECT_ID,
  4    OBJECT_TYPE, CREATED, LAST_DDL_TIME,
  5    TIMESTAMP, STATUS, TEMPORARY,
  6    GENERATED, SECONDARY )
  7  tablespace big1
  8  as
  9  select ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 10         OBJECT_ID, DATA_OBJECT_ID,
 11         OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 12         TIMESTAMP, STATUS, TEMPORARY,
 13         GENERATED, SECONDARY
 14  from big_table.big_table;

Table created.

ops$tkyte@ORA10GR1> create table big_table2
  2  ( ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
  3    OBJECT_ID, DATA_OBJECT_ID,
  4    OBJECT_TYPE, CREATED, LAST_DDL_TIME,
  5    TIMESTAMP, STATUS, TEMPORARY,
  6    GENERATED, SECONDARY )
  7  partition by hash(id)
  8  (partition part_1 tablespace big2,
  9   partition part_2 tablespace big2,
 10   partition part_3 tablespace big2,
 11   partition part_4 tablespace big2,
 12   partition part_5 tablespace big2,
 13   partition part_6 tablespace big2,
 14   partition part_7 tablespace big2,
 15   partition part_8 tablespace big2
 16  )
 17  as
 18  select ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
 19         OBJECT_ID, DATA_OBJECT_ID,
 20         OBJECT_TYPE, CREATED, LAST_DDL_TIME,
 21         TIMESTAMP, STATUS, TEMPORARY,
 22         GENERATED, SECONDARY
 23  from big_table.big_table;

Table created.

Now, each of those tables is in its own tablespace, so we can easily query the data dictionary to see the allocated and free space in each tablespace:

ops$tkyte@ORA10GR1> select b.tablespace_name,
  2         mbytes_alloc,
  3         mbytes_free
  4  from ( select round(sum(bytes)/1024/1024) mbytes_free,
  5                tablespace_name
  6         from dba_free_space
  7         group by tablespace_name ) a,
  8       ( select round(sum(bytes)/1024/1024) mbytes_alloc,
  9                tablespace_name
 10         from dba_data_files
 11         group by tablespace_name ) b
 12  where a.tablespace_name (+) = b.tablespace_name
 13  and b.tablespace_name in ('BIG1','BIG2')
 14  /

TABLESPACE MBYTES_ALLOC MBYTES_FREE
---------- ------------ -----------
BIG1               1496         344
BIG2               1496         344
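With the two tables in place, the per-partition rebuild the preceding discussion describes could be sketched as follows. This is a hedged outline rather than the book's full demonstration; the partition names part_1 through part_8 match the creation statement above, while the commented-out index name is purely a hypothetical example:

```sql
-- The nonpartitioned table must be reorganized in one large,
-- serial operation:
alter table big_table1 move;

-- The partitioned table can be reorganized one eighth at a time
-- (or these statements can be run in parallel, from separate sessions):
alter table big_table2 move partition part_1;
alter table big_table2 move partition part_2;
-- ... and so on, through part_8.

-- Note: a MOVE leaves indexes on the moved data UNUSABLE, so any
-- index partitions would need rebuilding afterward, e.g. (index
-- name is a hypothetical placeholder):
-- alter index big_table2_idx rebuild partition part_1;
```

Each MOVE PARTITION needs scratch space for only one partition's worth of data, which is why the partitioned table demands far less free space in its tablespace during the reorganization.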

