Apress, Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions, September 2005


CHAPTER 3 ■ FILES

■Note df is a Unix command to show "disk free." This command showed that I had 29,008,368KB free in the file system containing /d01/temp before I added a 2GB temp file to the database.

After I added that file, I had 29,008,240KB free in the file system. Apparently it took only 128KB of storage to hold that file. But if we ls it:

ops$tkyte@ORA10G> !ls -l /d01/temp/temp_huge
-rw-rw----  1 ora10g  ora10g  2147491840 Jan  2 16:34 /d01/temp/temp_huge

it appears to be a normal 2GB file, but it is in fact consuming only some 128KB of storage. The reason I point this out is that we would be able to create hundreds of these 2GB temporary files, even though we have roughly 29GB of disk space free. Sounds great: free storage for all! The problem is that as we start to use these temp files and they start expanding, we would rapidly hit errors stating "no more space." Since the space is allocated, or physically assigned, to the file by the OS only as needed, we stand a definite chance of running out of room (especially if, after we create the temp files, someone else fills up the file system with other stuff).

How to solve this differs from OS to OS. On Linux, some of the options are to use dd to fill the file with data, causing the OS to physically assign disk storage to the file, or to use cp to create a nonsparse file, for example:

ops$tkyte@ORA10G> !cp --sparse=never /d01/temp/temp_huge /d01/temp/temp_huge2

ops$tkyte@ORA10G> !df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda2             74807888  44099336  26908520  63% /
/dev/hda1               102454     14931     82233  16% /boot
none                   1030804         0   1030804   0% /dev/shm

ops$tkyte@ORA10G> drop tablespace temp_huge;
Tablespace dropped.

ops$tkyte@ORA10G> create temporary tablespace temp_huge
  2  tempfile '/d01/temp/temp_huge2' reuse;
Tablespace created.
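The sparse-file behavior described above is easy to reproduce at a small scale outside the database. The following sketch uses small demo files under /tmp (these paths and sizes are illustrative, not from the original session) to show how ls and du disagree about a sparse file:

```shell
# Sketch: sparse vs. fully allocated files on Linux (demo paths, not Oracle's).

# A "sparse" 10MB file: seek to the 10MB mark without writing any data.
dd if=/dev/zero of=/tmp/sparse_demo bs=1 count=0 seek=10M 2>/dev/null

# A fully allocated 10MB file: actually write 10MB of zeros.
dd if=/dev/zero of=/tmp/dense_demo bs=1M count=10 2>/dev/null

# ls -l reports the same logical size (10,485,760 bytes) for both files...
ls -l /tmp/sparse_demo /tmp/dense_demo

# ...but du -k shows the blocks really allocated: near 0KB vs. roughly 10240KB.
du -k /tmp/sparse_demo /tmp/dense_demo
```

This is exactly the trap with the 2GB temp file: ls (and the database) see the full logical size, while the file system has handed out almost no real blocks.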
ops$tkyte@ORA10G> !df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda2             74807888  44099396  26908460  63% /
/dev/hda1               102454     14931     82233  16% /boot
none                   1030804         0   1030804   0% /dev/shm

After copying the sparse 2GB file to /d01/temp/temp_huge2 and creating the temporary tablespace using that temp file with the REUSE option, we are assured that the temp file has allocated all of its file system space, and our database actually has 2GB of temporary space to work with.
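The effect of cp --sparse=never can likewise be demonstrated on a small file (again, the /tmp paths are demo names, not part of the original session); the copy ends up with all of its blocks physically allocated even though the source has almost none:

```shell
# Sketch: forcing full allocation of a sparse file with GNU cp --sparse=never.

# Make a 10MB sparse file: no data blocks are actually written.
dd if=/dev/zero of=/tmp/sparse_src bs=1 count=0 seek=10M 2>/dev/null

# Copy it, telling cp never to create holes in the destination; cp reads the
# holes back as zeros and physically writes those zeros to the new file.
cp --sparse=never /tmp/sparse_src /tmp/dense_copy

# The source occupies almost nothing; the copy occupies the full ~10240KB.
du -k /tmp/sparse_src /tmp/dense_copy
```

Note that --sparse=never is a GNU coreutils option; on other platforms, writing the file out with dd achieves the same full allocation.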

■Note In my experience, Windows NTFS does not do sparse files; the sparse-file behavior described here applies to UNIX/Linux variants. On the plus side, if you have to create a 15GB temporary tablespace on UNIX/Linux and have temp file support, you'll find it goes very fast (instantaneous), but just make sure you have 15GB free, and reserve it in your mind.

Control Files

The control file is a fairly small file (it can grow up to 64MB or so in extreme cases) that contains a directory of the other files Oracle needs. The parameter file tells the instance where the control files are, and the control files tell the instance where the database and online redo log files are.

The control files also tell Oracle other things, such as information about checkpoints that have taken place, the name of the database (which should match the DB_NAME parameter), the timestamp of the database's creation, an archive redo log history (this can make a control file large in some cases), RMAN information, and so on.

Control files should be multiplexed either by hardware (RAID) or by Oracle when RAID or mirroring is not available. More than one copy should exist, and the copies should be stored on separate disks, to avoid losing them in the event of a disk failure. It is not fatal to lose your control files; it just makes recovery that much harder.

Control files are something a developer will probably never have to actually deal with. To a DBA they are an important part of the database, but to a software developer they are not extremely relevant.

Redo Log Files

Redo log files are crucial to the Oracle database. These are the transaction logs for the database.
They are generally used only for recovery purposes, but they can be used for the following as well:

• Instance recovery after a system crash
• Media recovery after a data file restore from backup
• Standby database processing
• Input into Streams, a redo log mining process for information sharing (a fancy way of saying replication)

Their main purpose in life is to be used in the event of an instance or media failure, or as a method of maintaining a standby database for failover. If the power goes off on your database machine, causing an instance failure, Oracle will use the online redo logs to restore the system to exactly the point it was at immediately prior to the power outage. If your disk drive containing your data file fails permanently, Oracle will use archived redo logs, as well as online redo logs, to recover a backup of that drive to the correct point in time. Additionally, if you

