EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using NFS
Proven Solution Guide

Chapter 3: Storage Design

ASM diskgroups

The database was built using four distinct ASM diskgroups:

• The Data diskgroup contained all datafiles and the first control file.
• The Online Redo diskgroup contained the online redo logs for the database and a second control file. Ordinarily, Oracle's best practice recommendation is for the redo log files to be placed in the same diskgroup as the database files (the Data diskgroup in this example). However, the online redo logs must be separated from the Data diskgroup when planning to recover from split-mirror snap copies, because the current redo log files cannot be used to recover the cloned database.
• The Flash Recovery diskgroup contained the archive logs.
• The Temp diskgroup contained tempfiles.

ASM data area

MetaLUNs were chosen for ease of management and future scalability. As the data grows, and consequently the number of ASM disks increases, ASM incurs an inherent overhead in managing a large number of disks. MetaLUNs were therefore selected so that the CLARiiON could manage the request queues for a large number of LUNs.

For the Data diskgroup, four striped metaLUNs were created, each containing four members. The members of each metaLUN were selected so that each member resided on a different back-end bus, ensuring maximum throughput. The starting LUN for each metaLUN was also carefully chosen so that the metaLUNs did not all start on the same RAID group. This criterion avoided starting all the ASM disks on the same set of spindles, and the metaLUN members were alternated to balance LUN residence. This methodology ensured that ASM parallel chunk I/Os would not hit the same spindles at the same time within the metaLUNs if Oracle performed a parallel table scan.
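As an illustration of how one of these striped metaLUNs could be assembled, the following Navisphere CLI sketch expands a base FLARE LUN with three additional member LUNs striped across different back-end buses. The host address, LUN numbers, and metaLUN name are examples only and are not taken from the tested configuration; the exact switches should be verified against the Navisphere CLI reference for the installed FLARE release.

# Illustrative only: build a four-member striped metaLUN from FLARE LUNs 20-23,
# each assumed to reside on a different back-end bus/RAID group.
naviseccli -h arrayIP metalun -expand -base 20 -lus 21 22 23 -name DATA_META_1 -type S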

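Once the metaLUNs are presented to the hosts, the diskgroups themselves are created from the ASM instance. The following SQL*Plus sketch shows one possible form; the diskgroup names, emcpower device paths, and the use of external redundancy are assumptions for illustration (external redundancy is typical when the CLARiiON provides the RAID protection) and do not reflect the exact build scripts used in this solution.

# Illustrative only: create the four diskgroups from an ASM instance.
# The ORACLE_SID, device paths, and diskgroup names below are hypothetical.
export ORACLE_SID=+ASM
sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/emcpowera1', '/dev/emcpowerb1', '/dev/emcpowerc1', '/dev/emcpowerd1';
CREATE DISKGROUP REDO EXTERNAL REDUNDANCY DISK '/dev/emcpowere1';
CREATE DISKGROUP FRA  EXTERNAL REDUNDANCY DISK '/dev/emcpowerf1';
CREATE DISKGROUP TEMP EXTERNAL REDUNDANCY DISK '/dev/emcpowerg1';
EOF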
EMC SnapView

SnapView clones were used to create complete copies of the database. One clone copy was used to offload the backup operations from the production nodes to the proxy node, and a second clone copy was used as a gold copy. SnapView clones create a full bit-for-bit copy of the respective source LUN. A clone was created for each of the LUNs contained within the ASM diskgroups, and all clones were then simultaneously split from their respective sources to provide a point-in-time, content-consistent replica set.

Each of the ASM diskgroup LUNs was added to a clone group, becoming the clone source device. Target clone LUNs were then added to the clone group. Each clone group is assigned a unique ID, and each clone gets a unique clone ID within the group. The first clone added has a clone ID of 0100000000000000, and the clone ID increments for each subsequent clone added. The clone ID is then used to specify which clone is selected each time a cloning operation is performed. The command naviseccli -h arrayIP snapview -listclonegroup was used to display the clone information for one of the LUNs in the ASM Data diskgroup, showing each clone LUN's relationship to its source LUN; the Navisphere Manager GUI shows the same information. In this example, two clones were assigned to the clone group: clone ID 0100000000000000 was used as the gold copy and clone ID 0200000000000000 was used for backups. When the clones are synchronized, they can be split (fractured) from the source LUN to provide an independent point-in-time copy of the database.
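The clone-group workflow described above can be sketched with Navisphere CLI as follows. The clone group name and LUN numbers are examples only, and the commands should be checked against the Navisphere CLI reference for the installed FLARE release before use; in particular, the simultaneous split of all clone groups is performed with the array's consistent-fracture option, whose exact syntax varies between releases.

# Illustrative only: clone-group workflow for one source LUN
# (hypothetical clone group name and LUN numbers).
naviseccli -h arrayIP snapview -createclonegroup -name data1 -luns 20
naviseccli -h arrayIP snapview -addclone -name data1 -luns 120   # becomes clone ID 0100000000000000 (gold copy)
naviseccli -h arrayIP snapview -addclone -name data1 -luns 220   # becomes clone ID 0200000000000000 (backup copy)
naviseccli -h arrayIP snapview -listclonegroup                   # display clone groups, clone IDs, and states
naviseccli -h arrayIP snapview -syncclone -name data1 -cloneid 0200000000000000
naviseccli -h arrayIP snapview -fractureclone -name data1 -cloneid 0200000000000000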
