EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using NFS Proven Solution Guide


04.04.2013

Chapter 3: Storage Design

CLARiiON storage design and configuration

Design

The CLARiiON CX4-960 uses UltraFlex technology to provide array connectivity. This approach is extremely flexible and allows each CX4 to be tailored to each user's specific needs.

In the CX4 deployed for this use case, each storage processor was populated with four back-end buses to provide 4 Gb connectivity to the DAEs and disk drives. Each storage processor had eight 4 Gb front-end Fibre Channel ports for SAN connectivity. There were also two iSCSI ports on each storage processor that were not used.

Nine DAEs were populated with 130 x 300 GB 15k drives, and five 146 GB drives were also used for the vault. The CLARiiON was configured to house a 1 TB production database and two clone copies of that database. The clone copies were used as follows:

• Gold copy
• Backup copy

Gold copy

At various logical checkpoints within the testing process, the gold copy was refreshed to ensure that an up-to-date copy of the database was available at all times. This guaranteed that an instantaneous recovery image was always available in the event that any logical corruption occurred during, or as a result of, the testing process. If any issue had occurred, a reverse synchronization from the SnapView clone gold copy would have made the data available immediately, avoiding the need to rebuild the database.

Backup copy

The backup clone copy was used for NetWorker proxy backups. The clone copy of the database was mounted to the proxy node, and the backups were executed on the proxy node. This node is also referred to as the "clone mount host."

Configuration

It is a best practice to use ASM external redundancy for data protection when using EMC arrays. The CLARiiON also provides protection against loss of media, as well as transparent failover in the event of a specific disk or component failure.
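As a rough sanity check on the configuration described above, the raw capacity of the data drives can be compared against the three database copies the array must hold. This is a minimal sketch using the vendor-quoted GB figures from the text; RAID overhead, formatted capacity, and the vault reservation are deliberately ignored:

```python
# Sanity-check the capacity figures quoted above (vendor GB, not
# formatted capacity; RAID overhead and vault reservations ignored).
DATA_DRIVES = 130        # 300 GB 15k drives across nine DAEs
DRIVE_GB = 300
COPIES = 3               # 1 TB production database plus two clones

raw_data_gb = DATA_DRIVES * DRIVE_GB
copies_gb = COPIES * 1024

print(f"raw data capacity: {raw_data_gb} GB")    # 39000 GB
print(f"three DB copies:   {copies_gb} GB")      # 3072 GB
print(f"headroom factor:   {raw_data_gb / copies_gb:.1f}x")
```

Even before RAID and free-space considerations, the 130-drive layout comfortably accommodates the production database and both clone copies.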
The following image shows the CLARiiON layout. The CX4-960 deployed for this solution had four 4 Gb Fibre Channel back-end buses for disk connectivity, numbered Bus 0 to Bus 3. Each bus was connected to a number of DAEs (disk array enclosures). DAEs are numbered using the "Bus X Enc Y" nomenclature, so the first enclosure on Bus 0 is known as Bus 0 Enc 0. Each bus had connectivity to both storage processors for failover purposes.

Each enclosure can hold up to 15 disk drives. Each disk drive is numbered in an extension of the bus/enclosure scheme, so the first disk in Bus 0 Enclosure 0 is known as disk 0_0_0.

EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using NFS Proven Solution Guide
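The bus/enclosure/slot numbering scheme described above can be sketched as a small helper. The function name `disk_id` is ours for illustration and is not part of any EMC tooling:

```python
def disk_id(bus: int, enclosure: int, slot: int) -> str:
    """Build a CLARiiON disk identifier in the bus_enclosure_slot scheme."""
    if not 0 <= slot <= 14:          # each DAE holds up to 15 drives
        raise ValueError("a DAE has slots 0-14")
    return f"{bus}_{enclosure}_{slot}"

# The first disk in Bus 0 Enc 0:
print(disk_id(0, 0, 0))   # -> 0_0_0

# All 15 slots of the first enclosure on Bus 1:
bus1_enc0 = [disk_id(1, 0, s) for s in range(15)]
```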

The following image shows how ASM diskgroups were positioned on the CLARiiON array. The first enclosure contained the vault area: the first five drives, 0_0_0 through 0_0_4, have a portion of each drive reserved for internal use. This reserved area contained the storage processor boot images as well as the cache vault. Disks 0_0_11 to 0_0_14 were configured as hot spares.

Disks 0_0_5 to 0_0_9 were configured as RAID Group 0, with 16 LUNs used for the redo logs. These LUNs were then allocated to an ASM diskgroup, named the redo diskgroup. RAID Group 0 also contained the OCR disk and the Voting disk.

The next four enclosures contained three additional ASM diskgroups. The following section explores this in more detail.
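The slot assignments described for the first enclosure (Bus 0 Enc 0) can be captured as a small lookup table. This is our own illustrative summary of the text, not an EMC configuration artifact; slot 10 is not mentioned in the text, so it is marked unassigned rather than guessed at:

```python
# Roles of the 15 slots in Bus 0 Enc 0, as described in the text.
roles = {}
for slot in range(0, 5):
    roles[slot] = "vault (reserved area: SP boot images, cache vault)"
for slot in range(5, 10):
    roles[slot] = "RAID Group 0 (16 redo-log LUNs, OCR disk, Voting disk)"
roles[10] = "unassigned (not described in the text)"
for slot in range(11, 15):
    roles[slot] = "hot spare"

for slot, role in sorted(roles.items()):
    print(f"disk 0_0_{slot}: {role}")
```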

