
EMC Backup and Recovery for Oracle 11g OLTP
Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager
using NFS

Proven Solution Guide


Copyright © 2010 EMC Corporation. All rights reserved.

Published June, 2010

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated.

All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly.

EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute.

No warranty of system performance or price/performance is expressed or implied in this document.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part number: H7207


Table of Contents

Chapter 1: About this Document ...................................... 4
    Overview ......................................................... 4
    Audience and purpose ............................................. 5
    Business challenge ............................................... 6
    Technology solution .............................................. 6
    Objectives ....................................................... 7
    Reference Architecture ........................................... 8
    Validated environment profile .................................... 9
    Hardware and software resources .................................. 9
    Prerequisites and supporting documentation ....................... 11
    Terminology ...................................................... 12
Chapter 2: Use Case Components ....................................... 13
Chapter 3: Storage Design ............................................ 17
    Overview ......................................................... 17
    CLARiiON storage design and configuration ........................ 18
    Data Domain ...................................................... 23
    SAN topology ..................................................... 25
Chapter 4: Oracle Database Design .................................... 28
    Overview ......................................................... 28
Chapter 5: Installation and Configuration ............................ 33
    Overview ......................................................... 33
    Navisphere ....................................................... 34
    PowerPath ........................................................ 37
    Install Oracle Clusterware ....................................... 42
    Data Domain ...................................................... 47
    NetWorker ........................................................ 57
    Multiplexing ..................................................... 62
Chapter 6: Testing and Validation .................................... 63
    Overview ......................................................... 63
    Section A: Test results summary and resulting recommendations .... 64
Chapter 7: Conclusion ................................................ 76
    Overview ......................................................... 76
Appendix A: Scripts .................................................. 78



Chapter 1: About this Document

Overview

Introduction to solution

This Proven Solution Guide summarizes a series of best practices that were discovered, validated, or otherwise encountered during the validation of EMC Data Domain® backup and recovery for an Oracle 11g OLTP environment enabled by EMC® CLARiiON®, EMC Data Domain, EMC NetWorker®, and Oracle Recovery Manager.

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases give EMC insight into the challenges currently facing its customers.

Use case definition

A use case reflects a defined set of tests that validates the reference architecture for a customer environment. This validated architecture can then be used as a reference point for a Proven Solution.

Contents

The content of this chapter includes the following topics.

Topic                                               See Page
Audience and purpose                                        5
Business challenge                                          6
Technology solution                                         6
Objectives                                                  7
Reference Architecture                                      8
Validated environment profile                               9
Hardware and software resources                             9
Prerequisites and supporting documentation                 11
Terminology                                                12


Audience and purpose

Audience

The intended audience for this Proven Solution Guide is:
• Internal EMC personnel
• EMC partners
• Customers

Purpose

The purpose of this proven solution for deduplication is to define a working infrastructure for an Oracle RAC environment with a 1 TB Oracle OLTP database on a CLARiiON storage infrastructure, using a Data Domain appliance, to:
• Demonstrate the dramatic reduction, enabled by Data Domain, in the amount of disk storage needed to retain and protect enterprise data
• Determine the reduction in backup impact achieved by offloading the backup to a proxy mount host, enabled by SnapView clones and NetWorker
• Validate the improvement in Recovery Time Objective (RTO) when the backup schedule uses full backups only

This document provides a specification for the customer environment (storage configuration, design, sizing, software and hardware, and so on) that constitutes an enterprise Oracle 11g RAC backup and recovery solution in an Oracle OLTP environment, deployed on the EMC CLARiiON CX4-960.

In addition, this use case provides information on:
• Building an enterprise Oracle 11g RAC environment on an EMC CLARiiON CX4-960
• Identifying the steps required to design and implement an enterprise-level Oracle 11g RAC solution around EMC software and hardware
• Deploying a Data Domain DD880 appliance



Business challenge

Overview

Today's IT organizations are being challenged by the business to solve the following pain points around the backup and recovery of the business's critical data:
• Protect business information as an asset, within the business's defined recovery point objective (RPO, the amount of data to recover) and recovery time objective (RTO, the time to recover)
• Use both infrastructure and people efficiently to support the business
• Back up large, enterprise-critical, multi-terabyte systems

Exponential data growth, changing regulatory requirements, and increasingly complex IT infrastructure all have a major impact on data managers' data protection schemes. RTOs continue to decrease while RPOs become more precise. In other words, IT managers must be able to recover from a given failure faster than ever and with less data loss. It is not uncommon for organizations to routinely exceed their backup window, or even to have a backup window that takes up most of the day. Such long backup operations leave little margin for error, and any disruption can place some of the data at risk of loss. Such operations also mean that a guaranteed RPO cannot be met.

Because of the demands generated by data growth and by the RTO/RPO requirements in Oracle database environments, it is critical that robust, reliable, and tested backup and recovery processes are in place. Backup and recovery of Oracle databases are a vital part of IT data protection strategies. To meet these backup and recovery challenges, enterprises need proven solution architectures that encompass the best of what EMC and Oracle can offer.

Technology solution

Overview

This solution describes a backup and recovery environment for an Oracle 11g OLTP database. The database was deployed on a CLARiiON CX4-960 and demonstrates the ease and power of integrating EMC storage with Oracle Automatic Storage Management (ASM). This was tested in a two-node Oracle RAC configuration.

Backup and recovery were implemented using Oracle RMAN, SnapView clones, and EMC NetWorker. The backup was deployed over NFS to an EMC Data Domain DD880 deduplication appliance.

The backup process was offloaded to an EMC NetWorker proxy host using Navisphere® SnapView clones. The replica clone copy of the database was mounted to the proxy node, and backups were then executed on the proxy node. This proxy node is also referred to as the "clone mount host."
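The general shape of this offload is sketched below. This is an illustrative sketch only, assuming a generic export name, mount point, and mount options; none of these values are taken from the validated configuration.

    # On the proxy (clone mount host): mount the Data Domain NFS export that will
    # receive the backup data. Hard mounts over TCP with large transfer sizes are
    # a common starting point for backup-to-NFS targets.
    mkdir -p /backup/dd880
    mount -t nfs -o hard,intr,tcp,nfsvers=3,rsize=1048576,wsize=1048576 \
          dd880:/backup /backup/dd880

    # The fractured SnapView clone LUNs are then presented to this host, the clone
    # copy of the database is mounted and opened here, and the backups run on this
    # host, writing to /backup/dd880 instead of loading the production RAC nodes.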



Objectives

The following table describes the key components and their configuration details within this environment.

Component: Storage array
Description: CLARiiON CX4-960
Configuration: Four BE 4 Gb FC ports and eight FE 4 Gb FC ports per storage processor; nine DAEs with five 146 GB and 130 x 300 GB disk drives
Software: FLARE® 04.29.000.5.003

Component: Deduplication appliance
Description: Data Domain DD880
Configuration: Two 10 GbE optical NICs; two SAS HBAs for disk connectivity; three ES20 disk shelves with 48 disks
Software: DDOS 4.7.1.3

Component: Database
Description: Oracle 11g OLTP database system
Configuration: 1 TB Oracle 11g OLTP database on a two-node RAC using ASM
Software: Oracle 11g Database/Cluster/ASM version 11.1.0.7

Component: Backup manager
Description: EMC NetWorker
Configuration: NetWorker server, dedicated storage node, and clients
Software: NetWorker 7.6; NetWorker Module for Oracle (NMO) 5.0

This document provides guidelines on how to configure and set up an Oracle 11g OLTP database with Data Domain deduplication storage systems. The solution demonstrates the benefits of deduplication in an Oracle backup environment. The backup schedule used only level 0 (full) backups. Level 1 (incremental) backups were not used in the schedule because, when target deduplication is deployed, only unique, new data is written to disk. Therefore, the deduplicated backup image does not carry the restore penalty associated with incremental backups, because the entire backup image is always available.
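For illustration, an all-full schedule of this kind reduces, on the RMAN side, to repeatedly running a level 0 backup. The run block below is a minimal sketch assuming plain disk channels pointed at a Data Domain NFS mount; the mount path, channel count, and format strings are assumptions, and in the validated solution the backups are driven through NetWorker and the NetWorker Module for Oracle rather than direct disk channels.

    # Minimal sketch: a level 0 (full) backup written to an NFS-mounted
    # Data Domain file system. Paths and channel count are illustrative only.
    rman target / <<'EOF'
    RUN {
      ALLOCATE CHANNEL c1 DEVICE TYPE DISK FORMAT '/backup/dd880/%d_%U.bkp';
      ALLOCATE CHANNEL c2 DEVICE TYPE DISK FORMAT '/backup/dd880/%d_%U.bkp';
      BACKUP INCREMENTAL LEVEL 0 DATABASE PLUS ARCHIVELOG;
      BACKUP CURRENT CONTROLFILE FORMAT '/backup/dd880/%d_ctl_%U.bkp';
    }
    EOF

Because only new, unique segments are actually stored by the deduplication appliance, repeating this full backup does not multiply the disk space consumed in the way that repeated full backups to conventional disk or tape would.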

This document is not intended to be a comprehensive guide to every aspect of an enterprise Oracle 11g solution. This document describes how to perform the following functions:
• Install and build the infrastructure
• Configure and test CLARiiON storage
• Configure the Oracle 11g environment
• Configure the Data Domain system as a backup target over NFS
• Configure NetWorker



Reference Architecture

Corresponding Reference Architecture

This use case has a corresponding Reference Architecture document that is available on Powerlink® and EMC.com. Refer to EMC Backup and Recovery for Oracle Database 11g—OLTP enabled by EMC CLARiiON, EMC Data Domain, and EMC NetWorker using NFS Reference Architecture for details. If you do not have access to this content, contact your EMC representative.

Reference Architecture diagram

The following diagram depicts the overall physical architecture of the use case.



Validated environment profile

Profile characteristics

The use case was validated with the following environment profile.

Profile characteristic            Value
Database characteristic           OLTP
Benchmark profile                 Swingbench OrderEntry (TPC-C-like benchmark)
Response time                     < 10 ms
Read/Write ratio                  70/30
Database scale                    A Swingbench load that keeps the system running within agreed performance limits
Size of database                  1 TB
Number of databases               1
Array drives: size and speed      300 GB; 15k rpm

Hardware and software resources

Hardware

The hardware used to validate the use case is listed below.

• Storage array (qty 1): CLARiiON CX4-960 with nine DAEs; 5 x 146 GB FC drives; 126 x 300 GB FC drives; 4 x 300 GB hot spares
• SAN (qty 2): 4 Gb-capable FC switch, 64 ports
• Deduplication appliance (qty 1): Data Domain DD880 with two 10 GbE optical NICs; two SAS HBAs for disk connectivity; three ES20 disk shelves with 48 disks
• Oracle database node (qty 2): Four Quad-Core Xeon E7330 processors, 2.4 GHz, 6 MB cache, 1066 MHz FSB, 32 GB RAM; two 73 GB 10k internal disks; two dual-port 4 Gb Emulex LP11002E HBAs
• Proxy node, mount host (qty 1): Four Quad-Core Xeon E7330 processors, 2.4 GHz, 6 MB cache, 1066 MHz FSB, 32 GB RAM; two 73 GB 10k internal disks; two dual-port 4 Gb Emulex LP11002E HBAs; two 10 Gigabit XF SR Server Adapters
• Navisphere management server / NetWorker server (qty 1): Two Quad-Core processors, 1.86 GHz, 16 GB RAM; two 4 Gb Emulex LP11002E HBAs


• Network, backup (qty 1): Brocade TurboIron 24
• Network, management (qty 2): Cisco Catalyst 3750G

Software

The software used to validate the use case is listed below.

• RedHat Linux 5.3 – OS for database nodes
• Microsoft Windows 2003 SP2 – OS for the Navisphere management server
• Oracle Database/Cluster/ASM 11g Release 1 (11.1.0.7.0) – database, cluster software, and volume management
• Oracle ASMLib 2.0 – support library for ASM
• Swingbench 2.3 – OLTP database benchmark
• Orion 10.2 – the Oracle I/O Numbers Calibration Tool, designed to simulate Oracle I/O workloads
• FLARE operating environment 04.29.000.5.003
• Navisphere Management Suite – includes Access Logix and Navisphere Agent
• Navisphere Analyzer 6.29.0.6.34 – Analyzer enabler
• SnapView 6.29.0.6.34.1 – SnapView enabler
• PowerPath® 5.3 – multipathing software
• DDOS 4.7.1.3 – Data Domain OS
• NetWorker 7.6 – backup and recovery suite
• NetWorker Module for Oracle 5.0 – NetWorker Oracle integration
• Brocade TurboIron software 04.1.00c – 10 GbE network
• Cisco IOS 12.2 – network
• Fabric OS 6.2.0g – SAN


Prerequisites and supporting documentation

Technology

It is assumed the reader has a general knowledge of:
• EMC CLARiiON
• EMC Data Domain
• EMC NetWorker
• Oracle Database
• Red Hat Linux

Supporting documents

The following documents, located on Powerlink.com, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access to this content, contact your EMC representative.
• EMC CLARiiON CX4-960 Setup Guide
• EMC Navisphere Manager Help (HTML)
• EMC PowerPath Product Guide
• EMC CLARiiON Database Storage Solution: Oracle 10g/11g with CLARiiON Storage Replication Consistency
• EMC CLARiiON Server Support Products for Linux Servers Installation Guide
• EMC Support Matrix
• Data Domain OS Initial Configuration Guide
• Data Domain OS Administration Guide
• NetWorker Installation Guide
• NetWorker Administration Guide
• NetWorker Module for Oracle Administration Guide
• NetWorker Module for Oracle Installation Guide
• EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using Fibre Channel Proven Solution Guide

Third-party documents

The following documents are available on third-party websites.
• Oracle Database Installation Guide 11g Release 1 (11.1) for Linux
• Oracle Real Application Clusters Installation Guide 11g Release 1 (11.1) for Linux
• Oracle Clusterware Installation Guide 11g Release 1 (11.1) for Linux
• Oracle Database Backup and Recovery User's Guide
• Orion: Oracle I/O Numbers Calibration Tool
• Why Are Datafiles Being Written To During Hot Backup? (Doc ID: 1050932.6)
• What Happens When A Tablespace/Database Is Kept In Begin Backup Mode (Doc ID: 469950.1)


Terminology

Terms and definitions

This section defines the terms used in this document.

Term      Definition
ASM       Automatic Storage Management
BE        Back End
DAE       Disk Array Enclosure
DBCA      Database Configuration Assistant
FE        Front End
NFS       Network File System
NMO       NetWorker Module for Oracle
RAC       Real Application Clusters
RPO       Recovery Point Objective
RTO       Recovery Time Objective
SAS       Serial Attached SCSI
SISL      Stream-Informed Segment Layout



Chapter 2: Use Case Components

Introduction

This section briefly describes the key solution components. For details on all of the components that make up the solution architecture, refer to the "Hardware" and "Software" sections.

CLARiiON CX4-960

The EMC CLARiiON CX4 model 960 enables you to handle the most data-intensive workloads and large consolidation projects. The CLARiiON CX4-960 delivers innovative technologies such as Flash drives, Virtual Provisioning, a 64-bit operating system, and multi-core processors.

The CX4's new flexible I/O module design, UltraFlex technology, delivers an easily customizable storage system. Additional connection ports can be added to expand connection paths from servers to the CLARiiON. The CX4-960 can be populated with up to six I/O modules per storage processor.

CLARiiON CX4 is designed to work with Oracle ASM to give DBAs the most comprehensive protection for their Oracle database environment, while maintaining the ease-of-use elements offered by ASM.

EMC Data Domain DD880

EMC Data Domain deduplication storage systems dramatically reduce the amount of disk storage needed to retain and protect enterprise data, including Oracle databases. By identifying redundant data as it is being stored, Data Domain systems provide a storage footprint that is five to 30 times smaller, on average, than the original dataset. Backup data can then be efficiently replicated and retrieved over existing networks for streamlined disaster recovery and consolidated tape operations. This allows Data Domain appliances to integrate seamlessly into Oracle architectures, maintaining existing backup strategies such as Oracle RMAN with no changes to scripts, backup processes, or system architecture.

The Data Domain DD880 is the industry's highest-throughput, most cost-effective, and most scalable deduplication storage solution for disk backup and network-based disaster recovery (DR).

The high-throughput inline deduplication data rate of the DD880 is enabled by the Data Domain Stream-Informed Segment Layout (SISL) scaling architecture. This level of throughput is achieved by a CPU-centric approach to deduplication, which minimizes the number of disk spindles required.

Brocade TurboIron 24X switch

The Brocade TurboIron 24X switch is a compact, high-performance, high-availability, and high-density 10/1 GbE dual-speed solution. It meets mission-critical data center top-of-rack and High-Performance Cluster Computing (HPCC) requirements. An ultra-low-latency, cut-through, non-blocking architecture and low power consumption help provide a cost-effective solution for server or compute-node connectivity.


Navisphere Management Suite

The Navisphere Management Suite of integrated software tools allows you to manage, discover, monitor, and configure EMC CLARiiON systems, as well as control all platform replication applications, from a simple, secure, web-based management console.

Navisphere Management Suite enables you to access and manage all CLARiiON advanced software functionality, including EMC Navisphere Quality of Service Manager, Navisphere Analyzer, SnapView, SAN Copy, and MirrorView. When used with other EMC storage management software, you gain storage resource, SAN, and replication management functionality, for greater efficiency and control over the CLARiiON storage infrastructure.

EMC PowerPath

EMC PowerPath is server-resident software that enhances performance and application availability. PowerPath works with the storage system to intelligently manage I/O paths, and supports multiple paths to a logical device. In this solution, PowerPath manages I/O paths and provides:
• Automatic failover in the event of a hardware failure. PowerPath automatically detects path failure and redirects I/O to another path.
• Dynamic multipath load balancing. PowerPath intelligently distributes I/O requests to a logical device across all available paths, improving I/O performance and reducing management time and downtime by eliminating the need to configure paths statically across logical devices.

PowerPath enables customers to standardize on a single multipathing solution across their entire environment.
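A quick, read-only way to see the paths PowerPath is managing on a host is the standard powermt utility. The commands below are a generic sketch and are not tied to the specific device names in this environment.

    # List every PowerPath pseudo device with its native paths and their state
    # (alive/dead), the owning storage processor, and the load-balancing policy.
    powermt display dev=all

    # Summarize path counts per storage-system port, a quick check that all
    # expected paths (for example, four active and four passive per device in
    # this solution) are present.
    powermt display paths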

EMC NetWorker

EMC NetWorker software is a high-capacity, easy-to-use data storage management solution that protects and helps to manage data across an entire network. NetWorker simplifies the storage management process and reduces the administrative burden by automating and centralizing data storage operations.

NetWorker Module for Oracle (NMO)

NMO provides the capability to integrate database and file system backups, relieving the burden of backup from the database administrator while allowing the administrator to retain control of the restore process. NMO includes the following features:
• Automatic database storage management through automated scheduling, autochanger support, electronic tape labeling, and tracking
• Support for backup to a centralized backup server
• High performance through support for multiple, concurrent high-speed devices, such as digital linear tape (DLT) drives

EMC NetWorker, together with the NetWorker Module for Oracle, provides tight integration with Oracle RMAN and seamlessly uses a Data Domain deduplication appliance as an NFS target for RMAN backups.

These elements create a fast, efficient, and nondisruptive backup that offloads the backup burden from the production RAC environment.
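To make that integration concrete, the run block below sketches how RMAN channels are typically allocated through the NetWorker Module for Oracle so that backup pieces flow to NetWorker, and from there to the Data Domain system. The server name, volume pool, and channel count are illustrative assumptions, not the validated settings.

    # Hedged sketch: RMAN channels allocated through NMO (the SBT interface).
    # NSR_SERVER and NSR_DATA_VOLUME_POOL values are assumptions for illustration.
    rman target / <<'EOF'
    RUN {
      ALLOCATE CHANNEL t1 DEVICE TYPE 'SBT_TAPE'
        PARMS 'ENV=(NSR_SERVER=networker-srv, NSR_DATA_VOLUME_POOL=DDpool)';
      ALLOCATE CHANNEL t2 DEVICE TYPE 'SBT_TAPE'
        PARMS 'ENV=(NSR_SERVER=networker-srv, NSR_DATA_VOLUME_POOL=DDpool)';
      BACKUP INCREMENTAL LEVEL 0 DATABASE;
      BACKUP CURRENT CONTROLFILE;
    }
    EOF

Because the channels are SBT channels, RMAN itself needs no knowledge of the Data Domain system; NetWorker decides where the save sets land, which is what allows the NFS-backed deduplication target to be introduced without changing existing RMAN scripts.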


EMC SnapView

SnapView is a storage-system-based software application that allows you to create a copy of a LUN by using either clones or snapshots. A clone is an actual copy of a LUN and takes time to create, depending on the size of the source LUN. A snapshot is a virtual point-in-time copy of a LUN and takes only seconds to create. SnapView has the following important benefits:
• Allows full access to a point-in-time copy of your production data with modest impact on performance and without modifying the actual production data.
• For decision support or revision testing, provides a coherent, readable and writeable copy of real production data.
• For backup, practically eliminates the time that production data spends offline or in hot backup mode, and offloads the backup overhead from the production server to another server.
• Provides a consistent replica across a set of LUNs. You can do this by performing a consistent fracture, which is a fracture of more than one clone at the same time, or by creating a fracture when starting a session in consistent mode.
• Provides instantaneous data recovery if the source LUN becomes corrupt. You can perform a recovery operation on a clone by initiating a reverse synchronization, and on a snapshot session by initiating a rollback operation.

Oracle Database 11g Enterprise Edition

Oracle Database 11g Enterprise Edition delivers industry-leading performance, scalability, security, and reliability on a choice of clustered or single servers running Windows, Linux, and UNIX. It provides comprehensive features to easily manage the most demanding transaction processing, business intelligence, and content management applications.

Oracle Database 11g Enterprise Edition comes with a wide range of options to help grow your business and meet users' performance, security, and availability service level expectations.

Oracle Database 11g RAC

Oracle Real Application Clusters (RAC) is an optional feature of Oracle Database 11g Enterprise Edition. Oracle RAC supports the transparent deployment of a single database across a cluster of servers, providing fault tolerance against hardware failures or planned outages. If a node in the cluster fails, Oracle continues running on the remaining nodes. If more processing power is needed, you can add new nodes to the cluster to provide horizontal scaling.

Oracle RAC supports mainstream business applications of all kinds, including Online Transaction Processing (OLTP) and Decision Support System (DSS) workloads.

Oracle ASM

Oracle Automatic Storage Management (ASM) is an integrated database file system and disk manager that reduces the complexity of managing storage for the database. The ASM file system and volume management capabilities are built into the Oracle database kernel.

In addition to providing performance and reliability benefits, ASM can also increase database availability, because disks can be added or removed without shutting down the database. ASM automatically rebalances the database files across an ASM diskgroup after disks have been added or removed.
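For example, growing a diskgroup online is a single SQL statement against the ASM instance; the rebalance then redistributes existing data over the new disk in the background. The diskgroup name, disk label, and rebalance power below are assumptions for illustration.

    # Run against the ASM instance (for example, ORACLE_SID=+ASM1 on node 1).
    # Disk label and diskgroup name are illustrative only.
    sqlplus -s "/ as sysasm" <<'EOF'
    ALTER DISKGROUP DATA ADD DISK 'ORCL:DATA05' REBALANCE POWER 4;
    EOF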


Oracle ASMLib

ASMLib is a support library for the ASM feature of Oracle Database. It is an add-on module that simplifies the management and discovery of ASM disks. ASMLib provides an alternative to the standard operating system interface for ASM to identify and access block devices.

ASMLib is composed of the ASMLib library itself, which is loaded by Oracle at instance startup, and a kernel driver that is loaded into the OS kernel at system boot. The kernel driver version is specific to the OS kernel.
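When PowerPath is in use, the ASMLib disk labels are typically stamped on the PowerPath pseudo devices so that every cluster node sees the same labels. The sketch below is illustrative only; the device names and labels are assumptions, not the ones used in this environment.

    # On one node: label a partition on a PowerPath pseudo device for ASM use.
    /usr/sbin/oracleasm createdisk DATA01 /dev/emcpowera1

    # On the other RAC node(s): pick up the new label and confirm it is visible.
    /usr/sbin/oracleasm scandisks
    /usr/sbin/oracleasm listdisks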

ASMCMD

Oracle database administrators can use the asmcmd utility to query and manage their ASM systems. ASM-related information can be retrieved easily for diagnosing and debugging purposes.
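A couple of typical read-only asmcmd queries are shown below; the diskgroup name is an assumption for illustration.

    # Run as the ASM software owner against the ASM instance.
    asmcmd lsdg          # list diskgroups with their state, redundancy, and free space
    asmcmd ls +DATA      # browse the files stored under the DATA diskgroup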

Oracle Recovery Manager

Oracle Recovery Manager (RMAN) is a command-line and Enterprise Manager-based tool for backing up and recovering an Oracle database. It provides block-level corruption detection during backup and restore. RMAN optimizes performance and space consumption during backup with file multiplexing and backup set compression, and integrates with Oracle Secure Backup and third-party media management products for tape backup.

Swingbench

Swingbench is a publicly available load generator (and benchmark tool) designed to stress test Oracle databases. Swingbench consists of a load generator, a coordinator, and a cluster overview. The software enables a load to be generated and the transactions/response times to be charted.

Swingbench is provided with four benchmarks:
• OrderEntry – a TPC-C-like workload
• Calling Circle – a telco-based self-service workload
• Stress Test – performs simple insert/update/delete/select operations
• DSS – a DSS workload based on the Oracle Sales History schema

The Swingbench workload used in this testing was OrderEntry. The OrderEntry (PL/SQL) workload models the classic order entry stress test and has a profile similar to the TPC-C benchmark. It models an online order entry system, with users being required to log in before purchasing goods.



Chapter 3: Storage Design

Overview

Introduction to storage design

The environment consisted of a two-node Oracle 11g RAC cluster that accessed a single production database. Each cluster node resided on its own server, which is a typical Oracle RAC configuration. The two RAC nodes communicated with each other through a dedicated private network that included a Cisco Catalyst 3750G-48TS switch. This cluster interconnect synchronized the cache across the database instances between user requests. The 10 GbE backup network was created using a Brocade TurboIron 24 switch. A Fibre Channel SAN was provided by two Brocade 4900 switches. EMC PowerPath was used in this solution; it works with the storage system to intelligently manage I/O paths. In this solution, for each server, PowerPath managed four active and four passive I/O paths to each device.

Contents

This chapter contains the following topics:

Topic                                            See Page
CLARiiON storage design and configuration              18
Data Domain                                             23
SAN topology                                            25


CLARiiON storage design and configuration

Design

The CLARiiON CX4-960 uses UltraFlex technology to provide array connectivity. This approach is extremely flexible and allows each CX4 to be tailored to each user's specific needs.

In the CX4 deployed for this use case, each storage processor was populated with four back-end buses to provide 4 Gb connectivity to the DAEs and disk drives. Each storage processor had eight 4 Gb front-end Fibre Channel ports for SAN connectivity. There were also two iSCSI ports on each storage processor that were not used.

Nine DAEs were populated with 130 x 300 GB 15k drives, and five 146 GB drives were also used for the vault. The CLARiiON was configured to house a 1 TB production database and two clone copies of that database. The clone copies were used as follows:
• Gold copy
• Backup copy

Gold copy

At various logical checkpoints within the testing process, the gold copy was refreshed to ensure there was an up-to-date copy of the database available at all times. This ensured that an instantaneous recovery image was always available in the event that any logical corruption occurred during, or as a result of, the testing process. If any issue did occur, a reverse synchronization from the SnapView clone gold copy would have made the data available immediately, thereby avoiding having to rebuild the database.

Backup copy

The backup clone copy was used for NetWorker proxy backups. The clone copy of the database was mounted to the proxy node and the backups were executed on the proxy node. This is also referred to as the "clone mount host."

Configuration

It is a best practice to use ASM external redundancy for data protection when using EMC arrays. The CLARiiON also provides protection against loss of media, as well as transparent failover in the event of a specific disk or component failure.

The following image shows the CLARiiON layout. The CX4-960 deployed for this solution had four 4 Gb Fibre Channel back-end buses for disk connectivity, numbered Bus 0 to Bus 3. Each bus was connected to a number of DAEs (disk array enclosures). DAEs are numbered using the "Bus X Enc Y" nomenclature, so the first enclosure on Bus 0 is known as Bus 0 Enc 0. Each bus had connectivity to both storage processors for failover purposes.

Each enclosure can hold up to 15 disk drives. Each disk drive is numbered in an extension of the bus/enclosure scheme; the first disk in Bus 0 Enclosure 0 is known as disk 0_0_0.


The following image shows how the ASM diskgroups were positioned on the CLARiiON array.

The first enclosure contained the vault area. The first five drives, 0_0_0 through 0_0_4, have a portion of each drive reserved for internal use. This reserved area contained the storage processor boot images as well as the cache vault area. Disks 0_0_11 to 0_0_14 were configured as hot spares.

Disks 0_0_5 to 0_0_9 were configured as RAID Group 0 with 16 LUNs used for the redo logs. These LUNs were then allocated as an ASM diskgroup, named the Redo diskgroup. RAID Group 0 also contained the OCR disk and the voting disk.

The next four enclosures contained three additional ASM diskgroups. The following section explores this in more detail.


ASM diskgroups

The database was built using four distinct ASM diskgroups:
• The Data diskgroup contained all datafiles and the first control file.
• The Online Redo diskgroup contained the online redo logs for the database and a second control file. Ordinarily, Oracle's best practice recommendation is for the redo log files to be placed in the same diskgroup as all the database files (the Data diskgroup in this example). However, it is necessary to separate the online redo logs from the Data diskgroup when planning to recover from split-mirror snap copies, since the current redo log files cannot be used to recover the cloned database.
• The Flash Recovery diskgroup contained the archive logs.
• The Temp diskgroup contained tempfiles.
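As a point of reference, a diskgroup of this kind, using the external redundancy recommended earlier for EMC arrays, can be created with a single statement against the ASM instance. The diskgroup name and disk labels below are assumptions for illustration, not the actual LUNs used in this environment.

    # Minimal sketch: create an external-redundancy diskgroup from ASMLib-labeled disks.
    sqlplus -s "/ as sysasm" <<'EOF'
    CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
      DISK 'ORCL:DATA01', 'ORCL:DATA02', 'ORCL:DATA03', 'ORCL:DATA04';
    EOF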

ASM data area

MetaLUNs were chosen for ease of management and future scalability. As the data grows, and consequently the number of ASM disks increases, ASM incurs an inherent overhead in managing a large number of disks. Therefore, metaLUNs were selected to allow the CLARiiON to manage the request queues for a large number of LUNs.

For the Data diskgroup, four striped metaLUNs were created, each containing four members. The members of each metaLUN were chosen so that each member resided on a different back-end bus, to ensure maximum throughput. The starting LUN for each metaLUN was also carefully selected to avoid all the metaLUNs starting on the same RAID group. This selection criterion avoided starting all the ASM disks on the same set of spindles, and the metaLUN members were alternated to balance LUN residence. This methodology ensured that ASM parallel chunk I/Os would not hit the same spindles at the same time within the metaLUNs when, or if, Oracle performed a parallel table scan.


EMC SnapView

SnapView clones were used to create complete copies of the database. One clone copy was used to offload the backup operations from the production nodes to the proxy node. A second clone copy was used as a gold copy. The following graphic shows an example of a clone LUN's relationship to its source LUN; in this example, the clone information is shown for one of the LUNs in the ASM DATA diskgroup.

SnapView clones create a full bit-for-bit copy of the respective source LUN. A clone was created for each of the LUNs contained within the ASM diskgroups, and all clones were then simultaneously split from their respective sources to provide a point-in-time, content-consistent replica set.

The command naviseccli -h arrayIP snapview -listclonegroup -data1 was used to display information on this clone group. Each of the ASM diskgroup LUNs was added to a clone group, becoming the clone source device. Target LUN clones were then added to the clone group. Each clone group is assigned a unique ID, and each clone gets a unique clone ID within the group. The first clone added has a clone ID of 0100000000000000, and the clone ID increments for each subsequent clone added. The clone ID is then used to specify which clone is selected each time a cloning operation is performed.

As shown above, there are two clones assigned to the clone group. Clone ID 0100000000000000 was used as the gold copy and clone ID 0200000000000000 was used for backups. (The Navisphere Manager GUI also shows this information.) When the clones are synchronized, they can be split (fractured) from the source LUN to provide an independent point-in-time copy of the database.
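The following is a hedged sketch of the sync-and-fracture sequence for the backup clone; the clone group name (data1), the array address, and the clone ID are assumptions carried over from the example above, and the exact options should be verified against the Navisphere CLI reference for the installed FLARE release.

# Incrementally synchronize the backup clone with its source LUN (names assumed)
naviseccli -h arrayIP snapview -syncclone -name data1 -cloneid 0200000000000000

# ...place the database in hot backup mode...

# Fracture the clone to create an independent point-in-time copy
naviseccli -h arrayIP snapview -fractureclone -name data1 -cloneid 0200000000000000

# ...take the database out of hot backup mode and mount the clone LUNs on the proxy node...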


The LUNs used for the clone copies were configured in a similar fashion to the source copy to maintain the required throughput during the backup process. The image below shows the clone relationship for two of the metaLUNs.


Data Domain

Overview

The following sections describe how Data Domain systems ensure data integrity and provide multiple levels of data compression, reliable restores, and multipath configurations. The Data Domain operating system (DD OS) Data Invulnerability Architecture protects against data loss from hardware and software failures.

Data integrity

When writing to disk, the DD OS creates and stores checksums and self-describing metadata for all data received. After writing the data to disk, the DD OS then recomputes and verifies the checksums and metadata. An append-only write policy guards against overwriting valid data.

After a backup completes, a validation process checks that all file segments are logically correct within the file system and that the data on disk is the same as it was before being written to disk.

In the background, the Online Verify operation continuously checks that data on the disks is correct and unchanged since the earlier validation process.

The back-end storage is set up in a double-parity RAID 6 configuration (two parity drives). Additionally, hot spares are configured within the system. Each parity stripe has block checksums to ensure that data is correct. The checksums are used constantly during the Online Verify operation and when data is read from the Data Domain system. With double parity, the system can fix simultaneous errors on up to two disks.

To keep data synchronized during a hardware or power failure, the Data Domain system uses NVRAM (non-volatile RAM) to track outstanding I/O operations. An NVRAM card with fully charged batteries (the typical state) can retain data for a minimum of 48 hours. When reading data back on a restore operation, the DD OS uses multiple layers of consistency checks to verify that restored data is correct.

Data compression

DD OS stores only unique data. Through Global Compression, a Data Domain system pools redundant data from each backup image, and any duplicate data is stored only once. The storage of unique data is invisible to backup software, which sees the entire virtual file system.

DD OS data compression is independent of data format. The data can be structured, for example databases, or unstructured, for example text files, and can come from file systems or raw volumes. Typical compression ratios average 20:1 over many weeks, assuming weekly full and daily incremental backups. A backup that includes many duplicate or similar files (files copied several times with minor changes) benefits the most from compression. Depending on backup volume, size, retention period, and rate of change, the amount of compression can vary.

Data Domain performs inline deduplication only. Inline deduplication ensures:

• Smaller footprint

• Longer retention

• Faster restore

• Faster time to disaster recovery


SISL

Stream-Informed Segment Layout (SISL) enables inline deduplication. SISL identifies 99 percent of duplicate segments in RAM and ensures that all related segments are stored in close proximity on disk for optimal reads.

Multipath and load-balancing configuration

Data Domain systems that have at least two 10 GbE ports support multipath configuration and load balancing. In a multipath configuration on a Data Domain system, each of the two 10 GbE ports on the system is connected to a separate port on the backup server.


SAN topology

The two-node Oracle 11g RAC cluster nodes and the proxy node were cabled and zoned as shown in the following image. Each node contained two dual-port HBAs. The four ports were used to connect the nodes to the CX4-960. CLARiiON best practice dictates that single-initiator soft zoning is used. Each HBA is zoned to both storage processors. This configuration offers the highest level of protection and may also offer higher performance. It entails the use of full-feature PowerPath software.

In this configuration, there are multiple HBAs connected to the host; therefore, there are redundant paths to each storage processor. There is no single point of failure. Data availability is ensured in the event of an HBA, cable, switch, or storage processor failure. Since there are multiple paths per storage processor, this configuration benefits from the PowerPath load-balancing feature and thus provides additional performance.

Oracle layout

The connectivity diagram below shows the two-node Oracle 11g RAC cluster nodes.


NetWorker topology

The EMC NetWorker environment provides the ability to protect your enterprise against the loss of valuable data. In a network environment, where the amount of data grows rapidly, the need to protect data becomes crucial. EMC NetWorker gives you the power and flexibility to meet such a challenge.

A Data Domain system integrates into a NetWorker environment as the storage destination for directed backups. In this solution, the Data Domain system was configured as a number of NFS shares. The NFS shares were configured as advanced file type devices (adv_file). This takes advantage of the speed of disk and integrates easily with a previously configured NetWorker environment.


10 GbE network topology

The 10 GbE backup network was enabled using a Brocade TurboIron 24 switch. The TurboIron is a compact, high-performance, high-availability, and high-density 10/1 GbE dual-speed solution. Variable-length subnet masking was used to ensure that both paths to the Data Domain appliance were used to transport data during the backup phase.
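As a hedged illustration of this layout, each backup path can be reached through a different subnet, so a storage node mounts its Data Domain shares through both 10 GbE interfaces; the addresses and share names below are assumptions for illustration only.

# Path 1: first 10 GbE interface on the Data Domain system (illustrative subnet)
mount -t nfs -o hard,intr,nfsvers=3,proto=tcp,bg 192.168.1.1:/backup/enc06 /dd/backup1

# Path 2: second 10 GbE interface on a different subnet (illustrative)
mount -t nfs -o hard,intr,nfsvers=3,proto=tcp,bg 192.168.2.1:/backup/pm1 /dd/backup2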


Chapter 4: Oracle Database Design

Overview

Introduction to Oracle database design

This chapter provides guidelines on the Oracle database design used for this validated solution. The design and configuration instructions apply to the specific revision levels of components used during the development of the solution.

Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate configuration documentation for the revision levels of the hardware and software components. Version-specific release notes are especially important.

ASM diskgroups

The database was built with four distinct ASM diskgroups (+DATA, +FRA, +REDO, and +TEMP).

ASM Diskgroup    Contents
DATA             Data and index tablespaces, controlfile
FRA              Archive logs
REDO             Online redo log files, controlfile
TEMP             Temporary tablespace

The ASMCMD CLI lists the diskgroups, showing the state of each one.
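For example, a minimal sketch of listing the diskgroups from the ASM environment on one of the RAC nodes; the State, Type, Total_MB, Free_MB, and Name columns of the output confirm that each diskgroup is mounted with external redundancy.

# Run from the ASM/Clusterware environment on a RAC node
asmcmd lsdg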


Control files

The Oracle database in this solution has two control files, each stored in a different ASM diskgroup.

Redo logs

All database changes are written to the redo logs (unless logging is explicitly turned off), which makes the redo logs very write-intensive. To protect against a failure involving the redo log, the Oracle database was created with multiplexed redo logs so that copies of the redo log can be maintained on different disks.

Archive log mode was enabled, so the database automatically creates offline archived copies of the online redo log files. Archive log mode enables online backups and media recovery.

Note

Oracle recommends that archive logging is enabled.
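A hedged, single-instance sketch of enabling archive log mode is shown below; in this RAC configuration the cluster_database parameter and the second instance must also be handled, and the archive destination resolves to the FRA diskgroup.

sqlplus / as sysdba <<'EOF'
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST
EOF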


The previous graphic shows that once archive log mode is enabled, the archive logs are written out to the FRA diskgroup.

Parameter files

A centrally located server parameter file (spfile) stored and managed the database initialization parameters persistently for all RAC instances. Oracle recommends that you create a server parameter file as a dynamic means of maintaining initialization parameters.
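A hedged sketch of creating such a shared spfile in the DATA diskgroup; the database name and pfile path are illustrative assumptions, not values taken from this solution.

sqlplus / as sysdba <<'EOF'
CREATE SPFILE='+DATA/orcl/spfileorcl.ora' FROM PFILE='/home/oracle/initorcl.ora';
EOF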


Swingbench and Datagenerator

Datagenerator is a utility used to create, populate, and load tables with semi-random data. It was used to generate the 1 TB schema. The following image shows the Swingbench Order Entry schema.


The Swingbench Configuration, User Details, and Load tabs (see the following image) enable you to change all of the important attributes that control the size and type of load placed on your server. Among the most useful are:

• Number of Users: The number of sessions that Swingbench creates against the database.

• Min and Max Delay Between Transactions (ms): These values control how long Swingbench puts a session to sleep between transactions.

• Benchmark Run Time: The total time that Swingbench runs the benchmark. After this time has expired, Swingbench automatically logs off the sessions.

This graphic shows a typical example with 120 concurrent sessions.


Chapter 5: Installation and Configuration

Overview

Introduction to installation and configuration

This chapter provides procedures and guidelines for installing and configuring the components that make up the validated solution scenario. The installation and configuration instructions presented in this chapter apply to the specific revision levels of components used during the development of this solution.

Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components planned in the solution. Version-specific release notes are especially important.

Contents

This chapter contains the following topics:

Topic                            See Page
Navisphere                       34
PowerPath                        37
Install Oracle Clusterware       42
Data Domain                      47
NetWorker                        57
Multiplexing                     62


Navisphere

Overview

Navisphere Management Suite enables you to access and manage all CLARiiON advanced software functionality.

Register hosts

The Connectivity Status view in Navisphere, seen in the image below, shows the new host as logged in but not registered.

Install the Navisphere host agent on the host and reboot. The HBAs will then register automatically, as shown in the following image.


The Hosts tab shows the host as unknown and the host agent as unreachable; this is because the host is multi-homed, that is, the host has multiple NICs configured, as shown in the following image.

A multi-homed host has multiple IP addresses on two or more NICs. You can physically connect the host to multiple data links that can be on the same or different networks. When the Navisphere Host Agent is installed on a multi-homed host, the agent, by default, binds to the first NIC in the host. To ensure that the host agent registers successfully with the desired CLARiiON storage system, you need to configure the agent to bind to a specific NIC. To do this, create a file named agentID.txt. Stop the Navisphere agent, then rename or delete the HostIdFile.txt file located in /var/log, as shown in the following image.


Create agentID.txt in root; this file should contain only the fully qualified hostname of the host and the IP address of the HBA/NIC port that the Navisphere agent should use. The agentID.txt file should contain only these two lines and no special characters, as shown in the following image.

Then stop and restart the Navisphere agent; this re-creates the HostIdFile.txt file, binding the agent to the correct NIC. The host now shows as registered correctly with Navisphere, as in the following image.
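A hedged sketch of the sequence described above; the hostname, IP address, and the agent init script name are illustrative assumptions, and the service name should be confirmed for the installed host agent release.

# Stop the host agent and remove the stale binding file (service name assumed)
/etc/init.d/naviagent stop
rm /var/log/HostIdFile.txt

# agentID.txt: line 1 = fully qualified hostname, line 2 = IP of the NIC to bind to
cat > agentID.txt <<'EOF'
racnode1.example.com
10.6.0.21
EOF

# Restart the agent so that it re-registers using the NIC specified above
/etc/init.d/naviagent start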


PowerPath

Overview

EMC PowerPath provides I/O multipath functionality. With PowerPath, a node can access the same SAN volume via multiple paths (HBA ports), which enables both load balancing across the multiple paths and transparent failover between the paths.

PowerPath policy

After PowerPath has been installed and licensed, it is important to set the PowerPath policy to "CLARiiON-Only". The following image shows the powermt display output prior to setting the PowerPath policy. The I/O Path Mode is shown as unlicensed.


Once the PowerPath policy has been set correctly, all paths are alive and licensed. The previous image shows the powermt set policy command and the powermt display command output for CLARiiON LUN 80. It lists the eight paths for this device, all managed by PowerPath. Since SP A owns the LUN, the four paths to SP A are active, and the remaining paths to SP B are passive.
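A hedged sketch of the commands involved follows; the policy keyword (co, the CLARiiON-optimized policy) and the device name are assumptions, so confirm the exact policy value against the documentation for the installed PowerPath release.

# Set the load-balancing policy for all CLARiiON devices (policy keyword assumed)
powermt set policy=co dev=all

# Display the paths for one device; the four paths to the owning SP should be active
powermt display dev=emcpowerac

# Persist the configuration across reboots
powermt save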

All ASM diskgroups are then built using PowerPath pseudo names.

Note

A pseudo name is a platform-specific value assigned by PowerPath to the PowerPath device.


Because of the way in which the SAN devices were discovered on each node, there was a possibility that a pseudo device pointing to a specific LUN on one node might point to a different LUN on another node. The emcpadm command was used to ensure consistent naming of PowerPath devices on all nodes.

The following image shows how to determine the available pseudo names.


The next image shows how to change the pseudo names using the following command, where the placeholders are the existing and the desired pseudo names:

emcpadm renamepseudo -s <source_pseudo_name> -t <target_pseudo_name>
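For example, a hedged sketch of making one node match the others; the device names are illustrative assumptions, and the emcpadm subcommand names should be confirmed for the installed PowerPath release.

# List the pseudo names already in use and those still free on this node
emcpadm getusedpseudos
emcpadm getfreepseudos

# Rename emcpowerb to emcpowerac so that this LUN has the same pseudo name on every node
emcpadm renamepseudo -s emcpowerb -t emcpowerac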

This table shows the PowerPath names associated with the LUNs used in the ASM diskgroups.

Diskgroup Purpose     Diskgroup Name    Path                CLARiiON LUN
Data files            DATA              /dev/emcpowerac     10
                                        /dev/emcpowerad     8
                                        /dev/emcpowerae     2
                                        /dev/emcpoweraf     0
Online Redo Logs      REDO              /dev/emcpowere      65
                                        /dev/emcpowerf      64
                                        /dev/emcpowerg      63
                                        /dev/emcpowerh      62
                                        /dev/emcpoweri      61
                                        /dev/emcpowerj      60
                                        /dev/emcpowerk      59
                                        /dev/emcpowerl      58
                                        /dev/emcpowerm      57
                                        /dev/emcpowern      56
                                        /dev/emcpowero      55
                                        /dev/emcpowerp      54
                                        /dev/emcpowerq      53
                                        /dev/emcpowerr      52
                                        /dev/emcpowers      50


                                        /dev/emcpowert      51
Temp/Undo             TEMP              /dev/emcpoweru      22
                                        /dev/emcpowerv      20
                                        /dev/emcpowerw      16
                                        /dev/emcpowerx      18
Flash Recovery        FRA               /dev/emcpowery      23
                                        /dev/emcpowerz      21
                                        /dev/emcpoweraa     19
                                        /dev/emcpowerab     17

High availability health check

To verify that the hosts and the CLARiiON are set up for high availability, install and run the naviserverutilcli utility on each node to ensure that everything is set up correctly for failover. To run the utility, use the following command:

naviserverutilcli hav -upload -ip 172.

In addition to the standard output, the health check utility also uploads a report to the CLARiiON storage processors that can be retrieved and stored for reference.


Install Oracle Clusterware

Overview

Oracle 11g Clusterware was installed and configured for both production nodes. Below are a number of screenshots taken during the installation, showing the configuration of both RAC nodes.

Cluster installation summary

The image below shows the installation summary screen.

Configure ASM and Oracle 11g software and database

Before configuring Oracle and ASM, EMC recommends reviewing the Oracle Database Installation Guide 11g Release 1 (11.1) for Linux.

The following general guidelines apply when configuring ASM with EMC technology:

• Use multiple diskgroups, preferably a minimum of four, optimally five. Place the Data, Redo, Temp, and FRA in different (separate) diskgroups.

• Use external redundancy instead of ASM mirroring.

• Configure diskgroups so that each contains LUNs of the same size and performance characteristics.

• Distribute ASM diskgroup members over as many spindles as is practical for the site's configuration and operational needs.


Partition the disks

In order to use either file systems or ASM, you must have unused disk partitions available. This section describes how to create the partitions that will be used for new file systems and for ASM.

When partitioning the disks, it is important to align the partitions correctly. Intel-based systems are misaligned by default because of the metadata written by the BIOS. To align the partition correctly and ensure improved performance, use an offset of 64 KB (128 blocks).

This example uses /dev/emcpowera (an empty disk with no existing partitions) to create a single partition for the entire disk.
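A hedged sketch of one way to create the aligned partition with sfdisk, starting the partition at sector 128 (a 64 KB offset, assuming 512-byte sectors); the device follows the example above.

# Create a single Linux partition spanning the disk, starting at sector 128 (64 KB offset)
echo "128,,83" | sfdisk -uS /dev/emcpowera

# Verify the starting sector of the new partition
sfdisk -l -uS /dev/emcpowera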


ASM diskgroup creation

The Oracle DBCA creates the ASM data diskgroup for the ASM instance. You can then create additional diskgroups.

ASM uses mirroring for redundancy. ASM supports these three types of redundancy:

• External redundancy: no ASM mirroring; protection is provided by the storage array.

• Normal redundancy: 2-way mirroring. At least two failure groups are needed.

• High redundancy: 3-way mirroring. At least three failure groups are needed.

EMC recommends using external redundancy, as protection is provided by the CLARiiON CX4-960. Refer to the CLARiiON configuration setup.
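A hedged sketch of creating one of the additional diskgroups with external redundancy from the ASM instance; the partition suffixes are illustrative, and the member devices correspond to the FRA pseudo devices listed earlier.

sqlplus / as sysasm <<'EOF'
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY
  DISK '/dev/emcpowery1',
       '/dev/emcpowerz1',
       '/dev/emcpoweraa1',
       '/dev/emcpowerab1';
EOF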


Database installation

Once the ASM diskgroups were created, Oracle Database 11g 11.1.0.6.0 was installed.


The Oracle environment was patched to 11.1.0.7.0.


Data Domain

Introduction

The Data Domain DD880 integrates easily into existing data centers and can be configured for leading backup and archiving applications using NFS, CIFS, OST, or VTL protocols. This solution is deployed using NFS.

Data Domain Enterprise Manager

The Data Domain appliance was configured with two 10 GbE optical cards for connection to the backup network. The following image shows the Data Domain Enterprise Manager.

Create multiple shares

When integrating a Data Domain appliance into an environment that also has NetWorker and RMAN deployed, it is best practice to create multiple shares on the appliance. You can then access these shares as either NFS or CIFS shares on the NetWorker storage nodes.

Creating the shares involves mounting the appliance "/backup" directory on a server and creating the required directories. The number of directories required is determined by the total number of NetWorker storage nodes that will access the restorer and the total number of streams required by each server. Each NetWorker stream requires an individual device.


1. To mount /backup, create a suitable mount point on the Linux host, for example:

   mkdir /ddr/masbackup

   Then enter the following command:

   mount -t nfs -o hard,intr,nfsvers=3,proto=tcp,bg 192.168.0.1:/backup /ddr/masbackup

2. When /backup is mounted, create the new subdirectories using commands such as:

   mkdir enc06
   mkdir pm1

3. In this use case, the NetWorker server is a Windows 2003 server; therefore a CIFS share is required for this Windows host.

   To set up shares using the GUI, select:

   GUI > Maintenance > Tasks > Launch Configuration Wizard > CIFS


4. Select the authentication method. In this example, Workgroup authentication was used.

5. In the "Enter workgroup name" field, enter the workgroup name and, if applicable, enter the WINS server name in the "WINS Server" field.


6. Add the appropriate backup user name and password.


7. Enter the Backup Server list. In this example, * was used. An asterisk (*) gives access to all clients on the network.

Create a CIFS share

The CLI can then be used to create a CIFS share for use by the NetWorker server.

1. Enter the following command to create a CIFS share:

   cifs share create share enc06 path /backup/enc06 clients 192.168.0.2 writeable enabled users backup


2. Check that the share is available on Windows. Select Start > Run, and enter:

   \\path\dir

   Note

   The devices should not be mounted on Windows; this step is only used to verify the UNC path.

3. Because the CIFS device is on a remote server, it is important that NetWorker has the correct permissions to access the remote device. To achieve this, the NetWorker service must log on with a specific account instead of the default local system account. This account must be the same as the backup user specified earlier.

4. Ensure that the permissions on the share are correct. As the share was created on Linux, root is the owner; therefore permission must be granted to other users and groups.


5. It is then possible to create a new device and label it. Users should not edit device files and directories. This action is not supported, and such editing can cause unpredictable behavior, making it impossible to recover data.


6. Below is a typical device after labeling.

Set up NFS shares

The Oracle RAC nodes and the NetWorker proxy server are all Linux-based; therefore NFS shares are also required.

1. Use the Data Domain GUI to set up shares.


2. Select GUI > Maintenance > Tasks > Launch Configuration Wizard > NFS.

3. In the Backup Server List field, add all servers to the list. NFS shares are then added to the appliance; you specify the client and the path.

4. Enter the following command to add new clients:

   nfs add /backup/pm1 192.168.0.3


5. Display the client list by entering:

   nfs show clients

6. Mount the devices on the Linux host, for example:

   mount -t nfs -o hard,intr,nfsvers=3,proto=tcp,bg 192.168.0.1:/backup /ddr/masbackup

The new devices can then be added to NetWorker.


NetWorker

NetWorker introduction

The following NetWorker components were installed:

• NetWorker server
  - NetWorker Server
  - NetWorker Management Console

• RAC nodes
  - NetWorker storage node
  - NetWorker Client
  - NMO

• Proxy node
  - NetWorker storage node
  - NetWorker Client
  - NMO

NetWorker configuration

Once the NFS shares are mounted on the appropriate servers, NetWorker can mount and label the shares as adv_file type devices.

Because the NFS share is a remote device, the device name format used is similar to the example below; the full path name is preceded by rd=.

rd=tce-r900-enc03.emcweb.ie/dd/backup3


NetWorker then verifies the path to the devices; once verified, the device is available to be labeled. When NetWorker labels an advanced file type device, it automatically creates a secondary device with read-only accessibility. The secondary volume is given a "_readonly" suffix in its name, and NetWorker automounts this device. This enables concurrent operations, such as reading from the read-only device.

The NetWorker wizard was used to configure the client backups on each node.

The NetWorker Module for Oracle (NMO) was installed on each node to enable tight NetWorker integration with Oracle RMAN.


Once the client was added successfully, it was modified to set the following parameters:

• Number of channels

• Backup level

• Control file backup

• Archive redo log backup

• Filesperset

Testing was conducted using different numbers of RMAN channels, and the Data Domain appliance was configured with two 10 GbE optical NICs for connection to the NetWorker storage nodes.

The filesperset parameter was tested at its default value and at one. EMC recommends setting this parameter to one, which ensures that multiplexing is not introduced, because multiplexing has a negative effect on the deduplication rates achieved. This is explained in greater detail in the next section.


The wizard creates the RMAN script, as shown below, which can be modified if required. Refer to "Chapter 6: Testing and Validation" and "Appendix A: Scripts" for more details.


Multiplexing

RMAN multiplexing

When using a deduplication appliance such as the DD880, you should disable multiplexing.

When creating backup sets, RMAN can simultaneously read multiple files from disk and then write their blocks into the same backup set. For example, RMAN can read from two datafiles simultaneously and then combine the blocks from these datafiles into a single backup piece. The combination of blocks from multiple files is called RMAN multiplexing. Like NetWorker multiplexing, RMAN multiplexing has a negative effect on deduplication.

The parameter that controls multiplexing within Oracle is filesperset. The filesperset parameter specifies the number of files that are packaged together and sent on a single channel to a tape device. This has the same effect as mixing bits from many files, and again makes it more difficult to detect segments of data that already exist. Therefore, to take full advantage of data deduplication, it is important to set the filesperset parameter to one.
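A hedged sketch of an RMAN run block that reflects these recommendations; the channel count, NetWorker server name, and volume pool are illustrative assumptions, and in this solution the equivalent script is generated by the NetWorker wizard.

rman target / <<'EOF'
RUN {
  # Channel allocated through the NetWorker module; PARMS values are illustrative
  ALLOCATE CHANNEL ch1 DEVICE TYPE 'SBT_TAPE'
    PARMS 'ENV=(NSR_SERVER=nw-server,NSR_DATA_VOLUME_POOL=DDPool)';
  # FILESPERSET 1 keeps each datafile in its own backup piece, preserving deduplication
  BACKUP INCREMENTAL LEVEL 0 FILESPERSET 1 DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL ch1;
}
EOF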


Chapter 6: Testing and Validation

Overview

Introduction to testing and validation

Storage design is an important element in ensuring the successful development of the EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using NFS solution.

Contents

This section contains the following topic:

Topic                                                              See Page
Section A: Test results summary and resulting recommendations     64



Section A: Test results summary and resulting recommendations

Description of the results summary and conclusions

Backups were run using a backup schedule consisting of level 0 (full) backups only. For the purposes of this solution, Friday close of business was deemed to be the start of the weekend.

Archived redo logs and the control file were also backed up as part of each backup taken during this solution. Backing up the archived redo logs had a significant impact on the overall change rate of the database. The change rate of the database itself was 2 percent. However, because the archived log files were backed up in every backup, the change rate observed during incremental backups was actually much higher, closer to 10 percent.

For this use case, EMC carried out a number of tests on the Oracle 11g OLTP backup and recovery infrastructure. At a high level, these covered:

• Orion validation

• Swingbench

• Validate Swingbench profile

• Backup from production

• SnapView clone copy from production

• Data Domain deduplication

• Restore


Orion validation

Once the disk environment was set up on the CLARiiON CX4-960, the disk configuration was validated using an Oracle toolset called Orion. Orion is the Oracle I/O Numbers Calibration Tool, designed to simulate Oracle I/O workloads without having to create and run an Oracle database. It uses the Oracle database I/O libraries and can simulate OLTP workloads (small I/Os) or data warehouse workloads (large I/Os). Orion is useful for understanding the performance capabilities of a storage system, either to uncover performance issues or to size a new database installation.

Note

Orion is a destructive tool, so it should only be run against raw devices before any database or application is installed.
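A hedged sketch of an Orion run of the kind described; the test name and the LUN list file are assumptions, with mytest.lun containing the raw PowerPath devices to exercise, one per line.

# mytest.lun lists the candidate raw devices, for example /dev/emcpowerac
./orion -run simple -testname mytest -num_disks 40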

This graph shows total throughput on a single node, with four metaLUNs consisting of 40 disks. This demonstrates the desired scaling.


Validate backup source: RAC node or proxy node

The following image shows the processor utilization on a RAC node under the following conditions:

• Swingbench load only
• Swingbench load plus a backup running on the node
• Swingbench load plus a clone sync

This graph shows that using SnapView clones to create a copy of the production database removes much of the backup overhead from the production node. The clone copy is mounted to the proxy host and the backup is then run from the proxy host.

Line 1: shows the total CPU utilization on a RAC node under a Swingbench load simulating a production-like load on the database.

Line 2: shows the overhead incurred when running a backup on the RAC node concurrently with the Swingbench load.

Line 3: shows the overhead incurred when creating a clone copy of the production database while running the Swingbench load.

Pointer 4: is the point at which the clone sync commenced. This was an incremental sync.
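For reference, a command-line Swingbench (charbench) invocation similar to the following sketch can be used to drive an order-entry style OLTP load against the cluster; the configuration file, connect string, user count, and run time shown are illustrative assumptions, not the exact profile used in these tests.

# Hypothetical charbench invocation - all values are placeholders.
./charbench -c oltp_config.xml -cs //rac-scan/orcl -uc 100 -rt 02:00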


MetaLUN response times

The following graphs further illustrate the advantage of offloading the backup from the production node to the proxy node by showing the CLARiiON metaLUN response times. The first graph shows the response time of the metaLUNs assigned to the ASM +DATA disk group while under a Swingbench load.

The second graph shows the response time of the same metaLUNs. As in the previous example, the Swingbench load is running against the cluster; in addition, an RMAN backup initiated by NetWorker is running against Oracle RAC Node 1. The backup reads the same LUNs that are serving the Swingbench load, so the response time is higher for the duration of the backup.


The graph below shows the response time from the CLARiiON CX4-960 for the duration of the backup process. Here, the Swingbench load is running against the RAC cluster. NetWorker initiates synchronization of the clone copy; on completion of the sync, the database is put into hot backup mode. The clones are then fractured, the database is taken out of hot backup mode, and the clones are mounted to the proxy host. NetWorker then initiates the backup from the proxy host. The response time remains steady except for two short periods, explained below.

Pointer 1: The first increase occurs when the clone synchronization is initiated.

Pointer 2: The second increase in response time occurs when the database is put into hot backup mode. The spike occurs because:

• Any dirty data buffers in the database buffer cache are written out to the datafiles and the datafiles are checkpointed.
• The datafile headers are updated with the system change number (SCN) captured when the begin backup command is issued. The SCN is not incremented with checkpoints while a file is in hot backup mode, which lets the recovery process determine which archived redo log files may be required to fully recover the file from that SCN onward.
• The data blocks within the database files continue to be read and written.
• During hot backup, an entire block is written to the redo log files the first time the data block is changed; subsequently, only redo vectors (the changed bytes) are written to the redo logs.

Pointer 3: When the database is taken out of hot backup mode, the datafile headers and SCN are updated.

Pointer 4: The clone copy is then mounted to the proxy node and the RMAN backup is launched from NetWorker using the proxy host, which is a dedicated storage node.


The backup begins at data point 4. Because the clone metaLUNs are made up of a separate and independent group of disks, there is no additional overhead on the production LUNs.

Pointer 5: Backup completes.
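To make the sequence described above concrete, the following is a minimal sketch of the orchestration around the clone-based proxy backup. The array address and clone group names are taken from Appendix A, while the SQL*Plus calls and step ordering are an illustrative outline rather than the exact scripts used in this solution.

# Sketch only - see Appendix A for the complete naviseccli command set.
# Incrementally synchronize the clone groups while the database runs normally.
naviseccli -h 172.30.226.20 snapview -syncclone -name data1 -cloneid 0200000000000000
#   ... repeated for the remaining clone groups ...

# Put the database into hot backup mode so the fractured clones are recoverable.
sqlplus / as sysdba <<EOF
ALTER DATABASE BEGIN BACKUP;
EOF

# Fracture the clones with a consistent fracture across the clone groups,
# then take the database out of hot backup mode and force a log switch.
sqlplus / as sysdba <<EOF
ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;
EOF

# Mount the fractured clone LUNs on the proxy host; NetWorker then starts the
# RMAN backup from the proxy, which is a dedicated storage node.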

Deduplication

The following images show the data stored on the DD880 after five weeks of running the backup schedule. The backup schedule consisted of RMAN level 0 (full) backups only; that is, level 0 backups on the weekend and level 0 backups Monday through Thursday. The daily change rate of the database is approximately 2 percent. However, because the archived log files were also backed up, the change rate observed by the daily backups was much higher, closer to 10 percent.

By eliminating redundant data segments, the Data Domain system allows many more backups to be stored and managed than would normally be possible on a traditional storage server. Completely new data must still be written to disk when it is discovered, but the variable-length segment deduplication capability of the Data Domain system makes identifying identical segments extremely efficient.

The storage saving graph above shows the data written to the DD880 over a five-week period. The backup cycle consisted entirely of RMAN level 0 (full) backups.

Line 1: "Data Written" shows that approximately 24 TB was backed up over the five-week period.

Line 2: "Data Stored" tracks the unique data actually stored on the DD880 after inline deduplication; the remaining redundant data was eliminated. This results in a net saving of 92 percent of the storage space required over the five-week period.

Line 3: "% Reduction" shows the storage saving as a percentage over the five-week period. This corresponds to a deduplication factor of 13:1.
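As a quick check on these figures: a 92 percent reduction on approximately 24 TB written leaves roughly 24 TB × (1 − 0.92) ≈ 1.9 TB actually stored, and 24 TB ÷ 1.9 TB ≈ 13, which is consistent with the reported 13:1 deduplication factor.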


Line 1: shows the deduplication factor of 13:1.

A full-only backup schedule, made practical by a deduplication appliance, eliminates the restore penalty associated with an incremental backup schedule because the entire backup image is always available on the device for any given restore point. However, a backup schedule consisting of only level 0 backups is not always possible or practical.


The charts below show the same database backup cycle, but in this case the backup schedule employed a mix of level 0 (full) and level 1 (incremental) backups. Refer to the EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using Fibre Channel Proven Solution Guide for more details.

Line 1: "Data Written" is much lower in this instance because the backup schedule employed incremental backups during the week; therefore much less data, approximately 10 TB, was sent to the appliance.

Line 2: "Data Stored" remains roughly the same, because the Data Domain appliance identifies and stores only the unique data it receives. Since less redundant data is sent to the appliance, the overall reduction ratio is lower.

Line 3: "% Reduction" shows the storage saving as a percentage over the five-week period.


The graph below shows that during the weekdays, when level 1 (incremental) backups are sent to the appliance, the deduplication rate decreases.

Line 1: shows the deduplication factor of 6:1.

Note

The graphs show the total "Data Written" to the DD880 increasing over time; this is also described as the logical capacity. The "Data Stored" refers to the unique data that is stored on the appliance. The "% Reduction" shows the storage savings gained from using Data Domain.
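These figures can also be read directly on the appliance. As a rough illustration, assuming administrative SSH access to the DD880 (the host name and account are placeholders, and output details vary between DD OS releases), a command along these lines reports the pre-compression and post-compression totals that correspond to "Data Written" and "Data Stored":

# Illustrative only - host name and account are placeholders.
ssh sysadmin@dd880 filesys show compression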

Filesperset parameter

When using a deduplication appliance such as the DD880, it is best practice to ensure that RMAN multiplexing is disabled. The parameter that controls multiplexing within Oracle is filesperset. To take full advantage of data deduplication, it is important to set this parameter to one, as in the sketch below. The graphs that follow show the effect of leaving filesperset (FPS) at its default.
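As a minimal sketch of where the parameter is applied (reduced to a single channel for brevity; the full NetWorker backup script appears in Appendix A), the setting is placed directly on the RMAN BACKUP command:

# Sketch only - one channel shown for brevity.
rman target / <<EOF
RUN {
  ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
  BACKUP
    INCREMENTAL LEVEL 0
    FILESPERSET 1
    FORMAT '%d_%U'
    DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL CH1;
}
EOF

Leaving FILESPERSET at its default allows RMAN to place several datafiles in one backup set, which interleaves their blocks in the backup stream and reduces the number of identical segments the appliance can find.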


Line 1: shows the deduplication rate when the filesperset parameter is set to one. Three weeks into the backup cycle, the deduplication factor is over 11:1.

Line 2: shows the deduplication rate over the same time period when the filesperset parameter is left at its default. The deduplication factor achieved now reaches only 8:1.

Therefore, when the filesperset parameter is left at the default, the percentage storage saving is lower than that achieved when it is set to one.

The following graph shows the effect on the percentage storage saving when the filesperset parameter is set to one versus leaving it at the default.


Line 1: with the filesperset parameter set to one, there is a saving of over 91 percent of the storage requirement.

Line 2: with the filesperset parameter left at the default, the saving is 87 percent.

This clearly demonstrates the effect of the filesperset parameter on deduplication rates. Setting the parameter to one achieves roughly a four-percentage-point improvement in storage capacity savings.

Restore

Data Domain’s Stream-Informed Segment Layout (SISL) technology ensures balanced backup and restore speeds.

The backup schedule uses only full backups every day. This is possible because only unique data is stored on the DD880 appliance. The advantage of this schedule is that when a recovery is required for any point in time, only a single restore is needed; no incremental or differential restores are required. This greatly improves the RTO.

When an incremental backup schedule is implemented, as shown, a multi-stage restore operation can be required. In the worst-case scenario, a restore of Thursday's backup requires a multi-stage restore operation.


You must first restore the full weekend backup; after this restore succeeds, you must restore each weekday's incremental backup, one after the other, before Thursday's data is fully restored.

When using a Data Domain deduplication appliance, it is possible to implement a full-only backup schedule. Only a single restore is then required, regardless of the point in the schedule from which the data must be restored.
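As a minimal sketch of such a single-pass restore in RMAN (the target time is a hypothetical example and only one channel is shown; the tag-based restore actually used in this solution appears in Appendix A):

# Sketch only - timestamp and channel count are placeholders.
rman target / <<EOF
RUN {
  ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
  SET UNTIL TIME "TO_DATE('2010-06-03 18:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
  RELEASE CHANNEL CH1;
}
EOF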


Chapter 7: Conclusion

Overview

Introduction to conclusion

This Proven Solution Guide details an Oracle infrastructure design leveraging an EMC CLARiiON CX4-960 array, EMC Data Domain DD880, and EMC NetWorker. Also included are test results, configuration practices, and recommended Oracle storage design layouts that meet both capacity and consolidation requirements. This document describes the technologies that enable the benefits outlined below.

Conclusion

Traditional hardware compression provides substantial cost savings in Oracle environments. In this solution, however, data deduplication goes further by significantly reducing the amount of data that needs to be stored over an extended period of time. This offers cost savings both from a management standpoint and in the number of disks or tapes a customer requires to achieve a long-term backup strategy.

Data deduplication can fundamentally change the way organizations protect backup and nearline data. Deduplication replaces the repetitive backup practice of tape with a model in which only unique, new data is written to disk. As a result, the deduplicated backup image does not carry the restore penalty associated with incremental backups, because the entire image is always available on the device; this eliminates the need for incremental restores. The test results show that, in an environment using RMAN full backups, a data deduplication ratio of over 13:1, corresponding to a 92 percent saving in the storage required to accommodate the backup data, makes it economically practical to retain save sets for longer periods of time. This reduces the likelihood that a data element must be retrieved from the vault and can significantly improve the RTO.

Although cost savings are generally not the initial reason to consider moving to disk backup and deduplication, financial justification is almost always a prerequisite. With the potential cost savings of disk and deduplication, the justification statement becomes, “we can achieve all of these business benefits and save money.” That is a compelling argument.

The solution meets the business challenges in the following manner:

• Ability to keep applications up 24x7
  • Faster backups and restores – meet more aggressive backup windows, and restore your key applications in minutes, not days
  • Reduced backup windows – minimize backup windows to reduce impact on your application and system availability
• Protect the business information as an asset of the business
  • Reduced business risk – restore data quickly and accurately with built-in hardware redundancy and RAID protection
  • Reduced backup windows – minimize backup windows to reduce impact on your application and system availability


• Efficient use of both infrastructure and people to support the business
  • Improved IT efficiency – save hours of staff time and boost user productivity
  • Correct costs / reduce costs – match infrastructure costs with changing information value via efficient, cost-effective tiered storage

In summary, the solution components, in particular CLARiiON technology, EMC Data Domain, and EMC NetWorker software, provide customers with a backup solution that minimizes both user and business impact: business can continue as usual, as if no backup were taking place. In customer environments where, more than ever, there is a trend toward 24x7 activity, this is a critical differentiator that EMC can offer.

Next steps

EMC can help to accelerate assessment, design, implementation, and management while lowering the implementation risks and costs of a backup and recovery solution for an Oracle Database 11g environment.

To learn more about this and other solutions, contact an EMC representative or visit http://www.emc.com/solutions/application-environment/oracle/index.htm.


Appendix A: Scripts

Clone copy process

The following is an overview of the steps taken to create a clone copy of the database. The clone copy is then mounted to the proxy host prior to backup.

The naviseccli commands below were used to synchronize the proxy clone. It was necessary to perform the clone fracture in two stages to facilitate a log switch after the database was taken out of hot backup mode.

naviseccli -h 172.30.226.20 snapview -syncclone -name data1 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name data2 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name data3 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name data4 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name fra1 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name fra2 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name fra3 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name fra4 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name temp1 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name temp2 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name temp3 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name temp4 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name ocr -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name voting -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo1 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo2 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo3 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo4 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo5 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo6 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo7 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo8 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo9 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo10 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo11 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo12 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo13 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo14 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo15 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo16 -cloneid 0200000000000000

The naviseccli commands below were used to fracture the proxy clone.

naviseccli -h 172.30.226.20 snapview -consistentfractureclones -CloneGroupNameCloneId data1 0200000000000000 data2 0200000000000000 data3 0200000000000000 data4 0200000000000000 temp1 0200000000000000 temp2 0200000000000000 temp3 0200000000000000 temp4 0200000000000000 ocr 0200000000000000 voting 0200000000000000 redo1 0200000000000000 redo2 0200000000000000 redo3 0200000000000000 redo4 0200000000000000 redo5 0200000000000000 redo6 0200000000000000 redo7 0200000000000000 redo8 0200000000000000 redo9 0200000000000000 redo10 0200000000000000 redo11 0200000000000000 redo12 0200000000000000 redo13 0200000000000000 redo14 0200000000000000 redo15 0200000000000000 redo16 0200000000000000

naviseccli -h 172.30.226.20 snapview -consistentfractureclones -CloneGroupNameCloneId fra1 0200000000000000 fra2 0200000000000000 fra3 0200000000000000 fra4 0200000000000000 -o
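The database-side steps that surround this two-stage fracture are not shown in the listing above. The following is a minimal sketch of that sequence, assuming a SYSDBA connection from the host driving the scripts; the exact scripting used in this solution may differ.

# Sketch only - illustrates why the fracture is performed in two stages.
sqlplus / as sysdba <<EOF
ALTER DATABASE BEGIN BACKUP;
EOF

# Stage 1: consistently fracture the data, temp, OCR, voting, and redo clone groups
#          (first -consistentfractureclones command above).

sqlplus / as sysdba <<EOF
ALTER DATABASE END BACKUP;
ALTER SYSTEM ARCHIVE LOG CURRENT;
EOF

# Stage 2: fracture the FRA clone groups (second command above) so that the FRA
#          clones contain the archived log produced by the log switch.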

NetWorker RMAN backup script

The RMAN script below is a typical example of one used to generate backups through the NetWorker console.

This example shows an eight-channel incremental level 0 backup through the NetWorker media management (SBT_TAPE) interface. Each backup was assigned a tag, which was later used as part of the restore process.

RUN {
ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH2 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH3 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH4 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH5 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH6 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH7 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH8 TYPE 'SBT_TAPE';
BACKUP
INCREMENTAL LEVEL 0
FILESPERSET 1
FORMAT '%d_%U'
TAG 'RUN529'
DATABASE PLUS ARCHIVELOG;
BACKUP CONTROLFILECOPY '+FRA/ORCL/control_backup' TAG 'RUN529_CTL';
RELEASE CHANNEL CH1;
RELEASE CHANNEL CH2;
RELEASE CHANNEL CH3;
RELEASE CHANNEL CH4;
RELEASE CHANNEL CH5;
RELEASE CHANNEL CH6;
RELEASE CHANNEL CH7;
RELEASE CHANNEL CH8;
}
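The BACKUP CONTROLFILECOPY command above assumes that a control file copy already exists at '+FRA/ORCL/control_backup'. As an illustration of how such a copy can be created or refreshed before the backup runs (the exact mechanism used in this solution is not documented here):

# Sketch only - creates or refreshes the control file copy referenced by the backup script.
sqlplus / as sysdba <<EOF
ALTER DATABASE BACKUP CONTROLFILE TO '+FRA/ORCL/control_backup' REUSE;
EOF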


Oracle RMAN restore script

The restore process consisted of first allocating eight channels, then restoring the control file, mounting the database, and performing the restore database command using the tag assigned earlier. Below is a sample restore script.

RUN
{
ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH2 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH3 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH4 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH5 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH6 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH7 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH8 TYPE 'SBT_TAPE';
RESTORE CONTROLFILE FROM TAG 'RUN529_CTL';
ALTER DATABASE MOUNT;
RESTORE DATABASE FROM TAG 'RUN529';
RELEASE CHANNEL CH1;
RELEASE CHANNEL CH2;
RELEASE CHANNEL CH3;
RELEASE CHANNEL CH4;
RELEASE CHANNEL CH5;
RELEASE CHANNEL CH6;
RELEASE CHANNEL CH7;
RELEASE CHANNEL CH8;
}
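The script above finishes with the database mounted and the datafiles restored. Before the database can be used, it still has to be recovered and opened; as a minimal sketch (assuming recovery through the available archived logs, and noting that a restore from a backup control file requires OPEN RESETLOGS):

# Sketch only - applies archived redo and opens the database after the restore above.
rman target / <<EOF
RUN {
  ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
  RECOVER DATABASE;
  RELEASE CHANNEL CH1;
}
ALTER DATABASE OPEN RESETLOGS;
EOF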
