
ibm.com/redbooks

Front cover

Implementing the IBM System Storage SAN Volume Controller V5.1

Install, use, and troubleshoot the SAN Volume Controller

Learn about iSCSI hosts and how to attach them

Understand what solid-state drives have to offer

Jon Tate
Pall Beck
Angelo Bernasconi
Werner Eggli


International Technical Support Organization

Implementing the IBM System Storage SAN Volume Controller V5.1

March 2010

SG24-6423-07


Note: Before using this information and the product it supports, read the information in “Notices” on page xvii.

Eighth Edition (March 2010)

This edition applies to Version 5 Release 1 Modification 0 of the IBM System Storage SAN Volume Controller and is based on pre-GA versions of code.

Note: This book is based on a pre-GA version of a product and might not apply when the product becomes generally available. We recommend that you consult the product documentation or follow-on versions of this book for more current information.

© Copyright International Business Machines Corporation 2010. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.


Contents

Notices . . . xvii
Trademarks . . . xviii

Summary of changes . . . xix
March 2010, Eighth Edition . . . xix

Preface . . . xxi
The team who wrote this book . . . xxi
Now you can become a published author, too! . . . xxiii
Comments welcome . . . xxiii
Stay connected to IBM Redbooks . . . xxiv

Chapter 1. Introduction to storage virtualization . . . 1
1.1 Storage virtualization . . . 2
1.2 User requirements that drive storage virtualization . . . 5
1.3 Conclusion . . . 6

Chapter 2. IBM System Storage SAN Volume Controller . . . 7
2.1 SVC history . . . 8
2.2 Architectural overview . . . 9
2.2.1 SVC virtualization concepts . . . 13
2.2.2 MDisk overview . . . 17
2.2.3 VDisk overview . . . 18
2.2.4 Image mode VDisk . . . 19
2.2.5 Managed mode VDisk . . . 19
2.2.6 Cache mode and cache-disabled VDisks . . . 20
2.2.7 Mirrored VDisk . . . 21
2.2.8 Space-Efficient VDisks . . . 23
2.2.9 VDisk I/O governing . . . 25
2.2.10 iSCSI overview . . . 26
2.2.11 Usage of IP addresses and Ethernet ports . . . 28
2.2.12 iSCSI VDisk discovery . . . 30
2.2.13 iSCSI authentication . . . 30
2.2.14 iSCSI multipathing . . . 31
2.2.15 Advanced Copy Services overview . . . 31
2.2.16 FlashCopy . . . 33
2.3 SVC cluster overview . . . 34
2.3.1 Quorum disks . . . 35
2.3.2 I/O Groups . . . 37
2.3.3 Cache . . . 37
2.3.4 Cluster management . . . 38
2.3.5 User authentication . . . 40
2.3.6 SVC roles and user groups . . . 41
2.3.7 SVC local authentication . . . 42
2.3.8 SVC remote authentication and single sign-on . . . 43
2.4 SVC hardware overview . . . 46
2.4.1 Fibre Channel interfaces . . . 47
2.4.2 LAN interfaces . . . 48
2.5 Solid-state drives . . . 49



2.5.1 Storage bottleneck problem . . . 49
2.5.2 Solid-state drive solution . . . 50
2.5.3 Solid-state drive market . . . 50
2.6 Solid-state drives in the SVC . . . 51
2.6.1 Solid-state drive configuration rules . . . 52
2.6.2 SVC 5.1 supported hardware list, device driver, and firmware levels . . . 55
2.6.3 SVC 4.3.1 features . . . 56
2.6.4 New with SVC 5.1 . . . 56
2.7 Maximum supported configurations . . . 58
2.8 Useful SVC links . . . 59
2.9 Commonly encountered terms . . . 59

Chapter 3. Planning and configuration . . . 65
3.1 General planning rules . . . 66
3.2 Physical planning . . . 67
3.2.1 Preparing your uninterruptible power supply unit environment . . . 68
3.2.2 Physical rules . . . 69
3.2.3 Cable connections . . . 73
3.3 Logical planning . . . 74
3.3.1 Management IP addressing plan . . . 74
3.3.2 SAN zoning and SAN connections . . . 76
3.3.3 iSCSI IP addressing plan . . . 81
3.3.4 Back-end storage subsystem configuration . . . 84
3.3.5 SVC cluster configuration . . . 86
3.3.6 Managed Disk Group configuration . . . 88
3.3.7 Virtual disk configuration . . . 90
3.3.8 Host mapping (LUN masking) . . . 92
3.3.9 Advanced Copy Services . . . 93
3.3.10 SAN boot support . . . 99
3.3.11 Data migration from a non-virtualized storage subsystem . . . 99
3.3.12 SVC configuration backup procedure . . . 100
3.4 Performance considerations . . . 100
3.4.1 SAN . . . 101
3.4.2 Disk subsystems . . . 101
3.4.3 SVC . . . 102
3.4.4 Performance monitoring . . . 102

Chapter 4. SAN Volume Controller initial configuration . . . 103
4.1 Managing the cluster . . . 104
4.1.1 TCP/IP requirements for SAN Volume Controller . . . 104
4.2 System Storage Productivity Center overview . . . 107
4.2.1 IBM System Storage Productivity Center hardware . . . 108
4.2.2 SVC installation planning information for System Storage Productivity Center . . . 109
4.2.3 SVC installation planning information for the HMC . . . 110
4.3 Setting up the SVC cluster . . . 111
4.3.1 Creating the cluster (first time) using the service panel . . . 111
4.3.2 Prerequisites . . . 114
4.3.3 Initial configuration using the service panel . . . 115
4.4 Adding the cluster to the SSPC or the SVC HMC . . . 116
4.4.1 Configuring the GUI . . . 117
4.5 Secure Shell overview and CIM Agent . . . 125
4.5.1 Generating public and private SSH key pairs using PuTTY . . . 126
4.5.2 Uploading the SSH public key to the SVC cluster . . . 129



4.5.3 Configuring the PuTTY session for the CLI . . . 130
4.5.4 Starting the PuTTY CLI session . . . 134
4.5.5 Configuring SSH for AIX clients . . . 136
4.6 Using IPv6 . . . 136
4.6.1 Migrating a cluster from IPv4 to IPv6 . . . 137
4.6.2 Migrating a cluster from IPv6 to IPv4 . . . 141
4.7 Upgrading the SVC Console software . . . 142

Chapter 5. Host configuration . . . 153
5.1 SVC setup . . . 154
5.1.1 Fibre Channel and SAN setup overview . . . 154
5.1.2 Port mask . . . 157
5.2 iSCSI overview . . . 158
5.2.1 Initiators and targets . . . 158
5.2.2 Nodes . . . 158
5.2.3 IQN . . . 158
5.3 VDisk discovery . . . 159
5.4 Authentication . . . 160
5.5 AIX-specific information . . . 162
5.5.1 Configuring the AIX host . . . 162
5.5.2 Operating system versions and maintenance levels . . . 162
5.5.3 HBAs for IBM System p hosts . . . 162
5.5.4 Configuring for fast fail and dynamic tracking . . . 163
5.5.5 Subsystem Device Driver (SDD) Path Control Module (SDDPCM) . . . 165
5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3 . . . 167
5.5.7 Using SDD . . . 170
5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD . . . 172
5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM . . . 172
5.5.10 Using SDDPCM . . . 176
5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM . . . 177
5.5.12 Expanding an AIX volume . . . 177
5.5.13 Removing an SVC volume on AIX . . . 181
5.5.14 Running SVC commands from an AIX host system . . . 181
5.6 Windows-specific information . . . 182
5.6.1 Configuring Windows 2000 Server, Windows Server 2003, and Windows Server 2008 hosts . . . 182
5.6.2 Configuring Windows . . . 182
5.6.3 Hardware lists, device driver, HBAs, and firmware levels . . . 183
5.6.4 Host adapter installation and configuration . . . 183
5.6.5 Changing the disk timeout on Microsoft Windows Server . . . 185
5.6.6 Installing the SDD driver on Windows . . . 185
5.6.7 Installing the SDDDSM driver on Windows . . . 188
5.7 Discovering assigned VDisks in Windows 2000 Server and Windows Server 2003 . . . 190
5.7.1 Extending a Windows 2000 Server or Windows Server 2003 volume . . . 195
5.8 Example configuration of attaching an SVC to a Windows Server 2008 host . . . 200
5.8.1 Installing SDDDSM on a Windows Server 2008 host . . . 200
5.8.2 Installing SDDDSM . . . 203
5.8.3 Attaching SVC VDisks to Windows Server 2008 . . . 205
5.8.4 Extending a Windows Server 2008 volume . . . 211
5.8.5 Removing a disk on Windows . . . 211
5.9 Using the SVC CLI from a Windows host . . . 214
5.10 Microsoft Volume Shadow Copy . . . 215
5.10.1 Installation overview . . . 216



5.10.2 System requirements for the IBM System Storage hardware provider . . . 216
5.10.3 Installing the IBM System Storage hardware provider . . . 216
5.10.4 Verifying the installation . . . 220
5.10.5 Creating the free and reserved pools of volumes . . . 221
5.10.6 Changing the configuration parameters . . . 222
5.11 Specific Linux (on Intel) information . . . 225
5.11.1 Configuring the Linux host . . . 225
5.11.2 Configuration information . . . 225
5.11.3 Disabling automatic Linux system updates . . . 225
5.11.4 Setting queue depth with QLogic HBAs . . . 226
5.11.5 Multipathing in Linux . . . 226
5.11.6 Creating and preparing the SDD volumes for use . . . 231
5.11.7 Using the operating system MPIO . . . 233
5.11.8 Creating and preparing MPIO volumes for use . . . 233
5.12 VMware configuration information . . . 237
5.12.1 Configuring VMware hosts . . . 238
5.12.2 Operating system versions and maintenance levels . . . 238
5.12.3 Guest operating systems . . . 238
5.12.4 HBAs for hosts running VMware . . . 238
5.12.5 Multipath solutions supported . . . 239
5.12.6 VMware storage and zoning recommendations . . . 240
5.12.7 Setting the HBA timeout for failover in VMware . . . 241
5.12.8 Multipathing in ESX . . . 242
5.12.9 Attaching VMware to VDisks . . . 242
5.12.10 VDisk naming in VMware . . . 245
5.12.11 Setting the Microsoft guest operating system timeout . . . 246
5.12.12 Extending a VMFS volume . . . 246
5.12.13 Removing a datastore from an ESX host . . . 248

5.13 Sun Solaris support information . . . 249

5.13.1 Operating system versions and maintenance levels . . . 249
5.13.2 SDD dynamic pathing . . . 249
5.14 Hewlett-Packard UNIX configuration information . . . 250
5.14.1 Operating system versions and maintenance levels . . . 250
5.14.2 Multipath solutions supported . . . 250
5.14.3 Co-existence of SDD and PV Links . . . 250
5.14.4 Using an SVC VDisk as a cluster lock disk . . . 251
5.14.5 Support for HP-UX with greater than eight LUNs . . . 251
5.15 Using SDDDSM, SDDPCM, and SDD Web interface . . . 251
5.16 Calculating the queue depth . . . 252
5.17 Further sources of information . . . 253
5.17.1 Publications containing SVC storage subsystem attachment guidelines . . . 253

Chapter 6. Advanced Copy Services . . . 255
6.1 FlashCopy . . . 256
6.1.1 Business requirement . . . 256
6.1.2 Moving and migrating data . . . 256
6.1.3 Backup . . . 257
6.1.4 Restore . . . 257
6.1.5 Application testing . . . 257
6.1.6 SVC FlashCopy features . . . 257
6.2 Reverse FlashCopy . . . 258
6.2.1 FlashCopy and Tivoli Storage Manager . . . 259
6.3 How FlashCopy works . . . 261



6.4 Implementing SVC FlashCopy . . . 262
6.4.1 FlashCopy mappings . . . 262
6.4.2 Multiple Target FlashCopy . . . 263
6.4.3 Consistency groups . . . 264
6.4.4 FlashCopy indirection layer . . . 266
6.4.5 Grains and the FlashCopy bitmap . . . 266
6.4.6 Interaction and dependency between Multiple Target FlashCopy mappings . . . 267
6.4.7 Summary of the FlashCopy indirection layer algorithm . . . 269
6.4.8 Interaction with the cache . . . 269
6.4.9 FlashCopy rules . . . 270
6.4.10 FlashCopy and image mode disks . . . 270
6.4.11 FlashCopy mapping events . . . 271
6.4.12 FlashCopy mapping states . . . 274
6.4.13 Space-efficient FlashCopy . . . 276
6.4.14 Background copy . . . 277
6.4.15 Synthesis . . . 278
6.4.16 Serialization of I/O by FlashCopy . . . 278
6.4.17 Error handling . . . 278
6.4.18 Asynchronous notifications . . . 280
6.4.19 Interoperation with Metro Mirror and Global Mirror . . . 280
6.4.20 Recovering data from FlashCopy . . . 281
6.5 Metro Mirror . . . 281
6.5.1 Metro Mirror overview . . . 281
6.5.2 Remote copy techniques . . . 282
6.5.3 SVC Metro Mirror features . . . 283
6.5.4 Multiple Cluster Mirroring . . . 284
6.5.5 Metro Mirror relationship . . . 287
6.5.6 Importance of write ordering . . . 288
6.5.7 How Metro Mirror works . . . 291
6.5.8 Metro Mirror process . . . 292
6.5.9 Methods of synchronization . . . 292
6.5.10 State overview . . . 295
6.5.11 Detailed states . . . 298
6.5.12 Practical use of Metro Mirror . . . 301
6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions . . . 302
6.5.14 Metro Mirror configuration limits . . . 302
6.6 Metro Mirror commands . . . 303
6.6.1 Listing available SVC cluster partners . . . 303
6.6.2 Creating the SVC cluster partnership . . . 304
6.6.3 Creating a Metro Mirror consistency group . . . 304
6.6.4 Creating a Metro Mirror relationship . . . 305
6.6.5 Changing a Metro Mirror relationship . . . 305
6.6.6 Changing a Metro Mirror consistency group . . . 306
6.6.7 Starting a Metro Mirror relationship . . . 306
6.6.8 Stopping a Metro Mirror relationship . . . 306
6.6.9 Starting a Metro Mirror consistency group . . . 307
6.6.10 Stopping a Metro Mirror consistency group . . . 307
6.6.11 Deleting a Metro Mirror relationship . . . 307
6.6.12 Deleting a Metro Mirror consistency group . . . 308
6.6.13 Reversing a Metro Mirror relationship . . . 308
6.6.14 Reversing a Metro Mirror consistency group . . . 308
6.6.15 Background copy . . . 309
6.7 Global Mirror overview . . . 309



6.7.1 Intracluster Global Mirror . . . 309
6.7.2 Intercluster Global Mirror . . . 309
6.8 Remote copy techniques . . . 310
6.8.1 Asynchronous remote copy . . . 310
6.8.2 SVC Global Mirror features . . . 311
6.9 Global Mirror relationships . . . 313
6.9.1 Global Mirror relationship between primary and secondary VDisks . . . 313
6.9.2 Importance of write ordering . . . 313
6.9.3 Dependent writes that span multiple VDisks . . . 314
6.9.4 Global Mirror consistency groups . . . 315
6.10 Global Mirror . . . 317
6.10.1 Intercluster communication and zoning . . . 317
6.10.2 SVC cluster partnership . . . 317
6.10.3 Maintenance of the intercluster link . . . 317
6.10.4 Distribution of work among nodes . . . 318
6.10.5 Background copy performance . . . 318
6.10.6 Space-efficient background copy . . . 319
6.11 Global Mirror process . . . 319
6.11.1 Methods of synchronization . . . 319
6.11.2 State overview . . . 322
6.11.3 Detailed states . . . 324
6.11.4 Practical use of Global Mirror . . . 328
6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions . . . 329
6.11.6 Global Mirror configuration limits . . . 329
6.12 Global Mirror commands . . . 329
6.12.1 Listing the available SVC cluster partners . . . 330
6.12.2 Creating an SVC cluster partnership . . . 333
6.12.3 Creating a Global Mirror consistency group . . . 334
6.12.4 Creating a Global Mirror relationship . . . 334
6.12.5 Changing a Global Mirror relationship . . . 334
6.12.6 Changing a Global Mirror consistency group . . . 335
6.12.7 Starting a Global Mirror relationship . . . 335
6.12.8 Stopping a Global Mirror relationship . . . 335
6.12.9 Starting a Global Mirror consistency group . . . 336
6.12.10 Stopping a Global Mirror consistency group . . . 336
6.12.11 Deleting a Global Mirror relationship . . . 336
6.12.12 Deleting a Global Mirror consistency group . . . 337
6.12.13 Reversing a Global Mirror relationship . . . 337
6.12.14 Reversing a Global Mirror consistency group . . . 337

Chapter 7. SAN Volume Controller operations using the command-line interface . . . 339
7.1 Normal operations using CLI . . . 340
7.1.1 Command syntax and online help . . . 340
7.2 Working with managed disks and disk controller systems . . . 340
7.2.1 Viewing disk controller details . . . 340
7.2.2 Renaming a controller . . . 341
7.2.3 Discovery status . . . 342
7.2.4 Discovering MDisks . . . 342
7.2.5 Viewing MDisk information . . . 343
7.2.6 Renaming an MDisk . . . 344
7.2.7 Including an MDisk . . . 345
7.2.8 Adding MDisks to a managed disk group . . . 346
7.2.9 Showing the Managed Disk Group . . . 346



7.2.10 Showing MDisks in a managed disk group . . . 346

7.2.11 Working with Managed Disk Groups . . . 346
7.2.12 Creating a managed disk group . . . 347
7.2.13 Viewing Managed Disk Group information . . . 348
7.2.14 Renaming a managed disk group . . . 348
7.2.15 Deleting a managed disk group . . . 349
7.2.16 Removing MDisks from a managed disk group . . . 349
7.3 Working with hosts . . . 350
7.3.1 Creating a Fibre Channel-attached host . . . 350
7.3.2 Creating an iSCSI-attached host . . . 351
7.3.3 Modifying a host . . . 353
7.3.4 Deleting a host . . . 354
7.3.5 Adding ports to a defined host . . . 354
7.3.6 Deleting ports . . . 355
7.4 Working with VDisks . . . 356
7.4.1 Creating a VDisk . . . 356
7.4.2 VDisk information . . . 358
7.4.3 Creating a Space-Efficient VDisk . . . 358
7.4.4 Creating a VDisk in image mode . . . 359
7.4.5 Adding a mirrored VDisk copy . . . 360
7.4.6 Splitting a VDisk copy . . . 363
7.4.7 Modifying a VDisk . . . 364
7.4.8 I/O governing . . . 365
7.4.9 Deleting a VDisk . . . 367
7.4.10 Expanding a VDisk . . . 367
7.4.11 Assigning a VDisk to a host . . . 368
7.4.12 Showing VDisk-to-host mapping . . . 369
7.4.13 Deleting a VDisk-to-host mapping . . . 370
7.4.14 Migrating a VDisk . . . 370
7.4.15 Migrating a VDisk to an image mode VDisk . . . 371
7.4.16 Shrinking a VDisk . . . 372
7.4.17 Showing a VDisk on an MDisk . . . 373
7.4.18 Showing VDisks using a managed disk group . . . 373
7.4.19 Showing which MDisks are used by a specific VDisk . . . 374
7.4.20 Showing from which Managed Disk Group a VDisk has its extents . . . 374
7.4.21 Showing the host to which the VDisk is mapped . . . 375
7.4.22 Showing the VDisk to which the host is mapped . . . 376
7.4.23 Tracing a VDisk from a host back to its physical disk . . . 376
7.5 Scripting under the CLI for SVC task automation . . . 378
7.6 SVC advanced operations using the CLI . . . 378
7.6.1 Command syntax . . . 378

7.6.2 Organizing window content . . . 379

7.7 Managing the cluster using the CLI . . . 380
7.7.1 Viewing cluster properties . . . 380
7.7.2 Changing cluster settings . . . 381
7.7.3 Cluster authentication . . . 381
7.7.4 iSCSI configuration . . . 382
7.7.5 Modifying IP addresses . . . 383
7.7.6 Supported IP address formats . . . 383
7.7.7 Setting the cluster time zone and time . . . 384

7.7.8 Starting statistics collection . . . 385
7.7.9 Stopping statistics collection . . . 386

7.7.10 Status of copy operation . . . 386



7.7.11 Shutting down a cluster . . . 386
7.8 Nodes . . . 387
7.8.1 Viewing node details . . . 388
7.8.2 Adding a node . . . 388
7.8.3 Renaming a node . . . 390
7.8.4 Deleting a node . . . 390
7.8.5 Shutting down a node . . . 390
7.9 I/O Groups . . . 391
7.9.1 Viewing I/O Group details . . . 391
7.9.2 Renaming an I/O Group . . . 392
7.9.3 Adding and removing hostiogrp . . . 392
7.9.4 Listing I/O Groups . . . 393
7.10 Managing authentication . . . 394
7.10.1 Managing users using the CLI . . . 394
7.10.2 Managing user roles and groups . . . 395
7.10.3 Changing a user . . . 396
7.10.4 Audit log command . . . 396
7.11 Managing Copy Services . . . 397
7.11.1 FlashCopy operations . . . 397
7.11.2 Setting up FlashCopy . . . 398
7.11.3 Creating a FlashCopy consistency group . . . 398
7.11.4 Creating a FlashCopy mapping . . . 399
7.11.5 Preparing (pre-triggering) the FlashCopy mapping . . . 401
7.11.6 Preparing (pre-triggering) the FlashCopy consistency group . . . 402
7.11.7 Starting (triggering) FlashCopy mappings . . . 402
7.11.8 Starting (triggering) the FlashCopy consistency group . . . 404
7.11.9 Monitoring the FlashCopy progress . . . 404
7.11.10 Stopping the FlashCopy mapping . . . 405
7.11.11 Stopping the FlashCopy consistency group . . . 406
7.11.12 Deleting the FlashCopy mapping . . . 406
7.11.13 Deleting the FlashCopy consistency group . . . 407
7.11.14 Migrating a VDisk to a Space-Efficient VDisk . . . 407
7.11.15 Reverse FlashCopy . . . 412
7.11.16 Split-stopping of FlashCopy maps . . . 412
7.12 Metro Mirror operation . . . 413
7.12.1 Setting up Metro Mirror . . . 414
7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4 . . . 415
7.12.3 Creating a Metro Mirror consistency group . . . 416
7.12.4 Creating the Metro Mirror relationships . . . 417
7.12.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri . . . 418
7.12.6 Starting Metro Mirror . . . 419
7.12.7 Starting a Metro Mirror consistency group . . . 420
7.12.8 Monitoring the background copy progress . . . 420
7.12.9 Stopping and restarting Metro Mirror . . . 422
7.12.10 Stopping a stand-alone Metro Mirror relationship . . . 422
7.12.11 Stopping a Metro Mirror consistency group . . . 423
7.12.12 Restarting a Metro Mirror relationship in the Idling state . . . 424
7.12.13 Restarting a Metro Mirror consistency group in the Idling state . . . 424
7.12.14 Changing copy direction for Metro Mirror . . . 425
7.12.15 Switching copy direction for a Metro Mirror relationship . . . 425
7.12.16 Switching copy direction for a Metro Mirror consistency group . . . 426
7.12.17 Creating an SVC partnership among many clusters . . . 427
7.12.18 Star configuration partnership . . . 428



7.13 Global Mirror operation . . . 434
7.13.1 Setting up Global Mirror . . . 435
7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4 . . . 436
7.13.3 Changing link tolerance and cluster delay simulation . . . 437
7.13.4 Creating a Global Mirror consistency group . . . 439
7.13.5 Creating Global Mirror relationships . . . 439
7.13.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri . . . 441
7.13.7 Starting Global Mirror . . . 441
7.13.8 Starting a stand-alone Global Mirror relationship . . . 441
7.13.9 Starting a Global Mirror consistency group . . . 442
7.13.10 Monitoring background copy progress . . . 443
7.13.11 Stopping and restarting Global Mirror . . . 444
7.13.12 Stopping a stand-alone Global Mirror relationship . . . 444
7.13.13 Stopping a Global Mirror consistency group . . . 445
7.13.14 Restarting a Global Mirror relationship in the Idling state . . . 446
7.13.15 Restarting a Global Mirror consistency group in the Idling state . . . 446
7.13.16 Changing direction for Global Mirror . . . 447
7.13.17 Switching copy direction for a Global Mirror relationship . . . 447
7.13.18 Switching copy direction for a Global Mirror consistency group . . . 448
7.14 Service and maintenance . . . 449
7.14.1 Upgrading software . . . 450
7.14.2 Running maintenance procedures . . . 456
7.14.3 Setting up SNMP notification . . . 458

7.14.4 Set syslog event notification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458<br />

7.14.5 Configuring error notification using an e-mail server. . . . . . . . . . . . . . . . . . . . . 459<br />

7.14.6 Analyzing <strong>the</strong> error log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460<br />

7.14.7 License settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461<br />

7.14.8 Listing dumps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 462<br />

7.14.9 Backing up <strong>the</strong> SVC cluster configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466<br />

7.14.10 Restoring <strong>the</strong> SVC cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467<br />

7.14.11 Deleting configuration backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468<br />

7.15 <strong>SAN</strong> troubleshooting and data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468<br />

7.16 T3 recovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468<br />

Chapter 8. <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> operations using <strong>the</strong> GUI. . . . . . . . . . . . . . . . . . . 469<br />

8.1 SVC normal operations using <strong>the</strong> GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470<br />

8.1.1 Organizing on window content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 470<br />

8.1.2 Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475<br />

8.1.3 Help. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475<br />

8.1.4 General housekeeping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476<br />

8.1.5 Viewing progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476<br />

8.2 Working with managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477<br />

8.2.1 Viewing disk controller details. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477<br />

8.2.2 Renaming a disk controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478<br />

8.2.3 Discovery status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479<br />

8.2.4 Managed disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479<br />

8.2.5 MDisk information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479<br />

8.2.6 Renaming an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480<br />

8.2.7 Discovering MDisks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481<br />

8.2.8 Including an MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481<br />

8.2.9 Showing a VDisk using a certain MDisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482<br />

8.3 Working with Managed Disk Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483<br />

8.3.1 Viewing MDisk group information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483<br />

8.3.2 Creating MDGs . . . . . 484
8.3.3 Renaming a managed disk group . . . . . 486
8.3.4 Deleting a managed disk group . . . . . 487
8.3.5 Adding MDisks . . . . . 488
8.3.6 Removing MDisks . . . . . 489
8.3.7 Displaying MDisks . . . . . 490
8.3.8 Showing MDisks in this group . . . . . 491
8.3.9 Showing the VDisks that are associated with an MDisk group . . . . . 492
8.4 Working with hosts . . . . . 493
8.4.1 Host information . . . . . 494
8.4.2 Creating a host . . . . . 495
8.4.3 Fibre Channel-attached hosts . . . . . 495
8.4.4 iSCSI-attached hosts . . . . . 497
8.4.5 Modifying a host . . . . . 499
8.4.6 Deleting a host . . . . . 500
8.4.7 Adding ports . . . . . 501
8.4.8 Deleting ports . . . . . 502
8.5 Working with VDisks . . . . . 504
8.5.1 Using the Viewing VDisks using MDisk window . . . . . 504
8.5.2 VDisk information . . . . . 505
8.5.3 Creating a VDisk . . . . . 505
8.5.4 Creating a Space-Efficient VDisk with autoexpand . . . . . 509
8.5.5 Deleting a VDisk . . . . . 513
8.5.6 Deleting a VDisk-to-host mapping . . . . . 514
8.5.7 Expanding a VDisk . . . . . 514
8.5.8 Assigning a VDisk to a host . . . . . 516
8.5.9 Modifying a VDisk . . . . . 517
8.5.10 Migrating a VDisk . . . . . 518
8.5.11 Migrating a VDisk to an image mode VDisk . . . . . 519
8.5.12 Creating a VDisk Mirror from an existing VDisk . . . . . 521
8.5.13 Creating a mirrored VDisk . . . . . 523
8.5.14 Creating a VDisk in image mode . . . . . 526
8.5.15 Creating an image mode mirrored VDisk . . . . . 529
8.5.16 Migrating to a Space-Efficient VDisk using VDisk Mirroring . . . . . 532
8.5.17 Deleting a VDisk copy from a VDisk mirror . . . . . 534
8.5.18 Splitting a VDisk copy . . . . . 535
8.5.19 Shrinking a VDisk . . . . . 536
8.5.20 Showing the MDisks that are used by a VDisk . . . . . 537
8.5.21 Showing the MDG to which a VDisk belongs . . . . . 538
8.5.22 Showing the host to which the VDisk is mapped . . . . . 538
8.5.23 Showing capacity information . . . . . 538
8.5.24 Showing VDisks mapped to a particular host . . . . . 539
8.5.25 Deleting VDisks from a host . . . . . 540
8.6 Working with solid-state drives . . . . . 540
8.6.1 Solid-state drive introduction . . . . . 540
8.7 SVC advanced operations using the GUI . . . . . 543
8.7.1 Organizing on window content . . . . . 543
8.8 Managing the cluster using the GUI . . . . . 544
8.8.1 Viewing cluster properties . . . . . 544
8.8.2 Modifying IP addresses . . . . . 545
8.8.3 Starting the statistics collection . . . . . 547
8.8.4 Stopping the statistics collection . . . . . 548
8.8.5 Metro Mirror and Global Mirror . . . . . 549

8.8.6 iSCSI . . . . . 549
8.8.7 Setting the cluster time and configuring the Network Time Protocol server . . . . . 549
8.8.8 Shutting down a cluster . . . . . 550
8.9 Manage authentication . . . . . 552
8.9.1 Modify current user . . . . . 553
8.9.2 Creating a user . . . . . 554
8.9.3 Modifying a user role . . . . . 556
8.9.4 Deleting a user role . . . . . 556
8.9.5 User groups . . . . . 557
8.9.6 Cluster password . . . . . 558
8.9.7 Remote authentication . . . . . 558
8.10 Working with nodes using the GUI . . . . . 559
8.10.1 I/O Groups . . . . . 559
8.10.2 Renaming an I/O Group . . . . . 559
8.10.3 Adding nodes to the cluster . . . . . 560
8.10.4 Configuring iSCSI ports . . . . . 563
8.11 Managing Copy Services . . . . . 566
8.12 FlashCopy operations using the GUI . . . . . 566
8.13 Creating a FlashCopy consistency group . . . . . 566
8.13.1 Creating a FlashCopy mapping . . . . . 568
8.13.2 Preparing (pre-triggering) the FlashCopy . . . . . 573
8.13.3 Starting (triggering) FlashCopy mappings . . . . . 574
8.13.4 Starting (triggering) a FlashCopy consistency group . . . . . 574
8.13.5 Monitoring the FlashCopy progress . . . . . 575
8.13.6 Stopping the FlashCopy consistency group . . . . . 576
8.13.7 Deleting the FlashCopy mapping . . . . . 578
8.13.8 Deleting the FlashCopy consistency group . . . . . 579
8.13.9 Migrating between a fully allocated VDisk and a Space-Efficient VDisk . . . . . 580
8.13.10 Reversing and splitting a FlashCopy mapping . . . . . 580
8.14 Metro Mirror operations . . . . . 582
8.14.1 Cluster partnership . . . . . 582
8.14.2 Setting up Metro Mirror . . . . . 584
8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . 585
8.14.4 Creating a Metro Mirror consistency group . . . . . 587
8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri . . . . . 590
8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri . . . . . 594
8.14.7 Starting Metro Mirror . . . . . 597
8.14.8 Starting a stand-alone Metro Mirror relationship . . . . . 597
8.14.9 Starting a Metro Mirror consistency group . . . . . 598
8.14.10 Monitoring background copy progress . . . . . 599
8.14.11 Stopping and restarting Metro Mirror . . . . . 599
8.14.12 Stopping a stand-alone Metro Mirror relationship . . . . . 600
8.14.13 Stopping a Metro Mirror consistency group . . . . . 600
8.14.14 Restarting a Metro Mirror relationship in the Idling state . . . . . 602
8.14.15 Restarting a Metro Mirror consistency group in the Idling state . . . . . 603
8.14.16 Changing copy direction for Metro Mirror . . . . . 604
8.14.17 Switching copy direction for a Metro Mirror consistency group . . . . . 605
8.14.18 Switching the copy direction for a Metro Mirror relationship . . . . . 606
8.15 Global Mirror operations . . . . . 607
8.15.1 Setting up Global Mirror . . . . . 608
8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . 609
8.15.3 Global Mirror link tolerance and delay simulations . . . . . 612
8.15.4 Creating a Global Mirror consistency group . . . . . 614

8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri . . . . . 617
8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri . . . . . 620
8.15.7 Starting Global Mirror . . . . . 624
8.15.8 Starting a stand-alone Global Mirror relationship . . . . . 624
8.15.9 Starting a Global Mirror consistency group . . . . . 625
8.15.10 Monitoring background copy progress . . . . . 626
8.15.11 Stopping and restarting Global Mirror . . . . . 627
8.15.12 Stopping a stand-alone Global Mirror relationship . . . . . 627
8.15.13 Stopping a Global Mirror consistency group . . . . . 628
8.15.14 Restarting a Global Mirror relationship in the Idling state . . . . . 630
8.15.15 Restarting a Global Mirror consistency group in the Idling state . . . . . 631
8.15.16 Changing copy direction for Global Mirror . . . . . 632
8.15.17 Switching copy direction for a Global Mirror consistency group . . . . . 634
8.16 Service and maintenance . . . . . 635
8.17 Upgrading software . . . . . 636
8.17.1 Package numbering and version . . . . . 636
8.17.2 Upgrade status utility . . . . . 636
8.17.3 Precautions before upgrade . . . . . 637
8.17.4 SVC software upgrade test utility . . . . . 638
8.17.5 Upgrade procedure . . . . . 639
8.17.6 Running maintenance procedures . . . . . 645
8.17.7 Setting up error notification . . . . . 647
8.17.8 Setting syslog event notification . . . . . 649
8.17.9 Set e-mail features . . . . . 651
8.17.10 Analyzing the error log . . . . . 655
8.17.11 License settings . . . . . 659
8.17.12 Viewing the license settings log . . . . . 662
8.17.13 Dumping the cluster configuration . . . . . 663
8.17.14 Listing dumps . . . . . 663
8.17.15 Setting up a quorum disk . . . . . 666
8.18 Backing up the SVC configuration . . . . . 668
8.18.1 Backup procedure . . . . . 669
8.18.2 Saving the SVC configuration . . . . . 670
8.18.3 Restoring the SVC configuration . . . . . 672
8.18.4 Deleting the configuration backup files . . . . . 672
8.18.5 Fabrics . . . . . 672
8.18.6 Common Information Model object manager log configuration . . . . . 673

Chapter 9. Data migration . . . . . 675
9.1 Migration overview . . . . . 676
9.2 Migration operations . . . . . 676
9.2.1 Migrating multiple extents (within an MDG) . . . . . 676
9.2.2 Migrating extents off of an MDisk that is being deleted . . . . . 677
9.2.3 Migrating a VDisk between MDGs . . . . . 678
9.2.4 Migrating the VDisk to image mode . . . . . 680
9.2.5 Migrating a VDisk between I/O Groups . . . . . 680
9.2.6 Monitoring the migration progress . . . . . 681
9.3 Functional overview of migration . . . . . 682
9.3.1 Parallelism . . . . . 682
9.3.2 Error handling . . . . . 683
9.3.3 Migration algorithm . . . . . 683
9.4 Migrating data from an image mode VDisk . . . . . 685
9.4.1 Image mode VDisk migration concept . . . . . 685

9.4.2 Migration tips . . . . . 687
9.5 Data migration for Windows using the SVC GUI . . . . . 687
9.5.1 Windows Server 2008 host system connected directly to the DS4700 . . . . . 688
9.5.2 Adding the SVC between the host system and the DS4700 . . . . . 690
9.5.3 Putting the migrated disks onto an online Windows Server 2008 host . . . . . 698
9.5.4 Migrating the VDisk from image mode to managed mode . . . . . 700
9.5.5 Migrating the VDisk from managed mode to image mode . . . . . 702
9.5.6 Migrating the VDisk from image mode to image mode . . . . . 705
9.5.7 Free the data from the SVC . . . . . 709
9.5.8 Put the free disks online on Windows Server 2008 . . . . . 711
9.6 Migrating Linux SAN disks to SVC disks . . . . . 712
9.6.1 Connecting the SVC to your SAN fabric . . . . . 714
9.6.2 Preparing your SVC to virtualize disks . . . . . 715
9.6.3 Move the LUNs to the SVC . . . . . 719
9.6.4 Migrate the image mode VDisks to managed MDisks . . . . . 722
9.6.5 Preparing to migrate from the SVC . . . . . 725
9.6.6 Migrate the VDisks to image mode VDisks . . . . . 728
9.6.7 Removing the LUNs from the SVC . . . . . 729
9.7 Migrating ESX SAN disks to SVC disks . . . . . 732
9.7.1 Connecting the SVC to your SAN fabric . . . . . 733
9.7.2 Preparing your SVC to virtualize disks . . . . . 735
9.7.3 Move the LUNs to the SVC . . . . . 739
9.7.4 Migrating the image mode VDisks . . . . . 742
9.7.5 Preparing to migrate from the SVC . . . . . 745
9.7.6 Migrating the managed VDisks to image mode VDisks . . . . . 747
9.7.7 Remove the LUNs from the SVC . . . . . 748
9.8 Migrating AIX SAN disks to SVC disks . . . . . 751
9.8.1 Connecting the SVC to your SAN fabric . . . . . 753
9.8.2 Preparing your SVC to virtualize disks . . . . . 754
9.8.3 Moving the LUNs to the SVC . . . . . 759
9.8.4 Migrating image mode VDisks to VDisks . . . . . 761
9.8.5 Preparing to migrate from the SVC . . . . . 763
9.8.6 Migrating the managed VDisks . . . . . 766
9.8.7 Removing the LUNs from the SVC . . . . . 767
9.9 Using SVC for storage migration . . . . . 770
9.10 Using VDisk Mirroring and Space-Efficient VDisks together . . . . . 771
9.10.1 Zero detect feature . . . . . 771
9.10.2 VDisk Mirroring With Space-Efficient VDisks . . . . . 773
9.10.3 Metro Mirror and Space-Efficient VDisk . . . . . 779

Appendix A. Scripting . . . . . 785
Scripting structure . . . . . 786
Automated virtual disk creation . . . . . 787
SVC tree . . . . . 790
Scripting alternatives . . . . . 797

Appendix B. Node replacement . . . . . 799
Replacing nodes nondisruptively . . . . . 800
Expanding an existing SVC cluster . . . . . 804
Moving VDisks to a new I/O Group . . . . . 806
Replacing nodes disruptively (rezoning the SAN) . . . . . 807

Appendix C. Performance data and statistics gathering . . . . . 809
SVC performance overview . . . . . 810

Performance considerations . . . . . 810
SVC . . . . . 810
Performance monitoring . . . . . 810
Collecting performance statistics . . . . . 810
Performance data collection and TotalStorage Productivity Center for Disk . . . . . 812

Related publications . . . . . 815
IBM Redbooks publications . . . . . 815
Other publications . . . . . 815
Online resources . . . . . 816
How to get IBM Redbooks publications . . . . . 817
Help from IBM . . . . . 817

Index . . . . . 819



Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.



Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX 5L™
AIX®
developerWorks®
DS4000®
DS6000™
DS8000®
Enterprise Storage Server®
FlashCopy®
GPFS™
IBM Systems Director Active Energy Manager™
IBM®
Power Systems™
Redbooks®
Redbooks (logo)®
Solid®
System i®
System p®
System Storage™
System Storage DS®
System x®
System z®
Tivoli®
TotalStorage®
WebSphere®
XIV®
z/OS®

Emulex, and the Emulex logo are trademarks or registered trademarks of Emulex Corporation.

Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other countries.

QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered trademark in the United States.

ACS, Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S. and other countries.

VMotion, VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.

Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel Xeon, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.


Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

Summary of Changes
for SG24-6423-07
for Implementing the IBM System Storage SAN Volume Controller V5.1
as created or updated on March 30, 2010.

March 2010, Eighth Edition

This revision reflects the addition, deletion, or modification of new and changed information described next.

New information
► Added iSCSI information
► Added Solid® State Drive information

Changed information
► Removed duplicate information
► Consolidated chapters
► Removed dated material





Preface

This IBM® Redbooks® publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC), a virtualization appliance solution that maps virtualized volumes visible to hosts and applications to physical volumes on storage devices. Each server within the storage area network (SAN) has its own set of virtual storage addresses, which are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. Therefore, volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves management of information at the “block” level in a network, enabling applications and servers to share storage devices on a network. This book is intended to allow you to implement the SVC at a 5.1.0 release level with a minimum of effort.

The team who wrote this book

This book was produced by a team of specialists from around the world working at Brocade Communications, San Jose, and the International Technical Support Organization, San Jose Center.

Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he worked in the IBM Technical Support Center, providing Level 2 and 3 support for IBM storage products. Jon has 24 years of experience in storage software and management, services, and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist. He is also the UK Chairman of the Storage Networking Industry Association.

Pall Beck is a SAN Technical Team Lead in IBM Nordic. He has 12 years of experience working with storage and joined the IBM ITD DK in 2005. Prior to working for IBM in Denmark, he worked as an IBM service representative performing hardware installations and repairs for IBM System i®, System p®, and System z® in Iceland. As a SAN Technical Team Lead for ITD DK, he led a team of administrators running several of the largest SAN installations in Europe. His current position involves the creation and implementation of operational standards and aligning best practices throughout the Nordics. Pall has a diploma as an Electronic Technician from Odense Tekniske Skole in Denmark and IR in Reykjavik, Iceland.

Angelo Bernasconi is a Certified ITS Senior Storage and SAN Software Specialist in IBM Italy. He has 24 years of experience in the delivery of maintenance and professional services for IBM Enterprise clients in z/OS® and open systems. He holds a degree in Electronics and his areas of expertise include storage hardware, SAN, storage virtualization, de-duplication, and disaster recovery solutions. He has written extensively about SAN and virtualization products in three IBM Redbooks publications, and he is the Technical Leader of the Italian Open System Storage Professional Services Community.

Werner Eggli is a Senior IT Specialist with IBM Switzerland. He has more than 25 years of experience in Software Development, Project Management, and Consulting concentrating in the Networking and Telecommunication Segment. Werner joined IBM in 2001 and works in pre-sales as a Storage Systems Engineer for Open Systems. His expertise is the design and implementation of IBM Storage Solutions. He holds a degree in Dipl. Informatiker (FH) from Fachhochschule Konstanz, Germany.

We extend our thanks to the following people for their contributions to this project.



There are many people who contributed to this book. In particular, we thank the development and PFE teams in Hursley. Matt Smith was also instrumental in moving any issues along and ensuring that they maintained a high profile.

In particular, we thank the previous authors of this book:

Matt Amanat
Angelo Bernasconi
Steve Cody
Sean Crawford
Sameer Dhulekar
Katja Gebuhr
Deon George
Amarnath Hiriyannappa
Thorsten Hoss
Juerg Hossli
Philippe Jachimczyk
Kamalakkannan J Jayaraman
Dan Koeck
Bent Lerager
Craig McKenna
Andy McManus
Joao Marcos Leite
Barry Mellish
Suad Musovich
Massimo Rosati
Fred Scholten
Robert Symons
Marcus Thordal
Xiao Peng Zhao

We also want to thank the following people for their contributions to previous editions and to those people who contributed to this edition:

John Agombar
Alex Ainscow
Trevor Boardman
Chris Canto
Peter Eccles
Carlos Fuente
Alex Howell
Colin Jewell
Paul Mason
Paul Merrison
Jon Parkes
Steve Randle
Lucy Raw
Bill Scales
Dave Sinclair
Matt Smith
Steve White
Barry Whyte
IBM Hursley

Bill Wiegand
IBM Advanced Technical Support



Dorothy Faurot
IBM Raleigh

Sharon Wang
IBM Chicago

Chris Saul
IBM San Jose

Sangam Racherla
IBM ITSO

A special mention must go to Brocade for their unparalleled support of this residency in terms of equipment and support in many areas throughout. Namely:

Jim Baldyga
Yong Choi
Silviano Gaona
Brian Steffler
Steven Tong
Brocade Communications Systems

Now you can become a published author, too!

Here’s an opportunity to spotlight your skills, grow your career, and become a published author - all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us.

We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

► Use the online Contact us review IBM Redbooks form found at:
ibm.com/redbooks
► Send your comments in an e-mail to:
redbooks@us.ibm.com
► Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400



Stay connected to IBM Redbooks

► Find us on Facebook:
http://www.facebook.com/pages/IBM-Redbooks/178023492563?ref=ts
► Follow us on Twitter:
http://twitter.com/ibmredbooks
► Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
► Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
► Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html



Chapter 1. Introduction to storage virtualization

This chapter defines storage virtualization. It gives a short overview of today’s most critical storage issues and explains how storage virtualization can help you solve these issues.


1.1 Storage virtualization

Storage virtualization is an overused term. People often use it as a buzzword to claim that a product is virtualized, and almost every storage hardware and software product can technically claim to provide a form of block-level virtualization. So, where do we draw the line for actual storage virtualization? Does the fact that a mobile computer has logical volumes that are created from a single physical drive mean that the computer is virtual? Not really.

So, what is storage virtualization? The IBM explanation of storage virtualization is clear:

► Storage virtualization is a technology that makes one set of resources look and feel like another set of resources, preferably with more desirable characteristics.
► It is a logical representation of resources not constrained by physical limitations:
– Hides part of the complexity
– Adds or integrates new function with existing services
– Can be nested or applied to multiple layers of a system

When discussing storage virtualization, it is important to understand that virtualization can be implemented on separate layers in the I/O stack. We have to clearly distinguish between virtualization on the file system layer and virtualization on the block layer, that is, the disk layer. The focus of this book is block-level virtualization, that is, the block aggregation layer. File system virtualization is out of the intended scope of this book.

If you are interested in file system virtualization, refer to IBM General Parallel File System (GPFS) or IBM scale out file services, which is based on GPFS. For more information and an overview of the IBM General Parallel File System (GPFS) Version 3, Release 2 for AIX®, Linux®, and Windows®, go to this Web site:

http://www-03.ibm.com/systems/clusters/software/whitepapers/gpfs_intro.html

For the IBM scale out file services, go to this Web site:

http://www-935.ibm.com/services/us/its/html/sofs-landing.html

The Storage Networking Industry Association's (SNIA) block aggregation model (Figure 1-1 on page 3) provides a good overview of the storage domain and its layers. Figure 1-1 shows the three layers of a storage domain: the file layer, the block aggregation layer, and the block subsystem layer. The model splits the block aggregation layer into three sublayers: block aggregation can be realized within hosts (servers), in the storage network (storage routers and storage controllers), or in storage devices (intelligent disk arrays).

The IBM implementation of a block aggregation solution is the IBM System Storage SAN Volume Controller (SVC), which is implemented as a clustered appliance in the storage network layer. Chapter 2, "IBM System Storage SAN Volume Controller" on page 7 discusses in more depth why IBM chose to implement the SVC in the storage network layer.



Figure 1-1 SNIA block aggregation model

The key concept of virtualization is to decouple the storage (which is delivered by commodity two-way Redundant Array of Independent Disks (RAID) controllers attaching physical disk drives) from the storage functions that servers expect in today's storage area network (SAN) environment.

Decoupling means abstracting the physical location of data from the logical representation that an application on a server uses to access that data. The virtualization engine presents logical entities, which are called volumes, to the user and internally manages the process of mapping each volume to its actual physical location. How this mapping is realized depends on the specific implementation, as does the granularity of the mapping, which can range from a small fraction of a physical disk up to the full capacity of a single physical disk. A single block of information in this environment is identified by its logical unit number (LUN), which identifies the physical disk, and an offset within that LUN, which is known as a logical block address (LBA).

Be aware that the term physical disk, as used in this context, describes a piece of storage that might be carved out of a RAID array in the underlying disk subsystem.

The address space is mapped between the logical entity, which is usually referred to as a virtual disk (VDisk), and the physical disks, which are identified by their LUNs. Throughout this book, we refer to these LUNs, which the storage controllers provide to the virtualization layer, as managed disks (MDisks).
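To make this translation concrete, the following minimal Python sketch (illustrative only, not SVC code; the table contents, disk names, and the 16 MB chunk size are our own assumptions) shows how a virtualization layer can map a VDisk LBA to a (LUN, LBA) pair through a fixed-granularity mapping table:

# Illustrative sketch of block-level address translation (not SVC code).
# The virtual disk's address space is split into fixed-size chunks
# ("extents"); each chunk maps to a (physical LUN, start block) pair.

BLOCK_SIZE = 512                 # bytes per logical block
EXTENT_BLOCKS = 32768            # blocks per extent (16 MB at 512-byte blocks)

# Hypothetical mapping table: extent index -> (LUN, start block on that LUN)
vdisk_map = {
    0: ("mdisk0", 0),
    1: ("mdisk1", 0),
    2: ("mdisk0", EXTENT_BLOCKS),
}

def translate(vdisk_lba: int) -> tuple[str, int]:
    """Translate a VDisk LBA into the backing (LUN, LBA)."""
    extent, offset = divmod(vdisk_lba, EXTENT_BLOCKS)
    lun, lun_start = vdisk_map[extent]
    return lun, lun_start + offset

print(translate(40000))          # -> ('mdisk1', 7232)

In the SVC, this kind of mapping is maintained at extent granularity; 2.2.1, "SVC virtualization concepts" describes how the extent size is chosen.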

Figure 1-2 on page 4 shows an overview of block-level virtualization.



Figure 1-2 Block-level virtualization overview

The server and the application know only about the logical entities, and they access these entities through a consistent interface that is provided by the virtualization layer. Each logical entity presents a common and well-defined set of functions that is independent of where the physical representation is located.

The functions of a VDisk that is presented to a server, such as expanding or reducing the size of a VDisk, mirroring a VDisk to a secondary site, creating a FlashCopy/snapshot, thin provisioning/over-allocating, and so on, are implemented in the virtualization layer and do not rely in any way on the functions that are provided by the disk subsystems that deliver the MDisks. Data that is stored in a virtualized environment is stored in a location-independent way, which allows a user to move or migrate the data, or parts of it, to another place or storage pool, that is, to the place where the data really belongs.

The logical entity can be resized, moved, replaced, replicated, over-allocated, mirrored, migrated, and so on, without any disruption to the server or the application. After you have an abstraction layer in the SAN, you can perform almost any task.

We refer to the following core advantages, which a product such as the SVC can provide over traditional directly attached SAN storage, as the cornerstones of block-level storage virtualization:

► The SVC provides online volume migration while applications are running, which is possibly the greatest advantage of storage virtualization. With online migration while applications are running, you can put your data where it belongs and, if the requirements change over time, move it to the right place or storage pool without impacting your server or application. Implementing a tiered storage environment can provide various storage classes for information life cycle management (ILM), can balance I/O across controllers, and can allow you to add, upgrade, and retire storage; in essence, it allows you to put your data where it really belongs.

► The SVC simplifies storage management by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage (after the initial array setup).
► The SVC provides enterprise-level copy services for existing storage. You can license a function one time and use it everywhere. You can purchase new storage as low-cost RAID "bricks," and the source and target of a copy relationship can be on separate controllers.
► You can increase storage utilization by pooling storage across the SAN.
► You can potentially increase system performance by reducing hot spots, by striping disks across many arrays and controllers, and, in certain implementations, by providing additional caching.

The ability to deliver these functions in a homogeneous way, on a scalable and highly available platform, over any attached storage, and to every attached server is the key challenge for every block-level virtualization solution.

1.2 User requirements that drive storage virtualization

In today's environment, with its emphasis on a smarter planet and a dynamic infrastructure, you need a storage environment that is as flexible as your applications and servers are mobile. Business demands change quickly.

These key client concerns drive storage virtualization:

► Growth in data center costs
► Inability of IT organizations to respond quickly to business demands
► Poor asset utilization
► Poor availability or service levels
► Lack of skilled staff for storage administration

You can see the importance of addressing the complexity of managing storage networks by applying the total cost of ownership (TCO) metric to storage networks. Industry analyses show that storage acquisition costs are only about 20% of the TCO; most of the remaining costs relate to managing the storage system.

How many of the systems that are currently managed through separate interfaces can instead be managed as a single entity? In a non-virtualized storage environment, every system is an island. Even if you have a large system that claims to virtualize, that system is an island that you will need to replace in the future.

With the SVC, you can reduce the number of separate environments that you need to manage, ideally to one. However, depending on whether you have tens or thousands of systems, even reducing that number is a step in the right direction.

The SVC provides a single interface for storage management. Of course, there is an initial effort to set up the disk subsystems; however, all of the day-to-day storage management can be performed on the SVC. For example, you can use the data migration function of the SVC to migrate data as disk subsystems are phased out. The SVC can move the data online and without any impact on your servers.

Also, the virtualization layer offers advanced functions, such as data mirroring and FlashCopy®, so there is no need to purchase them again for each new disk subsystem.



1.3 Conclusion

Today, open systems typically run at significantly less than 50% of the usable capacity that the RAID disk subsystems provide. Measured against the installed raw capacity in the disk subsystems, and depending on the RAID level that is used, utilization is less than 35%. A block-level virtualization solution, such as the SVC, helps you increase that utilization to approximately 75 - 80%.

With the SVC, you do not need to keep and manage free space in each disk subsystem. You do not need to worry about whether there is sufficient free space on the right storage tier or in a single system.

Even if there is enough free space in one system, in a non-virtualized environment that space might not be accessible to a specific server or application due to multipath driver issues. The SVC is able to handle the storage resources that it manages as a single storage pool. Disk space allocation from this pool takes a matter of minutes for every server that is connected to the SVC, because you provision the capacity as needed, without disrupting applications.

Storage virtualization is no longer merely a concept or an unproven technology. All major storage vendors offer storage virtualization products. Using storage virtualization as the foundation for a flexible and reliable storage solution helps a company better align business and IT by optimizing the storage infrastructure and storage management to meet business demands.

The IBM System Storage SAN Volume Controller is a mature, fifth-generation virtualization solution that uses open standards and is consistent with the Storage Networking Industry Association (SNIA) storage model. The SVC is an appliance-based, in-band block virtualization solution in which intelligence, including advanced storage functions, is migrated from individual storage devices to the storage network.

We expect that the use of the SVC will improve the utilization of your storage resources, simplify your storage management, and improve the availability of your applications.



Chapter 2. IBM System Storage SAN Volume Controller

This chapter describes the major concepts of the IBM System Storage SAN Volume Controller (SVC). It covers not only the hardware architecture but also the software concepts. We provide a brief history of the product, and we describe the additional functions that become available with the newest release.



2.1 SVC history

The IBM implementation of block-level storage virtualization, the IBM System Storage SAN Volume Controller (SVC), is based on an IBM project that was initiated in the second half of 1999 at the IBM Almaden Research Center. The project was called COMPASS (COMmodity PArts Storage System). One of its goals was to build a system almost exclusively from off-the-shelf standard parts. Like any enterprise-level storage control system, it had to deliver performance and availability comparable to the highly optimized storage controllers of previous generations. The idea of building a storage control system based on a scalable cluster of lower-performance, Pentium®-based servers, instead of a monolithic architecture of two nodes, is still a compelling idea.

COMPASS also had to address a major challenge of the heterogeneous open systems environment: reducing the complexity of managing storage on block devices.

The first publications covering this project were released to the public in 2003 in the IBM Systems Journal, Vol. 42, No. 2, 2003, "The architecture of a SAN storage control system" by J. S. Glider, C. F. Fuente, and W. J. Scales, which you can read at this Web site:

http://domino.research.ibm.com/tchjr/journalindex.nsf/e90fc5d047e64ebf85256bc80066919c/b97a551f7e510eff85256d660078a12e?OpenDocument

The results of the COMPASS project defined the fundamentals for the product architecture. The announcement of the first release of the IBM System Storage SAN Volume Controller took place in July 2003.

The following releases brought new, more powerful hardware nodes, which approximately doubled the I/O performance and throughput of their predecessors, provided new functions, and offered additional interoperability with new elements in host environments, disk subsystems, and the storage area network (SAN).

Major steps in the product's evolution were:

► SVC Release 2, February 2005
► SVC Release 3, October 2005: New 8F2 node hardware (based on IBM X336, 8 GB cache, 4 x 2 Gb Fibre Channel (FC) ports)
► SVC Release 4.1, May 2006: New 8F4 node hardware (based on IBM X336, 8 GB cache, 4 x 4 Gb FC ports)
► SVC Release 4.2, May 2007:
– New 8A4 entry-level node hardware (based on IBM X3250, 8 GB cache, 4 x 4 Gb FC ports)
– New 8G4 node hardware (based on IBM X3550, 8 GB cache, 4 x 4 Gb FC ports)
► SVC Release 4.3, May 2008

In 2008, the 15,000th SVC engine was shipped by IBM. More than 5,000 SVC systems are in operation worldwide.

With the new release of SVC that is introduced in this book, we get a new generation of hardware nodes. This hardware, which approximately doubles the performance of its predecessors, also provides solid-state drive (SSD) support. New software features are iSCSI support (which is available on all hardware nodes that support the new firmware) and multiple SVC cluster partnerships, which support data replication between the members of a group of up to four SVC clusters.

2.2 Architectural overview

The IBM System Storage SAN Volume Controller is a SAN block aggregation appliance that is designed for attachment to a variety of host computer systems.

Three major approaches are in use today for the implementation of block-level aggregation:

► Network-based: Appliance

The device is a SAN appliance that sits in the data path, and all I/O flows through the device. This kind of implementation is also referred to as symmetric virtualization, or in-band. The device is both target and initiator: it is the target of I/O requests from the host perspective and the initiator of I/O requests from the storage perspective. The redirection is performed by issuing new I/O requests to the storage.

► Switch-based: Split-path

The device is usually an intelligent SAN switch that intercepts I/O requests on the fabric and redirects the frames to the correct storage location; the actual I/O requests are themselves redirected rather than reissued. This kind of implementation is also referred to as asymmetric virtualization, or out-of-band. The data path and the control path are separated, and a specific (preferably highly available and disaster-tolerant) controller outside of the switch holds the metadata and the configuration to manage the split data paths.

► Controller-based

The device is a storage controller that provides an internal switch for external storage attachment. In this approach, the storage controller intercepts and redirects I/O requests to the external storage as it does for internal storage.

Figure 2-1 on page 10 shows the three approaches.



Figure 2-1 Overview of the block-level aggregation architectures

While all of these approaches provide, in essence, the same cornerstones of virtualization, several have interesting side effects. All three approaches can provide the required functionality, although the implementation (especially the switch-based split I/O architecture) can make parts of that functionality more difficult to deliver.

This challenge is especially true for FlashCopy services. Taking a point-in-time clone of a device in a split I/O architecture means that all of the data has to be copied from the source to the target first. The drawback is that the target copy cannot be brought online until the entire copy has completed, that is, minutes or hours later. Consider what this approach means for implementing a sparse flash, which is a FlashCopy without a background copy, where the target disk is populated only with the blocks or extents that were modified after the point in time at which the copy was taken (or for an incremental series of cascaded copies).

Scalability is another issue, because it might be difficult to scale out to n-way clusters of intelligent line cards. A multiway switch design is also difficult to code and implement because of the issues in maintaining fast updates to metadata and keeping the metadata synchronized across all processing blades; the updates must occur at wire speed, or you lose that claim. For the same reason, space-efficient copies and replication are also difficult to implement. Both synchronous and asynchronous replication require a level of buffering of I/O requests; while switches have buffering built in, the number of additional buffers that is required is huge and grows as the link distance increases. Most of today's intelligent line cards do not provide anywhere near this level of local storage. The most common solution is to use an external system to provide the replication services, which means another system to manage and maintain, and which conflicts with the concept of virtualization.



Also, remember that when you choose a split I/O architecture, your virtualization implementation is tied to the specific switch type and hardware that you use, which makes it hard to implement any future changes.

The controller-based approach has high functionality, but it fails in terms of scalability and upgradability. Because of the nature of its design, there is no true decoupling with this approach, which becomes an issue for the life cycle of this solution, that is, of the controller itself. You will be challenged with data migration issues and questions, such as how to reconnect the servers to the new controller and how to reconnect them online without any impact on your applications. Be aware that in this scenario you not only replace a controller but also, implicitly, your entire virtualization solution. So you not only have to replace your hardware, but you also must update or repurchase the licenses for the virtualization feature, advanced copy functions, and so on.

With a network-based appliance solution that is based on a scale-out cluster architecture, life cycle management tasks, such as adding or replacing disk subsystems or migrating data between them, are extremely simple. Servers and applications remain online, data migration takes place transparently on the virtualization platform, and licenses for virtualization and copy services require no update, that is, they cause no additional costs when disk subsystems have to be replaced. Only the network-based appliance solution provides you with an independent and scalable virtualization platform that can provide enterprise-class copy services, is open for future interfaces and protocols, lets you choose the disk subsystems that best fit your requirements, and does not lock you into specific SAN hardware. For these reasons, IBM chose the network-based appliance approach for the implementation of the IBM System Storage SAN Volume Controller.

The SVC has these key characteristics:

► Highly scalable: An easy growth path from two to n nodes (growth occurs in pairs of nodes)
► SAN interface-independent: Currently supports FC and iSCSI, and is also open for future enhancements, such as InfiniBand
► Host-independent: For fixed-block-based open systems environments
► Storage (RAID controller)-independent: An ongoing plan exists to qualify additional types of Redundant Array of Independent Disks (RAID) controllers
► Able to utilize commodity RAID controllers: Also known as "low complexity RAID bricks"
► Able to utilize node-internal disks (solid-state drives)

On the SAN storage that is provided by the disk subsystems, the SVC can offer the following services:

► The ability to create and manage a single pool of storage attached to the SAN
► Block-level virtualization (logical unit virtualization)
► Advanced functions for the entire SAN, such as:
– A large, scalable cache
– Advanced Copy Services: FlashCopy (point-in-time copy), Metro Mirror and Global Mirror (synchronous and asynchronous remote copy), and data migration



This feature list will grow in future releases. This additional layer can provide future features, such as policy-based space management that maps your storage resources based on desired performance characteristics, or the dynamic reallocation of entire virtual disks (VDisks), or parts of a VDisk, according to user-definable performance policies. Extensive functionality becomes possible as soon as you set up the decoupling properly (that is, install an additional layer between the server and the storage).

You can configure SAN-based storage infrastructures by using the SVC with two or more SVC nodes, which are arranged in a cluster. These nodes are attached to the SAN fabric, along with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to "see" the RAID controllers and the hosts to "see" the SVC. The hosts are usually not able to directly "see" or operate on the RAID controllers unless a "split controller" configuration is in use. You can use the zoning capabilities of the SAN switch to create these distinct zones. The assumptions that are made about the SAN fabric are kept limited to make it possible to support a number of separate SAN fabrics with a minimum development effort. Anticipated SAN fabrics include FC and iSCSI over Gigabit Ethernet; other types might follow in the future.

Figure 2-2 shows a conceptual diagram of a storage system that utilizes the SVC. It shows a number of hosts that are connected to a SAN fabric or LAN. In practical implementations that have high availability requirements (the majority of the target clients for the SVC), the SAN fabric "cloud" represents a redundant SAN. A redundant SAN is composed of a fault-tolerant arrangement of two or more counterpart SANs, which provide alternate paths for each SAN-attached device. Both scenarios (using a single network and using two physically separate networks) are supported for iSCSI-based and LAN-based access networks to the SVC. Redundant paths to VDisks can be provided in both scenarios.

Figure 2-2 SVC conceptual overview

A cluster of SVC nodes is connected to the same fabric and presents VDisks to the hosts. These VDisks are created from the MDisks that are presented by the RAID controllers. The fabric contains two distinct zones: a host zone, in which the hosts can see and address the SVC nodes, and a storage zone, in which the SVC nodes can see and address the MDisks/logical unit numbers (LUNs) that are presented by the RAID controllers. Hosts are not permitted to operate on the RAID LUNs directly; all data transfer happens through the SVC nodes. This design is commonly described as symmetric virtualization. Figure 2-3 shows the SVC logical topology.

Figure 2-3 SVC topology overview

For simplicity, Figure 2-3 shows only one SAN fabric and two types of zones. In an actual environment, we recommend using two redundant SAN fabrics; the SVC can be connected to up to four fabrics. You set up zoning for each host, disk subsystem, and fabric. You can learn about zoning details in 3.3.2, "SAN zoning and SAN connections" on page 76.

For iSCSI-based access, using two networks and separating iSCSI traffic within those networks by using a dedicated virtual local area network (VLAN) path for storage traffic prevents any IP interface, switch, or target port failure from compromising the host server's access to the VDisk LUNs.

2.2.1 SVC virtualization concepts

The SVC product provides block-level aggregation and volume management for disk storage within the SAN. In simpler terms, the SVC manages a number of back-end storage controllers and maps the physical storage within those controllers into logical disk images that can be seen by application servers and workstations in the SAN.

The SAN is zoned so that the application servers cannot see the back-end physical storage, which prevents any possible conflict between the SVC and the application servers both trying to manage the back-end storage. The SVC is based on the following virtualization concepts, which are discussed in more detail throughout this chapter.

A node is a single SVC engine, which provides virtualization, cache, and copy services to the SAN. SVC nodes are deployed in pairs to make up a cluster. A cluster can have between one and four SVC node pairs in it, which is a product limit, not an architectural limit.



Each pair of SVC nodes is also referred to as an I/O Group. An SVC cluster can have between one and four I/O Groups. A specific virtual disk (VDisk) is always presented to a host server by a single I/O Group of the cluster.

When a host server performs I/O to one of its VDisks, all of the I/Os for that VDisk are directed to one specific I/O Group in the cluster. During normal operating conditions, the I/Os for a specific VDisk are always processed by the same node of the I/O Group, which is referred to as the preferred node for that VDisk.

Both nodes of an I/O Group act as the preferred node for their own specific subset of the total number of VDisks that the I/O Group presents to the host servers. But both nodes also act as failover nodes for their partner node in the I/O Group; a node takes over the I/O handling from its partner node, if required.

In an SVC-based environment, the I/O handling for a VDisk can therefore switch between the two nodes of the I/O Group. For this reason, it is mandatory for servers that are connected through FC to use multipath drivers that can handle these failover situations.

SVC 5.1 introduces iSCSI as an alternative means of attaching hosts. However, all communication with back-end storage subsystems, and with other SVC clusters, still occurs through FC. With iSCSI, node failover can be handled without a multipath driver installed on the server: an iSCSI-attached server can simply reconnect after a node failover to the original target IP address, which is then presented by the partner node. However, to protect the server against link failures in the network or host bus adapter (HBA) failures, a multipath driver is still mandatory.

The SVC I/O Groups are connected to the SAN so that all application servers that access VDisks from an I/O Group have access to that group. Up to 256 host server objects can be defined per I/O Group; these host server objects can consume VDisks that are provided by this specific I/O Group.

If required, host servers can be mapped to more than one I/O Group of an SVC cluster; therefore, they can access VDisks from separate I/O Groups. You can move VDisks between I/O Groups to redistribute the load between the I/O Groups. With the current release of SVC, however, I/Os to the VDisk that is being moved have to be quiesced for the duration of the move.

The SVC cluster and its I/O Groups view the storage that is presented to the SAN by the back-end controllers as a number of disks, known as managed disks or MDisks. Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually, but not necessarily, provisioned from a RAID array. The application servers, however, do not see the MDisks at all. Instead, they see a number of logical disks, which are known as virtual disks or VDisks, and which are presented by the SVC I/O Groups through the SAN (FC) or LAN (iSCSI) to the servers. A VDisk is storage that is provisioned out of one Managed Disk Group (MDG) or, if it is a mirrored VDisk, out of two MDGs.

An MDG is a collection of up to 128 MDisks, which forms the storage pool out of which VDisks are provisioned. A single cluster can manage up to 128 MDGs. The size of these pools can be changed (expanded or shrunk) at run time without taking the MDG or the VDisks that it provides offline. At any point in time, an MDisk can be a member of only one MDG, with one exception (the image mode VDisk), which is explained later in this chapter.

MDisks that are used in a specific MDG must have the following characteristics:

► They must have the same hardware characteristics, for example, the same RAID type, RAID array size, disk type, and disk revolutions per minute (RPM). Be aware that it is always the weakest element (MDisk) in a chain of elements that defines the maximum strength of that chain (the MDG).
► The disk subsystems that provide the MDisks must have similar characteristics, for example, maximum input/output operations per second (IOPS), response time, cache, and throughput.
► We recommend that you use MDisks of the same size, and thus MDisks that provide the same number of extents, which you need to remember when adding MDisks to an existing MDG. If that is not feasible, check the distribution of the VDisks' extents in that MDG.

For further details, refer to SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, at this Web site:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

VDisks can be mapped to a host to allow a specific server access to a set of VDisks. A host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified names (IQNs) that are defined on the specific server. Note that iSCSI names are internally identified by "fake" WWPNs, that is, by WWPNs that are generated by the SVC. VDisks can be mapped to multiple hosts, for example, a VDisk that is accessed by multiple hosts of a server cluster.

Figure 2-4 shows the relationships of these entities to each other.

Figure 2-4 SVC I/O Group overview
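As a complement to Figure 2-4, the following minimal Python data model (our own simplification for illustration; it is not SVC code or its actual object model, and the instance names and WWPN are hypothetical) captures how these entities relate to each other:

# Illustrative data model of the SVC entities described above (not SVC code).
from dataclasses import dataclass, field

@dataclass
class MDisk:
    name: str
    mode: str = "unmanaged"          # unmanaged | managed | image

@dataclass
class MDiskGroup:                    # MDG: pool of extents built from MDisks
    name: str
    extent_size_mb: int              # chosen at MDG creation time
    mdisks: list[MDisk] = field(default_factory=list)

@dataclass
class VDisk:
    name: str
    io_group: int                    # I/O Group that presents the VDisk
    preferred_node: str              # node that normally handles its I/O
    mdisk_group: MDiskGroup          # pool the extents are provisioned from

@dataclass
class Host:                          # a collection of WWPNs or IQNs
    name: str
    ports: list[str]
    mappings: list[VDisk] = field(default_factory=list)

pool = MDiskGroup("MDG_DS_1", 256, [MDisk("mdisk0", "managed")])
vd = VDisk("vdisk_app01", io_group=0, preferred_node="node1", mdisk_group=pool)
Host("host01", ["10000000C912345A"], [vd])    # hypothetical WWPN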

An MDisk can be provided by a SAN disk subsystem or by the solid-state drives in the SVC nodes themselves. Each MDisk is divided into a number of extents. The size of the extent is selected by the user at the creation time of an MDG and ranges from 16 MB (the default) up to 2 GB.

We recommend that you use the same extent size for all MDGs in a cluster, which is a prerequisite for supporting VDisk migration between two MDGs. If the extent sizes do not match, you must use VDisk Mirroring (see 2.2.7, "Mirrored VDisk" on page 21) as a workaround. For copying (not migrating) the data to a new VDisk in another MDG, you can use SVC Advanced Copy Services.

Figure 2-5 shows the two most popular ways to provision VDisks out of an MDG. Striped mode is the recommended method for most cases. Sequential extent allocation mode might slightly increase the sequential performance for certain workloads.

Figure 2-5 MDG overview

You can allocate the extents for a VDisk in many ways. The process is under full user control at VDisk creation time and can be changed at any time by migrating single extents of a VDisk to another MDisk within the MDG. You can obtain details of how to create VDisks and migrate extents via the GUI or CLI in Chapter 7, "SAN Volume Controller operations using the command-line interface" on page 339, Chapter 8, "SAN Volume Controller operations using the GUI" on page 469, and Chapter 9, "Data migration" on page 675.

SVC limits the number of extents in a cluster. The limit is currently 2^22 (approximately 4 million) extents, and this number might change in future releases. Because the number of addressable extents is limited, the total capacity of an SVC cluster depends on the extent size that is chosen by the user. The capacity numbers that are specified in Table 2-1 for an SVC cluster assume that all defined MDGs have been created with the same extent size.

Table 2-1 Extent size to addressability matrix

Extent size    Maximum cluster capacity    Extent size    Maximum cluster capacity
16 MB          64 TB                       256 MB         1 PB
32 MB          128 TB                      512 MB         2 PB
64 MB          256 TB                      1024 MB        4 PB
128 MB         512 TB                      2048 MB        8 PB

For most clusters, a capacity of 1 - 2 PB is sufficient. We therefore recommend that you use 256 MB or, for larger clusters, 512 MB as the standard extent size.
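The capacities in Table 2-1 follow directly from the 2^22 extent limit. The short Python check below (illustrative only) reproduces the table values:

# Reproduce Table 2-1: cluster capacity = maximum extents * extent size.
MAX_EXTENTS = 2**22          # ~4 million addressable extents per cluster

for ext_mb in (16, 32, 64, 128, 256, 512, 1024, 2048):
    capacity_tb = MAX_EXTENTS * ext_mb / (1024 * 1024)   # MB -> TB
    print(f"{ext_mb:>5} MB extents -> {capacity_tb:>6.0f} TB cluster capacity")

# Output: 16 MB -> 64 TB, ..., 256 MB -> 1024 TB (1 PB), ..., 2048 MB -> 8192 TB (8 PB)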



2.2.2 MDisk overview

The maximum size of an MDisk is 2 TB, and an SVC cluster supports up to 4,096 MDisks. At any point in time, an MDisk is in one of the following three modes:

► Unmanaged MDisk

An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged MDisk is not associated with any VDisks and has no metadata stored on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes. The SVC can see the resource, but the resource is not assigned to a pool, that is, to an MDG.

► Managed MDisk

Managed mode MDisks are always members of an MDG and contribute extents to the pool of extents that is available in the MDG. Zero or more VDisks (if not operated in image mode, which we discuss next) can use these extents. MDisks that operate in managed mode might have metadata extents allocated from them and can be used as quorum disks.

► Image mode MDisk

Image mode provides a direct block-for-block translation from the MDisk to the VDisk by using virtualization. This mode is provided to satisfy three major usage scenarios:

– Image mode allows the virtualization of MDisks that already contain data that was written directly, not through an SVC. It allows a client to insert the SVC into the data path of an existing storage configuration with minimal downtime. Chapter 9, "Data migration" on page 675 provides details of the data migration process.
– Image mode allows a VDisk that is managed by the SVC to be used with the copy services that are provided by the underlying RAID controller. To avoid the loss of data integrity when the SVC is used in this way, it is important that you disable the SVC cache for the VDisk.
– The SVC provides the ability to migrate to image mode, which allows the SVC to export VDisks so that the server can access them directly, without the SVC.

An image mode MDisk is associated with exactly one VDisk. If the size of the (image mode) MDisk is not a multiple of the MDisk Group's extent size, the last extent is a partial extent (see Figure 2-6 on page 18). An image mode VDisk is a pass-through, one-to-one map of its MDisk. It cannot be a quorum disk and does not have any SVC metadata extents allocated on it. Managed mode and image mode MDisks are always members of an MDG.



Figure 2-6 Image mode MDisk overview

It is a best practice, if you work with image mode MDisks, to put them in a dedicated MDG and to use a special name for it (for example, MDG_IMG_xxx). Also remember that the extent size that is chosen for this specific MDG has to be the same as the extent size into which you plan to migrate the data. All of the SVC copy services can be applied to image mode disks.

2.2.3 VDisk overview

The maximum size of a VDisk is 256 TB, and an SVC cluster supports up to 4,096 VDisks. VDisks support the following services:

► You can create and delete a VDisk.
► You can change the size of a VDisk (expand or shrink it).
► VDisks can be migrated (fully or partially) at run time to another MDisk or storage pool (MDG).
► VDisks can be created as fully allocated or as Space-Efficient VDisks. A conversion from a fully allocated to a Space-Efficient VDisk, and vice versa, can be done at run time.
► VDisks can be stored in two MDGs (mirrored) to make them resistant to disk subsystem failures or to improve the read performance.
► VDisks can be mirrored synchronously for distances up to 100 km or asynchronously for longer distances. An SVC cluster can run active data mirrors to a maximum of three other SVC clusters.
► You can use FlashCopy on VDisks. Multiple snapshots and quick restore from snapshots (reverse FlashCopy) are supported.

VDisks have two modes: image mode and managed mode. The state diagram in Figure 2-7 on page 19 shows the state transitions.



Figure 2-7 VDisk state transitions (between the states: doesn't exist, image mode, managed mode, and managed mode migrating)

Managed mode VDisks have two policies: the sequential policy and the striped policy. Policies define how the extents of a VDisk are carved out of an MDG.

2.2.4 Image mode VDisk

Image mode provides a one-to-one mapping between the logical block addresses (LBAs) of a VDisk and its MDisk. Image mode VDisks have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is mapped to one, and only one, image mode VDisk. The VDisk capacity that is specified must be less than or equal to the size of the image mode MDisk. When you create an image mode VDisk, the specified MDisk must be in unmanaged mode and must not be a member of an MDG; the MDisk is made a member of the specified MDG (MDG_IMG_xxx) as a result of the creation of the image mode VDisk. The SVC also supports the reverse process, in which a managed mode VDisk can be migrated to an image mode VDisk. If a VDisk is migrated to another MDisk, it is represented as being in managed mode during the migration, and it is represented as an image mode VDisk again only after it has reached the state where it is a straight-through mapping.

2.2.5 Managed mode VDisk


VDisks that operate in managed mode provide a full set of virtualization functions. Within an MDG, the SVC supports an arbitrary relationship between extents on (managed mode) VDisks and extents on MDisks, subject to the constraint that each MDisk extent is contained in at most one VDisk; each VDisk extent maps to exactly one MDisk extent.

Figure 2-8 on page 20 represents this mapping diagrammatically. It shows VDisk V, which is made up of a number of extents. Each of these extents is mapped to an extent on one of the MDisks A, B, or C. The mapping table stores the details of this indirection. Note that several of the MDisk extents are unused: no VDisk extent maps to them. These unused extents are available for use in creating new VDisks, migration, expansion, and so on.



Figure 2-8 Simple view of block virtualization

A managed mode VDisk can have a size of zero blocks, in which case it occupies zero extents. This type of VDisk cannot be mapped to a host or take part in any Advanced Copy Services functions.

The allocation of a specific number of extents from a specific set of MDisks is performed by the following algorithm: if the set of MDisks from which to allocate extents contains more than one disk, extents are allocated from the MDisks in a round-robin fashion. If an MDisk has no free extents when its turn arrives, its turn is missed, and the round-robin moves to the next MDisk in the set that has a free extent.

Beginning with SVC 5.1, when a new VDisk is created, the first MDisk from which to allocate an extent is chosen in a pseudo-random way rather than by simply choosing the next disk in a round-robin fashion. The pseudo-random algorithm avoids the situation whereby the "striping effect" that is inherent in a round-robin algorithm places the first extent of a large number of VDisks on the same MDisk. Placing the first extent of a number of VDisks on the same MDisk can lead to poor performance for workloads that place a large I/O load on the first extent of each VDisk or that create multiple sequential streams.
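The following Python sketch illustrates this allocation policy (it is our own illustration; the real implementation is internal to the SVC, and the random starting-point choice and names here are assumptions):

import random

def allocate_extents(mdisk_free: dict[str, int], count: int) -> list[str]:
    """Allocate `count` extents round-robin across MDisks, starting at a
    pseudo-randomly chosen MDisk so that not every VDisk places its first
    extent on the same disk (illustrative sketch, not SVC code)."""
    names = list(mdisk_free)
    i = random.randrange(len(names))        # pseudo-random first MDisk
    allocation = []
    while len(allocation) < count:
        if all(free == 0 for free in mdisk_free.values()):
            raise RuntimeError("MDG has no free extents")
        name = names[i % len(names)]
        if mdisk_free[name] > 0:            # skip MDisks with no free extents
            mdisk_free[name] -= 1
            allocation.append(name)
        i += 1                              # round-robin to the next MDisk
    return allocation

print(allocate_extents({"mdisk0": 4, "mdisk1": 4, "mdisk2": 1}, 6))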

2.2.6 Cache mode and cache-disabled VDisks

Prior to SVC V3.1, enabling any copy services function in a RAID array controller for a LUN that was being virtualized by the SVC was not supported, because the behavior of the write-back cache in the SVC led to data corruption. With the advent of cache-disabled VDisks, it becomes possible to enable copy services in the underlying RAID array controller for LUNs that are virtualized by the SVC.

Wherever possible, we recommend using SVC copy services in preference to the underlying controller copy services.



2.2.7 Mirrored VDisk

Starting with SVC 4.3, the mirrored VDisk feature provides a simple RAID-1 function, which allows a VDisk to remain accessible even when an MDisk on which it depends has become inaccessible.

This function is achieved by using two copies of the VDisk, which are typically allocated from separate MDGs or created as image mode copies. The VDisk is the entity that participates in FlashCopy and Remote Copy relationships, is served by an I/O Group, and has a preferred node. The copy, in turn, carries the virtualization attributes, such as the MDG and the policy (striped, sequential, or image).

A copy is not a separate object and cannot be created or manipulated except in the context of its VDisk. Copies are identified via the configuration interface with a copy ID within their parent VDisk. The copy ID can be either 0 or 1; depending on the configuration history, a single copy can have an ID of either 0 or 1.

The feature does provide a “point-in-time” copy functionality, which is achieved by “splitting” a copy from the VDisk. The feature does not address other forms of mirroring based on Remote Copy (sometimes called “HyperSwap”), which mirrors VDisks across I/O Groups or clusters; nor is it intended to manage mirroring or remote copy functions in back-end controllers.

Figure 2-9 gives an overview of VDisk Mirroring.

Figure 2-9 VDisk Mirroring overview

A copy can be added to a VDisk that has only one copy, or removed from a VDisk that has two copies. Checks prevent the accidental removal of the sole remaining copy of a VDisk. A newly created, unformatted VDisk with two copies initially has its copies out of synchronization. The primary copy is defined as “fresh” and the secondary copy as “stale”. The synchronization process updates the secondary copy until it is synchronized; this update is done at the default “synchronization rate” or at a rate defined when creating or subsequently modifying the VDisk.


If a two-copy mirrored VDisk is created with the format parameter, both copies are formatted in parallel, and the VDisk comes online when both operations are complete with the copies in sync.

If mirrored VDisks are expanded or shrunk, all of their copies are also expanded or shrunk.

If it is known that the MDisk space that will be used for creating copies is already formatted, or if the user does not require read stability, a “no synchronization” option can be selected, which declares the copies as “synchronized” (even when they are not).

The time taken for a copy that has become unsynchronized to resynchronize is minimized by copying only those 256 KB grains that have been written to since synchronization was lost. This approach is known as an “incremental synchronization”; only the changed grains need to be copied to restore synchronization.
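As an illustration only (these names are ours, not SVC internals), grain-level dirty tracking can be modeled as follows:

GRAIN = 256 * 1024                         # bytes covered by one bitmap bit

class MirrorBitmap:
    # toy model: mark grains dirty while a copy is out of sync,
    # then recopy only those grains during resynchronization
    def __init__(self):
        self.dirty = set()
    def record_write(self, offset, length):
        first, last = offset // GRAIN, (offset + length - 1) // GRAIN
        self.dirty.update(range(first, last + 1))
    def resync(self, copy_grain):
        for g in sorted(self.dirty):       # copy only the changed grains
            copy_grain(g)
        self.dirty.clear()

bm = MirrorBitmap()
bm.record_write(offset=512 * 1024, length=300 * 1024)
bm.resync(lambda g: print("recopy grain", g))   # recopies grains 2 and 3 only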

Important: An unmirrored VDisk can be migrated from a source to a destination by adding a copy at the desired destination, waiting for the two copies to synchronize, and then removing the original copy. This operation can be stopped at any time. The two copies can be in separate MDGs with separate extent sizes.

Where there are two copies of a VDisk, one copy is known as the primary copy. If the primary is available and synchronized, reads from the VDisk are directed to it. The user can select the primary when creating the VDisk or can change it later. Selecting the copy allocated on the higher-performance controller maximizes the read performance of the VDisk. The write performance is constrained by the lower-performance controller, because writes must complete to both copies before the VDisk is considered to have been successfully written.

Remember that writes must complete to both copies before they are considered successful, even when VDisk Mirroring creates one copy in a solid-state drive MDG and the second copy in an MDG populated with resources from a disk subsystem.

Note: SVC does not prevent you from creating the two copies in one or more solid-state drive MDGs of the same node, although doing so means that you lose redundancy and might therefore lose access to your VDisk if the node fails or restarts.

A VDisk with copies can be checked to see whether all of the copies are identical. If a medium error is encountered while reading from any copy, it is repaired by using data from another fresh copy. This process can be asynchronous, but it gives up if the copy with the error goes offline.

Mirrored VDisks consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to 1 MB of bitmap space supporting 2 TB worth of mirrored VDisks. The default allocation of bitmap space is 20 MB, which supports 40 TB of mirrored VDisks. If all 512 MB of variable bitmap space is allocated to mirrored VDisks, 1 PB of mirrored VDisks can be supported.
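A quick verification of this arithmetic (1 bit per 256 KB grain, 8 bits per byte):

GRAIN = 256 * 1024                              # bytes covered by one bitmap bit

def mirrored_capacity(bitmap_bytes):
    return bitmap_bytes * 8 * GRAIN             # one grain per bitmap bit

TB = 2**40
print(mirrored_capacity(1 * 2**20) // TB)       # 1 MB bitmap   ->    2 TB
print(mirrored_capacity(20 * 2**20) // TB)      # 20 MB default ->   40 TB
print(mirrored_capacity(512 * 2**20) // TB)     # 512 MB max    -> 1024 TB (1 PB)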

The advent of the mirrored VDisk feature will inevitably lead clients to think about two-site solutions for cluster and VDisk availability.

Generally, the advice is not to split a cluster, that is, its I/O Groups, across sites, but certain configurations will be effective. Be careful to prevent the situation that is referred to as a “split brain” scenario (caused, for example, by a power outage on the SAN switches; the SVC nodes are protected by their own uninterruptible power supply units). In this scenario, the connectivity between components is lost and a contest for the SVC cluster quorum disk occurs. Which set of nodes wins is effectively arbitrary. If the set of nodes that won the quorum disk then experiences a permanent power loss, the cluster is lost. The way to prevent this split brain scenario is to use a configuration that provides effective redundancy through the exact placement of system components in “fault domains”. You can obtain the details of this configuration and the required prerequisites in Chapter 3, “Planning and configuration” on page 65.

2.2.8 Space-Efficient VDisks

Starting with SVC 4.3, VDisks can be configured to be either “Space-Efficient” or “Fully Allocated”. A Space-Efficient VDisk (SE VDisk) behaves with respect to application reads and writes as though it were fully allocated, including meeting the requirements of read stability and write atomicity. When an SE VDisk is created, the user specifies two capacities: the real capacity of the VDisk and its virtual capacity.

The real capacity determines the quantity of MDisk extents that are allocated for the VDisk. The virtual capacity is the capacity of the VDisk reported to other SVC components (for example, FlashCopy, Cache, and Remote Copy) and to the host servers. The real capacity is used to store both the user data and the metadata for the SE VDisk. The real capacity can be specified as an absolute value or as a percentage of the virtual capacity.

The Space-Efficient VDisk feature can be used on its own to create over-allocated or late-allocation VDisks, or it can be used in conjunction with FlashCopy to implement Space-Efficient FlashCopy. SE VDisks can also be used in conjunction with the mirrored VDisk feature, which we refer to as Space-Efficient copies of VDisks.

When an SE VDisk is initially created, a small amount of the real capacity is used for initial metadata. Write I/Os to grains of the SE VDisk that have not previously been written to cause grains of the real capacity to be used to store metadata and user data. Write I/Os to grains that have previously been written to update the grain where data was previously written. The grain size is defined when the VDisk is created and can be 32 KB, 64 KB, 128 KB, or 256 KB.
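The following toy model (ours, not SVC code) shows the essential late-allocation behavior: real grains are consumed, in ascending order, only when a virtual grain is first written:

class SEVDisk:
    def __init__(self, virtual_grains, real_grains):
        self.map = {}                  # virtual grain -> real grain (metadata)
        self.virtual_grains = virtual_grains
        self.real_grains = real_grains
        self.next_free = 0             # real capacity is used in ascending order
    def write(self, vgrain):
        if vgrain not in self.map:     # first write: allocate (and zero) a grain
            if self.next_free >= self.real_grains:
                raise RuntimeError("real capacity exhausted")
            self.map[vgrain] = self.next_free
            self.next_free += 1
        return self.map[vgrain]        # rewrites update the same real grain
    def read(self, vgrain):
        return self.map.get(vgrain)    # None models “unallocated: return zeroes”

vd = SEVDisk(virtual_grains=4096, real_grains=64)
vd.write(1000); vd.write(1000); vd.write(3)
print(vd.read(1000), vd.read(2))       # 0 None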

Figure 2-10 on page 24 provides an overview.


Figure 2-10 Overview of an SE VDisk

SE VDisks store both user data and metadata. Each grain requires metadata, but the overhead will never be greater than 0.1% of the user data, and it is independent of the virtual capacity of the SE VDisk. If you are using SE VDisks in a FlashCopy map, use the same grain size as the map grain size for the best performance. If you are using the SE VDisk directly with a host system, use a small grain size.

SE VDisk format: SE VDisks do not need formatting. A read I/O that requests data from unallocated data space returns zeroes. When a write I/O causes space to be allocated, the grain is zeroed prior to use. Consequently, an SE VDisk is always formatted, regardless of whether the format flag is specified when the VDisk is created. The formatting flag is ignored when an SE VDisk is created or when the real capacity is expanded; the virtualization component never formats the real capacity of an SE VDisk.

The real capacity of an SE VDisk can be changed, provided that the VDisk is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored on the VDisk. SE VDisks use the real capacity of a VDisk in ascending order as new data is written to the VDisk. Consequently, if the user initially assigns too much real capacity to an SE VDisk, the real capacity can be reduced to free up storage for other uses. It is not possible to reduce the real capacity of an SE VDisk below the capacity that is currently in use, other than by deleting the VDisk.

An SE VDisk can be configured to autoexpand, which causes the SVC to automatically expand the real capacity of the SE VDisk as its real capacity is used. Autoexpand attempts to maintain a fixed amount of unused real capacity on the VDisk. This amount is known as the “contingency capacity”.


The contingency capacity is initially set to the real capacity that is assigned when the VDisk is created. If the user modifies the real capacity, the contingency capacity is reset to the difference between the used capacity and the real capacity.

A VDisk that is created with a zero contingency capacity goes offline as soon as it needs to expand, whereas a VDisk with a non-zero contingency capacity stays online until its contingency capacity has been used up.

Autoexpand does not cause space to be assigned to the VDisk that can never be used, and it does not cause the real capacity to grow much beyond the virtual capacity. The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity is then recalculated.
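A toy autoexpand policy under these rules (our simplification, with capacities in GB) might look like this:

def autoexpand(used, real, contingency, virtual):
    # keep `contingency` GB of unused real capacity, but never grow
    # the real capacity beyond the virtual capacity
    target = min(used + contingency, virtual)
    return max(real, target)           # autoexpand only grows, never shrinks

real, contingency = 10, 10             # initial real capacity = contingency
for used in (0, 5, 20, 95):
    real = autoexpand(used, real, contingency, virtual=100)
    print("used", used, "GB -> real", real, "GB")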

To support the autoexpansion of SE VDisks, the MDGs from which they are allocated have a configurable warning capacity. When the used portion of the group’s free capacity exceeds the warning capacity, a warning is logged. To allow for capacity used by quorum disks and partial extents of image mode VDisks, the calculation uses the free capacity. For example, if a warning of 80% has been specified, the warning is logged when 20% of the free capacity remains.
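For example (a sketch of the threshold check; the variable names are ours):

def check_warning(used, free_capacity, warning_pct=80):
    # warn when the used portion exceeds warning_pct of the free capacity,
    # that is, when only (100 - warning_pct)% of the free capacity remains
    if used >= free_capacity * warning_pct / 100:
        print("warning: MDG warning capacity exceeded")

check_warning(used=790, free_capacity=1000)   # silent: below the 80% threshold
check_warning(used=810, free_capacity=1000)   # logs the warning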

SE VDisks: SE VDisks require additional I/O operations to read and write metadata to back-end storage, and they generate additional load on the SVC nodes. We therefore do not recommend the use of SE VDisks for high-performance applications.

An SE VDisk can be converted to a fully allocated VDisk by using VDisk Mirroring.

SVC 5.1.0 introduces the ability to convert a fully allocated VDisk to an SE VDisk by using the following procedure:

1. Start with a VDisk that has one fully allocated copy.
2. Add a Space-Efficient copy to the VDisk.
3. Allow VDisk Mirroring to synchronize the copies.
4. Remove the fully allocated copy.

This procedure uses a zero-detection algorithm. Note that as of 5.1.0, this algorithm is used only for I/O that is generated by the synchronization of mirrored VDisks; I/O from other components (for example, FlashCopy) is written by using normal procedures.
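The effect of zero detection during the synchronization can be sketched as follows (illustration only): grains of the fully allocated copy that contain only zeroes are skipped, so they never consume real capacity on the Space-Efficient copy.

def sync_with_zero_detect(read_grain, grains):
    se_map = {}                        # virtual grain -> data on the SE copy
    for g in range(grains):
        data = read_grain(g)
        if any(data):                  # skip all-zero grains entirely
            se_map[g] = data
    return se_map

source = [b"\x00" * 8, b"data\x00\x00\x00\x00", b"\x00" * 8]
print(sync_with_zero_detect(lambda g: source[g], grains=3))
# -> {1: b'data\x00\x00\x00\x00'}: only the non-zero grain was allocated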

Note: Consider SE VDisks as targets in FlashCopy relationships. Using them as targets in Metro Mirror or Global Mirror relationships makes no sense, because during the initial synchronization, the target will become fully allocated.

2.2.9 VDisk I/O governing

It is possible to constrain I/O operations so that a system is limited in the amount of I/O that it can perform to a VDisk in a period of time. You can use this governing to satisfy a quality of service constraint or a contractual obligation (for example, a customer agrees to pay for I/Os performed, but will not pay for I/Os beyond a certain rate). Only commands that access the medium (Read (6/10), Write (6/10), or Write and Verify) are subject to I/O governing.

I/O governing: I/O governing is applied to remote copy secondaries, as well as primaries. If an I/O governing rate has been set on a VDisk that is a remote copy secondary, this governing rate is also applied to the primary. If governing is in use on both the primary and the secondary VDisks, each governed quantity is limited to the lower of the two specified values. Governing has no effect on FlashCopy or data migration I/O.

An I/O budget is expressed as a number of I/Os, or a number of MBs, over a minute. The budget is evenly divided between all SVC nodes that service the VDisk, that is, between the nodes that form the I/O Group of which the VDisk is a member.

The algorithm operates two levels of policing. While a VDisk on each SVC node has been receiving I/O at a rate lower than the governed level, no governing is performed. A check is made every minute that the VDisk on each node is continuing to receive I/O at a rate lower than the threshold level. Where this check shows that the host has exceeded its limit on one or more nodes, policing begins for new I/Os.

The following conditions exist while policing is in force:
► A budget allowance is calculated for a 1 second period.
► I/Os are counted over a period of a second.
► If I/Os are received in excess of the 1 second budget on any node in the I/O Group, those I/Os and later I/Os are pended.
► When the second expires, a new budget is established, and any pended I/Os are redriven under the new budget.

This algorithm might cause I/O to backlog in the front end, which might eventually cause a “Queue Full Condition” to be reported to hosts that continue to flood the system with I/O. If a host stays within its 1 second budget on all nodes in the I/O Group for a period of 1 minute, the policing is relaxed, and monitoring takes place over the 1 minute period as before.

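The per-second budget can be sketched as follows (a toy model of one node’s share; none of this is SVC code):

import collections

class Governor:
    def __init__(self, iops_per_minute, nodes_in_io_group=2):
        # the per-minute budget is split evenly across the I/O Group nodes
        self.budget = iops_per_minute // 60 // nodes_in_io_group
        self.used, self.pended = 0, collections.deque()
    def submit(self, io):
        if self.used < self.budget:
            self.used += 1
            return "run %s" % io
        self.pended.append(io)         # over the 1 second budget: pend the I/O
        return "pend %s" % io
    def next_second(self):
        self.used = 0                  # new budget: redrive pended I/Os first
        while self.pended and self.used < self.budget:
            self.used += 1
            print("redrive", self.pended.popleft())

g = Governor(iops_per_minute=240)      # 2 I/Os per second for this node
for i in range(3):
    print(g.submit(i))                 # run 0, run 1, pend 2
g.next_second()                        # redrive 2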
2.2.10 iSCSI overview

SVC 4.3.1 and earlier support Fibre Channel (FC) as the sole transport protocol for communicating with hosts, storage, and other SVC clusters. SVC 5.1.0 introduces iSCSI as an alternative means of attaching hosts. However, all communications with back-end storage subsystems, and with other SVC clusters, still occur via FC.

New iSCSI feature: The new iSCSI feature is a software feature that is provided by the new SVC 5.1 code. This feature is available on any SVC hardware node that supports the SVC 5.1 code; it is not restricted to the new 2145-CF8 nodes.

In <strong>the</strong> simplest terms, iSCSI allows <strong>the</strong> transport of SCSI commands and data over a TCP/IP<br />

network, based on IP routers and E<strong>the</strong>rnet switches. iSCSI is a block-level protocol that<br />

encapsulates SCSI commands into TCP/IP packets and <strong>the</strong>reby leverages an existing IP<br />

network, instead of requiring expensive FC HBAs and a <strong>SAN</strong> fabric infrastructure.<br />

A pure SCSI architecture is based on the client/server model. A client (for example, a server or workstation) initiates read or write requests for data from a target server (for example, a data storage system). Commands, which are sent by the client and processed by the server, are put into a Command Descriptor Block (CDB). The server executes the command, and completion is indicated by a special signal alert.

The major functions of iSCSI include encapsulation and the reliable delivery of CDB transactions between initiators and targets through the TCP/IP network, especially over a potentially unreliable IP network.

The concepts of names and addresses have been carefully separated in iSCSI:
► An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms “initiator name” and “target name” also refer to an iSCSI name.
► An iSCSI address specifies not only the iSCSI name of an iSCSI node, but also a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node and provides statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted for Internet nodes.

The iSCSI qualified name format is defined in RFC 3720 and contains (in order) these elements:
► The string “iqn”.
► A date code specifying the year and month in which the organization registered the domain or subdomain name that is used as the naming authority string.
► The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.
► Optionally, a colon (:), followed by a string of the assigning organization’s choosing, which must make each assigned iSCSI name unique.

For the SVC, the IQN for its iSCSI target is specified as:

iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

On a Windows server, the IQN, that is, the name for the iSCSI initiator, can be defined as:

iqn.1991-05.com.microsoft:<computer name>
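For illustration, an SVC target IQN can be assembled from its parts (the cluster and node names here are invented examples, not real systems):

def svc_target_iqn(cluster_name, node_name):
    # the cluster and node names become part of the IQN, which is why
    # renaming either one changes the identity that hosts log in to;
    # IQNs conventionally use lowercase
    return ("iqn.1986-03.com.ibm:2145.%s.%s" % (cluster_name, node_name)).lower()

print(svc_target_iqn("ITSO-CLS1", "node1"))
# -> iqn.1986-03.com.ibm:2145.itso-cls1.node1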

You can abbreviate IQNs by a descriptive name, known as an alias. An alias can be assigned to an initiator or a target. The alias is independent of the name and does not have to be unique. Because it is not unique, the alias must be used in a purely informational way; it cannot be used to specify a target at login or used during authentication. Both targets and initiators can have aliases.

An iSCSI name provides the correct identification of an iSCSI device irrespective of its physical location. Remember, the IQN is an identifier, not an address.

Be careful: Before changing the cluster or node names for an SVC cluster that has servers connected to it by way of iSCSI, be aware that because the cluster and node names are part of the SVC’s IQN, you can lose access to your data by changing these names. The SVC GUI displays a specific warning; the CLI does not.


The iSCSI session, which consists of a login phase and a full feature phase, is completed with a special command.

The login phase of iSCSI is identical to the FC port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator.

If <strong>the</strong> iSCSI login phase is completed successfully, <strong>the</strong> target confirms <strong>the</strong> login for <strong>the</strong><br />

initiator; o<strong>the</strong>rwise, <strong>the</strong> login is not confirmed and <strong>the</strong> TCP connection breaks.<br />

As soon as <strong>the</strong> login is confirmed, <strong>the</strong> iSCSI session enters <strong>the</strong> full feature phase. If more<br />

than one TCP connection was established, iSCSI requires that each command/response pair<br />

goes through one TCP connection. Thus, each separate read or write command will be<br />

carried out without <strong>the</strong> necessity to trace each request for passing separate flows. However,<br />

separate transactions can be delivered through separate TCP connections within one<br />

session.<br />

Figure 2-11 shows an overview of the various block-level storage protocols and where the iSCSI layer is positioned.

Figure 2-11 Overview of block-level protocol stacks

2.2.11 Usage of IP addresses and Ethernet ports

The addition of iSCSI changes the manner in which you configure Ethernet access to an SVC cluster. The SVC 5.1 releases of the GUI and the command-line interface (CLI) reflect these changes.

The existing SVC node hardware has two Ethernet ports. Until now, only one Ethernet port has been used for cluster configuration. With the introduction of iSCSI, you can now use the second port. The configuration details of the two Ethernet ports can be displayed by the GUI or CLI, and they are also displayed on the node’s front panel.

There are now two kinds of IP addresses:
► A cluster management IP address is used for access to the SVC CLI, as well as to the Common Information Model Object Manager (CIMOM) that runs on the SVC configuration node. As before, only a single configuration node presents a cluster management IP address at any one time, and failover of the configuration node is unchanged. However, there can now be two cluster management IP addresses, one for each of the two Ethernet ports.
► A port IP address is used to perform iSCSI I/O to the cluster. Each node can have a port IP address for each of its ports.

In <strong>the</strong> case of an upgrade to <strong>the</strong> SVC 5.1 code, <strong>the</strong> original cluster IP address will be retained<br />

and will always be found on <strong>the</strong> eth0 interface on <strong>the</strong> configuration node. A second, new<br />

cluster IP address can be optionally configured in SVC 5.1. This second cluster IP address<br />

will always be on <strong>the</strong> eth1 interface on <strong>the</strong> configuration node. When <strong>the</strong> configuration node<br />

fails, both configuration IP addresses will move to <strong>the</strong> new configuration node.<br />

Figure 2-12 shows an overview of the new IP addresses on the SVC node ports and the rules regarding how these IP addresses are moved between the nodes of an I/O Group.

The management IP addresses and the iSCSI target IP addresses fail over to the partner node N2 if node N1 restarts (and vice versa). The iSCSI target IP addresses fail back to their corresponding ports on node N1 when node N1 is up and running again.

Figure 2-12 SVC 5.1 IP address overview

In an SVC cluster running 5.1 code, an eight-node cluster with full iSCSI coverage (the maximum configuration) therefore has the following numbers of IP addresses:
► Two IPv4 configuration addresses (one configuration address is always associated with the eth0:0 alias for the eth0 interface of the configuration node, and the other configuration address goes with eth1:0).
► One IPv4 service mode fixed address (although many DHCP addresses can also be used). This address is always associated with the eth0:0 alias for the eth0 interface of the configuration node.
► Two IPv6 configuration addresses (one address is always associated with the eth0:0 alias for the eth0 interface of the configuration node, and the other address goes with eth1:0).
► One IPv6 service mode fixed address (although many DHCP addresses can also be used). This address is always associated with the eth0:0 alias for the eth0 interface of the configuration node.
► Sixteen IPv4 addresses used for iSCSI access to the nodes (these addresses are associated with the eth0:1 or eth1:1 alias for the eth0 or eth1 interface on each node).
► Sixteen IPv6 addresses used for iSCSI access to the nodes (these addresses are associated with the eth0 and eth1 interfaces on each node).

We show <strong>the</strong> configuration of <strong>the</strong> SVC ports in great detail in Chapter 7, “<strong>SAN</strong> <strong>Volume</strong><br />

<strong>Controller</strong> operations using <strong>the</strong> command-line interface” on page 339 and in Chapter 8, “<strong>SAN</strong><br />

<strong>Volume</strong> <strong>Controller</strong> operations using <strong>the</strong> GUI” on page 469.<br />

2.2.12 iSCSI VDisk discovery

The iSCSI target implementation on the SVC nodes makes use of the hardware offload features that are provided by the node’s hardware. This implementation results in minimal impact on the node’s CPU load for handling iSCSI traffic and simultaneously delivers excellent throughput (up to 95 MBps of user data) on each of the two 1 Gbps LAN ports. The plan is to support jumbo frames (maximum transmission unit (MTU) sizes greater than 1,500 bytes) in future SVC releases.

Hosts can discover VDisks through one of the following three mechanisms:
► Internet Storage Name Service (iSNS): The SVC can register itself with an iSNS name server; you set the IP address of this server by using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.
► Service Location Protocol (SLP): The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node, such as the CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be reported.
► iSCSI SendTargets request: The host can also send a SendTargets request by using the iSCSI protocol to the iSCSI TCP/IP port (port 3260).

2.2.13 iSCSI authentication

Au<strong>the</strong>ntication of <strong>the</strong> host sever toward <strong>the</strong> SVC cluster is optional and is disabled by default.<br />

The user can choose to enable Challenge Handshake Au<strong>the</strong>ntication Protocol (CHAP)<br />

au<strong>the</strong>ntication, which involves sharing a CHAP secret between <strong>the</strong> SVC cluster and <strong>the</strong> host.<br />

After <strong>the</strong> successful completion of <strong>the</strong> link establishment phase, <strong>the</strong> SVC as au<strong>the</strong>nticator<br />

sends a challenge message to <strong>the</strong> specific server (peer). The server responds with a value<br />

that is calculated by using a one-way hash function on <strong>the</strong> index/secret/challenge, such as an<br />

MD5 checksum hash.<br />

The response is checked by <strong>the</strong> SVC against its own calculation of <strong>the</strong> expected hash value.<br />

If <strong>the</strong>re is a match, <strong>the</strong> SVC acknowledges <strong>the</strong> au<strong>the</strong>ntication. If not, <strong>the</strong> SVC will terminate<br />

<strong>the</strong> connection and will not allow any I/O to VDisks. At random intervals, <strong>the</strong> SVC might send<br />

new challenges to <strong>the</strong> peer to recheck <strong>the</strong> au<strong>the</strong>ntication.<br />
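The CHAP calculation itself is standard (RFC 1994); a minimal sketch of the exchange, with an invented shared secret:

import hashlib, os

def chap_response(identifier, secret, challenge):
    # MD5 over the concatenation of the identifier octet, the shared
    # secret, and the challenge sent by the authenticator
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

secret = b"sharedCHAPsecret"           # made-up shared secret
challenge, ident = os.urandom(16), 1   # chosen by the authenticator (the SVC)

host_answer = chap_response(ident, secret, challenge)   # computed by the host
expected = chap_response(ident, secret, challenge)      # computed by the SVC
print("authenticated" if host_answer == expected else "reject")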

You can assign a CHAP secret to each SVC host object. The host must then use CHAP authentication in order to begin a communications session with a node in the cluster. You can also assign a CHAP secret to the cluster if two-way authentication is required. When creating an iSCSI host object within an SVC cluster, you supply the initiator’s IQN, for example, for a Windows server:

iqn.1991-05.com.microsoft:ITSO_W2008

In addition, you can specify an optional CHAP secret.


You add a VDisk to a host, or perform LUN masking, in the same way that you do when connecting hosts to the SVC by way of FC.

Because you can use iSCSI in networks where data can be accessed illegally, the specification allows separate security methods. You can set up security, for example, via a method such as IPSec, which is transparent to higher levels such as iSCSI, because it is implemented at the IP level. You can obtain details about securing iSCSI in RFC 3723, Securing Block Storage Protocols over IP, which is available at this Web site:

http://tools.ietf.org/html/rfc3723

2.2.14 iSCSI multipathing

Multipathing drivers enable the host to send commands down multiple paths to the same VDisk on the SVC. A fundamental multipathing difference exists between FC and iSCSI environments.

If FC-attached hosts see their FC target and its VDisks go offline, for example, due to a problem in the target node, its ports, or the network, the host has to use a separate SAN path to continue I/O. A multipathing driver is therefore always required on the host.

iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (and this action is the key difference) the host is reconnected to the same IP target, which reappears after a short period of time, and its VDisks continue to be available for I/O.

Be aware: With <strong>the</strong> iSCSI implementation in SVC, an IP address failover/failback between<br />

partner nodes of an I/O Group will only take place in cases of a planned or unplanned node<br />

restart. In <strong>the</strong> case of a problem in <strong>the</strong> network link (switches, ports, or links), no such<br />

failover takes place.<br />

A host multipathing driver for iSCSI is required if you want these capabilities:
► To protect a server from network link failures
► To protect a server from network failures, if the server is connected via two HBAs to two separate networks
► To protect a server from a server HBA failure (if two HBAs are in use)
► To provide load balancing on the server’s HBAs and the network links

2.2.15 Advanced Copy Services overview

The SVC supports the following copy services:
► Synchronous remote copy
► Asynchronous remote copy
► FlashCopy with a full target
► Block virtualization and data migration

Copy services are implemented between VDisks within a single SVC cluster or between multiple SVC clusters. They are therefore independent of the functionality of the underlying disk subsystems that are used to provide storage resources to an SVC cluster.

Synchronous/Asynchronous remote copy
The general application of remote copy seeks to maintain two copies of a data set. Often, the two copies will be separated by distance, but not necessarily.


The remote copy can be maintained in one of two modes: synchronous or asynchronous. The definition of an asynchronous remote copy needs to be supplemented by describing the maximum degree of asynchronicity.

With the SVC, Metro Mirror and Global Mirror are the IBM branded terms for the synchronous remote copy and asynchronous remote copy functions.

Synchronous remote copy ensures that updates are committed at both the primary and the secondary before the application considers the updates complete; therefore, the secondary is fully up-to-date if it is needed in a failover. However, the application is fully exposed to the latency and bandwidth limitations of the communication link to the secondary. In a truly remote situation, this extra latency can have a significant adverse effect on application performance.

SVC assumes that <strong>the</strong> FC fabric to which it is attached contains hardware that achieves <strong>the</strong><br />

long distance requirement for <strong>the</strong> application. This hardware makes distant storage<br />

accessible as though it were local storage. Specifically, it enables a group of up to four SVC<br />

clusters to connect (FC login) to each o<strong>the</strong>r and establish communications in <strong>the</strong> same way<br />

as though <strong>the</strong>y were located nearby on <strong>the</strong> same fabric. The only differences are in <strong>the</strong><br />

expected latency of that communication, <strong>the</strong> bandwidth capability of <strong>the</strong> links, and <strong>the</strong><br />

availability of <strong>the</strong> links as compared with <strong>the</strong> local fabric. Special configuration guidelines exist<br />

for <strong>SAN</strong> fabrics that are used for data replication. Issues to consider are <strong>the</strong> distance and <strong>the</strong><br />

bandwidth of <strong>the</strong> site interconnections.<br />

In asynchronous remote copy, the application considers an update complete before that update has necessarily been committed at the secondary. Hence, on a failover, certain updates might be missing at the secondary. The application must have an external mechanism for recovering the missing updates and reapplying them; this mechanism can involve user intervention. Asynchronous remote copy provides functionality that is comparable to a continuous backup process that is missing the last few updates. Recovery on the secondary site involves bringing up the application on this recent “backup” and then reapplying the most recent updates to bring the secondary up-to-date.

The asynchronous remote copy must present at the secondary a view to the application that might not contain the latest updates, but that is always consistent. If consistency has to be guaranteed at the secondary, applying updates in an arbitrary order is not an option. At the primary side, the application enforces an ordering implicitly by not scheduling an I/O until a previous dependent I/O has completed. We do not know the actual ordering constraints of the application; the best approach is to choose an ordering that the application might see if I/O at the primary was stopped at a suitable point. One example is to apply I/Os at the secondary in the order in which they were completed at the primary. Thus, the secondary always reflects a state that could have been seen at the primary if we froze I/O there.

The SVC Global Mirror protocol operates to identify small groups of I/Os that are known to be active concurrently in the primary cluster. The process to identify these groups of I/Os does not significantly contribute to the latency of these I/Os when they execute at the primary. These groups are applied at the secondary in the order in which they were executed at the primary. By identifying groups of I/Os that can be applied concurrently at the secondary, the protocol maintains good throughput as the system size grows.
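A toy model of this replay ordering (our own sketch, not the Global Mirror implementation) makes the idea concrete:

def apply_at_secondary(batches, secondary):
    # batches of writes that were active concurrently at the primary are
    # applied strictly in primary completion order; writes inside one
    # batch carry no mutual dependency, so any order within it is safe
    for batch in batches:
        for lba, data in batch:
            secondary[lba] = data

secondary = {}
batches = [[(0, "journal entry")],               # dependent write: lands first
           [(7, "data block"), (9, "data block")]]
apply_at_secondary(batches, secondary)
print(secondary)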

The relationship between the two copies is not symmetrical. One copy of the data set is considered the primary copy, which is sometimes also known as the source. This copy provides the reference for normal runtime operation. Updates to this copy are shadowed to a secondary copy, which is sometimes known as the destination or even the target. The secondary copy is not normally referenced for performing I/O. If the primary copy fails, the secondary copy can be enabled for I/O operation. A typical use of this function might involve two sites, where the first site provides service during normal operations and the second site is only activated when a failure of the first site is detected.

The secondary copy is not accessible for application I/O other than the I/Os that are performed for the remote copy process. The SVC allows read-only access to the secondary storage when it contains a consistent image. This capability is only intended to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimum delay, if required. For instance, many operating systems need to read logical block address (LBA) 0 to configure a logical unit.

“Enabling” <strong>the</strong> secondary copy for active operation will require SVC, operating system, and<br />

possibly application-specific work, which needs to be performed as part of <strong>the</strong> entire failover<br />

process. The SVC software at <strong>the</strong> secondary must be instructed to stop <strong>the</strong> relationship,<br />

which makes <strong>the</strong> secondary logical unit accessible for normal I/O access. The operating<br />

system might need to mount file systems, or similar work, which can typically only happen<br />

when <strong>the</strong> logical unit is accessible for writes. The application might have a log of work to<br />

recover.<br />

Note that this property of remote copy, the requirement to enable the secondary copy, differentiates it from RAID-1 mirroring. The latter aims to emulate a single, reliable disk, regardless of which system accesses it. Remote copy retains the property that there are two volumes in existence, but it suppresses one volume while the copy is being maintained.

The underlying storage at the primary or secondary of a remote copy will normally be RAID storage, but it can be any storage that can be managed by the SVC.

Making use of a secondary copy involves a conscious policy decision by a user that a failover is required. The application work involved in establishing operation on the secondary copy is substantial. The goal is to make this process rapid, but not seamless; rapid is still much faster than recovering from a backup copy.

Most clients will aim to automate this remote copy failover through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and interfaces to enable this automation. IBM support for automation is provided by IBM Tivoli® Storage Productivity Center for Replication.

You can access the documentation online at the IBM Tivoli Storage Productivity Center information center:

http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

2.2.16 FlashCopy

FlashCopy makes a copy of a source VDisk to a target VDisk. The original content of the target VDisk is lost. After the copy operation has started, the target VDisk has the contents of the source VDisk as it existed at a single point in time. Although the copy operation takes time, the resulting data at the target appears as though the copy was made instantaneously.

You can run FlashCopy on multiple source and target VDisks. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying target VDisks from their respective source VDisks. This capability allows a consistent copy of data that spans multiple VDisks.

The SVC also permits multiple target VDisks to be FlashCopied from each source VDisk. You can use this capability to create images from separate points in time for each source VDisk, and you can also create multiple images from a source VDisk at a common point in time. Source and target VDisks can be SE VDisks.

Starting with SVC 5.1, Reverse FlashCopy is supported. It enables target VDisks to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. The SVC supports multiple targets and thus multiple rollback points.

FlashCopy is sometimes described as an instance of a Time-Zero (T0) copy or Point-in-Time (PiT) copy technology. Although the FlashCopy operation takes a finite time, this time is several orders of magnitude less than the time that is required to copy the data by using conventional techniques.
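Why the copy appears instantaneous can be illustrated with a generic copy-on-write model (a simplification of point-in-time copies in general; the SVC specifics are covered in Chapter 6, “Advanced Copy Services” on page 255):

class PointInTimeCopy:
    def __init__(self, source):
        self.source, self.target = source, {}    # the target starts empty
    def write_source(self, grain, data):
        if grain not in self.target:              # preserve the T0 image first
            self.target[grain] = self.source[grain]
        self.source[grain] = data
    def read_target(self, grain):
        # grains not yet copied still read the T0 content from the source
        return self.target.get(grain, self.source[grain])

src = {0: "A", 1: "B"}
pit = PointInTimeCopy(src)         # “instantaneous” copy at time zero
pit.write_source(0, "A2")          # a later update to the source
print(pit.read_target(0), pit.read_target(1))    # A B (the T0 image)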

Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick recovery of their applications and databases. IBM support is provided by Tivoli Storage FlashCopy Manager:

http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/

You can read a detailed description of Data Mirroring and FlashCopy copy services in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339. We discuss data migration in Chapter 6, “Advanced Copy Services” on page 255.

2.3 SVC cluster overview

In simple terms, a cluster is a collection of servers that, together, provide a set of resources to a client. The key point is that the client has no knowledge of the underlying physical hardware of the cluster. The client is isolated and protected from changes to the physical hardware, which offers many benefits, most significantly, high availability.

Resources on clustered servers act as highly available versions of unclustered resources. If a node (an individual computer) in the cluster is unavailable, or too busy to respond to a request for a resource, the request is transparently passed to another node that is capable of processing it, so that clients are unaware of the exact locations of the resources that they are using.

For example, a client can request the use of an application without being concerned about either where the application resides or which physical server is processing the request. The user simply gains access to the application in a timely and reliable manner. Another benefit is scalability: if you need to add users or applications to your system and want performance to be maintained at existing levels, additional systems can be incorporated into the cluster.

The SVC is a collection of up to eight cluster nodes, which are added in pairs. In future releases, the cluster size might be increased to permit further performance scalability. These nodes are managed as a set (cluster) and present a single point of control to the administrator for configuration and service activity.

The current eight-node limit within an SVC cluster is a limitation of the product, not of the architecture; larger clusters are possible without changing the underlying architecture. SVC demonstrated its ability to scale during a recently run project:

http://www-03.ibm.com/press/us/en/pressrelease/24996.wss

Based on a 14-node cluster, coupled with solid-state drive controllers, the project achieved a data rate of over one million IOPS with a response time of under 1 millisecond (ms).


2.3.1 Quorum disks

Although <strong>the</strong> SVC code is based on a purpose-optimized Linux kernel, <strong>the</strong> clustering feature<br />

is not based on Linux clustering code. The cluster software used within SVC, that is, <strong>the</strong> event<br />

manager cluster framework, is based on <strong>the</strong> outcome of <strong>the</strong> COMPASS research project. It is<br />

<strong>the</strong> key element to isolate <strong>the</strong> SVC application from <strong>the</strong> underlying hardware nodes. The<br />

cluster software makes <strong>the</strong> code portable and provides <strong>the</strong> means to keep <strong>the</strong> single<br />

instances of <strong>the</strong> SVC code running on separate cluster nodes in sync. Node restarts (during a<br />

code upgrade), adding new nodes, or removing old nodes from a cluster or node failures<br />

<strong>the</strong>refore cannot impact <strong>the</strong> SVC’s availability.<br />

It is key for all active nodes of a cluster to know that they are members of the cluster. Especially in situations, such as the split brain scenario, where single nodes lose contact with other nodes and cannot determine whether the other nodes are still reachable, it is essential to have a solid mechanism to decide which nodes form the active cluster. A worst case scenario is a cluster that splits into two separate clusters.

Within an SVC cluster, <strong>the</strong> voting set and an optional quorum disk are responsible for <strong>the</strong><br />

integrity of <strong>the</strong> cluster. If nodes are added to a cluster, <strong>the</strong>y get added to <strong>the</strong> voting set; if<br />

nodes are removed, <strong>the</strong>y will also quickly be removed from <strong>the</strong> voting set. Over time, <strong>the</strong><br />

voting set, and hence <strong>the</strong> nodes in <strong>the</strong> cluster, can completely change so that <strong>the</strong> cluster has<br />

migrated onto a completely separate set of nodes from <strong>the</strong> set on which it started.<br />

Within an SVC cluster, <strong>the</strong> quorum is defined in one of <strong>the</strong>se ways:<br />

► More than half <strong>the</strong> nodes in <strong>the</strong> voting set<br />

► Exactly half of <strong>the</strong> nodes in <strong>the</strong> voting set and <strong>the</strong> quorum disk from <strong>the</strong> voting set<br />

► When <strong>the</strong>re is no quorum disk in <strong>the</strong> voting set, exactly half of <strong>the</strong> nodes in <strong>the</strong> voting set,<br />

if that half includes <strong>the</strong> node that appears first in <strong>the</strong> voting set (a node is entered into <strong>the</strong><br />

voting set in <strong>the</strong> first available free slot)<br />

These rules guarantee that there is only ever, at most, one group of nodes able to operate as the cluster, so the cluster never splits into two. The SVC cluster implements a dynamic quorum. Following a loss of nodes, if the cluster can continue operation, the cluster will adjust the quorum requirement, so that further node failures can be tolerated.

The node with the lowest Node Unique ID in a cluster becomes the boss node for the group of nodes and proceeds to determine (from the quorum rules) whether the nodes can operate as the cluster. This node also presents the maximum of two cluster IP addresses on one or both of its node's Ethernet ports to allow access for cluster management.

The cluster uses the quorum disk for two purposes: as a tie breaker in the event of a SAN fault, when exactly half of the nodes that were previously members of the cluster are present, and to hold a copy of important cluster configuration data. Just over 256 MB is reserved for this purpose on each quorum disk candidate. There is only one active quorum disk in a cluster; however, the cluster uses three MDisks as quorum disk candidates. The cluster automatically selects the actual active quorum disk from the pool of assigned quorum disk candidates.

If a tiebreaker condition occurs, the half of the cluster nodes that is able to reserve the quorum disk after the split has occurred locks the disk and continues to operate. The other half stops its operation. This design prevents both sides from becoming inconsistent with each other.

When MDisks are added to the SVC cluster, the SVC cluster checks each MDisk to see if it can be used as a quorum disk. If the MDisks fulfill the requirements, the SVC will assign the first three MDisks added to the cluster as quorum candidates. One of them is selected as the active quorum disk.

Note: To be considered eligible as a quorum disk, a LUN must meet the following criteria:

► It must be presented by a disk subsystem that is supported to provide SVC quorum disks.
► It cannot be allocated on one of the node's internal flash disks.
► It must have been manually allowed to be a quorum disk candidate by using the svctask chcontroller -allow_quorum yes command.
► It must be in managed mode (no image mode disks).
► It must have sufficient free extents to hold the cluster state information, plus the stored configuration metadata.
► It must be visible to all of the nodes in the cluster.

If possible, the SVC will place the quorum candidates on separate disk subsystems. After the quorum disk has been selected, however, no attempt is made to ensure that the other quorum candidates are presented through separate disk subsystems.

With SVC 5.1, quorum disk candidates and the active quorum disk in a cluster can be listed by the svcinfo lsquorum command. When the set of quorum disk candidates has been chosen, it is fixed.

A new quorum disk candidate will only be chosen in one of these conditions:

► The administrator requests that a specific MDisk becomes a quorum disk by using the svctask setquorum command.
► An MDisk that is a quorum disk is deleted from an MDG.
► An MDisk that is a quorum disk changes to image mode.

An offline MDisk will not be replaced as a quorum disk candidate.
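A minimal CLI sketch of these commands follows; the MDisk name and the quorum index used here are hypothetical, so verify the exact parameters against the SVC Command-Line Interface User's Guide:

svcinfo lsquorum
svctask setquorum -quorum 1 mdisk5

The first command lists the three quorum disk candidates and indicates which one is active; the second command requests that MDisk mdisk5 take over quorum index 1.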

A cluster needs to be regarded as a single entity for disaster recovery purposes. The cluster and the quorum disk need to be colocated.

There are special considerations concerning the placement of the active quorum disk for stretched cluster and stretched I/O Group configurations. Details are available at this Web site:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311

Important: Running an SVC cluster without a quorum disk can seriously affect your operation. A lack of available quorum disks for storing metadata will prevent any migration operation (including a forced MDisk delete). Mirrored VDisks might be taken offline if there is no quorum disk available. This behavior occurs because the synchronization status for mirrored VDisks is recorded on the quorum disk.

During the normal operation of the cluster, the nodes communicate with each other. If a node is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster. If a node fails for any reason, the workload that is intended for it is taken over by another node until the failed node has been restarted and readmitted to the cluster (which happens automatically). In the event that the microcode on a node becomes corrupted, resulting in a failure, the workload is transferred to another node. The code on the failed node is repaired, and the node is readmitted to the cluster (again, all automatically).



2.3.2 I/O Groups

For I/O purposes, the SVC nodes within the cluster are grouped into pairs, called I/O Groups, with a single pair being responsible for serving I/O on a given VDisk. One node within the I/O Group represents the preferred path for I/O to a given VDisk. The other node provides the failover path. This preference alternates between nodes as each VDisk is created within an I/O Group, which is an approach to balance the workload evenly between the two nodes.

Preferred node: The preferred node does not signify absolute ownership. The data can still be accessed by the partner node in the I/O Group in the event of a failure.

2.3.3 Cache

The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in from 1 ms to 10 ms of response time (for an enterprise-class disk).

The new 2145-CF8 nodes combined with SVC 5.1 provide 24 GB of memory per node: 48 GB per I/O Group, or 192 GB per eight-node SVC cluster. The SVC provides a flexible cache model, and the node's memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of the node's memory. Depending on the current I/O situation on a node, the free part of the memory (a maximum of 24 GB) can be fully used as read cache.

Cache is allocated in 4 KB pages. A page belongs to one track. A track is the unit of locking and destage granularity in the cache. It is 32 KB in size (eight pages). A track might only be partially populated with valid pages. The SVC coalesces writes up to the 32 KB track size if the writes reside in the same track prior to destage; for example, if 4 KB is written into a track and another 4 KB is later written to another location in the same track, they are destaged together. Therefore, the blocks written from the SVC to the disk subsystem can be any size between 512 bytes and 32 KB.

When data is written by the host, the preferred node within the I/O Group saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored to the cache of the partner node for availability reasons. Only after a copy of the written data exists on both nodes does the cache return completion to the host.

Write data that is held in cache has not yet been destaged to disk; therefore, if only one copy of the data were kept, you would risk losing data. Write cache entries without updates during the last two minutes are automatically destaged to disk.

If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and proceeds in an operation mode that is referred to as write-through mode. A node operating in write-through mode writes data directly to the disk subsystem before sending an "I/O complete" status message back to the host. Running in this mode can degrade the performance of the specific I/O Group.

Starting with SVC Version 4.2.1, write cache partitioning was introduced to the SVC. This feature restricts the maximum amount of write cache that a single MDG can allocate in a cluster. Table 2-2 shows the upper limit of write cache data that a single MDG in a cluster can occupy.

Table 2-2 Upper limit of write cache per MDG

One MDG    Two MDGs    Three MDGs    Four MDGs    More than four MDGs
100%       66%         40%           33%          25%
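As a worked example: in a cluster with three MDGs, a single MDG can occupy at most 40% of the write cache. Assuming the 12 GB write-cache maximum of a 2145-CF8 node, that corresponds to roughly 4.8 GB per node.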



For in-depth information about SVC cache partitioning, we strongly recommend IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this Web site:

http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

An SVC node can treat part or all of its physical memory as non-volatile, which means that its contents are preserved across power losses and resets. Besides the bitmaps for FlashCopy and Remote Mirroring relationships, the Virtualization Table and the Write Cache are the most important items in the non-volatile memory. The actual amount of memory that can be treated as non-volatile depends on the hardware.

In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node's internal disk drive, so that the contents can be recovered when external power is restored. The uninterruptible power supply units, which are delivered with each node's hardware, ensure that there is sufficient internal power to keep a node operational to perform this dump when external power is removed. After dumping the content of the non-volatile part of the memory to disk, the SVC node shuts down.

2.3.4 Cluster management

The SVC can be managed by one of the following three interfaces:

► A textual command-line interface (CLI) accessed via a Secure Shell (SSH) connection.
► A Web browser-based graphical user interface (GUI) written as a CIM Client (ICAT) using the SVC CIMOM. It supports flexible and rapid access to storage management information.
► A CIMOM, which can be used to write alternative CIM Clients (such as IBM System Storage Productivity Center).
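As a brief illustration of CLI access (the user name, key path, and cluster IP address here are hypothetical), a single command can be run over SSH as follows:

ssh -i /path/to/private_key admin@9.43.86.117 svcinfo lscluster -delim :

The -delim option is convenient for scripting, because it produces colon-separated output instead of aligned columns.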

Starting with SVC release 4.3.1, the SVC Console (ICAT) can use the CIM Agent that is embedded in the SVC cluster. With release 5.1 of the code, using the embedded CIMOM is mandatory. This CIMOM will support the Storage Management Initiative Specification (SMI-S) Version 1.3 standard.

User account migration

During the upgrade from SAN Volume Controller Console Version 4.3.1 to Version 5.1, the installation program attempts to migrate user accounts that are currently defined to the CIMOM on the cluster. If the migration of those accounts fails with the installation program, you can manually migrate the user accounts with the help of a script. You can obtain details in the SVC Software Installation and Configuration Guide, SC23-6628-04.

Hardware Management Console

The management console for SVC is referred to as the IBM System Storage Productivity Center. IBM System Storage Productivity Center is a hardware and software solution that includes a suite of storage infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments.

IBM System Storage Productivity Center

IBM System Storage Productivity Center is based on server hardware (IBM System x®-based) and a set of pre-installed and optional software modules. Several of these pre-installed modules provide base functionality only, or are not activated. You can activate these modules, or the enhanced functionalities, by adding separate licenses.

IBM System Storage Productivity Center contains these functions:



► Tivoli Integrated Portal: IBM Tivoli Integrated Portal is a standards-based architecture for Web administration. The installation of Tivoli Integrated Portal is required to enable single sign-on (SSO) for Tivoli Storage Productivity Center. Tivoli Storage Productivity Center now installs Tivoli Integrated Portal along with Tivoli Storage Productivity Center.

► Tivoli Storage Productivity Center: IBM Tivoli Storage Productivity Center Basic Edition 4.1.0 is pre-installed on the IBM System Storage Productivity Center server. There are several other commercially available Tivoli Storage Productivity Center products that provide additional functionality beyond Tivoli Storage Productivity Center Basic Edition. You can activate these packages by adding the specific licenses to the pre-installed Basic Edition:

– Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for performance.
– Tivoli Storage Productivity Center for Data allows you to collect and monitor file systems and databases.
– Tivoli Storage Productivity Center Standard Edition is a bundle that includes all of the other packages, along with SAN planning tools that make use of information that is collected from the Tivoli Storage Productivity Center components.

► Tivoli Storage Productivity Center for Replication: The functions of Tivoli Storage Productivity Center for Replication provide the management of the IBM FlashCopy, Metro Mirror, and Global Mirror capabilities for the IBM Enterprise Storage Server® Model 800, IBM DS6000, DS8000®, and IBM SAN Volume Controller. You can activate this package by adding the specific licenses.

► SVC GUI (ICAT)
► SSH Client (PuTTY)
► Windows Server 2008 Enterprise Edition
► Several base software packages that are required for Tivoli Storage Productivity Center
► Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage Manager, which can be installed on the IBM System Storage Productivity Center server by the client

Figure 2-13 on page 40 provides an overview of the SVC management components. We describe the details in Chapter 4, "SAN Volume Controller initial configuration" on page 103.

You can obtain details about the IBM System Storage Productivity Center in IBM System Storage Productivity Center User's Guide Version 1 Release 4, SC27-2336-03.



Figure 2-13 SVC management overview

2.3.5 User authentication

With SVC 5.1, several changes concerning user authentication for an SVC cluster have been introduced to make user authentication simpler. Earlier SVC releases authenticated all users locally. SVC 5.1 has two authentication methods:

► Local authentication: Local authentication is similar to the existing method and is described next.
► Remote authentication: Remote authentication supports the use of a remote authentication server, which for SVC is the Tivoli Embedded Security Services, to validate the passwords. The Tivoli Embedded Security Services is part of the Tivoli Integrated Portal, which is one of the three components that come with Tivoli Productivity Center 4.1 (Tivoli Productivity Center, Tivoli Productivity Center for Replication, and Tivoli Integrated Portal) that are pre-installed on the IBM System Storage Productivity Center 1.4. The IBM System Storage Productivity Center 1.4 is the management console for SVC 5.1 clusters.

Each SVC cluster can have multiple users defined. The cluster maintains an audit log of successfully executed commands, indicating which users performed what actions at what times.

User names can contain only printable ASCII characters:

► Forbidden characters are single quotation mark ('), colon (:), percent symbol (%), asterisk (*), comma (,), and double quotation marks (").
► A user name cannot begin or end with a blank.

Passwords for local users do not have any forbidden characters, but passwords cannot begin or end with blanks.



SVC superuser

There is a special local user, called the superuser, that always exists on every cluster; it cannot be deleted. Its password is set by the user during cluster initialization. The superuser password can be reset from the node's front panel, and this function can be disabled, although disabling it makes the cluster inaccessible if all of the users forget their passwords or lose their SSH keys. The superuser's password supersedes the cluster administrator password that was present in previous software releases.

To register an SSH key for the superuser to provide command-line access, you use the GUI, usually at the end of the cluster initialization process. But, you can also add it later.

The superuser is always a member of user group 0, which has the most privileged role within the SVC.

2.3.6 SVC roles and user groups

Each user group is associated with a single role. The role for a user group cannot be changed, but additional new user groups (with one of the defined roles) can be created. User groups are used for local and remote authentication. Because SVC knows of five roles, there are, by default, five user groups defined in an SVC cluster (see Table 2-3).

Table 2-3 User groups

User group ID    User group       Role
0                SecurityAdmin    SecurityAdmin
1                Administrator    Administrator
2                CopyOperator     CopyOperator
3                Service          Service
4                Monitor          Monitor

The access rights for a user belonging to a specific user group are defined by the role that is assigned to the user group. It is the role that defines what a user can do (or cannot do) on an SVC cluster.

Table 2-4 on page 42 shows the roles, ordered from the least privileged Monitor role at the top down to the most privileged SecurityAdmin role.



Table 2-4 Commands permitted for each role

Role           Allowed commands

Monitor        All svcinfo commands, plus:
               svctask: finderr, dumperrlog, dumpinternallog, chcurrentuser
               svcconfig: backup

Service        All commands allowed for the Monitor role, plus:
               svctask: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, settime

CopyOperator   All commands allowed for the Monitor role, plus:
               svctask: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, chpartnership

Administrator  All commands, except:
               svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset

SecurityAdmin  All commands
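As an illustrative sketch (the group name is hypothetical), an additional user group with one of the defined roles can be created from the CLI like this:

svctask mkusergrp -name storageops -role CopyOperator

Any user placed in this group is then limited to the Monitor and copy services commands listed in Table 2-4.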

2.3.7 SVC local authentication

Local users are those users that are managed entirely on the cluster without the intervention of a remote authentication service. Local users must have either a password, an SSH public key, or both. The password is used for GUI authentication, and the SSH key is used for command-line or file transfer (SecureCopy) access. Therefore, a local user can access the SVC cluster via the GUI only if a password is specified.

Local users: Be aware that local users are created per SVC cluster. Each user has a name, which must be unique across all users in one cluster. If you want to allow access for a user on multiple clusters, you have to define the user in each cluster with the same name and the same privileges.

A local user always belongs to only one user group.
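A minimal sketch of creating local users follows; the user names, password, and key file path are hypothetical, and the exact flag names must be verified against the SVC Command-Line Interface User's Guide:

svctask mkuser -name peter -usergrp Administrator -password Passw0rd
svctask mkuser -name paula -usergrp Monitor -keyfile /tmp/paula_rsa.pub

The first user can log in to the GUI with a password; the second can access the CLI with the registered SSH key.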

Figure 2-14 on page 43 shows an overview of local authentication within the SVC.



Figure 2-14 Simplified overview of SVC local authentication

2.3.8 SVC remote authentication and single sign-on

You can configure an SVC cluster to use a remote authentication service. Remote users are those users that are managed by the remote authentication service.

Remote users only have to be defined in the SVC if command-line or file-transfer access is required. In that case, the remote authentication flag has to be set, and an SSH key and a password have to be defined for this user. Remember that for users requiring CLI access with remote authentication, defining the password locally for this user is mandatory.

Remote users cannot belong to any user group, because the remote authentication service, for example, a Lightweight Directory Access Protocol (LDAP) directory server, such as IBM Tivoli Directory Server or Microsoft® Active Directory, will deliver the user group information.

The upgrade from SVC 4.3.1 is seamless. Existing users and roles are migrated without interruption. Remote authentication can be enabled after the upgrade is complete.

Figure 2-15 on page 44 gives an overview of SVC remote authentication.



Figure 2-15 Simplified overview of SVC 5.1 remote authentication

The authentication service supported by SVC is the Tivoli Embedded Security Services server component, level 6.2. The Tivoli Embedded Security Services server provides the following two key features:

► Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in use, which means that the SVC communicates only with Tivoli Embedded Security Services to get its authentication information. The type of protocol that is used to access the central directory, and the kind of directory system that is used, are transparent to SVC.

► Tivoli Embedded Security Services provides a secure token facility that is used to enable single sign-on (SSO). SSO means that users do not have to log in multiple times when using what appears to them to be a single system. It is used within Tivoli Productivity Center. When the SVC Console is launched from within Tivoli Productivity Center, the user does not have to log on to the SVC Console, because the user has already logged in to Tivoli Productivity Center.

With reference to Figure 2-16 on page 45, the user starts application A with a user name and password (1), which are authenticated using the Tivoli Embedded Security Services server (2). The server returns a token (3), which is an opaque string that can only be interpreted by the Tivoli Embedded Security Services server. The server also supplies the user's groups and an expiry time stamp for the token. The client device (SVC in our case) is responsible for mapping a Tivoli Embedded Security Services user group to roles.

Application A needs to launch application B. Instead of getting the user to enter a new password to authenticate to application B, A passes B the Tivoli Embedded Security Services token (4). Application B passes the Tivoli Embedded Security Services token to the Tivoli Embedded Security Services server (5), which decodes the token and returns the user's ID and groups to application B (6), along with an expiry time stamp.



Figure 2-16 SSO with Tivoli Embedded Security Services

The token expiry time stamp that is returned by the server is advice to the Tivoli Embedded Security Services client applications A and B about credential caching. The applications are permitted to cache and use a token or user name-password combination until the time stamp expires.

So, in our example, application B can cache the fact that a particular token maps to a particular user ID and groups, which is a performance boost, because it saves the latency of querying the Tivoli Embedded Security Services server on each interaction between A and B. After the lifetime of the token has expired, application A must query the server again and obtain a new time stamp to rejuvenate the token (or alternatively discover that the credentials are now invalid).

The Tivoli Embedded Security Services server administrator can configure the length of time that is used to set expiry time stamps. This system is only effective if the Tivoli Embedded Security Services server and the applications have synchronized clocks.

Using a remote authentication service

Use the following steps to use SVC with a remote authentication service:

1. Configure the cluster with the location of the remote authentication server.

   You can change the settings with this command:
   svctask chauthservice ...

   You can view settings with this command:
   svcinfo lscluster ...


   SVC supports either an HTTP or HTTPS connection to the Tivoli Embedded Security Services server. If the HTTP option is used, the user and password information is transmitted in clear text over the IP network.

2. Configure user groups on the cluster to match the user groups that are used by the authentication service. For each group of interest that is known to the authentication service, there must be an SVC user group with the same name and the remote setting enabled.

   For example, you can have a group called sysadmins, whose members require the SVC Administrator role. Configure this group by using the command:

   svctask mkusergrp -name sysadmins -remote -role Administrator

   If none of a user's groups match any of the SVC user groups, the user is not permitted to access the cluster.

3. Configure users that do not require SSH access. Any SVC users that are to be used with the remote authentication service and do not require SSH access need to be deleted from the system. The superuser cannot be deleted; it is a local user and cannot use the remote authentication service.

4. Configure users that do require SSH access. Any SVC users that are to be used with the remote authentication service and do require SSH access must have their remote setting enabled and the same password set on the cluster and the authentication service. The remote setting instructs SVC to consult the authentication service for group information after the SSH key authentication step to determine the user's role. The need to configure the user's password on the cluster, in addition to the authentication service, is due to a limitation in the Tivoli Embedded Security Services server software.

5. Configure the system time. For correct operation, both the SVC cluster and the system running the Tivoli Embedded Security Services server must have exactly the same view of the current time; the easiest way is to have them both use the same Network Time Protocol (NTP) server. Failure to follow this step can lead to poor interactive performance of the SVC user interface or incorrect user-role assignments.
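Putting steps 2 and 4 together, a minimal sketch might look as follows; the user name, password, and key file are hypothetical, and the exact flags of mkuser for remote users must be verified against the SVC 5.1 CLI documentation:

svctask mkusergrp -name sysadmins -remote -role Administrator
svctask mkuser -name jane -remote -password Passw0rd -keyfile /tmp/jane_rsa.pub

Here, jane is expected to be a member of the sysadmins group on the directory server side, which maps her to the Administrator role on the SVC.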

Also, Tivoli Productivity Center 4.1 leverages the Tivoli Integrated Portal infrastructure and its underlying WebSphere® Application Server capabilities to make use of an LDAP registry and enable single sign-on (SSO).

You can obtain more information about implementing SSO within Tivoli Productivity Center 4.1 in Chapter 6 (LDAP authentication support and single sign-on) of the IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, at this Web site:

http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open

2.4 SVC hardware overview

The SVC 5.1 release also provides new, more powerful hardware nodes. As defined in the underlying COMPASS architecture, these new nodes are based on Intel® processors with standard PCI-X adapters to interface with the SAN and the LAN.

The new SVC 2145-CF8 Storage Engine has the following key hardware features:

► New SVC engine based on an Intel Core i7 2.4 GHz quad-core processor
► 24 GB of memory, with future growth possibilities
► Four 8 Gbps FC ports
► Up to four solid-state drives, enabling scale-out high performance solid-state drive support with SVC
► Two power supplies
► Double the bandwidth compared to its predecessor node (2145-8G4)
► Up to double the IOPS compared to its predecessor node (2145-8G4)
► A 19-inch rack-mounted enclosure
► IBM Systems Director Active Energy Manager-enabled

The new nodes can be smoothly integrated within existing SVC clusters. New nodes can be intermixed in pairs within existing SVC clusters. Mixing engine types in a cluster results in VDisk throughput characteristics of the engine type in that I/O Group. The cluster nondisruptive upgrade capability can be used to replace older engines with new 2145-CF8 engines.

The new nodes are 1U high, fit into 19-inch racks, and use the same uninterruptible power supply unit models as previous models. Integration into existing clusters requires that the cluster runs SVC 5.1 code. The only node that does not support SVC 5.1 code is the 2145-4F2-type node. An upgrade scenario for SVC clusters based on, or containing, these first-generation nodes will be available later this year. Figure 2-17 shows the front-side view of the new SVC 2145-CF8 node.

Figure 2-17 The SVC 2145-CF8 storage engine

Remember that several of the new features in the new SVC 5.1 release, such as iSCSI, are software features and are therefore available on all nodes supporting this release.

2.4.1 Fibre Channel interfaces

The IBM SAN Volume Controller provides the following FC interfaces on the node types:

► Supported link speed of 2/4/8 Gbps on SVC 2145-CF8 nodes
► Supported link speed of 1/2/4 Gbps on SVC 2145-8G4, SVC 2145-8A4, and SVC 2145-8F4 nodes

The nodes come with a 4-port HBA. The FC ports on these node types autonegotiate the link speed that is used with the FC switch. The ports normally operate at the maximum speed that is supported by both the SVC port and the switch. However, if a large number of link errors occur, the ports might operate at a lower speed than what is supported.

The actual port speed for each of the four ports can be displayed via the GUI, the CLI, the node's front panel, and also by light-emitting diodes (LEDs) that are placed at the rear of the node. For details, consult the node-specific SVC hardware installation guides:

► IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
► IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide, GC27-2219

► IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220
► IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware Installation Guide, GC27-2221

The SVC imposes no limit on the FC optical distance between SVC nodes and host servers. FC standards, along with small form-factor pluggable (SFP) optics capabilities and the cable type, dictate the maximum FC distances that are supported.

If you use longwave SFPs in the SVC node itself, the longest supported FC link between the SVC and the switch is 10 km (6.21 miles).

Table 2-5 shows the actual cable length that is supported with shortwave SFPs.

Table 2-5 Overview of supported cable length

FC link speed        OM1 (M6) standard       OM2 (M5) standard      OM3 (M5E) optimized
                     62.5/125 micrometers    50/125 micrometers     50/125 micrometers
2 Gbps FC            150 m                   300 m                  500 m
4 Gbps FC            70 m                    150 m                  380 m
8 Gbps FC limiting   21 m                    50 m                   150 m

Table 2-6 shows the rules that apply with respect to the number of inter-switch link (ISL) hops that are allowed in a SAN fabric between SVC nodes or the cluster.

Table 2-6 Number of supported ISL hops

Between nodes in an I/O Group:          0 (connect to the same switch)
Between nodes in separate I/O Groups:   1 (recommended: 0, connect to the same switch)
Between nodes and the disk subsystem:   1 (recommended: 0, connect to the same switch)
Between nodes and the host server:      Maximum 3

2.4.2 LAN interfaces

The 2145-CF8 node supports (as its predecessor nodes did) two 1 Gbps LAN ports. In SVC 4.3.1 and before, the SVC cluster presented a single IP interface, which was used by the SVC configuration interfaces (CLI and CIMOM). Although multiple physical nodes were present in the SVC cluster, only a single node (the configuration node) was active on the IP network. This configuration IP address was presented from the eth0 port of the configuration node.

If the configuration node failed, a separate node in the cluster took over the duties of the configuration node, and the IP address for the cluster was then presented at the eth0 port of that new configuration node. The configuration node has supported concurrent access on the IPv4 and IPv6 configuration addresses on the eth0 port from SVC 4.3 onward.

Starting with SVC 5.1, the cluster configuration node can now be accessed on either eth0 or eth1. The cluster can have two IPv4 and two IPv6 addresses that are used for configuration purposes (CLI or CIMOM access). The cluster can therefore be managed by SSH clients or GUIs on System Storage Productivity Centers on separate physical IP networks. This capability provides redundancy in the event of a failure of one of these IP networks.


Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each SVC node port; these IP addresses are independent of the cluster configuration IP addresses. Figure 2-12 on page 29 shows an overview.
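As a sketch of configuring the redundant, second configuration IP address from the CLI (the addresses are hypothetical, and the exact parameters of the chclusterip command must be verified against the SVC 5.1 CLI documentation):

svctask chclusterip -port 2 -clusterip 10.0.1.20 -gw 10.0.1.1 -mask 255.255.255.0

After this command, the cluster can also be reached for management on the network attached to eth1.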

2.5 Solid-state drives

You can use solid-state drives, or more specifically, single-layer cell (SLC) or multilayer cell (MLC) NAND Flash-based disks (for the sake of simplicity, we call them solid-state drives in the following chapters), to overcome a growing problem that is known as the memory/storage bottleneck.

2.5.1 Storage bottleneck problem

The memory/storage bottleneck describes the steadily growing gap between the time required for a CPU to access data located in its cache/memory (typically in nanoseconds) and data located on external storage (typically in milliseconds).

While CPUs and cache/memory devices continually improve their performance, this is not true in general for mechanical disks that are used as external storage. Figure 2-18 shows these access time differences.

Figure 2-18 The memory/storage bottleneck

The individual access times that are shown are not that important; what matters is the difference between accessing data that is located in cache and data that is located on external disk. We have added a second scale to Figure 2-18, which gives you an idea of how long it takes to access the data in a scenario where a single CPU cycle takes 1 second. This scale shows the importance of future storage technologies closing, or at least reducing, the gap between access times for data stored in cache/memory and access times for data stored on an external medium.



Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown remarkable development regarding capacity growth, form factor/size reduction, price decrease ($/GB), and reliability.

However, the number of I/Os that a disk can handle and the response time in which it can process a single I/O have not increased at the same rate, although they have certainly increased. In actual environments, we can expect from today's enterprise-class FC and serial-attached SCSI (SAS) disks up to 200 IOPS per disk, with an average response time (latency) of approximately 7 ms per I/O.
latency) of approximately 7 ms per I/O.<br />

To simplify: rotating disks are getting, and will continue to get, bigger in capacity (several TB), smaller in form factor/footprint (3.5 inches, 2.5 inches, and 1.8 inches), and less expensive ($/GB), but not necessarily faster.

The limiting factor is the number of revolutions per minute (rpm) that a disk can perform (currently 15,000). This factor defines the time that is required to access a specific data block on a rotating device. There might be small improvements in the future, but a big step, such as doubling the number of revolutions, if technically even possible, inevitably brings a massive increase in power consumption and price.

2.5.2 Solid-state drive solution

Solid-state drives can provide a solution for this dilemma. No rotating parts means improved robustness and lower power consumption. A remarkable improvement in I/O performance and a massive reduction in the average I/O response times (latency) are the compelling reasons to use solid-state drives in today's storage subsystems.

Enterprise-class solid-state drives typically deliver 50,000 read IOPS and 20,000 write IOPS, with typical latencies of 50 microseconds for reads and 800 microseconds for writes. Their form factors (2.5 inches/3.5 inches) and their interfaces (FC/SAS/Serial Advanced Technology Attachment (SATA)) make them easy to integrate into existing disk shelves.

Adding solid-state drives: Specific performance problems might be solved by carefully adding solid-state drives to an existing disk subsystem. But be aware that solving performance problems by using solid-state drives excessively in existing disk subsystems will inevitably create performance bottlenecks on the underlying RAID controllers.

2.5.3 Solid-state drive market

The solid-state drive storage market is rapidly evolving. The key differentiator among today's solid-state drive products on the market is not the storage medium, but the logic in the disks' internal controllers. Optimally handling what is referred to as wear-out leveling, which defines the controller's capability to ensure a device's durability, and closing the remarkable gap between read and write I/O performance are the top priorities in today's controller development.

Today's solid-state drive technology is only a first step into the world of high performance persistent semiconductor storage. A group of approximately 10 of the most promising technologies is collectively referred to as Storage Class Memory (SCM).

Storage Class Memory

SCM promises a massive improvement in performance (IOPS), areal density, cost, and energy efficiency compared to today's solid-state drive technology. IBM Research is actively engaged in these new technologies.



You can obtain details of nanoscale devices at this Web site:

http://www.almaden.ibm.com/st/nanoscale_st/nano_devices/

You can obtain details of Storage Class Memory at this Web site:

http://tinyurl.com/plk7as

You can read a comprehensive and worthwhile overview of solid-state drive technology in a subset of the well known Spring 2009 SNIA Technical Tutorials, which are available on the SNIA Web site:

http://www.snia.org/education/tutorials/2009/spring/solid

When these technologies become a reality, they will fundamentally change the architecture of today's storage infrastructures. The next topic describes integrating the first releases of this new technology into the SVC.

2.6 Solid-state drives in the SVC

The solid-state drives in the new 2145-CF8 nodes provide a new ultra-high-performance storage option. They are available in the 2145-CF8 nodes only. Solid-state drives can be pre-installed in the new nodes or installed as a field hardware upgrade on a per disk basis at a later point in time without interrupting service.

Solid-state drives include the following features:

► Up to four solid-state drives can be installed in each SVC 2145-CF8 node.
► An IBM PCIe SAS HBA is required on each node that contains a solid-state drive.
► Each solid-state drive is a 2.5-inch Serial Attached SCSI (SAS) drive.
► Each solid-state drive provides up to 140 GB of capacity.
► Solid-state drives are hot-pluggable and hot-swappable.

Up to four solid-state drives are supported per node, which provides up to 560 GB of usable solid-state drive capacity per node. Always install the same amount of solid-state drive capacity in both nodes of an I/O Group.

In a cluster running 5.1 code, node pairs with solid-state drives can be mixed with older node pairs, either with or without local solid-state drives installed.

This scalable architecture enables clients to take advantage of the throughput capabilities of the solid-state drives. The following performance is available per I/O Group (from solid-state drives only):

► IOPS: 200 K reads, 80 K writes, and 56 K with a 70/30 read/write mix
► MBps: 800 MBps reads and 400 MBps writes

SSDs are local drives in an SVC node and are presented as MDisks to the SVC cluster. They belong to an SVC internal controller. These controller objects have the worldwide node name (WWNN) of the node in question, but they are reported as standard controller objects that can be renamed by the user. SVC reserves eight of these controller objects for the internal SSD controllers.



MDisks based on SSDs can be identified by showing their attributes via the GUI or CLI. For these MDisks, the attributes Node ID and Node Name are set. In all other MDisk views, these attributes are blank.
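As a quick illustration, the following hedged CLI sketch lists MDisk attributes from an SSH session. The commands exist in the SVC CLI, but the field names in the comments (node_id, node_name) are taken from the text above, and the MDisk name is hypothetical; verify both on your code level.

# concise view of all MDisks; SSD-based MDisks carry node attributes
svcinfo lsmdisk -delim :
# detailed view of a single (hypothetical) SSD MDisk; the node_id and
# node_name fields identify the owning 2145-CF8 node
svcinfo lsmdisk mdisk8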

2.6.1 Solid-state drive configuration rules

You must follow the SVC solid-state drive configuration rules for nodes, I/O Groups, and clusters:

► Nodes that contain solid-state drives can coexist in a single SVC cluster with any other supported nodes.
► Do not combine nodes that contain solid-state drives and nodes that do not contain solid-state drives in a single I/O Group. It is acceptable to temporarily mix node types in an I/O Group while upgrading SVC node hardware from an older model to the 2145-CF8.
► Nodes that contain solid-state drives in a single I/O Group must share the same solid-state drive capacities.
► Quorum functionality is not supported on solid-state drives within SVC nodes.

You must follow the SVC solid-state drive configuration rules for MDisks and MDisk groups:

► Each solid-state drive is recognized by the cluster as a single MDisk.
► For each node that contains solid-state drives, create a single MDisk group that includes only the solid-state drives that are installed in that node.

Terminology: An MDG using solid-state drives contained within an SVC node is referred to as SVC solid-state drive storage throughout this book. The configuration rules given in this book apply to SVC solid-state drive storage. Do not confuse this term with solid-state drive storage that is contained in SAN-attached storage controllers, such as the IBM DS8000 or DS5000.

When you add a new solid-state drive to an MDisk group (move it from unmanaged to managed mode), the solid-state drive is automatically formatted and set to a block size of 512 bytes.
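To make the one-MDG-per-node rule concrete, here is a minimal hedged sketch; all object names are hypothetical, and the extent size is only an example value to check against your capacity planning.

# one MDG per node, each containing only that node's SSD MDisks
svctask mkmdiskgrp -name SSD_node1 -ext 256 -mdisk ssd_n1_0:ssd_n1_1
svctask mkmdiskgrp -name SSD_node2 -ext 256 -mdisk ssd_n2_0:ssd_n2_1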

You must follow these configuration rules for VDisks using storage from solid-state drives within SVC nodes:

► VDisks using SVC solid-state drive storage must be created in the I/O Group where the solid-state drives physically reside.
► VDisks using SVC solid-state drive storage must be mirrored to another MDG to provide fault tolerance. There are two supported mirroring configurations:
  – For the highest performance, the two VDisk copies must be created in the two MDGs that correspond to the SVC solid-state drive storage in the two nodes of the same I/O Group. The recommended solid-state drive configuration for highest performance is shown in Figure 2-19 on page 54.
  – For the best utilization of the solid-state drive capacity, the primary VDisk copy must be placed on SVC solid-state drive storage, and the secondary copy can be placed on Tier 1 storage, such as an IBM DS8000. Under certain failure scenarios, the performance of the VDisk will degrade to the performance of the non-solid-state drive storage. All read I/Os are sent to the primary copy of a mirrored VDisk; therefore, reads will experience solid-state drive performance. Write I/Os are mirrored to both locations, so performance will match the speed of the slowest copy. The recommended solid-state drive configuration for the best solid-state drive capacity utilization is shown in Figure 2-20 on page 55.
► To balance the read workload, evenly split the primary and secondary VDisk copies across the nodes that contain solid-state drives.
► The preferred node of the VDisk must be the same node that contains the solid-state drives that are used by the primary VDisk copy.

Important: For VDisks that are provisioned out of SVC solid-state drive storage, VDisk Mirroring is mandatory to maintain access to the data that is stored on solid-state drives if one of the nodes in the I/O Group is being serviced or fails.
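The following hedged sketch shows both supported mirroring layouts with a single mkvdisk call per VDisk. The -copies option and the colon-separated MDG list exist for VDisk Mirroring, but all names and sizes here are hypothetical, so verify the exact syntax against the Command-Line Interface User's Guide for your release.

# highest-performance layout: one copy in each node's SSD MDG of the same I/O Group
svctask mkvdisk -iogrp io_grp0 -mdiskgrp SSD_node1:SSD_node2 -copies 2 -size 100 -unit gb -name ssd_vd01
# best-capacity layout: primary copy on SSD, secondary copy on Tier 1 storage
svctask mkvdisk -iogrp io_grp0 -mdiskgrp SSD_node1:DS8000_grp1 -copies 2 -size 100 -unit gb -name ssd_vd02
# ensure reads are served from the SSD copy (copy 0 here); verify the
# -primary option on your code level
svctask chvdisk -primary 0 ssd_vd02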

Remember that VDisks that are based on SVC solid-state drive storage are always presented by the I/O Group and, during normal operation, by the node to which the solid-state drives belong. These rules are designed to direct all host I/O to the node containing the relevant solid-state drives.

Existing VDisks can be migrated online to SVC solid-state drive storage. It might be necessary to move the VDisk into the correct I/O Group first, which requires quiescing I/O to this VDisk during the move.
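A hedged sketch of that two-step move follows; chvdisk and migratevdisk exist in the SVC CLI, but the option spellings and the object names here are illustrative, not authoritative.

# step 1 (requires quiesced host I/O): move the VDisk into the I/O Group
# that owns the target solid-state drives
svctask chvdisk -iogrp io_grp0 app_vd05
# step 2 (online): migrate the VDisk extents onto the node's SSD MDG
svctask migratevdisk -mdiskgrp SSD_node1 -vdisk app_vd05 -threads 4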

Figure 2-19 on page 54 shows the recommended solid-state drive configuration for the highest performance.

Figure 2-19 Solid-state drive configuration for highest performance

For a read-intensive application, mirrored VDisks can keep their secondary copy on a SAN-based MDG, such as an IBM DS8000 providing Tier 1 storage resources to an SVC cluster.

Because all read I/Os are sent to the primary copy (which is set to the solid-state drive copy), reasonable performance occurs as long as the Tier 1 storage can sustain the write I/O rate. Performance will decrease if the primary copy fails. Ensure that the node on which the primary VDisk copy resides is also the preferred node for the VDisk. Figure 2-20 on page 55 shows the recommended solid-state drive configuration for the best capacity utilization.

Figure 2-20 Recommended solid-state drive configuration for best solid-state drive capacity utilization

Remember these considerations when using SVC solid-state drive storage:

► I/O requests to solid-state drives that are in other nodes are automatically forwarded. However, this forwarding introduces additional delays. Try to avoid these configurations by following the configuration rules.
► Be careful when migrating image mode VDisks to SVC solid-state drive storage or when deleting a copy of a mirrored VDisk based on SVC solid-state drive storage. In all of the scenarios where your data is stored in one single solid-state drive-based MDG, your data is no longer protected against node or disk failures.
► If you delete or replace nodes containing local solid-state drives from a cluster, remember that the data stored on their solid-state drives might have to be decommissioned.
► If you shut down a node that contains SVC solid-state drive storage with VDisks that have no mirrors on another node or storage system, you will lose access to any VDisks that are associated with that SVC solid-state drive storage. A force option is provided to prevent an unintended loss of access (see the sketch that follows this list).
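As an illustration of that last point, here is a minimal sketch of shutting down a single node from the CLI; stopcluster with a node argument exists in the SVC CLI, but treat the exact flag spellings, and especially the -force semantics, as assumptions to verify before use on a production cluster.

# shut down only node 2; the command is refused if it would remove the last
# access path to VDisks that depend on this node's solid-state drives
svctask stopcluster -node 2
# -force overrides the check and accepts the resulting loss of access
# (assumption: verify this flag and its semantics on your code level)
svctask stopcluster -force -node 2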

SVC 5.1 provides the functionality to upgrade the solid-state drive firmware.

For details, see the IBM System Storage SAN Volume Controller Software Installation and Configuration Guide, SC23-6628.

2.6.2 SVC 5.1 supported hardware list, device driver, and firmware levels

With the SVC 5.1 release, as in every release, IBM offers functional enhancements and new hardware that can be integrated into existing or new SVC clusters, as well as interoperability enhancements or new support for servers, SAN switches, and disk subsystems. See the most current information at this Web site:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277

2.6.3 SVC 4.3.1 features

Before we introduce the new features of SVC 5.1, we review the features that were added with Release 4.3.1:

► New node type 2145-8A4: The Entry Edition hardware comes with functionality that is identical to the 2145-8G4 nodes: 8 GB memory and four 4 Gbps FC interfaces. The 2145-8A4 nodes provide approximately 60% of the performance of the actual 2145-8G4 nodes. The 2145-8A4 is an ideal choice for entry-level solutions with reduced performance requirements, but without any functional restrictions. It uses physical disk-based licensing.

► Embedded CIMOM: The CIMOM, and the associated SVC CIM Agent, is the software component that provides the industry-standard CIM protocol as a management interface to SVC. Up to SVC 4.3.0, the CIMOM ran on the SVC Master Console, which was replaced in SVC 4.2.0 by the System Storage Productivity Center-based management console. The System Storage Productivity Center is an integrated package of hardware and software that provides all of the management software (SVC CIMOM and SVC GUI) that is required to manage the SVC, as well as components for managing other storage systems.

  Clients can continue to use either the Master Console or the IBM System Storage Productivity Center to manage SVC 4.3.1. In addition, the software components required to manage the SVC (SVC CIMOM and SVC GUI) are provided by IBM in software form, allowing clients that have a suitable hardware platform to build their own Master Console.

► Windows Server 2008 support for the SVC GUI and Master Console

► IBM System Storage Productivity Center 1.3 support

► NTP synchronization: The SVC cluster time operates in one of two exclusive modes:

  – Default mode, in which the cluster uses the configuration node's system clock
  – NTP mode, in which the cluster uses an NTP time server as its time source and adjusts the configuration node's system clock according to time values obtained from the NTP server. When operating in NTP mode, the SVC cluster logs an error if an NTP server is unavailable. (A hedged configuration sketch follows this list.)

► Performance enhancement for overlapped Global Mirror writes
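For the NTP feature, a minimal configuration sketch follows, assuming the -ntpip parameter of chcluster; both the parameter spelling and the server address are assumptions to verify against the CLI documentation for your release.

# switch the cluster from default mode to NTP mode
# (assumption: the -ntpip parameter; verify before use)
svctask chcluster -ntpip 10.11.12.5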

2.6.4 New with SVC 5.1

Note: With SVC 5.1, the usage of the embedded CIMOM is mandatory. We therefore recommend, when upgrading, that you first switch the existing configurations from the Master Console/IBM System Storage Productivity Center-based CIMOM to the embedded CIMOM (remember to update the Tivoli Productivity Center configuration if it is in use). Then, upgrade the Master Console/IBM System Storage Productivity Center, and finally, upgrade the SVC cluster.

We have already described most of the new features that are available with SVC Release 5.1. This list summarizes the new features:

► New hardware nodes (CF8)

  SVC 5.1 offers a new SVC engine that is based on the IBM System x3550 M2 server with an Intel Core i7 2.4 GHz quad-core processor. It provides 24 GB of cache (with future growth possibilities) and four 8 Gbps FC ports.

  It provides support for solid-state drives (up to four per SVC node), enabling scale-out high-performance solid-state drive support with SVC. The new nodes can be intermixed in pairs with other engines in SVC clusters. We describe the details in 2.4, "SVC hardware overview" on page 46.

► 64-bit kernel in Model 8F2 and later

  The SVC software kernel has been upgraded to take advantage of the 64-bit hardware on SVC nodes. Model 4F2 is not supported with SVC 5.1 software, but it is supported with SVC 4.3.x software. The 2145-8A4 is an effective replacement for the 4F2, and it doubles the performance of the 4F2.

  Going to 64-bit mode improves performance capability. It allows for a cache increase (24 GB) in the 2145-CF8 and will be used in future SVC releases for cache increases and other expansion options.

► Solid-state disk support

  Optional solid-state drives in SVC engines provide a new ultra-high-performance storage option. Up to four solid-state drives (140 GB each, larger in the future) can be added to a node. This capability provides up to 560 GB of usable solid-state drive capacity per I/O Group, or more than 2 TB in an 8-node SVC cluster. The SVC's scalable architecture enables clients to take advantage of the throughput capabilities of the solid-state drives. The solid-state drives are fully integrated into the SVC architecture. VDisks can be migrated to and from solid-state drive VDisks without application disruption. FlashCopy can be used for backup or to copy data to solid-state drive VDisks.

  We describe details in 2.5, "Solid-state drives" on page 49.

► iSCSI support

  SVC 5.1 provides native attachment to SVC for host systems using the iSCSI protocol. This iSCSI support is a software feature, and it is supported on older SVC nodes that support SVC 5.1. iSCSI is not used for storage attachment, for SVC cluster-to-cluster communication, or for communication between the SVC engines in a cluster. These functions are still performed via FC.

  We describe the details in 2.2.10, "iSCSI overview" on page 26.

► Multiple relationships for synchronous data mirroring (Metro Mirror)

  Multiple cluster mirroring enables Metro Mirror (MM) and Global Mirror (GM) relationships to exist between a maximum of four SVC clusters. Remember that a VDisk can be in only one MM/GM relationship.

  The creation of up to 8,192 Metro Mirror and Global Mirror relationships is supported. The single relationships are individually controllable (create/delete and start/stop).

  We describe the details in "Synchronous/Asynchronous remote copy" on page 31.

► Enhancements to FlashCopy and support for reverse FlashCopy

  SVC 5.1 enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. Multiple targets, and thus multiple rollback points, are supported.

  We describe the details in 2.2.16, "FlashCopy" on page 33.

► Zero detection

  Zero detection provides the means to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk using VDisk Mirroring. To migrate from a fully allocated VDisk to a Space-Efficient VDisk, add the target space-efficient copy, wait for synchronization to complete, and then remove the source fully allocated copy.

  We describe the details in 2.2.7, "Mirrored VDisk" on page 21.

► User authentication changes

  SVC 5.1 supports remote authentication and single sign-on (SSO) by using an external service running on the IBM System Storage Productivity Center. The external service is the Tivoli Embedded Security Services component installed on the IBM System Storage Productivity Center. Current local authentication methods are still supported.

  We describe the details in 2.3.5, "User authentication" on page 40.

► Reliability, availability, and serviceability (RAS) enhancements

  In addition to the existing SVC e-mail and SNMP trap facilities, SVC 5.1 adds syslog error event logging for those clients that are already using syslog in their configurations. This feature enables optional transmission over a syslog interface to a remote syslog daemon when parsing the Error Event Log. The format and content of messages sent to a syslog server are identical to the format and content of messages that are transmitted in an SNMP trap message. (A hedged configuration sketch follows this list.)
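For the syslog enhancement, a hedged sketch follows; mksyslogserver is the SVC 5.1 command for defining syslog targets, but the option names and values shown here are assumptions to verify against the CLI guide.

# define a remote syslog daemon as an additional error notification target
svctask mksyslogserver -name site_syslog -ip 10.11.12.9 -error on -warning on
# list the configured syslog servers
svcinfo lssyslogserver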

2.7 Maximum supported configurations

For a list of the maximum supported configurations, visit the SVC support site at this Web site:

http://www.ibm.com/storage/support/2145

Several limits have been removed with SVC 5.1, but not all of them. The following list gives an overview of the most important limits. For details, always consult the SVC support site:

► iSCSI support: All host iSCSI names are converted to an internally generated WWPN (one per iSCSI name per I/O Group). Each iSCSI name in an I/O Group consumes one WWPN that otherwise is available for a "real" FC WWPN. So, the limits for ports per I/O Group, cluster, and host object remain the same, but these limits are now shared between FC WWPNs and iSCSI names.

► The limit on the number of cluster partnerships has been lifted from one to a maximum of three, which means that a single SVC cluster can have partnerships with up to three clusters at the same time.

► Remote Copy (RC):

  – The number of RC relationships has increased from 1,024 to 8,192. Remember that a single VDisk at a single point in time can be a member of exactly one RC relationship.
  – The number of RC relationships per RC consistency group has also increased to 8,192.

► VDisk: A VDisk can contain a maximum of 2^17 (or 131,072) extents. With an extent size of 2 GB, the maximum VDisk size is therefore 131,072 x 2 GB = 256 TB.

2.8 Useful SVC links

The SVC Support Page is at this Web site:

http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1

SVC online documentation is at this Web site:

http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp

You can see the IBM Redbooks publications about SVC at this Web site:

http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC

2.9 Commonly encountered terms

Channel extender
A channel extender is a device for long-distance communication connecting other SAN fabric components. Generally, channel extenders can involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or another long-distance communication protocol.

Cluster
A cluster is a group of 2145 nodes that presents a single configuration and service interface to the user.

Consistency group
A consistency group is a group of VDisks that have copy relationships that need to be managed as a single entity.

Copied
Copied is a FlashCopy state that indicates that a copy has been triggered after the copy relationship was created. The copy process is complete, and the target disk has no further dependence on the source disk. The time of the last trigger event is normally displayed with this status.

Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide configuration and service functions over the network interface. This node is termed the configuration node. The configuration node manages a cache of the configuration information that describes the cluster configuration and provides a focal point for configuration commands. If the configuration node fails, another node in the cluster assumes the role.

Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN provides all of the connectivity of the redundant SAN, but without the 100% redundancy. An SVC node is typically connected to a redundant SAN made out of two counterpart SANs. A counterpart SAN is often called a SAN fabric.

Error code
An error code is a value used to identify an error condition to a user. This value might map to one or more error IDs or to values that are presented on the service panel. This value is used to report error conditions to IBM and to provide an entry point into the service guide.

Error ID
An error ID is a value that is used to identify a unique error condition detected by the 2145 cluster. An error ID is used internally in the cluster to identify the error.

Excluded
Excluded is a status condition that describes an MDisk that the 2145 cluster has decided is no longer sufficiently reliable to be managed by the cluster. The user must issue a command to include the MDisk in the cluster-managed storage.

Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between MDisks and VDisks.

FC port logins
FC port logins is the number of hosts that can see any one SVC node port. Certain disk subsystems, such as the IBM DS8000, recommend limiting the number of hosts that use each port, to prevent excessive queuing at that port. Clearly, if the port fails or the path to that port fails, the host might fail over to another port, and the fan-in criteria might be exceeded in this degraded mode.

Front end and back end
The SVC takes MDisks and presents these MDisks to application servers (hosts). The MDisks are looked after by the "back-end" application of the SVC. The VDisks presented to hosts are looked after by the "front-end" application in the SVC.

Field replaceable units
Field replaceable units (FRUs) are individual parts, which are held as spares by the service organization.

Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KB or 256 KB) in the SVC. It is also the unit by which the real size of a Space-Efficient VDisk is extended (32, 64, 128, or 256 KB).

Host bus adapter
A host bus adapter (HBA) is an interface card that connects a host bus, such as a Peripheral Component Interconnect (PCI) bus, to the SAN.

Host ID
A host ID is a numeric identifier assigned to a group of host FC ports or iSCSI host names for the purposes of LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to VDisks. The intent is to have a one-to-one relationship between hosts and host IDs, although this relationship cannot be policed.

IQN (iSCSI qualified name)
Special names refer to both iSCSI initiators and targets. IQN is one of the three name formats that iSCSI provides. The format is iqn.yyyy-mm.{reversed domain name}; for example, the default for an SVC node is iqn.1986-03.com.ibm:2145.<clustername>.<nodename>.

iSNS (Internet storage name service)
The Internet storage name service (iSNS) protocol allows automated discovery, management, and configuration of iSCSI and FC devices. It is defined in RFC 4171.

Image mode
Image mode is a configuration mode similar to the router mode but with the addition of cache and copy functions. SCSI commands are not forwarded directly to the MDisk.

I/O Group
An I/O Group is a collection of VDisk and node relationships, that is, an SVC node pair that presents a common interface to host systems. Each SVC node is associated with exactly one I/O Group. The two nodes in the I/O Group provide access to the VDisks in the I/O Group.

ISL hop
An inter-switch link (ISL) is a connection between two switches and is counted as an "ISL hop." The number of "hops" is always counted on the shortest route between two N-ports (device connections). In an SVC environment, the number of ISL hops is counted on the shortest route between the pair of nodes farthest apart. It measures distance only in terms of ISLs in the fabric.

Local fabric
Because the SVC supports remote copy, there might be significant distances between the components in the local cluster and those components in the remote cluster. The local fabric is composed of those SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the local cluster together.

Local and remote fabric interconnect
The local fabric interconnect and the remote fabric interconnect are the SAN components that are used to connect the local and remote fabrics. They can be single-mode optical fibers that are driven by high-power gigabit interface converters (GBICs) or SFPs, or more sophisticated components, such as channel extenders or special SFP modules that are used to extend the distance between SAN components.

LU and LUN
LUN is formally defined by the SCSI standards as a logical unit number. It is used as an abbreviation for an entity that exhibits disk-like behavior, for example, a VDisk or an MDisk.

Managed disk (MDisk)
An MDisk is a SCSI disk that is presented by a RAID controller and that is managed by the cluster. The MDisk is not visible to host systems on the SAN.

Managed Disk Group (MDiskgrp or MDG)
A Managed Disk Group is a collection of MDisks that jointly contains all of the data for a specified set of VDisks.

Managed space mode
The managed space mode is a configuration mode that is similar to image mode but with the addition of space management functions.

Master Console (MC)
The Master Console is the platform on which the software used to manage the SVC runs. With Version 4.3, it is being replaced by the System Storage Productivity Center. However, V4.3 GUI console code is supported on existing Master Consoles.

Node
A node is a single processing unit, which provides virtualization, cache, and copy services for the SAN. SVC nodes are deployed in pairs called I/O Groups. One node in the cluster is designated the configuration node.

Oversubscription
Oversubscription is the ratio of the sum of the traffic on the initiator N-port connection (or connections) to the traffic on the most heavily loaded ISL, where more than one connection is used between switches. Oversubscription assumes a symmetrical network and a specific workload applied evenly from all initiators and directed evenly to all targets. A symmetrical network means that all of the initiators are connected at the same level, and all of the controllers are connected at the same level.

Prepare
Prepare is a configuration command that is used to cause cached data to be flushed in preparation for a copy trigger operation.

RAS
RAS stands for reliability, availability, and serviceability.

RAID
RAID stands for redundant array of independent disks.

Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF); no matter what component fails, data traffic will continue. Connectivity between the devices within the SAN is maintained, although possibly with degraded performance, when an error has occurred. A redundant SAN design is normally achieved by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so that if one counterpart SAN is destroyed, the other counterpart SAN keeps functioning.

Remote fabric
Because the SVC supports remote copy, there might be significant distances between the components in the local cluster and those components in the remote cluster. The remote fabric is composed of those SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the remote cluster together.

SAN
SAN stands for storage area network.

SAN Volume Controller
The IBM System Storage SAN Volume Controller is a SAN-based appliance designed for attachment to a variety of host computer systems, which carries out block-level virtualization of disk storage.

SCSI
SCSI stands for Small Computer Systems Interface.

Service Location Protocol
The Service Location Protocol (SLP) is a service discovery protocol that allows computers and other devices to find services in a local area network without prior configuration. It is defined in RFC 2608.

IBM System Storage Productivity Center
The IBM System Storage Productivity Center replaces the Master Console for new installations of SAN Volume Controller Version 4.3.0. For IBM System Storage Productivity Center planning, installation, and configuration information, see the following Web site:

http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

Virtual disk (VDisk)
A virtual disk (VDisk) is an SVC device that appears to host systems attached to the SAN as a SCSI disk. Each VDisk is associated with exactly one I/O Group.

Chapter 3. Planning and configuration

In this chapter, we describe the steps that are required when planning the installation of an IBM System Storage SAN Volume Controller (SVC) in your storage network. We look at the implications for your storage network and discuss performance considerations.


3.1 General planning rules

To achieve the most benefit from the SVC, pre-installation planning must include several important steps. These steps ensure that the SVC provides the best possible performance, reliability, and ease of management for your application needs. Proper configuration also helps minimize downtime by avoiding changes to the SVC and the storage area network (SAN) environment to meet future growth needs.

Tip: The IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551, contains comprehensive information that goes into greater depth regarding the topics that we discuss here.

We also go into much more depth about these topics in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this Web site:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Planning the SVC requires that you follow these steps:

1. Collect and document the number of hosts (application servers) to attach to the SVC, the traffic profile activity (read or write, sequential or random), and the performance requirements (I/Os per second (IOPS)).
2. Collect and document the storage requirements and capacities:
   – The total back-end storage already present in the environment to be provisioned on the SVC
   – The total back-end new storage to be provisioned on the SVC
   – The required virtual storage capacity that is used as a fully managed virtual disk (VDisk) and used as a Space-Efficient VDisk
   – The required storage capacity for local mirror copy (VDisk Mirroring)
   – The required storage capacity for point-in-time copy (FlashCopy)
   – The required storage capacity for remote copy (Metro and Global Mirror)
   – Per host: storage capacity, the host logical unit number (LUN) quantity, and sizes
3. Define the local and remote SAN fabrics and clusters, if a remote copy or a secondary site is needed.
4. Define the number of clusters and the number of pairs of nodes (between 1 and 4) for each cluster. Each pair of nodes (an I/O Group) is the container for the VDisks. The number of necessary I/O Groups depends on the overall performance requirements.
5. Design the SAN according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the host and the SVC, between the SVC and the disk subsystem, between the SVC nodes, and for the inter-switch links (ISLs) between the local and remote fabrics.
6. Design the iSCSI network according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the host and the SVC.
7. Determine the SVC service IP address and the IP address of the IBM System Storage Productivity Center (SVC Console).
8. Determine the IP addresses for the SVC cluster and for the hosts that connect via iSCSI.
9. Define a naming convention for the SVC nodes, the hosts, and the storage subsystems.
10. Define the managed disks (MDisks) in the disk subsystem.
11. Define the Managed Disk Groups (MDGs). The MDGs depend on the disk subsystems in place and the data migration needs.
12. Plan the logical configuration of the VDisks between the I/O Groups and the MDGs in such a way as to optimize the I/O load between the hosts and the SVC. You can set up an equal repartition of all of the VDisks between the nodes or a repartition that takes into account the expected load from the hosts.
13. Plan for the physical location of the equipment in the rack.

SVC planning can be categorized into two types:

► Physical planning
► Logical planning

3.2 Physical planning

There are several key factors to consider when performing the physical planning of an SVC installation. The physical site must have the following characteristics:

► Power, cooling, and location requirements are present for the SVC and the uninterruptible power supply units.
► SVC nodes and their uninterruptible power supply units must be in the same rack.
► We suggest that you place SVC nodes belonging to the same I/O Group in separate racks.
► Plan for two separate power sources if you have ordered a redundant AC power switch (available as an optional feature).
► An SVC node is one Electronic Industries Association (EIA) unit high.
► Each uninterruptible power supply unit that comes with SVC V5.1 is one EIA unit high. The uninterruptible power supply unit shipped with earlier versions of the SVC is two EIA units high.
► The IBM System Storage Productivity Center (SVC Console) is two EIA units high: one unit for the server and one unit for the keyboard and monitor.
► Other hardware devices can be in the same rack as the SVC, such as the IBM System Storage DS4000, IBM System Storage DS6000, SAN switches, an Ethernet switch, and other devices.
► Consider the maximum power rating of the rack; it must not be exceeded.

In Figure 3-1, we show two 2145-CF8 SVC nodes.

Figure 3-1 2145-CF8 SVC nodes

3.2.1 Preparing your uninterruptible power supply unit environment

Ensure that your physical site meets the installation requirements for the uninterruptible power supply unit.

Uninterruptible power supply unit: The 2145 UPS-1U is a Powerware 5115.

2145 UPS-1U

The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high and is shipped with, and can only operate with, the following node types:

► SAN Volume Controller 2145-CF8
► SAN Volume Controller 2145-8A4
► SAN Volume Controller 2145-8G4
► SAN Volume Controller 2145-8F2
► SAN Volume Controller 2145-8F4

It was also shipped with, and will operate with, the SVC 2145-4F2.

When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 – 240 V, single phase.

Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external protection.

3.2.2 Physical rules

The SVC must be installed in pairs to provide high availability, and each node in the cluster must be connected to a separate uninterruptible power supply unit. Figure 3-2 shows an example of power connections for the 2145-8G4.

Figure 3-2 Node uninterruptible power supply unit setup

Be aware of these considerations:

► Each SVC node of an I/O Group must be connected to a separate uninterruptible power supply unit.
► Each uninterruptible power supply unit pair that supports a pair of nodes must be connected to a separate power domain (if possible) to reduce the chances of input power loss.
► The uninterruptible power supply units, for safety reasons, must be installed in the lowest positions in the rack. If necessary, move lighter units toward the top of the rack to make way for the uninterruptible power supply units.
► The power and serial connections from a node must be connected to the same uninterruptible power supply unit; otherwise, the node will not start.
► The 2145-CF8, 2145-8A4, 2145-8G4, 2145-8F2, and 2145-8F4 hardware models must be connected to a 5115 uninterruptible power supply unit. They will not start with a 5125 uninterruptible power supply unit.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

Figure 3-3 on page 70 shows ports for the 2145-CF8.

Figure 3-3 Ports for the 2145-CF8

Figure 3-4 on page 71 shows a power cabling example for the 2145-CF8.

Figure 3-4 2145-CF8 power cabling

There are guidelines to follow for Fibre Channel (FC) cable connections. Occasionally, the introduction of a new SVC hardware model means that there are internal changes. One example is the worldwide port name (WWPN) mapping of the ports. The 2145-8G4 and 2145-CF8 have the same mapping.

Figure 3-5 on page 72 shows the WWPN mapping.

Figure 3-5 WWPN mapping

Figure 3-6 on page 73 shows a sample layout within a separate rack.

Figure 3-6 Sample rack layout

We suggest that you place the racks in separate rooms, if possible, in order to gain protection against critical events (fire, water, power loss, and so on) that might affect one room only. Remember the maximum distance that is supported between the nodes in one I/O Group: 100 m (328 ft.). You can extend this distance by submitting a formal SCORE request to increase the limit, by following the rules that will be specified in any SCORE approval.

3.2.3 Cable connections

Create a cable connection table or documentation following your environment's documentation procedure to track all of the connections that are required for the setup:

► Nodes
► Uninterruptible power supply unit
► Ethernet
► iSCSI connections
► FC ports
► IBM System Storage Productivity Center (SVC Console)

3.3 Logical planning

For logical planning, we cover these topics:

► Management IP addressing plan
► SAN zoning and SAN connections
► iSCSI IP addressing plan
► Back-end storage subsystem configuration
► SVC cluster configuration
► MDG configuration
► VDisk configuration
► Host mapping (LUN masking)
► Advanced copy functions
► SAN start-up support
► Data migration from non-virtualized storage subsystems
► SVC configuration backup procedure

3.3.1 Management IP addressing plan

For management, remember these rules:

► In addition to an FC connection, each node has an Ethernet connection for configuration and error reporting.
► Each SVC cluster needs at least two IP addresses. The first IP address is used for management, and the second IP address is used for service. The service IP address becomes usable only when the SVC cluster is in service mode; remember that service mode is a disruptive operation. Both IP addresses must be in the same IP subnet, as shown in Example 3-1.

Example 3-1 Management IP address sample
management IP add. 10.11.12.120
service IP add.    10.11.12.121

► Each node in an SVC cluster needs to have at least one Ethernet connection.
► IBM supports the option of having multiple console access, using the traditional SVC hardware management console (HMC) or the IBM System Storage Productivity Center console. Multiple Master Consoles or IBM System Storage Productivity Center consoles can access a single cluster, but when multiple Master Consoles access one cluster, you cannot concurrently perform configuration and service tasks.
► The Master Console can be supplied either on pre-installed hardware, or as software that is supplied to and subsequently installed by the user.

With SVC 5.1, the cluster configuration node can now be accessed on both Ethernet ports, which means that the cluster can have two IPv4 addresses and two IPv6 addresses that are used for configuration purposes.

Figure 3-7 on page 75 shows the IP configuration possibilities.

Figure 3-7 IP configuration possibilities

The cluster can therefore be managed by IBM System Storage Productivity Centers on separate networks, which provides redundancy in the event of a failure of one of these networks.

Support for iSCSI introduces one additional IPv4 address and one additional IPv6 address for each Ethernet port on every node; these IP addresses are independent of the cluster configuration IP addresses. The command-line interface (CLI) commands for managing the cluster IP addresses have therefore been moved from svctask chcluster to svctask chclusterip in SVC 5.1, and new commands have been introduced to manage the iSCSI IP addresses.
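The following hedged sketch shows the shape of these commands; chclusterip and cfgportip exist in the SVC 5.1 CLI, but the exact option spellings and the addresses used here are assumptions to verify against the CLI documentation.

# change the cluster management IP address that is bound to Ethernet port 1
svctask chclusterip -clusterip 10.11.12.120 -port 1
# assign a node-local iSCSI address to Ethernet port 1 of node1
# (independent of the cluster configuration addresses)
svctask cfgportip -node node1 -ip 10.11.12.130 -mask 255.255.255.0 -gw 10.11.12.1 1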

When connecting to the SVC with Secure Shell (SSH), choose one of the available IP addresses to connect to. There is no automatic failover capability; if one network is down, use the other IP address.

Clients might be able to use intelligence in domain name servers (DNS) to provide partial failover.

When using the GUI, clients can add the cluster to the SVC Console multiple times (one time per IP address). Failover is achieved by using the functional IP address when launching the SVC Console interface.

3.3.2 <strong>SAN</strong> zoning and <strong>SAN</strong> connections<br />

<strong>SAN</strong> storage systems using <strong>the</strong> SVC can be configured with two, or up to eight, SVC nodes,<br />

arranged in an SVC cluster. These SVC nodes are attached to <strong>the</strong> <strong>SAN</strong> fabric, along with disk<br />

subsystems and host systems. The <strong>SAN</strong> fabric is zoned to allow <strong>the</strong> SVCs to “see” each<br />

o<strong>the</strong>r’s nodes and <strong>the</strong> disk subsystems, and for <strong>the</strong> hosts to “see” <strong>the</strong> SVCs. The hosts are<br />

not able to directly “see” or operate LUNs on <strong>the</strong> disk subsystems that are assigned to <strong>the</strong><br />

SVC cluster. The SVC nodes within an SVC cluster must be able to see each o<strong>the</strong>r and all of<br />

<strong>the</strong> storage that is assigned to <strong>the</strong> SVC cluster.<br />

The zoning capabilities of the SAN switch are used to create these distinct zones. SVC 5.1 supports 2 Gbps, 4 Gbps, or 8 Gbps FC fabrics, depending on the hardware platform and on the switch to which the SVC is connected.

In an environment where the fabric contains switches of multiple speeds, we recommend connecting the SVC and the disk subsystems to the switches operating at the highest speed.

All SVC nodes in the SVC cluster are connected to the same SANs, and they present VDisks to the hosts. These VDisks are created from MDGs that are composed of MDisks presented by the disk subsystems. There must be three distinct zones in the fabric:

► SVC cluster zone: Create one zone per fabric with all of the SVC ports cabled to this fabric to allow SVC intracluster node communication.
► Host zones: Create an SVC host zone for each server that receives storage from the SVC cluster.
► Storage zone: Create one SVC storage zone for each storage subsystem that is virtualized by the SVC.
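As an illustration only, these three zone types might be defined as follows on a Brocade fabric; the zone and alias names are hypothetical (the aliases are assumed to already map to the relevant WWPNs), and other switch vendors offer equivalent commands:

   zonecreate "SVC_CLUSTER_FABA", "SVC_N1P1; SVC_N1P3; SVC_N2P1; SVC_N2P3"
   zonecreate "SVC_STG_DS4K_FABA", "SVC_N1P1; SVC_N1P3; SVC_N2P1; SVC_N2P3; DS4K_CTLA_P1"
   zonecreate "SVC_HOST_AIX1_FABA", "AIX1_FCS0; SVC_N1P1; SVC_N2P1"
   cfgadd "PROD_CFG_FABA", "SVC_CLUSTER_FABA; SVC_STG_DS4K_FABA; SVC_HOST_AIX1_FABA"
   cfgenable "PROD_CFG_FABA"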

Zoning considerations for Metro Mirror and Global Mirror

Ensure that you are familiar with the constraints for zoning a switch to support the Metro Mirror and Global Mirror features.

SAN configurations that use intracluster Metro Mirror and Global Mirror relationships do not require additional switch zones.

SAN configurations that use intercluster Metro Mirror and Global Mirror relationships require the following additional switch zoning considerations:

► A cluster can be configured so that it can detect all of the nodes in all of the remote clusters. Alternatively, a cluster can be configured so that it detects only a subset of the nodes in the remote clusters.
► Use of inter-switch link (ISL) trunking in a switched fabric.
► Use of redundant fabrics.

For intercluster Metro Mirror and Global Mirror relationships, you must perform the following steps to create the additional required zones:

1. Configure your SAN so that FC traffic can be passed between the two clusters. To configure the SAN this way, you can connect the clusters to the same SAN, merge the SANs, or use routing technologies.

2. (Optional) Configure zoning to allow all of the nodes in the local fabric to communicate with all of the nodes in the remote fabric.

McData Eclipse routers: If you use McData Eclipse routers, Model 1620, only 64 port pairs are supported, regardless of the number of iFCP links that are used.

76 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


3. (Optional) As an alternative to step 2, choose a subset of nodes in the local cluster to be zoned to the nodes in the remote cluster. At a minimum, you must ensure that one whole I/O Group in the local cluster has connectivity to one whole I/O Group in the remote cluster. I/O between the nodes in each cluster is then routed to find a path that is permitted by the configured zoning.

Reducing the number of nodes that are zoned together can reduce the complexity of the intercluster zoning and might reduce the cost of the routing hardware that is required for large installations. However, reducing the number of nodes also means that I/O must make extra hops between the nodes in the system, which increases the load on the intermediate nodes and can increase the performance impact, in particular, for Metro Mirror.

4. (Optional) Modify the zoning so that the hosts that are visible to the local cluster can recognize the remote cluster. This capability allows a host to examine data in both the local and remote clusters.

5. Verify that cluster A cannot recognize any of the back-end storage that is owned by cluster B. A cluster must not be able to access logical units (LUs) that a host or another cluster can also access.

Figure 3-8 shows the SVC zoning topology.

Figure 3-8 SVC zoning topology

Figure 3-9 on page 78 shows an example of SVC, host, and storage subsystem connections.



Figure 3-9 Example of SVC, host, and storage subsystem connections

You must also apply the following guidelines:

► Hosts are not permitted to operate on the disk subsystem LUNs directly if the LUNs are assigned to the SVC. All data transfer happens through the SVC nodes. Under certain circumstances, a disk subsystem can present LUNs to both the SVC (as MDisks, which it then virtualizes to hosts) and to other hosts in the SAN.
► Mixed speeds are permitted within the fabric, but not for intracluster communication. You can use lower speeds to extend the distance.

► Uniform SVC port speed for 2145-4F2 and 2145-8F2 nodes: The optical fiber connections between the FC switches and all 2145-4F2 or 2145-8F2 SVC nodes in a cluster must run at one speed, either 1 Gbps or 2 Gbps. Running the node-to-switch connections of 2145-4F2 or 2145-8F2 nodes at mixed speeds in a single cluster is an unsupported configuration (and is impossible to configure anyway). This rule does not apply to 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8 nodes, because the FC ports on these nodes auto-negotiate their speeds independently of one another and can run at 2 Gbps, 4 Gbps, or 8 Gbps.

► Each of the local or remote fabrics must not contain more than three ISL hops within each fabric. An operation with more ISLs is unsupported. When a local fabric and a remote fabric are connected together for remote copy purposes, there must be only one ISL hop between the two SVC clusters. Therefore, certain ISLs can be used in a cascaded switch link between the local and remote clusters, provided that the local and remote cluster internal ISL counts are fewer than three. This approach gives a maximum of seven ISL hops in an SVC environment with both local and remote fabrics.
► The switch configuration in an SVC fabric must comply with the switch manufacturer's configuration rules, which can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN. Operation outside of the switch manufacturer's rules is not supported.

78 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


► The <strong>SAN</strong> contains only supported switches; operation with o<strong>the</strong>r switches is unsupported.<br />

► Host bus adapters (HBAs) in dissimilar hosts or dissimilar HBAs in <strong>the</strong> same host need to<br />

be in separate zones. For example, if you have AIX and Microsoft hosts, <strong>the</strong>y need to be in<br />

separate zones. Here, “dissimilar” means that <strong>the</strong> hosts are running separate operating<br />

systems or are using separate hardware platforms. Therefore, various levels of <strong>the</strong> same<br />

operating system are regarded as similar. This requirement is a <strong>SAN</strong> interoperability issue<br />

ra<strong>the</strong>r than an SVC requirement.<br />

► We recommend that <strong>the</strong> host zones contain only one initiator (HBA) each, and as many<br />

SVC node ports as you need, depending on <strong>the</strong> high availability and performance that you<br />

want to have from your configuration.<br />

Note: In SVC Version 3.1 and later, the command svcinfo lsfabric generates a report that displays the connectivity between nodes and other controllers and hosts. This report is particularly helpful in diagnosing SAN problems.
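For example, the report can be produced for the whole fabric or narrowed with optional filters; the colon delimiter and the host name shown here are illustrative, and the available filter parameters can vary by code level:

   svcinfo lsfabric -delim :
   svcinfo lsfabric -host AIX1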

Zoning examples

Figure 3-10 shows an SVC cluster zoning example.

Figure 3-10 SVC cluster zoning example

Figure 3-11 on page 80 shows a storage subsystem zoning example.



Figure 3-11 Storage subsystem zoning example

Figure 3-12 shows a host zoning example.

Figure 3-12 Host zoning example

80 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


3.3.3 iSCSI IP addressing plan

SVC 5.1 supports host access via iSCSI (as an alternative to FC), and the following considerations apply:

► SVC uses the built-in Ethernet ports for iSCSI traffic.
► All node types that can run SVC 5.1 can use the iSCSI feature.
► SVC supports the Challenge Handshake Authentication Protocol (CHAP) authentication method for iSCSI.
► iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This design reduces the need for multipathing support in the iSCSI host.
► iSCSI IP addresses can be configured for one or more nodes.
► iSCSI Simple Name Server (iSNS) addresses can be configured in the SVC.
► The iSCSI qualified name (IQN) for an SVC node is iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the cluster name and the node name, it is important not to change these names after iSCSI is deployed.
► Each node can be given an iSCSI alias, as an alternative to the IQN.
► The IQN of the host is added to an SVC host object in the same way that you add FC WWPNs.
► Host objects can have both WWPNs and IQNs.
► Standard iSCSI host connection procedures can be used to discover and configure the SVC as an iSCSI target.
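As a minimal sketch, the following commands assign an iSCSI address to Ethernet port 1 of a node, define an iSCSI-attached host by its IQN, and set a CHAP secret for that host. All names, addresses, the IQN, and the secret are hypothetical; verify the parameters, in particular the CHAP option, against your code level:

   svctask cfgportip -node node1 -ip 10.11.10.31 -mask 255.255.255.0 -gw 10.11.10.1 1
   svctask mkhost -name LINUX1 -iscsiname iqn.1994-05.com.redhat:linux1
   svctask chhost -chapsecret mysecret LINUX1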

Next, we show several ways that SVC 5.1 can be configured.

Figure 3-13 shows the use of IPv4 management and iSCSI addresses in the same subnet.

Figure 3-13 Use of IPv4 addresses

You can set up the equivalent configuration with only IPv6 addresses.



Figure 3-14 shows the use of IPv4 management and iSCSI addresses in two separate subnets.

Figure 3-14 IPv4 address plan with two subnets

Figure 3-15 shows the use of redundant networks.

Figure 3-15 Redundant networks

Figure 3-16 on page 83 shows the use of a redundant network and a third subnet for management.

82 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Figure 3-16 Redundant network with third subnet for management

Figure 3-17 shows the use of a redundant network for both iSCSI data and management.

Figure 3-17 Redundant network for iSCSI and management

Be aware of these considerations:
► All of the examples are valid using IPv4 and IPv6 addresses.
► It is valid to use IPv4 addresses on one port and IPv6 addresses on the other port.
► It is valid to have separate subnet configurations for IPv4 and IPv6 addresses.



3.3.4 Back-end storage subsystem configuration

Back-end storage subsystem configuration planning must be applied to all of the storage that will supply disk space to an SVC cluster. See the following Web site for the currently supported storage subsystems:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

Apply <strong>the</strong> following general guidelines for back-end storage subsystem configuration<br />

planning:<br />

► In <strong>the</strong> <strong>SAN</strong>, disk subsystems that are used by <strong>the</strong> SVC cluster are always connected to<br />

<strong>SAN</strong> switches and nothing else.<br />

► O<strong>the</strong>r disk subsystem connections out of <strong>the</strong> <strong>SAN</strong> are possible.<br />

► Multiple connections are allowed from <strong>the</strong> redundant controllers in <strong>the</strong> disk subsystem to<br />

improve data bandwidth performance. It is not mandatory to have a connection from each<br />

redundant controller in <strong>the</strong> disk subsystem to each counterpart <strong>SAN</strong>, but it is<br />

recommended. Therefore, controller A in <strong>the</strong> DS4000 can be connected to <strong>SAN</strong> A only, or<br />

to <strong>SAN</strong> A and <strong>SAN</strong> B, and controller B in <strong>the</strong> DS4000 can be connected to <strong>SAN</strong> B only, or<br />

to <strong>SAN</strong> B and <strong>SAN</strong> A.<br />

► Split controller configurations are supported with certain rules and configuration<br />

guidelines. See <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Planning Guide, GA32-0551,<br />

for more information.<br />

► All SVC nodes in an SVC cluster must be able to see <strong>the</strong> same set of disk subsystem<br />

ports on each disk subsystem controller. Operation in a mode where two nodes see a<br />

separate set of ports on <strong>the</strong> same controller becomes degraded. This degradation can<br />

occur if inappropriate zoning was applied to <strong>the</strong> fabric. It can also occur if inappropriate<br />

LUN masking is used. This guideline has important implications for a disk subsystem,<br />

such as DS3000, DS4000, or DS5000, which imposes exclusivity rules on which HBA<br />

worldwide names (WWNs) a storage partition can be mapped to.<br />

In general, configure disk subsystems as though there is no SVC; however, we recommend the following specific guidelines:
► Disk drives:
– Be careful with large disk drives so that you do not have too few spindles to handle the load.
– RAID-5 is suggested, but RAID-10 is viable and useful.
► Array sizes:
– 8+P or 4+P is recommended for the DS4000 and DS5000 families, if possible.
– Use a DS4000 segment size of 128 KB or larger to help the sequential performance.
– Avoid Serial Advanced Technology Attachment (SATA) disks unless running SVC 4.2.1.x or later.
– Upgrade to EXP810 drawers, if possible.
– Create LUN sizes that are equal to the RAID array/rank, if the size does not exceed 2 TB.
– Create a minimum of one LUN per FC port on a disk controller zoned with the SVC.
– When adding more disks to a subsystem, consider adding the new MDisks to existing MDGs rather than creating additional small MDGs.
– Use a Perl script to restripe VDisk extents evenly across all MDisks in the MDG. Go to http://www.ibm.com/alphaworks and search for "svctools".

84 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


► Maximum of 64 worldwide node names (WWNNs):
– EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port; each WWNN appears as a separate controller to the SVC.
– Upgrade to SVC 4.2.1 or later so that you can map LUNs through up to 16 FC ports, which results in 16 WWNNs/WWPNs used out of the maximum of 64.
– IBM, EMC CLARiiON, and HP use one WWNN per subsystem; each WWNN appears as a single controller with multiple ports/WWPNs, for a maximum of 16 ports/WWPNs per WWNN, using one out of the maximum of 64.

► DS8000 using four or eight 4-port HA cards:
– Use ports 1 and 3 or ports 2 and 4 on each card.
– This setup provides 8 or 16 ports for SVC use.
– Use a minimum of 8 ports up to 40 ranks.
– Use 16 ports, which is the maximum, for 40 or more ranks.
► Upgrade to SVC 4.2.1.9 or later to drive more workload to the DS8000. This upgrade provides an increased queue depth for the DS4000, DS5000, DS6000, DS8000, or EMC DMX.

► DS4000/DS5000 and EMC CLARiiON/CX:
– Both systems have the preferred controller architecture, and the SVC supports this configuration.
– Use a minimum of 4 ports, and preferably 8 or more ports, up to a maximum of 16 ports, because more ports equate to more concurrent I/O that is driven by the SVC.
– Mapping controller A ports to fabric A and controller B ports to fabric B is supported, as is cross-connecting ports to both fabrics from both controllers. The latter approach is preferred to avoid AVT/Trespass occurring if a fabric or all paths to a fabric fail.
– Upgrade to SVC 4.3.1 or later to get the SVC queue depth change for CX models, because it drives more I/O per port per MDisk.
► DS3400:
– Use a minimum of 4 ports.
– Upgrade to SVC 4.3.x or later for better resiliency if the DS3400 controllers reset.

► XIV® requirements and restrictions:
– The SVC cluster must be running Version 4.3.0.1 or later to support the XIV.
– The use of certain XIV functions on LUNs presented to the SVC is not supported. You cannot perform snaps, thin provisioning, synchronous replication, or LUN expansion on XIV MDisks.
– A maximum of 511 LUNs from one XIV system can be mapped to an SVC cluster.
► Full 15 module XIV recommendations (79 TB usable):
– Use two interface host ports from each of the six interface modules.
– Use ports 1 and 3 from each interface module, and zone these 12 ports with all SVC node ports.
– Create 48 LUNs of equal size, each of which is a multiple of 17 GB; each LUN will be approximately 1,632 GB if you use the entire full frame XIV with the SVC.
– Map the LUNs to the SVC as 48 MDisks, and add all of them to the one XIV MDG so that the SVC will drive the I/O to four MDisks/LUNs for each of the 12 XIV FC ports. This design provides a good queue depth on the SVC to drive the XIV adequately.



► Six module XIV recommendations (27 TB usable):
– Use two interface host ports from each of the two active interface modules.
– Use ports 1 and 3 from interface modules 4 and 5 (interface module 6 is inactive), and zone these four ports with all SVC node ports.
– Create 16 LUNs of equal size, each of which is a multiple of 17 GB; each LUN will be approximately 1,632 GB if you use the entire XIV with the SVC.
– Map the LUNs to the SVC as 16 MDisks, and add all of them to the one XIV MDG so that the SVC will drive I/O to four MDisks/LUNs for each of the four XIV FC ports. This design provides a good queue depth on the SVC to drive the XIV adequately.
► Nine module XIV recommendations (43 TB usable):
– Use two interface host ports from each of the four active interface modules.
– Use ports 1 and 3 from interface modules 4, 5, 7, and 8 (interface modules 6 and 9 are inactive), and zone these eight ports with all of the SVC node ports.
– Create 26 LUNs of equal size, each of which is a multiple of 17 GB; each LUN will be approximately 1,632 GB if you use the entire XIV with the SVC.
– Map the LUNs to the SVC as 26 MDisks, and add all of them to the one XIV MDG so that the SVC will drive I/O to three MDisks/LUNs on each of six ports and four MDisks/LUNs on the other two XIV FC ports. This design provides a good queue depth on the SVC to drive the XIV adequately.
► Configure XIV host connectivity for the SVC cluster:
– Create one host definition on the XIV, and include all SVC node WWPNs.
– You can create clustered host definitions (one per I/O Group), but the preceding method is easier.
– Map all LUNs to all SVC node WWPNs.

3.3.5 SVC cluster configuration

To ensure high availability in SVC installations, consider the following guidelines when you design a SAN with the SVC:

► The 2145-4F2 and 2145-8F2 SVC nodes contain two HBAs, each of which has two FC ports. If an HBA fails, this configuration remains valid, and the node operates in degraded mode. If an HBA is physically removed from an SVC node, the configuration is unsupported. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 models have one HBA with four ports.
► All nodes in a cluster must be in the same LAN segment, because the nodes in the cluster must be able to assume the same cluster, or service, IP address. Make sure that the network configuration allows any of the nodes to use these IP addresses. Note that if you plan to use the second Ethernet port on each node, it is possible to have two LAN segments. However, port 1 of every node must be in one LAN segment, and port 2 of every node must be in the other LAN segment.
► To maintain application uptime in the unlikely event of an individual SVC node failing, SVC nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the configuration, the remaining node operates in a degraded mode, but it is still a valid configuration. The remaining node operates in write-through mode, meaning that the data is written directly to the disk subsystem (the cache is disabled for writes).
► The uninterruptible power supply unit must be in the same rack as the node to which it provides power, and each uninterruptible power supply unit can have only one node connected.

86 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


► The FC SAN connections between the SVC nodes and the switches are optical fiber. These connections can run at 2 Gbps, 4 Gbps, or 8 Gbps, depending on your SVC and switch hardware. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 SVC nodes auto-negotiate the connection speed with the switch. The 2145-4F2 and 2145-8F2 nodes are capable of a maximum of 2 Gbps, which is determined by the cluster speed.
► The SVC node ports must be connected to the FC fabric only. Direct connections between the SVC and the host, or the disk subsystem, are unsupported.
► Two SVC clusters cannot share the same LUNs in a subsystem. Sharing the same disk subsystem LUNs can result in data loss; if the same MDisk becomes visible on two separate SVC clusters, this error can cause data corruption.
► The two nodes within an I/O Group can be co-located (within the same set of racks), or they can be located in separate racks and separate rooms to deploy a simple business continuity solution.

If a split node cluster (split I/O Group) solution is implemented, observe the maximum allowed distance of 100 m (about 328 ft.) between the nodes in an I/O Group. Otherwise, you will require a SCORE request in order to be supported for longer distances. Ask your IBM service representative for more detailed information about the SCORE process.

If a split node cluster (split I/O Group) solution is implemented, we recommend using a business continuity solution for the storage subsystem using the VDisk Mirroring option. Note the SVC cluster quorum disk location, as shown in Figure 3-18 on page 88, where the quorum disk is located separately in a third site or room.
► The SVC uses three MDisks as quorum disks for the cluster. For redundancy purposes, we recommend that you locate the three MDisks in three separate storage subsystems, if possible.

If a split node cluster (split I/O Group) solution is implemented, two of the three quorum disks can be co-located in the same rooms where the SVC nodes are located, but the active quorum disk (as displayed in the lsquorum output) must be in a separate room.

Figure 3-18 on page 88 shows a schematic split I/O Group solution.



Figure 3-18 Split I/O Group solution

3.3.6 Managed Disk Group configuration

The Managed Disk Group (MDG) is at the center of the many-to-many relationship between the MDisks and the VDisks. It acts as a container into which managed disks contribute chunks of disk blocks, which are known as extents, and from which VDisks consume these extents of storage.

MDisks in the SVC are LUNs that are assigned from the underlying disk subsystems to the SVC, and they can be either managed or unmanaged. A managed MDisk is an MDisk that is assigned to an MDG:
► MDGs are collections of MDisks. An MDisk is contained within exactly one MDG.
► An SVC supports up to 128 MDGs.
► There is no limit to the number of VDisks that can be in an MDG other than the limit per cluster.
► MDGs are also collections of VDisks. Under normal circumstances, a VDisk is associated with exactly one MDG. The exception to this rule is when a VDisk is migrated, or mirrored, between MDGs.

The SVC supports extent sizes of 16, 32, 64, 128, 256, 512, 1,024, and 2,048 MB. The extent size is a property of the MDG that is set when the MDG is created. It cannot be changed, and all MDisks that are contained in the MDG have the same extent size, so all VDisks that are associated with the MDG must also have the same extent size.

Table 3-1 on page 89 shows all of the extent sizes that are available in an SVC.

88 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Table 3-1 Extent size and maximum cluster capacities

Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1,024 MB       4 PB
2,048 MB       8 PB
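For example, a minimal sketch of creating an MDG with a 256 MB extent size from three unmanaged MDisks follows; the MDG and MDisk names are hypothetical:

   svctask mkmdiskgrp -name MDG1_DS4K -ext 256 -mdisk mdisk0:mdisk1:mdisk2

Remember that the -ext value cannot be changed later, so choose the extent size before populating the cluster.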

There are several additional MDG considerations:
► Maximum cluster capacity is related to the extent size:
– A 16 MB extent = 64 TB, and the capacity doubles for each increment in extent size; for example, 32 MB = 128 TB. We strongly recommend a minimum of 128 MB or 256 MB. The Storage Performance Council (SPC) benchmarks used a 256 MB extent.
– Pick one extent size and use that size for all MDGs.
– You cannot migrate VDisks between MDGs with different extent sizes.
► MDG reliability, availability, and serviceability (RAS) considerations:
– It might make sense to create multiple MDGs if you ensure that a host only gets its VDisks built from one of the MDGs. If the MDG goes offline, it impacts only a subset of all of the hosts using the SVC; however, creating multiple MDGs can cause a high number of MDGs, approaching the SVC limits.
– If you do not isolate hosts to MDGs, create one large MDG. Creating one large MDG assumes that the physical disks are all the same size, speed, and RAID level.
– The MDG goes offline if an MDisk is unavailable, even if the MDisk has no data on it. Do not put MDisks into an MDG until they are needed.
– Create at least one separate MDG for all of the image mode VDisks.
– Make sure that the LUNs that are given to the SVC have any host persistent reserves removed.
► MDG performance considerations:
It might make sense to create multiple MDGs if you are attempting to isolate workloads to separate disk spindles. MDGs with too few MDisks cause an MDisk overload, so it is better to have more spindle counts in an MDG to meet workload requirements.

► The MDG and SVC cache relationship:
SVC 4.2.1 first introduced cache partitioning to the SVC code base. The decision was made to provide flexible partitioning, rather than hard-coding a specific number of partitions. This flexibility is provided on an MDG boundary; that is, the cache automatically partitions the available resources on a per-MDG basis. Most users create a single MDG from the LUNs provided by a single disk controller, or a subset of a controller/collection of the same controllers, based on the characteristics of the LUNs themselves, for example, RAID-5 versus RAID-10, or 10,000 revolutions per minute (RPM) versus 15,000 RPM. The overall strategy is provided to protect against individual controllers overloading or faulting. If many controllers (or, in this case, MDGs) are overloaded, the overreached controllers can still suffer.

Table 3-2 shows the limit of the write cache data.

Table 3-2 Limit of the cache data

Number of MDGs    Upper limit
1                 100%
2                 66%
3                 40%
4                 30%
5 or more         25%

Think of the rule as no single partition can occupy more than its upper limit of cache capacity with write data. These limits are upper limits, and they are the points at which the SVC cache starts to limit incoming I/O rates for VDisks created from the MDG. If a particular partition reaches this upper limit, the net result is the same as a global cache resource that is full; that is, the host writes are serviced on a one-out-one-in basis as the cache destages writes to the back-end disks. However, only writes targeted at the full partition are limited; all I/O destined for other (non-limited) MDGs continues as normal. Read I/O requests for the limited partition also continue as normal. However, because the SVC is destaging write data at a rate that is obviously greater than the controller can actually sustain (otherwise, the partition does not reach the upper limit), reads are likely to be serviced equally slowly.

3.3.7 Virtual disk configuration

An individual virtual disk (VDisk) is a member of one MDG and one I/O Group. When you want to create a VDisk, you first have to know the purpose for which this VDisk will be created. Based on that information, you can decide which MDG to select to fit your requirements in terms of cost, performance, and availability:
► The MDG defines which MDisks provided by the disk subsystem make up the VDisk.
► The I/O Group (two nodes make an I/O Group) defines which SVC nodes provide I/O access to the VDisk.

Note: There is no fixed relationship between I/O Groups and MDGs.

Therefore, you can define the VDisks using the following considerations:
► Optimize the performance between the hosts and the SVC by distributing the VDisks between the various nodes of the SVC cluster, which means spreading the load equally across the nodes in the SVC cluster.
► Get the level of performance, reliability, and capacity that you require by using the MDG that corresponds to your needs (you can access any MDG from any node); that is, choose the MDG that fulfils the demands for your VDisk with respect to performance, reliability, and capacity.

90 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


► I/O Group considerations:
– When you create a VDisk, it is associated with one node of an I/O Group. By default, every time that you create a new VDisk, it is associated with the next node using a round-robin algorithm. You can specify a preferred access node, which is the node through which you send I/O to the VDisk, instead of using the round-robin algorithm. A VDisk is defined for an I/O Group.
– Even if you have eight paths for each VDisk, all I/O traffic flows toward only one node (the preferred node). Therefore, only four paths are really used by the IBM Subsystem Device Driver (SDD). The other four paths are used only in the case of a failure of the preferred node or when a concurrent code upgrade is running.

► Creating image mode VDisks:
– Use image mode VDisks when an MDisk already has data on it from a non-virtualized disk subsystem. When an image mode VDisk is created, it directly corresponds to the MDisk from which it is created. Therefore, VDisk logical block address (LBA) x = MDisk LBA x. The capacity of an image mode VDisk defaults to the capacity of the supplied MDisk.
– When you create an image mode disk, the MDisk must have a mode of unmanaged and therefore must not belong to any MDG. A capacity of 0 is not allowed. Image mode VDisks can be created in sizes with a minimum granularity of 512 bytes, and they must be at least one block (512 bytes) in size.
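As a sketch only, the following command imports an existing LUN as an image mode VDisk; the MDG, MDisk, node, and VDisk names are hypothetical:

   svctask mkvdisk -mdiskgrp IMAGE_MDG -iogrp 0 -node node1 -vtype image -mdisk mdisk10 -name legacy_vd01

Because no -size is given, the VDisk capacity defaults to the capacity of the supplied MDisk.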

► Creating managed mode VDisks with sequential or striped policy:
When creating a managed mode VDisk with a sequential or striped policy, you must use a number of MDisks containing free extents of a total size that is equal to or greater than the size of the VDisk that you want to create. There might be sufficient extents available on the MDisk, but there might not be a contiguous block large enough to satisfy the request.

► Space-Efficient VDisk considerations:
– When creating the space-efficient volume, it is necessary to understand the utilization patterns of the applications or group users accessing this volume. Items, such as the actual size of the data, the rate of creation of new data, and the modification or deletion of existing data, all need to be taken into consideration.
– There are two operating modes for Space-Efficient VDisks. Autoexpand VDisks allocate storage from an MDG on demand with minimal user intervention required, but a misbehaving application can cause a VDisk to expand until it has consumed all of the storage in an MDG. Non-autoexpand VDisks have a fixed amount of storage assigned; in this case, the user must monitor the VDisk and assign additional capacity if or when required. A misbehaving application can only cause the VDisk that it is using to fill up.
– Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a disk goes offline, either through a lack of available physical storage on autoexpand, or because a disk marked as non-expand has not been expanded, there is a danger of data being left in the cache until storage is made available. This situation is not a data integrity or data loss issue, but you must not rely on the SVC cache as a backup storage mechanism.

Recommendations:
► We highly recommend that you set a warning level on the used capacity so that it provides adequate time for the provisioning of more physical storage.
► Warnings must not be ignored by an administrator.
► Use the autoexpand feature of the Space-Efficient VDisks.



– The grain size allocation unit for the real capacity in the VDisk can be set to 32 KB, 64 KB, 128 KB, or 256 KB. A smaller grain size utilizes space more effectively, but it results in a larger directory map, which can reduce performance.
– Space-Efficient VDisks require more I/Os because of directory accesses. For truly random workloads with 70% reads and 30% writes, a Space-Efficient VDisk requires approximately one directory I/O for every user I/O, so performance can be up to 50% less than that of a normal VDisk.
– The directory is two-way write-back-cached (just like the SVC fastwrite cache), so certain applications will perform better.
– Space-Efficient VDisks require more CPU processing, so the performance per I/O Group will be lower.
– Starting with SVC 5.1, Space-Efficient VDisks offer zero detect. This feature enables clients to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk (SEV) using VDisk Mirroring.
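A hedged sketch of creating a Space-Efficient VDisk that reflects these recommendations follows; it presents 100 GB to the host, allocates 20% real capacity with autoexpand, uses a 32 KB grain, and warns at 80% used capacity (all names and values are illustrative):

   svctask mkvdisk -mdiskgrp MDG1_DS4K -iogrp 0 -vtype striped -size 100 -unit gb -rsize 20% -autoexpand -grainsize 32 -warning 80% -name appsrv_sev01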

► VDisk Mirroring: If you are planning to use the VDisk Mirroring option, you must apply the following guidelines:
– Create or identify two separate MDGs to allocate space for your mirrored VDisk.
– If possible, use MDGs with MDisks that share the same characteristics; otherwise, the VDisk performance can be affected by the poorer performing MDisk.

3.3.8 Host mapping (LUN masking)

For the host and application servers, the following guidelines apply:
► Each SVC node presents a VDisk to the SAN through four paths. Because two nodes are used in normal operations to provide redundant paths to the same storage, a host with two HBAs can see eight paths to each LUN that is presented by the SVC. We suggest using zoning to limit the pathing from a minimum of two paths to the maximum available of eight paths, depending on the kind of high availability and performance that you want to have in your configuration.
your configuration.<br />

We recommend using zoning to limit <strong>the</strong> pathing to four paths. The hosts must run a<br />

multipathing device driver to resolve this back to a single device. The multipathing driver<br />

supported and delivered by SVC is <strong>the</strong> <strong>IBM</strong> Subsystem Device Driver (SDD). Native<br />

multipath I/O (MPIO) drivers on selected hosts are supported. For operating system<br />

specific information about MPIO support, see this Web site:<br />

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html<br />

► The number of paths to a VDisk from a host to the nodes in the I/O Group that owns the VDisk must not exceed eight, even if eight is not the maximum number of paths supported by the multipath driver (SDD supports up to 32). To restrict the number of paths to a host VDisk, the fabrics must be zoned so that each host FC port is zoned with one port from each SVC node in the I/O Group that owns the VDisk.

VDisk paths: The recommended number of VDisk paths is four.

► If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports to maximize high availability and performance.
► In order to configure more than 256 hosts, you need to configure the host to I/O Group (iogrp) mappings on the SVC. Each iogrp can contain a maximum of 256 hosts, so it is possible to create 1,024 host objects on an eight-node SVC cluster. VDisks can only be mapped to a host that is associated with the I/O Group to which the VDisk belongs.

92 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


► Port masking: You can use a port mask to control the node target ports that a host can access, which satisfies two requirements:
– As part of a security policy, to limit the set of WWPNs that are able to obtain access to any VDisks through a given SVC port
– As part of a scheme to limit the number of logins with mapped VDisks visible to a host multipathing driver (such as SDD) and thus limit the number of host objects configured without resorting to switch zoning
► The port mask is an optional parameter of the svctask mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled).
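For example, a sketch of creating a host object that can log in only through SVC node ports 1 and 2 follows; the host name and WWPN are hypothetical:

   svctask mkhost -name AIX1 -hbawwpn 10000000C912E5D2 -mask 0011

The mask can later be widened with svctask chhost -mask 1111 AIX1 if all four ports are required.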

► The SVC supports connection to the Cisco MDS family and the Brocade family. See the following Web site for the latest support information:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

3.3.9 Advanced Copy Services

The SVC offers these Advanced Copy Services:
► FlashCopy
► Metro Mirror
► Global Mirror

Apply the following guidelines to the SVC Advanced Copy Services.

FlashCopy guidelines

Consider these FlashCopy guidelines:
► Identify each application that must have a FlashCopy function implemented for its VDisk.
► FlashCopy is a relationship between VDisks. Those VDisks can belong to separate MDGs and separate storage subsystems.
► You can use FlashCopy for backup purposes by interacting with the Tivoli Storage Manager Agent, or for cloning a particular environment.
► Define which FlashCopy type best fits your requirements: No copy, Full copy, Space-Efficient, or Incremental.
► Define which FlashCopy rate best fits your requirements in terms of performance and the time to get the FlashCopy completed. The relationship of the background copy rate value to the attempted number of grains to be split per second is shown in Table 3-3 on page 94.
► Define the grain size that you want to use. Larger grain sizes can cause a longer FlashCopy elapsed time and higher space usage in the FlashCopy target VDisk. Smaller grain sizes can have the opposite effect. Remember that the data structure and the source data location can modify those effects. In an actual environment, check the results of your FlashCopy procedure in terms of the data copied at every run and in terms of the elapsed time, comparing them to the new SVC FlashCopy results, and eventually adapt the grains per second and the copy rate parameters to fit your environment's requirements.



Table 3-3 Grain splits per second

User percentage   Data copied per second   256 KB grain per second   64 KB grain per second
1 - 10            128 KB                   0.5                       2
11 - 20           256 KB                   1                         4
21 - 30           512 KB                   2                         8
31 - 40           1 MB                     4                         16
41 - 50           2 MB                     8                         32
51 - 60           4 MB                     16                        64
61 - 70           8 MB                     32                        128
71 - 80           16 MB                    64                        256
81 - 90           32 MB                    128                       512
91 - 100          64 MB                    256                       1,024
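For example, a minimal sketch of defining and starting a FlashCopy mapping with a background copy rate of 50 (approximately 2 MB per second, per Table 3-3) and a 64 KB grain follows; the VDisk names and the mapping ID are hypothetical:

   svctask mkfcmap -source appsrv_vd01 -target appsrv_vd01_fc -copyrate 50 -grainsize 64
   svctask startfcmap -prep fcmap0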

Metro Mirror and Global Mirror guidelines

SVC supports both intracluster and intercluster Metro Mirror and Global Mirror. From the intracluster point of view, any single cluster is a reasonable candidate for a Metro Mirror or Global Mirror operation. Intercluster operation, however, needs at least two clusters that are separated by a number of moderately high bandwidth links.

Figure 3-19 shows a schematic of Metro Mirror connections.

Figure 3-19 Metro Mirror connections

Figure 3-19 contains two redundant fabrics. Part of each fabric exists at the local cluster and at the remote cluster. There is no direct connection between the two fabrics.

94 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong><br />



Technologies for extending the distance between two SVC clusters can be broadly divided into two categories:
► FC extenders
► SAN multiprotocol routers

Due to the more complex interactions involved, IBM explicitly tests products of this class for interoperability with the SVC. The current list of supported SAN routers can be found in the supported hardware list on the SVC support Web site:
http://www.ibm.com/storage/support/2145

<strong>IBM</strong> has tested a number of FC extenders and <strong>SAN</strong> router technologies with <strong>the</strong> SVC, which<br />

must be planned, installed, and tested so that <strong>the</strong> following requirements are met:<br />

► For SVC 4.1.0.x, <strong>the</strong> round-trip latency between sites must not exceed 68 ms (34 ms one<br />

way) for FC extenders, or 20 ms (10 ms one-way) for <strong>SAN</strong> routers.<br />

► For SVC 4.1.1.x and later, <strong>the</strong> round-trip latency between sites must not exceed 80 ms<br />

(40 ms one-way). For Global Mirror, this limit allows a distance between <strong>the</strong> primary and<br />

secondary sites of up to 8,000 km (4,970.96 miles) using a planning assumption of 100<br />

km (62.13 miles) per 1 ms of round-trip link latency.<br />

► The latency of long distance links depends upon <strong>the</strong> technology that is used to implement<br />

<strong>the</strong>m. A point-to-point dark fiber-based link will typically provide a round-trip latency of<br />

1ms per 100 km (62.13 miles) or better. O<strong>the</strong>r technologies will provide longer round-trip<br />

latencies, which will affect <strong>the</strong> maximum supported distance.<br />

► The configuration must be tested with <strong>the</strong> expected peak workloads.<br />

► When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for SVC intercluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clusters.

Figure 3-20 shows the amount of heartbeat traffic, in megabits per second, that is generated by various sizes of clusters.

Figure 3-20 Amount of heartbeat traffic

► These numbers represent the total traffic between the two clusters when no I/O is taking place to mirrored VDisks. Half of the data is sent by one cluster, and half of the data is sent by the other cluster. The traffic is divided evenly over all available intercluster links; therefore, if you have two redundant links, half of this traffic is sent over each link during fault-free operation.

► The bandwidth between sites must, at the least, be sized to meet the peak workload requirements while maintaining the maximum latency specified previously. The peak workload requirement must be evaluated by considering the average write workload over a period of one minute or less, plus the required synchronization copy bandwidth. With no synchronization copies active and no write I/O for VDisks in Metro Mirror or Global Mirror relationships, the SVC protocols operate with the bandwidth indicated in Figure 3-20, but the true bandwidth required for the link can only be determined by considering the peak write bandwidth to VDisks participating in Metro Mirror or Global Mirror relationships and adding to it the peak synchronization copy bandwidth.

► If <strong>the</strong> link between <strong>the</strong> sites is configured with redundancy so that it can tolerate single<br />

failures, <strong>the</strong> link must be sized so that <strong>the</strong> bandwidth and latency statements continue to<br />

be true even during single failure conditions.<br />

► The configuration is tested to simulate <strong>the</strong> failure of <strong>the</strong> primary site (to test <strong>the</strong> recovery<br />

capabilities and procedures), including eventual failback to <strong>the</strong> primary site from <strong>the</strong><br />

secondary.<br />

► The configuration must be tested to confirm that any failover mechanisms in <strong>the</strong><br />

intercluster links interoperate satisfactorily with <strong>the</strong> SVC.<br />

► The FC extender must be treated as a normal link.<br />

► The bandwidth and latency measurements must be made by, or on behalf of <strong>the</strong> client,<br />

and are not part of <strong>the</strong> standard installation of <strong>the</strong> SVC by <strong>IBM</strong>. <strong>IBM</strong> recommends that<br />

<strong>the</strong>se measurements are made during installation and that records are kept. Testing must<br />

be repeated following any significant changes to <strong>the</strong> equipment providing <strong>the</strong> intercluster<br />

link.<br />

Global Mirror guidelines<br />

Consider <strong>the</strong>se guidelines:<br />

► When using SVC Global Mirror, all components in <strong>the</strong> <strong>SAN</strong> must be capable of sustaining<br />

<strong>the</strong> workload generated by application hosts, as well as <strong>the</strong> Global Mirror background copy<br />

workload. If these components cannot sustain the workload, Global Mirror can automatically stop your relationships to protect

your application hosts from increased response times. Therefore, it is important to<br />

configure each component correctly.<br />

► In addition, use a <strong>SAN</strong> performance monitoring tool, such as <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong><br />

Productivity Center, which will allow you to continuously monitor <strong>the</strong> <strong>SAN</strong> components for<br />

error conditions and performance problems. This tool will assist you in detecting potential

issues before <strong>the</strong>y impact your disaster recovery solution.<br />

► The long-distance link between <strong>the</strong> two clusters must be provisioned to allow for <strong>the</strong> peak<br />

application write workload to <strong>the</strong> Global Mirror source VDisks, plus <strong>the</strong> client-defined level<br />

of background copy.<br />

► The peak application write workload must ideally be determined by analyzing <strong>the</strong> SVC<br />

performance statistics.<br />

► Statistics must be ga<strong>the</strong>red over a typical application I/O workload cycle, which might be<br />

days, weeks, or months depending on <strong>the</strong> environment on which <strong>the</strong> SVC is used. These<br />

statistics must be used to find <strong>the</strong> peak write workload that <strong>the</strong> link must be able to<br />

support.<br />

► Characteristics of <strong>the</strong> link can change with use, for example, <strong>the</strong> latency might increase as<br />

<strong>the</strong> link is used to carry an increased bandwidth. The user must be aware of <strong>the</strong> link’s<br />

behavior in such situations and ensure that <strong>the</strong> link remains within <strong>the</strong> specified limits. If<br />

<strong>the</strong> characteristics are not known, testing must be performed to gain confidence of <strong>the</strong><br />

link’s suitability.<br />

► Users of Global Mirror must consider how to optimize <strong>the</strong> performance of <strong>the</strong><br />

long-distance link, which will depend upon <strong>the</strong> technology that is used to implement <strong>the</strong><br />

link. For example, when transmitting FC traffic over an IP link, it might be desirable to<br />

enable jumbo frames to improve efficiency.<br />

► Using Global Mirror and Metro Mirror between <strong>the</strong> same two clusters is supported.<br />

► It is not supported for cache-disabled VDisks to participate in a Global Mirror relationship.<br />



► The gmlinktolerance parameter of <strong>the</strong> remote copy partnership must be set to an<br />

appropriate value. The default value is 300 seconds (5 minutes), which will be appropriate<br />

for most clients.<br />

► During <strong>SAN</strong> maintenance, <strong>the</strong> user must ei<strong>the</strong>r reduce <strong>the</strong> application I/O workload for <strong>the</strong><br />

duration of <strong>the</strong> maintenance (so that <strong>the</strong> degraded <strong>SAN</strong> components are capable of <strong>the</strong><br />

new workload), disable <strong>the</strong> gmlinktolerance feature, increase <strong>the</strong> gmlinktolerance value<br />

(meaning that application hosts might see extended response times from Global Mirror<br />

VDisks), or stop <strong>the</strong> Global Mirror relationships. If <strong>the</strong> gmlinktolerance value is increased<br />

for maintenance lasting x minutes, it must only be reset to <strong>the</strong> normal value x minutes after<br />

<strong>the</strong> end of <strong>the</strong> maintenance activity. If gmlinktolerance is disabled for <strong>the</strong> duration of <strong>the</strong><br />

maintenance, it must be re-enabled after <strong>the</strong> maintenance is complete.<br />

► Global Mirror VDisks must have <strong>the</strong>ir preferred nodes evenly distributed between <strong>the</strong><br />

nodes of <strong>the</strong> clusters. Each VDisk within an I/O Group has a preferred node property that<br />

can be used to balance <strong>the</strong> I/O load between nodes in that group.<br />

Figure 3-21 shows <strong>the</strong> correct relationship between VDisks in a Metro Mirror or Global Mirror<br />

solution.<br />

Figure 3-21 Correct VDisk relationship<br />

► The capabilities of <strong>the</strong> storage controllers at <strong>the</strong> secondary cluster must be provisioned to<br />

allow for <strong>the</strong> peak application workload to <strong>the</strong> Global Mirror VDisks, plus <strong>the</strong> client-defined<br />

level of background copy, plus any other I/O being performed at the secondary site. Otherwise, the performance of applications at the primary cluster can be limited by the performance of the back-end storage controllers at the secondary cluster, which reduces the amount of I/O that applications can perform to Global Mirror VDisks.

► We do not recommend using SATA for Metro Mirror or Global Mirror secondary VDisks without a complete review. Be careful when using a slower disk subsystem for the secondary

VDisks for high performance primary VDisks, because SVC cache might not be able to<br />

buffer all <strong>the</strong> writes, and flushing cache writes to SATA might slow I/O at <strong>the</strong> production<br />

site.<br />

► Global Mirror VDisks at <strong>the</strong> secondary cluster must be in dedicated MDisk groups (which<br />

contain no non-Global Mirror VDisks).<br />



► <strong>Storage</strong> controllers must be configured to support <strong>the</strong> Global Mirror workload that is<br />

required of <strong>the</strong>m. Ei<strong>the</strong>r dedicate storage controllers to only Global Mirror VDisks,<br />

configure <strong>the</strong> controller to guarantee sufficient quality of service for <strong>the</strong> disks being used<br />

by Global Mirror, or ensure that physical disks are not shared between Global Mirror<br />

VDisks and o<strong>the</strong>r I/O (for example, by not splitting an individual RAID array).<br />

► MDisks within a Global Mirror MDisk group must be similar in <strong>the</strong>ir characteristics (for<br />

example, RAID level, physical disk count, and disk speed). This requirement is true of all<br />

MDisk groups, but it is particularly important to maintain performance when using Global<br />

Mirror.<br />

► When a consistent relationship is stopped, for example, by a persistent I/O error on <strong>the</strong><br />

intercluster link, <strong>the</strong> relationship enters <strong>the</strong> consistent_stopped state. I/O at <strong>the</strong> primary<br />

site continues, but <strong>the</strong> updates are not mirrored to <strong>the</strong> secondary site. Restarting <strong>the</strong><br />

relationship will begin <strong>the</strong> process of synchronizing new data to <strong>the</strong> secondary disk. While<br />

this synchronization is in progress, <strong>the</strong> relationship will be in <strong>the</strong> inconsistent_copying<br />

state. Therefore, <strong>the</strong> Global Mirror secondary VDisk will not be in a usable state until <strong>the</strong><br />

copy has completed and <strong>the</strong> relationship has returned to a Consistent state. Therefore, it<br />

is highly advisable to create a FlashCopy of <strong>the</strong> secondary VDisk before restarting <strong>the</strong><br />

relationship. When started, <strong>the</strong> FlashCopy will provide a consistent copy of <strong>the</strong> data, even<br />

while <strong>the</strong> Global Mirror relationship is copying. If <strong>the</strong> Global Mirror relationship does not<br />

reach <strong>the</strong> Synchronized state (if, for example, <strong>the</strong> intercluster link experiences fur<strong>the</strong>r<br />

persistent I/O errors), <strong>the</strong> FlashCopy target can be used at <strong>the</strong> secondary site for disaster<br />

recovery purposes.<br />

► If you are planning to use an FCIP intercluster link, it is extremely important to design and<br />

size <strong>the</strong> pipe correctly.<br />

Example 3-2 shows a best-guess bandwidth sizing formula.<br />

Example 3-2 WAN link calculation example<br />

Amount of write data within 24 hours, multiplied by 4 to allow for peaks,
translated into MB/s, determines the WAN link that is needed.

Example:
250 GB per day
250 GB * 4 = 1 TB
24 hours * 3,600 sec/hr = 86,400 sec
1,000,000,000,000 bytes / 86,400 sec = approximately 12 MB/s (about 96 Mbps)
Therefore, an OC3 or higher is needed (155 Mbps or higher)
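
The same rule of thumb can be scripted. The following minimal sketch implements the best-guess calculation of Example 3-2; the daily write volume and the factor of 4 for peaks are assumptions that you must adjust to your measured workload.

Example: WAN link sizing calculation (Python sketch)

def wan_link_mb_per_sec(gb_written_per_day, peak_factor=4):
    """Return the average MB/s that the WAN link must sustain."""
    bytes_per_day = gb_written_per_day * 1e9 * peak_factor
    return bytes_per_day / (24 * 3600) / 1e6

rate = wan_link_mb_per_sec(250)                  # 250 GB/day, times 4 for peaks
print(f"{rate:.1f} MB/s = {rate * 8:.0f} Mbps")  # ~11.6 MB/s = ~93 Mbps -> OC3 (155 Mbps)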

► If compression is available on routers or WAN communication devices, smaller pipelines<br />

might be adequate. Note that workload is probably not evenly spread across 24 hours. If<br />

<strong>the</strong>re are extended periods of high data change rates, you might want to consider<br />

suspending Global Mirror during that time frame.<br />

► If <strong>the</strong> network bandwidth is too small to handle <strong>the</strong> traffic, application write I/O response<br />

times might be extended. For the SVC, Global Mirror must support short-term “Peak

Write” bandwidth requirements. Remember that SVC Global Mirror is much more sensitive<br />

to a lack of bandwidth than <strong>the</strong> DS8000.<br />

► You will need to consider <strong>the</strong> initial sync and re-sync workload, as well. The Global Mirror<br />

partnership’s background copy rate must be set to a value that is appropriate to <strong>the</strong> link<br />

and secondary back-end storage. Remember, <strong>the</strong> more bandwidth that you give to <strong>the</strong><br />

sync and re-sync operation, <strong>the</strong> less workload can be delivered by <strong>the</strong> SVC for <strong>the</strong> regular<br />

data traffic.<br />

► The Metro Mirror or Global Mirror background copy rate is predefined: the per-VDisk limit is 25 MBps, and the maximum per I/O Group is roughly 250 MBps. A rough synchronization-time sketch follows this list.



► Be careful when using Space-Efficient secondary VDisks at the disaster recovery site, because a Space-Efficient VDisk can deliver up to 50% lower performance than a normal VDisk and can affect the performance of the VDisks at the primary site.

► Do not propose Global Mirror if the data change rate will exceed the communication bandwidth or if the round-trip latency exceeds 80 to 120 ms. Round-trip latency greater than 80 ms requires SCORE/RPQ submission.
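
As a rough illustration of initial synchronization times, the following sketch estimates how long a full copy takes when it is constrained by the link and by the background copy limits quoted above (25 MBps per VDisk and roughly 250 MBps per I/O Group). The VDisk size and link rate shown are hypothetical.

Example: Initial synchronization time estimate (Python sketch)

def sync_hours(vdisk_gb, link_mb_per_sec, per_vdisk_limit=25, per_iogrp_limit=250):
    # The effective copy rate is the slowest of the link and the copy limits.
    effective = min(link_mb_per_sec, per_vdisk_limit, per_iogrp_limit)
    return vdisk_gb * 1000 / effective / 3600  # GB -> MB, then seconds -> hours

print(f"{sync_hours(vdisk_gb=500, link_mb_per_sec=12):.1f} hours")  # ~11.6 hours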

3.3.10 <strong>SAN</strong> boot support<br />

The SVC supports <strong>SAN</strong> boot or startup for AIX, Windows 2003 Server, and o<strong>the</strong>r operating<br />

systems. <strong>SAN</strong> boot support can change from time to time, so we recommend regularly<br />

checking <strong>the</strong> following Web site:<br />

http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html<br />

3.3.11 Data migration from a non-virtualized storage subsystem<br />

Data migration is an extremely important part of an SVC implementation, so a data migration plan must be prepared carefully. You might need to migrate your data for one of these reasons:

► Redistributing workload within a cluster across <strong>the</strong> disk subsystem<br />

► Moving workload onto newly installed storage<br />

► Moving workload off old or failing storage, ahead of decommissioning it<br />

► Moving workload to rebalance a changed workload<br />

► Migrating data from an older disk subsystem to SVC-managed storage<br />

► Migrating data from one disk subsystem to ano<strong>the</strong>r disk subsystem<br />

Because there are multiple data migration methods, we suggest that you choose the data migration method that best fits your environment, your operating system platform, your type of data, and your application’s service level agreement.

We can define data migration as belonging to three groups:<br />

► Based on operating system Logical <strong>Volume</strong> Manager (LVM) or commands<br />

► Based on special data migration software<br />

► Based on <strong>the</strong> SVC data migration feature<br />

With data migration, we recommend that you apply <strong>the</strong> following guidelines:<br />

► Choose which data migration method best fits your operating system platform, your kind of<br />

data, and your service level agreement.<br />

► Check <strong>the</strong> interoperability matrix for <strong>the</strong> storage subsystem to which your data is being<br />

migrated:<br />

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html<br />

► Choose where you want to place your data after migration in terms of <strong>the</strong> MDG related to<br />

a specific storage subsystem tier.<br />

► Check if a sufficient amount of free space or extents are available in <strong>the</strong> target MDG.<br />

► Decide if your data is critical and must be protected by a VDisk Mirroring option or if it has<br />

to be replicated in a remote site for disaster recovery.<br />

► Prepare offline all of <strong>the</strong> zone and LUN masking/host mappings that you might need in<br />

order to minimize downtime during <strong>the</strong> migration.<br />

► Prepare a detailed operation plan so that you do not overlook anything at data migration<br />

time.<br />



► Execute a data backup before you start any data migration. Data backup must be part of<br />

<strong>the</strong> regular data management process.<br />

► You might want to use <strong>the</strong> SVC as a data mover to migrate data from a non-virtualized<br />

storage subsystem to ano<strong>the</strong>r non-virtualized storage subsystem. In this case, you might<br />

have to add additional checks that are related to <strong>the</strong> specific storage subsystem to which<br />

you want to migrate. Be careful using slower disk subsystems for <strong>the</strong> secondary VDisks for<br />

high performance primary VDisks, because SVC cache might not be able to buffer all <strong>the</strong><br />

writes and flushing cache writes to SATA might slow I/O at <strong>the</strong> production site.<br />

3.3.12 SVC configuration backup procedure<br />

We recommend that you save <strong>the</strong> configuration externally when changes, such as adding<br />

new nodes, disk subsystems, and so on, have been performed on <strong>the</strong> cluster. Configuration<br />

saving is a crucial part of <strong>the</strong> SVC management, and various methods can be applied to back<br />

up your SVC configuration. We suggest that you implement an automatic configuration<br />

backup by applying <strong>the</strong> configuration backup command. We describe this command for <strong>the</strong><br />

CLI and <strong>the</strong> GUI in Chapter 7, “<strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> operations using <strong>the</strong> command-line<br />

interface” on page 339 and in Chapter 8, “<strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> operations using <strong>the</strong> GUI”<br />

on page 469.<br />
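
One way to automate this task is to drive the svcconfig backup CLI command over SSH on a schedule. The following is a minimal sketch only; the cluster address, user name, and key file are placeholders, and the error handling is deliberately simple.

Example: Automating the configuration backup (Python sketch)

import subprocess

CLUSTER = "svccluster.example.com"  # placeholder: your cluster IP or host name
USER = "admin"                      # placeholder: an SVC user with an SSH key
KEY = "/root/.ssh/svc_rsa"          # placeholder: path to the private key

def backup_config():
    # 'svcconfig backup' writes svc.config.backup.xml on the cluster.
    subprocess.run(["ssh", "-i", KEY, f"{USER}@{CLUSTER}", "svcconfig backup"],
                   check=True)

if __name__ == "__main__":
    backup_config()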

3.4 Performance considerations<br />

While storage virtualization with <strong>the</strong> SVC improves flexibility and provides simpler<br />

management of a storage infrastructure, it can also provide a substantial performance<br />

advantage for a variety of workloads. The SVC’s caching capability and its ability to stripe<br />

VDisks across multiple disk arrays are <strong>the</strong> reasons why performance improvement is<br />

significant when implemented with midrange disk subsystems, because this technology is<br />

often only provided with high-end enterprise disk subsystems.<br />

Tip: Technically, almost all storage controllers provide both striping (RAID 5 or RAID 10) and a form of caching. The real advantage is the degree to which you can stripe the data,

that is, across all MDisks in a group and <strong>the</strong>refore have <strong>the</strong> maximum number of spindles<br />

active at one time. The caching is secondary. The SVC provides additional caching to what<br />

midrange controllers provide (usually a couple of GB), whereas enterprise systems have<br />

much larger caches.<br />

To ensure <strong>the</strong> desired performance and capacity of your storage infrastructure, we<br />

recommend that you do a performance and capacity analysis to reveal <strong>the</strong> business<br />

requirements of your storage environment. When this is done, you can use <strong>the</strong> guidelines in<br />

this chapter to design a solution that meets <strong>the</strong> business requirements.<br />

When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby the limiting factor of a given system. At the same time, you must consider the workload for which a component is the limiting factor, because the same component might not be the limiting factor for other workloads.

When designing a storage infrastructure using SVC, or implementing SVC in an existing<br />

storage infrastructure, you must <strong>the</strong>refore take into consideration <strong>the</strong> performance and<br />

capacity of <strong>the</strong> <strong>SAN</strong>, <strong>the</strong> disk subsystems, <strong>the</strong> SVC, and <strong>the</strong> known/expected workload.<br />



3.4.1 <strong>SAN</strong><br />

The SVC now has many models: 2145-4F2, 2145-8F2, 2145-8F4, 2145-8G4, 2145-8A4, and<br />

2145-CF8. All of <strong>the</strong>m can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a<br />

performance point of view, it is better to connect <strong>the</strong> SVC to 8 Gbps switches.<br />

Correct zoning on <strong>the</strong> <strong>SAN</strong> switch will bring security and performance toge<strong>the</strong>r. We<br />

recommend that you implement a dual HBA approach at <strong>the</strong> host to access <strong>the</strong> SVC.<br />

3.4.2 Disk subsystems<br />

From a performance perspective, <strong>the</strong>re are a few guidelines in connecting to an SVC:<br />

► Connect all storage ports to the switch, and zone them to all of the SVC ports. Zone all ports on the disk back-end storage to all ports on the SVC nodes in a cluster. Also, make sure to configure the storage subsystem LUN masking settings to map all LUNs to all of the SVC WWPNs in the cluster. The SVC is designed to handle a large number of paths from the back-end storage.

► Using as many 15,000 RPM disks as possible will improve performance considerably.

► Creating one LUN per array will help in a sequential workload environment.<br />

In most cases, the SVC will be able to improve performance, especially on midrange to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk systems, for these reasons:

► The SVC has <strong>the</strong> capability to stripe across disk arrays, and it can do so across <strong>the</strong> entire<br />

set of supported physical disk resources.<br />

► The SVC has a 4 GB, 8 GB, or 24 GB cache, depending on the model (24 GB in the latest 2145-CF8), and it has an advanced caching mechanism.

The SVC’s large cache and advanced cache management algorithms also allow it to improve<br />

upon <strong>the</strong> performance of many types of underlying disk technologies. The SVC’s capability to<br />

manage, in <strong>the</strong> background, <strong>the</strong> destaging operations incurred by writes (while still supporting<br />

full data integrity) has <strong>the</strong> potential to be particularly important in achieving good database<br />

performance.<br />

Depending upon <strong>the</strong> size, age, and technology level of <strong>the</strong> disk storage system, <strong>the</strong> total<br />

cache available in <strong>the</strong> SVC can be larger, smaller, or about <strong>the</strong> same as that associated with<br />

<strong>the</strong> disk storage. Because hits to <strong>the</strong> cache can occur in ei<strong>the</strong>r <strong>the</strong> upper (SVC) or <strong>the</strong> lower<br />

(disk controller) level of <strong>the</strong> overall system, <strong>the</strong> system as a whole can take advantage of <strong>the</strong><br />

larger amount of cache wherever it is located. Thus, if <strong>the</strong> storage control level of cache has<br />

<strong>the</strong> greater capacity, expect hits to this cache to occur, in addition to hits in <strong>the</strong> SVC cache.<br />

Also, regardless of <strong>the</strong>ir relative capacities, both levels of cache will tend to play an important<br />

role in allowing sequentially organized data to flow smoothly through <strong>the</strong> system. The SVC<br />

cannot increase <strong>the</strong> throughput potential of <strong>the</strong> underlying disks in all cases. Its ability to do<br />

so depends upon both <strong>the</strong> underlying storage technology, as well as <strong>the</strong> degree to which <strong>the</strong><br />

workload exhibits “hot spots” or sensitivity to cache size or cache algorithms.<br />

<strong>IBM</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> 4.2.1 Cache Partitioning, REDP-4426, shows <strong>the</strong> SVC’s cache<br />

partitioning capability:<br />

http://www.redbooks.ibm.com/abstracts/redp4426.html?Open<br />



3.4.3 SVC<br />

The SVC cluster is scalable up to eight nodes, and <strong>the</strong> performance is almost linear when<br />

adding more nodes into an SVC cluster, until it becomes limited by o<strong>the</strong>r components in <strong>the</strong><br />

storage infrastructure. While virtualization with <strong>the</strong> SVC provides a great deal of flexibility, it<br />

does not diminish <strong>the</strong> necessity to have a <strong>SAN</strong> and disk subsystems that can deliver <strong>the</strong><br />

desired performance. Essentially, SVC performance improvements are gained by having as<br />

many MDisks as possible, <strong>the</strong>refore creating a greater level of concurrent I/O to <strong>the</strong> back end<br />

without overloading a single disk or array.<br />

Assuming that <strong>the</strong>re are no bottlenecks in <strong>the</strong> <strong>SAN</strong> or on <strong>the</strong> disk subsystem, remember that<br />

specific guidelines must be followed when you are performing <strong>the</strong>se tasks:<br />

► Creating an MDG<br />

► Creating VDisks<br />

► Connecting or configuring hosts that must receive disk space from an SVC cluster<br />

You can obtain more detailed information about performance and best practices for <strong>the</strong> SVC<br />

in <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Best Practices and Performance Guidelines, SG24-7521:<br />

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open<br />

3.4.4 Performance monitoring<br />

Performance monitoring must be an integral part of <strong>the</strong> overall IT environment. For <strong>the</strong> SVC,<br />

just as for <strong>the</strong> o<strong>the</strong>r <strong>IBM</strong> storage subsystems, <strong>the</strong> official <strong>IBM</strong> tool to collect performance<br />

statistics and supply a performance report is <strong>the</strong> Total<strong>Storage</strong>® Productivity Center.<br />

You can obtain more information about using <strong>the</strong> Total<strong>Storage</strong> Productivity Center to monitor<br />

your storage subsystem in Monitoring Your <strong>Storage</strong> Subsystems with Total<strong>Storage</strong><br />

Productivity Center, SG24-7364:<br />

http://www.redbooks.ibm.com/abstracts/sg247364.html?Open<br />

See Chapter 8, “<strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> operations using <strong>the</strong> GUI” on page 469 for detailed<br />

information about collecting performance statistics.<br />



Chapter 4. SAN Volume Controller initial configuration

In this chapter, we discuss <strong>the</strong>se topics:<br />

► Managing <strong>the</strong> cluster<br />

► <strong>System</strong> <strong>Storage</strong> Productivity Center overview<br />

► <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> (SVC) Hardware Management Console<br />

► SVC initial configuration steps<br />

► SVC ICA application upgrade<br />



4.1 Managing <strong>the</strong> cluster<br />

There are three ways to manage <strong>the</strong> SVC:<br />

► Using <strong>the</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center (SSPC)<br />

► Using an SVC Management Console<br />

► Using a PuTTY-based SVC command-line interface<br />

Figure 4-1 shows <strong>the</strong> three ways to manage an SVC cluster.<br />

Figure 4-1 SVC cluster management (figure: HMC with icat, http, and PuTTY client; SSPC with icat, http, PuTTY client, and TPC-SE; OEM desktop with http and PuTTY client)

You still have full management control of <strong>the</strong> SVC no matter which method you choose. <strong>IBM</strong><br />

<strong>System</strong> <strong>Storage</strong> Productivity Center is supplied by default when you purchase your SVC<br />

cluster.<br />

If you already have a previously installed SVC cluster in your environment, it is possible that<br />

you are using <strong>the</strong> SVC Console (Hardware Management Console (HMC)). You can still use it<br />

toge<strong>the</strong>r with <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center, but you can only log in to your SVC<br />

from one of <strong>the</strong>m at a time.<br />

If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you are using the SVC Console or IBM System Storage Productivity Center, because the SVC CLI is located on the cluster and is accessed through Secure Shell (SSH); an SSH client can be installed on any workstation.
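
For example, assuming that an SSH key pair has already been set up for the cluster, a short script such as the following sketch can verify CLI access from any workstation. The cluster address and key path are placeholders; svcinfo lscluster is the standard CLI listing command.

Example: Verifying SSH access to the SVC CLI (Python sketch)

import subprocess

result = subprocess.run(
    ["ssh", "-i", "/home/admin/.ssh/svc_rsa",      # placeholder key path
     "admin@svccluster.example.com",               # placeholder cluster address
     "svcinfo lscluster -delim :"],
    capture_output=True, text=True, check=True)
print(result.stdout)  # one colon-delimited line per cluster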

4.1.1 TCP/IP requirements for <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong><br />

To plan your installation, consider <strong>the</strong> TCP/IP address requirements of <strong>the</strong> SVC cluster and<br />

<strong>the</strong> requirements for <strong>the</strong> SVC to access o<strong>the</strong>r services. You must also plan <strong>the</strong> address<br />

allocation and <strong>the</strong> E<strong>the</strong>rnet router, gateway, and firewall configuration to provide <strong>the</strong> required<br />

access and network security.<br />

Figure 4-2 shows <strong>the</strong> TCP/IP ports and services that are used by <strong>the</strong> SVC.<br />



Figure 4-2 TCP/IP ports<br />

For more information about TCP/IP prerequisites, see Chapter 3, “Planning and<br />

configuration” on page 65 and also <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center: Introduction<br />

and Planning Guide, SC23-8824.<br />

To start an SVC initial configuration, follow the common flowchart in Figure 4-3, which covers all of the types of management.



Figure 4-3 SVC initial configuration flowchart<br />

In <strong>the</strong> next sections, we describe each of <strong>the</strong> steps shown in Figure 4-3.<br />



4.2 System Storage Productivity Center overview

The <strong>System</strong> <strong>Storage</strong> Productivity Center (SSPC) is an integrated hardware and software<br />

solution that provides a single management console for managing <strong>IBM</strong> SVC, <strong>IBM</strong> DS8000,<br />

and o<strong>the</strong>r components of your data storage infrastructure.<br />

The current release of <strong>System</strong> <strong>Storage</strong> Productivity Center consists of <strong>the</strong> following<br />

components:<br />

► <strong>IBM</strong> Tivoli <strong>Storage</strong> Productivity Center Basic Edition 4.1.1<br />

<strong>IBM</strong> Tivoli <strong>Storage</strong> Productivity Center Basic Edition 4.1.1 is preinstalled on <strong>the</strong> <strong>System</strong><br />

<strong>Storage</strong> Productivity Center server.<br />

► Tivoli <strong>Storage</strong> Productivity Center for Replication is preinstalled. An additional license<br />

is required.<br />

► <strong>IBM</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Console 5.1.0<br />

<strong>IBM</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Console 5.1.0 is preinstalled on <strong>the</strong> <strong>System</strong> <strong>Storage</strong><br />

Productivity Center server. Because this level of <strong>the</strong> console no longer requires a<br />

Common Information Model (CIM) agent to communicate with <strong>the</strong> SVC, a CIM Agent is<br />

not installed with <strong>the</strong> console. Instead, you can use <strong>the</strong> CIM Agent that is embedded in <strong>the</strong><br />

SVC hardware. To manage prior levels of <strong>the</strong> SVC, install <strong>the</strong> corresponding CIM Agent on<br />

<strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center server. PuTTY remains installed on <strong>the</strong><br />

<strong>System</strong> <strong>Storage</strong> Productivity Center and is available for key generation.<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> DS® <strong>Storage</strong> Manager 10.60 is available for you to optionally<br />

install on <strong>the</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center server, or on a remote server. The DS<br />

<strong>Storage</strong> Manager 10.60 can manage <strong>the</strong> <strong>IBM</strong> DS3000, <strong>IBM</strong> DS4000, and <strong>IBM</strong> DS5000.<br />

With DS <strong>Storage</strong> Manager 10.60, when you use Tivoli <strong>Storage</strong> Productivity Center to add<br />

and discover a DS CIM Agent, you can launch <strong>the</strong> DS <strong>Storage</strong> Manager from <strong>the</strong> topology<br />

viewer, <strong>the</strong> Configuration Utility, or <strong>the</strong> Disk Manager of <strong>the</strong> Tivoli <strong>Storage</strong> Productivity<br />

Center.<br />

► IBM Java 1.5 is preinstalled and supports DS Storage Manager 10.60. You do not need to download Java from Sun Microsystems.

► DS CIM Agent management commands. The DS CIM Agent management commands<br />

(DSCIMCLI) for 5.4.3 are preinstalled on <strong>the</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center.<br />

Figure 4-4 shows <strong>the</strong> product stack in <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center Console<br />

1.4.<br />



Figure 4-4 <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center 1.4 product stack<br />

The <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center Console replaces <strong>the</strong> functionality of <strong>the</strong> SVC<br />

Master Console (MC), which was a dedicated management console for <strong>the</strong> SVC. The Master<br />

Console is still supported and will run <strong>the</strong> latest code levels of <strong>the</strong> SVC Console software<br />

components.<br />

<strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center has all of <strong>the</strong> software components preinstalled and<br />

tested on a System x™ machine, model 2805-MC4, with Windows installed on it.

All <strong>the</strong> software components installed on <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center can be<br />

ordered and installed on hardware that meets or exceeds minimum requirements. The SVC<br />

Console software components are also available on <strong>the</strong> Web.<br />

When using <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center with <strong>the</strong> SVC, you have to install it<br />

and configure it before configuring <strong>the</strong> SVC. For a detailed guide to <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong><br />

Productivity Center, we recommend that you refer to <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity<br />

Center Software Installation and User’s Guide, SC23-8823.<br />

For information pertaining to physical connectivity to <strong>the</strong> SVC, see Chapter 3, “Planning and<br />

configuration” on page 65.<br />

4.2.1 <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center hardware<br />

The hardware used by <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center solution is <strong>the</strong> <strong>IBM</strong><br />

<strong>System</strong> <strong>Storage</strong> Productivity Center 2805-MC4. It is a 1U rack-mounted server. It has <strong>the</strong><br />

following initial configuration:<br />

► One Intel Xeon® quad-core central processing unit, with speed of 2.4 GHz, cache of<br />

8 MB, and power consumption of 80 W<br />



► 8 GB of RAM (eight 1-inch dual inline memory modules of double-data-rate 3 (DDR3) memory, with a data rate of 1,333 MHz)

► Two 146 GB hard disk drives, each with a speed of 15,000 RPM<br />

► One Broadcom 6708 E<strong>the</strong>rnet card<br />

► One CD/DVD bay with read and write capability
► Microsoft Windows Server 2008 Enterprise Edition

It is designed to perform <strong>System</strong> <strong>Storage</strong> Productivity Center functions. If you plan to upgrade<br />

<strong>System</strong> <strong>Storage</strong> Productivity Center for more functions, you can purchase <strong>the</strong> Performance<br />

Upgrade Kit to add more capacity to your hardware.<br />

4.2.2 SVC installation planning information for System Storage Productivity Center

Consider <strong>the</strong> following steps when planning <strong>the</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center<br />

installation:<br />

► Verify that <strong>the</strong> hardware and software prerequisites have been met.<br />

► Determine <strong>the</strong> location of <strong>the</strong> rack where <strong>the</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center is to be<br />

installed.<br />

► Verify that <strong>the</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center will be installed in line of sight to <strong>the</strong><br />

SVC nodes.<br />

► Verify that you have a keyboard, mouse, and monitor available to use.<br />

► Determine <strong>the</strong> cabling required.<br />

► Determine <strong>the</strong> network IP address.<br />

► Determine <strong>the</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center host name.<br />

For detailed installation guidance, see <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center:<br />

Introduction and Planning Guide, SC23-8824:<br />

https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448

Also, see the IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337:

http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597

Figure 4-5 shows <strong>the</strong> front view of <strong>the</strong> <strong>System</strong> <strong>Storage</strong> Productivity Center Console based on<br />

<strong>the</strong> 2805-MC4 hardware.<br />



Figure 4-5 <strong>System</strong> <strong>Storage</strong> Productivity Center 2805-MC4 front view<br />

Figure 4-6 shows a rear view of <strong>System</strong> <strong>Storage</strong> Productivity Center Console based on <strong>the</strong><br />

2805-MC4 hardware.<br />

Figure 4-6 <strong>System</strong> <strong>Storage</strong> Productivity Center 2805-MC4 rear view<br />

4.2.3 SVC installation planning information for <strong>the</strong> HMC<br />

Consider <strong>the</strong> following steps when planning for HMC installation:<br />

► Verify that <strong>the</strong> hardware and software prerequisites have been met.<br />

► Determine <strong>the</strong> location of <strong>the</strong> rack where <strong>the</strong> HMC is to be installed.<br />

► Verify that <strong>the</strong> HMC will be installed in line of sight to <strong>the</strong> SVC nodes.<br />

► Verify that you have a keyboard, mouse, and monitor available to use.<br />

► Determine <strong>the</strong> cabling required.<br />

► Determine <strong>the</strong> network IP address.<br />

► Determine <strong>the</strong> HMC host name.<br />

For detailed installation guidance, see <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>:<br />

Master Console Guide, SC27-2223:<br />

http://www-01.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=DA400&q1=english&q2=-Japanese&uid=ssg1S7002609&loc=en_US&cs=utf-8&lang=en



4.3 Setting up <strong>the</strong> SVC cluster<br />

This section provides step-by-step instructions for building <strong>the</strong> SVC cluster initially.<br />

4.3.1 Creating <strong>the</strong> cluster (first time) using <strong>the</strong> service panel<br />

This section provides <strong>the</strong> step-by-step instructions that are needed to create <strong>the</strong> cluster for<br />

<strong>the</strong> first time using <strong>the</strong> service panel.<br />

Use Figure 4-7 as a reference for <strong>the</strong> SVC 2145-8F2 and 2145-8F4 node model buttons to be<br />

pushed in <strong>the</strong> steps that follow. Use Figure 4-8 for <strong>the</strong> SVC Node 2145-8G4 and 2145-8A4<br />

models. And, use Figure 4-9 as a reference for <strong>the</strong> SVC Node 2145-CF8 model.<br />



Figure 4-7 SVC 8F2 node and SVC 8F4 node front and operator panel<br />



Figure 4-8 SVC 8G4 node front and operator panel<br />

Figure 4-9 shows <strong>the</strong> CF8 model front panel.<br />



Figure 4-9 CF8 front panel

4.3.2 Prerequisites

Ensure that <strong>the</strong> SVC nodes are physically installed. Prior to configuring <strong>the</strong> cluster, ensure<br />

that <strong>the</strong> following information is available:<br />

► License: The license indicates whether the client is permitted to use FlashCopy, Metro Mirror, or both. It also indicates how much capacity the client is licensed to virtualize.

► For IPv4 addressing:<br />

– Cluster IPv4 addresses: These addresses include one address for <strong>the</strong> cluster and<br />

ano<strong>the</strong>r address for <strong>the</strong> service address.<br />

– IPv4 subnet mask.<br />

– Gateway IPv4 address.<br />

► For IPv6 addressing:<br />

– Cluster IPv6 addresses: These addresses include one address for <strong>the</strong> cluster and<br />

ano<strong>the</strong>r address for <strong>the</strong> service address.<br />

– IPv6 prefix.<br />

– Gateway IPv6 address.<br />
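
Before you begin, you can sanity-check the planned addressing with a few lines of code. The following sketch uses the Python ipaddress module; all of the addresses shown are hypothetical placeholders for the values that you gathered above.

Example: Checking the cluster addressing plan (Python sketch)

import ipaddress

cluster_ip = ipaddress.ip_address("9.43.86.117")   # placeholder
service_ip = ipaddress.ip_address("9.43.86.118")   # placeholder
gateway    = ipaddress.ip_address("9.43.86.1")     # placeholder
subnet     = ipaddress.ip_network("9.43.86.0/24")  # placeholder

for name, addr in [("cluster", cluster_ip), ("service", service_ip),
                   ("gateway", gateway)]:
    assert addr in subnet, f"{name} address {addr} is outside {subnet}"
print("addressing plan is consistent")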



4.3.3 Initial configuration using <strong>the</strong> service panel<br />

After <strong>the</strong> hardware is physically installed into racks, complete <strong>the</strong> following steps to initially<br />

configure <strong>the</strong> cluster through <strong>the</strong> service panel:<br />

1. Choose any node that is to become a member of <strong>the</strong> cluster being created.<br />

2. At <strong>the</strong> service panel of that node, press and release <strong>the</strong> up or down navigation button<br />

continuously until Node: is displayed.<br />

Important: If a time-out occurs when entering <strong>the</strong> input for <strong>the</strong> fields during <strong>the</strong>se<br />

steps, you must begin again from step 2. All of <strong>the</strong> changes are lost, so be sure to have<br />

all of <strong>the</strong> information available before beginning again.<br />

3. Press and release <strong>the</strong> left or right navigation button continuously until Create Cluster? is<br />

displayed. Press <strong>the</strong> select button.<br />

4. If IPv4 Address: is displayed on line 1 of <strong>the</strong> service display, go to step 5. If Delete<br />

Cluster? is displayed on line 1 of <strong>the</strong> service display, this node is already a member of a<br />

cluster. Ei<strong>the</strong>r <strong>the</strong> wrong node was selected, or this node was already used in a previous<br />

cluster. The ID of this existing cluster is displayed on line 2 of <strong>the</strong> service display:<br />

a. If <strong>the</strong> wrong node was selected, this procedure can be exited by pressing <strong>the</strong> left, right,<br />

up, or down button (it cancels automatically after 60 seconds).<br />

b. If you are certain that <strong>the</strong> existing cluster is not required, follow <strong>the</strong>se steps:<br />

i. Press and hold <strong>the</strong> up button.<br />

ii. Press and release <strong>the</strong> select button. Then, release <strong>the</strong> up button, which deletes <strong>the</strong><br />

cluster information from <strong>the</strong> node. Go back to step 1 and start again.<br />

Important: When a cluster is deleted, all of <strong>the</strong> client data that is contained in that<br />

cluster is lost.<br />

5. If you are creating the cluster with IPv4, press the select button; otherwise, for IPv6, press the down arrow to display IPv6 Address:, and press the select button.

6. Use <strong>the</strong> up or down navigation buttons to change <strong>the</strong> value of <strong>the</strong> first field of <strong>the</strong> IP<br />

address to <strong>the</strong> value that has been chosen.<br />

Note: For IPv4, pressing and holding <strong>the</strong> up or down buttons will increment or<br />

decrease <strong>the</strong> IP address field by units of 10. The field value rotates from 0 to 255 with<br />

<strong>the</strong> down button, and from 255 to 0 with <strong>the</strong> up button.<br />

For IPv6, you do <strong>the</strong> same steps except that it is a 4-digit hexadecimal field, and <strong>the</strong><br />

individual characters will increment.<br />

7. Use <strong>the</strong> right navigation button to move to <strong>the</strong> next field. Use <strong>the</strong> up or down navigation<br />

buttons to change <strong>the</strong> value of this field.<br />

8. Repeat step 7 for each of <strong>the</strong> remaining fields of <strong>the</strong> IP address.<br />

9. When <strong>the</strong> last field of <strong>the</strong> IP address has been changed, press <strong>the</strong> select button.<br />

10.Press <strong>the</strong> right arrow button:<br />

a. For IPv4, IPv4 Subnet: is displayed.<br />

b. For IPv6, IPv6 Prefix: is displayed.<br />

11.Press <strong>the</strong> select button.<br />



12.Change <strong>the</strong> fields for IPv4 Subnet in <strong>the</strong> same way that <strong>the</strong> IPv4 IP address fields were<br />

changed. There is only a single field for IPv6 Prefix.<br />

13.When the last field of IPv4 Subnet/IPv6 Prefix has been changed, press the select button.

14.Press <strong>the</strong> right navigation button:<br />

a. For IPv4, IPv4 Gateway: is displayed.<br />

b. For IPv6, IPv6 Gateway: is displayed.<br />

15.Press <strong>the</strong> select button.<br />

16.Change <strong>the</strong> fields for <strong>the</strong> appropriate Gateway in <strong>the</strong> same way that <strong>the</strong> IPv4/IPv6 address<br />

fields were changed.<br />

17.When <strong>the</strong> changes to all of <strong>the</strong> Gateway fields have been made, press <strong>the</strong> select button.<br />

18.Press <strong>the</strong> right navigation button:<br />

a. For IPv4, IPv4 Create Now? is displayed.<br />

b. For IPv6, IPv6 Create Now? is displayed.<br />

19.When <strong>the</strong> settings have all been verified as accurate, press <strong>the</strong> select button.<br />

To review <strong>the</strong> settings before creating <strong>the</strong> cluster, use <strong>the</strong> right and left buttons. Make any<br />

necessary changes, return to Create Now?, and press <strong>the</strong> select button.<br />

If <strong>the</strong> cluster is created successfully, Password: is displayed on line 1 of <strong>the</strong> service display<br />

panel. Line 2 contains a randomly generated password, which is used to complete <strong>the</strong><br />

cluster configuration in <strong>the</strong> next section.<br />

Important: Make a note of this password now. It is case sensitive. The password is<br />

displayed only for approximately 60 seconds. If <strong>the</strong> password is not recorded, <strong>the</strong><br />

cluster configuration procedure must be started again from <strong>the</strong> beginning.<br />

20.When Cluster: is displayed on line 1 of <strong>the</strong> service display and <strong>the</strong> Password: display has<br />

timed out, <strong>the</strong> cluster was created successfully. Also, <strong>the</strong> cluster IP address is displayed<br />

on line 2 when <strong>the</strong> initial creation of <strong>the</strong> cluster is completed.<br />

If <strong>the</strong> cluster is not created, Create Failed: is displayed on line 1 of <strong>the</strong> service display.<br />

Line 2 contains an error code. Refer to <strong>the</strong> error codes that are documented in <strong>IBM</strong><br />

<strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: Service Guide, GC26-7901, to identify <strong>the</strong><br />

reason why <strong>the</strong> cluster creation failed and <strong>the</strong> corrective action to take.<br />

Important: At this time, do not repeat this procedure to add o<strong>the</strong>r nodes to <strong>the</strong> cluster.<br />

Adding nodes to <strong>the</strong> cluster is accomplished in 7.8.2, “Adding a node” on page 388 and in<br />

8.10.3, “Adding nodes to <strong>the</strong> cluster” on page 560.<br />

4.4 Adding <strong>the</strong> cluster to <strong>the</strong> SSPC or <strong>the</strong> SVC HMC<br />

After you have performed <strong>the</strong> activities in 4.3, “Setting up <strong>the</strong> SVC cluster” on page 111,<br />

complete <strong>the</strong> cluster setup using <strong>the</strong> SVC Console. Follow 4.4.1, “Configuring <strong>the</strong> GUI” on<br />

page 117 to create <strong>the</strong> cluster and complete <strong>the</strong> configuration.<br />

Important: Make sure that <strong>the</strong> SVC cluster IP address (svcclusterip) can be reached<br />

successfully with a ping command from <strong>the</strong> SVC Console.<br />



4.4.1 Configuring <strong>the</strong> GUI<br />

If this is <strong>the</strong> first time that <strong>the</strong> SVC administration GUI is being used, you must configure it:<br />

1. Open <strong>the</strong> GUI using one of <strong>the</strong> following methods:<br />

– Double-click <strong>the</strong> icon marked <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Console on <strong>the</strong> SVC Console’s<br />

desktop.<br />

– Open a Web browser on <strong>the</strong> SVC Console and point to this address:<br />

http://localhost:9080/ica (We accessed <strong>the</strong> SVC Console using this method.)<br />

– Open a Web browser on a separate workstation and point to this address:<br />

http://svcconsoleipaddress:9080/ica<br />

Figure 4-10 shows <strong>the</strong> SVC 5.1 Welcome window.<br />

Figure 4-10 Welcome window<br />

2. Click Add <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Cluster, and you will be presented with <strong>the</strong> window<br />

that is shown in Figure 4-11.<br />

Figure 4-11 Adding <strong>the</strong> SVC cluster IP address<br />



Important: Do not forget to select Create Initialize Cluster. Without this flag, you will<br />

not be able to initialize <strong>the</strong> cluster and you will get <strong>the</strong> error message CMMVC5753E.<br />

Figure 4-12 shows <strong>the</strong> CMMVC5753E error.<br />

Figure 4-12 CMMVC5753E error<br />

3. Click OK and a pop-up window opens and prompts for <strong>the</strong> user ID and <strong>the</strong> password of <strong>the</strong><br />

SVC cluster, as shown in Figure 4-13. Enter <strong>the</strong> user ID admin and <strong>the</strong> cluster admin<br />

password that was set earlier in 4.3.1, “Creating <strong>the</strong> cluster (first time) using <strong>the</strong> service<br />

panel” on page 111, and click OK.<br />

Figure 4-13 SVC cluster user ID and password sign-on window<br />

4. The browser accesses <strong>the</strong> SVC and displays <strong>the</strong> Create New Cluster wizard window, as<br />

shown in Figure 4-14. Click Continue.<br />

Figure 4-14 Create New Cluster wizard<br />



5. At the Create New Cluster window (Figure 4-15), fill in the following details:
– A new superuser password to replace the random one that the cluster generated: The password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start with a number and must be between one and 15 characters long.

Users: The admin user that was previously used is no longer needed. It is replaced by the superuser user that is created at cluster initialization time. Starting with SVC 5.1, the CIM Agent has been moved inside the SVC cluster.

– A service password to access the cluster for service operations: The same character rules apply as for the superuser password.
– A cluster name: The cluster name is case sensitive and follows the same character rules; it cannot start with a number and must be between one and 15 characters long.
– A service IP address to access the cluster for service operations: Choose between an automatically assigned IP address from Dynamic Host Configuration Protocol (DHCP) or a static IP address.

Tip: The service IP address differs from the cluster IP address. However, because the service IP address is configured for the cluster, it must be on the same IP subnet.

– The fabric speed of the FC network.
– The Administrator Password Policy check box: If selected, it enables a user to reset the password from the service panel (helpful, for example, if the password is forgotten). This check box is optional.

Important: The SVC must be in a secure room if this function is enabled, because anyone who knows the correct key sequence can reset the admin password:
► Use this key sequence:
a. From the Cluster: menu item displayed on the service panel, press the left or right button until Recover Cluster? is displayed.
b. Press the select button. Service Access? is displayed.
c. Press and hold the up button, and then press and release the select button. This step generates a new random password. Write it down.
► Important: Be careful, because pressing and holding the down button, and then pressing and releasing the select button, places the node in service mode.

6. After you have filled in the details, click Create New Cluster (Figure 4-15).


Figure 4-15 Cluster details

Important: Make sure that you confirm the Administrator and Service passwords and retain them in a safe place for future use.

7. A Creating New Cluster window opens, as shown in Figure 4-16. Click Continue each time that you are prompted.


Figure 4-16 Creating New Cluster

8. A Created New Cluster window opens, as shown in Figure 4-17. Click Continue.

Figure 4-17 Created New Cluster

9. A Password Changed window confirms that the password has been modified, as shown in Figure 4-18. Click Continue.

Figure 4-18 Password Changed

Note: By this time, the service panel display on the front of the configured node displays the cluster name that was entered previously (for example, ITSO-CLS3).

10. You are then redirected to the License setting window, as shown in Figure 4-19. Choose the type of license that is appropriate for your purchase, and click GO to continue.


Figure 4-19 License Settings

11. Next, the Capacity Licensing Settings window is displayed, as shown in Figure 4-20. To continue, fill out the fields for Virtualization Limit, FlashCopy Limit, and Global and Metro Mirror Limit with the number of terabytes that are licensed. If you do not have a license for any of these features, leave the value at 0. Click Set License Settings.

Figure 4-20 Capacity Licensing Settings

12. A confirmation window confirms the settings for the features, as shown in Figure 4-21. Click Continue.


Figure 4-21 Capacity Licensing Settings confirmation

13. A window confirming that you have successfully created the initial settings for the cluster opens, as shown in Figure 4-22.

Figure 4-22 Cluster successfully created


14. Closing the previous task window by clicking X in the upper-right corner redirects you to the Viewing Clusters window (the cluster appears as unauthenticated). After selecting your cluster and clicking Go, you are asked to authenticate your access by entering your predefined superuser user ID and password.

Figure 4-23 shows the Viewing Clusters window.

Figure 4-23 Viewing Clusters window

15. Perform the following steps to complete the SVC cluster configuration:
a. Add an additional node to the cluster.
b. Configure SSH keys for the command-line user, as shown in 4.5, "Secure Shell overview and CIM Agent" on page 125.
c. Configure user authentication and authorization.
d. Set up the call home options.
e. Set up event notifications and inventory reporting.
f. Create the MDGs.
g. Add an MDisk to the MDG.
h. Identify and create VDisks.
i. Create host objects and map VDisks to them.
j. Identify and configure FlashCopy mappings and Metro Mirror relationships.
k. Back up configuration data.

We describe all of these steps in Chapter 7, "SAN Volume Controller operations using the command-line interface" on page 339, and in Chapter 8, "SAN Volume Controller operations using the GUI" on page 469.


4.5 Secure Shell overview and CIM Agent

Prior to SVC Version 5.1, Secure Shell (SSH) was used to secure data flow between the SVC cluster configuration node (SSH server) and a client, either a command-line client through the command-line interface (CLI) or the Common Information Model object manager (CIMOM). The connection is secured by means of a private and public key pair:
1. A public key and a private key are generated together as a pair.
2. The public key is uploaded to the SSH server.
3. The private key identifies the client and is checked against the public key during the connection. The private key must be protected.
4. The SSH server must also identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.

Secure Shell is the communication vehicle between the management system (usually the System Storage Productivity Center) and the SVC cluster.

SSH is a client/server network application. The SVC cluster acts as the SSH server in this relationship. The SSH client provides a secure environment from which to connect to a remote machine. It uses the principles of public and private keys for authentication.

The communication interfaces prior to SVC version 5.1 are shown in Figure 4-24.

Figure 4-24 Communication interfaces

SSH keys are generated by the SSH client software. The SSH keys include a public key, which is uploaded to and maintained by the cluster, and a private key, which is kept private on the workstation that is running the SSH client. These keys authorize specific users to access the administration and service functions on the cluster. Each key pair is associated with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.

To use the CLI or, for SVC versions prior to 5.1, the SVC graphical user interface (GUI), an SSH client must be installed on that system, the SSH key pair must be generated on the client system, and the client's SSH public key must be stored on the SVC cluster or clusters.

The System Storage Productivity Center and the HMC must have PuTTY, a freeware implementation of SSH-2 for Windows, preinstalled. This software provides the SSH client function for users logged into the SVC Console who want to invoke the CLI or GUI to manage the SVC cluster.

Starting with SVC 5.1, the management design has changed, and the CIM Agent has been moved into the SVC cluster. With SVC 5.1, SSH key authentication is no longer needed for the GUI; it is required only for the SVC command-line interface.

Figure 4-25 shows the SVC management design.

Figure 4-25 SVC management design

4.5.1 Generating public and private SSH key pairs using PuTTY

Perform the following steps to generate SSH keys on the SSH client system:

Note: These keys will be used in the steps documented in 4.5.2, "Uploading the SSH public key to the SVC cluster" on page 129, and in 4.5.3, "Configuring the PuTTY session for the CLI" on page 130.

1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client desktop, select Start → Programs → PuTTY → PuTTYgen.
2. On the PuTTY Key Generator GUI window (Figure 4-26), generate the keys:
a. Select SSH-2 RSA.
b. Leave the number of bits in a generated key at 1024.
c. Click Generate.


Figure 4-26 PuTTY key generator GUI

3. Move the cursor over the blank area in order to generate the keys.

To generate keys: The blank area indicated by the message is the large blank rectangle inside the section of the GUI labelled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This action generates random characters to create a unique key pair.

4. After the keys are generated, save them for later use:
a. Click Save public key, as shown in Figure 4-27.


Figure 4-27 Saving the public key

b. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save. If another name or location is chosen, ensure that a record of it is kept, because the name and location of this SSH public key must be specified in the steps that are documented in 4.5.2, "Uploading the SSH public key to the SVC cluster" on page 129.

Tip: The PuTTY Key Generator saves the public key with no extension, by default. We recommend that you use the string "pub" in naming the public key, for example, "pubkey", to easily differentiate the SSH public key from the SSH private key.

c. In the PuTTY Key Generator window, click Save private key.
d. You are prompted with a warning message, as shown in Figure 4-28. Click Yes to save the private key without a passphrase.

Figure 4-28 Saving the private key without a passphrase


e. When prompted, enter a name (for example, icat) and a location for the private key (for example, C:\Support Utils\PuTTY). Click Save. If you choose another name or location, ensure that you keep a record of it, because the name and location of the SSH private key must be specified when the PuTTY session is configured in the steps that are documented in 4.5.3, "Configuring the PuTTY session for the CLI" on page 130. We suggest that you use the default name icat.ppk, because, in SVC clusters running versions prior to SVC 5.1, this key is used for icat application authentication and must have this default name.

Private key extension: The PuTTY Key Generator saves the private key with the PPK extension.

5. Close the PuTTY Key Generator GUI.
6. Navigate to the directory where the private key was saved (for example, C:\Support Utils\PuTTY).
7. Copy the private key file (for example, icat.ppk) to the C:\Program Files\IBM\svcconsole\cimom directory.

Important: If the private key was named something other than icat.ppk, make sure that you rename it to icat.ppk in the C:\Program Files\IBM\svcconsole\cimom folder. The GUI (which will be used later) expects the file to be called icat.ppk and to be in this location. This key is no longer used in SVC 5.1, but it is still valid for the previous version.

4.5.2 Uploading the SSH public key to the SVC cluster

After you have created your SSH key pair, you need to upload your SSH public key to the SVC cluster:

1. From your browser, go to:
http://svcconsoleipaddress:9080/ica
Select Users, and then, on the next window, select Create a User from the list, as shown in Figure 4-29, and click Go.

Figure 4-29 Create a user

2. In the Create a User window, enter the user ID name that you want to create and the password. At the bottom of the window, select the access level that you want to assign to your user (remember that Security Administrator is the maximum level) and choose the location of the SSH public key file that you created for this user, as shown in Figure 4-30. Click OK.


Figure 4-30 Create user and password

3. You have completed the user creation process and uploaded the user's SSH public key, which will later be paired with the user's private .ppk key, as described in 4.5.3, "Configuring the PuTTY session for the CLI" on page 130. Figure 4-31 shows the successful upload of the SSH admin key.

Figure 4-31 Adding the SSH admin key successfully

4. You have now completed the basic setup requirements for the SVC cluster using the SVC cluster Web interface.

4.5.3 Configuring the PuTTY session for the CLI

Before the CLI can be used, the PuTTY session must be configured using the SSH keys that were generated earlier in 4.5.1, "Generating public and private SSH key pairs using PuTTY" on page 126.


Perform these steps to configure the PuTTY session on the SSH client system:
1. From the System Storage Productivity Center Windows desktop, select Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. In the PuTTY Configuration window (Figure 4-32), from the Category pane on the left, click Session, if it is not already selected.

Tip: The items selected in the Category pane affect the content that appears in the right pane.

Figure 4-32 PuTTY Configuration window

3. In the right pane, under the "Specify the destination you want to connect to" section, select SSH. Under the "Close window on exit" section, select Only on clean exit, which ensures that if there are any connection errors, they will be displayed in the user's window.
4. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH to display the PuTTY SSH Configuration window, as shown in Figure 4-33.


Figure 4-33 PuTTY SSH connection configuration window

5. In the right pane, in the "Preferred SSH protocol version" section, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select Connection → SSH → Auth.
7. As shown in Figure 4-34, in the right pane, in the "Private key file for authentication:" field under the Authentication parameters section, either browse to or type the fully qualified directory path and file name of the SSH client private key file created earlier (for example, C:\Support Utils\PuTTY\icat.PPK).


Figure 4-34 PuTTY Configuration: Private key location

8. From the Category pane on the left side of the PuTTY Configuration window, click Session.
9. In the right pane, follow these steps, as shown in Figure 4-35:
a. Under the "Load, save, or delete a stored session" section, select Default Settings, and click Save.
b. For the Host Name (or IP address), type the IP address of the SVC cluster.
c. In the Saved Sessions field, type a name (for example, SVC) to associate with this session.
d. Click Save.


Figure 4-35 PuTTY Configuration: Saving a session

You can now either close the PuTTY Configuration window or leave it open to continue.

Tip: Normally, output that comes from the SVC is wider than the default PuTTY window size. We recommend that you change your PuTTY window appearance to use a font with a character size of 8. To change it, click the Appearance item in the Category tree, as shown in Figure 4-35, and then click Font. Choose a font with a character size of 8.

4.5.4 Starting the PuTTY CLI session

The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the session as detailed here:
1. From the SVC Console desktop, open the PuTTY application by selecting Start → Programs → PuTTY.
2. On the PuTTY Configuration window (Figure 4-36), select the session saved earlier (in our example, ITSO-SVC1), and click Load.
3. Click Open.


Figure 4-36 Open PuTTY command-line session

4. If this is the first time that the PuTTY application has been used since generating and uploading the SSH key pair, a PuTTY Security Alert window opens, because the cluster's SSH host key is not yet cached on the workstation, as shown in Figure 4-37. Click Yes, which invokes the CLI.

Figure 4-37 PuTTY Security Alert

5. At the Login as: prompt, type admin and press Enter (the user ID is case sensitive). As shown in Example 4-1, the private key used in this PuTTY session is now authenticated against the public key that was uploaded to the SVC cluster.

Example 4-1 Authenticating
login as: admin
Authenticating with public key "rsa-key-20080617"
Last login: Wed Aug 18 03:30:21 2009 from 10.64.210.240
IBM_2145:ITSO-CL1:admin>


You have now completed the tasks that are required to configure the CLI for SVC administration from the SVC Console. You can close the PuTTY session.
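Before closing the session, you can optionally verify the connection by querying the cluster from the CLI prompt. This is a minimal illustration only; the command lists the cluster and its properties, and the output will differ in your environment:

IBM_2145:ITSO-CL1:admin>svcinfo lscluster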

4.5.5 Configuring SSH for AIX clients

To configure SSH for AIX clients, follow these steps:
1. The SVC cluster IP address must be reachable with the ping command from the AIX workstation from which cluster access is desired.
2. OpenSSL must be installed for OpenSSH to work. Install OpenSSH on the AIX client:
a. The installation images can be found at these Web sites:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully, because OpenSSL must be installed before using SSH.
3. Generate an SSH key pair:
a. Run the cd command to go to the /.ssh directory.
b. Run the ssh-keygen -t rsa command.
c. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key (//.ssh/id_rsa)
d. Pressing Enter uses the default file that is shown in parentheses; otherwise, enter a file name (for example, aixkey), and press Enter.
e. The following prompt is displayed:
Enter a passphrase (empty for no passphrase)
We recommend entering a passphrase when the CLI will be used interactively, because there is no other authentication when connecting through the CLI. After typing the passphrase, press Enter.
f. The following prompt is displayed:
Enter same passphrase again:
Type the passphrase again, and then press Enter.
g. A message is displayed indicating that the key pair has been created. The private key file has the name entered previously (for example, aixkey). The public key file has the same name with an extension of .pub (for example, aixkey.pub).

Using a passphrase: If you are generating an SSH key pair so that you can interactively use the CLI, we recommend that you use a passphrase, so that you must authenticate every time that you connect to the cluster. It is possible to have a passphrase-protected key for scripted usage, but you will have to use the expect command or a similar tool to parse the passphrase into the ssh command.
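After the public key (for example, aixkey.pub) has been uploaded to the cluster for a user, as described in 4.5.2, "Uploading the SSH public key to the SVC cluster" on page 129, you can open a CLI session from the AIX client. In this minimal sketch, the cluster IP address 10.0.1.119 is an invented example; substitute your own address and key file name:

# connect to the SVC cluster CLI using the private key generated above
ssh -i /.ssh/aixkey admin@10.0.1.119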

4.6 Using IPv6

SVC V4.3 introduced IPv6 functionality to the console and clusters. You can use IPv4, IPv6, or both in a dual-stack configuration. Migrating to (or from) IPv6 can be done remotely and is nondisruptive, except that you need to remove and redefine the cluster on the SVC Console.


Using IPv6: To remotely access the SVC Console and clusters running IPv6, you are required to run Internet Explorer 7 and have IPv6 configured on your local workstation.

4.6.1 Migrating a cluster from IPv4 to IPv6

As a prerequisite, have IPv6 already enabled and configured on the System Storage Productivity Center/Windows server running the SVC Console. We configured an interface with both IPv4 and IPv6 addresses on the System Storage Productivity Center, as shown in Example 4-2.

Example 4-2 Output of ipconfig on the System Storage Productivity Center
C:\Documents and Settings\Administrator>ipconfig

Windows IP Configuration

Ethernet adapter IPv6:
   Connection-specific DNS Suffix . :
   IP Address. . . . . . . . . . . . : 10.0.1.115
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   IP Address. . . . . . . . . . . . : 2001:610::115
   IP Address. . . . . . . . . . . . : fe80::214:5eff:fecd:9352%5
   Default Gateway . . . . . . . . . :

To migrate a cluster, follow these steps:
1. Select Manage Cluster → Modify IP Addresses, as shown in Figure 4-38.

Figure 4-38 Modify IP Addresses window


2. In the IPv6 section that is shown in Figure 4-38, select an IPv6 interface, and click Modify.
3. Then, in the window that is shown in Figure 4-39:
a. Type an IPv6 prefix in the IPv6 Network Prefix field. The prefix can have a value of 0 to 127.
b. Type an IPv6 address in the Cluster IP field.
c. Type an IPv6 address in the Service IP address field.
d. Type an IPv6 gateway in the Gateway field.
e. Click Modify Settings.

Figure 4-39 Modify IP Addresses: Adding IPv6 addresses

4. A confirmation window displays (Figure 4-40). Click X in the upper-right corner to close this tab.

Figure 4-40 Modify IP Addresses window

5. Before you remove the cluster from the SVC Console, test the IPv6 connectivity using the ping command from a cmd.exe session on the System Storage Productivity Center (as shown in Example 4-3 on page 139).


Example 4-3 Testing IPv6 connectivity to the SVC cluster
C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119

Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:

Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms


Figure 4-43 Insert CIM user ID and password

10. The Viewing Clusters window reopens with the cluster displaying an IPv6 address, as shown in Figure 4-44. Click Launch the SAN Volume Controller Console for the cluster, and go back to modifying IP addresses, as you did in step 1.

Figure 4-44 Viewing Clusters window: Displaying the new cluster using the IPv6 address

11. In the Modify IP Addresses window, select the IPv4 address port, select Clear Port Settings, and click GO, as shown in Figure 4-45.


Figure 4-45 Clear Port Settings

12. A confirmation message appears, as shown in Figure 4-46. Click OK.

Figure 4-46 Confirmation of IP address change

13. A second window (Figure 4-47) opens, confirming that the IPv4 stack has been disabled and the associated addresses have been removed. Click Return.

Figure 4-47 IPv4 stack has been removed

4.6.2 Migrating a cluster from IPv6 to IPv4

The process of migrating a cluster from IPv6 to IPv4 is identical to the process described in 4.6.1, "Migrating a cluster from IPv4 to IPv6" on page 137, except that you add IPv4 addresses and remove the IPv6 addresses.
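The same address changes can also be made from the CLI instead of the GUI. The following single-line sketch is purely illustrative: the IPv6-specific parameter names (-clusterip_6, -gw_6, and -prefix_6) are our assumption based on the IPv4 parameter naming convention, so verify them against the command reference for your code level before use:

svctask chcluster -clusterip_6 2001:610::119 -gw_6 2001:610::1 -prefix_6 64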



4.7 Upgrading the SVC Console software

This section takes you through the steps to upgrade your existing SVC Console GUI. You can also use these steps to install a new SVC Console on another server.

Follow these steps:
1. Download the latest available version of the ICA application and check it for compatibility with your running version at the following Web site:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002888
2. Save your account definitions, documenting all defined users, passwords, and SSH keys, because you might need to reuse them if you encounter any problems during the GUI upgrade process. Example 4-4 shows how to list the defined accounts using the CLI.

Example 4-4 Accounts list

IBM_2145:ITSO-CLS3:admin>svcinfo lsuser
id name      password ssh_key remote usergrp_id usergrp_name
0  superuser yes      no      no     0          SecurityAdmin
1  admin     yes      yes     no     0          SecurityAdmin
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser 0
id 0
name superuser
password yes
ssh_key no
remote no
usergrp_id 0
usergrp_name SecurityAdmin
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser 1
id 1
name admin
password yes
ssh_key yes
remote no
usergrp_id 0
usergrp_name SecurityAdmin
IBM_2145:ITSO-CLS3:admin>

3. Execute the setup.exe file from the location where you saved and unzipped the latest SVC Console file. Figure 4-48 shows the location of the setup.exe file on our system.


Figure 4-48 Location of the setup.exe file

4. The Installation wizard starts. The first window asks you to shut down any running Windows programs, stop all SVC services, and review the readme file.
5. Figure 4-49 shows how to stop the SVC services.

Figure 4-49 Stop CIMOM service


6. Figure 4-50 shows the wizard Welcome window.

Figure 4-50 Wizard welcome window

After you have reviewed the installation instructions and the readme file, click Next.

7. The installation asks you to read and accept the terms of the license agreement, as shown in Figure 4-51. Click Next.


Figure 4-51 License agreement window

8. The installation detects any existing SVC Console installation (if you are upgrading). If it detects an existing installation, it asks you to perform these steps:
– Select Preserve Configuration if you want to keep your existing configuration. (You must make sure that this option is checked.)
– Manually shut down the SVC Console services:
IBM System Storage SAN Volume Controller Pegasus Server
Service Location Protocol
IBM WebSphere Application Server V6 - SVC
There might be differences in the existing services, depending on which version you are upgrading from. Follow the instructions in the wizard dialog for which services to shut down, as shown in Figure 4-52. Click Next.

Figure 4-52 Product Installation Check

Important: If you want to keep your SVC configuration, make sure that you select Preserve Configuration. If you omit this selection, you will lose your entire SVC Console setup, and you will have to reconfigure your console as though it were a new installation.

9. The installation wizard then checks that the appropriate services are shut down, removes the previous version, and shows the Installation Confirmation window, as shown in Figure 4-53. If the wizard detects any problems, it first shows you a page detailing the possible problems, giving you time to fix them before proceeding.


Figure 4-53 Installation Confirmation

10. Figure 4-54 shows the progress of the installation. In our environment, it took approximately 10 minutes to complete.

Figure 4-54 Installation Progress

11. The installation process now starts the migration of the cluster user accounts. Starting with SVC 5.1, the CIMOM has been moved into the cluster, and it is no longer present on the SVC Console or System Storage Productivity Center. The CIMOM authentication login is performed in the ICA application when we launch the SVC management application. As part of the migration input, Figure 4-55 shows where to enter the "admin" password for each of the clusters that you already own. This password was generated during the first creation of the SVC cluster and must be carefully saved.

Figure 4-55 Migration Input

12. At the end of the user accounts migration process, you might get the error that is shown in Figure 4-56.

Figure 4-56 SVC cluster user account migration error

This message is normal behavior, because, in our environment, we implemented only the superuser user ID. The GUI upgrade wizard is intended to work only for ordinary user accounts; it is not intended to migrate the superuser user. If you get this error, when you try to access your SVC cluster using the GUI, you must enter the default CIMOM user ID (superuser) and password (passw0rd), because the superuser account has not been migrated; use the default in the meantime.

13. Click Next. The wizard either restarts all of the appropriate SVC Console processes, or informs you that you need to reboot, and then gives you a summary of the installation. In our case, we were told to reboot, as shown in Figure 4-57.

Figure 4-57 Installation summary

14. The wizard requires us to restart our computer (Figure 4-58).

Figure 4-58 Installation finished: Requesting reboot


15. Finally, to see the new interface, launch the SVC Console by using the icon on the desktop. Log in and confirm that the upgrade was successful by noting the Console Version number on the right side of the window under the graphic. See Figure 4-59.

Figure 4-59 Launching the upgraded SVC Console

You have completed the upgrade of your SVC Console.

To access the SVC, click Clusters in the left pane. You will be redirected to the Viewing Clusters window, as shown in Figure 4-60.

Figure 4-60 Viewing Clusters


As you can see, the cluster's availability status is "Unauthenticated", which is to be expected. Select the cluster, click GO, and launch the SAN Volume Controller Application. You will be required to enter your CIMOM user ID (superuser) and your password (passw0rd), as shown in Figure 4-61.

Figure 4-61 Sign on to cluster

Finally, you can manage your SVC cluster, as shown in Figure 4-62.

Figure 4-62 Cluster management window


Chapter 5. Host configuration

In this chapter, we describe the basic host configuration procedures that are required to connect supported hosts to the IBM System Storage SAN Volume Controller (SVC).


5.1 SVC setup

Traditionally in IBM SAN Volume Controller (SVC) environments, hosts were connected to an SVC via a storage area network (SAN). In actual implementations with high availability requirements (the majority of the target clients for SVC), the SAN is implemented as two separate fabrics, providing a fault-tolerant arrangement of two or more counterpart SANs. For the hosts, each SAN provides alternate paths to the resources (virtual disks (VDisks)) that are provided by the SVC.

Starting with SVC 5.1, iSCSI is introduced as an alternative protocol for attaching hosts to the SVC via a LAN. However, within the SVC, all communications with back-end storage subsystems, and with other SVC clusters, take place via Fibre Channel (FC).

For iSCSI/LAN-based access to the SVC, using either a single network or two physically separated networks is supported. The iSCSI feature is a software feature that is provided by the SVC 5.1 code. It is available on the new CF8 nodes and also on the existing nodes that support the SVC 5.1 release. The existing SVC node hardware has multiple 1 Gbps Ethernet ports. Until now, only one 1 Gbps Ethernet port was used, and it was used for cluster configuration. With the introduction of iSCSI, both ports can now be used.

Redundant paths to VDisks can be provided for the SAN, as well as for the iSCSI environment.

Figure 5-1 shows the attachments that are supported with the SVC 5.1 release.

Figure 5-1 SVC host attachment overview

5.1.1 Fibre Channel and SAN setup overview

Hosts using Fibre Channel (FC) as the connection to an SVC are always connected to a SAN switch. For SVC configurations, we strongly recommend the use of two redundant SAN fabrics. Therefore, each server is equipped with a minimum of two host bus adapters (HBAs), with each of the HBAs connected to a SAN switch in one of the two fabrics (assuming one port per HBA).


SVC imposes no special limit on the FC optical distance between the SVC nodes and the host servers. A server can therefore be attached to an edge switch in a core-edge configuration while the SVC cluster is at the core. SVC supports up to three inter-switch link (ISL) hops in the fabric. Therefore, the server and the SVC node can be separated by up to five actual FC links, four of which can be 10 km (6.2 miles) long if longwave small form-factor pluggables (SFPs) are used. For high performance servers, the rule is to avoid ISL hops, that is, connect the servers to the same switch to which the SVC is connected, if possible.

Remember these limits when connecting host servers to an SVC:
► Up to 256 hosts per I/O Group, which results in a total of 1,024 hosts per cluster. Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.
► A total of 512 distinct configured host worldwide port names (WWPNs) are supported per I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) that are associated with all of the hosts that are associated with a single I/O Group.

The access from a server to an SVC cluster via the SAN fabrics is defined by the use of zoning. Consider these rules for host zoning with the SVC:
► For configurations of fewer than 64 hosts per cluster, the SVC supports a simple set of zoning rules that enables the creation of a small set of host zones for various environments. Switch zones containing HBAs must contain fewer than 40 initiators in total, including the SVC ports that act as initiators. Thus, a valid zone is 32 host ports plus eight SVC ports. This restriction exists because the number of registered state change notification (RSCN) messages scales on the order of N² with the number of initiators per zone [N], which can cause problems. We recommend that you zone using single HBA port zoning, as described in the next paragraph.
► For configurations of more than 64 hosts per cluster, the SVC supports a more restrictive set of host zoning rules. Each HBA port must be placed in a separate zone. Also included in this zone is exactly one port from each SVC node in the I/O Groups that are associated with this host. We recommend that hosts are zoned this way in smaller configurations, too, but it is not mandatory.
► Switch zones containing HBAs must contain HBAs from similar hosts or similar HBAs in the same host. For example, AIX and Windows NT® hosts must be in separate zones, and QLogic and Emulex adapters must be in separate zones.
► To obtain the best performance from a host with multiple FC ports, ensure that each FC port of a host is zoned with a separate group of SVC ports.
► To obtain the best overall performance of the subsystem and to prevent overloading, the workload to each SVC port must be equal, typically by zoning approximately the same number of host FC ports to each SVC FC port.
► For any given VDisk, the number of paths through the SAN from the SVC nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group (four paths to each VDisk that is provided by this I/O Group) are sufficient.

Figure 5-2 on page 156 shows an overview of a setup with servers that each have two single-port HBAs. Follow this method to connect them:
► Try to distribute the actual hosts equally between two logical sets per I/O Group. Always connect hosts from each set to the same group of SVC ports. This "port group" includes exactly one port from each SVC node in the I/O Group. The zoning defines the correct connections.


► The "port groups" are defined this way:
– Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both nodes, for example, N1/N2 of I/O Group zero.
– Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both nodes of an I/O Group.
► You can create aliases for these "port groups" (per I/O Group):
– Fabric A: IOGRP0_PG1 N1_P1;N2_P1, IOGRP0_PG2 N1_P3;N2_P3
– Fabric B: IOGRP0_PG1 N1_P4;N2_P4, IOGRP0_PG2 N1_P2;N2_P2
► Create host zones by always using the host port WWPN plus the PG1 alias for hosts in the first host set, and the host port WWPN plus the PG2 alias for hosts in the second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or PG2 aliases from the specific I/O Groups to the host zone. A concrete zoning sketch follows Figure 5-2.

Using this schema provides four paths to one I/O Group for each host and helps to maintain an equal distribution of host connections on the SVC ports. Figure 5-2 shows an overview of this host zoning schema.

Figure 5-2 Overview of four path host zoning
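To make the alias and zone definitions concrete, the following sketch shows how the fabric A definitions might be entered on a Brocade switch. The WWPNs and the host alias are invented placeholders, and other switch vendors use a different syntax:

alicreate "IOGRP0_PG1", "50:05:07:68:01:40:aa:aa; 50:05:07:68:01:40:bb:bb"
alicreate "HOST1_HBA1", "21:00:00:e0:8b:05:cc:cc"
zonecreate "Z_HOST1_IOGRP0", "HOST1_HBA1; IOGRP0_PG1"
cfgadd "FABRIC_A_CFG", "Z_HOST1_IOGRP0"
cfgenable "FABRIC_A_CFG"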

We recommend, whenever possible, using the minimum number of paths that are necessary to achieve sufficient redundancy in the SAN environment; for SVC environments, this means no more than four paths per I/O Group or VDisk.


Remember that all paths have to be managed by the multipath driver on the host side. If we assume that a server is connected via four ports to the SVC, each VDisk is seen via eight paths. With 125 VDisks mapped to this server, the multipath driver has to support handling up to 1,000 active paths (8 x 125). You can obtain details and current limitations for the IBM Subsystem Device Driver (SDD) in Storage Multipath Subsystem Device Driver User's Guide, GC52-1309-01, at this Web site:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1

For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning schema that is shown in Figure 5-3. You can combine this schema with the previous four-path zoning schema.

Figure 5-3 Overview of eight path host zoning

5.1.2 Port mask

SVC V4.1 added the concept of a port mask. With prior releases, any particular host saw the same set of SCSI logical unit numbers (LUNs) from each of the four FC ports in each node in a particular I/O Group.

The port mask is associated with a host object. The port mask controls which SVC (target) ports any particular host can access. The port mask applies to logins from any of the host (initiator) ports associated with the host object in the configuration model. The port mask consists of four binary bits, represented in the command-line interface (CLI) as 0 or 1. The rightmost bit is associated with FC port 1 on each node. The leftmost bit is associated with port 4. A 1 in any particular bit position allows access to that port, and a 0 denies access. The default port mask is 1111, preserving the behavior of the product prior to the introduction of this feature.


For each login between an HBA port and an SVC node port, SVC decides whether to allow or deny access by examining the port mask that is associated with the host object to which the HBA belongs. If access is denied, SVC responds to SCSI commands as though the HBA port is unknown to the SVC.
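As a brief illustration, the port mask can be supplied when the host object is defined on the CLI. In the following sketch, the host name and WWPN are invented placeholders; a mask of 0011 allows this host to log in only through FC ports 1 and 2 of each node:

svctask mkhost -name WINHOST01 -hbawwpn 210000E08B05AAAA -mask 0011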

5.2 iSCSI overview

iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and, thereby, leverages an existing IP network instead of requiring FC HBAs and a SAN fabric infrastructure.

5.2.1 Initiators and targets

An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP network to an iSCSI target. An iSCSI target refers to a storage resource that is located on an iSCSI server or, to be more precise, to one of potentially many instances of iSCSI nodes running on that server as a "target."

5.2.2 Nodes

We refer to a single iSCSI initiator or iSCSI target as an iSCSI node. There are one or more iSCSI nodes within a network entity. The iSCSI node is accessible via one or more network portals. A network portal is a component of a network entity that has a TCP/IP network address and that can be used by an iSCSI node.

5.2.3 IQN

An iSCSI node is identified by its unique iSCSI name, which is referred to as an IQN. Remember that this name serves only for the identification of the node; it is not the node's address, and in iSCSI, the name is separated from the addresses. This separation allows multiple iSCSI nodes to use the same addresses or, as implemented in the SVC, the same iSCSI node to use multiple addresses.

An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its own IQN, which, by default, takes this form, where <clustername> and <nodename> are the names of the cluster and node:

iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

An iSCSI host is defined in the SVC by specifying its iSCSI initiator names. The following is an example of the IQN of a Windows Server host:

iqn.1991-05.com.microsoft:itsoserver01

During the configuration of an iSCSI host in the SVC, you must specify the host's initiator IQNs. You can read about host creation in detail in Chapter 7, "SAN Volume Controller operations using the command-line interface" on page 339, and in Chapter 8, "SAN Volume Controller operations using the GUI" on page 469.

An alias string can also be associated with an iSCSI node. The alias allows an organization to associate a user-friendly string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name.
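As a minimal sketch of what this configuration looks like on the CLI (the host name and IQN are the example values given previously; check the mkhost reference for the exact parameters in your code level), an iSCSI host is created by supplying its initiator IQN:

svctask mkhost -name itsoserver01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01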

Figure 5-4 on page 159 shows an overview of the iSCSI implementation in the SVC.

Figure 5-4 SVC iSCSI overview

A host that is using iSCSI as the communication protocol to access its VDisks on an SVC cluster uses its single or multiple Ethernet adapters to connect to an IP LAN. The nodes of the SVC cluster are connected to the LAN by the existing 1 Gbps Ethernet ports on the node. For iSCSI, both ports can be used.

Note that Ethernet link aggregation (port trunking) or "channel bonding" for the SVC nodes' Ethernet ports is not supported for the 1 Gbps ports in this release. Support for Jumbo Frames, that is, support for MTU sizes greater than 1,500 bytes, is planned for future SVC releases.

For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two IPv4 and two IPv6 addresses or iSCSI network portals can be defined. Figure 2-12 on page 29 shows one IPv4 and one IPv6 address per Ethernet port.

5.3 VDisk discovery

Hosts can discover VDisks through one of the following three mechanisms:

► Internet Storage Name Service (iSNS)

SVC can register itself with an iSNS name server; the IP address of this server is set using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.

► Service Location Protocol (SLP)

The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One service is the CIMOM, which runs on the configuration node; the iSCSI I/O service can now also be reported.

► SCSI Send Target request

The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI targets before a discovery can be started.
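As an illustration of the Send Target mechanism (this sketch assumes a Linux host with the open-iscsi initiator tools installed; the portal address is an example value), a discovery can be issued against one of the SVC's iSCSI network portals:

iscsiadm -m discovery -t sendtargets -p 9.43.86.117:3260

The command returns the iSCSI target IQNs that are reachable through that portal.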

5.4 Authentication

Authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the correct secret is not provided by the host, the SVC will not allow it to perform I/O to VDisks. The cluster can also be assigned a CHAP secret.
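The following lines are a hedged sketch of how the secrets can be assigned (the secret strings are examples, and the parameter names must be verified against the CLI reference for your code level):

svctask chhost -chapsecret secret01 Kanaga
svctask chcluster -iscsiauthmethod chap -chapsecret secret02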

A new feature with iSCSI is that you can move the IP addresses, which are used to address an iSCSI target on an SVC node, between the nodes of an I/O Group. IP addresses are only moved from one node to its partner node if a node goes through a planned or unplanned restart. If the Ethernet link to the SVC cluster fails due to a cause outside of the SVC (such as the cable being disconnected or the Ethernet router failing), the SVC makes no attempt to fail over an IP address to restore IP access to the cluster. To enable validation of the Ethernet access to the nodes, each node responds to ping with the standard one-per-second rate without frame loss.

The SVC 5.1 release introduced a new concept that is used for handling the iSCSI IP address failover: the "clustered Ethernet port". A clustered Ethernet port consists of one physical Ethernet port on each node in the cluster and contains configuration settings that are shared by all of these ports. These clustered ports are referred to as Port 1 and Port 2 in the CLI or GUI on each node of an SVC cluster. Clustered Ethernet ports can be used for iSCSI or as management ports.

Figure 5-5 on page 161 shows an example of an iSCSI target node failover. It gives a simplified overview of what happens during a planned or unplanned node restart in an SVC I/O Group:

1. During normal operation, one iSCSI target node instance is running on each SVC node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the management addresses if the node acts as the configuration node, are presented on the two ports (P1/P2) of a node.

2. During a restart of an SVC node (N1), the iSCSI target, including all of its network portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6) IP addresses (if N1 acted as the configuration node), will fail over to Port1/Port2 of the partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server will execute a reconnect to its iSCSI target, that is, the same IP addresses presented now by a new node of the SVC cluster.

3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP addresses) running on N2 will fail back to N1. Again, the iSCSI initiator running on a server will execute a reconnect to its iSCSI target. The management addresses will not fail back; N2 will remain in the role of the configuration node for this cluster.

Figure 5-5 iSCSI node failover scenario

From the server's point of view, a multipathing driver (MPIO) is not required to handle an SVC node failover. In the case of a node restart, the server simply reconnects to the IP addresses of the iSCSI target node, which reappear after several seconds on the ports of the partner node.

A host multipathing driver for iSCSI is required in these situations:

► To protect a server from network link failures, including port failures on the SVC nodes
► To protect a server from a server HBA failure (if two HBAs are in use)
► To protect a server from network failures, if the server is connected via two HBAs to two separate networks
► To provide load balancing on the server's HBAs and the network links

The commands for the configuration of the iSCSI IP addresses have been separated from the configuration of the cluster IP addresses.

The following commands are new commands for managing iSCSI IP addresses:

► The svcinfo lsportip command lists the iSCSI IP addresses that are assigned for each port on each node in the cluster.
► The svctask cfgportip command assigns an IP address to each node's Ethernet port for iSCSI I/O.

The following commands are new commands for managing the cluster IP addresses:

► The svcinfo lsclusterip command returns a list of the cluster management IP addresses that are configured for each port.
► The svctask chclusterip command modifies the IP configuration parameters for the cluster.
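As an illustrative sketch (the node ID, IP addresses, and Ethernet port number are example values; verify the exact syntax in the cfgportip reference for your code level), an iSCSI IPv4 address can be assigned to Ethernet port 1 of node 1 as follows:

svctask cfgportip -node 1 -ip 9.43.86.117 -mask 255.255.252.0 -gw 9.43.85.1 1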

You can obtain a detailed description of how to use these commands in Chapter 7, "SAN Volume Controller operations using the command-line interface" on page 339.

The parameters for remote services (SSH and Web services) remain associated with the cluster object. During a software upgrade from Version 4.3.1, the configuration settings for the cluster are used to configure clustered Ethernet Port 1.

For iSCSI-based access, using two separate networks and separating iSCSI traffic within the networks by using a dedicated VLAN path for storage traffic prevents any IP interface, switch, or target port failure from compromising the host server's access to the VDisk LUNs.

5.5 AIX-specific information

The following section details specific information that relates to the connection of AIX-based hosts into an SVC environment.

AIX-specific information: In this section, the IBM System p information applies to all AIX hosts that are listed on the SVC interoperability support site, including IBM System i partitions and IBM JS blades.

5.5.1 Configuring the AIX host

To configure the AIX host, follow these steps:

1. Install the HBAs in the AIX host system.
2. Ensure that you have installed the correct operating systems and version levels on your host, including any updates and Authorized Program Analysis Reports (APARs) for the operating system.
3. Connect the AIX host system to the FC switches.
4. Configure the FC switches (zoning), if needed.
5. Install and configure the 2145 and IBM Subsystem Device Driver (SDD) drivers.
6. Configure the host, VDisks, and host mapping on the SAN Volume Controller.
7. Run the cfgmgr command to discover the VDisks that were created on the SVC.

The following sections detail the current support information. It is vital that you regularly check the Web sites that are listed for any updates.

5.5.2 Operating system versions and maintenance levels

At the time of writing, the following AIX levels are supported:

► AIX V4.3.3
► AIX 5L V5.1
► AIX 5L V5.2
► AIX 5L V5.3
► AIX V6.1.3

For the latest information and device driver support, always refer to this site:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX

5.5.3 HBAs for IBM System p hosts

Ensure that your IBM System p AIX hosts use the correct host bus adapters (HBAs).

The following IBM Web site provides current interoperability information about supported HBAs and firmware:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_pSeries

Note: The maximum number of FC ports that are supported in a single host (or logical partition) is four. These ports can be four single-port adapters, two dual-port adapters, or a combination, as long as the maximum number of ports that are attached to the SAN Volume Controller does not exceed four.

Installing the host attachment script on IBM System p hosts

To attach an IBM System p AIX host, you must install the AIX host attachment script. Perform the following steps to install the host attachment script:

1. Access the following Web site:

http://www.ibm.com/servers/storage/support/software/sdd/downloading.html

2. Select Host Attachment Scripts for AIX.
3. Select either Host Attachment Script for SDDPCM or Host Attachment Scripts for SDD from the options, depending on your multipath device driver.
4. Download the AIX host attachment script for your multipath device driver.
5. Follow the instructions that are provided on the Web site or in any readme files to install the script.

5.5.4 Configuring for fast fail and dynamic tracking

For host systems that run an AIX 5L V5.2 or later operating system, you can achieve the best results by using the fast fail and dynamic tracking attributes.

Perform the following steps to configure your host system to use the fast fail and dynamic tracking attributes:

1. Issue the following command to set the FC SCSI I/O Controller Protocol Device to fast fail for each adapter:

chdev -l fscsi0 -a fc_err_recov=fast_fail

The previous command was for adapter fscsi0. Example 5-1 shows the command for both adapters on our test system running AIX 5L V5.3.

Example 5-1 Enable fast fail
#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed

2. Issue the following command to enable dynamic tracking for each FC device:

chdev -l fscsi0 -a dyntrk=yes

The previous example command was for adapter fscsi0. Example 5-2 on page 164 shows the command for both adapters on our test system running AIX 5L V5.3.

Example 5-2 Enable dynamic tracking
#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed
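To confirm that both attributes took effect, you can list them with the standard AIX lsattr command; this small sketch uses the adapter names from our example system:

#lsattr -El fscsi0 -a fc_err_recov -a dyntrk
#lsattr -El fscsi1 -a fc_err_recov -a dyntrk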

Host adapter configuration settings

You can check the availability of the FC host adapters by using the command that is shown in Example 5-3.

Example 5-3 FC host adapter availability
#lsdev -Cc adapter | grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter

You can find the worldwide port name (WWPN) of your FC host adapter and check the firmware level, as shown in Example 5-4. The Network Address field in the output is the WWPN of the FC adapter.

Example 5-4 FC host adapter settings and WWPN
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1

5.5.5 Subsystem Device Driver (SDD) and Path Control Module (SDDPCM)

SDD is a pseudo device driver that is designed to support the multipath configuration environments within IBM products. It resides on a host system, along with the native disk device driver, and provides the following functions:

► Enhanced data availability
► Dynamic I/O load balancing across multiple paths
► Automatic path failover protection
► Concurrent download of licensed internal code

SDD works by grouping each physical path to an SVC logical unit number (LUN), represented by individual hdisk devices within AIX, into a vpath device. For example, if you have four physical paths to an SVC LUN, this design produces four new hdisk devices within AIX. From this point forward, AIX uses this vpath device to route I/O to the SVC LUN. Therefore, when making a Logical Volume Manager (LVM) Volume Group using mkvg, we specify the vpath device as the destination and not the hdisk device.

The SDD support matrix for AIX is available at this Web site:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX

SDD/SDDPCM installation

After downloading the appropriate version of SDD, install it by using the standard AIX installation procedure. The currently supported SDD levels are available at:

http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329528&taskind=2

Check the driver readme file and make sure that your AIX system fulfills all of the prerequisites.

SDD installation

In Example 5-5, we show the appropriate version of SDD downloaded into the /tmp/sdd directory. From here, we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDD. Finally, we initiate the installp command, which installs SDD onto this AIX host.

Example 5-5 Installing SDD on AIX
#ls -l
total 3032
-rw-r----- 1 root system 1546240 Jun 24 15:29 devices.sdd.53.rte.tar
#tar -tvf devices.sdd.53.rte.tar
-rw-r----- 0 0 1536000 Oct 06 11:37:13 2006 devices.sdd.53.rte
#tar -xvf devices.sdd.53.rte.tar
x devices.sdd.53.rte, 1536000 bytes, 3000 media blocks.
# inutoc .
#ls -l
total 6032
-rw-r--r-- 1 root system 476 Jun 24 15:33 .toc
-rw-r----- 1 root system 1536000 Oct 06 2006 devices.sdd.53.rte
-rw-r----- 1 root system 1546240 Jun 24 15:29 devices.sdd.53.rte.tar
# installp -ac -d . all

Example 5-6 checks the installation of SDD.

Example 5-6 Checking the SDD device driver
#lslpp -l | grep -i sdd
devices.sdd.53.rte 1.7.0.0 COMMITTED IBM Subsystem Device Driver
devices.sdd.53.rte 1.7.0.0 COMMITTED IBM Subsystem Device Driver

The 2145 devices.fcp file: A specific "2145" devices.fcp file no longer exists. The standard devices.fcp file now has combined support for SVC/Enterprise Storage Server/DS8000/DS6000.

We can also check that the SDD server is operational, as shown in Example 5-7.

Example 5-7 SDD server is operational
#lssrc -s sddsrv
Subsystem Group PID Status
sddsrv 168430 active
#ps -aef | grep sdd
root 135174 41454 0 15:38:20 pts/1 0:00 grep sdd
root 168430 127292 0 15:10:27 - 0:00 /usr/sbin/sddsrv

Enabling the SDD or SDDPCM Web interface is shown in 5.15, "Using SDDDSM, SDDPCM, and SDD Web interface" on page 251.

SDDPCM installation

In Example 5-8, we show the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From here, we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX host.

Example 5-8 Installing SDDPCM on AIX
# ls -l
total 3232
-rw-r----- 1 root system 1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r----- 271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r-- 1 root system 531 Jul 15 13:25 .toc
-rw-r----- 1 271001 449628 1638400 Oct 31 2007 devices.sddpcm.61.rte
-rw-r----- 1 root system 1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all

Example 5-9 checks the installation of SDDPCM.

Example 5-9 Checking the SDDPCM device driver
# lslpp -l | grep sddpcm
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61
devices.sddpcm.61.rte 2.2.0.0 COMMITTED IBM SDD PCM for AIX V61

Enabling the SDD or SDDPCM Web interface is shown in 5.15, "Using SDDDSM, SDDPCM, and SDD Web interface" on page 251.

5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3

Before adding a new volume from the SVC, the AIX host system Kanaga had a simple, typical configuration, as shown in Example 5-10.

Example 5-10 Status of AIX host system Kanaga
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
#lsvg
rootvg

In Example 5-11, we show the SVC configuration information that relates to our AIX host, specifically, the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this configuration.

Using the SVC CLI, we can check that the host WWPNs, which are listed in Example 5-4 on page 164, are logged in to the SVC for the host definition Kanaga, by entering:

svcinfo lshost Kanaga

We can also find the serial numbers of the VDisks by using the following command:

svcinfo lshostvdiskmap Kanaga

Example 5-11 SVC definitions for host system Kanaga
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Kanaga
id 2
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 10000000C932A7FB
node_logged_in_count 2
state active
WWPN 10000000C932A800
node_logged_in_count 2
state active

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Kanaga
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID

2 Kanaga 0 13 Kanaga0001 10000000C932A7FB 60050768018301BF2800000000000015
2 Kanaga 1 14 Kanaga0002 10000000C932A7FB 60050768018301BF2800000000000016
2 Kanaga 2 15 Kanaga0003 10000000C932A7FB 60050768018301BF2800000000000017
2 Kanaga 3 16 Kanaga0004 10000000C932A7FB 60050768018301BF2800000000000018

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0001
id 13
name Kanaga0001
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 5.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000015
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

<strong>IBM</strong>_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap Kanaga0001<br />

id name SCSI_id host_id host_name wwpn vdisk_UID<br />

13 Kanaga0001 0 2 Kanaga 10000000C932A7FB 60050768018301BF2800000000000015<br />

13 Kanaga0001 0 2 Kanaga 10000000C932A800 60050768018301BF2800000000000015<br />

We need to run cfgmgr on <strong>the</strong> AIX host to discover <strong>the</strong> new disks and enable us to start <strong>the</strong><br />

vpath configuration; if we run <strong>the</strong> config manager (cfgmgr) on each FC adapter, it will not<br />

create <strong>the</strong> vpaths, only <strong>the</strong> new hdisks. To configure <strong>the</strong> vpaths, we need to run <strong>the</strong><br />

cfallvpath command after issuing <strong>the</strong> cfgmgr command on each of <strong>the</strong> FC adapters:<br />

# cfgmgr -l fcs0<br />

# cfgmgr -l fcs1<br />

# cfallvpath<br />

Alternatively, use <strong>the</strong> cfgmgr -vS command to check <strong>the</strong> complete system. This command<br />

will probe <strong>the</strong> devices sequentially across all FC adapters and attached disks; however, it is<br />

extremely time intensive:<br />

# cfgmgr -vS<br />

The raw SVC disk configuration of <strong>the</strong> AIX host system now appears, as shown in<br />

Example 5-12. We can see <strong>the</strong> multiple hdisk devices, representing <strong>the</strong> multiple routes to <strong>the</strong><br />

same SVC LUN, and we can see <strong>the</strong> vpath devices available for configuration.<br />

Example 5-12 VDisks from SVC added with multiple separate paths for each VDisk
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 SAN Volume Controller Device
hdisk4 Available 1Z-08-02 SAN Volume Controller Device
hdisk5 Available 1Z-08-02 SAN Volume Controller Device
hdisk6 Available 1Z-08-02 SAN Volume Controller Device
hdisk7 Available 1D-08-02 SAN Volume Controller Device
hdisk8 Available 1D-08-02 SAN Volume Controller Device
hdisk9 Available 1D-08-02 SAN Volume Controller Device
hdisk10 Available 1D-08-02 SAN Volume Controller Device
hdisk11 Available 1Z-08-02 SAN Volume Controller Device
hdisk12 Available 1Z-08-02 SAN Volume Controller Device
hdisk13 Available 1Z-08-02 SAN Volume Controller Device
hdisk14 Available 1Z-08-02 SAN Volume Controller Device
hdisk15 Available 1D-08-02 SAN Volume Controller Device
hdisk16 Available 1D-08-02 SAN Volume Controller Device
hdisk17 Available 1D-08-02 SAN Volume Controller Device
hdisk18 Available 1D-08-02 SAN Volume Controller Device
vpath0 Available Data Path Optimizer Pseudo Device Driver
vpath1 Available Data Path Optimizer Pseudo Device Driver
vpath2 Available Data Path Optimizer Pseudo Device Driver
vpath3 Available Data Path Optimizer Pseudo Device Driver

To make a Volume Group (for example, itsoaixvg) to host the vpath1 device, we use the mkvg command, passing the vpath device as a parameter instead of the hdisk device, as shown in Example 5-13 on page 170.

Example 5-13 Running the mkvg command
#mkvg -y itsoaixvg vpath1
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg

5.5.7 Using SDD

Now, by running the lspv command, we can see that vpath1 has been assigned to the itsoaixvg Volume Group, as shown in Example 5-14.

Example 5-14 Showing the vpath assignment to the Volume Group
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
vpath1 0009cddabce27ba5 itsoaixvg active

The lsvpcfg command also displays the new relationship between vpath1 and the itsoaixvg Volume Group, and it shows each hdisk that is associated with vpath1, as shown in Example 5-15.

Example 5-15 Displaying the vpath to hdisk to Volume Group relationship
#lsvpcfg
vpath0 (Avail ) 60050768018301BF2800000000000015 = hdisk3 (Avail ) hdisk7 (Avail )
vpath1 (Avail pv itsoaixvg) 60050768018301BF2800000000000016 = hdisk4 (Avail ) hdisk8 (Avail )
vpath2 (Avail ) 60050768018301BF2800000000000017 = hdisk5 (Avail ) hdisk9 (Avail )
vpath3 (Avail ) 60050768018301BF2800000000000018 = hdisk6 (Avail ) hdisk10 (Avail )

In Example 5-16, running the lspv vpath1 command shows a more verbose output for vpath1.

Example 5-16 Verbose details of vpath1
#lspv vpath1
PHYSICAL VOLUME: vpath1 VOLUME GROUP: itsoaixvg
PV IDENTIFIER: 0009cddabce27ba5 VG IDENTIFIER 0009cdda00004c000000011abce27c89
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 8 megabyte(s) LOGICAL VOLUMES: 0
TOTAL PPs: 639 (5112 megabytes) VG DESCRIPTORS: 2
FREE PPs: 639 (5112 megabytes) HOT SPARE: no
USED PPs: 0 (0 megabytes) MAX REQUEST: 256 kilobytes
FREE DISTRIBUTION: 128..128..127..128..128
USED DISTRIBUTION: 00..00..00..00..00

Within SDD, we are able to check the status of the adapters and devices now under SDD control with the use of the datapath command set. In Example 5-17 on page 171, we can see the status of both HBA cards as NORMAL and ACTIVE.

Example 5-17 SDD commands used to check the availability of the adapters
#datapath query adapter
Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 fscsi0 NORMAL ACTIVE 0 0 4 1
1 fscsi1 NORMAL ACTIVE 56 0 4 1

In Example 5-18, we see detailed information about each vpath device. Initially, we see that vpath1 is the only vpath device in an open status. It is open, because it is the only vpath that is currently assigned to a Volume Group. Additionally, for vpath1, we see that only path 1 and path 2 have been selected (used) by SDD. These paths are the two physical paths that connect to the preferred node of the I/O Group of this SVC cluster. The remaining two paths within this vpath device are only accessed in a failover scenario.

Example 5-18 SDD commands that are used to check the availability of the devices
#datapath query device
Total Devices : 4

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018301BF2800000000000015
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk3 CLOSE NORMAL 0 0
1 fscsi1/hdisk7 CLOSE NORMAL 0 0
2 fscsi0/hdisk11 CLOSE NORMAL 0 0
3 fscsi1/hdisk15 CLOSE NORMAL 0 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018301BF2800000000000016
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk4 OPEN NORMAL 0 0
1 fscsi1/hdisk8 OPEN NORMAL 28 0
2 fscsi0/hdisk12 OPEN NORMAL 32 0
3 fscsi1/hdisk16 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: vpath2 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018301BF2800000000000017
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk5 CLOSE NORMAL 0 0
1 fscsi1/hdisk9 CLOSE NORMAL 0 0
2 fscsi0/hdisk13 CLOSE NORMAL 0 0
3 fscsi1/hdisk17 CLOSE NORMAL 0 0

DEV#: 3 DEVICE NAME: vpath3 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018301BF2800000000000018
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 fscsi0/hdisk6 CLOSE NORMAL 0 0
1 fscsi1/hdisk10 CLOSE NORMAL 0 0
2 fscsi0/hdisk14 CLOSE NORMAL 0 0
3 fscsi1/hdisk18 CLOSE NORMAL 0 0

5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD

The itsoaixvg Volume Group is created using vpath1. A logical volume is created using the Volume Group. Then, the teslv1 file system is created and mounted on the /teslv1 mount point, as shown in Example 5-19.

Example 5-19 Host system new Volume Group and file system configuration
#lsvg -o
itsoaixvg
rootvg
#lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv01 jfs2log 1 1 1 open/syncd N/A
fslv00 jfs2 128 128 1 open/syncd /teslv1
fslv01 jfs2 128 128 1 open/syncd /teslv2
#df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 0.03 0.01 62% 1357 31% /
/dev/hd2 9.06 4.32 53% 17341 2% /usr
/dev/hd9var 0.03 0.03 10% 137 3% /var
/dev/hd3 0.12 0.12 7% 31 1% /tmp
/dev/hd1 0.03 0.03 2% 11 1% /home
/proc - - - - - /proc
/dev/hd10opt 0.09 0.01 86% 1947 38% /opt
/dev/lv00 0.41 0.39 4% 19 1% /usr/sys/inst.images
/dev/fslv00 2.00 2.00 1% 4 1% /teslv1
/dev/fslv01 2.00 2.00 1% 4 1% /teslv2

5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM

Before adding a new volume from the SVC, the AIX host system Atlantic had a simple, typical configuration, as shown in Example 5-20.

Example 5-20 Status of AIX host system Atlantic
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
# lsvg
rootvg

In Example 5-22 on page 174, we show the SVC configuration information relating to our AIX host, specifically the host definition, the VDisks that were created for this host, and the VDisk-to-host mappings for this configuration.

Our example host is named Atlantic. Example 5-21 shows the HBA information for our example host.

Example 5-21 Example of HBA information for the host Atlantic
# lsdev -Cc adapter | grep fcs
fcs1 Available 1H-08 FC Adapter
fcs2 Available 1D-08 FC Adapter
# lscfg -vpl fcs1
fcs1 U0.1-P2-I4/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A644
Manufacturer................001E
Customer Card ID Number.....2765
FRU Number.................. 00P4495
Network Address.............10000000C932A865
ROS Level and ID............02C039D0
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401411
Device Specific.(Z5)........02C039D0
Device Specific.(Z6)........064339D0
Device Specific.(Z7)........074339D0
Device Specific.(Z8)........20000000C932A865
Device Specific.(Z9)........CS3.93A0
Device Specific.(ZA)........C1D3.93A0
Device Specific.(ZB)........C2D3.93A0
Device Specific.(ZC)........00000000
Hardware Location Code......U0.1-P2-I4/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1

# lscfg -vpl fcs2
fcs2 U0.1-P2-I5/Q1 FC Adapter
Part Number.................80P4383
EC Level....................A
Serial Number...............1F5350CD42
Manufacturer................001F
Customer Card ID Number.....2765
FRU Number.................. 80P4384
Network Address.............10000000C94C8C1C
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C94C8C1C
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(ZC)........00000000
Hardware Location Code......U0.1-P2-I5/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1

Using the SVC CLI, we can check that the host WWPNs, as listed in Example 5-21, are logged in to the SVC for the host definition Atlantic, by entering this command:

svcinfo lshost Atlantic

We can also discover the serial numbers of the VDisks by using the following command:

svcinfo lshostvdiskmap Atlantic

Example 5-22 SVC definitions for host system Atlantic
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
8 Atlantic 0 14 Atlantic0001 10000000C94C8C1C 6005076801A180E90800000000000060
8 Atlantic 1 22 Atlantic0002 10000000C94C8C1C 6005076801A180E90800000000000061
8 Atlantic 2 23 Atlantic0003 10000000C94C8C1C 6005076801A180E90800000000000062
IBM_2145:ITSO-CLS2:admin>

We need to run the cfgmgr command on the AIX host to discover the new disks and to enable us to use the disks:

# cfgmgr -l fcs1
# cfgmgr -l fcs2

Alternatively, use the cfgmgr -vS command to check the complete system. This command probes the devices sequentially across all FC adapters and attached disks; however, it is extremely time-intensive:

# cfgmgr -vS

The raw SVC disk configuration of the AIX host system now appears, as shown in Example 5-23. We can see the multiple MPIO FC 2145 devices, each representing an SVC LUN.

Example 5-23 VDisks from SVC added with multiple paths for each VDisk
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 MPIO FC 2145
hdisk4 Available 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145

To make Volume Groups (for example, itsoaixvg) to host the LUNs, we use the mkvg command, passing the device as a parameter. This action is shown in Example 5-24.

Example 5-24 Running the mkvg command
# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

Now, by running the lspv command, we can see the disks and the assigned Volume Groups, as shown in Example 5-25.

Example 5-25 Showing the hdisk assignment to the Volume Groups
# lspv
hdisk0 0009cdcaeb48d3a3 rootvg active
hdisk1 0009cdcac26dbb7c rootvg active
hdisk2 0009cdcab5657239 rootvg active
hdisk3 0009cdca28b589f5 itsoaixvg active
hdisk4 0009cdca28b87866 itsoaixvg1 active
hdisk5 0009cdca28b8ad5b itsoaixvg2 active

In Example 5-26 on page 176, we show that running the lspv hdisk3 command gives a more verbose output for one of the SVC LUNs.

Example 5-26 Verbose details of hdisk3
# lspv hdisk3
PHYSICAL VOLUME: hdisk3 VOLUME GROUP: itsoaixvg
PV IDENTIFIER: 0009cdca28b589f5 VG IDENTIFIER 0009cdca00004c000000011b28b58ae2
PV STATE: active
STALE PARTITIONS: 0 ALLOCATABLE: yes
PP SIZE: 8 megabyte(s) LOGICAL VOLUMES: 0
TOTAL PPs: 511 (4088 megabytes) VG DESCRIPTORS: 2
FREE PPs: 511 (4088 megabytes) HOT SPARE: no
USED PPs: 0 (0 megabytes) MAX REQUEST: 256 kilobytes
FREE DISTRIBUTION: 103..102..102..102..102
USED DISTRIBUTION: 00..00..00..00..00

5.5.10 Using SDDPCM

Within SDDPCM, we are able to check the status of the adapters and devices that are now under SDDPCM control with the use of the pcmpath command set. In Example 5-27, we can see the status and mode of both HBA cards as NORMAL and ACTIVE.

Example 5-27 SDDPCM commands that are used to check the availability of the adapters
# pcmpath query adapter
Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 fscsi1 NORMAL ACTIVE 407 0 6 6
1 fscsi2 NORMAL ACTIVE 425 0 6 6

In Example 5-28, we see detailed information about each MPIO device. The asterisk (*) next to a path number marks the non-preferred paths; the paths without an asterisk are the two physical paths that connect to the preferred node of the I/O Group of this SVC cluster, and they carry most of the select counts. The non-preferred paths within an MPIO device are mainly accessed in a failover scenario.

Example 5-28 SDDPCM commands that are used to check the availability of the devices
# pcmpath query device
Total Devices : 3

DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 152 0
1* fscsi1/path1 OPEN NORMAL 48 0
2* fscsi2/path2 OPEN NORMAL 48 0
3 fscsi2/path3 OPEN NORMAL 160 0

DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0* fscsi1/path0 OPEN NORMAL 37 0
1 fscsi1/path1 OPEN NORMAL 66 0
2 fscsi2/path2 OPEN NORMAL 71 0
3* fscsi2/path3 OPEN NORMAL 38 0

DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi1/path0 OPEN NORMAL 66 0
1* fscsi1/path1 OPEN NORMAL 38 0
2* fscsi2/path2 OPEN NORMAL 38 0
3 fscsi2/path3 OPEN NORMAL 70 0

5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM

The itsoaixvg Volume Group is created using hdisk3. A logical volume and a JFS2 file system with the /itsoaixvg mount point are then created in the Volume Group by using the crfs command, as shown in Example 5-29.

Example 5-29 Host system new Volume Group and file system configuration
# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv00 jfs2log 1 1 1 closed/syncd N/A
fslv00 jfs2 384 384 1 closed/syncd /itsoaixvg
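The lsvg output shows the new file system in the closed/syncd state. Because crfs adds the matching /etc/filesystems entry, the file system can then be brought into use with the standard AIX mount command:

# mount /itsoaixvg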

5.5.12 Expanding an AIX volume

It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Certain operating systems, such as AIX 5L Version 5.2 and later versions, can handle volumes being expanded, even if the host has applications running. In the following examples, we show the procedure with AIX 5L V5.3 and SDD, but the procedure is also the same when using AIX V6.1 and SDDPCM. The Volume Group to which the VDisk is assigned, if it is assigned to any Volume Group, must not be a concurrent accessible Volume Group. A VDisk that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror relationship on that VDisk has to be stopped before it is possible to expand the VDisk.

The following steps show how to expand a volume on an AIX host, where the volume is a VDisk from the SVC:

1. To list a VDisk's size, use the svcinfo lsvdisk command. Example 5-30 shows the Kanaga0002 VDisk that we have allocated to our AIX server before we expand it. Here, the capacity is 5 GB, and the vdisk_UID is 60050768018301BF2800000000000016.

Example 5-30 Expanding a VDisk on AIX
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 5.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0
preferred_node_id 2
fast_write_state not_empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

2. To identify the vpath with which this VDisk is associated on the AIX host, we use the datapath query device SDD command, as shown in Example 5-18 on page 171. There, we can see that the VDisk with vdisk_UID 60050768018301BF2800000000000016 is associated with vpath1, because the vdisk_UID matches the SERIAL field on the AIX host.

3. To see the size of the volume on the AIX host, we use the lspv command, as shown in Example 5-31. This command shows that the volume size is 5,112 MB, which corresponds to the 5 GB capacity that is shown in Example 5-30 on page 178.

Example 5-31 Finding the size of the volume in AIX
#lspv vpath1
PHYSICAL VOLUME:   vpath1                  VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:     0009cddabce27ba5        VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:          active
STALE PARTITIONS:  0                       ALLOCATABLE:      yes
PP SIZE:           8 megabyte(s)           LOGICAL VOLUMES:  2
TOTAL PPs:         639 (5112 megabytes)    VG DESCRIPTORS:   2
FREE PPs:          0 (0 megabytes)         HOT SPARE:        no
USED PPs:          639 (5112 megabytes)    MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION: 00..00..00..00..00
USED DISTRIBUTION: 128..128..127..128..128

4. To expand the volume on the SVC, we use the svctask expandvdisksize command to increase the capacity of the VDisk. In Example 5-32, we expand the VDisk by 1 GB.

Example 5-32 Expanding a VDisk
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 1 -unit gb Kanaga0002

5. To check that the VDisk has been expanded, use the svcinfo lsvdisk command. Here, we can see that the Kanaga0002 VDisk has been expanded to a capacity of 6 GB (Example 5-33).

Example 5-33 Verifying that the VDisk has been expanded
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 6.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 6.00GB
real_capacity 6.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

6. AIX has not yet recognized a change in the capacity of the vpath1 volume, because no dynamic mechanism exists within the operating system to communicate a configuration update. Therefore, to make AIX recognize the extra capacity on the volume without stopping any applications, we use the chvg -g itsoaixvg command, where itsoaixvg is the name of the Volume Group to which vpath1 belongs.

If AIX does not return any messages, the command was successful, and the volume changes in this Volume Group have been saved. If AIX cannot see any changes in the volumes, it returns an explanatory message.

7. To verify that the size of vpath1 has changed, we use the lspv command again, as shown in Example 5-34.

Example 5-34 Verify that AIX can see the newly expanded VDisk
#lspv vpath1
PHYSICAL VOLUME:   vpath1                  VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:     0009cddabce27ba5        VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:          active
STALE PARTITIONS:  0                       ALLOCATABLE:      yes
PP SIZE:           8 megabyte(s)           LOGICAL VOLUMES:  2
TOTAL PPs:         767 (6136 megabytes)    VG DESCRIPTORS:   2
FREE PPs:          128 (1024 megabytes)    HOT SPARE:        no
USED PPs:          639 (5112 megabytes)    MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION: 00..00..00..00..128
USED DISTRIBUTION: 154..153..153..153..26

Here, we can see that the volume now has a size of 6,136 MB, which corresponds to the 6 GB capacity. Now, we can expand the file systems in this Volume Group to use the new capacity.
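For example, to grow the /itsoaixvg JFS2 file system from Example 5-31 into the new space while it remains mounted, commands of the following form can be used. This is a minimal sketch, assuming AIX 5L V5.3 or later (on earlier levels, specify the size in 512-byte blocks):

chfs -a size=+1G /itsoaixvg     # grow the file system by 1 GB online
df -g /itsoaixvg                # verify the new file system size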

5.5.13 Removing an SVC volume on AIX

Before we remove a VDisk that is assigned to an AIX host, we have to make sure that there is no data on it that we want to preserve and that no applications depend on the volume. This procedure is a standard AIX procedure: we move all data off the volume, remove the volume from the Volume Group, and delete the vpath and the hdisks that are associated with the vpath. Next, we remove the vdiskhostmap on the SVC for that volume. If the VDisk is no longer needed, we then delete it so that its extents become available when we create a new VDisk on the SVC.
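As an illustration of this sequence with the Kanaga0002 VDisk from the previous section, the commands might look as follows. This is a sketch only: the hdisk numbers depend on your configuration, and the host object name ITSO_AIX is a hypothetical placeholder for the name of your host object on the SVC:

# On the AIX host: stop using the volume, then delete the devices
umount /itsoaixvg
reducevg itsoaixvg vpath1
rmdev -dl vpath1
rmdev -dl hdisk3
rmdev -dl hdisk4

# On the SVC: remove the mapping and, if the VDisk is no longer needed,
# delete it so that its extents become available again
svctask rmvdiskhostmap -host ITSO_AIX Kanaga0002
svctask rmvdisk Kanaga0002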

5.5.14 Running SVC commands from an AIX host system

To issue CLI commands, you must install and prepare the SSH client on the AIX host system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for Power Systems. For AIX V4.3.3, the software is available from the AIX toolbox for Linux applications.

The AIX installation images from IBM developerWorks® are available at this Web site:
http://sourceforge.net/projects/openssh-aix

Perform the following steps:

1. To generate the key files on AIX, issue the following command:

ssh-keygen -t rsa -f filename

The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. The value for rsa2 is simply rsa; for rsa1, the type must be rsa1. When creating the key for the SVC, use type rsa2. The -f parameter specifies the file names of the private and public keys on the AIX server (the public key gets the extension .pub appended to the file name).

2. Next, you have to install the public key on the SVC, which can be done by using the Master Console. Copy the public key to the Master Console, and install the key to the SVC, as described in Chapter 4, "SAN Volume Controller initial configuration" on page 103.

3. On the AIX server, make sure that the private key and the public key are in the .ssh directory in the home directory of the user.

4. To connect to the SVC and use a CLI session from the AIX host, issue the following command:

ssh -l admin -i filename svc

5. You can also issue the commands directly on the AIX host, which is useful when writing scripts. To do this, add the SVC commands to the previous command. For example, to list the hosts that are defined on the SVC, enter the following command:

ssh -l admin -i filename svc svcinfo lshost

In this command, -l admin is the user on the SVC to which we connect, -i filename is the file name of the private key that was generated, and svc is the name or IP address of the SVC.
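To make such scripts easier to read, you can wrap the ssh invocation in a small helper. This is a minimal sketch, assuming the private key is stored in /home/admin/.ssh/svckey and that the SVC cluster resolves as svc; substitute your own file names and addresses:

#!/usr/bin/ksh
# svccmd: run an SVC CLI command over SSH, for example: svccmd svcinfo lshost
SVC_USER=admin                      # SVC user to connect as
SVC_KEY=/home/admin/.ssh/svckey     # private key generated with ssh-keygen
SVC_HOST=svc                        # name or IP address of the SVC cluster

ssh -l $SVC_USER -i $SVC_KEY $SVC_HOST "$@"

For example, svccmd svcinfo lsvdisk -delim : then lists all VDisks in colon-delimited form.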


5.6 Windows-specific information

In the following sections, we detail specific information about the connection of Windows-based hosts (Windows Server 2000, Windows 2003 Server, and Windows Server 2008) to the SVC environment.

5.6.1 Configuring Windows Server 2000, Windows 2003 Server, and Windows Server 2008 hosts

This section provides an overview of the requirements for attaching the SVC to a host running Windows Server 2000, Windows 2003 Server, or Windows Server 2008.

Before you attach the SVC to your host, make sure that all of the following requirements are fulfilled:

► For the Windows Server 2003 x64 Edition operating system, you must install the Hotfix from KB 908980. If you do not install it before operation, preferred pathing is not available. You can find the Hotfix at this Web site:
http://support.microsoft.com/kb/908980

► Check the LUN limitations for your host system. Ensure that there are enough FC adapters installed in the server to handle the total number of LUNs that you want to attach.

5.6.2 Configuring Windows

To configure the Windows hosts, follow these steps:

1. Make sure that the latest OS Hotfixes are applied to your Microsoft server.
2. Use the latest firmware and driver levels on your host system.
3. Install the HBA or HBAs on the Windows server, as shown in 5.6.4, "Host adapter installation and configuration" on page 183.
4. Connect the Windows 2000/2003/2008 server FC host adapters to the switches.
5. Configure the switches (zoning).
6. Install the FC host adapter driver, as described in 5.6.3, "Hardware lists, device driver, HBAs, and firmware levels" on page 183.
7. Configure the HBA for hosts running Windows, as described in 5.6.4, "Host adapter installation and configuration" on page 183.
8. Check the HBA driver readme file for the required Windows registry settings, as described in 5.6.3, "Hardware lists, device driver, HBAs, and firmware levels" on page 183.
9. Check the disk timeout on Microsoft Windows Server, as described in 5.6.5, "Changing the disk timeout on Microsoft Windows Server" on page 185.
10. Install and configure SDD/Subsystem Device Driver Device Specific Module (SDDDSM).
11. Restart the Windows 2000/2003/2008 host system.
12. Configure the host, VDisks, and host mapping in the SVC (a CLI sketch of the host definition follows this list).
13. Use the Rescan Disks function in Computer Management of the Windows server to discover the VDisks that were created on the SAN Volume Controller.
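As an illustration of the host definition part of step 12, the following is a minimal CLI sketch that uses the WWPNs of the Senegal host from the examples later in this chapter; substitute your own host name and WWPNs:

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Senegal -hbawwpn 210000E08B89B9C0:210000E08B89CCC2

The creation and mapping of the VDisks themselves are sketched in 5.8.3, "Attaching SVC VDisks to Windows Server 2008" on page 205.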



5.6.3 Hardware lists, device driver, HBAs, and firmware levels

The latest information about supported hardware, device drivers, and firmware is available at this Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_Windows

At this Web site, you will also find the hardware list of supported HBAs and the driver levels for Windows. Check the supported firmware and driver level for your HBA, and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA. In most manufacturers' driver readme files, you will find instructions for the Windows registry parameters that have to be set for the HBA driver:

► For the Emulex HBA driver, SDD requires the port driver, not the miniport driver.
► For the QLogic HBA driver, SDDDSM requires the StorPort version of the miniport driver.
► For the QLogic HBA driver, SDD requires the SCSI port version of the miniport driver.

5.6.4 Host adapter installation and configuration

Install the host adapters in your system. Refer to the manufacturer's instructions for the installation and configuration of the HBAs.

In IBM System x servers, the HBAs must always be installed in the first slots. If you install, for example, two HBAs and two network cards, the HBAs must be installed in slot 1 and slot 2, and the network cards can be installed in the remaining slots.

Configuring the QLogic HBA for hosts running Windows
After you have installed the HBA in the server and have applied the HBA firmware and device driver, you have to configure the HBA. Perform the following steps:

1. Restart the server.
2. When you see the QLogic banner, press the Ctrl+Q keys to open the FAST!UTIL menu panel.
3. From the Select Host Adapter menu, select the Adapter Type QLA2xxx.
4. From the Fast!UTIL Options menu, select Configuration Settings.
5. From the Configuration Settings menu, click Host Adapter Settings.
6. From the Host Adapter Settings menu, select the following values:
   a. Host Adapter BIOS: Disabled
   b. Frame size: 2048
   c. Loop Reset Delay: 5 (minimum)
   d. Adapter Hard Loop ID: Disabled
   e. Hard Loop ID: 0
   f. Spinup Delay: Disabled
   g. Connection Options: 1 - point to point only
   h. Fibre Channel Tape Support: Disabled
   i. Data Rate: 2
7. Press the Esc key to return to the Configuration Settings menu.
8. From the Configuration Settings menu, select Advanced Adapter Settings.
9. From the Advanced Adapter Settings menu, set the following parameters:
   a. Execution throttle: 100
   b. Luns per Target: 0
   c. Enable LIP Reset: No
   d. Enable LIP Full Login: Yes
   e. Enable Target Reset: No

   Note: If you are using a subsystem device driver (SDD) level lower than 1.6, set Enable Target Reset to Yes.

   f. Login Retry Count: 30
   g. Port Down Retry Count: 15
   h. Link Down Timeout: 30
   i. Extended error logging: Disabled (might be enabled for debugging)
   j. RIO Operation Mode: 0
   k. Interrupt Delay Timer: 0
10. Press Esc to return to the Configuration Settings menu.
11. Press Esc.
12. In the Configuration settings modified window, select Save changes.
13. From the Fast!UTIL Options menu, select Select Host Adapter if more than one QLogic adapter is installed in your system.
14. Select the other host adapter, and repeat steps 4 to 12.
15. Repeat this process for all of the QLogic adapters that are installed in your system. When you are done, press Esc to exit the QLogic BIOS and restart the server.

Configuring the Emulex HBA for hosts running Windows
After you have installed the Emulex HBA and driver, you must configure your HBA.

For the Emulex HBA StorPort driver, accept the default settings and set the topology to 1 (1 = F Port Fabric). For the Emulex HBA FC Port driver, use the default settings and change the parameters to the parameters that are provided in Table 5-1.

Table 5-1 FC port driver changes

Parameter                                                  Recommended setting
Query name server for all N-ports (BrokenRSCN)             Enabled
LUN mapping (MapLuns)                                      Enabled (1)
Automatic LUN mapping (MapLuns)                            Enabled (1)
Allow multiple paths to SCSI target (MultipleSCSIClaims)   Enabled
Scan in device ID order (ScanDeviceIDOrder)                Disabled
Translate queue full to busy (TranslateQueueFull)          Enabled
Retry timer (RetryTimer)                                   2000 milliseconds
Maximum number of LUNs (MaximumLun)                        Equal to or greater than the number of
                                                           SVC LUNs that are available to the HBA



Note: The parameters that are shown in Table 5-1 correspond to the parameters in HBAnyware.

5.6.5 Changing the disk timeout on Microsoft Windows Server

This section describes how to change the disk I/O timeout value on the Windows Server 2000, Windows 2003 Server, and Windows Server 2008 operating systems.

On your Windows server hosts, change the disk I/O timeout value to 60 in the Windows registry:

1. In Windows, click Start, and select Run.
2. In the dialog text box, type regedit and press Enter.
3. In the registry browsing tool, locate the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the value to 60, as shown in Figure 5-6.

Figure 5-6 Regedit
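Alternatively, you can make the same change from a command prompt with the reg utility (built in on Windows 2003 Server and Windows Server 2008; part of the Support Tools on Windows 2000). This is a sketch of the equivalent commands; restart the server afterward to be certain that the new value takes effect:

reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f
reg query HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue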

5.6.6 Installing the SDD driver on Windows

At the time of writing, the SDD levels in Table 5-2 are supported.

Table 5-2 Currently supported SDD levels

Windows operating system                                   SDD level
NT 4                                                       1.5.1.1
Windows 2000 Server and Windows 2003 Server SP2
(32-bit)/Windows 2003 Server SP2 (IA-64)                   1.6.3.0-2
Windows 2000 Server with Microsoft Cluster Server
(MSCS) and Veritas Volume Manager/Windows 2003
Server SP2 (32-bit) with MSCS and Veritas Volume
Manager                                                    Not available

See the following Web site for the latest information about SDD for Windows:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en



SDD: We recommend that you use SDD only on existing systems where you do not want to change from SDD to SDDDSM. New operating systems will only be supported with SDDDSM.

Before installing the SDD driver, the HBA driver has to be installed on your system. SDD requires the HBA SCSI port driver.

After downloading the appropriate version of SDD from the Web site, extract the file and run setup.exe to install SDD. A command-line window appears. Answer Y (Figure 5-7) to install the driver.

Figure 5-7 Confirm SDD installation

After the setup has completed, answer Y again to reboot your system (Figure 5-8).

Figure 5-8 Reboot system after installation

To check whether your SDD installation is complete, open the Windows Device Manager, expand SCSI and RAID Controllers, right-click Subsystem Device Driver Management, and click Properties (see Figure 5-9 on page 187).

Figure 5-9 Subsystem Device Driver Management

The Subsystem Device Driver Management Properties window opens. Select the Driver tab, and make sure that you have installed the correct driver version (see Figure 5-10).

Figure 5-10 Subsystem Device Driver Management Properties Driver tab

5.6.7 Installing the SDDDSM driver on Windows

The following sections show how to install the SDDDSM driver on Windows.

Windows 2003 Server, Windows Server 2008, and MPIO
Microsoft Multipath I/O (MPIO) solutions are designed to work in conjunction with device-specific modules (DSMs) written by vendors, but the MPIO driver package does not, by itself, form a complete solution. This joint solution allows storage vendors to design device-specific solutions that are tightly integrated with the Windows operating system. MPIO is not shipped with the Windows operating system; storage vendors must package the MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM (SDDDSM) is the IBM multipath I/O solution that is based on Microsoft MPIO technology; it is a device-specific module specifically designed to support IBM storage devices on Windows 2003 Server and Windows Server 2008 servers.

The intention of MPIO is better integration of a multipath storage solution with the operating system, and it allows the use of multiple paths in the SAN infrastructure during the boot process for SAN boot hosts.

Subsystem Device Driver Device Specific Module (SDDDSM) for SVC
Subsystem Device Driver Device Specific Module (SDDDSM) installation is a package for the SVC device for the Windows 2003 Server and Windows Server 2008 operating systems.

SDDDSM is the IBM multipath I/O solution that is based on Microsoft MPIO technology, and it is a device-specific module that is specifically designed to support IBM storage devices. Together with MPIO, it is designed to support the multipath configuration environments in the IBM System Storage SAN Volume Controller. It resides in a host system with the native disk device driver and provides the following functions:

► Enhanced data availability
► Dynamic I/O load-balancing across multiple paths
► Automatic path failover protection
► Concurrent download of licensed internal code
► Path-selection policies for the host system

Note that there is no SDDDSM support for Windows Server 2000, and that for the HBA driver, SDDDSM requires the StorPort version of the HBA miniport driver.

Table 5-3 shows the SDDDSM driver levels that are supported at the time of writing.

Table 5-3 Currently supported SDDDSM driver levels

Windows operating system                                   SDDDSM level
Windows 2003 Server SP2 (32-bit)/Windows 2003 Server
SP2 (x64)                                                  2.2.0.0-11
Windows Server 2008 (32-bit)/Windows Server 2008 (x64)     2.2.0.0-11

To check which levels are available, go to this Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en#WindowsSDDDSM

To download SDDDSM, go to this Web site:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000350&loc=en_US&cs=utf-8&lang=en

The installation procedures for SDDDSM and SDD are the same, but remember that you have to use the StorPort HBA driver instead of the SCSI port driver. We describe the SDD installation in 5.6.6, "Installing the SDD driver on Windows" on page 185. After completing the installation, you will see the Microsoft MPIO entries in Device Manager (Figure 5-11 on page 190).

Figure 5-11 Windows Device Manager: MPIO

We describe the SDDDSM installation for Windows Server 2008 in 5.8, "Example configuration of attaching an SVC to a Windows Server 2008 host" on page 200.

5.7 Discovering assigned VDisks in Windows Server 2000 and Windows 2003 Server

In this section, we describe how to discover assigned VDisks in Windows Server 2000 and Windows 2003 Server. The screen captures show a Windows 2003 Server host with SDDDSM installed; discovering the disks in Windows Server 2000 or with SDD is the same procedure.

Before adding a new volume from the SVC, the Windows 2003 Server host system had the configuration that is shown in Figure 5-12 on page 191, with only local disks.

Figure 5-12 Windows 2003 Server host system before adding a new volume from SVC

We can check that the WWPNs are logged in to the SVC for the host named Senegal by entering the following command (Example 5-35):

svcinfo lshost Senegal

Example 5-35 Host information for Senegal
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Senegal
id 1
name Senegal
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89B9C0
node_logged_in_count 2
state active
WWPN 210000E08B89CCC2
node_logged_in_count 2
state active

The configuration of the Senegal host, the Senegal_bas0001 VDisk, and the mapping between the host and the VDisk are defined in the SVC, as described in Example 5-36. In our example, the Senegal_bas0002 and Senegal_bas0003 VDisks have the same configuration as the Senegal_bas0001 VDisk.

Example 5-36 VDisk mapping: Senegal
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 10.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

We can also obtain the serial number of the VDisks by entering the following command (Example 5-37):

svcinfo lsvdiskhostmap Senegal_bas0001

Example 5-37 VDisk serial number: Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdiskhostmap Senegal_bas0001
id name            SCSI_id host_id host_name wwpn             vdisk_UID
7  Senegal_bas0001 0       1       Senegal   210000E08B89B9C0 6005076801A180E9080000000000000F
7  Senegal_bas0001 0       1       Senegal   210000E08B89CCC2 6005076801A180E9080000000000000F

After installing the necessary drivers and after the rescan disks operation completes, the new disks are found in the Computer Management window, as shown in Figure 5-13.

Figure 5-13 Windows 2003 Server host system with three new volumes from SVC

In Windows Device Manager, the disks are shown as IBM 2145 SCSI Disk Device (Figure 5-14 on page 194). The number of IBM 2145 SCSI Disk Devices that you see is equal to:

(number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs)

The IBM 2145 Multi-Path Disk Devices are the devices that are created by the multipath driver (Figure 5-14 on page 194). The number of these devices is equal to the number of VDisks that are presented to the host.

Figure 5-14 Windows 2003 Server Device Manager with assigned VDisks

When following the SAN zoning recommendation, this calculation gives us, for one VDisk and a host with two HBAs:

(number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths

You can check whether all of the paths are available by selecting Start → All Programs → Subsystem Device Driver DSM → Subsystem Device Driver DSM. The SDDDSM command-line interface appears. Enter the following command to see which paths are available to your system (Example 5-38).

Example 5-38 Datapath query device
Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#    Adapter/Hard Disk             State    Mode      Select   Errors
   0     Scsi Port2 Bus0/Disk1 Part0   OPEN     NORMAL        47        0
   1     Scsi Port2 Bus0/Disk1 Part0   OPEN     NORMAL         0        0
   2     Scsi Port3 Bus0/Disk1 Part0   OPEN     NORMAL         0        0
   3     Scsi Port3 Bus0/Disk1 Part0   OPEN     NORMAL        28        0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk             State    Mode      Select   Errors
   0     Scsi Port2 Bus0/Disk2 Part0   OPEN     NORMAL         0        0
   1     Scsi Port2 Bus0/Disk2 Part0   OPEN     NORMAL       162        0
   2     Scsi Port3 Bus0/Disk2 Part0   OPEN     NORMAL       155        0
   3     Scsi Port3 Bus0/Disk2 Part0   OPEN     NORMAL         0        0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk             State    Mode      Select   Errors
   0     Scsi Port2 Bus0/Disk3 Part0   OPEN     NORMAL        51        0
   1     Scsi Port2 Bus0/Disk3 Part0   OPEN     NORMAL         0        0
   2     Scsi Port3 Bus0/Disk3 Part0   OPEN     NORMAL         0        0
   3     Scsi Port3 Bus0/Disk3 Part0   OPEN     NORMAL        25        0

C:\Program Files\IBM\SDDDSM>

Note: All path states must be OPEN. The path state can be OPEN or CLOSE. If a path state is CLOSE, the system is missing a path that it saw during startup. If you restart your system, the CLOSE paths are removed from this view.

5.7.1 Extending a Windows Server 2000 or Windows 2003 Server volume

It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Certain operating systems, such as Windows Server 2000 and Windows 2003 Server, can handle a volume being expanded even if the host has applications running. A VDisk that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror relationship on that VDisk has to be stopped before it is possible to expand the VDisk.

Important:

► For VDisk expansion to work on Windows Server 2000, apply Windows Server 2000 Hotfix Q327020, which is available from the Microsoft Knowledge Base at this Web site:
http://support.microsoft.com/kb/327020

► If you want to expand a logical drive in an extended partition in Windows 2003 Server, apply the Hotfix from KB 841650, which is available from the Microsoft Knowledge Base at this Web site:
http://support.microsoft.com/kb/841650/en-us

► Use the updated DiskPart version for Windows 2003 Server, which is available from the Microsoft Knowledge Base at this Web site:
http://support.microsoft.com/kb/923076/en-us

If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut down all nodes except one, and that applications in the resource that use the volume that is going to be expanded are stopped before expanding the volume. Applications running in other resources can continue. After expanding the volume, start the application and the resource, and then restart the other nodes in the MSCS.

To expand a volume in use on Windows Server 2000 and Windows 2003 Server, we used DiskPart. The DiskPart tool is part of Windows 2003 Server; for other Windows versions, you can download it free of charge from Microsoft. DiskPart is a tool that was developed by Microsoft to ease the administration of storage. It is a command-line interface with which you can manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select them, and, after selecting them, get more detailed information, create partitions, extend volumes, and more. For more information, see the Microsoft Web sites:

http://www.microsoft.com

http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech

An example of how to expand a volume on a Windows 2003 Server host, where the volume is a VDisk from the SVC, is shown in the following discussion.

To list the VDisk size, use the svcinfo lsvdisk command. This command gives this information for Senegal_bas0001 before expanding the VDisk (Example 5-36 on page 191). Here, we can see that the capacity is 10 GB, and also what the vdisk_UID is. To find which vpath this VDisk is on the Windows 2003 Server host, we use the datapath query device SDD command on the Windows host. We can see that the serial number 6005076801A180E9080000000000000F of Disk1 on the Windows host matches the vdisk_UID of Senegal_bas0001 (Example 5-36 on page 191). To see the size of the volume on the Windows host, we use Disk Management, as shown in Figure 5-15.

Figure 5-15 Windows 2003 Server: Disk Management

This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use the svctask expandvdisksize command to increase the capacity of the VDisk. In this example, we expand the VDisk by 1 GB (Example 5-39).

Example 5-39 svctask expandvdisksize command
IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the VDisk has been expanded, we use the svcinfo lsvdisk command. In Example 5-39, we can see that the Senegal_bas0001 VDisk has been expanded to 11 GB in capacity.


After performing a Disk Rescan in Windows, you will see the new unallocated space in Windows Disk Management, as shown in Figure 5-16.

Figure 5-16 Expanded volume in Disk Manager

This window shows that Disk1 now has 1 GB of new, unallocated capacity. To make this capacity available for the file system, use the following commands, as shown in Example 5-40:

diskpart        Starts DiskPart in a DOS prompt
list volume     Shows you all available volumes
select volume   Selects the volume to expand
detail volume   Displays details for the selected volume, including the unallocated capacity
extend          Extends the volume to the available unallocated space

Example 5-40 Using DiskPart
C:\>diskpart

Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

  Volume ###  Ltr  Label        Fs     Type       Size     Status     Info
  ----------  ---  -----------  -----  ---------  -------  ---------  --------
  Volume 0    C                 NTFS   Partition  75 GB    Healthy    System
  Volume 1    S    SVC_Senegal  NTFS   Partition  10 GB    Healthy
  Volume 2    D                        DVD-ROM    0 B      Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume

  Disk ###  Status      Size     Free     Dyn  Gpt
  --------  ----------  -------  -------  ---  ---
* Disk 1    Online      11 GB    1020 MB

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status      Size     Free     Dyn  Gpt
  --------  ----------  -------  -------  ---  ---
* Disk 1    Online      11 GB    0 B

Readonly               : No
Hidden                 : No
No Default Drive Letter: No
Shadow Copy            : No

After extending the volume, the detail volume command shows that there is no free capacity on the volume anymore. The list volume command shows the file system size. The Disk Management window also shows the new disk size, as shown in Figure 5-17.

Figure 5-17 Disk Management after extending disk

The example here refers to a Windows Basic Disk. Dynamic disks can be expanded by expanding the underlying SVC VDisk. The new space appears as unallocated space at the end of the disk.

In this case, you do not need to use the DiskPart tool; you can use the Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O in most cases.

Important: Never try to upgrade your Basic Disk to a Dynamic Disk, or vice versa, without backing up your data, because this operation is disruptive to the data due to a change in the position of the logical block address (LBA) on the disks.

5.8 Example configuration of attaching an SVC to a Windows Server 2008 host

This section describes an example configuration that shows the attachment of a Windows Server 2008 host system to the SVC. We discuss more details about Windows Server 2008 and the SVC in 5.6, "Windows-specific information" on page 182.

5.8.1 Installing SDDDSM on a Windows Server 2008 host

Download the HBA driver and the SDDDSM package, and copy them to your host system. We describe information about the recommended SDDDSM package in 5.6.7, "Installing the SDDDSM driver on Windows" on page 188. We list the HBA driver details in 5.6.3, "Hardware lists, device driver, HBAs, and firmware levels" on page 183. We perform the steps that are described in 5.6.2, "Configuring Windows" on page 182 to achieve this task.

As a prerequisite for this example, we have already performed steps 1 to 5 for the hardware installation, the SAN configuration is done, and the hotfixes are applied. The disk timeout value is set to 60 seconds (see 5.6.5, "Changing the disk timeout on Microsoft Windows Server" on page 185), and we will start with the driver installation.

Installing the HBA driver
Perform these steps to install the HBA driver:

1. Extract the QLogic driver package to your hard drive.
2. Select Start → Run.
3. Enter the devmgmt.msc command, and click OK. The Device Manager appears.
4. Expand Storage Controllers.
5. Right-click the HBA, and select Update Driver Software (Figure 5-18).

Figure 5-18 Windows Server 2008 driver update

6. Click Browse my computer for driver software (Figure 5-19).

Figure 5-19 Windows Server 2008 driver update

7. Enter the path to the extracted QLogic driver, and click Next (Figure 5-20 on page 202).

Figure 5-20 Windows Server 2008 driver update

8. Windows installs the driver (Figure 5-21).

Figure 5-21 Windows Server 2008 driver installation

9. When the driver update is complete, click Close to exit the wizard (Figure 5-22).

Figure 5-22 Windows Server 2008 driver installation

10. Repeat steps 1 to 8 for all of the HBAs that are installed in the system.

5.8.2 Installing SDDDSM

To install the SDDDSM driver on your system, perform the following steps:

1. Extract the SDDDSM driver package to a folder on your hard drive.
2. Open the folder with the extracted files.
3. Run the setup.exe command, and a DOS command prompt appears.
4. Type Y and press Enter to install SDDDSM (Figure 5-23).

Figure 5-23 Installing SDDDSM

5. After the SDDDSM setup is finished, type Y and press Enter to restart your system.

After the reboot, the SDDDSM installation is complete. You can verify the installation in Device Manager, where the SDDDSM device appears (Figure 5-24 on page 204), and through the SDDDSM tools, which will have been installed (Figure 5-25 on page 204).

Figure 5-24 SDDDSM installation

Figure 5-25 SDDDSM installation


5.8.3 Attaching SVC VDisks to Windows Server 2008

Create the VDisks on the SVC, and map them to the Windows Server 2008 host.

In this example, we have mapped three SVC disks to the Windows Server 2008 host named Diomede, as shown in Example 5-41.

Example 5-41 SVC host mapping to host Diomede
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id name    SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
0  Diomede 0       20       Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0  Diomede 1       21       Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0  Diomede 2       22       Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D
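The VDisks and mappings shown in Example 5-41 can be created with CLI commands of the following form. This is a minimal sketch only: the MDisk Group name MDG_0_DS45, the I/O Group io_grp0, and the 10 GB size are assumptions carried over from the earlier Senegal examples, so substitute the values for your own cluster:

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp MDG_0_DS45 -iogrp io_grp0 -size 10 -unit gb -name Diomede_0001
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Diomede Diomede_0001

Repeat the two commands for Diomede_0002 and Diomede_0003.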

Perform the following steps to use the devices on your Windows Server 2008 host:

1. Click Start, and click Run.
2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens.
3. Select Action, and click Rescan Disks (Figure 5-26).

Figure 5-26 Windows Server 2008: Rescan disks

4. The SVC disks now appear in the Disk Management window (Figure 5-27 on page 206).

Figure 5-27 Windows Server 2008 Disk Management window

After you have assigned the SVC disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM 2145 Multipath disk devices in the Device Manager (Figure 5-28).

Figure 5-28 Windows Server 2008 Device Manager



5. To check that the disks are available, select Start → All Programs → Subsystem Device Driver DSM, and click Subsystem Device Driver DSM (Figure 5-29). The SDDDSM command-line utility appears.

Figure 5-29 Windows Server 2008 Subsystem Device Driver DSM utility

6. Enter the datapath query device command, and press Enter (Example 5-42). This command displays all of the disks and the available paths, including their states.

Example 5-42 Windows Server 2008 SDDDSM command-line utility
Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation. All rights reserved.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#    Adapter/Hard Disk             State    Mode      Select   Errors
   0     Scsi Port2 Bus0/Disk1 Part0   OPEN     NORMAL         0        0
   1     Scsi Port2 Bus0/Disk1 Part0   OPEN     NORMAL      1429        0
   2     Scsi Port3 Bus0/Disk1 Part0   OPEN     NORMAL      1456        0
   3     Scsi Port3 Bus0/Disk1 Part0   OPEN     NORMAL         0        0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#    Adapter/Hard Disk             State    Mode      Select   Errors
   0     Scsi Port2 Bus0/Disk2 Part0   OPEN     NORMAL      1520        0
   1     Scsi Port2 Bus0/Disk2 Part0   OPEN     NORMAL         0        0
   2     Scsi Port3 Bus0/Disk2 Part0   OPEN     NORMAL         0        0
   3     Scsi Port3 Bus0/Disk2 Part0   OPEN     NORMAL      1517        0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#    Adapter/Hard Disk             State    Mode      Select   Errors
   0     Scsi Port2 Bus0/Disk3 Part0   OPEN     NORMAL        27        0
   1     Scsi Port2 Bus0/Disk3 Part0   OPEN     NORMAL      1396        0
   2     Scsi Port3 Bus0/Disk3 Part0   OPEN     NORMAL      1459        0
   3     Scsi Port3 Bus0/Disk3 Part0   OPEN     NORMAL         0        0

C:\Program Files\IBM\SDDDSM>

SAN zoning recommendation: When following the SAN zoning recommendation, we get this result for one VDisk and a host with two HBAs: (number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths.

7. Right-click the disk in Disk Management, and select Online to place the disk online (Figure 5-30).

Figure 5-30 Windows Server 2008: Place disk online

8. Repeat step 7 for all of your attached SVC disks.
9. Right-click one disk again, and select Initialize Disk (Figure 5-31).

Figure 5-31 Windows Server 2008: Initialize Disk

10. Mark all of the disks that you want to initialize, and click OK (Figure 5-32).

Figure 5-32 Windows Server 2008: Initialize Disk

11. Right-click the unallocated disk space, and select New Simple Volume (Figure 5-33).

Figure 5-33 Windows Server 2008: New Simple Volume

12. The New Simple Volume Wizard window opens. Click Next.
13. Enter a disk size, and click Next (Figure 5-34).

Figure 5-34 Windows Server 2008: New Simple Volume

14. Assign a drive letter, and click Next (Figure 5-35 on page 210).


Figure 5-35 Windows Server 2008: New Simple Volume

15. Enter a volume label, and click Next (Figure 5-36).

Figure 5-36 Windows Server 2008: New Simple Volume

16. Click Finish, and repeat this procedure for every SVC disk on your host system (Figure 5-37).

Figure 5-37 Windows Server 2008: Disk Management

5.8.4 Extending a Windows Server 2008 volume

Using SVC with Windows Server 2008 gives you the ability to extend volumes while they are in use. We describe the steps to extend a volume in 5.7.1, “Extending a Windows Server 2000 or Windows 2003 Server volume” on page 195.

Windows Server 2008 also uses the DiskPart utility to extend volumes. To start it, select Start → Run, and enter DiskPart. The procedure is exactly the same as in Windows 2003 Server; follow the Windows 2003 Server description to extend your volume.

5.8.5 Removing a disk on Windows

When we want to remove a disk from Windows, and the disk is an SVC VDisk, we follow the standard Windows procedure to make sure that there is no data that we want to preserve on the disk, that no applications are using the disk, and that no I/O is going to the disk. After completing this procedure, we remove the VDisk mapping on the SVC. We must make sure that we are removing the correct VDisk, so we use SDD to find the serial number of the disk and, on the SVC, the lshostvdiskmap command to find the VDisk name and number. We also check that the SDD serial number on the host matches the UID of the VDisk on the SVC.

When the VDisk mapping is removed, we perform a rescan of the disks. Disk Management on the server removes the disk, and the vpath goes into CLOSE status on the server. We can verify these actions by using the datapath query device SDD command, but the closed vpath is not removed until the server is rebooted.

In the following sequence of examples, we show how to remove an SVC VDisk from a Windows server. We show it on a Windows 2003 Server operating system, but the steps also apply to Windows Server 2000 and Windows Server 2008.

Figure 5-15 on page 196 shows the Disk Manager before removing the disk.

We will remove Disk 1. To find the correct VDisk information, we find the Serial/UID number by using SDD (Example 5-43).

Example 5-43 Removing SVC disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 OPEN NORMAL 1324 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 94 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 55 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 100 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 69 0

Knowing the Serial/UID of the VDisk and the host name Senegal, we find the VDisk mapping to remove by using the lshostvdiskmap command on the SVC, and then we remove the actual VDisk mapping (Example 5-44).

Example 5-44 Finding and removing the VDisk mapping
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 0 7 Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Senegal 1 8 Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

Here, we can see that the VDisk mapping has been removed. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been removed, as shown in Figure 5-38.

Figure 5-38 Disk Management: Disk has been removed

SDD also shows us that the status for all paths to Disk1 has changed to CLOSE, because the disk is no longer available (Example 5-45 on page 214).


Example 5-45 SDD: Closed path
C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 1471 0
1 Scsi Port2 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
2 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 0 0
3 Scsi Port3 Bus0/Disk1 Part0 CLOSE NORMAL 1324 0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 20 0
1 Scsi Port2 Bus0/Disk2 Part0 OPEN NORMAL 124 0
2 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 72 0
3 Scsi Port3 Bus0/Disk2 Part0 OPEN NORMAL 0 0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 134 0
1 Scsi Port2 Bus0/Disk3 Part0 OPEN NORMAL 0 0
2 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 0 0
3 Scsi Port3 Bus0/Disk3 Part0 OPEN NORMAL 82 0

The disk (Disk1) is now removed from the server. However, the SDD information for the disk is not removed until the server is rebooted; the reboot can wait until a more suitable time.

5.9 Using the SVC CLI from a Windows host

To issue CLI commands, we must install and prepare an SSH client on the Windows host system.

We can install the PuTTY SSH client software on a Windows host by using the PuTTY installation program. This program is in the SSHClient\PuTTY directory of the SAN Volume Controller Console CD-ROM, or you can download PuTTY from the following Web site:

http://www.chiark.greenend.org.uk/~sgtatham/putty/

The following Web site offers SSH client alternatives for Windows:

http://www.openssh.com/windows.html

Cygwin software has an option to install an OpenSSH client. You can download Cygwin from the following Web site:

http://www.cygwin.com/
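After PuTTY is installed and an SSH key pair has been generated and registered with the cluster, you can also script CLI commands from a Windows command prompt with PuTTY's command-line client, plink.exe. A minimal sketch follows; the key file path and cluster IP address shown here are examples only:

C:\> plink -i C:\keys\icat.ppk admin@9.43.86.117 "svcinfo lsvdisk -delim :"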


We discuss the CLI in more detail in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339.

5.10 Microsoft Volume Shadow Copy

The SVC provides support for the Microsoft Volume Shadow Copy Service, which can provide a point-in-time (shadow) copy of a Windows host volume while the volume is mounted and the files are in use.

In this section, we discuss how to install support for the Microsoft Volume Shadow Copy Service.

The following operating system versions are supported:

► Windows 2003 Server Standard Edition, 32-bit and 64-bit (x64) versions
► Windows 2003 Server Enterprise Edition, 32-bit and 64-bit (x64) versions
► Windows 2003 Server Standard R2 Edition, 32-bit and 64-bit (x64) versions
► Windows 2003 Server Enterprise R2 Edition, 32-bit and 64-bit (x64) versions
► Windows Server 2008 Standard
► Windows Server 2008 Enterprise

The following components are used to provide support for the service:

► SAN Volume Controller
► SAN Volume Controller Master Console
► IBM System Storage hardware provider, known as the IBM System Storage Support for Microsoft Volume Shadow Copy Service
► Microsoft Volume Shadow Copy Service

The IBM System Storage provider is installed on the Windows host.

To provide the point-in-time shadow copy, the components complete the following process:

1. A backup application on the Windows host initiates a snapshot backup.
2. The Volume Shadow Copy Service notifies the IBM System Storage hardware provider that a copy is needed.
3. The SAN Volume Controller prepares the volume for a snapshot.
4. The Volume Shadow Copy Service quiesces the software applications that are writing data on the host and flushes file system buffers to prepare for a copy.
5. The SAN Volume Controller creates the shadow copy using the FlashCopy Service.
6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can resume and notifies the backup application that the backup was successful.

The Volume Shadow Copy Service maintains a free pool of VDisks for use as a FlashCopy target and a reserved pool of VDisks. These pools are implemented as virtual host systems on the SAN Volume Controller.
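You can watch this sequence from the host side by requesting a shadow copy manually with the DiskShadow utility that is included with Windows Server 2008. A minimal sketch follows (drive letter E: is an example only); with the IBM hardware provider installed and the pools in place, the create command drives the FlashCopy on the SVC:

C:\> diskshadow
DISKSHADOW> set context persistent
DISKSHADOW> add volume E:
DISKSHADOW> create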


5.10.1 Installation overview

The steps for implementing the IBM System Storage Support for Microsoft Volume Shadow Copy Service must be completed in the correct sequence. Before you begin, you must have experience with, or knowledge of, administering both a Windows operating system and a SAN Volume Controller.

You will need to complete the following tasks:

► Verify that the system requirements are met.
► Install the SAN Volume Controller Console if it is not already installed.
► Install the IBM System Storage hardware provider.
► Verify the installation.
► Create a free pool of volumes and a reserved pool of volumes on the SAN Volume Controller.

5.10.2 System requirements for the IBM System Storage hardware provider

Ensure that your system satisfies the following requirements before you install the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software on the Windows operating system:

► SAN Volume Controller and Master Console Version 2.1.0 or later, with FlashCopy enabled. You must install the SAN Volume Controller Console before you install the IBM System Storage hardware provider.
► IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software Version 3.1 or later.

5.10.3 Installing the IBM System Storage hardware provider

This section includes the steps to install the IBM System Storage hardware provider on a Windows server. You must satisfy all of the system requirements before starting the installation.

During the installation, you will be prompted to enter information about the SAN Volume Controller Master Console, including the location of the truststore file. The truststore file is generated during the installation of the Master Console. You must copy this file to a location that is accessible to the IBM System Storage hardware provider on the Windows server.

When the installation is complete, the installation program might prompt you to restart the system. Complete the following steps to install the IBM System Storage hardware provider on the Windows server:

1. Download the installation program files from the IBM Web site, and place a copy on the Windows server where you will install the IBM System Storage hardware provider:

http://www-1.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=D400&uid=ssg1S4000663&loc=en_US&cs=utf-8&lang=en

2. Log on to the Windows server as an administrator, and navigate to the directory where the installation program is located.

3. Run the installation program by double-clicking IBMVSS.exe.


4. The Welcome window opens, as shown in Figure 5-39. Click Next to continue with the installation. You can click Cancel at any time to exit the installation. To move back to previous windows while using the wizard, click Back.

Figure 5-39 IBM System Storage Support for Microsoft Volume Shadow Copy installation

5. The License Agreement window opens (Figure 5-40). Read the license agreement information, select whether you accept the terms of the license agreement, and click Next. If you do not accept the terms, you cannot continue with the installation.

Figure 5-40 IBM System Storage Support for Microsoft Volume Shadow Copy installation


6. The Choose Destination Location window opens (Figure 5-41). Click Next to accept the default directory where the setup program will install the files, or click Change to select another directory and then click Next.

Figure 5-41 IBM System Storage Support for Microsoft Volume Shadow Copy installation

7. Click Install to begin the installation (Figure 5-42).

Figure 5-42 IBM System Storage Support for Microsoft Volume Shadow Copy installation


8. From the next window, select the required CIM server, or select “Enter the CIM Server address manually”, and click Next (Figure 5-43).

Figure 5-43 IBM System Storage Support for Microsoft Volume Shadow Copy installation

9. The Enter CIM Server Details window opens. Enter the following information in the fields (Figure 5-44):
a. In the CIM Server Address field, type the name of the server where the SAN Volume Controller Console is installed.
b. In the CIM User field, type the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the server where the SAN Volume Controller Console is installed.
c. In the CIM Password field, type the password for that user name.
d. Click Next.

Figure 5-44 IBM System Storage Support for Microsoft Volume Shadow Copy installation

10. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to restart the system (Figure 5-45 on page 220).


Figure 5-45 IBM System Storage Support for Microsoft Volume Shadow Copy installation

Additional information:

► If these settings change after installation, you can use the ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new settings.

► If you do not have the CIM Agent server, port, or user information, contact your CIM Agent administrator.

5.10.4 Verifying the installation

Perform the following steps to verify the installation:

1. Select Start → All Programs → Administrative Tools → Services from the Windows server task bar.

2. Ensure that the service named “IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service” software appears, that its Status is set to Started, and that its Startup Type is set to Automatic.

3. Open a command prompt window, and issue the following command:

vssadmin list providers


This command ensures that the service named IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider (Example 5-46).

Example 5-46 Microsoft Software Shadow copy provider
C:\Documents and Settings\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
Provider type: System
Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
Version: 1.0.0.7

Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
Provider type: Hardware
Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
Version: 3.1.0.1108

If you are able to successfully perform all of these verification tasks, the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was successfully installed on the Windows server.

5.10.5 Creating the free and reserved pools of volumes

The IBM System Storage hardware provider maintains a free pool of volumes and a reserved pool of volumes. Because these objects do not exist on the SAN Volume Controller, the free pool of volumes and the reserved pool of volumes are implemented as virtual host systems. You must define these two virtual host systems on the SAN Volume Controller.

When a shadow copy is created, the IBM System Storage hardware provider selects a volume in the free pool, assigns it to the reserved pool, and then removes it from the free pool. This process protects the volume from being overwritten by other Volume Shadow Copy Service users.

To successfully perform a Volume Shadow Copy Service operation, there must be enough VDisks mapped to the free pool. The VDisks must be the same size as the source VDisks.

Use the SAN Volume Controller Console or the SAN Volume Controller command-line interface (CLI) to perform the following steps:

1. Create a host for the free pool of VDisks. You can use the default name VSS_FREE or specify another name. Associate the host with the worldwide port name (WWPN) 5000000000000000 (15 zeroes) (Example 5-47).

Example 5-47 Creating an mkhost for the free pool
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name VSS_RESERVED or specify another name. Associate the host with the WWPN 5000000000000001 (14 zeroes) (Example 5-48 on page 222).


Example 5-48 Creating an mkhost for the reserved pool
IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
Host, id [3], successfully created

3. Map the logical units (VDisks) to the free pool of volumes. The VDisks cannot be mapped to any other hosts. If you already have VDisks created for the free pool of volumes, you must assign the VDisks to the free pool.

4. Create VDisk-to-host mappings between the VDisks selected in step 3 and the VSS_FREE host to add the VDisks to the free pool. Alternatively, you can use the ibmvcfg add command to add VDisks to the free pool (Example 5-49).

Example 5-49 Host mappings
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the VDisks have been mapped (Example 5-50). If you do not use the default WWPNs 5000000000000000 and 5000000000000001, you must configure the IBM System Storage hardware provider with the WWPNs; a sketch of the required commands follows Example 5-50.

Example 5-50 Verify hosts
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
2 VSS_FREE 0 10 msvc0001 5000000000000000 6005076801A180E90800000000000012
2 VSS_FREE 1 11 msvc0002 5000000000000000 6005076801A180E90800000000000013
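If you chose WWPNs other than the defaults when creating the hosts, point the provider at them by using the ibmvcfg utility that is described in 5.10.6 (the values shown here are the defaults):

ibmvcfg set vssFreeInitiator 5000000000000000
ibmvcfg set vssReservedInitiator 5000000000000001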

5.10.6 Changing the configuration parameters

You can change the parameters that you defined when you installed the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software. To do so, use the ibmvcfg.exe utility, a command-line utility that is located in the C:\Program Files\IBM\Hardware Provider for VSS-VDS directory (Example 5-51).

Example 5-51 Using the ibmvcfg.exe utility help
C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe
IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>
Commands:
/h | /help | -? | /?
showcfg
listvols <all|free|unassigned>
add <volume serial number list> (separated by spaces)
rem <volume serial number list> (separated by spaces)
Configuration:
set user <username>
set password <password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <yes|no>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <version> (only applies to ESS)
set cimomPort <portnum>
set cimomHost <server name>
set namespace <namespace>
set targetSVC <ipaddress>
set backgroundCopy <percentage>

Table 5-4 shows the available commands.

Table 5-4 Available ibmvcfg commands

ibmvcfg showcfg
  Description: Lists the current settings.
  Example: ibmvcfg showcfg

ibmvcfg set username <username>
  Description: Sets the user name to access the SAN Volume Controller Console.
  Example: ibmvcfg set username Dan

ibmvcfg set password <password>
  Description: Sets the password of the user name that will access the SAN Volume Controller Console.
  Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
  Description: Specifies the IP address of the SAN Volume Controller on which the VDisks are located when VDisks are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
  Example: ibmvcfg set targetSVC 9.43.86.120

ibmvcfg set backgroundCopy <percentage>
  Description: Sets the background copy rate for FlashCopy.
  Example: ibmvcfg set backgroundCopy 80

ibmvcfg set usingSSL <yes|no>
  Description: Specifies whether to use the Secure Sockets Layer protocol to connect to the SAN Volume Controller Console.
  Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
  Description: Specifies the SAN Volume Controller Console port number. The default value is 5999.
  Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
  Description: Sets the name of the server where the SAN Volume Controller Console is installed.
  Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
  Description: Specifies the namespace value that the Master Console is using. The default value is \root\ibm.
  Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
  Description: Specifies the WWPN of the free pool host. The default value is 5000000000000000. Modify this value only if a host in your environment already uses the WWPN 5000000000000000.
  Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg set vssReservedInitiator <WWPN>
  Description: Specifies the WWPN of the reserved pool host. The default value is 5000000000000001. Modify this value only if a host in your environment already uses the WWPN 5000000000000001.
  Example: ibmvcfg set vssReservedInitiator 5000000000000001

ibmvcfg listvols
  Description: Lists all VDisks, including information about the size, location, and VDisk-to-host mappings.
  Example: ibmvcfg listvols

ibmvcfg listvols all
  Description: Lists all VDisks, including information about the size, location, and VDisk-to-host mappings.
  Example: ibmvcfg listvols all

ibmvcfg listvols free
  Description: Lists the volumes that are currently in the free pool.
  Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
  Description: Lists the volumes that are currently not mapped to any hosts.
  Example: ibmvcfg listvols unassigned

ibmvcfg add [-s ipaddress]
  Description: Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the VDisks are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
  Examples: ibmvcfg add vdisk12
            ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem [-s ipaddress]
  Description: Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the VDisks are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
  Examples: ibmvcfg rem vdisk12
            ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141
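Putting these settings together, a first-time configuration that points the provider at the Console and seeds the free pool might look like the following sequence (all values are the example values from Table 5-4):

ibmvcfg set cimomHost cimomserver
ibmvcfg set cimomPort 5999
ibmvcfg set username Dan
ibmvcfg set password mypassword
ibmvcfg add vdisk12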


5.11 Specific Linux (on Intel) information

The following sections describe specific information pertaining to the connection of Linux on Intel-based hosts to the SVC environment.

5.11.1 Configuring the Linux host

Follow these steps to configure the Linux host:

1. Use the latest firmware levels on your host system.
2. Install the HBA or HBAs on the Linux server, as described in 5.6.4, “Host adapter installation and configuration” on page 183.
3. Install the supported HBA driver/firmware, and upgrade the kernel if required, as described in 5.11.2, “Configuration information” on page 225.
4. Connect the Linux server FC host adapters to the switches.
5. Configure the switches (zoning), if needed.
6. Install SDD for Linux, as described in 5.11.5, “Multipathing in Linux” on page 226.
7. Configure the host, VDisks, and host mapping in the SAN Volume Controller.
8. Rescan for LUNs on the Linux server to discover the VDisks that were created on the SVC, as shown in the command sketch that follows this list.
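For step 8, hosts running a 2.6 kernel can trigger the LUN rescan without a reboot by writing to the HBA's scan attribute in sysfs (host0 is an example; repeat the command for each FC host adapter):

echo "- - -" > /sys/class/scsi_host/host0/scan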

5.11.2 Configuration information

The SAN Volume Controller supports hosts that run the following Linux distributions:

► Red Hat Enterprise Linux
► SUSE Linux Enterprise Server

For the latest information, always refer to this site:

http://www.ibm.com/storage/support/2145

For SVC Version 4.3, the following support information was available at the time of writing:

► Software supported levels:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278
► Hardware supported levels:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277

At this Web site, you will find the hardware list for supported HBAs and device driver levels for Linux. Check the supported firmware and driver level for your HBA, and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA.

5.11.3 Disabling automatic Linux system updates

Many Linux distributions give you the ability to configure your systems for automatic system updates. Red Hat provides this ability in the form of a program called up2date, while Novell SUSE provides the YaST Online Update utility. These features periodically query for updates that are available for each host and can be configured to automatically install any new updates that they find.


Often, the automatic update process also upgrades the system to the latest kernel level. Hosts running SDD must turn off the automatic update of kernel levels, because certain drivers that are supplied by IBM, such as SDD, depend on a specific kernel and will cease to function on a new kernel. Similarly, HBA drivers need to be compiled against specific kernels in order to function optimally. By allowing automatic updates of the kernel, you risk unexpectedly breaking your host systems.
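As an illustration on Red Hat systems of that era (the file location is specific to up2date), kernel packages can be kept on the package skip list so that automatic updates leave the kernel alone:

# /etc/sysconfig/rhn/up2date
pkgSkipList=kernel*;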

5.11.4 Setting queue depth with QLogic HBAs

The queue depth is the number of I/O operations that can be run in parallel on a device. Configure your host running the Linux operating system by using the formula that is specified in 5.16, “Calculating the queue depth” on page 252.

Perform the following steps to set the maximum queue depth (a worked example follows these steps):

1. Add the following line to the /etc/modules.conf file:
– For the 2.4 kernel (SUSE Linux Enterprise Server 8 or Red Hat Enterprise Linux):
options qla2300 ql2xfailover=0 ql2xmaxqdepth=new_queue_depth
– For the 2.6 kernel (SUSE Linux Enterprise Server 9, or later, or Red Hat Enterprise Linux 4, or later):
options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth

2. Rebuild the RAM disk that is associated with the kernel being used by using one of the following commands:
– If you are running on a SUSE Linux Enterprise Server operating system, run the mk_initrd command.
– If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd command, and then restart.
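For example, assuming that the formula in 5.16 yields a queue depth of 32 for a Red Hat Enterprise Linux 4 host (2.6 kernel), the sequence is similar to the following commands; the initrd image name depends on your installed kernel:

echo "options qla2xxx ql2xfailover=0 ql2xmaxqdepth=32" >> /etc/modules.conf
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
reboot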

5.11.5 Multipathing in Linux

Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own multipath support in the operating system. On older systems, it is necessary to install the IBM SDD multipath driver.

Installing SDD

This section describes how to install SDD for older distributions. Before performing these steps, always check for the currently supported levels, as described in 5.11.2, “Configuration information” on page 225.


The cat /proc/scsi/scsi command in Example 5-52 shows the devices that the SCSI driver has probed. In our configuration, we have two HBAs installed in our server, and we configured the zoning so that our VDisk is accessible through four paths.

Example 5-52 cat /proc/scsi/scsi command example
[root@diomede sdd]# cat /proc/scsi/scsi
Attached devices:
Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2145 Rev: 0000
Type: Unknown ANSI SCSI revision: 04
Host: scsi5 Channel: 00 Id: 00 Lun: 00
Vendor: IBM Model: 2145 Rev: 0000
Type: Unknown ANSI SCSI revision: 04
[root@diomede sdd]#

The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown in Example 5-53.

Example 5-53 rpm command example
[root@Palau sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm
Preparing... ########################################### [100%]
1:IBMsdd ########################################### [100%]
Added following line to /etc/inittab:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1
[root@Palau sdd]#

To manually load and configure SDD on Linux, use the service sdd start command (on SUSE Linux, use sdd start). If you are not running a supported kernel, you will get an error message; if your kernel is supported, you see an OK success message, as shown in Example 5-54.

Example 5-54 Supported kernel for SDD
[root@Palau sdd]# sdd start
Starting IBMsdd driver load:
[ OK ]
Issuing killall sddsrv to trigger respawn...
Starting IBMsdd configuration: [ OK ]

Issue the cfgvpath query command to view the name and serial number of the VDisk that is configured in the SAN Volume Controller, as shown in Example 5-55.

Example 5-55 cfgvpath query example
[root@Palau ~]# cfgvpath query
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sda df_ctlr=0
/dev/sda ( 8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035
ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdb df_ctlr=0
/dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035
ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdc df_ctlr=0
/dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035
ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdd df_ctlr=0
/dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145
serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035
ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
[root@Palau ~]#

The cfgvpath command configures the SDD vpath devices, as shown in Example 5-56.

Example 5-56 cfgvpath command example
[root@Palau ~]# cfgvpath
c--------- 1 root root 253, 0 Jun 5 09:04 /dev/IBMsdd
WARNING: vpatha path sda has already been configured.
WARNING: vpatha path sdb has already been configured.
WARNING: vpatha path sdc has already been configured.
WARNING: vpatha path sdd has already been configured.
Writing out new configuration to file /etc/vpath.conf
[root@Palau ~]#

The configuration information is saved by default in the /etc/vpath.conf file. You can save the configuration information to a specified file name by entering the following command:

cfgvpath -f file_name.cfg

Issue the chkconfig command to enable SDD to run at system startup:

chkconfig sdd on

To verify the setting, enter the following command:

chkconfig --list sdd

This verification is shown in Example 5-57.

Example 5-57 sdd run level example
[root@Palau sdd]# chkconfig --list sdd
sdd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@Palau sdd]#

If necessary, you can disable the startup option by entering this command:

chkconfig sdd off


Run the datapath query commands to display the online adapters and the paths to the adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and path 2. Path 1 and path 3 connect to the other node and are used as alternate or backup paths for high availability, as shown in Example 5-58.

Example 5-58 datapath query command example
[root@Palau ~]# datapath query adapter
Active Adapters :2
Adpt# Name State Mode Select Errors Paths Active
0 Host0Channel0 NORMAL ACTIVE 1 0 2 0
1 Host1Channel0 NORMAL ACTIVE 0 0 2 0
[root@Palau ~]#
[root@Palau ~]# datapath query device
Total Devices : 1
DEV#: 0 DEVICE NAME: vpatha TYPE: 2145 POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Host0Channel0/sda CLOSE NORMAL 1 0
1 Host0Channel0/sdb CLOSE NORMAL 0 0
2 Host1Channel0/sdc CLOSE NORMAL 0 0
3 Host1Channel0/sdd CLOSE NORMAL 0 0
[root@Palau ~]#

SDD has three path-selection policy algorithms:

► Failover only (fo): All I/O operations for the device are sent to the same (preferred) path unless the path fails because of I/O errors. Then, an alternate path is chosen for subsequent I/O operations.

► Load balancing (lb): The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection. The load-balancing policy is also known as the optimized policy.

► Round robin (rr): The path to use for each I/O operation is chosen at random from the paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two paths.

You can dynamically change the SDD path-selection policy algorithm by using the datapath set device policy SDD command. The datapath query device command shows which policy is active on a device; in Example 5-58, the POLICY field shows that the Optimized Sequential path-selection policy is active.
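For example, to switch device 0 to the round-robin policy (the device number comes from the output of datapath query device):

datapath set device 0 policy rr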


Example 5-59 shows the VDisk information from the SVC command-line interface.

Example 5-59 svcinfo redhat1
IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
6 linux2 0 33 linux_vd1 210000E08B89C1CD 60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>


5.11.6 Creating and preparing the SDD volumes for use

Follow these steps to create and prepare the volumes:

1. Create a partition on the vpath device, as shown in Example 5-60.

Example 5-60 fdisk example
[root@Palau ~]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@Palau ~]#


2. Create a file system on the vpath, as shown in Example 5-61.

Example 5-61 mkfs command example
[root@Palau ~]# mkfs -t ext3 /dev/vpatha
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@Palau ~]#

3. Create the mount point, and mount the vpath drive, as shown in Example 5-62.

Example 5-62 Mount point
[root@Palau ~]# mkdir /itsosvc
[root@Palau ~]# mount -t ext3 /dev/vpatha /itsosvc

4. The drive is now ready for use. The df command shows the mounted disk /itsosvc, and the datapath query command shows that four paths are available (Example 5-63).

Example 5-63 Display mounted drives
[root@Palau ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
74699952 2564388 68341032 4% /
/dev/hda1 101086 13472 82395 15% /boot
none 1033136 0 1033136 0% /dev/shm
/dev/vpatha 1032088 34092 945568 4% /itsosvc
[root@Palau ~]#
[root@Palau ~]# datapath query device
Total Devices : 1
DEV#: 0 DEVICE NAME: vpatha TYPE: 2145 POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 Host0Channel0/sda OPEN NORMAL 1 0
1 Host0Channel0/sdb OPEN NORMAL 6296 0
2 Host1Channel0/sdc OPEN NORMAL 6178 0
3 Host1Channel0/sdd OPEN NORMAL 0 0
[root@Palau ~]#

5.11.7 Using the operating system MPIO

Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own native multipath support, so you do not have to install an additional device driver. Always check whether your operating system includes one of the supported multipath drivers; you will find this information in the links that are provided in 5.11.2, "Configuration information" on page 225. In SLES10, the multipath drivers and tools are installed by default, but for RHEL5, the user has to explicitly choose the multipath components during the OS installation to install them.

Each of the attached SAN Volume Controller LUNs has a special device file in the Linux /dev directory.

Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC allows. The following Web site provides the most current information about the maximum configuration for the SAN Volume Controller:

http://www.ibm.com/storage/support/2145
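
Before relying on the native MPIO stack, it can be worth verifying that the multipath components are actually installed. The following is a minimal sketch; the package names shown (device-mapper-multipath on RHEL5 and multipath-tools on SLES10) are the usual ones for these distributions, but verify them against the support links mentioned previously:

# RHEL5: check that the multipath package is installed
rpm -q device-mapper-multipath
# SLES10: the equivalent package
rpm -q multipath-tools
# Check whether the device-mapper multipath kernel module is loaded
lsmod | grep dm_multipath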

5.11.8 Creating and preparing MPIO volumes for use

First, you have to start the MPIO daemon on your system. Run the following commands on your host system:

1. Enable MPIO for SLES10 by running the following commands:
   a. /etc/init.d/boot.multipath {start|stop}
   b. /etc/init.d/multipathd {start|stop|status|try-restart|restart|force-reload|reload|probe}

Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver and multipathd daemon during startup.

2. Enable MPIO for RHEL5 by running the following commands:
   a. modprobe dm-multipath
   b. modprobe dm-round-robin
   c. service multipathd start
   d. chkconfig multipathd on

Example 5-64 on page 234 shows the commands issued on a Red Hat Enterprise Linux 5.1 operating system.
Example 5-64 Starting MPIO daemon on Red Hat Enterprise Linux
[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# service multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#

3. Open the multipath.conf file, and follow the instructions to enable multipathing for IBM devices. The file is located in the /etc directory. Example 5-65 shows editing using vi.

Example 5-65 Editing the multipath.conf file
[root@palau etc]# vi multipath.conf

4. Add the following entry to the multipath.conf file:

device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_prio
        prio_callout "/sbin/mpath_prio_alua /dev/%n"
}

5. Restart the multipath daemon (Example 5-66).

Example 5-66 Stopping and starting the multipath daemon
[root@palau ~]# service multipathd stop
Stopping multipathd daemon: [ OK ]
[root@palau ~]# service multipathd start
Starting multipathd daemon: [ OK ]

6. Type the multipath -dl command to see the MPIO configuration. You will see two groups with two paths each. All paths must have the state [active][ready], and one group will be [enabled].


7. Use the fdisk command to create a partition on the SVC disk, as shown in Example 5-67.

Example 5-67 fdisk
[root@palau scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        9730    78051802+  8e  Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/dm-2: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-3 doesn't contain a valid partition table

[root@palau scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now


8. Create a file system using the mkfs command (Example 5-68).

Example 5-68 mkfs command
[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#
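
The mke2fs output notes that the periodic file system checks can be tuned with tune2fs. As a hedged illustration (whether and how to change these intervals depends on your maintenance policy), the following commands inspect and adjust the mount-count-based and time-based check intervals:

# Display the current superblock settings, including the check intervals
tune2fs -l /dev/dm-2
# For example, check every 50 mounts or every 90 days, whichever comes first
tune2fs -c 50 -i 90d /dev/dm-2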

9. Create a mount point, and mount the drive, as shown in Example 5-69.

Example 5-69 Mount point
[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      73608360   1970000  67838912   3% /
/dev/hda1               101086     15082     80785  16% /boot
tmpfs                   967984         0    967984   0% /dev/shm
/dev/dm-2              4080064     73696   3799112   2% /svcdisk_0

5.12 VMware configuration information

This section explains the requirements and provides additional information for attaching hosts that run the VMware operating system, and the variety of guest operating systems that run on top of it, to the SAN Volume Controller.


5.12.1 Configuring VMware hosts

To configure the VMware hosts, follow these steps:
1. Install the HBAs in your host system, as described in 5.12.4, "HBAs for hosts running VMware" on page 238.
2. Connect the server FC host adapters to the switches.
3. Configure the switches (zoning), as described in 5.12.6, "VMware storage and zoning recommendations" on page 240.
4. Install the VMware operating system (if not already done), and check the HBA timeouts, as described in 5.12.7, "Setting the HBA timeout for failover in VMware" on page 241.
5. Configure the host, VDisks, and host mapping in the SVC, as described in 5.12.9, "Attaching VMware to VDisks" on page 242.

5.12.2 Operating system versions and maintenance levels

For the latest information about VMware support, refer to this Web site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

At the time of writing, the following versions are supported:
► ESX V3.5
► ESX V3.51
► ESX V3.02
► ESX V2.5.3
► ESX V2.5.2
► ESX V2.1 with Virtual Machine File System (VMFS) disks

Important: If you are running the VMware V3.01 build, you are required to move to a minimum VMware level of V3.02 for continued support.

5.12.3 Guest operating systems

Also, make sure that you are using supported guest operating systems. The latest information is available at this Web site:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_VMWare

5.12.4 HBAs for hosts running VMware

Ensure that your hosts that are running on VMware operating systems use the correct HBAs and firmware levels. Install the host adapters in your system, referring to the manufacturer's instructions for installation and configuration of the HBAs.

In IBM System x servers, the HBAs must always be installed in the first slots. For example, if you install two HBAs and two network cards, the HBAs must be installed in slot 1 and slot 2, and the network cards can be installed in the remaining slots.

For older ESX versions, you will find the supported HBAs at the IBM Web site:
http://www.ibm.com/storage/support/2145

The interoperability matrixes for ESX V3.02, V3.5, and V3.51 are available at the VMware Web site (clicking a link opens or downloads the PDF):
► V3.02
http://www.vmware.com/pdf/vi3_io_guide.pdf
► V3.5
http://www.vmware.com/pdf/vi35_io_guide.pdf

The supported HBA device drivers are already included in the ESX server build. After installing, load the default configuration of your FC HBAs. We recommend using the same model of HBA with the same firmware in one server. It is not supported to have Emulex and QLogic HBAs that access the same target in one server.

5.12.5 Multipath solutions supported

Only single path is supported in ESX V2.1; multipathing is supported in ESX V2.5.x and later. The VMware operating system provides its own multipathing support, so installing multipathing software is not required.

VMware multipathing software dynamic pathing
VMware multipathing software does not support dynamic pathing. Preferred paths that are set in the SAN Volume Controller are ignored. The VMware multipathing software performs static load balancing for I/O, based upon a host setting that defines the preferred path for a given volume.

Multipathing configuration maximums
When you configure, remember the maximum configuration for the VMware multipathing software: the VMware software supports a maximum of 256 SCSI devices, with up to four paths to each VDisk, which gives a total of 1,024 paths on a server.

Paths: Each path to a VDisk equates to a single SCSI device.

Clustering support for hosts running VMware
The SVC provides cluster support on VMware guest operating systems. The following Web site provides the current interoperability information:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_VMware

SAN boot support
SAN boot of any guest OS is supported under VMware. The very nature of VMware means that SAN boot is a requirement for any guest OS: the guest OS must reside on a SAN disk. If you are unfamiliar with VMware environments and the advantages of storing virtual machines and application data on a SAN, we recommend that you get an overview of the VMware products before continuing.

VMware documentation is available at this Web site:
http://www.vmware.com/support/pubs/


5.12.6 VMware storage and zoning recommendations

The VMware ESX server can use a Virtual Machine File System (VMFS), which is a file system that is optimized to run multiple virtual machines as one workload to minimize disk I/O. It is also able to handle concurrent access from multiple physical machines, because it enforces the appropriate access controls. Therefore, multiple ESX hosts can share the same set of LUNs (Figure 5-46).

Figure 5-46 VMware: SVC zoning example

Theoretically, you can run all of your virtual machines on one LUN, but for performance reasons, in more complex scenarios, it can be better to load balance virtual machines over separate HBAs, storage subsystems, or arrays.

For example, if you run an ESX host with several virtual machines, it makes sense to use one "slow" array, for example, for Print and Active Directory Services guest operating systems without high I/O, and another, faster array for database guest operating systems.

Using fewer VDisks has the following advantages:
► More flexibility to create virtual machines without creating new space on the SVC
► More possibilities for taking VMware snapshots
► Fewer VDisks to manage

Using more and smaller VDisks has the following advantages:
► Separate I/O characteristics of the guest operating systems
► More flexibility (the multipathing policy and disk shares are set per VDisk)
► Microsoft Cluster Service requires its own VDisk for each cluster disk resource

More documentation about designing your VMware infrastructure is provided at these Web sites:
► http://www.vmware.com/vmtn/resources/
► http://www.vmware.com/resources/techresources/1059

Guidelines:
► ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone.
► You can have only one VMFS volume per VDisk.

5.12.7 Setting the HBA timeout for failover in VMware

The timeout for failover for ESX hosts must be set to 30 seconds:
► For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter. The timeout value is 2 x PortDownRetryCount + 5 seconds. It is recommended to set the qlport_down_retry parameter to 14 (which gives a timeout of 2 x 14 + 5 = 33 seconds, satisfying the approximately 30-second requirement).
► For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters must be set to 30 seconds.

To make these changes on your system, perform the following steps (Example 5-70):
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing. The file includes a section for every installed SCSI device.
3. Locate your SCSI adapters, and edit the previously described parameters.
4. Repeat this process for every installed HBA.

Example 5-70 Setting the HBA timeout
[root@nile svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup
[root@nile svc]# vi /etc/vmware/esx.conf


5.12.8 Multipathing in ESX

The ESX Server performs multipathing itself. You do not need to install a multipathing driver, such as SDD, either on the ESX server or on the guest operating systems.

5.12.9 Attaching VMware to VDisks

First, we make sure that the VMware host is logged into the SAN Volume Controller. In our examples, we use the VMware ESX server V3.5 and the host name Nile.

Enter the following command to check the status of the host:

svcinfo lshost <hostname>

Example 5-71 shows that the host Nile is logged into the SVC with two HBAs.

Example 5-71 lshost Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active

Then, we have to set the SCSI Controller Type in VMware. By default, ESX Server disables SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time (Figure 5-47 on page 243). But in many configurations, such as those configurations for high availability, the virtual machines have to share the same VMFS file to share a disk.

To set the SCSI Controller Type in VMware:
1. Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select Edit settings.
2. Highlight the SCSI Controller, and select one of the three available settings, depending on your configuration:
– None: Disks cannot be shared by other virtual machines.
– Virtual: Disks can be shared by virtual machines on the same server.
– Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.

Figure 5-47 Changing SCSI bus settings

3. Create your VDisks on the SVC, and map them to the ESX hosts.

Tips:
► If you want to use features, such as VMotion, the VDisks that own the VMFS file have to be visible to every ESX host that will be able to host the virtual machine. In SVC, select Allow the virtual disks to be mapped even if they are already mapped to a host.
► The VDisk has to have the same SCSI ID on each ESX host.

For this example configuration, we have created one VDisk and have mapped it to our ESX host, as shown in Example 5-72.

Example 5-72 Mapped VDisk to ESX host Nile
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id  name  SCSI_id  vdisk_id  vdisk_name  wwpn              vdisk_UID
1   Nile  0        12        VMW_pool    210000E08B892BCD  60050768018301BF2800000000000010
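
On the CLI, the equivalent of the GUI option mentioned in the tips is the -force flag of the svctask mkvdiskhostmap command, which allows a VDisk that is already mapped to one host to be mapped to another. As a hedged sketch (the second host name, Thames, is hypothetical), mapping the same VDisk to a second ESX host for VMotion might look like this:

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -force -host Thames VMW_pool

Remember that the VDisk must be presented with the same SCSI ID to each ESX host; you can verify the result with svcinfo lshostvdiskmap against each host.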

ESX does not automatically scan for SAN changes (except when rebooting the entire ESX server). If you have made any changes to your SVC or SAN configuration, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.


To configure a storage device for use in VMware, perform the following steps:
1. Open your VMware Infrastructure Client.
2. Select the host for which you want to see the assigned VDisks, and click the Configuration tab.
3. In the Hardware window on the left side, click Storage.
4. To create a new storage pool, select click here to create a datastore or Add storage if the yellow field does not appear (Figure 5-48).

Figure 5-48 VMware add datastore

5. The Add storage wizard will appear.
6. Select Create Disk/Lun, and click Next.
7. Select the SVC VDisk that you want to use for the datastore, and click Next.
8. Review the disk layout, and click Next.
9. Enter a datastore name, and click Next.
10. Select a block size, enter the size of the new partition, and then click Next.
11. Review your selections, and click Finish.

Now, the created VMFS datastore appears in the Storage window (Figure 5-49), which shows the details for the highlighted datastore. Check whether all of the paths are available and that the Path Selection is set to Most Recently Used.

Figure 5-49 VMware storage configuration

If not all of the paths are available, check your SAN and storage configuration. After fixing the problem, select Refresh to perform a path rescan. The view will be updated to the new configuration.


The recommended Multipath Policy for SVC is Most Recently Used. If you have to edit this policy, perform the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change (see Figure 5-50).
5. Select Most Recently Used.
6. Click OK.
7. Click Close.

Now, your VMFS datastore has been created, and you can start using it for your guest operating systems.

5.12.10 VDisk naming in VMware

In the Virtual Infrastructure Client, a VDisk is displayed as a sequence of three or four numbers, separated by colons (Figure 5-50):

<SCSI HBA>:<SCSI target>:<SCSI VDisk>:<disk partition>

where:
► SCSI HBA: The number of the SCSI HBA (can change).
► SCSI target: The number of the SCSI target (can change).
► SCSI VDisk: The number of the VDisk (never changes).
► disk partition: The number of the disk partition (never changes). If the last number is not displayed, the name stands for the entire VDisk. For example, a name such as vmhba1:0:12:1 refers to partition 1 on VDisk 12, while vmhba1:0:12 refers to the entire VDisk.

Figure 5-50 VDisk naming in VMware


5.12.11 Setting the Microsoft guest operating system timeout

For a Microsoft Windows 2000 Server or Windows 2003 Server installed as a VMware guest operating system, the disk timeout value must be set to 60 seconds. We provide the instructions to perform this task in 5.6.5, "Changing the disk timeout on Microsoft Windows Server" on page 185.
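
For reference, the value in question is the TimeOutValue entry under the Disk service key in the Windows registry. A minimal sketch of setting it from a command prompt inside a Windows Server 2003 guest follows (on Windows 2000, make the same change with regedit); treat this as illustrative and follow the referenced section for the full procedure:

C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f

A reboot of the guest is typically required before the new timeout takes effect.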

5.12.12 Extending a VMFS volume

It is possible to extend VMFS volumes while virtual machines are running. First, you have to extend the VDisk on the SVC, and then, you are able to extend the VMFS volume. Before performing these steps, we recommend having a backup of your data.

Perform the following steps to extend a volume:
1. Expand the VDisk with the svctask expandvdisksize command. In Example 5-73, the VDisk is expanded by 5 GB with svctask expandvdisksize -size 5 -unit gb.

Example 5-73 Expanding a VDisk in SVC
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>


2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage Adapters.
6. Click Rescan.
7. Make sure that the Scan for new Storage Devices check box is marked, and click OK. After the scan has completed, the new capacity is displayed in the Details section.
8. Click Storage.
9. Right-click the VMFS volume, and click Properties.
10. Click Add Extent.
11. Select the new free space, and click Next.
12. Click Next.
13. Click Finish.

The VMFS volume has now been extended, and the new space is ready for use.

5.12.13 Removing a datastore from an ESX host

Before you remove a datastore from an ESX host, you have to migrate or delete all of the virtual machines that reside on this datastore.

To remove it, perform the following steps:
1. Back up the data.
2. Open the Virtual Infrastructure Client.
3. Select the host.
4. Select Configuration.
5. Select Storage.
6. Highlight the datastore that you want to remove.
7. Click Remove.
8. Read the warning, and if you are sure that you want to remove the datastore and delete all of the data on it, click Yes.
9. Remove the host mapping on the SVC, or delete the VDisk (as shown in Example 5-74).
10. In the VI Client, select Storage Adapters.
11. Click Rescan.
12. Make sure that the Scan for new Storage Devices check box is marked, and click OK.
13. After the scan completes, the disk disappears from the view.

Your datastore has been successfully removed from the system.

Example 5-74 Remove VDisk host mapping: Delete VDisk
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk VMW_pool


5.13 Sun Solaris support information

For the latest information about supported software and driver levels, always refer to this site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.13.1 Operating system versions and maintenance levels

At the time of writing, Sun Solaris 8, Sun Solaris 9, and Sun Solaris 10 are supported in 64-bit mode only.

5.13.2 SDD dynamic pathing

Solaris supports dynamic pathing when you either add more paths to an existing VDisk or present a new VDisk to a host. No user intervention is required. SDD is aware of the preferred paths that SVC sets per VDisk.

SDD uses a round-robin algorithm when failing over paths; that is, it tries the next known preferred path. If this method fails and all preferred paths have been tried, it uses a round-robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the VDisk goes offline. Therefore, it can take time to perform path failover when multiple paths go offline.

SDD under Solaris performs load balancing across the preferred paths where appropriate.

Veritas Volume Manager with dynamic multipathing
Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the next available I/O path for I/O requests without action from the administrator. VM with DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly). The new Java Native Interface (JNI) drivers support the mapping of new VDisks without rebooting the Solaris host.

Note the following support characteristics:
► Veritas VM with DMP does not support preferred pathing with SVC.
► Veritas VM with DMP does support load balancing across multiple paths with SVC.

Co-existence with SDD and Veritas VM with DMP
Veritas Volume Manager with DMP will coexist in "pass-through" mode with SDD. DMP will use the vpath devices that are provided by SDD.

OS cluster support
Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of writing.

SAN boot support
Note the following support characteristics:
► Boot from SAN is supported under Solaris 9 running Symantec Volume Manager.
► Boot from SAN is not supported when SDD is used as the multipathing software.


5.14 Hewlett-Packard UNIX configuration information

For the latest information about Hewlett-Packard UNIX (HP-UX) support, refer to this Web site:
http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.14.1 Operating system versions and maintenance levels

At the time of writing, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).

5.14.2 Multipath solutions supported

At the time of writing, SDD V1.6.3.0 for HP-UX is supported. The PVLinks multipathing software and the ServiceGuard cluster software V11.14/11.16/11.17/11.18 are also supported, but in a cluster environment, we recommend SDD.

SDD dynamic pathing
HP-UX supports dynamic pathing when you either add more paths to an existing VDisk or present a new VDisk to a host.

SDD is aware of the preferred paths that SVC sets per VDisk. SDD uses a round-robin algorithm when failing over paths; that is, it tries the next known preferred path. If this method fails and all preferred paths have been tried, it uses a round-robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the VDisk goes offline. It can take time, therefore, to perform path failover when multiple paths go offline.

SDD under HP-UX performs load balancing across the preferred paths where appropriate.

Physical volume links (PVLinks) dynamic pathing
Unlike SDD, PVLinks does not load balance and is unaware of the preferred paths that SVC sets per VDisk. Therefore, we strongly recommend SDD, except in a clustering environment or when using an SVC VDisk as your boot disk.

When creating a Volume Group, specify the primary path that you want HP-UX to use when accessing the Physical Volume that is presented by SVC. This path, and only this path, will be used to access the PV as long as it is available, no matter what the SVC's preferred path to that VDisk is. Therefore, be careful when creating Volume Groups so that the primary links to the PVs (and the load) are balanced over both HBAs, FC switches, SVC nodes, and so on.

When extending a Volume Group to add alternate paths to the PVs, the order in which you add these paths is HP-UX's order of preference if the primary path becomes unavailable. Therefore, when extending a Volume Group, the first alternate path that you add must be from the same SVC node as the primary path, to avoid unnecessary node failover due to an HBA, FC link, or FC switch failure. A minimal sketch of this procedure follows.
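
In the following hedged sketch, the volume group name vg01 and the device special files c10t0d1 and c20t0d1 (two paths to the same SVC VDisk through separate HBAs) are hypothetical; substitute the device files that ioscan reports on your host:

# Initialize the PV and create the Volume Group on the primary path
# (the first path specified becomes the primary link)
pvcreate /dev/rdsk/c10t0d1
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgcreate /dev/vg01 /dev/dsk/c10t0d1
# Add an alternate path (PVLink) to the same PV; make the first alternate
# path go to the same SVC node as the primary path
vgextend /dev/vg01 /dev/dsk/c20t0d1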

5.14.3 Co-existence of SDD and PV Links

If you want to multipath a VDisk with PVLinks while SDD is installed, you need to make sure that SDD does not configure a vpath for that VDisk. To do this, put the serial number of any VDisk that you want SDD to ignore in the /etc/vpathmanualexcl.cfg file. In the case of SAN boot, if you are booting from an SVC VDisk, when you install SDD (from Version 1.6 onward), SDD will automatically ignore the boot VDisk.

SAN boot support
SAN boot is supported on HP-UX by using PVLinks as the multipathing software on the boot device. You can use PVLinks or SDD to provide the multipathing support for the other devices that are attached to the system.

5.14.4 Using an SVC VDisk as a cluster lock disk

ServiceGuard does not provide a way to specify alternate links to a cluster lock disk. When using an SVC VDisk as your lock disk, if the path to FIRST_CLUSTER_LOCK_PV becomes unavailable, the HP node will not be able to access the lock disk if a 50-50 split in quorum occurs.

To ensure redundancy, when editing your Cluster Configuration ASCII file, make sure that the variable FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node in your cluster. For example, when configuring a two-node HP cluster, make sure that FIRST_CLUSTER_LOCK_PV on HP server A goes through a separate SVC node and a separate FC switch than FIRST_CLUSTER_LOCK_PV on HP server B.

5.14.5 Support for HP-UX with greater than eight LUNs

HP-UX will not recognize more than eight LUNs per port using the generic SCSI behavior. To accommodate this behavior, SVC supports a "type" associated with a host. This type can be set using the svctask mkhost command and modified using the svctask chhost command. The type can be set to hpux, or to generic, which is the default.
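
As a hedged illustration of setting the host type on the CLI (the host name ITSO_HP and the WWPN are hypothetical), defining a new host as type hpux, or changing an existing one, might look like this:

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name ITSO_HP -hbawwpn 10000000C9123456 -type hpux
IBM_2145:ITSO-CLS1:admin>svctask chhost -type hpux ITSO_HP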

When an initiator port that is a member of a host of type hpux accesses the SVC, the SVC behaves in the following way:
► Flat Space Addressing mode is used rather than the Peripheral Device Addressing mode.
► When an inquiry command for any page is sent to LUN 0 using Peripheral Device Addressing, it is reported as Peripheral Device Type 0Ch (controller).
► When any command other than an inquiry is sent to LUN 0 using Peripheral Device Addressing, SVC responds as an unmapped LUN 0 normally responds.
► When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0, or as 1Fh (Unknown Device Type) otherwise.
► When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device Addressing, the Peripheral qualifier returned is 001b, and the Peripheral Device Type is 1Fh (unknown or no device type). This response is in contrast to the behavior for generic hosts, where Peripheral Device Type 00h is returned.

5.15 Using SDDDSM, SDDPCM, and SDD Web interface

After installing the SDDDSM or SDD driver, specific commands are available. To open a command window for SDDDSM or SDD, from the desktop, select Start → Programs → Subsystem Device Driver → Subsystem Device Driver Management.

The command documentation for the various operating systems is available in the Multipath Subsystem Device Driver User Guides:
http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7000303&loc=en_US&cs=utf-8&lang=en


It is also possible to configure the multipath driver so that it offers a Web interface to run the commands. Before this interface can work, we need to configure it. Sddsrv does not bind to any TCP/IP port by default, but it allows port binding to be dynamically enabled or disabled.

For all platforms except Linux, the multipath driver package ships an sddsrv.conf template file named sample_sddsrv.conf. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, it is in the directory where SDD is installed.

You must create the sddsrv.conf file by copying the sample_sddsrv.conf file into the same directory and naming the copy sddsrv.conf. You can then dynamically change the port binding by modifying the parameters in the sddsrv.conf file and changing the values of Enableport and Loopbackbind to True, as sketched next.
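
On a UNIX host, the sequence might look like the following minimal sketch (the parameter names are the Enableport and Loopbackbind values just described; the surrounding contents of the file vary by SDD release, so treat the fragment as illustrative):

# Create sddsrv.conf from the shipped template
cp /etc/sample_sddsrv.conf /etc/sddsrv.conf
vi /etc/sddsrv.conf
# In sddsrv.conf, set, for example:
#   enableport = true      (bind sddsrv to its TCP/IP port)
#   loopbackbind = true    (bind to the loopback address only)

Setting Loopbackbind to True restricts the Web interface to the loopback address; set it to False if the interface must be reachable from other machines.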

Figure 5-51 shows the start window of the multipath driver Web interface.

Figure 5-51 SDD Web interface

5.16 Calculating the queue depth

The queue depth is the number of I/O operations that can be run in parallel on a device. It is usually possible to set a limit on the queue depth on the SDD paths (or equivalent) or the HBA. Ensure that you configure the servers to limit the queue depth on all of the paths to the SAN Volume Controller disks in configurations that contain a large number of servers or VDisks.

You might have a number of servers in the configuration that are idle, or that do not initiate the calculated quantity of I/O operations. If so, you might not need to limit the queue depth.
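
For illustration only: the SVC documentation of this era publishes a queue depth formula for homogeneous configurations along the lines of q = (n x 7000) / (v x p x c), where n is the number of nodes in the cluster, v is the number of VDisks, p is the number of paths per VDisk per host, and c is the number of hosts that can concurrently access each VDisk. Treat the formula and its constant as an assumption here, and verify them against the current product documentation before sizing. A quick shell calculation with hypothetical values:

# Hypothetical configuration: 4 nodes, 200 VDisks, 4 paths per VDisk
# per host, and 8 hosts concurrently accessing each VDisk
n=4; v=200; p=4; c=8
echo $(( (n * 7000) / (v * p * c) ))    # prints 4: limit each device to a queue depth of about 4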



5.17 Further sources of information

For more information about host attachment and configuration to the SVC, refer to the IBM System Storage SAN Volume Controller: Host Attachment Guide, SC26-7563.

For more information about SDDDSM or SDD configuration, refer to the IBM TotalStorage Multipath Subsystem Device Driver User's Guide, SC30-4096.

When looking for information about certain storage subsystems, this link is usually helpful:
http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp

5.17.1 Publications containing SVC storage subsystem attachment guidelines

It is beyond the intended scope of this book to describe the attachment to each and every subsystem that the SVC supports. Here is a short list of what we found especially useful in the writing of this book, and in the field:
► SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, describes in detail how you can tune your back-end storage to maximize your performance on the SVC:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf
► Chapter 14 in DS8000 Performance Monitoring and Tuning, SG24-7146, describes the guidelines and procedures to make the most of the performance that is available from your DS8000 storage subsystem when attached to the IBM SAN Volume Controller:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247146.pdf
► DS4000 Best Practices and Performance Tuning Guide, SG24-6363, explains how to connect and configure your storage for optimized performance on the SVC:
http://www.redbooks.ibm.com/redbooks/pdfs/sg246363.pdf
► IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659, discusses specific considerations for attaching the XIV Storage System to a SAN Volume Controller:
http://www.redbooks.ibm.com/redpieces/pdfs/sg247659.pdf


Chapter 6. Advanced Copy Services

In this chapter, we describe the IBM System Storage SAN Volume Controller (SVC) Advanced Copy Services: FlashCopy, Metro Mirror, and Global Mirror.

In Chapter 7, "SAN Volume Controller operations using the command-line interface" on page 339, we describe how to use the command-line interface with the Advanced Copy Services. In Chapter 8, "SAN Volume Controller operations using the GUI" on page 469, we describe how to use the GUI with the Advanced Copy Services.


6.1 FlashCopy

The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides the capability to perform a point-in-time copy of one or more virtual disks (VDisks). In the topics that follow, we describe how FlashCopy works on the SVC, and we present examples of configuring and utilizing FlashCopy.

FlashCopy is also known as point-in-time copy. You can use the FlashCopy technique to help solve the challenge of making a consistent copy of a data set that is constantly being updated. The FlashCopy source is frozen for a few seconds or less during the point-in-time copy process; it is able to accept I/O again as soon as the point-in-time copy bitmap is set up and the FlashCopy function is ready to intercept read/write requests in the I/O path. Although the background copy operation takes time, the resulting data at the target appears as though the copy were made instantaneously.

Because the copy is performed at the block level, it operates underneath the operating system and application caches. The image that is presented is "crash-consistent"; that is, it is similar to an image that is seen after a crash event, such as an unexpected power failure.

6.1.1 Business requirement

The business applications for FlashCopy are many and various. An important use is facilitating consistent backups of constantly changing data; in these instances, a FlashCopy is created to capture a point-in-time copy. The resulting image can be backed up to tertiary storage, such as tape. After the copied data is on tape, the FlashCopy target is redundant.

Various tasks can benefit from the use of FlashCopy. In the following sections, we describe the most common situations.

6.1.2 Moving and migrating data

When you need to move a consistent data set from one host to another host, FlashCopy can facilitate this action with a minimum of downtime for the host application that is dependent on the source VDisk.

It might be beneficial to quiesce the application on the host and flush the application and OS buffers so that the new VDisk contains data that is "clean" to the application. Without this step, the newly created VDisk data will still be usable by the application, but it will require recovery procedures (such as log replay) before it can be used. Quiescing the application ensures that the startup time against the mirrored copy is minimized.

The cache on the SVC is also flushed by the FlashCopy prestartfcmap command (see "Preparing" on page 275) before the FlashCopy is started. The data set that has been created on the FlashCopy target is immediately available, as is the source VDisk. A hedged CLI sketch of such a migration follows.
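
As an illustration only (the VDisk and mapping names are hypothetical; svctask mkfcmap, prestartfcmap, and startfcmap are the commands discussed later in this chapter), a minimal migration sequence might look like this:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source SRC_VDISK -target TGT_VDISK -name MIGR_MAP
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MIGR_MAP
IBM_2145:ITSO-CLS1:admin>svctask startfcmap MIGR_MAP

The prestartfcmap step flushes the SVC cache and brings the mapping to the prepared state; after startfcmap, the target VDisk can be mapped to the new host while the background copy proceeds.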



6.1.3 Backup

FlashCopy does not affect your backup time, but it allows you to create a point-in-time consistent data set (across VDisks) with a minimum of downtime for your source host. The FlashCopy target can then be mounted on another host (or the backup server) and backed up. Using this procedure, the backup speed becomes less important, because the backup time does not require downtime for the host that depends on the source VDisks.

6.1.4 Restore

You can keep periodically created FlashCopy targets online to provide extremely fast restores of specific files from the point-in-time consistent data set that is revealed on the FlashCopy targets. You simply copy the specific files back to the source VDisk when a restore is needed.

6.1.5 Application testing

You can test new applications and new operating system releases against a FlashCopy of your production data. The risk of data corruption is eliminated, and your application does not need to be taken offline for an extended period of time to perform the copy of the data.

Data mining is a good example of an area where FlashCopy can help you. Data mining can now extract data without affecting your application.

6.1.6 SVC FlashCopy features

The FlashCopy function in SVC supports these features:

► The target is the time-zero copy of the source (known as FlashCopy mapping targets).
► The source VDisk and target VDisk are available (almost) immediately.
► One source VDisk can have up to 256 target VDisks, at the same or various points in time.
► Consistency groups are supported to enable FlashCopy across multiple VDisks.
► The target VDisk can be updated independently of the source VDisk.
► Bitmaps governing I/O redirection (the I/O indirection layer) are maintained in both nodes of the SVC I/O Group to prevent a single point of failure.
► FlashCopy mappings can be automatically withdrawn after the completion of the background copy.
► FlashCopy consistency groups can be automatically withdrawn after the completion of the background copy.
► Multiple Target FlashCopy: FlashCopy now supports up to 256 target copies from a single source VDisk.
► Space-Efficient FlashCopy: Space-Efficient FlashCopy uses disk space only for changes between the source and target data, not for the entire capacity of a VDisk copy.
► FlashCopy licensing: FlashCopy previously was licensed by the source and target virtual capacity. It is now licensed only by source virtual capacity.
► Incremental FlashCopy: A mapping that is created with the incremental flag copies only the data that has been changed on the source or the target since the previous copy completed. Incremental FlashCopy can substantially reduce the time that is required to re-create an independent image.



► Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete.
► Cascaded FlashCopy: The target VDisk of a FlashCopy mapping can be the source VDisk in another FlashCopy mapping.

6.2 Reverse FlashCopy

With SVC Version 5.1.x, Reverse FlashCopy support is available. Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. It supports multiple targets and, thus, multiple rollback points.

A key advantage of the SVC Multiple Target Reverse FlashCopy function is that the reverse FlashCopy does not destroy the original target. Thus, any process using the target, such as a tape backup process, is not disrupted. Multiple recovery points can be tested.

SVC is also unique in that an optional copy of the source VDisk can be made before starting the reverse copy operation in order to diagnose problems.

When a user suffers a disaster and needs to restore from an on-disk backup, the user follows this procedure (see the CLI sketch that follows):

1. (Optional) Create a new target VDisk (VDisk Z) and FlashCopy the production VDisk (VDisk X) onto the new target for later problem analysis.
2. Create a new FlashCopy map with the backup to be restored (VDisk Y or VDisk W) as the source VDisk and VDisk X as the target VDisk, if this map does not already exist.
3. Start the FlashCopy map (VDisk Y → VDisk X) with the new -restore option to copy the backup data onto the production disk.
4. The production disk is instantly available with the backup data.

Figure 6-1 on page 259 shows an example of Reverse FlashCopy.
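As an illustration only, the following CLI sketch shows how steps 2 and 3 might look. The map name Restore_Map is hypothetical; the -restore flag on startfcmap is the option that is referenced in step 3:

svctask mkfcmap -source VDisk_Y -target VDisk_X -name Restore_Map
svctask startfcmap -prep -restore Restore_Map
(the production VDisk X is instantly available with the backup data)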



Figure 6-1 Reverse FlashCopy

Regardless of whether the initial FlashCopy map (VDisk X → VDisk Y) is incremental, the reverse operation copies only the modified data.

Consistency groups are reversed by creating a set of new “reverse” FlashCopy maps and adding them to a new “reverse” consistency group. A consistency group cannot contain more than one FlashCopy map with the same target VDisk.

6.2.1 FlashCopy and Tivoli Storage Manager

The management of many large Reverse FlashCopy consistency groups is a complex task without a tool for assistance.

IBM Tivoli FlashCopy Manager V2.1 is a new product that improves the interlock between SVC and Tivoli Storage Manager for Advanced Copy Services, as well.

Figure 6-2 on page 260 shows the Tivoli Storage Manager for Advanced Copy Services features.



Figure 6-2 Tivoli Storage Manager for Advanced Copy Services features

Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli FlashCopy Manager, you can coordinate and automate host preparation steps before issuing FlashCopy start commands to ensure that a consistent backup of the application is made. You can put databases into hot backup mode and, before starting FlashCopy, flush the filesystem cache.

FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy and provides a simple interface to the “reverse” operation.

Figure 6-3 on page 261 shows the FlashCopy Manager features.



Figure 6-3 Tivoli Storage Manager FlashCopy Manager features

It is beyond the intended scope of this book to describe Tivoli Storage Manager FlashCopy Manager.

6.3 How FlashCopy works

FlashCopy works by defining a FlashCopy mapping that consists of one source VDisk together with one target VDisk. You can define multiple FlashCopy mappings, and point-in-time consistency can be observed across multiple FlashCopy mappings using consistency groups. See “Consistency group with Multiple Target FlashCopy” on page 265.

When FlashCopy is started, it makes a copy of a source VDisk to a target VDisk, and the original contents of the target VDisk are overwritten. When the FlashCopy operation is started, the target VDisk presents the contents of the source VDisk as they existed at the single point in time at which FlashCopy started. This operation is also referred to as a time-zero copy (T0).

When a FlashCopy is started, the source and target VDisks are instantaneously available. When FlashCopy starts, bitmaps are created to govern and redirect I/O to the source or target VDisk, depending on where the requested block is located, while the blocks are copied in the background from the source VDisk to the target VDisk.

For more details about background copy, see 6.4.5, “Grains and the FlashCopy bitmap” on page 266.

Figure 6-4 on page 262 illustrates the redirection of the host I/O toward the source VDisk and the target VDisk.



Figure 6-4 Redirection of host I/O

6.4 Implementing SVC FlashCopy

In the topics that follow, we describe how FlashCopy is implemented in the SVC.

6.4.1 FlashCopy mappings

In the SVC, FlashCopy occurs between a source VDisk and a target VDisk. The source and target VDisks must be the same size. The minimum granularity that the SVC supports for FlashCopy is an entire VDisk; it is not possible to use FlashCopy to copy only part of a VDisk. The source and target VDisks must both belong to the same SVC cluster, but they can be in separate I/O Groups within that cluster. SVC FlashCopy associates a source VDisk with a target VDisk in a FlashCopy mapping.

VDisks that are members of a FlashCopy mapping cannot have their size increased or decreased while they are members of the FlashCopy mapping. The SVC supports the creation of enough FlashCopy mappings to allow every VDisk to be a member of a FlashCopy mapping.

A FlashCopy mapping is the act of creating a relationship between a source VDisk and a target VDisk. FlashCopy mappings can be either stand-alone or members of a consistency group. You can perform the actions of preparing, starting, or stopping on either a stand-alone mapping or a consistency group (see the sketch after the rule note).

Rule: After a mapping is in a consistency group, you can only operate on the group, and you can no longer prepare, start, or stop the individual mapping.
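As a minimal sketch (the VDisk and mapping names are hypothetical), a stand-alone mapping is created and driven with these commands:

svctask mkfcmap -source VDisk_src -target VDisk_tgt -name FCMap_1
svctask prestartfcmap FCMap_1
(wait until svcinfo lsfcmap FCMap_1 reports the prepared state)
svctask startfcmap FCMap_1
svcinfo lsfcmapprogress FCMap_1
(the last command monitors the background copy progress)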

Figure 6-5 on page 263 illustrates the concept of FlashCopy mapping.



Figure 6-5 FlashCopy mapping

6.4.2 Multiple Target FlashCopy

SVC supports copying up to 256 target VDisks from a single source VDisk. Each copy is managed by a unique mapping. In general, each mapping acts independently and is not affected by other mappings sharing the same source VDisk. Figure 6-6 illustrates a view of a Multiple Target FlashCopy implementation.

Figure 6-6 Multiple Target FlashCopy implementation

Figure 6-6 shows four targets and mappings taken from a single source. It also shows that there is an ordering to the targets: Target 1 is the oldest (as measured from the time it was started) through to Target 4, which is the newest. The ordering is important because of the way in which data is copied when multiple target VDisks are defined and because of the dependency chain that results. A write to the source VDisk does not cause its data to be copied to all of the targets; instead, it is copied to the newest target VDisk only (Target 4 in Figure 6-6). The older targets refer to newer targets first before referring to the source.

From the point of view of an intermediate target disk (neither the oldest nor the newest), it treats the set of newer target VDisks and the true source VDisk as a type of composite source. It treats all older VDisks as a kind of target (and behaves like a source to them). If the mapping for an intermediate target VDisk shows 100% progress, its target VDisk contains a complete set of data. In this case, mappings treat the set of newer target VDisks, up to and including the 100% progress target, as a form of composite source. A dependency relationship exists between a particular target and all newer targets (up to and including a target that shows 100% progress) that share the same source until all data has been copied to this target and all older targets.

You can read more information about Multiple Target FlashCopy in 6.4.6, “Interaction and dependency between Multiple Target FlashCopy mappings” on page 267.



6.4.3 Consistency groups

Consistency groups address the issue where the objective is to preserve data consistency across multiple VDisks, because the applications have related data that spans multiple VDisks. A requirement for preserving the integrity of data that is being written is to ensure that “dependent writes” are executed in the application’s intended sequence. Because the SVC provides point-in-time semantics, a self-consistent data set is obtained.

FlashCopy mappings can be members of a consistency group, or they can be operated in a stand-alone manner, that is, not as part of a consistency group.

FlashCopy commands can be issued to a FlashCopy consistency group, which affects all FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if it is not part of a defined FlashCopy consistency group.

Figure 6-7 illustrates a consistency group consisting of two FlashCopy mappings.

Figure 6-7 FlashCopy consistency group

Dependent writes

To illustrate why it is crucial to use consistency groups when a data set spans multiple VDisks, consider the following typical sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update is to be performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step. However, if the database log (updates 1 and 3) and the database itself (update 2) are on separate VDisks and a FlashCopy mapping is started during this update, you need to exclude the possibility that the database itself is copied slightly before the database log. Otherwise, the target VDisks see writes (1) and (3) but not (2), because the database was copied before the second write completed.

In this case, if the database was restarted using the backup that was made from the FlashCopy target disks, the database log indicates that the transaction had completed successfully when, in fact, it had not, because the FlashCopy of the VDisk with the database file was started (the bitmap was created) before the write was on the disk. Therefore, the transaction is lost, and the integrity of the database is in question.



To overcome the issue of dependent writes across VDisks and to create a consistent image of the client data, it is necessary to perform the FlashCopy operation on multiple VDisks as an atomic operation. To achieve this condition, the SVC supports the concept of consistency groups.

A FlashCopy consistency group can contain up to 512 FlashCopy mappings (up to the maximum number of FlashCopy mappings that are supported by the SVC cluster). FlashCopy commands can then be issued to the FlashCopy consistency group and, thereby, simultaneously to all of the FlashCopy mappings that are defined in the consistency group. For example, when issuing a FlashCopy start command to the consistency group, all of the FlashCopy mappings in the consistency group are started at the same time, resulting in a point-in-time copy that is consistent across all of the FlashCopy mappings that are contained in the consistency group.
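As a minimal sketch (the group and VDisk names are hypothetical), a two-mapping consistency group for the database example is created and started atomically with these commands:

svctask mkfcconsistgrp -name FCCG_DB
svctask mkfcmap -source DB_Data -target DB_Data_T0 -consistgrp FCCG_DB
svctask mkfcmap -source DB_Log -target DB_Log_T0 -consistgrp FCCG_DB
svctask startfcconsistgrp -prep FCCG_DB
(the -prep flag prepares and then starts all maps in the group together)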

Consistency group with Multiple Target FlashCopy

It is important to note that a consistency group aggregates FlashCopy mappings, not VDisks. Thus, where a source VDisk has multiple FlashCopy mappings, they can be in the same or separate consistency groups. If a particular VDisk is the source VDisk for multiple FlashCopy mappings, you might want to create separate consistency groups to separate each mapping of the same source VDisk. If the source VDisk with multiple target VDisks is in the same consistency group, the result is that, when the consistency group is started, multiple identical copies of the VDisk are created. However, this result might be what the user wants. For example, the user might want to run multiple simulations on the same set of source data. If so, this approach is one way of obtaining identical sets of source data.

Maximum configurations

Table 6-1 shows the FlashCopy properties and maximum configurations.

Table 6-1 FlashCopy properties and maximum configuration

► FlashCopy targets per source: 256. This maximum is the maximum number of FlashCopy mappings that can exist with the same source VDisk.
► FlashCopy mappings per cluster: 4,096. The number of mappings is no longer limited by the number of VDisks in the cluster, and so, the FlashCopy component limit applies.
► FlashCopy consistency groups per cluster: 127. This maximum is an arbitrary limit that is policed by the software.
► FlashCopy VDisk capacity per I/O Group: 1,024 TB. This maximum is a limit on the quantity of FlashCopy mappings using bitmap space from this I/O Group. This maximum configuration consumes all 512 MB of bitmap space for the I/O Group and allows no Metro and Global Mirror bitmap space. The default is 40 TB.
► FlashCopy mappings per consistency group: 512. This limit is due to the time that is taken to prepare a consistency group with a large number of mappings.



6.4.4 FlashCopy indirection layer

The FlashCopy indirection layer governs the I/O to both the source and target VDisks when a FlashCopy mapping is started, which is done using a FlashCopy bitmap. The purpose of the FlashCopy indirection layer is to enable both the source and target VDisks for read and write I/O immediately after the FlashCopy has been started.

To illustrate how the FlashCopy indirection layer works, we look at what happens when a FlashCopy mapping is prepared and subsequently started. When a FlashCopy mapping is prepared and started, the following sequence is applied:

1. Flush the write data in the cache onto the source VDisk or VDisks that are part of a consistency group.
2. Put the cache into write-through mode on the source VDisks.
3. Discard the cache for the target VDisks.
4. Establish a sync point on all of the source VDisks in the consistency group (creating the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source and target VDisks.
6. Enable the cache on both the source and target VDisks.

FlashCopy provides the semantics of a point-in-time copy using the indirection layer, which intercepts the I/Os that are targeted at either the source or target VDisks. The act of starting a FlashCopy mapping causes this indirection layer to become active in the I/O path, which occurs as an atomic command across all FlashCopy mappings in the consistency group. The indirection layer makes a decision about each I/O. This decision is based upon these factors:

► The VDisk and the logical block address (LBA) to which the I/O is addressed
► Its direction (read or write)
► The state of an internal data structure, the FlashCopy bitmap

The indirection layer either allows the I/O to go through to the underlying storage, redirects the I/O from the target VDisk to the source VDisk, or stalls the I/O while it arranges for data to be copied from the source VDisk to the target VDisk. To explain in more detail which action is applied for each I/O, we first look at the FlashCopy bitmap.

6.4.5 Grains and the FlashCopy bitmap

When data is copied between VDisks by FlashCopy, either from source to target or from target to target, it is copied in units of address space known as grains. The grain size is 256 KB or 64 KB. The FlashCopy bitmap contains one bit for each grain. The bit records whether the associated grain has yet been split by copying the grain from the source to the target.

Source reads
Reads of the source are always passed through to the underlying source disk.

Target reads
In order for FlashCopy to process a read from the target disk, FlashCopy must consult its bitmap. If the data being read has already been copied to the target, the read is sent to the target disk. If it has not, the read is sent to the source VDisk or, possibly, to another target VDisk if multiple FlashCopy mappings exist for the source VDisk. Clearly, this algorithm requires that, while this read is outstanding, no writes are allowed to execute that change the data being read. The SVC satisfies this requirement by using a cluster-wide locking scheme.

Writes to the source or target
Where writes occur to the source or target in an area (grain) that has not yet been copied, these writes are usually stalled while a copy operation is performed to copy data from the source to the target, to maintain the illusion that the target contains its own copy. A specific optimization is performed where an entire grain is written to the target VDisk. In this case, the new grain contents are written directly to the target VDisk. If this write succeeds, the grain is marked as split in the FlashCopy bitmap without a copy from the source to the target having been performed. If the write fails, the grain is not marked as split.

The rate at which the grains are copied across from the source VDisk to the target VDisk is called the copy rate. By default, the copy rate is 50, although you can alter this rate. For more information about copy rates, see 6.4.14, “Background copy” on page 277.

The FlashCopy indirection layer algorithm
Imagine the FlashCopy indirection layer as the I/O traffic cop when a FlashCopy mapping is active. The I/O is intercepted and handled according to whether it is directed at the source VDisk or at the target VDisk, depending on the nature of the I/O (read or write) and the state of the grain (whether it has been copied).

In Figure 6-8, we illustrate how the background copy runs while I/Os are handled according to the indirection layer algorithm.

Figure 6-8 I/O processing with FlashCopy

6.4.6 Interaction and dependency between Multiple Target FlashCopy mappings

Figure 6-9 on page 268 represents a set of four FlashCopy mappings that share a common source. The FlashCopy mappings target VDisks Target 0, Target 1, Target 2, and Target 3.



Figure 6-9 Interactions between MTFC mappings

Target 0 is not dependent on the source, because it has completed copying. Target 0 has two dependent mappings (Target 1 and Target 2).

Target 1 is dependent upon Target 0. It will remain dependent until all of Target 1 has been copied. Target 2 is dependent on it, because Target 2 is 20% copy complete. After all of Target 1 has been copied, it can then move to the Idle_copied state.

Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target 2 has been copied. No target is dependent on Target 2, so when all of the data has been copied to Target 2, it can move to the Idle_copied state.

Target 3 has completed copying, so it is not dependent on any other maps.

Write to target VDisk
A write to an intermediate or newest target VDisk must consider the state of the grain within its own mapping, as well as that of the grain of the next oldest mapping:

► If the grain of the next oldest mapping has not yet been copied, it must be copied before the write is allowed to proceed, in order to preserve the contents of the next oldest mapping. The data written to the next oldest mapping comes from a target or the source.
► If the grain in the target being written has not yet been copied, the grain is copied from the oldest already-copied grain in the mappings that are newer than it, or from the source if none are already copied. After this copy has been done, the write can be applied to the target.

Read to target VDisk
If the grain being read has been split, the read simply returns data from the target being read. If the read is to an uncopied grain on an intermediate target VDisk, each of the newer mappings is examined in turn to see whether the grain has been split. The read is surfaced from the first split grain found, or from the source VDisk if none of the newer mappings has a split grain.



Stopping the copy process
An important scenario arises when a stop command is delivered to a mapping for a target that has dependent mappings.

After a mapping is in the Stopped state, it can be deleted or restarted, which must not be allowed if there are still grains that hold data upon which other mappings depend. To avoid this situation, when a mapping receives a stopfcmap or stopfcconsistgrp command, rather than immediately moving to the Stopped state, it enters the Stopping state. An automatic copy process is driven that finds and copies all of the data that is uniquely held on the target VDisk of the mapping that is being stopped to the next oldest mapping that is in the Copying state.

Stopping the copy process: The stopping copy process can be ongoing for several mappings sharing the same source at the same time. At the completion of this process, the mapping automatically makes an asynchronous state transition to the Stopped state, or to the Idle_copied state if the mapping was in the Copying state with progress = 100%.

For example, if the mapping associated with Target 0 was issued a stopfcmap or stopfcconsistgrp command, Target 0 enters the Stopping state while a process copies the data of Target 0 to Target 1. After all of the data has been copied, Target 0 enters the Stopped state, and Target 1 is no longer dependent upon Target 0, but Target 1 remains dependent on Target 2.
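As a minimal sketch (the mapping name FCMap_T0 is hypothetical), stopping a mapping and observing the transition looks like this:

svctask stopfcmap FCMap_T0
svcinfo lsfcmap FCMap_T0
(the status shows stopping until the automatic copy completes, then stopped or idle_copied)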

6.4.7 Summary of the FlashCopy indirection layer algorithm

Table 6-2 summarizes the indirection layer algorithm.

Table 6-2 Summary table of the FlashCopy indirection layer algorithm

Source VDisk accessed, grain not yet split (copied):
► Read: Read from the source VDisk.
► Write: Copy the grain to the most recently started target for this source, then write to the source.

Source VDisk accessed, grain split:
► Read: Read from the source VDisk.
► Write: Write to the source VDisk.

Target VDisk accessed, grain not yet split (copied):
► Read: If any newer targets exist for this source in which this grain has already been copied, read from the oldest of these targets. Otherwise, read from the source.
► Write: Hold the write. Check the dependency target VDisks to see whether the grain is split. If the grain is not already copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.

Target VDisk accessed, grain split:
► Read: Read from the target VDisk.
► Write: Write to the target VDisk.

6.4.8 Interaction with the cache

This copy-on-write process can introduce significant latency into write operations. In order to isolate the active application from this latency, the FlashCopy indirection layer is placed logically beneath the cache.



Therefore, the copy latency is typically seen only when data is destaged from the cache, rather than on write operations from an application, which would otherwise be blocked waiting for the copy to complete.

In Figure 6-10, we illustrate the logical placement of the FlashCopy indirection layer.

Figure 6-10 Logical placement of the FlashCopy indirection layer

6.4.9 FlashCopy rules

With SVC 5.1, the maximum number of supported FlashCopy mappings has been increased to 8,192 per SVC cluster. Consider the following rules when defining FlashCopy mappings:

► There is a one-to-one mapping of the source VDisk to the target VDisk.
► One source VDisk can have up to 256 target VDisks.
► The source and target VDisks can be in separate I/O Groups of the same cluster.
► The minimum FlashCopy granularity is the entire VDisk.
► The source and target must be exactly equal in size.
► The size of the source VDisk and the target VDisk cannot be altered (increased or decreased) after the FlashCopy mapping is created.
► There is a per-I/O Group limit of 1,024 TB on the quantity of source VDisk and target VDisk capacity that can participate in FlashCopy mappings.

6.4.10 FlashCopy and image mode disks

You can use FlashCopy with an image mode VDisk. Because the source and target VDisks must be exactly the same size when creating a FlashCopy mapping, you must create a VDisk with the exact same size as the image mode VDisk. To accomplish this task, use the svcinfo lsvdisk -bytes VDiskName command. The size in bytes is then used to create the VDisk to use in the FlashCopy mapping.

In Example 6-1 on page 271, we list the size of the Image_VDisk_A VDisk. Subsequently, the VDisk_A_copy VDisk is created, specifying the same size.



Example 6-1 Listing the size of a VDisk in bytes and creating a VDisk of equal size

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_VDisk_A
id 8
name Image_VDisk_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG_Image
capacity 36.0GB
type image
.
.
.
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name VDisk_A_copy
-mdiskgrp MDG_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created

Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize VDisk commands to modify the size of the VDisk. See 7.4.10, “Expanding a VDisk” on page 367 and 7.4.16, “Shrinking a VDisk” on page 372 for more information.

You can use an image mode VDisk as either a FlashCopy source VDisk or target VDisk.
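Continuing Example 6-1 as an illustration only (the mapping name Image_Map is hypothetical), the image mode VDisk and its equally sized copy can then be associated in a mapping:

svctask mkfcmap -source Image_VDisk_A -target VDisk_A_copy -name Image_Map
svctask startfcmap -prep Image_Map
(the -prep flag performs the prepare step implicitly before starting)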

6.4.11 FlashCopy mapping events

In this section, we explain the series of events that modify the states of a FlashCopy mapping. In Figure 6-11 on page 272, the FlashCopy mapping state diagram shows an overview of the states that apply to a FlashCopy mapping. We describe the mapping events in Table 6-3 on page 272.

Overview of a FlashCopy sequence of events:

1. Associate the source data set with a target location (one or more source and target VDisks).
2. Create a FlashCopy mapping for each source VDisk to the corresponding target VDisk. The target VDisk must be equal in size to the source VDisk.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
   a. Flush the cache for the source.
   b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
   a. Pause I/O (briefly) on the source.
   b. Resume I/O on the source.
   c. Start I/O on the target.



Figure 6-11 FlashCopy mapping state diagram

Table 6-3 Mapping events

Create: A new FlashCopy mapping is created between the specified source VDisk and the specified target VDisk. The operation fails if any of the following conditions is true:
► For SAN Volume Controller software Version 4.1.0 or earlier, the source or target VDisk is already a member of a FlashCopy mapping.
► For SAN Volume Controller software Version 4.2.0 or later, the source or target VDisk is already a target VDisk of a FlashCopy mapping.
► For SAN Volume Controller software Version 4.2.0 or later, the source VDisk is already a member of 16 FlashCopy mappings.
► For SAN Volume Controller software Version 4.3.0 or later, the source VDisk is already a member of 256 FlashCopy mappings.
► The node has insufficient bitmap memory.
► The source and target VDisk sizes differ.

Prepare: The prestartfcmap or prestartfcconsistgrp command is directed either to a consistency group, for FlashCopy mappings that are members of a normal consistency group, or to the mapping name, for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or prestartfcconsistgrp command places the FlashCopy mapping into the Preparing state.

Important: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target VDisk, because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping.


Flush done: The FlashCopy mapping automatically moves from the Preparing state to the Prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start: When all of the FlashCopy mappings in a consistency group are in the Prepared state, the FlashCopy mappings can be started. To preserve the cross-volume consistency group, the start of all of the FlashCopy mappings in the consistency group must be synchronized correctly with respect to I/Os that are directed at the VDisks by using the startfcmap or startfcconsistgrp command. The following actions occur during the running of the startfcmap or startfcconsistgrp command:
► New reads and writes to all source VDisks in the consistency group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
► After all FlashCopy mappings in the consistency group are paused, the internal cluster state is set to allow FlashCopy operations.
► After the cluster state is set for all FlashCopy mappings in the consistency group, read and write operations continue on the source VDisks.
► The target VDisks are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target VDisks.

Modify: You can modify the following FlashCopy mapping properties:
► FlashCopy mapping name
► Clean rate
► Consistency group
► Copy rate (for background copy)
► Automatic deletion of the mapping when the background copy is complete

Stop: There are two separate mechanisms by which a FlashCopy mapping can be stopped:
► You have issued a command.
► An I/O error has occurred.

Delete: This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the Stopped state, the force flag must be used.

Flush failed: If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the Stopped state.

Copy complete: After all of the source data has been copied to the target and there are no dependent mappings, the state is set to Copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is automatically deleted. If this option is not specified, the FlashCopy mapping is not automatically deleted and can be reactivated by preparing and starting again.

Bitmap online/offline: The node has failed.



6.4.12 FlashCopy mapping states

In this section, we explain the states of a FlashCopy mapping in more detail.

Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping exists between the source and target, but the source and the target behave as independent VDisks in this state.

Copying
The FlashCopy indirection layer governs all I/O to the source and target VDisks while the background copy is running. Reads and writes are executed on the target as though the contents of the source were instantaneously copied to the target during the startfcmap or startfcconsistgrp command. The source and target can be independently updated. Internally, the target depends on the source for certain tracks. Read and write caching is enabled on the source and the target.

Stopped
The FlashCopy was stopped either by a user command or by an I/O error. When a FlashCopy mapping is stopped, any useful data in the target VDisk is lost. Therefore, while the FlashCopy mapping is in this state, the target VDisk is in the Offline state. To regain access to the target, the mapping must be started again (the previous point-in-time will be lost) or the FlashCopy mapping must be deleted. The source VDisk is accessible, and read/write caching is enabled for the source. In the Stopped state, a mapping can be prepared again, or it can be deleted.

Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior of the target VDisk depends on whether the background copy process had completed while the mapping was in the Copying state. If the copy process had completed, the target VDisk remains online while the stopping copy process completes. If the copy process had not completed, data in the cache is discarded for the target VDisk. The target VDisk is taken offline, and the stopping copy process runs. After the data has been copied, a stop complete asynchronous event notification is issued. The mapping moves to the Idle_or_copied state if the background copy has completed, or to the Stopped state if the background copy has not completed. The source VDisk remains accessible for I/O.

Suspended
The target has been “flashed” from the source and was in the Copying or Stopping state. Access to the metadata has been lost, and, as a consequence, both the source and target VDisks are offline. The background copy process has been halted. When the metadata becomes available again, the FlashCopy mapping returns to the Copying or Stopping state, access to the source and target VDisks is restored, and the background copy or stopping process is resumed. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache, consuming resources, until the FlashCopy mapping leaves the Suspended state.



Preparing
Because the FlashCopy function is placed logically beneath the cache to anticipate any write latency problem, it demands that no read or write data for the target, and no write data for the source, is in the cache at the time that the FlashCopy operation is started. This design ensures that the resulting copy is consistent.

Performing the necessary cache flush as part of the startfcmap or startfcconsistgrp command would unnecessarily delay the I/Os that are received after the startfcmap or startfcconsistgrp command is executed, because these I/Os must wait for the cache flush to complete. To overcome this problem, SVC FlashCopy supports the prestartfcmap or prestartfcconsistgrp command, which prepares for a FlashCopy start while still allowing I/Os to continue to the source VDisk.

In the Preparing state, the FlashCopy mapping is prepared by the following steps:

1. Flushing any modified write data associated with the source VDisk from the cache. Read data for the source is left in the cache.
2. Placing the cache for the source VDisk into write-through mode, so that subsequent writes wait until data has been written to disk before completing the write command that is received from the host.
3. Discarding any read or write data that is associated with the target VDisk from the cache.

While in this state, writes to the source VDisk experience additional latency, because the cache is operating in write-through mode. While the FlashCopy mapping is in this state, the target VDisk is reported as online, but it will not perform reads or writes. These reads and writes are failed by the SCSI front end.

Before starting the FlashCopy mapping, it is important that any cache at the host level, for example, the buffers in the host OSs or applications, is also instructed to flush any outstanding writes to the source VDisk.

Prepared
When in the Prepared state, the FlashCopy mapping is ready to perform a start. While the FlashCopy mapping is in this state, the target VDisk is in the Offline state. In the Prepared state, writes to the source VDisk experience additional latency, because the cache is operating in write-through mode.

Summary of FlashCopy mapping states
Table 6-4 on page 276 lists the various FlashCopy mapping states and the corresponding states of the source and target VDisks.



Table 6-4 FlashCopy mapping state summary

► Idling/Copied: source Online with write-back cache; target Online with write-back cache.
► Copying: source Online with write-back cache; target Online with write-back cache.
► Stopped: source Online with write-back cache; target Offline (cache state N/A).
► Stopping: source Online with write-back cache; target Online if the copy is complete or Offline if the copy is not complete (cache state N/A).
► Suspended: source Offline with write-back cache; target Offline (cache state N/A).
► Preparing: source Online with write-through cache; target Online but not accessible (cache state N/A).
► Prepared: source Online with write-through cache; target Online but not accessible (cache state N/A).

6.4.13 Space-efficient FlashCopy

You can have a mix of space-efficient and fully allocated VDisks in FlashCopy mappings. One common combination is a fully allocated source with a space-efficient target, which allows the target to consume a smaller amount of real storage than the source.

For the best performance, the grain size of the Space-Efficient VDisk must match the grain size of the FlashCopy mapping. However, if the grain sizes differ, the mapping still proceeds.

Consider the following information when you create your FlashCopy mappings (see the CLI sketch after this list):

► If you are using a fully allocated source with a space-efficient target, disable the background copy and cleaning mode on the FlashCopy map by setting both the background copy rate and cleaning rate to zero. Otherwise, if these features are enabled, all of the source is copied onto the target VDisk, which causes the Space-Efficient VDisk either to go offline or to grow as large as the source.
► If you are using only a space-efficient source, only the space that is used on the source VDisk is copied to the target VDisk. For example, if the source VDisk has a virtual size of 800 GB and a real size of 100 GB, of which 50 GB has been used, only the used 50 GB is copied.
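As an illustration only (the names, sizes, and percentages are hypothetical), the following commands create a space-efficient target and a mapping with both the background copy rate and the cleaning rate set to zero, per the first point in the preceding list:

svctask mkvdisk -name VDisk_SE_tgt -mdiskgrp MDG_1 -iogrp 0 -size 800 -unit gb -rsize 2% -autoexpand -grainsize 64
svctask mkfcmap -source VDisk_src -target VDisk_SE_tgt -name SE_Map -copyrate 0 -cleanrate 0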

Multiple space-efficient targets for FlashCopy
The SVC implementation of Multiple Target FlashCopy ensures that, when new data is written to a source or target, that data is copied to zero or one other targets. A consequence of this implementation is that Space-Efficient VDisks can be used in conjunction with Multiple Target FlashCopy without causing allocations to occur on multiple targets when data is written to the source.

Space-efficient incremental FlashCopy
The implementation of Space-Efficient VDisks does not preclude the use of incremental FlashCopy on the same VDisks. It does not make sense to have a fully allocated source VDisk and to use incremental FlashCopy to copy this fully allocated source VDisk to a space-efficient target VDisk; however, this combination is possible.

276 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong><br />

N/A<br />

N/A<br />

N/A


Two more interesting combinations of incremental FlashCopy and Space-Efficient VDisks are:

► A space-efficient source VDisk can be incrementally copied using FlashCopy to a space-efficient target VDisk. Whenever the FlashCopy is retriggered, only data that has been modified is recopied to the target. Note that if space is allocated on the target because of I/O to the target VDisk, this space is not reclaimed when the FlashCopy is retriggered.

► A fully allocated source VDisk can be incrementally copied using FlashCopy to another fully allocated VDisk at the same time as being copied to multiple space-efficient targets (taken at separate points in time). This combination allows a single full backup to be kept for recovery purposes and separates the backup workload from the production workload, while at the same time allowing older space-efficient backups to be retained.

Migration from and to a Space-Efficient VDisk

There are various scenarios to migrate a non-Space-Efficient VDisk to a Space-Efficient VDisk. We describe migration fully in Chapter 9, “Data migration”.

6.4.14 Background copy

The FlashCopy background copy feature enables you to copy all of the data in a source VDisk to the corresponding target VDisk. Without background copy, only data that changes on the source VDisk is copied to the target VDisk. The benefit of using a FlashCopy mapping with background copy enabled is that the target VDisk becomes a real clone of the FlashCopy mapping source VDisk, independent from the source VDisk.

The background copy rate is a property of a FlashCopy mapping that is expressed as a value between 0 and 100. It can be changed in any FlashCopy mapping state and can differ between the mappings of one consistency group. A value of 0 disables background copy.

The relationship of the background copy rate value to the attempted number of grains to be split (copied) per second is shown in Table 6-5.

Table 6-5 Background copy rate

Value      Data copied per second   Grains per second
1 - 10     128 KB                   0.5
11 - 20    256 KB                   1
21 - 30    512 KB                   2
31 - 40    1 MB                     4
41 - 50    2 MB                     8
51 - 60    4 MB                     16
61 - 70    8 MB                     32
71 - 80    16 MB                    64
81 - 90    32 MB                    128
91 - 100   64 MB                    256

The grains per second numbers represent the maximum number of grains that the SVC will copy per second, assuming that the bandwidth to the managed disks (MDisks) can accommodate this rate.
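The copy rate can be set when the mapping is created and changed at any time afterward with the chfcmap command. The following sketch assumes a hypothetical mapping named fcmap0:

   svctask chfcmap -copyrate 50 fcmap0

With a value of 50, and using Table 6-5, the SVC attempts to split up to 8 grains (2 MB) per second for this mapping.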



6.4.15 Syn<strong>the</strong>sis<br />

If <strong>the</strong> SVC is unable to achieve <strong>the</strong>se copy rates because of insufficient bandwidth from <strong>the</strong><br />

SVC nodes to <strong>the</strong> MDisks, background copy I/O contends for resources on an equal basis<br />

with <strong>the</strong> I/O that is arriving from <strong>the</strong> hosts. Both background copy I/O and I/O that is arriving<br />

from <strong>the</strong> hosts tend to see an increase in latency and a consequential reduction in<br />

throughput. Both background copy and foreground I/O continue to make forward progress,<br />

and do not stop, hang, or cause <strong>the</strong> node to fail. The background copy is performed by both<br />

nodes of <strong>the</strong> I/O Group in which <strong>the</strong> source VDisk resides.<br />

The FlashCopy functionality in SVC simply creates copy VDisks. All of <strong>the</strong> data in <strong>the</strong> source<br />

VDisk is copied to <strong>the</strong> destination VDisk, including operating system control information, as<br />

well as application data and metadata.<br />

Certain operating systems are unable to use FlashCopy without an additional step, which is<br />

termed syn<strong>the</strong>sis. In summary, syn<strong>the</strong>sis performs a type of transformation on <strong>the</strong> operating<br />

system metadata in <strong>the</strong> target VDisk so that <strong>the</strong> operating system can use <strong>the</strong> disk.<br />

6.4.16 Serialization of I/O by FlashCopy

In general, the FlashCopy function in the SVC introduces no explicit serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and target VDisks.

However, there is a lock for each grain. The lock can be in shared or exclusive mode. For multiple targets, a common lock is shared by the mappings that are derived from a particular source VDisk. The lock is used in the following modes under the following conditions:

► The lock is held in shared mode for the duration of a read from the target VDisk, which touches a grain that is not split.

► The lock is held in exclusive mode during a grain split, which happens prior to FlashCopy starting any destage (or write-through) from the cache to a grain that is going to be split (the destage waits for the grain to be split). The lock is held during the grain split and released before the destage is processed.

If the lock is held in shared mode, and another process wants to use the lock in shared mode, this request is granted unless a process is already waiting to use the lock in exclusive mode. If the lock is held in shared mode and it is requested to be exclusive, the requesting process must wait until all holders of the shared lock free it.

Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in either shared or exclusive mode must wait for it to be freed.

6.4.17 Error handling

When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not affect the error handling or the reporting of errors in the I/O path. Error handling and reporting are only affected by FlashCopy when a FlashCopy mapping is copying or stopping.

We describe these scenarios in the following sections.

Node failure

Normally, two copies of the FlashCopy bitmaps are maintained; one copy of the FlashCopy bitmaps is on each of the two nodes making up the I/O Group of the source VDisk. When a node fails, one copy of the bitmaps for all FlashCopy mappings whose source VDisk is a member of the failing node’s I/O Group will become inaccessible. FlashCopy will continue with a single copy of the FlashCopy bitmap being stored as non-volatile in the remaining node in the source I/O Group. The cluster metadata is updated to indicate that the missing node no longer holds up-to-date bitmap information.

When the failing node recovers, or a replacement node is added to the I/O Group, up-to-date bitmaps will be reestablished on the new node, and it will again provide a redundant location for the bitmaps:

► When the FlashCopy bitmap becomes available again (at least one of the SVC nodes in the I/O Group is accessible), the FlashCopy mapping will return to the Copying state, access to the source and target VDisks will be restored, and the background copy process will be resumed. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache until the FlashCopy mapping leaves the Suspended state.

► Normally, two copies of the FlashCopy bitmaps are maintained (in non-volatile memory), one copy on each of the two SVC nodes making up the I/O Group of the source VDisk. If only one of the SVC nodes in the I/O Group to which the source VDisk belongs goes offline, the FlashCopy mapping will continue in the Copying state, with a single copy of the FlashCopy bitmap. When the failed SVC node recovers, or a replacement SVC node is added to the I/O Group, up-to-date FlashCopy bitmaps will be reestablished on the resuming SVC node and again provide a redundant location for the FlashCopy bitmaps.

If both nodes in the I/O Group become unavailable: If both nodes in the I/O Group to which the target VDisk belongs become unavailable, the host cannot access the target VDisk.

Path failure (Path Offline state)

In a fully functioning cluster, all of the nodes have a software representation of every VDisk in the cluster within their application hierarchy.

Because the storage area network (SAN) that links the SVC nodes to each other and to the MDisks is made up of many independent links, it is possible for a subset of the nodes to be temporarily isolated from several of the MDisks. When this situation happens, the managed disks are said to be Path Offline on certain nodes.

Other nodes: Other nodes might see the managed disks as Online, because their connection to the managed disks is still functioning.

When an MDisk enters the Path Offline state on an SVC node, all of the VDisks that have any extents on the MDisk also become Path Offline. Again, this situation happens only on the affected nodes. When a VDisk is Path Offline on a particular SVC node, host access to that VDisk through that node fails with SCSI sense data indicating Offline.

Path Offline for <strong>the</strong> source VDisk<br />

If a FlashCopy mapping is in <strong>the</strong> Copying state and <strong>the</strong> source VDisk goes Path Offline, this<br />

Path Offline state is propagated to all target VDisks up to but not including <strong>the</strong> target VDisk for<br />

<strong>the</strong> newest mapping that is 100% copied but remains in <strong>the</strong> Copying state. If no mappings are<br />

100% copied, all of <strong>the</strong> target VDisks are taken offline. Again, note that Path Offline is a state<br />

that exists on a per-node basis. O<strong>the</strong>r nodes might not be affected. If <strong>the</strong> source VDisk comes<br />

Online, <strong>the</strong> target and source VDisks are brought back Online.<br />

Chapter 6. Advanced Copy Services 279


Path Offline for <strong>the</strong> target VDisk<br />

If a target VDisk goes Path Offline, but <strong>the</strong> source VDisk is still Online, and if <strong>the</strong>re are any<br />

dependent mappings, those target VDisks will also go Path Offline. The source VDisk will<br />

remain Online.<br />

6.4.18 Asynchronous notifications

FlashCopy raises informational error logs when mappings or consistency groups make certain state transitions.

These state transitions occur as a result of configuration events that complete asynchronously, and the informational errors can be used to generate Simple Network Management Protocol (SNMP) traps to notify the user. Other configuration events complete synchronously, and no informational errors are logged as a result of these events:

► PREPARE_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group enters the Prepared state as a result of a user request to prepare. The user can now start (or stop) the mapping or consistency group.

► COPY_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group enters the Idle_or_copied state when it was previously in the Copying or Stopping state. This state transition indicates that the target disk now contains a complete copy and no longer depends on the source.

► STOP_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group has entered the Stopped state as a result of a user request to stop. It will be logged after the automatic copy process has completed. This state transition includes mappings where no copying needed to be performed. This state transition differs from the error that is logged when a mapping or group enters the Stopped state as a result of an I/O error.
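To receive these notifications, an SNMP server must be defined on the cluster. The following minimal sketch assumes a hypothetical trap receiver at 9.43.86.10 using the public community string; the parameters shown are those of the mksnmpserver command:

   svctask mksnmpserver -ip 9.43.86.10 -community public -error on -warning on -info on

With -info on, informational events, such as the FlashCopy state transitions listed above, are forwarded as SNMP traps.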

6.4.19 Interoperation with Metro Mirror and Global Mirror

FlashCopy can work together with Metro Mirror and Global Mirror to provide better protection of the data. For example, we can perform a Metro Mirror copy to duplicate data from Site_A to Site_B and then perform a daily FlashCopy and copy the data elsewhere.

Table 6-6 lists which combinations of FlashCopy and Remote Copy are supported. In the table, remote copy refers to Metro Mirror and Global Mirror.

Table 6-6 FlashCopy and remote copy interaction

Component               Remote copy primary    Remote copy secondary
FlashCopy Source        Supported              Supported
                                               Latency: When the FlashCopy
                                               relationship is in the Preparing
                                               and Prepared states, the cache
                                               at the remote copy secondary
                                               site operates in write-through
                                               mode. This adds additional
                                               latency to the already latent
                                               remote copy relationship.
FlashCopy Destination   Not supported          Not supported


6.4.20 Recovering data from FlashCopy

You can use FlashCopy to recover the data if a form of corruption has happened. For example, if a user deletes data by mistake, you can map the FlashCopy target VDisks to the application server, import all of the logical volume-level configurations, start the application, and restore the data back to a given point in time.

Tip: It is better to map a FlashCopy target VDisk to a backup machine with the same application installed. We do not recommend that you map a FlashCopy target VDisk to the same application server to which the FlashCopy source VDisk is mapped, because the FlashCopy target and source VDisks have the same signature, pvid, vgda, and so on. Special steps are necessary to handle the conflict at the OS level. For example, you can use the recreatevg command in AIX to generate separate vg, lv, file system, and so on, names in order to avoid a naming conflict.
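A minimal AIX sketch of this step follows; the volume group name, prefixes, and hdisk number are hypothetical:

   recreatevg -y copyvg -Y cplv -L /copyfs hdisk4

This recreates the volume group on the FlashCopy target disk under a new name (copyvg), renaming the logical volumes with the cplv prefix and relabeling the file system mount points under /copyfs, so that they do not clash with the originals that are still active on the source.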

FlashCopy backup is a disk-based backup copy that can be used to restore service more quickly than other backup techniques. This application is further enhanced by the ability to maintain multiple backup targets, spread over a range of time, allowing the user to choose a backup from before the time of the corruption.

6.5 Metro Mirror

In <strong>the</strong> following topics, we describe <strong>the</strong> Metro Mirror copy service, which is a synchronous<br />

remote copy function. Metro Mirror in SVC is similar to Metro Mirror in <strong>the</strong> <strong>IBM</strong> <strong>System</strong><br />

<strong>Storage</strong> DS family.<br />

SVC provides a single point of control when enabling Metro Mirror in your <strong>SAN</strong>, regardless of<br />

<strong>the</strong> disk subsystems that are used.<br />

The general application of Metro Mirror is to maintain two real-time synchronized copies of a<br />

disk. Often, two copies are geographically dispersed to two SVC clusters, although it is<br />

possible to use Metro Mirror in a single cluster (within an I/O Group). If <strong>the</strong> primary copy fails,<br />

you can enable a secondary copy for I/O operation.<br />

Tips: Intracluster Metro Mirror will consume more resources for a specific cluster,<br />

compared to an intercluster Metro Mirror relationship. We recommend using intercluster<br />

Metro Mirror when possible.<br />

A typical application of this function is to set up a dual-site solution using two SVC clusters.<br />

The first site is considered <strong>the</strong> primary or production site, and <strong>the</strong> second site is considered<br />

<strong>the</strong> backup site or failover site, which is activated when a failure at <strong>the</strong> first site is detected.<br />

6.5.1 Metro Mirror overview

Metro Mirror works by establishing a Metro Mirror relationship between two VDisks of equal size. To maintain data integrity for dependent writes, you can use consistency groups to group a number of Metro Mirror relationships together, similar to FlashCopy consistency groups. SVC provides both intracluster and intercluster Metro Mirror.

Intracluster Metro Mirror

You can apply intracluster Metro Mirror within a single I/O Group.



Applying Metro Mirror across I/O Groups in the same SVC cluster is not supported, because intracluster Metro Mirror can only be performed between VDisks in the same I/O Group.

Intercluster Metro Mirror

Intercluster Metro Mirror operations require a pair of SVC clusters that are connected by a number of moderately high bandwidth links. The two SVC clusters must be defined in an SVC partnership, which must be performed on both SVC clusters to establish a fully functional Metro Mirror partnership.

Using standard single-mode connections, the supported distance between two SVC clusters in a Metro Mirror partnership is 10 km (6.2 miles), although greater distances can be achieved by using extenders. For extended distance solutions, contact your IBM representative.

Limit: When a local and a remote fabric are connected together for Metro Mirror purposes, the inter-switch link (ISL) hop count between a local node and a remote node cannot exceed seven.

6.5.2 Remote copy techniques

Metro Mirror is a synchronous remote copy, which we briefly explain next. To illustrate the differences between synchronous and asynchronous remote copy, we also explain asynchronous remote copy.

Synchronous remote copy

Metro Mirror is a fully synchronous remote copy technique that ensures that, as long as writes to the secondary VDisks are possible, writes are committed at both the primary and secondary VDisks before the application is given an acknowledgement of the completion of a write.

Errors, such as a loss of connectivity between the two clusters, can mean that it is not possible to replicate data from the primary VDisk to the secondary VDisk. In this case, Metro Mirror operates to ensure that a consistent image is left at the secondary VDisk, and then continues to allow I/O to the primary VDisk, so as not to affect the operations at the production site.

Figure 6-12 illustrates how a write to the master VDisk is mirrored to the cache of the auxiliary VDisk before an acknowledgement of the write is sent back to the host that issued the write. This process ensures that the secondary is synchronized in real time, in case it is needed in a failover situation.

However, this process also means that the application is fully exposed to the latency and bandwidth limitations (if any) of the communication link to the secondary site. This process might lead to unacceptable application performance, particularly when placed under peak load. Therefore, using Metro Mirror has distance limitations.

282 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Figure 6-12 Write on VDisk in Metro Mirror relationship

6.5.3 SVC Metro Mirror features

SVC Metro Mirror supports the following features:

► Synchronous remote copy of VDisks dispersed over metropolitan scale distances is supported.

► SVC implements Metro Mirror relationships between VDisk pairs, with each VDisk in a pair managed by an SVC cluster.

► SVC supports intracluster Metro Mirror, where both VDisks belong to the same cluster (and I/O Group).

► SVC supports intercluster Metro Mirror, where each VDisk belongs to a separate SVC cluster. You can configure a specific SVC cluster for partnership with another cluster. All intercluster Metro Mirror processing takes place between two SVC clusters that are configured in a partnership.

► Intercluster and intracluster Metro Mirror can be used concurrently within a cluster for separate relationships.

► SVC does not require that a control network or fabric is installed to manage Metro Mirror. For intercluster Metro Mirror, SVC maintains a control link between the two clusters. This control link is used to control the state and coordinate updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Metro Mirror I/O.

► SVC implements a configuration model that maintains the Metro Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.

► SVC maintains and polices a strong concept of consistency and makes this concept available to guide configuration activity.

► SVC implements flexible resynchronization support, enabling it to resynchronize VDisk pairs that have suffered write I/O to both disks and to resynchronize only those regions that are known to have changed.

6.5.4 Multiple Cluster Mirroring

With the introduction of Multiple Cluster Mirroring in SVC 5.1, you can configure a cluster with multiple partner clusters.

Multiple Cluster Mirroring enables Metro Mirror and Global Mirror relationships to exist between a maximum of four SVC clusters.

The SVC clusters can take advantage of the maximum number of remote mirror relationships, because Multiple Cluster Mirroring enables clients to copy from several remote sites to a single SVC cluster at a disaster recovery (DR) site. It supports implementation of consolidated DR strategies and helps clients that are moving or consolidating data centers.

Figure 6-13 shows an example of a Multiple Cluster Mirroring configuration.

Figure 6-13 Multiple Cluster Mirroring configuration example

Supported Multiple Cluster Mirroring topologies

Prior to SVC 5.1, you used one of the two cluster topologies that were allowed:

► A (no partnership configured)

► A ↔ B (one partnership configured)

284 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


With Multiple Cluster Mirroring, there is a wider range of possible topologies. You can connect a maximum of four clusters, directly or indirectly. Therefore, a cluster can never have more than three partners.

For example, these topologies are allowed:

► A ↔ B, A ↔ C, and A ↔ D

Figure 6-14 shows a star topology.

Figure 6-14 SVC star topology

Figure 6-14 shows four clusters in a star topology, with cluster A at the center. Cluster A can be a central DR site for the three other locations.

Using a star topology, you can migrate separate applications at separate times by using a process, such as this example:

1. Suspend the application at A.

2. Remove the A ↔ B relationship.

3. Create the A ↔ C relationship (or alternatively, the B ↔ C relationship).

4. Synchronize to cluster C, and ensure that A ↔ C is established.

These topologies are also allowed:

► A ↔ B, A ↔ C, A ↔ D, B ↔ C, B ↔ D, and C ↔ D

► A ↔ B, A ↔ C, and B ↔ C

Figure 6-15 shows a triangle topology.



Figure 6-15 SVC triangle topology

There are three clusters in a triangle topology.

Figure 6-16 shows a fully connected topology.

Figure 6-16 SVC fully connected topology

Figure 6-16 is a fully connected mesh where every cluster has a partnership to each of the three other clusters. Therefore, VDisks can be replicated between any pair of clusters. Note that this topology is not required, unless relationships are needed between every pair of clusters.

The other option is a daisy-chain topology between four clusters:

A ↔ B, B ↔ C, and C ↔ D

This arrangement gives a cascading solution; however, a VDisk can be in only one relationship, such as A ↔ B. At the time of writing, a three-site solution, such as DS8000 Metro Global Mirror, is not supported.

Figure 6-17 shows a daisy-chain topology.

286 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Figure 6-17 SVC daisy-chain topology

Unsupported topology

As an illustration of what is not supported, we show this example:

A ↔ B, B ↔ C, C ↔ D, and D ↔ E

Figure 6-18 shows this unsupported topology.

Figure 6-18 SVC unsupported topology

This topology is unsupported, because five clusters are indirectly connected. If the cluster can detect this topology at the time of the fourth mkpartnership command, the command will be rejected.
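A partnership only becomes fully functional when it is defined from both sides. The following minimal sketch uses hypothetical cluster names and a hypothetical bandwidth value (in MBps):

   On cluster ITSO_CLS1:  svctask mkpartnership -bandwidth 200 ITSO_CLS2
   On cluster ITSO_CLS2:  svctask mkpartnership -bandwidth 200 ITSO_CLS1

The -bandwidth parameter limits the background copy traffic that the local cluster sends to the partner cluster.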

Upgrade restrictions: The introduction of Multiple Cluster Mirroring necessitates upgrade restrictions:

► Concurrent code upgrade to 5.1.0 is supported from 4.3.1.x only.

► If the cluster is in a partnership, the partnered cluster must meet a minimum software level to allow concurrent I/O; the partnered cluster must be running 4.2.1 or higher.

6.5.5 Metro Mirror relationship

A Metro Mirror relationship is composed of two VDisks that are equal in size. The master VDisk and the auxiliary VDisk can be in the same I/O Group, within the same SVC cluster (intracluster Metro Mirror), or they can be on separate SVC clusters that are defined as SVC partners (intercluster Metro Mirror).

Rules:

► A VDisk can only be part of one Metro Mirror relationship at a time.

► A VDisk that is a FlashCopy target cannot be part of a Metro Mirror relationship.



Figure 6-19 illustrates the Metro Mirror relationship.

Figure 6-19 Metro Mirror relationship

Metro Mirror relationship between primary and secondary VDisks

When creating a Metro Mirror relationship, you must define one VDisk as the master and the other VDisk as the auxiliary. The relationship between the two copies is symmetric. When a Metro Mirror relationship is created, the master VDisk is initially considered the primary copy (often referred to as the source), and the auxiliary VDisk is considered the secondary copy (often referred to as the target). The initial copy direction mirrors the master VDisk to the auxiliary VDisk. After the initial synchronization is complete, you can change the copy direction, if appropriate.

In the most common applications of Metro Mirror, the master VDisk contains the production copy of the data and is used by the host application, while the auxiliary VDisk contains a mirrored copy of the data and is used for failover in DR scenarios. The terms master and auxiliary describe this use. However, if Metro Mirror is applied differently, the terms master VDisk and auxiliary VDisk need to be interpreted appropriately.

6.5.6 Importance of write ordering

Many applications that use block storage must survive failures, such as the loss of power or a software crash, without losing the data that existed prior to the failure. Because many applications need to perform large numbers of update operations in parallel with storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption.

An application that performs a high volume of database updates is usually designed with the concept of dependent writes. With dependent writes, it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine an application’s algorithms and can lead to problems, such as detected, or undetected, data corruption.

Dependent writes that span multiple VDisks

The following scenario illustrates a simple example of a sequence of dependent writes, and in particular, what can happen if they span multiple VDisks.

Consider the following typical sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update will be performed.

2. A second write is executed to update the database.

3. A third write is executed to update the database log, indicating that the database update has completed successfully.

288 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Figure 6-20 shows the write sequence.

Figure 6-20 Dependent writes for a database

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step.

Database logs: All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored to a point beyond the last full, offline backup, logs are required to roll the data forward to the point of failure.

But imagine if the database log and the database itself are on separate VDisks and a Metro Mirror relationship is stopped during this update. In this case, you need to consider the possibility that the Metro Mirror relationship for the VDisk with the database file is stopped slightly before the VDisk containing the database log. If this situation occurs, it is possible that the secondary VDisks see writes (1) and (3), but not (2).

Then, if the database was restarted using the data available from the secondary disks, the database log will indicate that the transaction completed successfully, when it did not. In this scenario, the integrity of the database is in question.

Metro Mirror consistency groups

Metro Mirror consistency groups address the issue of dependent writes across VDisks, where the objective is to preserve data consistency across multiple Metro Mirrored VDisks. Consistency groups ensure a consistent data set, because applications have relational data spanning multiple VDisks.



A Metro Mirror consistency group can contain an arbitrary number of relationships, up to the maximum number of Metro Mirror relationships that is supported by the SVC cluster. Metro Mirror commands can be issued to a Metro Mirror consistency group and, therefore, simultaneously to all Metro Mirror relationships defined within that consistency group, or to a single Metro Mirror relationship that is not part of a Metro Mirror consistency group. For example, when issuing the startrcconsistgrp command to a consistency group, all of the Metro Mirror relationships in the consistency group are started at the same time.
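A minimal sketch of creating and starting a consistency group follows; the group, relationship, VDisk, and cluster names are hypothetical:

   svctask mkrcconsistgrp -cluster ITSO_CLS2 -name CG_W2K
   svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO_CLS2 -consistgrp CG_W2K -name MMREL1
   svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO_CLS2 -consistgrp CG_W2K -name MMREL2
   svctask startrcconsistgrp CG_W2K

Starting the group starts both relationships together, so that, for example, a database and its log remain mutually consistent on the secondary cluster.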

Figure 6-21 illustrates the concept of Metro Mirror consistency groups. Because MM_Relationship 1 and 2 are part of the consistency group, they can be handled as one entity, while the stand-alone MM_Relationship 3 is handled separately.

Figure 6-21 Metro Mirror consistency group

Certain uses of Metro Mirror require the manipulation of more than one relationship. Metro Mirror consistency groups provide the ability to group relationships, so that they are manipulated in unison. Metro Mirror relationships within a consistency group can be in any form:

► Metro Mirror relationships can be part of a consistency group, or they can be stand-alone and therefore handled as single instances.

► A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.

► All of the relationships in a consistency group must have matching master and auxiliary SVC clusters.

290 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Although it is possible to use consistency groups to manipulate sets of relationships that do not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The rules behind a consistency group mean that certain configuration commands are prohibited. These configuration commands are not prohibited if the relationship is not part of a consistency group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single consistency group. In the event of an error, there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Metro Mirror rejects attempts to enable access to the secondary VDisks of either application.

If one application finishes its background copy much more quickly than the other application, Metro Mirror still refuses to grant access to its secondary VDisks, even though it is safe in this case, because the Metro Mirror policy is to refuse access to the entire consistency group if any part of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state model. All of the relationships in a non-empty consistency group have the same state as the consistency group.

6.5.7 How Metro Mirror works

In the sections that follow, we describe how Metro Mirror works.

Intercluster communication and zoning

All intercluster communication is performed over the SAN. Prior to creating intercluster Metro Mirror relationships, you must create a partnership between the two clusters.

SVC node ports on each SVC cluster must be able to access each other to facilitate the partnership creation. Therefore, you must define a zone in each fabric for intercluster communication (see Chapter 3, “Planning and configuration”).

SVC cluster partnership

Each SVC cluster can be in a partnership with up to three other SVC clusters. When an SVC cluster partnership has been defined on both clusters of a pair of clusters, further communication facilities between the nodes in each of the clusters are established:

► A single control channel, which is used to exchange and coordinate configuration information

► I/O channels between each of these nodes in the clusters

These channels are maintained and updated as nodes appear and disappear and as links fail, and they are repaired to maintain operation where possible. If communication between the SVC clusters is interrupted or lost, an error is logged (and consequently, Metro Mirror relationships will stop).

To handle error conditions, you can configure SVC to raise Simple Network Management Protocol (SNMP) traps to the enterprise monitoring system.

Maintenance of the intercluster link

All SVC nodes maintain a database of other devices that are visible on the fabric. This database is updated as devices appear and disappear.



Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement clustering and the functional protocols of SVC.

Nodes that are in separate clusters do not exchange messages after initial discovery is complete, unless they have been configured together to perform Metro Mirror.

The intercluster link carries control traffic to coordinate activity between the two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among the logins that exist between those nodes.

If the designated node fails (or all of its logins to the remote cluster fail), a new node is chosen to carry the control traffic. This node change causes the I/O to pause, but it does not put the relationships in a Consistent Stopped state.

6.5.8 Metro Mirror process

Several major steps exist in the Metro Mirror process:

1. An SVC cluster partnership is created between two SVC clusters (for intercluster Metro Mirror).

2. A Metro Mirror relationship is created between two VDisks of the same size.

3. To manage multiple Metro Mirror relationships as one entity, relationships can be made part of a Metro Mirror consistency group, which ensures data consistency across multiple Metro Mirror relationships and provides ease of management.

4. When a Metro Mirror relationship is started, and when the background copy has completed, the relationship becomes consistent and synchronized.

5. After the relationship is synchronized, the secondary VDisk holds a copy of the production data at the primary, which can be used for DR.

6. To access the auxiliary VDisk, the Metro Mirror relationship must be stopped with the access option enabled before write I/O is submitted to the secondary.

7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.

A minimal CLI sketch of these steps is shown after this list.
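The following sketch maps the major steps onto the CLI; all object names and the bandwidth value are hypothetical. The mkpartnership command (step 1) must be run on both clusters, mkrcrelationship creates the relationship (step 2), startrcrelationship begins synchronization (step 4), and stoprcrelationship with -access enables write access to the auxiliary VDisk for DR (step 6):

   svctask mkpartnership -bandwidth 200 ITSO_CLS2
   svctask mkrcrelationship -master MM_Pri -aux MM_Sec -cluster ITSO_CLS2 -name MMREL1
   svctask startrcrelationship MMREL1
   svctask stoprcrelationship -access MMREL1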

6.5.9 Methods of synchronization

This section describes three methods that can be used to establish a relationship.

Full synchronization after creation

The full synchronization after creation method is the default method. It is the simplest method in that it requires no administrative activity apart from issuing the necessary commands. However, in certain environments, the available bandwidth can make this method unsuitable.

Use this command sequence for a single relationship:

1. Run mkrcrelationship without specifying the -sync option.

2. Run startrcrelationship without specifying the -clean option.

292 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Synchronized before creation

In this method, the administrator must ensure that the master and auxiliary VDisks contain identical data before creating the relationship. There are two ways to ensure that the master and auxiliary VDisks contain identical data:

► Both disks are created with the security delete feature so as to make all data zero.

► A complete tape image (or other method of moving data) is copied from one disk to the other disk.

With either technique, no write I/O must take place to either the master or the auxiliary before the relationship is established.

Then, the administrator must run these commands:

► Run mkrcrelationship with the -sync flag.

► Run startrcrelationship without the -clean flag.

If these steps are performed incorrectly, Metro Mirror will report the relationship as being consistent when it is not, likely making any secondary disk useless. This method has an advantage over full synchronization, because it does not require all of the data to be copied over a constrained link. However, if the data needs to be copied, the master and auxiliary disks cannot be used until the copy is complete, which might be unacceptable.

Quick synchronization after creation

In this method, the administrator must still copy data from the master to the auxiliary, but the administrator can use this method without stopping the application at the master. The administrator must ensure that these steps are taken:

► A mkrcrelationship command is issued with the -sync flag.

► A stoprcrelationship command is issued with the -access flag.

► A tape image (or other method of transferring data) is used to copy the entire master disk to the auxiliary disk.

After the copy is complete, the administrator must ensure that a startrcrelationship command is issued with the -clean flag.

With this technique, only data that has changed since the relationship was created, including all regions that were incorrect in the tape image, is copied from the master to the auxiliary. As with “Synchronized before creation”, the copy step must be performed correctly or the auxiliary will be useless, although the copy operation will report it as being synchronized.
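Expressed as CLI commands, this sequence might look like the following sketch; the relationship and VDisk names are hypothetical:

   svctask mkrcrelationship -master MM_Pri -aux MM_Sec -cluster ITSO_CLS2 -sync -name MMREL1
   svctask stoprcrelationship -access MMREL1
   (copy the master disk to the auxiliary disk by tape image or another method)
   svctask startrcrelationship -clean MMREL1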

Metro Mirror states and events

In this section, we explain the various states of a Metro Mirror relationship and the series of events that modify these states.

The Metro Mirror relationship state diagram in Figure 6-22 shows an overview of the states that can apply to a Metro Mirror relationship in a connected state.



Figure 6-22 Metro Mirror mapping state diagram

When creating the Metro Mirror relationship, you can specify whether the auxiliary VDisk is already in sync with the master VDisk, and the background copy process is then skipped. This capability is especially useful when creating Metro Mirror relationships for VDisks that have been created with the format option.

The numbers in the following steps relate to the numbers in Figure 6-22. To create the relationship:

► Step 1:

a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror relationship enters the Consistent stopped state.

b. The Metro Mirror relationship is created without specifying that the master and auxiliary VDisks are in sync, and the Metro Mirror relationship enters the Inconsistent stopped state.

► Step 2:

a. When starting a Metro Mirror relationship in the Consistent stopped state, the Metro Mirror relationship enters the Consistent synchronized state, provided that no updates (write I/O) have been performed on the primary VDisk while in the Consistent stopped state. Otherwise, the -force option must be specified, and the Metro Mirror relationship then enters the Inconsistent copying state while the background copy is started.

b. When starting a Metro Mirror relationship in the Inconsistent stopped state, the Metro Mirror relationship enters the Inconsistent copying state while the background copy is started.

294 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


► Step 3:

When the background copy completes, the Metro Mirror relationship transitions from the Inconsistent copying state to the Consistent synchronized state.

► Step 4:

a. When stopping a Metro Mirror relationship in the Consistent synchronized state, specifying the -access option, which enables write I/O on the secondary VDisk, the Metro Mirror relationship enters the Idling state.

b. To enable write I/O on the secondary VDisk when the Metro Mirror relationship is in the Consistent stopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Metro Mirror relationship enters the Idling state.

► Step 5:

a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. If no write I/O has been performed (to either the master or the auxiliary VDisk) while in the Idling state, the Metro Mirror relationship enters the Consistent synchronized state.

b. If write I/O has been performed to either the master or the auxiliary VDisk, the -force option must be specified, and the Metro Mirror relationship then enters the Inconsistent copying state while the background copy is started.
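For example, restarting a relationship from the Idling state in step 5 might look like the following sketch; the relationship name is hypothetical:

   svctask startrcrelationship -primary master MMREL1
   svctask startrcrelationship -primary master -force MMREL1

The first form applies when no writes occurred during the Idling state; the second form, with -force, acknowledges that writes occurred and that a background resynchronization in the Inconsistent copying state is required.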

Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied:

► For example, Metro Mirror relationships in the Consistent synchronized state enter the Consistent stopped state, and Metro Mirror relationships in the Inconsistent copying state enter the Inconsistent stopped state.

► If the connection is broken between the SVC clusters in a partnership, all (intercluster) Metro Mirror relationships enter a Disconnected state. For further information, refer to “Connected versus disconnected”.

Common states: Stand-alone relationships and consistency groups share a common configuration and state model. All Metro Mirror relationships in a consistency group that is not empty have the same state as the consistency group.

6.5.10 State overview

SVC-defined concepts of state are key to understanding configuration concepts. We explain them in more detail next.

Connected versus disconnected

This distinction can arise when a Metro Mirror relationship is created with the two VDisks in separate clusters.

Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected.

In this scenario, each cluster is left with half of the relationship and has only a portion of the information that was available to it before. Limited configuration activity is possible and is a subset of what was possible before.

The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and what configuration commands are permitted.

When the clusters can communicate again, the relationships become connected again. Metro Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or it can enter another connected state.

Relationships that are configured between VDisks in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent

Relationships that contain VDisks that are operating as secondaries can be described as being consistent or inconsistent. Consistency groups that contain relationships can also be described as being consistent or inconsistent. The consistent or inconsistent property describes the relationship of the data on the secondary to the data on the primary VDisk. It can be considered a property of the secondary VDisk itself.

A secondary is described as consistent if it contains data that might have been read by a host system from the primary if power had failed at an imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the primary up to the recovery point:

► The secondary VDisk contains the data from all of the writes to the primary for which the host received successful completion, provided that data had not been overwritten by a subsequent write (before the recovery point).

► For writes for which the host did not receive successful completion (that is, it received bad completion or no completion at all), if the host subsequently performed a read of that data from the primary, that read returned successful completion, and no later write was sent (before the recovery point), the secondary contains the same data as the data returned by the read from the primary.

From the point of view of an application, consistency means that a secondary VDisk contains the same data as the primary VDisk at the recovery point (the time at which the imaginary power failure occurred).

If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application will be able to use the secondary and begin operation just as though it had been restarted after the hypothetical power failure.

Again, the application is dependent on the key properties of consistency:

► Write ordering
► Read stability for correct operation at the secondary

If a relationship, or set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:

► The application might decide that the data is corrupt and crash or exit with an error code.
► The application might fail to detect that the data is corrupt and return erroneous data.
► The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Metro Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or to a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all of those disks.

When deciding how to use consistency groups, the administrator must consider the scope of an application's data, taking into account all of the interdependent systems that communicate and exchange information.

If two programs or systems communicate and store details as a result of the information exchanged, either of the following approaches must be taken:

► All of the data accessed by the group of systems must be placed into a single consistency group.

► The systems must be recovered independently (each within its own consistency group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized

A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the primary and secondary VDisks differ only in regions where writes are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at a point in time in the past. Write I/O might have continued to the primary and not have been copied to the secondary. This state arises when it becomes impossible to keep the secondary up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the secondary.

When communication is lost for an extended period of time, Metro Mirror tracks the changes that happen at the primary, but not the order of those changes or the details of those changes (write data). When communication is restored, it is impossible to synchronize the secondary without sending write data to the secondary out of order and, therefore, losing consistency. Two policies can be used to cope with this situation:

► Make a point-in-time copy of the consistent secondary before allowing the secondary to become inconsistent. In the event of a disaster before consistency is achieved again, the point-in-time copy target provides a consistent, although out-of-date, image.

► Accept the loss of consistency and the loss of a useful secondary while synchronizing the secondary.

6.5.11 Detailed states

The following sections detail the states that are portrayed to the user for either consistency groups or relationships, along with the extra information that is available in each state. The major states are designed to provide guidance about the configuration commands that are available.

InconsistentStopped

InconsistentStopped is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either read or write I/O. A copy process needs to be started to make the secondary consistent.

This state is entered when the relationship or consistency group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop.

A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but it has no effect.

If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transits to IdlingDisconnected.

InconsistentCopying

InconsistentCopying is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either read or write I/O.

This state is entered after a start command is issued to an InconsistentStopped relationship or consistency group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group.

In this state, a background copy process runs that copies data from the primary to the secondary VDisk.

In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress.

A persistent error or a stop command places the relationship or consistency group into the InconsistentStopped state. A start command is accepted, but it has no effect.

If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transits to the ConsistentSynchronized state.

If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transits to IdlingDisconnected.

ConsistentStopped

ConsistentStopped is a connected state. In this state, the secondary contains a consistent image, but it might be out-of-date with respect to the primary.

This state can arise when a relationship was in the ConsistentSynchronized state and suffers an error that forces a Consistency Freeze. It can also arise when a relationship is created with the CreateConsistentFlag set to TRUE.

Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the synchronized attribute is set to false). In this case, to re-establish synchronization, consistency must be given up for a period. You must use a start command with the -force option to acknowledge this situation, and the relationship or consistency group transits to InconsistentCopying. Enter this command only after all of the outstanding errors are repaired.

In the unusual case where the primary and the secondary are still synchronized (perhaps following a user stop, when no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, you can enter a switch command that moves the relationship or consistency group to ConsistentSynchronized and reverses the roles of the primary and the secondary.

If the relationship or consistency group becomes disconnected, the secondary transits to ConsistentDisconnected. The primary transits to IdlingDisconnected.

An informational status log is generated every time that a relationship or consistency group enters the ConsistentStopped state with a status of Online. You can configure this event to raise an SNMP trap, which provides a trigger for automation software to consider issuing a start command following a loss of synchronization.

ConsistentSynchronized

ConsistentSynchronized is a connected state. In this state, the primary VDisk is accessible for read and write I/O, and the secondary VDisk is accessible for read-only I/O.

Writes that are sent to the primary VDisk are sent to both the primary and secondary VDisks. Before a write is completed to the host, either successful completion must be received for both writes, the write must be failed to the host, or the relationship must transit out of the ConsistentSynchronized state.

A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state.

A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the primary and secondary roles.

A start command is accepted, but it has no effect.

If the relationship or consistency group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling

Idling is a connected state. Both the master and auxiliary disks operate in the primary role. Consequently, both the master and auxiliary disks are accessible for write I/O.

In this state, the relationship or consistency group accepts a start command. Metro Mirror maintains a record of regions on each disk that received write I/O while Idling. This record is used to determine which areas need to be copied following a start command.

The start command must specify the new copy direction. A start command can cause a loss of consistency if either VDisk in any relationship has received write I/O, which is indicated by the Synchronized status. If the start command leads to a loss of consistency, you must specify the -force parameter.

Following a start command, the relationship or consistency group transits to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency.

Also, while in this state, the relationship or consistency group accepts a -clean option on the start command.

If the relationship or consistency group becomes disconnected, both sides change their state to IdlingDisconnected.

IdlingDisconnected

IdlingDisconnected is a disconnected state. The VDisk or disks in this half of the relationship or consistency group are all in the primary role and accept read or write I/O.

The major priority in this state is to recover the link and make the relationship or consistency group connected again.

No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transits to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or consistency group, which depends on these factors:

► The state when it became disconnected
► The write activity since it was disconnected
► The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.

While IdlingDisconnected, if a write I/O is received that causes a loss of synchronization (the synchronized attribute transits from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an error log is raised to notify you of this situation. This error log is the same error log that occurs when the same situation arises for ConsistentSynchronized.

InconsistentDisconnected

InconsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and do not accept read or write I/O.

No configuration activity, except for deletes, is permitted until the relationship becomes connected again.

When the relationship or consistency group becomes connected again, the relationship becomes InconsistentCopying automatically unless either of the following conditions is true:

► The relationship was InconsistentStopped when it became disconnected.
► The user issued a stop command while disconnected.

In either of those cases, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected

ConsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and accept read I/O but not write I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime, which is the point in time at which consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or consistency group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to true transits the relationship or consistency group to the IdlingDisconnected state. This state allows write I/O to be performed to the secondary VDisk and is used as part of a disaster recovery (DR) scenario.

When the relationship or consistency group becomes connected again, it becomes ConsistentSynchronized only if this action does not lead to a loss of consistency. These conditions must be true:

► The relationship was ConsistentSynchronized when it became disconnected.
► No writes received successful completion at the primary while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.

Empty

This state applies only to consistency groups. It is the state of a consistency group that has no relationships and no other state information to show.

It is entered when a consistency group is first created. It is exited when the first relationship is added to the consistency group, at which point the state of the relationship becomes the state of the consistency group.

Background copy

Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online.

The quota of background copy (configured on the intercluster link) is divided evenly between all of the nodes that are performing background copy for one of the eligible relationships. This allocation is made irrespective of the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy.

For intracluster relationships, each node is assigned a static quota of 25 MBps.
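As an illustrative calculation (the figures are hypothetical, not prescribed values): if the intercluster background copy quota is 200 MBps and four nodes are performing background copy for eligible relationships, each node is allocated 50 MBps; a node that is handling two such relationships then attempts approximately 25 MBps of background copy per relationship.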

6.5.12 Practical use of Metro Mirror

The master VDisk is the production VDisk, and updates to this copy are mirrored in real time to the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was created are destroyed.

Switching copy direction: The copy direction for a Metro Mirror relationship can be switched so that the auxiliary VDisk becomes the primary, and the master VDisk becomes the secondary.

While the Metro Mirror relationship is active, the secondary copy (VDisk) is not accessible for host application write I/O at any time. The SVC allows read-only access to the secondary VDisk when it contains a "consistent" image. This read-only access is intended only to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimum delay, if required.

For example, many operating systems must read logical block address (LBA) zero to configure a logical unit. Although read access is allowed at the secondary, in practice the data on the secondary volumes cannot be read by a host, because most operating systems write a "dirty bit" to the file system when it is mounted. Because this write operation is not allowed on the secondary volume, the volume cannot be mounted.

This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the secondary and later write I/Os that are performed at the primary.

To enable access to the secondary VDisk for host operations, you must stop the Metro Mirror relationship by specifying the -access parameter.

While access to the secondary VDisk for host operations is enabled, the host must be instructed to mount the VDisk and perform related tasks before the application can be started, or instructed to perform a recovery process.

The Metro Mirror requirement to enable the secondary copy for access differentiates it from third-party mirroring software on the host, which aims to emulate a single, reliable disk regardless of which system is accessing it. Metro Mirror retains the property that there are two volumes in existence, but it suppresses one volume while the copy is being maintained.

Using a secondary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host to establish operation on the secondary copy are substantial. The goal is to make this process rapid (much faster when compared to recovering from a backup copy) but not seamless.

The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions

Table 6-7 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a single VDisk.

Table 6-7 VDisk valid combination

                     Metro Mirror or              Metro Mirror or
FlashCopy            Global Mirror primary        Global Mirror secondary
FlashCopy Source     Supported                    Supported
FlashCopy Target     Not supported                Not supported

6.5.14 Metro Mirror configuration limits

Table 6-8 lists the Metro Mirror configuration limits.

Table 6-8 Metro Mirror configuration limits

Parameter                                                     Value
Number of Metro Mirror consistency groups per cluster         256
Number of Metro Mirror relationships per cluster              8,192
Number of Metro Mirror relationships per consistency group    8,192
Total VDisk size per I/O Group                                There is a per I/O Group limit of
                                                              1,024 TB on the quantity of primary
                                                              and secondary VDisk address space
                                                              that can participate in Metro Mirror
                                                              and Global Mirror relationships. This
                                                              maximum configuration will consume
                                                              all 512 MB of bitmap space for the
                                                              I/O Group and allow no FlashCopy
                                                              bitmap space.

6.6 Metro Mirror commands

For comprehensive details about Metro Mirror commands, refer to the IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, SC26-7903.

The command set for Metro Mirror contains two broad groups:

► Commands to create, delete, and manipulate relationships and consistency groups
► Commands to cause state changes

Where a configuration command affects more than one cluster, Metro Mirror performs the work to coordinate configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected, and they fail with no effect when the clusters are disconnected.

Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Metro Mirror when the clusters become connected again.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This design is significant for defining the context for a CreateRelationship (mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the cluster receiving the command is called the local cluster.

The exception mentioned previously is the command that sets clusters into a Metro Mirror partnership. The mkpartnership command must be issued to both the local and remote clusters.

The commands described here form an abstract command set that is implemented by either of the following methods:

► A command-line interface (CLI), which can be used for scripting and automation
► A graphical user interface (GUI), which can be used for one-off tasks

6.6.1 Listing available SVC cluster partners

To list the clusters that are eligible for an SVC cluster partnership, use the svcinfo lsclustercandidate command.

svcinfo lsclustercandidate

The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Metro Mirror relationships.
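As a minimal usage sketch, run the command on the local cluster before attempting to create a partnership; no parameters are required:

  svcinfo lsclustercandidate

Any cluster that appears in the output is visible over the SAN and is therefore a candidate for the mkpartnership command that is described next.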

6.6.2 Creating the SVC cluster partnership

To create an SVC cluster partnership, use the svctask mkpartnership command.

svctask mkpartnership

The svctask mkpartnership command is used to establish a one-way Metro Mirror partnership between the local cluster and a remote cluster.

To establish a fully functional Metro Mirror partnership, you must issue this command on both clusters. This step is a prerequisite to creating Metro Mirror relationships between VDisks on the SVC clusters.

When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster; if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
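As a hedged sketch (the cluster names ITSO_SVC1 and ITSO_SVC2 are hypothetical), capping the background copy at 50 MBps and issuing the command on both clusters to make the partnership fully functional:

  On ITSO_SVC1:  svctask mkpartnership -bandwidth 50 ITSO_SVC2
  On ITSO_SVC2:  svctask mkpartnership -bandwidth 50 ITSO_SVC1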

Background copy bandwidth effect on foreground I/O latency

The background copy bandwidth determines the rate at which the background copy for the SVC will be attempted. The background copy bandwidth can affect foreground I/O latency in one of three ways:

► The following results can occur if the background copy bandwidth is set too high for the Metro Mirror intercluster link capacity:
– The background copy I/Os can back up on the Metro Mirror intercluster link.
– There is a delay in the synchronous secondary writes of foreground I/Os.
– The foreground I/O latency will increase as perceived by applications.

► If the background copy bandwidth is set too high for the storage at the primary site, the background copy read I/Os overload the primary storage and delay foreground I/Os.

► If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os.

In order to set the background copy bandwidth optimally, make sure that you consider all three resources: the primary storage, the intercluster link bandwidth, and the secondary storage. Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. You can do this provisioning by a calculation (as previously described) or, alternatively, by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then backing off to allow for peaks in workload and a safety margin.
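As an illustrative calculation with hypothetical figures: if the intercluster link can sustain 100 MBps, the peak foreground write workload is 60 MBps, and both storage controllers can absorb more than the link can deliver, the link is the most restrictive resource. The background copy bandwidth should then be set no higher than about 100 - 60 = 40 MBps, reduced further to leave a safety margin for workload peaks.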

svctask chpartnership

If you need to change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
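For example, a sketch (the cluster name is hypothetical) that lowers the partnership's background copy bandwidth to 40 MBps:

  svctask chpartnership -bandwidth 40 ITSO_SVC2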

6.6.3 Creating a Metro Mirror consistency group

To create a Metro Mirror consistency group, use the svctask mkrcconsistgrp command.

svctask mkrcconsistgrp

The svctask mkrcconsistgrp command is used to create a new, empty Metro Mirror consistency group.

The Metro Mirror consistency group name must be unique across all of the consistency groups that are known to the clusters owning this consistency group. If the consistency group involves two clusters, the clusters must be in communication throughout the creation process.

The new consistency group does not contain any relationships and will be in the Empty state. Metro Mirror relationships can be added to the group either upon creation or afterward by using the svctask chrcrelationship command.
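A minimal sketch (the group and remote cluster names are hypothetical):

  svctask mkrcconsistgrp -cluster ITSO_SVC2 -name CG_W2K3_MM

The new group starts in the Empty state and takes on the state of the first relationship that is added to it.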

6.6.4 Creating a Metro Mirror relationship

To create a Metro Mirror relationship, use the svctask mkrcrelationship command.

svctask mkrcrelationship

The svctask mkrcrelationship command is used to create a new Metro Mirror relationship. This relationship persists until it is deleted.

The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if both VDisks are in the same cluster, they must both be in the same I/O Group. The master and auxiliary VDisks cannot be in an existing relationship, and neither can be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful.

When you create the Metro Mirror relationship, it can be added to an already existing consistency group, or it can be a stand-alone Metro Mirror relationship if no consistency group is specified.

To check whether the master or auxiliary VDisks comply with the prerequisites to participate in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.

svcinfo lsrcrelationshipcandidate

The svcinfo lsrcrelationshipcandidate command is used to list available VDisks that are eligible for a Metro Mirror relationship.

When issuing the command, you can specify the master VDisk name and auxiliary cluster to list candidates that comply with the prerequisites to create a Metro Mirror relationship. If the command is issued with no flags, all VDisks that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
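As a hedged sketch, listing candidates and then creating a stand-alone relationship between the hypothetical VDisks MM_Vol_M (local master) and MM_Vol_A (auxiliary, on the hypothetical remote cluster ITSO_SVC2):

  svcinfo lsrcrelationshipcandidate
  svctask mkrcrelationship -master MM_Vol_M -aux MM_Vol_A -cluster ITSO_SVC2 -name MMREL1

Adding -consistgrp with an existing group name at creation time would instead place the relationship into that consistency group.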

6.6.5 Changing a Metro Mirror relationship

To modify the properties of a Metro Mirror relationship, use the svctask chrcrelationship command.

svctask chrcrelationship

The svctask chrcrelationship command is used to modify the following properties of a Metro Mirror relationship:

► Change the name of a Metro Mirror relationship.
► Add a relationship to a group.
► Remove a relationship from a group using the -force flag.

Adding a Metro Mirror relationship: When adding a Metro Mirror relationship to a consistency group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.
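As an illustration (both names hypothetical), adding the stand-alone relationship MMREL1 to the consistency group CG_W2K3_MM:

  svctask chrcrelationship -consistgrp CG_W2K3_MM MMREL1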

6.6.6 Changing a Metro Mirror consistency group

To change the name of a Metro Mirror consistency group, use the svctask chrcconsistgrp command.

svctask chrcconsistgrp

The svctask chrcconsistgrp command is used to change the name of a Metro Mirror consistency group.

6.6.7 Starting a Metro Mirror relationship

To start a stand-alone Metro Mirror relationship, use the svctask startrcrelationship command.

svctask startrcrelationship

The svctask startrcrelationship command is used to start the copy process of a Metro Mirror relationship.

When issuing the command, you can set the copy direction if it is undefined and, optionally, mark the secondary VDisk of the relationship as clean. The command fails if it is used to attempt to start a relationship that is part of a consistency group.

This command can only be issued to a relationship that is connected. For a relationship that is Idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force flag when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original primary of the relationship. The use of the -force flag here is a reminder that the data on the secondary will become inconsistent while resynchronization (background copying) occurs and, therefore, is not usable for DR purposes before the background copy has completed.

In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
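A sketch of starting the hypothetical relationship MMREL1 from the Idling state with the master VDisk as the primary:

  svctask startrcrelationship -primary master MMREL1

If writes occurred on either VDisk while the relationship was stopped, the -force flag must be added, accepting that the secondary is not usable for DR until the background copy completes.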

6.6.8 Stopping a Metro Mirror relationship

To stop a stand-alone Metro Mirror relationship, use the svctask stoprcrelationship command.

svctask stoprcrelationship

The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent secondary VDisk by specifying the -access flag.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can issue this command to stop a relationship that is copying from primary to secondary.

If the relationship is in an Inconsistent state, any copy operation stops and does not resume until you issue a svctask startrcrelationship command. Write activity is no longer copied from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized state, this command causes a consistency freeze.

When a relationship is in a Consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcrelationship command to enable write access to the secondary VDisk.
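For example (the relationship name is hypothetical), stopping MMREL1 and enabling write access to its consistent secondary VDisk:

  svctask stoprcrelationship -access MMREL1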

6.6.9 Starting a Metro Mirror consistency group

To start a Metro Mirror consistency group, use the svctask startrcconsistgrp command.

svctask startrcconsistgrp

The svctask startrcconsistgrp command is used to start a Metro Mirror consistency group. This command can only be issued to a consistency group that is connected.

For a consistency group that is Idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.
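A minimal sketch (the group name is hypothetical) of starting an idling consistency group with the master VDisks as primaries:

  svctask startrcconsistgrp -primary master CG_W2K3_MM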

6.6.10 Stopping a Metro Mirror consistency group

To stop a Metro Mirror consistency group, use the svctask stoprcconsistgrp command.

svctask stoprcconsistgrp

The svctask stoprcconsistgrp command is used to stop the copy process for a Metro Mirror consistency group. It can also be used to enable write access to the secondary VDisks in the group if the group is in a Consistent state.

If the consistency group is in an Inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the primary to the secondary VDisks belonging to the relationships in the group. For a consistency group in the ConsistentSynchronized state, this command causes a consistency freeze.

When a consistency group is in a Consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be used with the svctask stoprcconsistgrp command to enable write access to the secondary VDisks within that group.
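For example, a sketch that stops the hypothetical group CG_W2K3_MM and enables write access to its secondary VDisks:

  svctask stoprcconsistgrp -access CG_W2K3_MM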

6.6.11 Deleting a Metro Mirror relationship

To delete a Metro Mirror relationship, use the svctask rmrcrelationship command.

svctask rmrcrelationship

The svctask rmrcrelationship command is used to delete the specified relationship. Deleting a relationship only deletes the logical relationship between the two VDisks. It does not affect the VDisks themselves.

If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, the relationship is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters.

If you delete an inconsistent relationship, the secondary VDisk becomes accessible even though it is still inconsistent. This situation is the one case in which Metro Mirror does not inhibit access to inconsistent data.
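A sketch that deletes the hypothetical relationship MMREL1 (the VDisks themselves are unaffected):

  svctask rmrcrelationship MMREL1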

6.6.12 Deleting a Metro Mirror consistency group

To delete a Metro Mirror consistency group, use the svctask rmrcconsistgrp command.

svctask rmrcconsistgrp

The svctask rmrcconsistgrp command is used to delete the specified Metro Mirror consistency group. You can issue this command for any existing consistency group.

If the consistency group is disconnected at the time that the command is issued, the consistency group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the consistency group is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected and you still want to remove the consistency group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters.

If the consistency group is not empty, the relationships within it are removed from the consistency group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the consistency group.
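For example, a sketch that deletes the hypothetical group CG_W2K3_MM; any relationships still in it become stand-alone relationships (a -force flag exists for removing a group that still contains relationships):

  svctask rmrcconsistgrp CG_W2K3_MM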

6.6.13 Reversing a Metro Mirror relationship

To reverse a Metro Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship

The svctask switchrcrelationship command is used to reverse the roles of the primary and secondary VDisks when a stand-alone relationship is in a Consistent state. When issuing the command, the desired primary is specified.
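A sketch that makes the auxiliary VDisk the primary of the hypothetical relationship MMREL1:

  svctask switchrcrelationship -primary aux MMREL1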

6.6.14 Reversing a Metro Mirror consistency group

To reverse a Metro Mirror consistency group, use the svctask switchrcconsistgrp command.

svctask switchrcconsistgrp

The svctask switchrcconsistgrp command is used to reverse the roles of the primary and secondary VDisks when a consistency group is in a Consistent state. This change is applied to all of the relationships in the consistency group, and when issuing the command, the desired primary is specified.
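Similarly, a sketch that reverses the roles for every relationship in the hypothetical group:

  svctask switchrcconsistgrp -primary aux CG_W2K3_MM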

6.6.15 Background copy

Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online.

The quota of background copy (configured on the intercluster link) is divided evenly between the nodes that are performing background copy for one of the eligible relationships. This allocation is made without regard for the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy.

For intracluster relationships, each node is assigned a static quota of 25 MBps.

6.7 Global Mirror overview

In the following topics, we describe the Global Mirror copy service, which is an asynchronous remote copy service. It provides and maintains a consistent mirrored copy of a source VDisk on a target VDisk. Data is written from the source VDisk to the target VDisk asynchronously. This method was previously known as Asynchronous Peer-to-Peer Remote Copy.

Global Mirror works by defining a Global Mirror relationship between two VDisks of equal size and maintains the data consistency in an asynchronous manner. When a host writes to a source VDisk, the data is copied from the source VDisk cache to the target VDisk cache, and at the initiation of that data copy, the confirmation of I/O completion is transmitted back to the host.

Minimum firmware requirement: The minimum firmware requirement for Global Mirror functionality is V4.1.1. Any cluster or partner cluster that is not running this minimum level will not have Global Mirror functionality available. Even if you have a Global Mirror relationship running on a down-level partner cluster and you only want to use intracluster Global Mirror, the functionality will not be available to you.

SVC provides both intracluster and intercluster Global Mirror.

6.7.1 Intracluster Global Mirror

Although Global Mirror is available for intracluster use, it has no functional value for production use. Intracluster Metro Mirror provides the same capability with less overhead. However, leaving this functionality in place simplifies testing and allows for client experimentation (for example, to validate server failover on a single test cluster).

6.7.2 Intercluster Global Mirror

Intercluster Global Mirror operations require a pair of SVC clusters that are commonly separated by a number of moderately high bandwidth links. The two SVC clusters must be defined in an SVC cluster partnership to establish a fully functional Global Mirror relationship.

Limit: When a local and a remote fabric are connected together for Global Mirror purposes, the ISL hop count between a local node and a remote node must not exceed seven hops.

6.8 Remote copy techniques

Global Mirror is an asynchronous remote copy, which we explain next. To illustrate the differences between synchronous and asynchronous remote copy, we also explain synchronous remote copy.

6.8.1 Asynchronous remote copy

Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, write operations are completed on the primary site and the write acknowledgement is sent to the host before the write is received at the secondary site. An update of this write operation is sent to the secondary site at a later stage, which provides the capability to perform remote copy over distances exceeding the limitations of synchronous remote copy.

The Global Mirror function provides the same function as Metro Mirror remote copy, but over long distance links with higher latency, without requiring the hosts to wait for the full round-trip delay of the long distance link.

Figure 6-23 shows that a write operation to the master VDisk is acknowledged back to the host issuing the write before the write operation is mirrored to the cache for the auxiliary VDisk.

Figure 6-23 Global Mirror write sequence

The Global Mirror algorithms maintain a consistent image at the secondary at all times. They achieve this consistent image by identifying sets of I/Os that are active concurrently at the primary, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of Write Ordering and Read Stability that are described in this chapter.

The multiple I/Os within a single set are applied concurrently. The process that marshals the sequential sets of I/Os operates at the secondary cluster and, therefore, is not subject to the latency of the long distance link. These two elements of the protocol ensure that the throughput of the total cluster can be grown by increasing the cluster size while maintaining consistency across a growing data set.
310 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


In a failover scenario, where the secondary site needs to become the primary source of data, certain updates might be missing at the secondary site. Therefore, any applications that will use this data must have an external mechanism for recovering the missing updates and reapplying them, for example, a transaction log replay.

6.8.2 SVC Global Mirror features

SVC Global Mirror supports the following features:

► Asynchronous remote copy of VDisks dispersed over metropolitan-scale distances.
► SVC implements the Global Mirror relationship between a VDisk pair, with each VDisk in the pair being managed by an SVC cluster.
► SVC supports intracluster Global Mirror, where both VDisks belong to the same cluster (and I/O Group), although, as stated earlier, this functionality is better suited to Metro Mirror.
► SVC supports intercluster Global Mirror, where each VDisk belongs to its own SVC cluster. A given SVC cluster can be configured for partnership with between one and three other clusters.
► Intercluster and intracluster Global Mirror can be used concurrently within a cluster for separate relationships.
► SVC does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the SVC maintains a control link between the two clusters. This control link is used to control the state and to coordinate the updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Global Mirror I/O.
► SVC implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
► SVC maintains and polices a strong concept of consistency and makes this concept available to guide configuration activity.
► SVC implements flexible resynchronization support, enabling it to resynchronize VDisk pairs that have experienced write I/Os to both disks and to resynchronize only those regions that are known to have changed.
► Colliding writes are supported.
► An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to secondary VDisks.
► SVC 5.1 introduces Multiple Cluster Mirroring.

Colliding writes

Prior to V4.3.1, the Global Mirror algorithm required that only a single write be active on any given 512-byte LBA of a VDisk. If a further write is received from a host while the secondary write is still active, even though the primary write might have completed, the new host write is delayed until the secondary write is complete. This restriction is needed in case a series of writes to the secondary have to be retried (called “reconstruction”). Conceptually, the data for reconstruction comes from the primary VDisk.

If multiple writes are allowed to be applied to the primary for a given sector, only the most recent write gets the correct data during reconstruction, and if reconstruction is interrupted for any reason, the intermediate state of the secondary is Inconsistent.



Applications that deliver such write activity will not achieve the performance that Global Mirror is intended to support. A VDisk statistic is maintained about the frequency of these collisions.

From V4.3.1 onward, an attempt is made to allow multiple writes to a single location to be outstanding in the Global Mirror algorithm. There is still a need for primary writes to be serialized, and the intermediate states of the primary data must be kept in a non-volatile journal while the writes are outstanding to maintain the correct write ordering during reconstruction. Reconstruction must never overwrite data on the secondary with an earlier version. The VDisk statistic monitoring colliding writes is now limited to those writes that are not affected by this change.

Figure 6-24 shows a colliding write sequence example.

Figure 6-24 Colliding writes example

These numbers correspond to the numbers in Figure 6-24:
► (1) Original Global Mirror write in progress
► (2) Second write to the same sector; the in-flight write is logged to the journal file
► (3 and 4) Second write to the secondary cluster
► (5) Initial write completes

Delay simulation

An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to secondary VDisks. This feature allows testing to be performed that detects colliding writes, and therefore, this feature can be used to test an application before the full deployment of the feature. The feature can be enabled separately for intracluster and intercluster Global Mirror. You specify the delay setting by using the chcluster command and view it by using the lscluster command. The gm_intra_delay_simulation field expresses the amount of time that intracluster secondary I/Os are delayed. The gm_inter_delay_simulation field expresses the amount of time that intercluster secondary I/Os are delayed. A value of zero disables the feature.
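As a sketch, assuming the chcluster delay parameters that correspond to the two fields above and an illustrative cluster name, the following commands apply a 20-millisecond delay to intercluster secondary writes and then display the resulting settings; setting the value back to 0 disables the simulation again:

svctask chcluster -gminterdelaysimulation 20
svcinfo lscluster ITSO_SVC1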



Multiple Cluster Mirroring

SVC 5.1 introduces Multiple Cluster Mirroring. The rules for a Global Mirror Multiple Cluster Mirroring environment are the same as the rules in a Metro Mirror environment. For more detailed information, see 6.5.4, “Multiple Cluster Mirroring” on page 284.

6.9 Global Mirror relationships

Global Mirror relationships are similar to FlashCopy mappings. They can be stand-alone or combined in consistency groups. You can issue the start and stop commands either against the stand-alone relationship or the consistency group.

Figure 6-25 illustrates the Global Mirror relationship.

Figure 6-25 Global Mirror relationship

A Global Mirror relationship is composed of two VDisks that are equal in size. The master VDisk and the auxiliary VDisk can be in the same I/O Group, within the same SVC cluster (intracluster Global Mirror), or can be on separate SVC clusters that are defined as SVC partners (intercluster Global Mirror).

Rules:
► A VDisk can only be part of one Global Mirror relationship at a time.
► A VDisk that is a FlashCopy target cannot be part of a Global Mirror relationship.
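As an illustrative sketch (the VDisk, cluster, and relationship names are assumptions), an intercluster Global Mirror relationship is created with the mkrcrelationship command; the -global flag distinguishes it from a Metro Mirror relationship:

svctask mkrcrelationship -master GM_Master_VDisk -aux GM_Aux_VDisk -cluster ITSO_SVC2 -global -name GM_REL1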

6.9.1 Global Mirror relationship between primary and secondary VDisks

When creating a Global Mirror relationship, the master VDisk is initially assigned as the primary, and the auxiliary VDisk is initially assigned as the secondary. This design implies that the initial copy direction is mirroring the master VDisk to the auxiliary VDisk. After the initial synchronization is complete, the copy direction can be changed, if appropriate.
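As a sketch (the relationship name is an assumption), the switchrcrelationship command reverses the copy direction of a synchronized relationship, making the auxiliary VDisk the primary:

svctask switchrcrelationship -primary aux GM_REL1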

In the most common applications of Global Mirror, the master VDisk contains the production copy of the data and is used by the host application, while the auxiliary VDisk contains the mirrored copy of the data and is used for failover in DR scenarios. The terms master and auxiliary help explain this use. If Global Mirror is applied differently, the terms master and auxiliary VDisks need to be interpreted appropriately.

6.9.2 Importance of write ordering

Many applications that use block storage have a requirement to survive failures, such as loss of power or a software crash, and to not lose data that existed prior to the failure. Because many applications must perform large numbers of update operations in parallel to that block storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption.



An application that performs a high volume of database updates is usually designed with the concept of dependent writes. With dependent writes, it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine the application’s algorithms and can lead to problems, such as detected or undetected data corruption.

6.9.3 Dependent writes that span multiple VDisks

The following scenario illustrates a simple example of a sequence of dependent writes and, in particular, what can happen if they span multiple VDisks. Consider the following typical sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update is to be performed.
2. A second write is executed to update the database.
3. A third write is executed to update the database log, indicating that the database update has completed successfully.

Figure 6-26 illustrates the write sequence.

Figure 6-26 Dependent writes for a database

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step.

Database logs: All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored to a point beyond the last full, offline backup, logs are required to roll the data forward to the point of failure.



But imagine if the database log and the database are on separate VDisks and a Global Mirror relationship is stopped during this update. In this case, you must consider the possibility that the Global Mirror relationship for the VDisk with the database file is stopped slightly before the VDisk containing the database log.

If this happens, it is possible that the secondary VDisks see writes (1) and (3) but not write (2). Then, if the database was restarted using the data available from the secondary disks, the database log indicates that the transaction had completed successfully, when it did not. In this scenario, the integrity of the database is in question.

6.9.4 Global Mirror consistency groups

Global Mirror consistency groups address the issue of dependent writes across VDisks, where the objective is to preserve data consistency across multiple Global Mirrored VDisks. Consistency groups ensure a consistent data set when applications have related data that spans multiple VDisks.

A Global Mirror consistency group can contain an arbitrary number of relationships up to the maximum number of Global Mirror relationships that is supported by the SVC cluster. Global Mirror commands can be issued to a Global Mirror consistency group, and thereby simultaneously to all Global Mirror relationships that are defined within that consistency group, or to a single Global Mirror relationship that is not part of a consistency group. For example, when issuing a Global Mirror start command to the consistency group, all of the Global Mirror relationships in the consistency group are started at the same time.
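As a sketch (all object names are assumptions), a consistency group is created across the cluster partnership, an existing relationship is added to it, and the group is then started as a single entity:

svctask mkrcconsistgrp -name CG_GM1 -cluster ITSO_SVC2
svctask chrcrelationship -consistgrp CG_GM1 GM_REL1
svctask startrcconsistgrp CG_GM1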

Figure 6-27 on page 316 illustrates the concept of Global Mirror consistency groups. Because GM_Relationship 1 and GM_Relationship 2 are part of the consistency group, they can be handled as one entity, while the stand-alone GM_Relationship 3 is handled separately.



Figure 6-27 Global Mirror consistency group

Certain uses of Global Mirror require the manipulation of more than one relationship. Global Mirror consistency groups can provide the ability to group relationships so that they are manipulated in unison. Global Mirror relationships within a consistency group can be in any form:

► Global Mirror relationships can be part of a consistency group, or be stand-alone and therefore handled as single instances.
► A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.
► All of the relationships in a consistency group must have matching master and auxiliary SVC clusters.
SVC clusters.<br />

Although it is possible to use consistency groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a consistency group mean that certain configuration commands are prohibited. These specific configuration commands are not prohibited if the relationship is not part of a consistency group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single consistency group. In the event of an error, there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Global Mirror rejects attempts to enable access to the secondary VDisks of either application.

If one application finishes its background copy much more quickly than the other application, Global Mirror still refuses to grant access to its secondary VDisk. Even though granting access is safe in this case, Global Mirror policy refuses access to the entire consistency group if any part of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state model. All of the relationships in a consistency group that is not empty have the same state as the consistency group.

6.10 Global Mirror

This section discusses how Global Mirror works.

6.10.1 Intercluster communication and zoning

All intercluster communication is performed through the SAN. Prior to creating intercluster Global Mirror relationships, you must create a partnership between the two clusters.

SVC node ports on each SVC cluster must be able to access each other to facilitate the partnership creation. Therefore, you must define a zone in each fabric for intercluster communication; see Chapter 3, “Planning and configuration” on page 65 for more information.
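After the zoning is in place, the partnership is defined from both clusters with the mkpartnership command. As a sketch, the cluster names and the 50 MBps background copy bandwidth that follow are assumptions:

# Issued on cluster ITSO_SVC1:
svctask mkpartnership -bandwidth 50 ITSO_SVC2
# Issued on cluster ITSO_SVC2:
svctask mkpartnership -bandwidth 50 ITSO_SVC1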

6.10.2 SVC cluster partnership

When the SVC cluster partnership has been defined on both clusters, further communication facilities between the nodes in each of the clusters are established. The communication facilities consist of these components:

► A single control channel, which is used to exchange and coordinate configuration information
► I/O channels between each of the nodes in the clusters

These channels are maintained and updated as nodes appear and disappear and as links fail, and they are repaired to maintain operation where possible. If communication between the SVC clusters is interrupted or lost, an error is logged (and, consequently, Global Mirror relationships will stop).

To handle error conditions, you can configure the SVC to raise SNMP traps or send e-mail. Alternatively, if Tivoli Storage Productivity Center for Replication is in place, it can monitor the link’s status and issue an alert by using SNMP traps or e-mail as well.

6.10.3 Maintenance of the intercluster link

All SVC nodes maintain a database of the other devices that are visible on the fabric. This database is updated as devices appear and disappear.

Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement the clustering and functional protocols of SVC.

Nodes that are in separate clusters do not exchange messages after the initial discovery is complete unless they have been configured together to perform Global Mirror.



The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among the logins that exist between those nodes.

If the designated node fails (or if all of its logins to the remote cluster fail), a new node is chosen to carry control traffic. This event causes I/O to pause, but it does not cause relationships to become ConsistentStopped.

6.10.4 Distribution of work among nodes

Global Mirror VDisks must have their preferred nodes evenly distributed among the nodes of the clusters. Each VDisk within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group. Global Mirror also uses this property to route I/O between clusters.

Figure 6-28 shows the relationship between VDisks and their preferred nodes that yields the best performance.

Figure 6-28 Preferred VDisk Global Mirror relationship

6.10.5 Background copy performance

Background copy resources for intercluster remote copy are available within the two nodes of an I/O Group to perform background copy at a maximum of 200 MBps (counting both the data read and the data written) in total. The background copy performance is subject to sufficient RAID controller bandwidth. Performance is also subject to other potential bottlenecks (such as the intercluster fabric) and possible contention from host I/O for the SVC bandwidth resources.

Background copy I/O is scheduled to avoid bursts of activity that might have an adverse effect on system behavior. An entire grain of tracks on one VDisk is processed at around the same time, but not as a single I/O. Double buffering is used to try to take advantage of sequential performance within a grain. However, the next grain within the VDisk might not be scheduled for a while. Multiple grains might be copied simultaneously, which might be enough to satisfy the requested rate, unless the available resources cannot sustain the requested rate.

Background copy proceeds from the low LBA to the high LBA in sequence to avoid convoying conflicts with FlashCopy, which operates in the opposite direction. Background copy is not expected to create convoy conflicts with sequential applications, because it tends to vary the disks it accesses more often.



6.10.6 Space-efficient background copy

Prior to SVC 4.3.1, if a primary VDisk was space-efficient, the background copy process caused the secondary to become fully allocated. When both the primary and secondary clusters are running SVC 4.3.1 or higher, Metro Mirror and Global Mirror relationships can preserve the space-efficiency of the primary.

Conceptually, the background copy process detects an unallocated region of the primary and sends a special “zero buffer” to the secondary. If the secondary VDisk is space-efficient, and the region is unallocated, the special buffer prevents a write (and, therefore, an allocation). If the secondary VDisk is not space-efficient, or the region in question is an allocated region of a Space-Efficient VDisk, a buffer of “real” zeros is synthesized on the secondary and written as normal.

If the secondary cluster is running code prior to SVC 4.3.1, this version of the code is detected by the primary cluster, and a buffer of “real” zeros is transmitted and written on the secondary. The background copy rate controls the rate at which the virtual capacity is copied.

6.11 Global Mirror process

There are several steps in the Global Mirror process (a command-level sketch follows the list):

1. An SVC cluster partnership is created between two SVC clusters (for intercluster Global Mirror).
2. A Global Mirror relationship is created between two VDisks of the same size.
3. To manage multiple Global Mirror relationships as one entity, the relationships can be made part of a Global Mirror consistency group to ensure data consistency across multiple Global Mirror relationships, or simply for ease of management.
4. The Global Mirror relationship is started; when the background copy has completed, the relationship is consistent and synchronized.
5. When synchronized, the secondary VDisk holds a copy of the production data at the primary that can be used for DR.
6. To access the auxiliary VDisk, the Global Mirror relationship must be stopped with the access option enabled before write I/O is submitted to the secondary.
7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.
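The following hedged sketch maps these steps to CLI commands for a stand-alone intercluster relationship; all object names are assumptions:

# Step 1: create the partnership (also issue the matching command on ITSO_SVC2)
svctask mkpartnership -bandwidth 50 ITSO_SVC2
# Step 2: create the relationship between two VDisks of the same size
svctask mkrcrelationship -master GM_Master_VDisk -aux GM_Aux_VDisk -cluster ITSO_SVC2 -global -name GM_REL1
# Step 4: start the relationship, then monitor it until it is consistent and synchronized
svctask startrcrelationship GM_REL1
svcinfo lsrcrelationship GM_REL1
# Step 6: stop with access enabled so the auxiliary VDisk accepts write I/O (for example, for DR)
svctask stoprcrelationship -access GM_REL1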

6.11.1 Methods of synchronization

This section describes three methods that can be used to establish a relationship.

Full synchronization after creation

Full synchronization after creation is the default method. It is the simplest method, and it requires no administrative activity apart from issuing the necessary commands. However, in certain environments, the bandwidth that is available makes this method unsuitable.

Use this sequence for a single relationship:
► A new relationship is created (mkrcrelationship is issued) without specifying the -sync flag.
► The new relationship is started (startrcrelationship is issued) without the -clean flag.
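As a sketch (names are assumptions), the default method therefore reduces to two commands:

svctask mkrcrelationship -master GM_Master_VDisk -aux GM_Aux_VDisk -cluster ITSO_SVC2 -global -name GM_REL1
svctask startrcrelationship GM_REL1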



Synchronized before creation

In this method, the administrator must ensure that the master and auxiliary VDisks contain identical data before creating the relationship. There are two ways to ensure that the master and auxiliary VDisks contain identical data:

► Both disks are created with the security delete (-fmtdisk) feature to make all data zero.
► A complete tape image (or other method of moving data) is copied from one disk to the other disk.

With either technique, no write I/O must take place on either the master or the auxiliary before the relationship is established.

Then, the administrator must ensure that these commands are issued:
► A new relationship is created (mkrcrelationship is issued) with the -sync flag.
► The new relationship is started (startrcrelationship is issued) without the -clean flag.

If these steps are not performed correctly, the relationship is reported as being consistent when it is not. This situation most likely makes any secondary disk useless. This method has an advantage over full synchronization: It does not require all of the data to be copied over a constrained link. However, if the data must be copied, the master and auxiliary disks cannot be used until the copy is complete, which might be unacceptable.
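As a sketch (names are assumptions), the -sync flag asserts that the two VDisks already contain identical data, so the background copy is skipped:

svctask mkrcrelationship -master GM_Master_VDisk -aux GM_Aux_VDisk -cluster ITSO_SVC2 -global -sync -name GM_REL2
svctask startrcrelationship GM_REL2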

Quick synchronization after creation

In this method, the administrator must still copy data from the master to the auxiliary, but the data can be used without stopping the application at the master. The administrator must ensure that these commands are issued:

► A new relationship is created (mkrcrelationship is issued) with the -sync flag.
► The new relationship is stopped (stoprcrelationship is issued) with the -access flag.
► A tape image (or other method of transferring data) is used to copy the entire master disk to the auxiliary disk.

After the copy is complete, the administrator must ensure that the relationship is started (startrcrelationship is issued) with the -clean flag.

With this technique, only the data that has changed since the relationship was created, including all regions that were incorrect in the tape image, is copied from the master to the auxiliary. As with “Synchronized before creation” on page 320, the copy step must be performed correctly, or else the auxiliary is useless, even though the copy reports it as being synchronized.
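A hedged sketch of this sequence (names are assumptions):

svctask mkrcrelationship -master GM_Master_VDisk -aux GM_Aux_VDisk -cluster ITSO_SVC2 -global -sync -name GM_REL3
svctask stoprcrelationship -access GM_REL3
# ...copy the master image to the auxiliary by tape or another method...
svctask startrcrelationship -primary master -clean GM_REL3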

Global Mirror states and events

In this section, we explain the states of a Global Mirror relationship and the series of events that modify these states.

Figure 6-29 on page 321 shows an overview of the states that apply to a Global Mirror relationship in the connected state.



Figure 6-29 Global Mirror state diagram

When creating the Global Mirror relationship, you can specify whether the auxiliary VDisk is already in sync with the master VDisk, and the background copy process is then skipped. This capability is especially useful when creating Global Mirror relationships for VDisks that have been created with the format option. The following steps explain the Global Mirror state diagram (these numbers correspond to the numbers in Figure 6-29):

► Step 1:
a. The Global Mirror relationship is created with the -sync option, and the Global Mirror relationship enters the Consistent stopped state.
b. The Global Mirror relationship is created without specifying that the master and auxiliary VDisks are in sync, and the Global Mirror relationship enters the Inconsistent stopped state.
► Step 2:
a. When starting a Global Mirror relationship in the Consistent stopped state, it enters the Consistent synchronized state. This state implies that no updates (write I/O) have been performed on the primary VDisk while in the Consistent stopped state. Otherwise, you must specify the -force option, and the Global Mirror relationship then enters the Inconsistent copying state, while the background copy is started.
b. When starting a Global Mirror relationship in the Inconsistent stopped state, it enters the Inconsistent copying state, while the background copy is started.
► Step 3:
a. When the background copy completes, the Global Mirror relationship transitions from the Inconsistent copying state to the Consistent synchronized state.



► Step 4:
a. When stopping a Global Mirror relationship in the Consistent synchronized state, where specifying the -access option enables write I/O on the secondary VDisk, the Global Mirror relationship enters the Idling state.
b. To enable write I/O on the secondary VDisk when the Global Mirror relationship is in the Consistent stopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Global Mirror relationship enters the Idling state.
► Step 5:
a. When starting a Global Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. If no write I/O has been performed (to either the master or auxiliary VDisk) while in the Idling state, the Global Mirror relationship enters the Consistent synchronized state.
b. If write I/O has been performed to either the master or the auxiliary VDisk, you must specify the -force option. The Global Mirror relationship then enters the Inconsistent copying state, while the background copy is started.
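As a sketch of steps 4 and 5 (the relationship name is an assumption), write access is enabled on the secondary VDisk and mirroring is later resumed with an explicit copy direction; -force is required only if write I/O occurred while Idling:

svctask stoprcrelationship -access GM_REL1
svctask startrcrelationship -primary master -force GM_REL1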

6.11.2 State overview

If the Global Mirror relationship is intentionally stopped or experiences an error, a state transition is applied. For example, Global Mirror relationships in the Consistent synchronized state enter the Consistent stopped state, and Global Mirror relationships in the Inconsistent copying state enter the Inconsistent stopped state.

In a case where the connection is broken between the SVC clusters in a partnership, all of the (intercluster) Global Mirror relationships enter a Disconnected state. For further information, refer to “Connected versus disconnected” on page 322.

Common configuration and state model: Stand-alone relationships and consistency groups share a common configuration and state model. All of the Global Mirror relationships in a consistency group that is not empty have the same state as the consistency group.

The SVC-defined concepts of state are key to understanding the configuration concepts. We explain them in more detail next.

Connected versus disconnected

This distinction can arise when a Global Mirror relationship is created with the two VDisks in separate clusters.

Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected.

In this scenario, each cluster is left with half of the relationship, and each cluster has only a portion of the information that was available to it before. Only a subset of the normal configuration activity is available.



The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and which configuration commands are permitted.

When the clusters can communicate again, the relationships become connected again. Global Mirror automatically reconciles the two state fragments, taking into account any configuration activity or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or it can enter another connected state.

Relationships that are configured between VDisks in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent

Relationships or consistency groups that contain relationships can be described as being consistent or inconsistent. The consistent or inconsistent property describes the state of the data on the secondary VDisk in relation to the data on the primary VDisk. Consider the consistent or inconsistent property to be a property of the secondary VDisk.

A secondary is described as consistent if it contains data that might have been read by a host system from the primary if power had failed at an imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the primary up to the recovery point:

► The secondary VDisk contains the data from all writes to the primary for which the host received successful completion and that data has not been overwritten by a subsequent write (before the recovery point).
► For writes for which the host did not receive successful completion (that is, the host received bad completion or no completion at all), if the host subsequently performed a read from the primary of that data, that read returned successful completion, and no later write was sent (before the recovery point), the secondary contains the same data as the data that was returned by the read from the primary.

From the point of view of an application, consistency means that a secondary VDisk contains the same data as the primary VDisk at the recovery point (the time at which the imaginary power failure occurred).

If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application will be able to use the secondary and begin operation just as though it had been restarted after the hypothetical power failure.

Again, the application depends on the key properties of consistency for correct operation at the secondary:
► Write ordering
► Read stability

If a relationship, or a set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:
► The application might decide that the data is corrupt and crash or exit with an error code.
► The application might fail to detect that the data is corrupt and return erroneous data.
► The application might work without a problem.

Because of the risk of data corruption, and, in particular, undetected data corruption, Global Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.



You can apply consistency as a concept to a single relationship or to a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across a number of disks that are accessed through multiple systems, and therefore, consistency must operate across all of those disks.

When deciding how to use consistency groups, the administrator must consider the scope of an application’s data, taking into account all of the interdependent systems that communicate and exchange information.

If two programs or systems communicate and store details as a result of the information exchanged, either of the following actions must be taken:

► All of the data that is accessed by the group of systems must be placed into a single consistency group.
► The systems must be recovered independently (each within its own consistency group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized

A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the primary and secondary VDisks differ only in the regions where writes are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at an earlier point in time. Write I/O might have continued to a primary and not have been copied to the secondary. This state arises when it becomes impossible to keep up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the secondary.

When communication is lost for an extended period of time, Global Mirror tracks the changes that happen at the primary, but not the order of these changes or the details of these changes (write data). When communication is restored, it is impossible to make the secondary synchronized without sending write data to the secondary out-of-order and, therefore, losing consistency.

You can use two policies to cope with this situation:

► Make a point-in-time copy of the consistent secondary before allowing the secondary to become inconsistent. In the event of a disaster, before consistency is achieved again, the point-in-time copy target provides a consistent, though out-of-date, image.
► Accept the loss of consistency, and the loss of a useful secondary, while making it synchronized.

6.11.3 Detailed states

The following sections detail the states that are portrayed to the user, for either consistency groups or relationships. They also detail the extra information that is available in each state, and they describe the various major states to provide guidance regarding the available configuration commands.

InconsistentStopped

InconsistentStopped is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is inaccessible for either read or write I/O. A copy process needs to be started to make the secondary consistent.

This state is entered when the relationship or consistency group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop.

A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but it has no effect.

If the relationship or consistency group becomes disconnected, the secondary side transitions to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

InconsistentCopying

InconsistentCopying is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is inaccessible for either read or write I/O.

This state is entered after a start command is issued to an InconsistentStopped relationship or consistency group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group.

In this state, a background copy process runs, which copies data from the primary to the secondary VDisk.

In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress.

A persistent error or a stop command places the relationship or consistency group into the InconsistentStopped state. A start command is accepted, but it has no effect.

If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transitions to the ConsistentSynchronized state.

If the relationship or consistency group becomes disconnected, the secondary side transitions to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

ConsistentStopped

ConsistentStopped is a connected state. In this state, the secondary contains a consistent image, but it might be out-of-date with respect to the primary.

This state can arise when a relationship is in the ConsistentSynchronized state and experiences an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to true.

Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the synchronized attribute is set to false). In this case, to re-establish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this situation, and the relationship or consistency group transitions to InconsistentCopying. Issue this command only after all of the outstanding errors are repaired.

In the unusual case where the primary and secondary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or consistency group to ConsistentSynchronized and reverses the roles of the primary and the secondary.



If the relationship or consistency group becomes disconnected, the secondary side transitions to ConsistentDisconnected. The primary side transitions to IdlingDisconnected.

An informational status log is generated every time a relationship or consistency group enters the ConsistentStopped state with a status of Online. This log can be configured to generate an SNMP trap, which provides a trigger for automation software to consider issuing a start command following a loss of synchronization.

ConsistentSynchronized

ConsistentSynchronized is a connected state. In this state, the primary VDisk is accessible for read and write I/O. The secondary VDisk is accessible for read-only I/O.

Writes that are sent to the primary VDisk are applied to both the primary and secondary VDisks. Before a write is completed to the host, either successful completion must be received for both writes, the write must be failed to the host, or the relationship must transition out of the ConsistentSynchronized state.

A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state.

A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the primary and secondary roles.

A start command is accepted, but it has no effect.

If the relationship or consistency group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling

Idling is a connected state. Both the master and auxiliary disks operate in the primary role. Consequently, both the master and auxiliary disks are accessible for write I/O.

In this state, the relationship or consistency group accepts a start command. Global Mirror maintains a record of regions on each disk that received write I/O while Idling. This record is used to determine which areas need to be copied following a start command.

The start command must specify the new copy direction. A start command can cause a loss of consistency if either VDisk in any relationship has received write I/O, which is indicated by the synchronized status. If the start command leads to loss of consistency, you must specify the -force parameter.

Following a start command, the relationship or consistency group transitions to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency.

Also, while in this state, the relationship or consistency group accepts a -clean option on the start command. If the relationship or consistency group becomes disconnected, both sides change their state to IdlingDisconnected.

IdlingDisconnected

IdlingDisconnected is a disconnected state. The VDisk or disks in this half of the relationship or consistency group are all in the primary role and accept read or write I/O.

The major priority in this state is to recover the link and reconnect the relationship or consistency group.



No configuration activity is possible (except for deletes or stops) until the relationship is reconnected. At that point, the relationship transitions to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or consistency group, which depends on these factors:

► The state when it became disconnected
► The write activity since it was disconnected
► The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.

While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (the synchronized attribute transitions from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an error log is raised. This error log is the same error log that is raised when the same situation arises in the ConsistentSynchronized state.

InconsistentDisconnected

InconsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and do not accept read or write I/O.

No configuration activity, except for deletes, is permitted until the relationship reconnects.

When the relationship or consistency group reconnects, the relationship becomes InconsistentCopying automatically unless either of these conditions exists:
► The relationship was InconsistentStopped when it became disconnected.
► The user issued a stop while disconnected.

In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected

ConsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and accept read I/O but not write I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime, which is the point in time that consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or consistency group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to true transits the relationship or consistency group to the IdlingDisconnected state. This state allows write I/O to be performed to the secondary VDisk and is used as part of a DR scenario.

When the relationship or consistency group reconnects, the relationship or consistency group becomes ConsistentSynchronized only if this state does not lead to a loss of consistency. This is the case provided that these conditions are true:

► The relationship was ConsistentSynchronized when it became disconnected.
► No writes received successful completion at the primary while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.



Empty

This state only applies to consistency groups. It is the state of a consistency group that has no relationships and no other state information to show.

It is entered when a consistency group is first created. It is exited when the first relationship is added to the consistency group, at which point, the state of the relationship becomes the state of the consistency group.

6.11.4 Practical use of Global Mirror

To use Global Mirror, you must define a relationship between two VDisks.

When creating the Global Mirror relationship, one VDisk is defined as the master, and the other VDisk is defined as the auxiliary. The relationship between the two copies is asymmetric. When the Global Mirror relationship is created, the master VDisk is initially considered the primary copy (often referred to as the source), and the auxiliary VDisk is considered the secondary copy (often referred to as the target).

The master VDisk is the production VDisk, and updates to this copy are mirrored in real time to the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was created are destroyed.

Switching the copy direction: The copy direction for a Global Mirror relationship can be switched so that the auxiliary VDisk becomes the primary and the master VDisk becomes the secondary.

While the Global Mirror relationship is active, the secondary copy (VDisk) is inaccessible for host application write I/O. The SVC allows read-only access to the secondary VDisk when it contains a "consistent" image. This read-only access is only intended to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimal delay, if required.

For example, many operating systems need to read logical block address (LBA) 0 (zero) to configure a logical unit. Although read access is allowed at the secondary, in practice the data on the secondary volumes cannot be read by a host, because most operating systems write a "dirty bit" to the file system when it is mounted. Because this write operation is not allowed on the secondary volume, the volume cannot be mounted.

This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the secondary and later write I/Os that are performed at the primary.

To enable access to the secondary VDisk for host operations, you must stop the Global Mirror relationship by specifying the -access parameter.

While access to the secondary VDisk for host operations is enabled, you must instruct the host to mount the VDisk and perform other related tasks before the application can be started or instructed to perform a recovery process.

Using a secondary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host that is involved in establishing operation on the secondary copy are substantial. The goal is to make this failover rapid (much faster than recovering from a backup copy), but it is not seamless.



You can automate the failover process by using failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions

Table 6-9 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a VDisk.

Table 6-9   VDisk valid combinations

FlashCopy           Metro Mirror or Global     Metro Mirror or Global
                    Mirror Primary             Mirror Secondary
FlashCopy Source    Supported                  Supported
FlashCopy Target    Not supported              Not supported

6.11.6 Global Mirror configuration limits

Table 6-10 lists the Global Mirror configuration limits.

Table 6-10   Global Mirror configuration limits

Parameter                                  Value
Number of Metro Mirror consistency         256
groups per cluster
Number of Metro Mirror relationships       8,192
per cluster
Number of Metro Mirror relationships       8,192
per consistency group
Total VDisk size per I/O Group             A per I/O Group limit of 1,024 TB exists on the quantity of
                                           Primary and Secondary VDisk address spaces that can
                                           participate in Metro Mirror and Global Mirror relationships.
                                           This maximum configuration will consume all 512 MB of
                                           bitmap space for the I/O Group and allow no FlashCopy
                                           bitmap space.

6.12 Global Mirror commands

Here, we summarize several of the most important Global Mirror commands. For complete details about all of the Global Mirror commands, see IBM System Storage SAN Volume Controller: Command-Line Interface User's Guide, SC26-7903.

The command set for Global Mirror contains two broad groups:

► Commands to create, delete, and manipulate relationships and consistency groups
► Commands that cause state changes

Where a configuration command affects more than one cluster, Global Mirror performs the work to coordinate configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected, and those commands fail with no effect when the clusters are disconnected.



Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Global Mirror when the clusters are reconnected.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This action is significant for defining the context for a CreateRelationship (mkrcrelationship) command or a CreateConsistencyGroup (mkrcconsistgrp) command, in which case, the cluster receiving the command is called the local cluster.

The exception is the command that sets clusters into a Global Mirror partnership. The administrator must issue the mkpartnership command to both the local and the remote cluster.

The commands are described here as an abstract command set. You can implement these commands in one of two ways:

► A command-line interface (CLI), which can be used for scripting and automation
► A graphical user interface (GUI), which can be used for one-off tasks

6.12.1 Listing the available SVC cluster partners

Before creating an SVC cluster partnership, we use the svcinfo lsclustercandidate command to identify the candidate clusters.

svcinfo lsclustercandidate

Use the svcinfo lsclustercandidate command to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Global Mirror relationships.

To display the characteristics of the cluster, use the svcinfo lscluster command, specifying the name of the cluster.
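As an illustrative sketch (the cluster name and ID are invented, and the exact output columns can vary by code level), a remote cluster that is not yet in a partnership might be listed as follows:

IBM_2145:ITSO_SVC_1:admin>svcinfo lsclustercandidate
id                configured  cluster_name
0000020063A05A06  no          ITSO_SVC_2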

svctask chcluster

Three parameters of the svctask chcluster command apply to Global Mirror:

► -gmlinktolerance link_tolerance

This parameter specifies the maximum period of time that the system will tolerate delay before stopping Global Mirror relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds. The default value is 300. Do not change this value except under the direction of IBM Support.

► -gminterdelaysimulation link_tolerance

This parameter specifies the number of milliseconds that I/O activity (intercluster copying to a secondary VDisk) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intercluster Global Mirror relationship separately.

► -gmintradelaysimulation link_tolerance

This parameter specifies the number of milliseconds that I/O activity (intracluster copying to a secondary VDisk) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long-distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intracluster Global Mirror relationship separately.

Use the svctask chcluster command to adjust these values:

svctask chcluster -gmlinktolerance 300



You can view all of these parameter values with the svcinfo lscluster command, specifying the name of the cluster.
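For illustration (the cluster name is ours and the delay values are arbitrary test settings), the delay simulation parameters can be set and then verified in the same way:

svctask chcluster -gminterdelaysimulation 20
svctask chcluster -gmintradelaysimulation 10
svcinfo lscluster ITSO_SVC_1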

gmlinktolerance

The gmlinktolerance parameter deserves a particular and detailed note.

If poor response extends past the specified tolerance, a 1920 error is logged and one or more Global Mirror relationships are automatically stopped, which protects the application hosts at the primary site. During normal operation, application hosts experience a minimal effect from the response times, because the Global Mirror feature uses asynchronous replication.

However, if Global Mirror operations experience degraded response times from the secondary cluster for an extended period of time, I/O operations begin to queue at the primary cluster. This queue results in an extended response time to application hosts. In this situation, the gmlinktolerance feature stops Global Mirror relationships, and the application hosts' response time returns to normal. After a 1920 error has occurred, the Global Mirror auxiliary VDisks are no longer in the consistent_synchronized state until you fix the cause of the error and restart your Global Mirror relationships. For this reason, ensure that you monitor the cluster to track when this 1920 error occurs.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero). However, the gmlinktolerance feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature in the following circumstances:

► During SAN maintenance windows, where degraded performance is expected from SAN components and application hosts can withstand extended response times from Global Mirror VDisks.

► During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the Global Mirror relationships. For example, if you test by using an I/O generator that is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the Global Mirror relationships. Disabling the gmlinktolerance feature prevents this result, at the risk of exposing the test host to extended response times.

We suggest using a script to periodically monitor the Global Mirror status. Example 6-2 shows an example of a ksh script that checks the Global Mirror status.

Example 6-2 Script example

[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/sh
#
# Description
#
# GM_STATUS        Global Mirror status variable
# HOSTsvcNAME      SVC cluster IP address
# PARA_TEST        Consistent synchronized variable
# PARA_TESTSTOPIN  Inconsistent stopped variable
# PARA_TESTSTOP    Consistent stopped variable
# IDCONS           Consistency group ID variable
#
# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0
# Start program: loop forever when no argument is given
CICLI="false"
if [[ $1 == "" ]]
then
   CICLI="true"
fi
while $CICLI
do
   # read the state field (column 8) of the consistency group
   GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
   echo "`date` Global Mirror STATUS $GM_STATUS" >> $FLOG
   if [[ $GM_STATUS = $PARA_TEST ]]
   then
      sleep 600
   else
      sleep 600
      GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
      if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
      then
         ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
         TESTEX=`echo $?`
         echo "`date` Global Mirror RESTARTED.......... with RC=$TESTEX" >> $FLOG
      fi
      GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
      # a group that is still stopped at this point failed to restart
      if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
      then
         echo "`date` ERROR Global Mirror restart failed "
      else
         echo "`date` Global Mirror restarted "
      fi
      sleep 600
   fi
   ((VAR+=1))
done

The script in Example 6-2 on page 331 performs these functions:

► Check the Global Mirror status every 600 seconds.

► If the status is Consistent_Synchronized, wait another 600 seconds and test again.

► If the status is Consistent_Stopped or Inconsistent_Stopped, wait another 600 seconds and then try to restart Global Mirror. If the status is Consistent_Stopped or Inconsistent_Stopped, we probably have a 1920 error scenario, which means that we might have a performance problem. Waiting 600 seconds before restarting Global Mirror can give the SVC enough time to deliver the high workload that is requested by the server. Because Global Mirror has been stopped for 10 minutes (600 seconds), the secondary copy is now out-of-date by this amount of time and must be resynchronized.

Sample script: The script that is described in Example 6-2 on page 331 is supplied as-is.



A 1920 error indicates that one or more of the SAN components cannot provide the performance that is required by the application hosts. This situation can be temporary (for example, the result of a maintenance activity) or permanent (for example, the result of a hardware failure or an unexpected host I/O workload).

If you experience 1920 errors, we suggest that you install a SAN performance analysis tool, such as IBM Tivoli Storage Productivity Center, and make sure that it is correctly configured and monitoring statistics, so that you can look for problems and try to prevent them.

6.12.2 Creating an SVC cluster partnership

To create an SVC cluster partnership, use the svctask mkpartnership command.

svctask mkpartnership

Use the svctask mkpartnership command to establish a one-way Global Mirror partnership between the local cluster and a remote cluster.

To establish a fully functional Global Mirror partnership, you must issue this command on both clusters. This step is a prerequisite for creating Global Mirror relationships between VDisks on the SVC clusters.

When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster. If it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.
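As a sketch (the cluster names and the bandwidth value are illustrative), the command is issued once from each side of the partnership:

IBM_2145:ITSO_SVC_1:admin>svctask mkpartnership -bandwidth 50 ITSO_SVC_2
IBM_2145:ITSO_SVC_2:admin>svctask mkpartnership -bandwidth 50 ITSO_SVC_1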

Background copy bandwidth effect on foreground I/O latency

The background copy bandwidth determines the rate at which the background copy is attempted for Global Mirror. The background copy bandwidth can affect foreground I/O latency in one of three ways:

► The following results can occur if the background copy bandwidth is set too high compared to the Global Mirror intercluster link capacity:

– The background copy I/Os can back up on the Global Mirror intercluster link.
– There is a delay in the synchronous secondary writes of foreground I/Os.
– The foreground I/O latency increases as perceived by applications.

► If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.

► If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os.

To set the background copy bandwidth optimally, make sure that you consider all three resources: the primary storage, the intercluster link bandwidth, and the secondary storage. Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. Perform this provisioning by calculation or, alternatively, by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then reduce the background copy to accommodate peaks in workload and an additional safety margin.

svctask chpartnership

To change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.
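For example (the cluster name and value are illustrative), lowering the background copy bandwidth to 30 MBps:

IBM_2145:ITSO_SVC_1:admin>svctask chpartnership -bandwidth 30 ITSO_SVC_2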



6.12.3 Creating a Global Mirror consistency group

To create a Global Mirror consistency group, use the svctask mkrcconsistgrp command.

svctask mkrcconsistgrp

Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror consistency group.

The Global Mirror consistency group name must be unique across all consistency groups that are known to the clusters owning this consistency group. If the consistency group involves two clusters, the clusters must be in communication throughout the creation process.

The new consistency group does not contain any relationships and will be in the Empty state. You can add Global Mirror relationships to the group, either upon creation or afterward, by using the svctask chrcrelationship command.
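As a minimal sketch (the group and cluster names are ours, and the response line is indicative only), an empty group that spans the local cluster and a remote cluster might be created this way:

IBM_2145:ITSO_SVC_1:admin>svctask mkrcconsistgrp -name CG_GM_ITSO -cluster ITSO_SVC_2
RC Consistency Group, id [255], successfully created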

6.12.4 Creating a Global Mirror relationship

To create a Global Mirror relationship, use the svctask mkrcrelationship command.

Optional parameter: If you do not use the -global optional parameter, a Metro Mirror relationship will be created instead of a Global Mirror relationship.

svctask mkrcrelationship

Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This relationship persists until it is deleted.

The auxiliary VDisk must be equal in size to the master VDisk or the command will fail. If both VDisks are in the same cluster, they must both be in the same I/O Group. The master and auxiliary VDisks cannot be in an existing relationship, and they cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful.

When creating the Global Mirror relationship, you can add it to a consistency group that already exists, or it can be a stand-alone Global Mirror relationship if no consistency group is specified.

To check whether the master or auxiliary VDisks comply with the prerequisites to participate in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as shown in "svcinfo lsrcrelationshipcandidate" on page 334.

svcinfo lsrcrelationshipcandidate

Use the svcinfo lsrcrelationshipcandidate command to list the available VDisks that are eligible to form a Global Mirror relationship.

When issuing the command, you can specify the master VDisk name and auxiliary cluster to list candidates that comply with the prerequisites to create a Global Mirror relationship. If the command is issued with no parameters, all VDisks that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
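As a hypothetical sequence (the VDisk, relationship, group, and cluster names are invented for this sketch), we list the candidates and then create a Global Mirror relationship inside the existing group; without the -global parameter, this command creates a Metro Mirror relationship instead:

IBM_2145:ITSO_SVC_1:admin>svcinfo lsrcrelationshipcandidate -master GM_DB_Pri -aux ITSO_SVC_2
IBM_2145:ITSO_SVC_1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO_SVC_2 -consistgrp CG_GM_ITSO -name GM_REL_1 -global
RC Relationship, id [10], successfully created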

6.12.5 Changing a Global Mirror relationship

To modify the properties of a Global Mirror relationship, use the svctask chrcrelationship command.



svctask chrcrelationship

Use the svctask chrcrelationship command to modify the following properties of a Global Mirror relationship:

► Change the name of a Global Mirror relationship.
► Add a relationship to a group.
► Remove a relationship from a group by using the -force flag.

Adding a Global Mirror relationship: When adding a Global Mirror relationship to a consistency group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.
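For illustration (the names are ours, and we assume the -consistgrp parameter form here; see the CLI guide for the authoritative syntax), a stand-alone relationship might be added to an existing group as follows:

IBM_2145:ITSO_SVC_1:admin>svctask chrcrelationship -consistgrp CG_GM_ITSO GM_REL_1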

6.12.6 Changing a Global Mirror consistency group

To change the name of a Global Mirror consistency group, use the following command.

svctask chrcconsistgrp

Use the svctask chrcconsistgrp command to change the name of a Global Mirror consistency group.
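For example (both group names are illustrative), renaming a group:

IBM_2145:ITSO_SVC_1:admin>svctask chrcconsistgrp -name CG_GM_NEW CG_GM_ITSO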

6.12.7 Starting a Global Mirror relationship

To start a stand-alone Global Mirror relationship, use the following command.

svctask startrcrelationship

Use the svctask startrcrelationship command to start the copy process of a Global Mirror relationship.

When issuing the command, you can set the copy direction if it is undefined, and, optionally, you can mark the secondary VDisk of the relationship as clean. The command fails if it is used as an attempt to start a relationship that is already a part of a consistency group.

You can only issue this command to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force parameter when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original primary of the relationship. The use of the -force parameter here is a reminder that the data on the secondary will become inconsistent while resynchronization (background copying) takes place and, therefore, is unusable for DR purposes before the background copy has completed.

In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other connected states, you can provide the primary argument, but it must match the existing setting.
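As a sketch (the relationship name is ours), starting a stand-alone relationship with the master VDisk as the primary; the -force parameter is needed only when consistency will be lost, as described above:

IBM_2145:ITSO_SVC_1:admin>svctask startrcrelationship -primary master -force GM_REL_1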

6.12.8 Stopping a Global Mirror relationship

To stop a stand-alone Global Mirror relationship, use the svctask stoprcrelationship command.



svctask stoprcrelationship

Use the svctask stoprcrelationship command to stop the copy process for a relationship. You can also use this command to enable write access to a consistent secondary VDisk by specifying the -access parameter.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can issue this command to stop a relationship that is copying from primary to secondary.

If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized state, this command causes a Consistency Freeze.

When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcrelationship command to enable write access to the secondary VDisk.
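For example (the relationship name is illustrative), stopping a consistent stand-alone relationship and enabling write access to its secondary VDisk:

IBM_2145:ITSO_SVC_1:admin>svctask stoprcrelationship -access GM_REL_1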

6.12.9 Starting a Global Mirror consistency group

To start a Global Mirror consistency group, use the svctask startrcconsistgrp command.

svctask startrcconsistgrp

Use the svctask startrcconsistgrp command to start a Global Mirror consistency group. You can only issue this command to a consistency group that is connected.

For a consistency group that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.
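For example (the group name is illustrative), starting an idling group with the master VDisks as the primaries:

IBM_2145:ITSO_SVC_1:admin>svctask startrcconsistgrp -primary master CG_GM_ITSO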

6.12.10 Stopping a Global Mirror consistency group

To stop a Global Mirror consistency group, use the svctask stoprcconsistgrp command.

svctask stoprcconsistgrp

Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror consistency group. You can also use this command to enable write access to the secondary VDisks in the group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the primary to the secondary VDisks that belong to the relationships in the group. For a consistency group in the ConsistentSynchronized state, this command causes a Consistency Freeze.

When a consistency group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcconsistgrp command to enable write access to the secondary VDisks within that group.
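For example (the group name is illustrative), stopping the group and enabling write access to its secondary VDisks:

IBM_2145:ITSO_SVC_1:admin>svctask stoprcconsistgrp -access CG_GM_ITSO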

6.12.11 Deleting a Global Mirror relationship

To delete a Global Mirror relationship, use the svctask rmrcrelationship command.



svctask rmrcrelationship

Use the svctask rmrcrelationship command to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two VDisks. It does not affect the VDisks themselves.

If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, the relationship is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters.

A relationship cannot be deleted if it is part of a consistency group. You must first remove the relationship from the consistency group.

If you delete an inconsistent relationship, the secondary VDisk becomes accessible even though it is still inconsistent. This situation is the one case in which Global Mirror does not inhibit access to inconsistent data.
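For example (the relationship name is illustrative):

IBM_2145:ITSO_SVC_1:admin>svctask rmrcrelationship GM_REL_1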

6.12.12 Deleting a Global Mirror consistency group

To delete a Global Mirror consistency group, use the svctask rmrcconsistgrp command.

svctask rmrcconsistgrp

Use the svctask rmrcconsistgrp command to delete a Global Mirror consistency group. This command deletes the specified consistency group. You can issue this command for any existing consistency group.

If the consistency group is disconnected at the time that the command is issued, the consistency group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the consistency group is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the consistency group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters.

If the consistency group is not empty, the relationships within it are removed from the consistency group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the consistency group.
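For example (the group name is illustrative):

IBM_2145:ITSO_SVC_1:admin>svctask rmrcconsistgrp CG_GM_ITSO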

6.12.13 Reversing a Global Mirror relationship

To reverse a Global Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship

Use the svctask switchrcrelationship command to reverse the roles of the primary VDisk and the secondary VDisk when a stand-alone relationship is in a consistent state. When issuing the command, you must specify the desired primary.
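For example (the relationship name is illustrative), making the auxiliary VDisk the primary:

IBM_2145:ITSO_SVC_1:admin>svctask switchrcrelationship -primary aux GM_REL_1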

6.12.14 Reversing a Global Mirror consistency group

To reverse a Global Mirror consistency group, use the svctask switchrcconsistgrp command.



svctask switchrcconsistgrp

Use the svctask switchrcconsistgrp command to reverse the roles of the primary VDisks and the secondary VDisks when a consistency group is in a consistent state. This change is applied to all of the relationships in the consistency group, and when issuing the command, you must specify the desired primary.
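For example (the group name is illustrative), making the auxiliary VDisks the primaries for all relationships in the group:

IBM_2145:ITSO_SVC_1:admin>svctask switchrcconsistgrp -primary aux CG_GM_ITSO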



Chapter 7. SAN Volume Controller operations using the command-line interface

In this chapter, we describe operational management. We use the command-line interface (CLI) to demonstrate normal operation and then advanced operation.

You can use either the CLI or the GUI to manage IBM System Storage SAN Volume Controller (SVC) operations. We prefer to use the CLI in this chapter. You might want to script these operations, and we think that it is easier to create the documentation for the scripts by using the CLI.

This chapter assumes a fully functional SVC environment.



7.1 Normal operations using CLI

In the following topics, we describe the commands that best represent normal operational commands.

7.1.1 Command syntax and online help

Two major command sets are available:

► The svcinfo command set allows us to query the various components within the SVC environment.

► The svctask command set allows us to make changes to the various components within the SVC.

When the command syntax is shown, you will see certain parameters in square brackets, for example, [parameter], indicating that the parameter is optional in most, if not all, instances. Any information that is not in square brackets is required. You can view the syntax of a command by entering one of the following commands:

► svcinfo -?: Shows a complete list of information commands.

► svctask -?: Shows a complete list of task commands.

► svcinfo commandname -?: Shows the syntax of information commands.

► svctask commandname -?: Shows the syntax of task commands.

► svcinfo commandname -filtervalue?: Shows the filters that you can use to reduce the output of the information commands.

Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask commandname -h command.

If you look at the syntax of a command by typing svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
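As an illustrative sketch (the VDisk naming pattern is hypothetical), a filter can reduce the lsvdisk output to objects whose names match a pattern; the trailing asterisk acts as a wildcard:

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue 'name=vdisk*'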

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, backspace, and delete keys to edit commands before you resubmit them.

7.2 Working with managed disks and disk controller systems

This section details the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment and the tasks that you can perform at a disk controller level.

7.2.1 Viewing disk controller details

Use the svcinfo lscontroller command to display summary information about all available back-end storage systems. To display more detailed information about a specific controller, run the command again and append the controller name or ID parameter (for example, controller ID 0), as shown in Example 7-1 on page 341.



Example 7-1 svcinfo lscontroller command

IBM_2145:ITSO_SVC_4:admin>svcinfo lscontroller 0
id 0
controller_name ITSO_XIV_01
WWNN 50017380022C0000
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id IBM
product_id_low 2810XIV
product_id_high LUN-0
product_revision 10.1
ctrl_s/n
allow_quorum yes
WWPN 50017380022C0170
path_count 2
max_path_count 4
WWPN 50017380022C0180
path_count 2
max_path_count 2
WWPN 50017380022C0190
path_count 4
max_path_count 6
WWPN 50017380022C0182
path_count 4
max_path_count 12
WWPN 50017380022C0192
path_count 4
max_path_count 6
WWPN 50017380022C0172
path_count 4
max_path_count 6

7.2.2 Renaming a controller

Use the svctask chcontroller command to change the name of a storage controller. To verify the change, run the svcinfo lscontroller command. Example 7-2 shows both of these commands.

Example 7-2 svctask chcontroller command

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name DS4500 controller0
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller -delim ,
id,controller_name,ctrl_s/n,vendor_id,product_id_low,product_id_high
0,DS4500,,IBM ,1742-900,
1,DS4700,,IBM ,1814 , FAStT

This command renames the controller named controller0 to DS4500.

Choosing a new name: The chcontroller command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 15 characters in length. However, the new name cannot start with a number, dash, or the word "controller" (because this prefix is reserved for SVC assignment only).



7.2.3 Discovery status

Use the svcinfo lsdiscoverystatus command, as shown in Example 7-3, to determine whether a discovery operation is in progress. The output of this command is a status of active or inactive.

Example 7-3 lsdiscoverystatus command

IBM_2145:ITSO-CLS1:admin>svcinfo lsdiscoverystatus
status
inactive

7.2.4 Discovering MDisks

In general, the cluster detects the MDisks automatically when they appear in the network. However, certain Fibre Channel (FC) controllers do not send the required Small Computer System Interface (SCSI) primitives that are necessary to automatically discover the new MDisks.

If new storage has been attached and the cluster has not detected it, it might be necessary to run a manual discovery before the cluster can detect the new MDisks. Use the svctask detectmdisk command to scan for newly added MDisks (Example 7-4).

Example 7-4 svctask detectmdisk

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

To check whether any newly added MDisks were successfully detected, run the svcinfo lsmdisk command and look for new unmanaged MDisks. If the disks do not appear, check that each disk is appropriately assigned to the SVC in the disk subsystem and that the zones are set up properly.

Note: If you have assigned a large number of logical unit numbers (LUNs) to your SVC, the discovery process can take time. Check several times, by using the svcinfo lsmdisk command, that all of the MDisks that you were expecting are present.

When all of the disks that are allocated to the SVC are visible from the SVC cluster, the following procedure is a good way to verify which MDisks are unmanaged and ready to be added to a Managed Disk Group (MDG).

Perform the following steps to display MDisks:

1. Enter the svcinfo lsmdiskcandidate command, as shown in Example 7-5. This command displays all detected MDisks that are not currently part of an MDG.

Example 7-5 svcinfo lsmdiskcandidate command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskcandidate
id
0
1
2
.
.



Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo lsmdisk command, as shown in Example 7-6.

Example 7-6 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,unmanaged,,,36.0GB,0000000000000000,controller0,600a0b8000174431000000eb47139cca00000000000000000000000000000000
1,mdisk1,online,unmanaged,,,36.0GB,0000000000000001,controller0,600a0b8000174431000000ef47139e1c00000000000000000000000000000000
2,mdisk2,online,unmanaged,,,36.0GB,0000000000000002,controller0,600a0b8000174431000000f147139e7200000000000000000000000000000000
3,mdisk3,online,unmanaged,,,36.0GB,0000000000000003,controller0,600a0b8000174431000000e44713575400000000000000000000000000000000
4,mdisk4,online,unmanaged,,,36.0GB,0000000000000004,controller0,600a0b8000174431000000e64713576000000000000000000000000000000000
5,mdisk5,online,unmanaged,,,36.0GB,0000000000000000,controller1,600a0b800026b28200003ea34851577c00000000000000000000000000000000
6,mdisk6,online,unmanaged,,,36.0GB,0000000000000005,controller0,600a0b8000174431000000e747139cb600000000000000000000000000000000
7,mdisk7,online,unmanaged,,,36.0GB,0000000000000001,controller1,600a0b80002904de00004188485157a400000000000000000000000000000000
8,mdisk8,online,unmanaged,,,36.0GB,0000000000000006,controller0,600a0b8000174431000000ea47139cc400000000000000000000000000000000

From this output, you can see additional information about each MDisk (such as the current status). For the purpose of our current task, we are only interested in the unmanaged disks, because they are candidates for MDGs (all MDisks, in our case).

Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.

2. If not all of the MDisks that you expected are visible, rescan the available FC network by entering the svctask detectmdisk command, as shown in Example 7-7.

Example 7-7 svctask detectmdisk

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are still not visible, check that the LUNs from your subsystem have been properly assigned to the SVC and that appropriate zoning is in place (for example, the SVC can see the disk subsystem). See Chapter 3, "Planning and configuration" on page 65 for details about setting up your storage area network (SAN) fabric.

7.2.5 Viewing MDisk information

When viewing information about the MDisks (managed or unmanaged), we can use the svcinfo lsmdisk command to display overall summary information about all available managed disks. To display more detailed information about a specific MDisk, run the command again and append the MDisk name parameter (for example, mdisk0).

The overview command is svcinfo lsmdisk -delim, as shown in Example 7-8 on page 344. The summary for an individual MDisk is svcinfo lsmdisk followed by the name or ID of the MDisk from which you want the information, as shown in Example 7-9 on page 344.



Example 7-8 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b8000486a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS47,16.0GB,0000000000000002,controller0,600a0b80004858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS47,16.0GB,0000000000000003,controller0,600a0b80004858a000000e154a895db000000000000000000000000000000000

Example 7-9 shows a summary for a single MDisk.

Example 7-9 Usage of the command svcinfo lsmdisk (ID)

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 2
id 2
name mdisk2
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 16.0GB
quorum_index 0
block_size 512
controller_name controller0
ctrl_type 4
ctrl_WWNN 200600A0B84858A0
controller_id 0
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000002
UID 600a0b80004858a000000e144a895d9400000000000000000000000000000000
preferred_WWPN 200600A0B84858A2
active_WWPN 200600A0B84858A2

7.2.6 Renaming an MDisk

Use the svctask chmdisk command to change the name of an MDisk. When using the command, be aware that the new name comes first, followed by the ID or name of the MDisk that is being renamed. Use this format: svctask chmdisk -name (new name) (current ID/name). Use the svcinfo lsmdisk command to verify the change. Example 7-10 shows both of these commands.

Example 7-10 svctask chmdisk command

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdisk_6 mdisk6
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
6,mdisk_6,online,managed,0,MDG_DS45,36.0GB,0000000000000005,DS4500,600a0b8000174431000000e747139cb600000000000000000000000000000000



This command renamed the MDisk named mdisk6 to mdisk_6.

The chmdisk command: The chmdisk command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 15 characters in length. However, the new name cannot start with a number, dash, or the word "mdisk" (because this prefix is reserved for SVC assignment only).

7.2.7 Including an MDisk

If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These
errors can result from a hardware problem, a SAN problem, or poorly planned maintenance.
If it is a hardware fault, you receive Simple Network Management Protocol (SNMP) alerts
about the state of the disk subsystem (before the disk was excluded), and you can undertake
preventive maintenance. If not, the hosts that were using virtual disks (VDisks) with extents on
the excluded MDisk now have I/O errors.

By running the svcinfo lsmdisk command, you can see that mdisk9 is excluded in
Example 7-11.

Example 7-11 svcinfo lsmdisk command: Excluded MDisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431000000ea47139cc400000000000000000000000000000000
9,mdisk9,excluded,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b28200003ed6485157b600000000000000000000000000000000

After taking the necessary corrective action to repair the MDisk (for example, replacing the
failed disk or repairing the SAN zones), we need to include the MDisk again by issuing the
svctask includemdisk command (Example 7-12), because the SVC cluster does not
include the MDisk automatically.

Example 7-12 svctask includemdisk

IBM_2145:ITSO-CLS1:admin>svctask includemdisk mdisk9

Running the svcinfo lsmdisk command again shows mdisk9 online again, as shown in
Example 7-13.

Example 7-13 svcinfo lsmdisk command: Verifying that the MDisk is included

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431000000ea47139cc400000000000000000000000000000000
9,mdisk9,online,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b28200003ed6485157b600000000000000000000000000000000

7.2.8 Adding MDisks to a managed disk group

If you have created an empty MDG, or you simply want to assign additional MDisks to an
already configured MDG, you can use the svctask addmdisk command to populate the MDG
(Example 7-14).

Example 7-14 svctask addmdisk command

IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk mdisk6 MDG_DS45

You can only add unmanaged MDisks to an MDG. This command adds the MDisk named
mdisk6 to the MDG named MDG_DS45.
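If several unmanaged MDisks need to be added at one time, the svctask addmdisk command
also accepts a colon-separated list, in the same way as the mkmdiskgrp -mdisk parameter
that we show in Example 7-19. A sketch with illustrative MDisk names (not run in this
scenario):

svctask addmdisk -mdisk mdisk7:mdisk8 MDG_DS45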

Important: Do not add this MDisk to an MDG if you want to create an image mode VDisk
from the MDisk that you are adding. As soon as you add an MDisk to an MDG, it becomes
managed, and extent mapping is not necessarily one-to-one anymore.

7.2.9 Showing the Managed Disk Group

Use the svcinfo lsmdiskgrp command to display information about the MDG to which an
MDisk belongs, as shown in Example 7-15.

Example 7-15 svcinfo lsmdiskgrp command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,4,468.0GB,512,355.0GB,140.00GB,100.00GB,112.00GB,29,0
1,MDG_DS47,online,8,3,288.0GB,512,217.5GB,120.00GB,20.00GB,70.00GB,41,0

7.2.10 Showing MDisks in a managed disk group

Use the svcinfo lsmdisk -filtervalue command, as shown in Example 7-16, to see which
MDisks are part of a specific MDG. This command shows all of the MDisks that are part of the
MDG named MDG2.

Example 7-16 svcinfo lsmdisk -filtervalue: MDisks in an MDG

IBM_2145:ITSOSVC42A:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=MDG2 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
6:mdisk6:online:managed:2:MDG2:3.0GB:0000000000000006:DS4000:600a0b800017423300000044465c0a2700000000000000000000000000000000
7:mdisk7:online:managed:2:MDG2:6.0GB:0000000000000007:DS4000:600a0b80001744310000006f465bf93200000000000000000000000000000000
21:mdisk21:online:image:2:MDG2:2.0GB:0000000000000015:DS4000:600a0b8000174431000000874664018600000000000000000000000000000000
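The -filtervalue parameter works in the same way with other attributes of the listing. For
example, a command of the following form (a sketch only; we did not run it in this scenario)
lists only the MDisks that are still unmanaged:

svcinfo lsmdisk -filtervalue mode=unmanaged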

7.2.11 Working with Managed Disk Groups

Before we can create any volumes on the SVC cluster, we need to virtualize the allocated
storage that is assigned to the SVC. After the back-end volumes have been assigned to the
SVC as managed disks, we cannot start using them until they are members of an MDG.
Therefore, one of our first operations is to create an MDG where we can place our MDisks.

This section describes the operations that use MDisks and MDGs. It explains the tasks that
we can perform at an MDG level.

7.2.12 Creating a managed disk group

After successfully logging in to the CLI of the SVC, we create the MDG.

Using the svctask mkmdiskgrp command, create an MDG, as shown in Example 7-17.

Example 7-17 svctask mkmdiskgrp

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512
MDisk Group, id [0], successfully created

This command creates an MDG called MDG_DS47. The extent size that is used within this
group is 512 MB, which is the most commonly used extent size.

We have not added any MDisks to the MDG yet, so it is an empty MDG.

There is a way to add unmanaged MDisks and create the MDG in the same command. Using
the svctask mkmdiskgrp command with the -mdisk parameter and entering the IDs or names
of the MDisks adds the MDisks immediately after the MDG is created.

So, prior to the creation of the MDG, enter the svcinfo lsmdisk command, as shown in
Example 7-18, where we list all of the available MDisks that are seen by the SVC cluster.

Example 7-18 Listing available MDisks

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,unmanaged,,,16.0GB,0000000000000000,controller0,600a0b8000486a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004858a000000e154a895db000000000000000000000000000000000

Using the same command as before (svctask mkmdiskgrp) and knowing the MDisk IDs that
we are using, we can add multiple MDisks to the MDG at the same time. We now add the
unmanaged MDisks, as shown in Example 7-18, to the MDG that we created, as shown in
Example 7-19.

Example 7-19 Creating an MDG and adding available MDisks

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512 -mdisk 0:1
MDisk Group, id [0], successfully created

This command creates an MDG called MDG_DS47. The extent size that is used within this
group is 512 MB, and two MDisks (0 and 1) are added to the group.

MDG name: The -name and -mdisk parameters are optional. If you do not enter a -name,
the default is MDiskgrpx, where x is the ID sequence number that is assigned by the SVC
internally. If you do not enter the -mdisk parameter, an empty MDG is created.

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the
underscore. The name can be between one and 15 characters in length, but it cannot start
with a number or the word “mdiskgrp” (because this prefix is reserved for SVC assignment
only).

By running the svcinfo lsmdisk command, you now see the MDisks as “managed” and as
part of MDG_DS47, as shown in Example 7-20.

Example 7-20 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b8000486a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,managed,0,MDG_DS47,16.0GB,0000000000000001,controller0,600a0b80004858a000000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004858a000000e154a895db000000000000000000000000000000000

You have completed the tasks that are required to create an MDG.

7.2.13 Viewing Managed Disk Group information

Use the svcinfo lsmdiskgrp command, as shown in Example 7-21, to display information
about the MDGs that are defined in the SVC.

Example 7-21 svcinfo lsmdiskgrp command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0

7.2.14 Renaming a managed disk group

Use the svctask chmdiskgrp command to change the name of an MDG. To verify the
change, run the svcinfo lsmdiskgrp command. Example 7-22 shows both of these
commands.

Example 7-22 svctask chmdiskgrp command

IBM_2145:ITSO-CLS1:admin>svctask chmdiskgrp -name MDG_DS81 MDG_DS83
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0
2,MDG_DS81,online,0,0,0,512,0,0.00MB,0.00MB,0.00MB,0,85

This command renamed the MDG from MDG_DS83 to MDG_DS81.

Changing the MDG name: The chmdiskgrp command specifies the new name first. You
can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new
name can be between one and 15 characters in length. However, the new name cannot
start with a number, dash, or the word “mdiskgrp” (because this prefix is reserved for SVC
assignment only).

7.2.15 Deleting a managed disk group

Use the svctask rmmdiskgrp command to remove an MDG from the SVC cluster
configuration (Example 7-23).

Example 7-23 svctask rmmdiskgrp

IBM_2145:ITSO-CLS1:admin>svctask rmmdiskgrp MDG_DS81
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0

This command removes MDG_DS81 from the SVC cluster configuration.

Removing an MDG from the SVC cluster configuration: If there are MDisks within the
MDG, you must use the -force flag to remove the MDG from the SVC cluster configuration,
for example:

svctask rmmdiskgrp MDG_DS81 -force

Ensure that you definitely want to use this flag, because it destroys all mapping information
and data held on the VDisks, which cannot be recovered.

7.2.16 Removing MDisks from a managed disk group

Use the svctask rmmdisk command to remove an MDisk from an MDG (Example 7-24).

Example 7-24 svctask rmmdisk command

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk 6 -force MDG_DS45

This command removes the MDisk called mdisk6 from the MDG named MDG_DS45. The
-force flag is set, because there are VDisks using this MDG.

Sufficient space: The removal only takes place if there is sufficient space to migrate the
VDisk data to other extents on other MDisks that remain in the MDG. After you remove the
MDisk from the MDG, it takes time for the MDisk to change its mode from managed to
unmanaged.

7.3 Working with hosts

This section explains the tasks that can be performed at a host level.

When we create a host in our SVC cluster, we need to define the connection method. Starting
with SVC 5.1, we can define our host as iSCSI-attached or FC-attached, and we describe
these connection methods in detail in Chapter 2, “IBM System Storage SAN Volume
Controller” on page 7.

7.3.1 Creating a Fibre Channel-attached host

We show creating an FC-attached host under various circumstances in the following sections.

Host is powered on, connected, and zoned to the SVC

When you create your host on the SVC, it is good practice to check whether the host bus
adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By doing
that, you ensure that zoning is done and that the correct WWPNs will be used. Issue the
svcinfo lshbaportcandidate command, as shown in Example 7-25.

Example 7-25 svcinfo lshbaportcandidate command

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA

After you verify that the displayed WWPNs match your host (use host or SAN switch
utilities to verify), use the svctask mkhost command to create your host.

Name: If you do not provide the -name parameter, the SVC automatically generates the
name hostx (where x is the ID sequence number that is assigned by the SVC internally).
You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the
underscore (_). The name can be between one and 15 characters in length. However, the
name cannot start with a number, dash, or the word “host” (because this prefix is reserved
for SVC assignment only).

The command to create a host is shown in Example 7-26.

Example 7-26 svctask mkhost

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B89C1CD:210000E08B054CAA
Host, id [0], successfully created

This command creates a host called Palau using WWPNs 21:00:00:E0:8B:89:C1:CD and
21:00:00:E0:8B:05:4C:AA.

Ports: You can define from one to eight ports per host, or you can use the addhostport
command, which we show in 7.3.5, “Adding ports to a defined host” on page 354.

Host is not powered on or not connected to the SAN

If you want to create a host on the SVC without seeing your target WWPN by using the
svcinfo lshbaportcandidate command, add the -force flag to your mkhost command, as
shown in Example 7-27. This option is more prone to human error than choosing the
WWPN from a list, but it is typically used when many host definitions are created at the same
time, such as through a script.

In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create
the host, regardless of whether they are connected, as shown in Example 7-27.

Example 7-27 mkhost -force

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Guinea -hbawwpn 210000E08B89C1DC -force
Host, id [4], successfully created

This command forces the creation of a host called Guinea using WWPN 210000E08B89C1DC.

Note: WWPNs are not case sensitive in the CLI.

If you run the svcinfo lshost command again, you now see your host named Guinea under
host ID 4.

7.3.2 Creating an iSCSI-attached host

Now, we can create host definitions for a host that is not connected to the SAN but that has
LAN access to our SVC nodes. Before we create the host definition, we configure our SVC
clusters to use the new iSCSI connection method. We describe additional information about
configuring your nodes to use iSCSI in 7.7.4, “iSCSI configuration” on page 382.

The iSCSI functionality allows the host to access volumes through the SVC without being
attached to the SAN. Back-end storage and node-to-node communication still need the FC
network to communicate, but the host does not necessarily need to be connected to the SAN.

When we create a host that is going to use iSCSI as a communication method, iSCSI initiator
software must be installed on the host to initiate the communication between the SVC and the
host. This installation creates an iSCSI qualified name (IQN) identifier that is needed before
we create our host.

Before we start, we check our server’s IQN address. We are running Windows Server 2008.
We select Start → Programs → Administrative Tools → iSCSI Initiator. In our
example, our IQN, as shown in Figure 7-1 on page 352, is:

iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com

Figure 7-1 IQN from the iSCSI initiator tool

We create the host by issuing the mkhost command, as shown in Example 7-28. When the
command completes successfully, we display our newly created host.

It is important to know that when the host is initially configured, the default authentication
method is set to no authentication, and no Challenge Handshake Authentication Protocol
(CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SVC
cluster, use the svctask chhost command with the chapsecret parameter.
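A command of the following form sets the secret (a sketch only; the secret string is a
placeholder, and Baldur is the host that we create in Example 7-28):

svctask chhost -chapsecret mysecret Baldur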

Example 7-28 mkhost command

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -iogrp 0 -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline

We have now created our host definition. We map a VDisk to our new iSCSI server, as shown
in Example 7-29. We have already created the VDisk, as shown in 7.4.1, “Creating a VDisk”
on page 356. In our scenario, our VDisk has ID 21 and the host name is Baldur. We map it to
our iSCSI host.

Example 7-29 Mapping a VDisk to the iSCSI host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Baldur 21
Virtual Disk to Host map, id [0], successfully created

After the VDisk has been mapped to the host, we display the host information again, as
shown in Example 7-30.

Example 7-30 svcinfo lshost

IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 1
state online

Note: FC hosts and iSCSI hosts are handled in the same way operationally after they have
been created.

If you need to display the CHAP secret for an already defined server, use the svcinfo
lsiscsiauth command.

7.3.3 Modifying a host

Use the svctask chhost command to change the name of a host. To verify the change, run
the svcinfo lshost command. Example 7-31 shows both of these commands.

Example 7-31 svctask chhost command

IBM_2145:ITSO-CLS1:admin>svctask chhost -name Angola Guinea
IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name port_count iogrp_count
0 Palau 2 4
1 Nile 2 1
2 Kanaga 2 1
3 Siam 2 2
4 Angola 1 4

This command renamed the host from Guinea to Angola.

Note: The chhost command specifies the new name first. You can use letters A to Z and a
to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between
one and 15 characters in length. However, it cannot start with a number, dash, or the word
“host” (because this prefix is reserved for SVC assignment only).

Note: If you use Hewlett-Packard UNIX (HP-UX), you use the -type option. See the IBM
System Storage Open Software Family SAN Volume Controller: Host Attachment Guide,
SC26-7563, for more information about the hosts that require the -type parameter.

7.3.4 Deleting a host

Use the svctask rmhost command to delete a host from the SVC configuration. If your host is
still mapped to VDisks and you use the -force flag, the host and all of its mappings are
deleted. The VDisks are not deleted, only the mappings to them.

The command that is shown in Example 7-32 deletes the host called Angola from the SVC
configuration.

Example 7-32 svctask rmhost Angola

IBM_2145:ITSO-CLS1:admin>svctask rmhost Angola

Deleting a host: If there are any VDisks assigned to the host, you must use the -force flag,
for example: svctask rmhost -force Angola.

7.3.5 Adding ports to a defined host

If you add an HBA or a network interface controller (NIC) to a server that is already defined
within the SVC, you can use the svctask addhostport command to add the new port
definitions to your host configuration.

If your host is currently connected through the SAN with FC and the WWPN is already zoned
to the SVC cluster, issue the svcinfo lshbaportcandidate command, as shown in
Example 7-33, to compare with the information that you have from the server administrator.

Example 7-33 svcinfo lshbaportcandidate

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B054CAA

If the WWPN matches your information (use host or SAN switch utilities to verify), use the
svctask addhostport command to add the port to the host.

Example 7-34 shows the command to add a host port.

Example 7-34 svctask addhostport

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN of 210000E08B054CAA to the Palau host.

Adding multiple ports: You can add multiple ports all at one time by using the colon (:)
separator between WWPNs, for example:

svctask addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau

If the new HBA is not connected or zoned, the svcinfo lshbaportcandidate command does
not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs
and use the -force flag to add the port, as shown in Example 7-35.

Example 7-35 svctask addhostport

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA -force Palau

This command forces the addition of the WWPN named 210000E08B054CAA to the host
called Palau.

WWPNs: WWPNs are not case sensitive within the CLI.

If you run the svcinfo lshost command again, you see your host with an updated port count
of 2 in Example 7-36.

Example 7-36 svcinfo lshost command: Port count

IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name port_count iogrp_count
0 Palau 2 4
1 ITSO_W2008 1 4
2 Thor 3 1
3 Frigg 1 1
4 Baldur 1 1

If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN
ID before you add the port. Unlike FC-attached hosts, you cannot check for available
candidates with iSCSI.

After you have acquired the additional iSCSI IQN, use the svctask addhostport command,
as shown in Example 7-37.

Example 7-37 Adding an iSCSI port to an already configured host

IBM_2145:ITSO-CLS1:admin>svctask addhostport -iscsiname iqn.1991-05.com.microsoft:baldur 4

7.3.6 Deleting ports

If you make a mistake when adding a port, or if you remove an HBA from a server that is
already defined within the SVC, you can use the svctask rmhostport command to remove
WWPN definitions from an existing host.

Before you remove the WWPN, be sure that it is the correct WWPN by issuing the svcinfo
lshost command, as shown in Example 7-38.

Example 7-38 svcinfo lshost command

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline

When you know the WWPN or iSCSI IQN, use the svctask rmhostport command to delete a
host port, as shown in Example 7-39.

Example 7-39 svctask rmhostport

For removing a WWPN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -hbawwpn 210000E08B89C1CD Palau

For removing an iSCSI IQN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

These commands remove the WWPN of 210000E08B89C1CD from the Palau host and the
iSCSI IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.

Removing multiple ports: You can remove multiple ports at one time by using the colon (:)
separator between the port names, for example:

svctask rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola

7.4 Working with VDisks

This section details the various configuration and administration tasks that can be performed
on the VDisks within the SVC environment.

7.4.1 Creating a VDisk

The mkvdisk command creates sequential, striped, or image mode VDisk objects. When they
are mapped to a host object, these objects are seen as disk drives with which the host can
perform I/O operations.

When creating a VDisk, you must enter several parameters at the CLI. There are both
mandatory and optional parameters.

See the full command string and detailed information in the Command-Line Interface User’s
Guide, SC26-7903-05.

Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.

When you are ready to create a VDisk, you must know the following information before you
start creating the VDisk:

► In which MDG the VDisk is going to have its extents
► From which I/O Group the VDisk will be accessed
► The size of the VDisk
► The name of the VDisk

When you are ready to create your striped VDisk, use the svctask mkvdisk command
(we discuss sequential and image mode VDisks later). In Example 7-40 on page 357, this
command creates a 10 GB striped VDisk (ID 0) within the MDG_DS47 MDG and assigns it to
the io_grp0 I/O Group.

Example 7-40 svctask mkvdisk command

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp io_grp0 -size 10 -unit gb -name Tiger
Virtual Disk, id [0], successfully created

To verify the results, you can use the svcinfo lsvdisk command, as shown in Example 7-41.

Example 7-41 svcinfo lsvdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 0
id 0
name Tiger
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F1000000000000000
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00MB
real_capacity 10.00MB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

You have completed the required tasks to create a VDisk.

7.4.2 VDisk information

Use the svcinfo lsvdisk command to display summary information about all VDisks defined
within the SVC environment. To display more detailed information about a specific VDisk, run
the command again and append the VDisk name parameter (for example, VDisk_D).

Example 7-42 shows the summary command.

Example 7-42 svcinfo lsvdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk_A,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,,,60050768018301BF2800000000000008,0,1
1,vdisk_B,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
2,vdisk_C,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000000002,0,1
3,vdisk_D,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000000003,0,1
4,MM_DBLog_Pri,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,4,MMREL2,60050768018301BF2800000000000004,0,1
5,MM_DB_Pri,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,5,MMREL1,60050768018301BF2800000000000005,0,1
6,MM_App_Pri,1,io_grp1,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000006,0,1

7.4.3 Creating a Space-Efficient VDisk

Example 7-43 shows an example of creating a Space-Efficient VDisk. It is important to know
that, in addition to the normal parameters, you must use the following parameters:

► -rsize: This parameter makes the VDisk space-efficient; otherwise, the VDisk is fully
allocated.
► -autoexpand: This parameter specifies that space-efficient copies automatically expand
their real capacities by allocating new extents from their MDG.
► -grainsize: This parameter sets the grain size (KB) for a Space-Efficient VDisk.

Example 7-43 Usage of the command svctask mkvdisk

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 1 -vtype striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [7], successfully created

This command creates a space-efficient 10 GB VDisk. The VDisk belongs to the MDG named
MDG_DS45 and is owned by the io_grp1 I/O Group. The real capacity automatically expands
until the VDisk size of 10 GB is reached. The grain size is set to 32 KB, which is the default.

Disk size: When using the -rsize parameter, you have the following options: disk_size,
disk_size_percentage, and auto.

► Specify the disk_size_percentage value using an integer, or an integer immediately
followed by the percent character (%).
► Specify the units for a disk_size integer using the -unit parameter; the default is MB.
The -rsize value can be greater than, equal to, or less than the size of the VDisk.
► The auto option creates a VDisk copy that uses the entire size of the MDisk; if you
specify the -rsize auto option, you must also specify the -vtype image option.

An entry of 1 GB uses 1,024 MB.
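As an illustration of these options, the following sketches (not run in this scenario; the MDG,
MDisk, and sizes are illustrative only) create a Space-Efficient VDisk with an absolute 2 GB
real capacity, and an image mode VDisk whose copy uses the entire MDisk:

svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 0 -size 10 -unit gb -rsize 2 -autoexpand
svctask mkvdisk -mdiskgrp MDG_Image -iogrp 0 -mdisk mdisk20 -vtype image -rsize auto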

7.4.4 Creating a VDisk in image mode

This virtualization type allows image mode VDisks to be created when an MDisk already has
data on it, perhaps from a pre-virtualized subsystem. When an image mode VDisk is created,
it directly corresponds to the previously unmanaged MDisk from which it was created.
Therefore, with the exception of space-efficient image mode VDisks, VDisk logical block
address (LBA) x equals MDisk LBA x.

You can use this command to bring a non-virtualized disk under the control of the cluster.
After it is under the control of the cluster, you can migrate the VDisk off the single managed
disk. When it is migrated, the VDisk is no longer an image mode VDisk. You can add image
mode VDisks to an MDG that is already populated with other types of VDisks, such as striped
or sequential VDisks.
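Such a migration off the image mode MDisk can be started with the svctask migratevdisk
command. A minimal sketch, assuming an image mode VDisk named Image_Vdisk_A (like
the one created in Example 7-44) and a target MDG with enough free extents:

svctask migratevdisk -vdisk Image_Vdisk_A -mdiskgrp MDG_DS45

The migration runs in the background; when the last extent has been moved, the VDisk type
changes from image to striped.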

Size: An image mode VDisk must be at least 512 bytes (the capacity cannot be 0). That is,
the minimum size that can be specified for an image mode VDisk must be the same as the
extent size of the MDG to which it is added, with a minimum of 16 MB.

You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode VDisk.

Capacity: If you create a mirrored VDisk from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting VDisk is the smaller of the two MDisks, and
the remaining space on the larger MDisk is inaccessible.

If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.

Use <strong>the</strong> svctask mkvdisk command to create an image mode VDisk, as shown in<br />

Example 7-44.<br />

Example 7-44 svctask mkvdisk (image mode)<br />

<strong>IBM</strong>_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Image -iogrp 0 -mdisk<br />

mdisk20 -vtype image -name Image_Vdisk_A<br />

Virtual Disk, id [8], successfully created<br />

This command creates an image mode VDisk called Image_Vdisk_A using <strong>the</strong> mdisk20<br />

MDisk. The VDisk belongs to <strong>the</strong> MDG_Image MDG and is owned by <strong>the</strong> io_grp0 I/O Group.<br />

Chapter 7. <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> operations using <strong>the</strong> command-line interface 359


If we run <strong>the</strong> svcinfo lsmdisk command again, notice that mdisk20 now has a status of<br />

image, as shown in Example 7-45.<br />

Example 7-45 svcinfo lsmdisk<br />

<strong>IBM</strong>_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,<br />

id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_nam<br />

e,UID<br />

19,mdisk19,online,managed,1,MDG_DS47,36.0GB,0000000000000006,DS4700,600a0b800026b2<br />

8200003f9f4851588700000000000000000000000000000000<br />

20,mdisk20,online,image,2,MDG_Image,36.0GB,0000000000000007,DS4700,600a0b80002904d<br />

e00004282485158aa00000000000000000000000000000000<br />

7.4.5 Adding a mirrored VDisk copy

You can create a mirrored copy of a VDisk, which keeps a VDisk accessible even when the
MDisk on which it depends has become unavailable. You can create a copy of a VDisk either
on separate MDGs or by creating an image mode copy of the VDisk. Copies increase the
availability of data; however, they are not separate objects. You can only create or change
mirrored copies from the VDisk.

In addition, you can use VDisk Mirroring as an alternative method of migrating VDisks
between MDGs.

For example, if you have a non-mirrored VDisk in one MDG and want to migrate that VDisk to
another MDG, you can add a new copy of the VDisk and specify the second MDG. After the
copies are synchronized, you can delete the copy on the first MDG. The VDisk is migrated to
the second MDG while remaining online during the migration.

To create a mirrored copy of a VDisk, use the svctask addvdiskcopy command. This
command adds a copy of the chosen VDisk to the selected MDG, which changes a
non-mirrored VDisk into a mirrored VDisk.

In the following scenario, we show creating a VDisk copy mirror from one MDG to another
MDG.

As you can see in Example 7-46, the VDisk has one copy, with copy_id 0.

Example 7-46 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C
id 2
name vdisk_C
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
capacity 45.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000002
virtual_disk_throttling (MB) 20
preferred_node_id 3
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32

In Example 7-47, we add the VDisk copy mirror by using the svctask addvdiskcopy
command.

Example 7-47 svctask addvdiskcopy

IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp MDG_DS45 -vtype striped -rsize 20 -autoexpand -grainsize 64 -unit gb vdisk_C
Vdisk [2] copy [1] successfully created

During the synchronization process, you can see the status by using the svcinfo
lsvdisksyncprogress command. As shown in Example 7-48, the first time that the status is
checked, the synchronization progress is at 86%, and the estimated completion time is
19:16:54. The second time that the command is run, the progress is at 100%, and the
synchronization is complete.

Example 7-48 Synchronization

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk_C 1 86 080710191654
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk_C 1 100

As you can see in Example 7-49, the new VDisk copy mirror (copy_id 1) has been added and
can be seen by using the svcinfo lsvdisk command.

Example 7-49 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C
id 2
name vdisk_C
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 45.0GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000002
virtual_disk_throttling (MB) 20
preferred_node_id 3
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.44MB
real_capacity 20.02GB
free_capacity 20.02GB
overallocation 224
autoexpand on
warning 80
grainsize 64

Notice that the VDisk copy mirror (copy_id 1) does not have the same values as the original
VDisk copy. While adding a VDisk copy mirror, you are able to define a mirror with different
parameters from the original VDisk copy. Therefore, you can define a Space-Efficient VDisk
copy mirror for a non-Space-Efficient VDisk copy and vice versa, which is one way to migrate
a non-Space-Efficient VDisk to a Space-Efficient VDisk.

Note: To change the parameters of a VDisk copy mirror, you must delete the VDisk copy
mirror and redefine it with the new values.

7.4.6 Splitting a VDisk Copy

The splitvdiskcopy command creates a new VDisk in the specified I/O Group from a copy of
the specified VDisk. If the copy that you are splitting is not synchronized, you must use the
-force parameter. The command fails if you are attempting to remove the only synchronized
copy. To avoid this failure, wait for the copy to synchronize, or split the unsynchronized copy
from the VDisk by using the -force parameter. You can run the command when either VDisk
copy is offline.

Example 7-50 shows the svctask splitvdiskcopy command, which is used to split a VDisk
copy. It creates a new VDisk, vdisk_N, from copy 1 of vdisk_B.

Example 7-50 Split VDisk

IBM_2145:ITSO-CLS1:admin>svctask splitvdiskcopy -copy 1 -iogrp 0 -name vdisk_N vdisk_B
Virtual Disk, id [2], successfully created

As you can see in Example 7-51, the new VDisk, vdisk_N, has been created as an
independent VDisk.

Example 7-51 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_N
id 2
name vdisk_N
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 100.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF280000000000002F
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 84.75MB
real_capacity 20.10GB
free_capacity 20.01GB
overallocation 497
autoexpand on
warning 80
grainsize 64

The vdisk_B VDisk has now lost its mirrored copy, because that copy has become the new,
independent VDisk.

7.4.7 Modifying a VDisk

Executing the svctask chvdisk command modifies a single property of a VDisk. Only one
property can be modified at a time, so changing the name and modifying the I/O Group
require two invocations of the command.

You can specify a new name or label. The new name can be used subsequently to reference
the VDisk. The I/O Group with which this VDisk is associated can also be changed. Note that
changing the I/O Group requires a flush of the cache within the nodes in the current I/O Group
to ensure that all data is written to disk. I/O must be suspended at the host level before
performing this operation.

New name: The chvdisk command specifies the new name first. The name can consist of
letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). It can be
between one and 15 characters in length. However, it cannot start with a number, the dash,
or the word “vdisk” (because this prefix is reserved for SVC assignment only).
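A minimal sketch of the two separate invocations (the new name and the target I/O Group
here are illustrative only and are not part of the ITSO configuration):

svctask chvdisk -name vdisk_E vdisk_D
svctask chvdisk -iogrp io_grp0 vdisk_E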

Tips: If the VDisk has a mapping to any hosts, it is not possible to move the VDisk to an I/O
Group that does not include any of those hosts.

This operation will fail if there is not enough space to allocate bitmaps for a mirrored VDisk
in the target I/O Group.

If the -force parameter is used and the cluster is unable to destage all write data from the
cache, the contents of the VDisk are corrupted by the loss of the cached data.

If the -force parameter is used to move a VDisk that has out-of-sync copies, a full
resynchronization is required.

7.4.8 I/O governing

You can set a limit on the number of I/O transactions that are accepted for a VDisk. The limit
is set in terms of I/Os per second or MB per second. By default, no I/O governing rate is set
when a VDisk is created.

Base the choice between I/Os and MBs as the I/O governing throttle on the disk access
profile of the application. Database applications generally issue large amounts of I/O, but they
only transfer a relatively small amount of data. In this case, setting an I/O governing throttle
based on MBs per second does not achieve much. It is better to use an I/Os-per-second
throttle.

At the other extreme, a streaming video application generally issues a small amount of I/O,
but transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle based on I/Os per second does not achieve much, so it is better to use an
MBs-per-second throttle.

I/O governing rate: An I/O governing rate of 0 (displayed as throttling in <strong>the</strong> CLI output of<br />

<strong>the</strong> svcinfo lsvdisk command) does not mean that zero I/Os per second (or MBs per<br />

second) can be achieved. It means that no throttle is set.<br />

An example of the chvdisk command is shown in Example 7-52.

Example 7-52 svctask chvdisk (rate/warning Space-Efficient VDisk)

IBM_2145:ITSO-CLS1:admin>svctask chvdisk -rate 20 -unitmb vdisk7
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -warning 85% vdisk7

New name: The chvdisk command specifies the new name first. The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). It can be between one and 15 characters in length. However, it cannot start with a number, the dash, or the word "vdisk" (because this prefix is reserved for SVC assignment only).

The first command changes the VDisk throttling of vdisk7 to 20 MBps, and the second command changes the Space-Efficient VDisk warning threshold to 85%.

If you want to verify the changes, issue the svcinfo lsvdisk command, as shown in Example 7-53.

Example 7-53 svcinfo lsvdisk command: Verifying throttling

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk7
id 7
name vdisk7
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF280000000000000A
virtual_disk_throttling (MB) 20
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 5.02GB
free_capacity 5.02GB
overallocation 199
autoexpand on
warning 85
grainsize 32

7.4.9 Deleting a VDisk

When you execute this command on an existing managed mode VDisk, any data that remained on it is lost. The extents that made up the VDisk are returned to the pool of free extents that are available in the MDG.

If any Remote Copy, FlashCopy, or host mappings still exist for the VDisk, the delete fails unless the -force flag is specified. This flag ensures the deletion of the VDisk and of any VDisk-to-host mappings and copy mappings.

If the VDisk is currently the subject of a migration to image mode, the delete fails unless the -force flag is specified. This flag halts the migration and then deletes the VDisk.

If the command succeeds (without the -force flag) for an image mode disk, the underlying back-end controller logical unit is consistent with the data that a host might previously have read from the image mode VDisk. That is, all fast write data has been flushed to the underlying LUN. If the -force flag is used, there is no such guarantee.

If there is any nondestaged data in the fast write cache for the VDisk, the deletion fails unless the -force flag is specified, in which case any nondestaged data in the fast write cache is discarded.

Use the svctask rmvdisk command to delete a VDisk from your SVC configuration, as shown in Example 7-54.

Example 7-54 svctask rmvdisk

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk vdisk_A

This command deletes the vdisk_A VDisk from the SVC configuration. If the VDisk is assigned to a host, you need to use the -force flag to delete the VDisk (Example 7-55).

Example 7-55 svctask rmvdisk (-force)

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk -force vdisk_A

7.4.10 Expanding a VDisk

Expanding a VDisk presents a larger capacity disk to your operating system. Although this expansion can be performed easily using the SVC, you must ensure that your operating system supports expansion before using this function.

Assuming your operating system supports it, you can use the svctask expandvdisksize command to increase the capacity of a given VDisk. Example 7-56 shows a sample of this command.

Example 7-56 svctask expandvdisksize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb vdisk_C

This command expands the vdisk_C VDisk, which was 35 GB before, by another 5 GB to give it a total size of 40 GB.

To expand a Space-Efficient VDisk, you can use the -rsize option, as shown in Example 7-57 on page 368. This command changes the real size of the vdisk_B VDisk to a real capacity of 55 GB; the capacity of the VDisk itself remains unchanged.

Example 7-57 svctask expandvdisksize -rsize

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 50.00GB
free_capacity 50.00GB
overallocation 200
autoexpand off
warning 40
grainsize 32

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -rsize 5 -unit gb vdisk_B

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 55.00GB
free_capacity 55.00GB
overallocation 181
autoexpand off
warning 40
grainsize 32

Important: If a VDisk is expanded, its type becomes striped, even if it was previously sequential or image mode. If there are not enough extents to expand your VDisk to the specified size, you receive the following error message:

CMMVC5860E Ic_failed_vg_insufficient_virtual_extents

7.4.11 Assigning a VDisk to a host

Use the svctask mkvdiskhostmap command to map a VDisk to a host. When executed, this command creates a new mapping between the VDisk and the specified host, which essentially presents the VDisk to the host as though the disk were directly attached. It is only after this command is executed that the host can perform I/O to the VDisk. Optionally, a SCSI LUN ID can be assigned to the mapping.

When the HBA on the host scans for devices that are attached to it, it discovers all of the VDisks that are mapped to its FC ports. When the devices are found, each one is allocated an identifier (SCSI LUN ID). For example, the first disk found is generally SCSI LUN 1, and so on. You can control the order in which the HBA discovers VDisks by assigning the SCSI LUN ID as required. If you do not specify a SCSI LUN ID, the cluster automatically assigns the next available SCSI LUN ID, given any mappings that already exist with that host.

Using the VDisk and host definitions that we created in the previous sections, we assign VDisks to hosts that are ready for their use. We use the svctask mkvdiskhostmap command (see Example 7-58).

Example 7-58 svctask mkvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_B
Virtual Disk to Host map, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_C
Virtual Disk to Host map, id [1], successfully created

This command assigns vdisk_B and vdisk_C to host Tiger, as shown in Example 7-59.

Example 7-59 svcinfo lshostvdiskmap -delim , command

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim ,
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
1,Tiger,2,1,vdisk_B,210000E08B892BCD,60050768018301BF2800000000000001
1,Tiger,1,2,vdisk_C,210000E08B892BCD,60050768018301BF2800000000000002

Assigning a specific LUN ID to a VDisk: The optional -scsi scsi_num parameter can help assign a specific LUN ID to a VDisk that is to be associated with a given host. The default (if nothing is specified) is to increment based on what is already assigned to the host (see the sketch after this note).

Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs. For example:
► VDisk 1 is mapped to Host 1 with SCSI LUN ID 1.
► VDisk 2 is mapped to Host 1 with SCSI LUN ID 2.
► VDisk 3 is mapped to Host 1 with SCSI LUN ID 4.

When the device driver scans the HBA, it might stop after discovering VDisks 1 and 2, because there is no SCSI LUN mapped with ID 3. Be careful to ensure that the SCSI LUN ID allocation is contiguous.
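To set the LUN ID explicitly rather than letting the cluster pick the next free ID, pass -scsi on the mapping command. In this minimal sketch, the host and VDisk names are illustrative, and the ID simply fills the gap from the example above:

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger -scsi 3 vdisk_D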

It is not possible to map a VDisk to a host more than once at separate LUNs (Example 7-60).

Example 7-60 svctask mkvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Siam vdisk_A
Virtual Disk to Host map, id [0], successfully created

This command maps the VDisk called vdisk_A to the host called Siam.

You have now completed all of the tasks that are required to assign a VDisk to an attached host.

7.4.12 Showing VDisk-to-host mapping

Use the svcinfo lshostvdiskmap command to show which VDisks are assigned to a specific host (Example 7-61 on page 370).

Example 7-61 svcinfo lshostvdiskmap

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,vdisk_A,210000E08B18FF8A,60050768018301BF280000000000000C

From this command, you can see that the host Siam has only one assigned VDisk, called vdisk_A. The SCSI LUN ID is also shown, which is the ID by which the VDisk is presented to the host. If no host is specified, all defined host-to-VDisk mappings are returned.

Specifying the flag before the host name: Although the -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the host name. Otherwise, it returns the following message:

CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or incorrect argument sequence has been detected. Ensure that the input is as per the help.

7.4.13 Deleting a VDisk-to-host mapping

When deleting a VDisk mapping, you are not deleting the VDisk itself, only the connection from the host to the VDisk. If you mapped a VDisk to a host by mistake, or you simply want to reassign the VDisk to another host, use the svctask rmvdiskhostmap command to unmap a VDisk from a host (Example 7-62).

Example 7-62 svctask rmvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Tiger vdisk_D

This command unmaps the VDisk called vdisk_D from the host called Tiger.

7.4.14 Migrating a VDisk

From time to time, you might want to migrate VDisks from one set of MDisks to another set of MDisks to decommission an old disk subsystem, to balance performance better across your virtualized environment, or simply to migrate data into the SVC environment transparently using image mode.

You can obtain further information about migration in Chapter 9, "Data migration" on page 675.

Important: After migration is started, it continues until completion unless it is stopped or suspended by an error condition or unless the VDisk being migrated is deleted.

As you can see from the parameters in Example 7-63, before you can migrate your VDisk, you must know the name of the VDisk that you want to migrate and the name of the MDG to which you want to migrate it. To discover these names, run the svcinfo lsvdisk and svcinfo lsmdiskgrp commands.

When you know these details, you can issue the migratevdisk command, as shown in Example 7-63.

Example 7-63 svctask migratevdisk

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS47 -vdisk vdisk_C

This command moves vdisk_C to MDG_DS47.

Tips: If insufficient extents are available within your target MDG, you receive an error message. Make sure that the source and target MDisk groups have the same extent size.

The optional -threads parameter allows you to assign a priority to the migration process. The default is 4, which is the highest priority setting. However, if you want the process to take a lower priority relative to other types of I/O, you can specify 3, 2, or 1 (see the sketch after these tips).
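The following sketch shows a migration that is deliberately deprioritized. The -threads value is the standard migratevdisk option described above, and the VDisk and MDG names are reused from the previous example:

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS47 -threads 2 -vdisk vdisk_C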

You can run the svcinfo lsmigrate command at any time to see the status of the migration process (Example 7-64).

Example 7-64 svcinfo lsmigrate command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 16
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0

Progress: The progress is given as percent complete. When the command no longer returns any output, the migration process has finished.

7.4.15 Migrating a VDisk to an image mode VDisk

Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path, which might be useful where the SVC is used as a data mover appliance. You can use the svctask migratetoimage command for this purpose.

To migrate a VDisk to an image mode VDisk, the following rules apply:
► The destination MDisk must be greater than or equal to the size of the VDisk.
► The MDisk that is specified as the target must be in an unmanaged state.
► Regardless of the mode in which the VDisk starts, it is reported as managed mode during the migration.
► Both of the MDisks involved are reported as being in image mode during the migration.
► If the migration is interrupted by a cluster recovery or by a cache problem, the migration resumes after the recovery completes.

Example 7-65 shows an example of the command.

Example 7-65 svctask migratetoimage

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk vdisk_A -mdisk mdisk8 -mdiskgrp MDG_Image

In this example, the data from vdisk_A is migrated onto mdisk8, and the MDisk is placed into the MDG_Image MDG.

7.4.16 Shrinking a VDisk

The shrinkvdisksize command reduces the capacity that is allocated to the particular VDisk by the amount that you specify. You cannot shrink the real size of a Space-Efficient VDisk to less than its used size. All capacities, including changes, must be in multiples of 512 bytes. An entire extent is reserved even if it is only partially used. The default capacity units are MBs.

The command can be used to shrink the physical capacity that is allocated to a particular VDisk by the specified amount. The command can also be used to shrink the virtual capacity of a Space-Efficient VDisk without altering the physical capacity that is assigned to the VDisk:
► For a non-Space-Efficient VDisk, use the -size parameter.
► For a Space-Efficient VDisk's real capacity, use the -rsize parameter.
► For a Space-Efficient VDisk's virtual capacity, use the -size parameter.

When the virtual capacity of a Space-Efficient VDisk is changed, the warning threshold is automatically scaled to match. The new threshold is stored as a percentage. (A sketch contrasting the -size and -rsize forms follows.)
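In this hedged sketch, the first command reduces the real capacity of the Space-Efficient vdisk_B by 10 GB, and the second reduces its virtual capacity by 10 GB; the VDisk name is reused from the earlier examples, and the sizes are illustrative:

IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -rsize 10 -unit gb vdisk_B
IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -size 10 -unit gb vdisk_B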

The cluster arbitrarily reduces the capacity of the VDisk by removing a partial extent, one extent, or multiple extents from those extents that are allocated to the VDisk. You cannot control which extents are removed, so you cannot assume that it is unused space that is removed.

Reducing disk size: Image mode VDisks cannot be reduced in size. They must first be migrated to managed mode. To run the shrinkvdisksize command on a mirrored VDisk, all of the copies of the VDisk must be synchronized.

Important:
► If the VDisk contains data, do not shrink the disk.
► Certain operating systems or file systems use what they consider to be the outer edge of the disk for performance reasons. This command can shrink FlashCopy target VDisks to the same capacity as the source.
► Before you shrink a VDisk, validate that the VDisk is not mapped to any host objects. If the VDisk is mapped, data is displayed. You can determine the exact capacity of the source or master VDisk by issuing the svcinfo lsvdisk -bytes vdiskname command. Shrink the VDisk by the required amount by issuing the svctask shrinkvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.

Assuming your operating system supports it, you can use the svctask shrinkvdisksize command to decrease the capacity of a given VDisk. Example 7-66 on page 373 shows an example of this command.

Example 7-66 svctask shrinkvdisksize

IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -size 44 -unit gb vdisk_A

This command shrinks the vdisk_A VDisk from a total size of 80 GB, by 44 GB, to a new total size of 36 GB.

7.4.17 Showing a VDisk on an MDisk

Use the svcinfo lsmdiskmember command to display information about the VDisks that use space on a specific MDisk, as shown in Example 7-67.

Example 7-67 svcinfo lsmdiskmember command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskmember mdisk1
id copy_id
0 0
2 0
3 0
4 0
5 0

This command displays a list of all of the VDisk IDs that correspond to the VDisk copies that use mdisk1.

To correlate the IDs displayed in this output to VDisk names, we can run the svcinfo lsvdisk command, which we discuss in more detail in 7.4, "Working with VDisks" on page 356.

7.4.18 Showing VDisks using a managed disk group

Use the svcinfo lsvdisk -filtervalue command, as shown in Example 7-68, to see which VDisks are part of a specific MDG. This command shows all of the VDisks that are part of the MDG called MDG_DS47.

Example 7-68 svcinfo lsvdisk -filtervalue: VDisks in the MDG

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=MDG_DS47 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1

7.4.19 Showing which MDisks are used by a specific VDisk

Use the svcinfo lsvdiskmember command, as shown in Example 7-69, to show from which MDisks a specific VDisk's extents come.

Example 7-69 svcinfo lsvdiskmember command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember vdisk_D
id
0
1
2
3
4
6
10
11
13
15
16
17

If you want to know more about these MDisks, you can run the svcinfo lsmdisk command, as explained in 7.2, "Working with managed disks and disk controller systems" on page 340 (using the ID displayed in Example 7-69 rather than the name).

7.4.20 Showing from which Managed Disk Group a VDisk has its extents

Use the svcinfo lsvdisk command, as shown in Example 7-70, to show to which MDG a specific VDisk belongs.

Example 7-70 svcinfo lsvdisk command: MDG name

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_D
id 3
name vdisk_D
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 80.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000003
throttling 0
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 80.00GB
real_capacity 80.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

If you want to know more about these MDGs, you can run the svcinfo lsmdiskgrp command, as explained in 7.2.11, "Working with Managed Disk Groups" on page 346.

7.4.21 Showing the host to which the VDisk is mapped

To show the hosts to which a specific VDisk has been assigned, run the svcinfo lsvdiskhostmap command, as shown in Example 7-71.

Example 7-71 svcinfo lsvdiskhostmap command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap -delim , vdisk_B
id,name,SCSI_id,host_id,host_name,wwpn,vdisk_UID
1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001
1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001

This command shows the host or hosts to which the vdisk_B VDisk is mapped. It is normal to see duplicate entries, because there are multiple paths between the cluster and the host. To be sure that the operating system on the host sees the disk only one time, you must install and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the VDisk name. Otherwise, the command does not return any data.

7.4.22 Showing the VDisk to which the host is mapped

To show the VDisks to which a specific host has been assigned, run the svcinfo lshostvdiskmap command, as shown in Example 7-72.

Example 7-72 lshostvdiskmap command example

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006

This command shows which VDisks are mapped to the host called Siam.

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the host name. Otherwise, the command does not return any data.

7.4.23 Tracing a VDisk from a host back to its physical disk

In many cases, you must verify exactly which physical disk is presented to the host, for example, from which MDG a specific volume comes. From the host side, the server administrator cannot see through the GUI on which physical disks the volumes are running; instead, enter the following commands, starting from your multipath command prompt:

1. On your host, run the datapath query device command. You see a long disk serial number for each vpath device, as shown in Example 7-73.

Example 7-73 datapath query device

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path#    Adapter/Hard Disk             State   Mode     Select   Errors
0        Scsi Port2 Bus0/Disk1 Part0   OPEN    NORMAL   20       0
1        Scsi Port3 Bus0/Disk1 Part0   OPEN    NORMAL   2343     0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path#    Adapter/Hard Disk             State   Mode     Select   Errors
0        Scsi Port2 Bus0/Disk2 Part0   OPEN    NORMAL   2335     0
1        Scsi Port3 Bus0/Disk2 Part0   OPEN    NORMAL   0        0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path#    Adapter/Hard Disk             State   Mode     Select   Errors
0        Scsi Port2 Bus0/Disk3 Part0   OPEN    NORMAL   2331     0
1        Scsi Port3 Bus0/Disk3 Part0   OPEN    NORMAL   0        0

State: In Example 7-73, the state of each path is OPEN. Sometimes, you will see the state CLOSED, which does not necessarily indicate a problem, because it might be a result of the path's processing stage.

2. Run the svcinfo lshostvdiskmap command to return a list of all assigned VDisks (Example 7-74).

Example 7-74 svcinfo lshostvdiskmap

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006

Look for the disk serial number that matches your datapath query device output. This host was defined in our SVC as Siam.

3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks that make up the specified VDisk (Example 7-75).

Example 7-75 svcinfo lsvdiskmember

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember MM_DBLog_Pri
id
0
1
2
3
4
10
11
13
15
16
17

4. Query the MDisks with the svcinfo lsmdisk mdiskID command to find their controller and LUN number information, as shown in Example 7-76. The output displays the controller name and the controller LUN ID to help you (provided that you gave your controller a unique name, such as a serial number) track back to a LUN within the disk subsystem.

Example 7-76 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 3
id 3
name mdisk3
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 36.0GB
quorum_index
block_size 512
controller_name DS4500
ctrl_type 4
ctrl_WWNN 200400A0B8174431
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000003
UID 600a0b8000174431000000e44713575400000000000000000000000000000000
preferred_WWPN 200400A0B8174433
active_WWPN 200400A0B8174433

7.5 Scripting under the CLI for SVC task automation

Scripting works well for the automation of regular operational jobs. You can use any available shell to develop scripts. To run scripts on an SVC Console whose operating system is Windows 2000 or higher, you can either purchase licensed shell emulation software or download Cygwin from this Web site:

http://www.cygwin.com

Scripting enhances the productivity of SVC administrators and the integration of their storage virtualization environment. You can create your own customized scripts to automate a large number of tasks for completion at a variety of times and run them through the CLI. We show an example of scripting in Appendix A, "Scripting" on page 785; a small sketch of the general approach follows.

We recommend that, in large SAN environments where scripting with svctask commands is used, you keep the scripting as simple as possible, because fallback, documentation, and verification of a successful script prior to execution are all harder to manage in a large SAN environment.
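As a minimal sketch of the idea (distinct from the Appendix A example), the following shell script lists each VDisk and its MDisk group over SSH. It assumes key-based SSH access as the admin user and a cluster reachable under the illustrative name svccluster; the -delim and -nohdr flags are standard svcinfo options, and the column order matches the lsvdisk output shown in Example 7-78:

#!/bin/sh
# List every VDisk and the MDisk group that it belongs to.
SVC=admin@svccluster
ssh "$SVC" "svcinfo lsvdisk -delim : -nohdr" |
while IFS=: read id name iogrp_id iogrp_name status mdg_id mdg_name rest; do
    echo "VDisk $name (id $id) is in MDisk group $mdg_name"
done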

7.6 SVC advanced operations using the CLI

In the following topics, we describe the commands that we think best represent advanced operational commands.

7.6.1 Command syntax

Two major command sets are available:
► The svcinfo command set allows us to query the various components within the SVC environment.
► The svctask command set allows us to make changes to the various components within the SVC.

When the command syntax is shown, you see several parameters in square brackets, for example, [parameter], which indicates that the parameter is optional in most, if not all, instances. Any parameter that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands:
► svcinfo -?: Shows a complete list of information commands.
► svctask -?: Shows a complete list of task commands.
► svcinfo commandname -?: Shows the syntax of information commands.
► svctask commandname -?: Shows the syntax of task commands.
► svcinfo commandname -filtervalue?: Shows which filters you can use to reduce the output of the information commands.

Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h.

If you look at the syntax of a command by typing svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, backspace, and delete keys to edit commands before you resubmit them.

7.6.2 Organizing window content

Sometimes, the output of a command can be long and difficult to read in the window. In cases where you need information about a subset of the total number of available items, you can use filtering to reduce the output to a more manageable size.

Filtering

To reduce the output that is displayed by an svcinfo command, you can specify a number of filters, depending on which svcinfo command you are running. To see which filters are available, type the command followed by the -filtervalue? flag, as shown in Example 7-77.

Example 7-77 svcinfo lsvdisk -filtervalue? command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue?
Filters for this view are :
name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count

When you know the filters, you can be more selective in generating output (a sketch combining these options follows this list):
► Multiple filters can be combined to create specific searches.
► You can use an asterisk (*) as a wildcard when using names.
► When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.
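The following hedged sketch combines a wildcard name filter with a capacity filter. The attribute values are illustrative, and the colon-separated combination of multiple filter attributes is an assumption to verify against svcinfo lsvdisk -filtervalue? on your own cluster:

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=vdisk*:capacity=10 -u gb -delim ,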

For example, if we issue the svcinfo lsvdisk command with no filters, we see the output that is shown in Example 7-78 on page 380.

Example 7-78 svcinfo lsvdisk command: No filters

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
2,vdisk2,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000000002,0,1
3,vdisk3,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000000003,0,1

Tip: The -delim parameter prevents the output from wrapping over multiple lines in the window by separating the data fields with the delimiter character that you specify. This parameter is normally used in cases where you need to get reports during script execution.

If we now add a filter to our svcinfo command (such as mdisk_grp_name), we can reduce the output, as shown in Example 7-79.

Example 7-79 svcinfo lsvdisk command: With a filter

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=*7 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1

This command shows all of the VDisks where the mdisk_grp_name ends with a 7; the wildcard asterisk character (*) can be used when names are used.

7.7 Managing the cluster using the CLI

In these sections, we show cluster administration tasks.

7.7.1 Viewing cluster properties

Use the svcinfo lscluster command to display summary information about all of the clusters that are configured to the SVC, as shown in Example 7-80 on page 381.

Example 7-80 svcinfo lscluster command

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 20        0000020063E03A38
0000020061006FCA ITSO-CLS2 remote   fully_configured 50        0000020061006FCA

7.7.2 Changing cluster settings

Use the svctask chcluster command to change the settings of the cluster. This command modifies specific features of a cluster, and you can change multiple features by issuing a single command.

If the cluster IP address is changed, the open command-line shell closes during the processing of the command, and you must reconnect to the new IP address. The service IP address is not used until a node is removed from the cluster. If such a node cannot rejoin the cluster, you can bring the node up in service mode; in this mode, the node can be accessed as a stand-alone node by using the service IP address.

All command parameters are optional; however, you must specify at least one parameter (see the sketch after the notes below).

Note: Only a user with administrator authority can change the password. After the cluster IP address is changed, you lose the open shell connection to the cluster. You must reconnect with the newly specified IP address.

Important: Changing the speed on a running cluster breaks I/O service to the attached hosts. Before changing the fabric speed, stop I/O from the active hosts and force these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by removing drive letters (for Windows host types). Specific hosts might need to be rebooted to detect the new fabric speed. The fabric speed setting applies only to the 4F2 and 8F2 model nodes in a cluster; the 8F4 nodes automatically negotiate the fabric speed on a per-port basis.
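As a minimal hedged sketch tied to the fabric speed discussion above, the following invocation sets the fabric speed to 2 Gbps on a cluster of older model nodes; -speed is assumed to be the chcluster parameter for this setting at your code level, and the command must only be run after host I/O has been quiesced as described in the Important box:

IBM_2145:ITSO-CLS1:admin>svctask chcluster -speed 2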

7.7.3 Cluster authentication

An important point with respect to authentication in SVC 5.1 is that the superuser password replaces the previous cluster admin password. The superuser is a member of the Security Admin group. If this password is not known, you can reset it from the cluster front panel.

We describe the authentication method in detail in Chapter 2, "IBM System Storage SAN Volume Controller" on page 7.

Tip: If you do not want the password to display as you enter it on the command line, omit the new password. The command line then prompts you to enter and confirm the password without the password being displayed.

The only authentication setting that can be changed with the chcluster command is the Service account user password, and changing it requires administrative rights. The Service account user password is changed in Example 7-81.

Example 7-81 svctask chcluster -servicepwd (for the Service account)

IBM_2145:ITSO-CLS1:admin>svctask chcluster -servicepwd
Enter a value for -password :
Enter password:
Confirm password:
IBM_2145:ITSO-CLS1:admin>

See 7.10.1, "Managing users using the CLI" on page 394 for more information about managing users.

7.7.4 iSCSI configuration

Starting with SVC 5.1, iSCSI is introduced as a supported method of communication between the SVC and hosts. All back-end storage and intracluster communication still use FC and the SAN, so iSCSI cannot be used for that communication.

In Chapter 2, "IBM System Storage SAN Volume Controller" on page 7, we described in detail how iSCSI works. In this section, we show how to configure our cluster for use with iSCSI. We configure our nodes to use the primary and secondary Ethernet ports for iSCSI; these ports also carry the cluster IP. Configuring our nodes for iSCSI does not affect the cluster IP, which is changed as shown in 7.7.2, "Changing cluster settings" on page 381.

It is important to know that we can have more than a one-to-one relationship between IP addresses and physical connections. Each physical connection, per port per node, can carry up to four addresses, consisting of two IPv4 plus two IPv6 addresses (a 4:1 relationship). We describe this function in Chapter 2, "IBM System Storage SAN Volume Controller" on page 7.

Tip: When reconfiguring IP ports, be aware that iSCSI connections that are already configured will need to reconnect if changes are made to the IP addresses of the nodes.

There are two ways to perform iSCSI authentication (CHAP): for the whole cluster or per host connection. Example 7-82 shows configuring CHAP for the whole cluster; a sketch of the per-host variant follows the example.

Example 7-82 Setting a CHAP secret for the entire cluster to "passw0rd"

IBM_2145:ITSO-CLS1:admin>svctask chcluster -iscsiauthmethod chap -chapsecret passw0rd
IBM_2145:ITSO-CLS1:admin>
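To set the secret per host instead, it is applied to the host object. This is a hedged sketch: it assumes that the svctask chhost command accepts the -chapsecret parameter at your code level, and it reuses the illustrative host name Tiger from the earlier examples:

IBM_2145:ITSO-CLS1:admin>svctask chhost -chapsecret passw0rd Tiger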

In our scenario, we have a cluster IP of 9.64.210.64, which is not affected during our configuration of the nodes' IP addresses.

We start by listing our ports using the svcinfo lsportip command. We can see that we have two ports per node with which to work, and both ports can have two IP addresses that can be used for iSCSI. In our example, we configure the secondary port in both nodes in our I/O Group, as shown in Example 7-83.

Example 7-83 Configuring secondary Ethernet port on SVC nodes

IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2
IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2

While both nodes are online, each node is available to iSCSI hosts on the IP address that we have configured. Because iSCSI failover between nodes is enabled automatically, if a node goes offline for any reason, its partner node in the I/O Group becomes available on the failed node's port IP address, ensuring that hosts can continue to perform I/O. The svcinfo lsportip command displays which port IP addresses are currently active on each node.

7.7.5 Modifying IP addresses

Starting with SVC 5.1, we can use both IP ports of the nodes. However, the first time that you configure a second port, all IP information is required, because port 1 on the cluster must always have one stack fully configured. There are then two active cluster ports on the configuration node. If the cluster IP address is changed, the open command-line shell closes during the processing of the command, and you must reconnect to the new IP address if you were connected through that port.

List the IP addresses of the cluster by issuing the svcinfo lsclusterip command. Modify an IP address by issuing the svctask chclusterip command. You can either specify a static IP address or have the system assign a dynamic IP address, as shown in Example 7-84.

Example 7-84 svctask chclusterip -clusterip

IBM_2145:ITSO-CLS1:admin>svctask chclusterip -clusterip 10.20.133.5 -gw 10.20.135.1 -mask 255.255.255.0 -port 1

This command changes the current IP address of the cluster to 10.20.133.5.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the CLI is broken and the PuTTY application automatically closes. You must relaunch the PuTTY application and point it to the new IP address, but your SSH key will still work.

7.7.6 Supported IP address formats

Table 7-1 on page 384 shows the supported IP address formats.

Table 7-1 ip_address_list formats

IP type                                              ip_address_list format
IPv4 (no port set, SVC uses default)                 1.2.3.4
IPv4 with specific port                              1.2.3.4:22
Full IPv6, default port                              1234:1234:0001:0123:1234:1234:1234:1234
Full IPv6, default port, leading zeros suppressed    1234:1234:1:123:1234:1234:1234:1234
Full IPv6 with port                                  [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23
Zero-compressed IPv6, default port                   2002::4ff6
Zero-compressed IPv6 with port                       [2002::4ff6]:23

We have completed the tasks that are required to change the IP addresses (cluster and service) of the SVC environment.

7.7.7 Setting <strong>the</strong> cluster time zone and time<br />

Use <strong>the</strong> -timezone parameter to specify <strong>the</strong> numeric ID of <strong>the</strong> time zone that you want to set.<br />

Issue <strong>the</strong> svcinfo lstimezones command to list <strong>the</strong> time zones that are available on <strong>the</strong><br />

cluster; this command displays a list of valid time zone settings.<br />

Tip: If you have changed <strong>the</strong> time zone, you must clear <strong>the</strong> error log dump directory before<br />

you can view <strong>the</strong> error log through <strong>the</strong> Web application.<br />

Setting the cluster time zone

Perform the following steps to set the cluster time zone and time:

1. Find out for which time zone your cluster is currently configured. Enter the svcinfo showtimezone command, as shown in Example 7-85.

Example 7-85 svcinfo showtimezone command
IBM_2145:ITSO-CLS1:admin>svcinfo showtimezone
id timezone
522 UTC

2. To find the time zone code that is associated with your time zone, enter the svcinfo lstimezones command, as shown in Example 7-86. A truncated list is provided for this example. If this setting is correct (for example, 522 UTC), you can go to step 4. If not, continue with step 3.

Example 7-86 svcinfo lstimezones command
IBM_2145:ITSO-CLS1:admin>svcinfo lstimezones
id timezone
.
.
507 Turkey
508 UCT
509 Universal
510 US/Alaska
511 US/Aleutian
512 US/Arizona
513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
.
.

3. Now that you know which time zone code is correct for you, set the time zone by issuing the svctask settimezone command (Example 7-87).

Example 7-87 svctask settimezone command
IBM_2145:ITSO-CLS1:admin>svctask settimezone -timezone 520

4. Set the cluster time by issuing the svctask setclustertime command (Example 7-88).

Example 7-88 svctask setclustertime command
IBM_2145:ITSO-CLS1:admin>svctask setclustertime -time 061718402008

The format of the time is MMDDHHmmYYYY.

You have completed the necessary tasks to set the cluster time zone and time.
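Because the MMDDHHmmYYYY layout corresponds directly to date format specifiers, the cluster time can be synchronized to the clock of an administrative workstation in one line. This is only a sketch; the host alias svccluster and a working SSH key are assumptions:

#!/bin/sh
# Sketch: push the workstation clock to the cluster.
# "date +%m%d%H%M%Y" emits MMDDHHmmYYYY, the layout that setclustertime expects.
NOW=$(date +%m%d%H%M%Y)
ssh admin@svccluster "svctask setclustertime -time $NOW"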

7.7.8 Starting statistics collection

Statistics are collected at the end of each sampling period (as specified by the -interval parameter). These statistics are written to a file. A new file is created at the end of each sampling period. Separate files are created for MDisk, VDisk, and node statistics.

Use the svctask startstats command to start the collection of statistics, as shown in Example 7-89.

Example 7-89 svctask startstats command
IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15

The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts statistics collection and gathers data at 15-minute intervals.

Statistics collection: To verify that statistics collection is set, display the cluster properties again, as shown in Example 7-90.

Example 7-90 Statistics collection status and frequency
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status on
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --
We have completed the required tasks to start statistics collection on the cluster.
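Starting collection and confirming the resulting cluster properties can be combined in one short script. A sketch, reusing the cluster name ITSO-CLS1 from the examples and assuming SSH access as the admin user:

#!/bin/sh
# Sketch: start statistics collection at a 15-minute interval, then
# confirm by filtering the two statistics_* lines from the properties.
ssh admin@svccluster "svctask startstats -interval 15"
ssh admin@svccluster "svcinfo lscluster ITSO-CLS1" | grep "^statistics_"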

7.7.9 Stopping statistics collection

Use the svctask stopstats command to stop the collection of statistics within the cluster (Example 7-91).

Example 7-91 svctask stopstats command
IBM_2145:ITSO-CLS1:admin>svctask stopstats

This command stops the statistics collection. Do not expect any prompt message from this command.

To verify that the statistics collection is stopped, display the cluster properties again, as shown in Example 7-92.

Example 7-92 Statistics collection status and frequency
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status off
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --

Notice that the interval parameter is not changed, but the status is off. We have completed the required tasks to stop statistics collection on our cluster.

7.7.10 Status of copy operation

Use the svcinfo lscopystatus command, as shown in Example 7-93, to determine whether a file copy operation is in progress. Only one file copy operation can be performed at a time. The output of this command is a status of active or inactive.

Example 7-93 lscopystatus command
IBM_2145:ITSO-CLS1:admin>svcinfo lscopystatus
status
inactive

7.7.11 Shutting down a cluster

If all input power to an SVC cluster is to be removed for more than a few minutes (for example, if the machine room power is to be shut down for maintenance), it is important to shut down the cluster before removing the power. If the input power is removed from the uninterruptible power supply units without first shutting down the cluster and the uninterruptible power supply units, the uninterruptible power supply units remain operational and eventually become drained of power.

When input power is restored to the uninterruptible power supply units, they start to recharge. However, the SVC does not permit any I/O activity to be performed to the VDisks until the uninterruptible power supply units are charged enough to enable all of the data on the SVC nodes to be destaged in the event of a subsequent unexpected power loss. Recharging the uninterruptible power supply can take as long as two hours.
Shutting down the cluster prior to removing input power to the uninterruptible power supply units prevents the battery power from being drained. It also makes it possible for I/O activity to be resumed as soon as input power is restored.

You can use the following procedure to shut down the cluster:

1. Use the svctask stopcluster command to shut down your SVC cluster (Example 7-94).

Example 7-94 svctask stopcluster
IBM_2145:ITSO-CLS1:admin>svctask stopcluster
Are you sure that you want to continue with the shut down?

This command shuts down the SVC cluster. All data is flushed to disk before the power is removed. At this point, you lose administrative contact with your cluster, and the PuTTY application automatically closes.

2. You are presented with the following message:

Warning: Are you sure that you want to continue with the shut down?

Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy) relationships, data migration operations, and forced deletions before continuing. Entering y executes the command; entering anything other than y or Y leaves the command unexecuted. In either case, no feedback is displayed.

Important: Before shutting down a cluster, ensure that all I/O operations destined for this cluster are stopped, because you will lose access to all VDisks that are provided by this cluster. Failure to do so can result in failed I/O operations being reported to the host operating systems.

Begin the process of quiescing all I/O to the cluster by stopping the applications on the hosts that are using the VDisks that are provided by the cluster.

3. We have completed the tasks that are required to shut down the cluster. To shut down the uninterruptible power supply units, press the power button on the front panel of each uninterruptible power supply unit.

Restarting the cluster: To restart the cluster, you must first restart the uninterruptible power supply units by pressing the power button on their front panels. Then, press the power on button on the service panel of one of the nodes within the cluster. After the node is fully booted up (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the panel), you can start the other nodes in the same way.

As soon as all of the nodes are fully booted, you can reestablish administrative contact using PuTTY, and your cluster is fully operational again.

7.8 Nodes

This section details the tasks that can be performed at an individual node level.

7.8.1 Viewing node details

Use the svcinfo lsnode command to view summary information about the nodes that are defined within the SVC environment. To view more details about a specific node, append the node name (for example, SVCNode_1) to the command.

Example 7-95 shows both of these commands.

Tip: The -delim parameter truncates the content in the window and separates data fields with colons (:) as opposed to wrapping text over multiple lines.

Example 7-95 svcinfo lsnode command
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,node1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,node2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,node3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
4,node4,100066C108,50050768010027E2,online,1,io_grp1,no,20400001864C1008,8G4

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number 1000739007
WWNN 50050768010037E5
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 20400001C3240007
port_id 50050768014037E5
port_status active
port_speed 4Gb
port_id 50050768013037E5
port_status active
port_speed 4Gb
port_id 50050768011037E5
port_status active
port_speed 4Gb
port_id 50050768012037E5
port_status active
port_speed 4Gb
hardware 8G4

7.8.2 Adding a node

After cluster creation is completed through the service panel (the front panel of one of the SVC nodes) and the cluster Web interface, only one node (the configuration node) is set up. To have a fully functional SVC cluster, you must add a second node to the configuration.
To add a node to a cluster, gather the necessary information, as explained in these steps:

► Before you can add a node, you must know which unconfigured nodes you have as “candidates”. Issue the svcinfo lsnodecandidate command (Example 7-96).

► You must specify to which I/O Group you are adding the node. If you enter the svcinfo lsnode command, you can easily identify the I/O Group ID of the group to which you are adding your node, as shown in Example 7-97.

Example 7-96 svcinfo lsnodecandidate command
IBM_2145:ITSO-CLS1:admin>svcinfo lsnodecandidate
id panel_name UPS_serial_number UPS_unique_id hardware
50050768010027E2 108283 100066C108 20400001864C1008 8G4
50050768010037DC 104603 1000739004 20400001C3240004 8G4

Tip: The node that you want to add must have a separate uninterruptible power supply unit serial number from the uninterruptible power supply unit on the first node.

Example 7-97 svcinfo lsnode command
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware,iscsi_name,iscsi_alias
1,ITSO_CLS1_0,100089J040,50050768010059E7,online,0,io_grp0,yes,2040000209680100,8G4,iqn.1986-03.com.ibm:2145.ITSO_CLS1_0.ITSO_CLS1_0_N0,

Now that we know the available nodes, we can use the svctask addnode command to add the node to the SVC cluster configuration. Example 7-98 shows the command to add a node to the SVC cluster.

Example 7-98 svctask addnode (wwnodename) command
IBM_2145:ITSO-CLS1:admin>svctask addnode -wwnodename 50050768010027E2 -name Node2 -iogrp io_grp0
Node, id [2], successfully added

This command adds the candidate node with the wwnodename of 50050768010027E2 to the I/O Group called io_grp0.

We used the -wwnodename parameter (50050768010027E2). However, we can also use the -panelname parameter (108283) instead (Example 7-99). If you are standing in front of the node, it is easier to read the panel name than it is to get the WWNN.

Example 7-99 svctask addnode (panelname) command
IBM_2145:ITSO-CLS1:admin>svctask addnode -panelname 108283 -name Node2 -iogrp io_grp0

We also used the optional -name parameter (Node2). If you do not provide the -name parameter, the SVC automatically generates the name nodex (where x is the ID sequence number that is assigned internally by the SVC).

Name: If you want to provide a name, you can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, dash, or the word “node” (because this prefix is reserved for SVC assignment only).
If the svctask addnode command returns no information, but your second node is powered on and the zones are correctly defined, preexisting cluster configuration data might be stored in the node. If you are sure that this node is not part of another active SVC cluster, you can use the service panel to delete the existing cluster information. After this action is complete, reissue the svcinfo lsnodecandidate command and you will see the node listed.
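When several nodes are to be joined, the candidate list can drive the addnode calls directly. The following sketch assumes that -delim is accepted by lsnodecandidate as it is by the other svcinfo views, that every listed candidate really belongs to this cluster, and that io_grp0 is the intended I/O Group; adjust the group per pair of nodes:

#!/bin/sh
# Sketch: add every candidate node by panel name, letting the SVC
# auto-generate node names because -name is omitted.
ssh admin@svccluster "svcinfo lsnodecandidate -delim :" |
tail -n +2 |            # skip the header line
cut -d : -f 2 |         # panel_name is the second field
while read -r panel; do
    ssh admin@svccluster "svctask addnode -panelname $panel -iogrp io_grp0"
done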

7.8.3 Renaming a node

Use the svctask chnode command to rename a node within the SVC cluster configuration, as shown in Example 7-100.

Example 7-100 svctask chnode -name command
IBM_2145:ITSO-CLS1:admin>svctask chnode -name ITSO_CLS1_Node1 4

This command renames node ID 4 to ITSO_CLS1_Node1.

Name: The chnode command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, dash, or the word “node” (because this prefix is reserved for SVC assignment only).

7.8.4 Deleting a node

Use the svctask rmnode command to remove a node from the SVC cluster configuration (Example 7-101).

Example 7-101 svctask rmnode command
IBM_2145:ITSO-CLS1:admin>svctask rmnode node4

This command removes node4 from the SVC cluster.

Because node4 was also the configuration node, the SVC transfers the configuration node responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses communication and closes automatically. We must restart the PuTTY application to establish a secure session with the new configuration node.

Important: If this node is the last node in an I/O Group, and there are VDisks still assigned to the I/O Group, the node is not deleted from the cluster.

If this node is the last node in the cluster, and the I/O Group has no VDisks remaining, the cluster is destroyed and all virtualization information is lost. Any data that is still required must be backed up or migrated prior to destroying the cluster.

7.8.5 Shutting down a node

On occasion, it can be necessary to shut down a single node within the cluster to perform tasks, such as scheduled maintenance, while leaving the SVC environment up and running. Use the svctask stopcluster -node command, as shown in Example 7-102, to shut down a single node.
Example 7-102 svctask stopcluster -node command
IBM_2145:ITSO-CLS1:admin>svctask stopcluster -node n4
Are you sure that you want to continue with the shut down?

This command shuts down node n4 in a graceful manner. When this node has been shut down, the other node in the I/O Group will destage the contents of its cache and will go into write-through mode until the node is powered up and rejoins the cluster.

Important: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data migration operations. The remaining node in the I/O Group handles these activities, but be aware that the cluster now has a single point of failure in that I/O Group.

If this is the last node in an I/O Group, all access to the VDisks in the I/O Group will be lost. Verify that you want to shut down this node before executing this command; in this case, you must specify the -force flag.

By reissuing the svcinfo lsnode command (Example 7-103), we can see that the node is now offline.

Example 7-103 svcinfo lsnode command
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,n1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,n2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,n3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
6,n4,100066C108,0000000000000000,offline,1,io_grp1,no,20400001864C1008,unknown

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode n4
CMMVC5782E The object specified is offline.

Restart: To restart the node manually, press the power on button from the service panel of the node.

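For scheduled maintenance, the shutdown and the state check can be scripted together. A sketch, with the node name, the piped confirmation answer, and the host alias all assumptions for illustration:

#!/bin/sh
# Sketch: stop one node and wait until the cluster reports it offline.
NODE=n4
printf 'y\n' | ssh admin@svccluster "svctask stopcluster -node $NODE"   # answer the prompt
until ssh admin@svccluster "svcinfo lsnode -delim ," |
        grep ",$NODE," | grep -q ",offline,"; do
    sleep 10
done
echo "$NODE is offline; maintenance can begin"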
We have completed the tasks that are required to view, add, delete, rename, and shut down a node within an SVC environment.

7.9 I/O Groups

This section explains the tasks that you can perform at an I/O Group level.

7.9.1 Viewing I/O Group details

Use the svcinfo lsiogrp command, as shown in Example 7-104, to view information about the I/O Groups that are defined within the SVC environment.
Example 7-104 I/O Group details
IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrp
id name node_count vdisk_count host_count
0 io_grp0 2 3 3
1 io_grp1 2 4 3
2 io_grp2 0 0 2
3 io_grp3 0 0 2
4 recovery_io_grp 0 0 0

As we can see, the SVC predefines five I/O Groups. In a four-node cluster (similar to our example), only two I/O Groups are actually in use. The other I/O Groups (io_grp2 and io_grp3) are for a six-node or eight-node cluster.

The recovery I/O Group is a temporary home for VDisks when all nodes in the I/O Group that normally owns them have suffered multiple failures. This design allows us to move the VDisks to the recovery I/O Group and, then, into a working I/O Group. Of course, while temporarily assigned to the recovery I/O Group, I/O access is not possible.

7.9.2 Renaming an I/O Group

Use the svctask chiogrp command to rename an I/O Group (Example 7-105).

Example 7-105 svctask chiogrp command
IBM_2145:ITSO-CLS1:admin>svctask chiogrp -name io_grpA io_grp1

This command renames the I/O Group io_grp1 to io_grpA.

Name: The chiogrp command specifies the new name first. If you want to provide a name, you can use letters A to Z, letters a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, dash, or the word “iogrp” (because this prefix is reserved for SVC assignment only).

To see whether the renaming was successful, issue the svcinfo lsiogrp command again to see the change. We have completed the tasks that are required to rename an I/O Group.

7.9.3 Adding and removing hostiogrp

Mapping specific host objects to specific I/O Groups allows you to reach the maximum number of hosts supported by an SVC cluster. Use the svctask addhostiogrp command to map a specific host to a specific I/O Group, as shown in Example 7-106.
Example 7-106 svctask addhostiogrp command
IBM_2145:ITSO-CLS1:admin>svctask addhostiogrp -iogrp 1 Kanaga

Parameters:

► -iogrp iogrp_list | -iogrpall
Specifies a list of one or more I/O Groups that must be mapped to the host. This parameter is mutually exclusive with -iogrpall. The -iogrpall option specifies that all of the I/O Groups must be mapped to the specified host; it is mutually exclusive with -iogrp.

► host_id_or_name
Identifies the host, either by ID or by name, to which the I/O Groups must be mapped.

Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in Example 7-107.

Example 7-107 svctask rmhostiogrp command
IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -iogrp 0 Kanaga

Parameters:

► -iogrp iogrp_list | -iogrpall
Specifies a list of one or more I/O Groups that must be unmapped from the host. This parameter is mutually exclusive with -iogrpall. The -iogrpall option specifies that all of the I/O Groups must be unmapped from the specified host; it is mutually exclusive with -iogrp.

► -force
If the removal of a host to I/O Group mapping will result in the loss of VDisk-to-host mappings, the command fails unless the -force flag is used. The -force flag overrides this behavior and forces the deletion of the host to I/O Group mapping.

► host_id_or_name
Identifies the host, either by ID or by name, from which the I/O Groups must be unmapped.

7.9.4 Listing I/O Groups

To list all of the I/O Groups that are mapped to the specified host and vice versa, use the svcinfo lshostiogrp command, specifying the host name Kanaga, as shown in Example 7-108.

Example 7-108 svcinfo lshostiogrp command
IBM_2145:ITSO-CLS1:admin>svcinfo lshostiogrp Kanaga
id name
1 io_grp1

To list all of the host objects that are mapped to the specified I/O Group, use the svcinfo lsiogrphost command, as shown in Example 7-109.
Example 7-109 svcinfo lsiogrphost command
IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrphost io_grp1
id name
1 Nile
2 Kanaga
3 Siam

In Example 7-109, io_grp1 is the I/O Group name.

7.10 Managing authentication

In the following topics, we show authentication administration.

7.10.1 Managing users using the CLI

In this section, we demonstrate operating and managing authentication using the CLI.

All users must now be a member of a predefined user group. You can list those groups by using the svcinfo lsusergrp command, as shown in Example 7-110.

Example 7-110 svcinfo lsusergrp command
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no

Example 7-111 is a simple example of creating a user. User John is added to the user group Monitor with the password m0nitor.

Example 7-111 svctask mkuser called John with password m0nitor
IBM_2145:ITSO-CLS1:admin>svctask mkuser -name John -usergrp Monitor -password m0nitor
User, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>

Local users are those users that are not authenticated by a remote authentication server. Remote users are those users that are authenticated by a remote central registry server.

The user groups already have a defined authority role, as shown in Table 7-2.
Table 7-2 Authority roles

► Security admin
  Role: All commands
  User: Superusers

► Administrator
  Role: All commands except these svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
  User: Administrators that control the SVC

► Copy operator
  Role: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
  User: For those users that control all of the copy functionality of the cluster

► Service
  Role: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
  User: For those users that perform service maintenance and other hardware tasks on the cluster

► Monitor
  Role: All svcinfo commands, the following svctask commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser, and the svcconfig backup command
  User: For those users only needing view access

7.10.2 Managing user roles and groups

Role-based security commands are used to restrict the administrative abilities of a user. We cannot create new user roles, but we can create new user groups and assign a predefined role to our group.

To view the user roles on your cluster, use the svcinfo lsusergrp command, as shown in Example 7-112, to list all of the users.
Example 7-112 svcinfo lsusergrp command
IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no

To view our currently defined users and the user groups to which they belong, we use the svcinfo lsuser command, as shown in Example 7-113.

Example 7-113 svcinfo lsuser command
IBM_2145:ITSO-CLS2:admin>svcinfo lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,admin,no,yes,no,0,SecurityAdmin
2,Pall,yes,no,no,1,Administrator

7.10.3 Changing a user

To change user passwords, issue the svctask chuser command. To change the Service account user password, see 7.7.3, “Cluster authentication”.

The chuser command allows you to modify a user that is already created. You can rename a user, assign a new password (if you are logged on with administrative privileges), or move a user from one user group to another. Be aware, however, that a user can be a member of only one group at a time.
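As a sketch of these options, the user John from Example 7-111 could be moved into the Administrator group and the change verified; check the exact flags with svctask chuser -h first, because the flag names here are inferred from the mkuser example and the other ch* commands:

#!/bin/sh
# Sketch: move user John to the Administrator group, then verify.
ssh admin@svccluster "svctask chuser -usergrp Administrator John"
ssh admin@svccluster "svcinfo lsuser -delim ," | grep ",John,"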

7.10.4 Audit log command

The audit log can be extremely helpful to see which commands have been entered on our cluster. Most action commands that are issued by the old or new CLI are recorded in the audit log:

► The native GUI performs actions by using the CLI programs.
► The SVC Console performs actions by issuing Common Information Model (CIM) commands to the CIM object manager (CIMOM), which then runs the CLI programs.

Actions performed by using both the native GUI and the SVC Console are recorded in the audit log.

Certain commands are not audited:
► svctask cpdumps
► svctask cleardumps
► svctask finderr
► svctask dumperrlog
► svctask dumpinternallog

The audit log contains approximately 1 MB of data, which can contain about 6,000 average-length commands. When this log is full, the cluster copies it to a new file in the /dumps/audit directory on the config node and resets the in-memory audit log.

To display entries from the audit log, use the svcinfo catauditlog -first 5 command to return a list of five in-memory audit log entries, as shown in Example 7-114.

Example 7-114 catauditlog command
IBM_2145:ITSO-CLS1:admin>svcinfo catauditlog -first 5 -delim ,
291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21
292,090904201238,admin,10.64.210.231,0,,svctask chvdisk -name swiss_cheese 21
293,090904204314,superuser,10.64.210.231,0,,svctask chhost -name ITSO_W2008 1
294,090904204314,superuser,10.64.210.231,0,,svctask chhost -mask 15 1
295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21

If you need to dump the contents of the in-memory audit log to a file on the current configuration node, use the svctask dumpauditlog command. This command does not provide any feedback, only the prompt. To obtain a list of the audit log dumps, use the svcinfo lsauditlogdumps command, as described in Example 7-115.

Example 7-115 svctask dumpauditlog/svcinfo lsauditlogdumps command
IBM_2145:ITSO-CLS1:admin>svctask dumpauditlog
IBM_2145:ITSO-CLS1:admin>svcinfo lsauditlogdumps
id auditlog_filename
0 auditlog_0_80_20080619134139_0000020060c06fca
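Because catauditlog accepts -delim, the in-memory log lends itself to scripted filtering, for example, to list only the commands a particular user issued. A sketch, with the field positions taken from Example 7-114 and the user name an assumption:

#!/bin/sh
# Sketch: show the timestamp and command text of the last 100 audit
# entries issued by "superuser". In the -delim , output, field 2 is the
# timestamp, field 3 the user name, and field 7 the command text.
ssh admin@svccluster "svcinfo catauditlog -first 100 -delim ," |
awk -F, '$3 == "superuser" { print $2, $7 }'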

7.11 Managing Copy Services

In these topics, we show how to manage copy services.

7.11.1 FlashCopy operations

In this section, we use a scenario to illustrate how to use commands with PuTTY to perform FlashCopy. See the IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User’s Guide, SC26-7544, for more commands.

Scenario description

We use the following scenario in both the command-line section and the GUI section. In the following scenario, we want to FlashCopy the following VDisks:

DB_Source    Database files
Log_Source   Database log files
App_Source   Application files

We create consistency groups to handle the FlashCopy of DB_Source and Log_Source, because data integrity must be kept on DB_Source and Log_Source. In our scenario, the application files are independent of the database, so we create a single FlashCopy mapping for App_Source.
We will make two FlashCopy targets for DB_Source and Log_Source and, therefore, two consistency groups. Figure 7-2 shows the scenario.

Figure 7-2 FlashCopy scenario

7.11.2 Setting up FlashCopy

We have already created the source and target VDisks, and the source and target VDisks are identical in size, which is a requirement of the FlashCopy function:

► DB_Source, DB_Target1, and DB_Target2
► Log_Source, Log_Target1, and Log_Target2
► App_Source and App_Target1

To set up the FlashCopy, we performed the following steps:

1. Create two FlashCopy consistency groups:
– FCCG1
– FCCG2

2. Create FlashCopy mappings for the source VDisks, each with a copy rate of 50:
– DB_Source FlashCopy to DB_Target1; the mapping name is DB_Map1
– DB_Source FlashCopy to DB_Target2; the mapping name is DB_Map2
– Log_Source FlashCopy to Log_Target1; the mapping name is Log_Map1
– Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2
– App_Source FlashCopy to App_Target1; the mapping name is App_Map1

7.11.3 Creating a FlashCopy consistency group

To create a FlashCopy consistency group, we use the svctask mkfcconsistgrp command to create a new consistency group. The ID of the new group is returned. If you have created several FlashCopy mappings for a group of VDisks that contain elements of data for the same application, it might be convenient to assign these mappings to a single FlashCopy consistency group.
Then, you can issue a single prepare or start command for the whole group, so that, for example, all of the files for a particular database are copied at the same time.

In Example 7-116, the FCCG1 and FCCG2 consistency groups are created to hold the FlashCopy maps of DB and Log. This step is extremely important for FlashCopy on database applications. It helps to keep data integrity during FlashCopy.

Example 7-116 Creating two FlashCopy consistency groups
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created

In Example 7-117, we checked the status of the consistency groups. Each consistency group has a status of empty.

Example 7-117 Checking the status
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 empty
2 FCCG2 empty

If you want to change the name of a consistency group, you can use the svctask chfcconsistgrp command. Type svctask chfcconsistgrp -h for help with this command.

7.11.4 Creating a FlashCopy mapping

To create a FlashCopy mapping, we use the svctask mkfcmap command. This command creates a new FlashCopy mapping, which maps a source VDisk to a target VDisk to prepare for subsequent copying.

When executed, this command creates a new FlashCopy mapping logical object. This mapping persists until it is deleted. The mapping specifies the source and destination VDisks. The destination must be identical in size to the source, or the mapping will fail. Issue the svcinfo lsvdisk -bytes command to find the exact size of the source VDisk for which you want to create a target disk of the same size.

In a single mapping, source and destination cannot be on the same VDisk. A mapping is triggered at the point in time when the copy is required. The mapping can optionally be given a name and assigned to a consistency group. These groups of mappings can be triggered at the same time, enabling multiple VDisks to be copied at the same time, which creates a consistent copy of multiple disks. A consistent copy of multiple disks is required for database products in which the database and log files reside on separate disks.

If no consistency group is defined, the mapping is assigned to the default group 0, which is a special group that cannot be started as a whole. Mappings in this group can only be started on an individual basis.

The background copy rate specifies the priority that must be given to completing the copy. If 0 is specified, the copy will not proceed in the background. The default is 50.
Tip: There is a parameter to delete FlashCopy mappings automatically after completion of a background copy (when the mapping gets to the idle_or_copied state). Use the command:

svctask mkfcmap -autodelete

This command does not delete mappings in a cascade with dependent mappings, because such a mapping cannot get to the idle_or_copied state in this situation.

In Example 7-118, the first FlashCopy mappings for DB_Source, Log_Source, and App_Source are created.

Example 7-118 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target_1 -name DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target_1 -name Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Target_1 -name App_Map1
FlashCopy Mapping, id [2], successfully created

Example 7-119 shows the command to create a second FlashCopy mapping for VDisks DB_Source and Log_Source.

Example 7-119 Create additional FlashCopy mappings
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created
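The five mappings of this scenario can equally be created from a loop when the source, target, name, and group of each mapping are kept in a list. A sketch, using the names from Example 7-118 and Example 7-119 (a dash marks a mapping without a consistency group):

#!/bin/sh
# Sketch: create all five scenario mappings in one pass.
while read -r src tgt map grp; do
    opts="-source $src -target $tgt -name $map"
    [ "$grp" != "-" ] && opts="$opts -consistgrp $grp"
    ssh admin@svccluster "svctask mkfcmap $opts"
done <<'EOF'
DB_Source  DB_Target_1  DB_Map1  FCCG1
DB_Source  DB_Target2   DB_Map2  FCCG2
Log_Source Log_Target_1 Log_Map1 FCCG1
Log_Source Log_Target2  Log_Map2 FCCG2
App_Source App_Target_1 App_Map1 -
EOF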

Example 7-120 shows the result of these FlashCopy mappings. The status of each mapping is idle_or_copied.

Example 7-120 Check the result of Multiple Target FlashCopy mappings
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0 DB_Map1 0 DB_Source 6 DB_Target_1 1 FCCG1 idle_or_copied 0 50 100 off no
1 Log_Map1 1 Log_Source 4 Log_Target_1 1 FCCG1 idle_or_copied 0 50 100 off no
2 App_Map1 2 App_Source 3 App_Target_1 idle_or_copied 0 50 100 off no
3 DB_Map2 0 DB_Source 7 DB_Target_2 2 FCCG2 idle_or_copied 0 50 100 off no
4 Log_Map2 1 Log_Source 5 Log_Target_2 2 FCCG2 idle_or_copied 0 50 100 off no

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied

If you want to change the FlashCopy mapping, you can use the svctask chfcmap command. Type svctask chfcmap -h to get help with this command.

7.11.5 Preparing (pre-triggering) the FlashCopy mapping

At this point, the mapping has been created, but the cache still accepts data for the source VDisks. You can only trigger the mapping when the cache does not contain any data for the FlashCopy source VDisks. You must issue an svctask prestartfcmap command to prepare a FlashCopy mapping to start. This command tells the SVC to flush the cache of any content for the source VDisk and to pass through any further write data for this VDisk.

When the svctask prestartfcmap command is executed, the mapping enters the Preparing state. After the preparation is complete, it changes to the Prepared state. At this point, the mapping is ready for triggering. Preparing and the subsequent triggering are usually performed on a consistency group basis. Only mappings belonging to consistency group 0 can be prepared on their own, because consistency group 0 is a special group, which contains the FlashCopy mappings that do not belong to any consistency group. A FlashCopy mapping must be prepared before it can be triggered.

In our scenario, App_Map1 is not in a consistency group. In Example 7-121, we show how we initialize the preparation for App_Map1. Another option is to add the -prep parameter to the svctask startfcmap command, which first prepares the mapping and then starts the FlashCopy. In the example, we also show how to check the status of the current FlashCopy mapping. App_Map1’s status is prepared.

Example 7-121 Prepare a FlashCopy without a consistency group
IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.6 Preparing (pre-triggering) the FlashCopy consistency group

We use the svctask prestartfcconsistgrp command to prepare a FlashCopy consistency group. As with 7.11.5, “Preparing (pre-triggering) the FlashCopy mapping”, this command flushes the cache of any data that is destined for the source VDisks and forces the cache into write-through mode until the mapping is started. The difference is that this command prepares a group of mappings (at a consistency group level) instead of one mapping.

When you have assigned several mappings to a FlashCopy consistency group, you only have to issue a single prepare command for the whole group to prepare all of the mappings at one time.

Example 7-122 shows how we prepare the consistency groups for DB and Log and check the result. After the command has executed, all of our FlashCopy maps are in the prepared status, and all of the consistency groups are in the prepared status, too. Now, we are ready to start the FlashCopy.

Example 7-122 Prepare a FlashCopy consistency group
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 prepared
2 FCCG2 prepared

7.11.7 Starting (triggering) FlashCopy mappings

The svctask startfcmap command is used to start a single FlashCopy mapping. When invoked, a point-in-time copy of the source VDisk is created on the target VDisk.
When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy proceeds depends on the background copy rate attribute of the mapping. If the mapping is set to 0 (NOCOPY), only data that is subsequently updated on the source will be copied to the destination. We suggest that you use this scenario as a backup copy while the mapping exists in the Copying state. If the copy is stopped, the destination is unusable. If you want to end up with a duplicate copy of the source at the destination, set the background copy rate greater than 0. This way, the system copies all of the data (even unchanged data) to the destination and eventually reaches the idle_or_copied state. After this data is copied, you can delete the mapping and have a usable point-in-time copy of the source at the destination.

In Example 7-123, after the FlashCopy is started, App_Map1 changes to copying status.

Example 7-123 Start App_Map1
IBM_2145:ITSO-CLS1:admin>svctask startfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0 DB_Map1 0 DB_Source 6 DB_Target_1 1 FCCG1 prepared 0 50 100 off no
1 Log_Map1 1 Log_Source 4 Log_Target_1 1 FCCG1 prepared 0 50 100 off no
2 App_Map1 2 App_Source 3 App_Target_1 copying 0 50 100 off no
3 DB_Map2 0 DB_Source 7 DB_Target_2 2 FCCG2 prepared 0 50 100 off no
4 Log_Map2 1 Log_Source 5 Log_Target_2 2 FCCG2 prepared 0 50 100 off no

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status copying
progress 29
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.8 Starting (triggering) the FlashCopy consistency group

We execute the svctask startfcconsistgrp command, as shown in Example 7-124, and afterward, the database can be resumed. We have now created two consistent point-in-time copies of the DB and Log VDisks. After execution, the consistency group and all of its FlashCopy maps are in the copying status.

Example 7-124 Start FlashCopy consistency group

IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 copying
2 FCCG2 copying

7.11.9 Monitoring the FlashCopy progress

To monitor the background copy progress of the FlashCopy mappings, we issue the svcinfo lsfcmapprogress command for each FlashCopy mapping. Alternatively, you can query the copy progress by using the svcinfo lsfcmap command. As shown in Example 7-125, DB_Map1, Log_Map1, DB_Map2, and Log_Map2 each report a background copy that is 23% complete, and App_Map1 reports a background copy that is 53% complete.

Example 7-125 Monitoring background copy progress

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map1
id progress
0 23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map1
id progress
1 23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map2
id progress
4 23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map2
id progress
3 23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress App_Map1
id progress
2 53
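
Rather than reissuing the command manually, you can poll the progress from a management workstation. This is a minimal sketch, assuming SSH key authentication to the cluster as the admin user; the loop runs on the workstation, not on the SVC itself:

while true; do ssh admin@ITSO-CLS1 "svcinfo lsfcmapprogress DB_Map1"; sleep 60; done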

When the background copy completes, the FlashCopy mapping enters the idle_or_copied state. When all of the FlashCopy mappings in a consistency group enter this state, the consistency group is also in the idle_or_copied state. In this state, the FlashCopy mapping can be deleted and the target disk can be used independently, for example, if another target disk is to be used for the next FlashCopy of the particular source VDisk.
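
To list only the mappings that have reached this state, the concise view can be filtered. This is a small sketch, assuming the -filtervalue parameter that the svcinfo listing commands accept:

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -filtervalue status=idle_or_copied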

7.11.10 Stopping the FlashCopy mapping

The svctask stopfcmap command is used to stop a FlashCopy mapping. This command allows you to stop an active (copying) or suspended mapping, and it stops only the single FlashCopy mapping that you specify.

When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC. The FlashCopy mapping must be prepared again, or retriggered, to bring the target VDisk online again.

Tip: In a Multiple Target FlashCopy environment, if you want to stop a mapping or group, consider whether you want to keep any of the dependent mappings. If not, issue the stop command with the force parameter, which stops all of the dependent maps and negates the need for the stopping copy process to run.
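
As a sketch, a forced stop of a single mapping (reusing the mapping name from this chapter) looks like the following command; use it only when the dependent maps are expendable:

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap -force App_Map1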

Important: Only stop a FlashCopy mapping when the data on the target VDisk is not in use, or when you want to modify the FlashCopy mapping. If a FlashCopy mapping is stopped while it is in the Copying state with a progress of less than 100, the target VDisk becomes invalid and is set offline by the SVC.

Example 7-126 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has changed to idle_or_copied.

Example 7-126 Stop APP_Map1 FlashCopy

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.11 Stopping the FlashCopy consistency group

The svctask stopfcconsistgrp command is used to stop any active FlashCopy consistency group; it stops all of the mappings in the consistency group. When a FlashCopy consistency group is stopped, the target VDisks of all mappings that are not 100% copied become invalid and are set offline by the SVC. The FlashCopy consistency group must be prepared again and restarted to bring the target VDisks online again.

Important: Only stop a FlashCopy consistency group when the data on the target VDisks is not in use, or when you want to modify the FlashCopy consistency group. When a consistency group is stopped, a target VDisk might become invalid and be set offline by the SVC, depending on the state of its mapping.

As shown in Example 7-127, we stop the FCCG1 and FCCG2 consistency groups, and the status of both consistency groups changes to stopped. In this case, all of the FlashCopy mappings had already completed their copy operations, so they are in the idle_or_copied state rather than the stopped state.

Example 7-127 Stop FCCG1 and FCCG2 consistency groups

IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name status
1 FCCG1 stopped
2 FCCG2 stopped
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring
0,DB_Map1,0,DB_Source,6,DB_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
1,Log_Map1,1,Log_Source,4,Log_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
2,App_Map1,2,App_Source,3,App_Target_1,,,idle_or_copied,100,50,100,off,,,no
3,DB_Map2,0,DB_Source,7,DB_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
4,Log_Map2,1,Log_Source,5,Log_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no

7.11.12 Deleting the FlashCopy mapping

To delete a FlashCopy mapping, we use the svctask rmfcmap command. When the command is executed, it attempts to delete the specified FlashCopy mapping. If the FlashCopy mapping is in the stopped state, the command fails unless the -force flag is specified. If the mapping is active (copying), it must first be stopped before it can be deleted.

Deleting a mapping deletes only the logical relationship between the two VDisks. However, when the command is issued on an active FlashCopy mapping by using the -force flag, the deletion renders the data on the FlashCopy mapping target VDisk inconsistent.
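
For completeness, the forced deletion of a mapping whose target data is expendable is a single command; this sketch reuses App_Map1 purely for illustration:

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap -force App_Map1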

Tip: If you want to use the target VDisk as a normal VDisk, monitor the background copy progress until it is complete (100% copied), and then delete the FlashCopy mapping. Another option is to set the -autodelete option when creating the FlashCopy mapping.

As shown in Example 7-128, we delete App_Map1.

Example 7-128 Delete App_Map1

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap App_Map1

7.11.13 Deleting the FlashCopy consistency group

The svctask rmfcconsistgrp command is used to delete a FlashCopy consistency group. When executed, this command deletes the specified consistency group. If there are mappings that are members of the group, the command fails unless the -force flag is specified.

If you also want to delete all of the mappings in the consistency group, delete the mappings first and then delete the consistency group.

As shown in Example 7-129, we delete all of the maps and consistency groups, and then we check the result.

Example 7-129 Remove fcmaps and fcconsistgrp

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map1
IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map2
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
IBM_2145:ITSO-CLS1:admin>

7.11.14 Migrating a VDisk to a Space-Efficient VDisk

Use the following scenario to migrate a VDisk to a Space-Efficient VDisk:

1. Create a Space-Efficient target VDisk with exactly the same size as the VDisk that you want to migrate.

Example 7-130 on page 408 shows the details of VDisk 8. It was created as a Space-Efficient VDisk with the same size as the App_Source VDisk.

Example 7-130 svcinfo lsvdisk 8 command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 8
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 462
autoexpand on
warning 80
grainsize 32

2. Define a FlashCopy mapping in which the non-Space-Efficient VDisk is the source and the Space-Efficient VDisk is the target. Specify a copy rate as high as possible, and activate the -autodelete option for the mapping. See Example 7-131.

Example 7-131 svctask mkfcmap

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Source_SE -name MigrtoSEV -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap 0
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

3. Run the svctask prestartfcmap command and the svcinfo lsfcmap MigrtoSEV command, as shown in Example 7-132.

Example 7-132 svctask prestartfcmap

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MigrtoSEV
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

4. Run the svctask startfcmap command, as shown in Example 7-133.

Example 7-133 svctask startfcmap command

IBM_2145:ITSO-CLS1:admin>svctask startfcmap MigrtoSEV

5. Monitor the copy process using the svcinfo lsfcmapprogress command, as shown in Example 7-134.

Example 7-134 svcinfo lsfcmapprogress command

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress MigrtoSEV
id progress
0 63

6. When the background copy completes, the FlashCopy mapping is deleted automatically, as shown in Example 7-135. The first query still shows the mapping in the copying state; after the copy finishes and the mapping is removed, querying it returns an error.

Example 7-135 svcinfo lsfcmap command

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 73
copy_rate 100
start_time 090827095354
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
CMMVC5754E The specified object does not exist, or the name supplied does not meet the naming rules.

An independent copy of the source VDisk (App_Source) has been created. The migration has completed, as shown in Example 7-136 on page 411.

Example 7-136 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk App_Source_SE
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.77MB
overallocation 99
autoexpand on
warning 80
grainsize 32

Real size: Independently of what you defined as the real size of the target Space-Efficient VDisk, the real size will be at least the capacity of the source VDisk.

To migrate a Space-Efficient VDisk to a fully allocated VDisk, you can follow the same scenario, this time creating a fully allocated VDisk as the target.
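
The following sketch outlines that reverse direction. The fully allocated VDisk name (App_Source_FA) and its creation parameters are hypothetical and are not part of this scenario:

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp 0 -size 1 -unit gb -name App_Source_FA
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source_SE -target App_Source_FA -name MigrtoFAV -copyrate 100 -autodelete
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep MigrtoFAV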

7.11.15 Reverse FlashCopy

Starting with SVC 5.1, you can create a reverse FlashCopy mapping without having to remove the original FlashCopy mapping and without restarting a FlashCopy mapping from the beginning.

In Example 7-137, FCMAP0 is the forward FlashCopy mapping, and FCMAP0_rev is the reverse FlashCopy mapping: its source is FCMAP0's target, and its target is FCMAP0's source. When starting a reverse FlashCopy mapping, you must use the -restore option to indicate that you want to overwrite the data on the source disk of the forward mapping.

Example 7-137 Reverse FlashCopy

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source vdsk0 -target vdsk1 -name FCMAP0
FlashCopy Mapping, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep FCMAP0
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source vdsk1 -target vdsk0 -name FCMAP0_rev
FlashCopy Mapping, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep -restore FCMAP0_rev
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -delim :
id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:group_id:group_name:status:progress:copy_rate:clean_progress:incremental:partner_FC_id:partner_FC_name:restoring
0:FCMAP0:75:vdsk0:76:vdsk1:::copying:0:10:99:off:1:FCMAP0_rev:no
1:FCMAP0_rev:76:vdsk1:75:vdsk0:::copying:99:50:100:off:0:FCMAP0:yes

FCMAP0_rev shows a restoring value of yes while the FlashCopy mapping is copying. After it finishes copying, the restoring value changes to no.

7.11.16 Split-stopping of FlashCopy maps

The stopfcmap command now has a -split option. This option allows the disk at the head of a cascade (the source or target of a map that is 100% complete) to be removed from the cascade when the map is stopped.

For example, if we have four VDisks in a cascade (A → B → C → D) and the map A → B is 100% complete, using the stopfcmap -split mapAB command results in mapAB becoming idle_copied, and the remaining cascade becomes B → C → D.

Without the -split option, VDisk A remains at the head of the cascade (A → C → D). Consider this sequence of steps:

1. The user takes a backup using the mapping A → B. A is the production VDisk; B is a backup.
2. At a later point, the user experiences corruption on A and therefore reverses the mapping (B → A).
3. The user then takes another backup from the production disk A, resulting in the cascade B → A → C.

Stopping A → B without the -split option results in the cascade B → C. Note that the backup disk B is now at the head of this cascade. When the user next wants to take a backup to B, the user can still start mapping A → B (using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).

Stopping A → B with the -split option results in the cascade A → C instead. This action does not cause the same problem, because the production disk A stays at the head of the cascade instead of the backup disk B.
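
In command form, the split stop from this discussion is a single invocation (mapAB is the hypothetical mapping name used above):

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap -split mapAB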

7.12 Metro Mirror operation

Note: This example is for intercluster operations only. If you want to set up intracluster operations, we highlight the parts of the following procedure that you do not need to perform.

In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS4 at the secondary site. Table 7-3 shows the details of the VDisks.

Table 7-3 VDisk details

Content of VDisk      VDisk at primary site    VDisk at secondary site
Database files        MM_DB_Pri                MM_DB_Sec
Database log files    MM_Log_Pri               MM_Log_Sec
Application files     MM_App_Pri               MM_App_Sec

Because data consistency is needed across the MM_DB_Pri and MM_Log_Pri VDisks, the consistency group CG_W2K3_MM is created to handle the Metro Mirror relationships for them. Because, in this scenario, the application files are independent of the database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri VDisk. Figure 7-3 on page 414 illustrates the Metro Mirror setup.

Figure 7-3 Metro Mirror scenario

7.12.1 Setting up Metro Mirror

In the following section, we assume that the source and target VDisks have already been created and that the inter-switch links (ISLs) and zoning are in place, enabling the SVC clusters to communicate.

To set up the Metro Mirror, perform the following steps:

1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS4, on both SVC clusters.
2. Create a Metro Mirror consistency group:
– Name CG_W2K3_MM
3. Create the Metro Mirror relationship for MM_DB_Pri:
– Master MM_DB_Pri
– Auxiliary MM_DB_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL1
– Consistency group CG_W2K3_MM
4. Create the Metro Mirror relationship for MM_Log_Pri:
– Master MM_Log_Pri
– Auxiliary MM_Log_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL2
– Consistency group CG_W2K3_MM
5. Create the Metro Mirror relationship for MM_App_Pri:
– Master MM_App_Pri
– Auxiliary MM_App_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL3

In the following sections, we perform each step by using the CLI.

7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4

We create the SVC partnership on both clusters.

Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform the next step; instead, go to 7.12.3, “Creating a Metro Mirror consistency group” on page 416.

Pre-verification

To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command.

As shown in Example 7-138, ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa. Therefore, both clusters are communicating with each other.

Example 7-138 Listing the available SVC clusters for partnership

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
0000020063E03A38 no ITSO-CLS4
0000020061006FCA no ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
000002006AE04FC4 no ITSO-CLS1
0000020061006FCA no ITSO-CLS2

Example 7-139 shows the output of the svcinfo lscluster command before the partnership is set up. We show it so that you can compare it with the same command output after the Metro Mirror partnership is in place.

Example 7-139 Pre-verification of cluster configuration

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38

Partnership between clusters

In Example 7-140, a partnership is created between ITSO-CLS1 and ITSO-CLS4, specifying 50 MBps of bandwidth to be used for the background copy.

To check the status of the newly created partnership, issue the svcinfo lscluster command. Note that a partnership remains partially configured until it is also created from the other cluster.

Example 7-140 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 50 0000020063E03A38

In Example 7-141, the partnership is created from ITSO-CLS4 back to ITSO-CLS1, again specifying 50 MBps of bandwidth for the background copy. After creating the partnership, verify that it is fully configured on both clusters by reissuing the svcinfo lscluster command.

Example 7-141 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 50 000002006AE04FC4

7.12.3 Creating a Metro Mirror consistency group

In Example 7-142, we create the Metro Mirror consistency group by using the svctask mkrcconsistgrp command. This consistency group will be used for the Metro Mirror relationships of the database VDisks, MM_DB_Pri and MM_Log_Pri. The consistency group is named CG_W2K3_MM.

Example 7-142 Creating the Metro Mirror consistency group CG_W2K3_MM

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type
0 CG_W2K3_MM 000002006AE04FC4 ITSO-CLS1 0000020063E03A38 ITSO-CLS4 empty 0 empty_group

7.12.4 Creating the Metro Mirror relationships

In Example 7-143, we create the Metro Mirror relationships MMREL1 and MMREL2 for MM_DB_Pri and MM_Log_Pri and make them members of the Metro Mirror consistency group CG_W2K3_MM. We use the svcinfo lsvdisk command to list all of the VDisks in the ITSO-CLS1 cluster, and we then use the svcinfo lsrcrelationshipcandidate command to show the eligible VDisks in the ITSO-CLS4 cluster. By using this command, we check the possible candidates for MM_DB_Pri. After checking all of these conditions, we use the svctask mkrcrelationship command to create each Metro Mirror relationship.

To verify the newly created Metro Mirror relationships, list them with the svcinfo lsrcrelationship command.

Example 7-143 Creating Metro Mirror relationships MMREL1 and MMREL2

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=MM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
13 MM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000010 0 1 empty
14 MM_Log_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000011 0 1 empty
15 MM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000012 0 1 empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate
id vdisk_name
0 DB_Source
1 Log_Source
2 App_Source
3 App_Target_1
4 Log_Target_1
5 Log_Target_2
6 DB_Target_1
7 DB_Target_2
8 App_Source_SE
9 FC_A
13 MM_DB_Pri
14 MM_Log_Pri
15 MM_App_Pri
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master MM_DB_Pri
id vdisk_name
0 MM_DB_Sec
1 MM_Log_Sec
2 MM_App_Sec
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [13], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [14], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type
13 MMREL1 000002006AE04FC4 ITSO-CLS1 13 MM_DB_Pri 0000020063E03A38 ITSO-CLS4 0 MM_DB_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro
14 MMREL2 000002006AE04FC4 ITSO-CLS1 14 MM_Log_Pri 0000020063E03A38 ITSO-CLS4 1 MM_Log_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro

7.12.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri

In Example 7-144, we create the stand-alone Metro Mirror relationship MMREL3 for MM_App_Pri. After it is created, we check the status of this Metro Mirror relationship.

Notice that the state of MMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) VDisk is already synchronized with the primary (master) VDisk, so the initial background synchronization is skipped, even though the VDisks are not actually synchronized in this scenario. We use this approach for MM_App_Sec purely to illustrate the option of pre-synchronized master and auxiliary VDisks.

Tip: Use the -sync option only when the target VDisk already mirrors all of the data from the source VDisk. When this option is used, there is no initial background copy between the primary VDisk and the secondary VDisk.

MMREL1 and MMREL2 are in the inconsistent_stopped state, because they were not created with the -sync option, so their auxiliary VDisks must be synchronized with their primary VDisks.

Example 7-144 Creating a stand-alone relationship and verifying it

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster ITSO-CLS4 -name MMREL3
RC Relationship, id [15], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship 15
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro

7.12.6 Starting Metro Mirror

Now that the Metro Mirror consistency group and relationships are in place, we are ready to use the Metro Mirror relationships in our environment.

When implementing Metro Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy for a data set if a failure occurs that affects the production site.

In the following sections, we show how to stop and start stand-alone Metro Mirror relationships and consistency groups.

Starting a stand-alone Metro Mirror relationship

In Example 7-145, we start the stand-alone Metro Mirror relationship named MMREL3. Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state.

Example 7-145 Starting the stand-alone Metro Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>

7.12.7 Starting a Metro Mirror consistency group

In Example 7-146, we start the Metro Mirror consistency group CG_W2K3_MM. Because the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships in the consistency group. Upon completion of the background copy, it enters the Consistent synchronized state.

Example 7-146 Starting the Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>

7.12.8 Monitoring the background copy progress

To monitor the background copy progress, we can use the svcinfo lsrcrelationship command. Used without any arguments, this command shows all of the defined Metro Mirror relationships. In the command output, the progress field indicates the current background copy progress. Our Metro Mirror relationships are shown in Example 7-147 on page 421.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Metro Mirror consistency groups or relationships change state.

Example 7-147 Monitoring background copy progress example

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL1
id 13
name MMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 13
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 35
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL2
id 14
name MMREL2
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 14
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 1
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 37
freeze_time
status online
sync
copy_type metro

When all of the Metro Mirror relationships have completed the background copy, the consistency group enters the Consistent synchronized state, as shown in Example 7-148.

Example 7-148 Listing the Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

7.12.9 Stopping and restarting Metro Mirror

Now that the Metro Mirror consistency group and relationships are running, in this section and the following sections, we describe how to stop, restart, and change the direction of the stand-alone Metro Mirror relationships, as well as the consistency group.

7.12.10 Stopping a stand-alone Metro Mirror relationship

Example 7-149 shows how to stop the stand-alone Metro Mirror relationship while enabling access (write I/O) to both the primary and the secondary VDisks. It also shows the relationship entering the Idling state.

Example 7-149 Stopping a stand-alone Metro Mirror relationship and enabling access to the secondary

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro

7.12.11 Stopping a Metro Mirror consistency group

Example 7-150 shows how to stop the Metro Mirror consistency group without specifying the -access flag. The consistency group enters the Consistent stopped state.

Example 7-150 Stopping a Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

If, afterward, we want to enable access (write I/O) to the secondary VDisks, we reissue the svctask stoprcconsistgrp command with the -access flag; the consistency group transitions to the Idling state, as shown in Example 7-151.

Example 7-151 Stopping a Metro Mirror consistency group and enabling access to the secondary

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

7.12.12 Restarting a Metro Mirror relationship in the Idling state

When restarting a Metro Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk, consistency is compromised; therefore, we must issue the command with the -force flag to restart the relationship, as shown in Example 7-152.

Example 7-152 Restarting a Metro Mirror relationship after updates in the Idling state

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

7.12.13 Restarting a Metro Mirror consistency group in the Idling state

When restarting a Metro Mirror consistency group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk in any of the Metro Mirror relationships in the consistency group, consistency is compromised; therefore, we must use the -force flag to start the consistency group. If the -force flag is not used, the command fails.

In Example 7-153, we change the copy direction by specifying the auxiliary VDisks to become the primaries.

Example 7-153 Restarting a Metro Mirror consistency group while changing the copy direction

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -force -primary aux CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

7.12.14 Changing copy direction for Metro Mirror

In this section, we show how to change the copy direction of the stand-alone Metro Mirror relationship and the consistency group.

7.12.15 Switching copy direction for a Metro Mirror relationship

When a Metro Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship by using the svctask switchrcrelationship command, specifying which VDisk is to become the primary. If the specified VDisk is already the primary when you issue this command, the command has no effect.

In Example 7-154, we change the copy direction for the stand-alone Metro Mirror relationship by specifying the auxiliary VDisk to become the primary.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the VDisk that transitions from primary to secondary, because all I/O to that VDisk is inhibited when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.

Example 7-154 Switching the copy direction for a Metro Mirror relationship

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

7.12.16 Switching copy direction for a Metro Mirror consistency group

When a Metro Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the consistency group by using the svctask switchrcconsistgrp command, specifying which side is to become the primary. If the specified side is already the primary when you issue this command, the command has no effect.

In Example 7-155, we change the copy direction for the Metro Mirror consistency group by specifying the auxiliary VDisks to become the primaries.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the VDisks that transition from primary to secondary, because all I/O is inhibited when those VDisks become the secondaries. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.

Example 7-155 Switching the copy direction for a Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

7.12.17 Creating an SVC partnership among many clusters

Starting with SVC 5.1, you can have a cluster partnership among many SVC clusters. This capability allows you to create four configurations using a maximum of four connected clusters:
► Star configuration
► Triangle configuration
► Fully connected configuration
► Daisy-chain configuration

In this section, we describe how to configure the SVC cluster partnership for each configuration.

Important: In order to have a supported and working configuration, all of the SVC clusters must be at level 5.1 or higher.

In our scenarios, we configure the SVC partnership by referring to the clusters as A, B, C, and D:
► ITSO-CLS1 = A
► ITSO-CLS2 = B
► ITSO-CLS3 = C
► ITSO-CLS4 = D

Example 7-156 shows the available clusters for a partnership using the lsclustercandidate command on each cluster.

Example 7-156 Available clusters
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
0000020063E03A38 no ITSO-CLS4
0000020061006FCA no ITSO-CLS2

IBM_2145:ITSO-CLS2:admin>svcinfo lsclustercandidate
id configured cluster_name
000002006AE04FC4 no ITSO-CLS1
0000020069E03A42 no ITSO-CLS3
0000020063E03A38 no ITSO-CLS4

IBM_2145:ITSO-CLS3:admin>svcinfo lsclustercandidate
id configured name
000002006AE04FC4 no ITSO-CLS1
0000020063E03A38 no ITSO-CLS4
0000020061006FCA no ITSO-CLS2

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
000002006AE04FC4 no ITSO-CLS1
0000020061006FCA no ITSO-CLS2

7.12.18 Star configuration partnership

Figure 7-4 shows the star configuration.

Figure 7-4 Star configuration


Example 7-157 shows the sequence of mkpartnership commands to execute to create a star configuration.

Example 7-157 Creating a star configuration using the mkpartnership command
From ITSO-CLS1 to multiple clusters
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS2 to ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS3 to ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS4 to ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS3
IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
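As an illustration of this follow-on step, the following command sketches how a Metro Mirror relationship might be created between two of the partnered clusters from the star configuration. The VDisk names Star_Pri and Star_Sec and the relationship name STARREL1 are hypothetical and exist only for this sketch; they are not part of the scenario in this book:

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master Star_Pri -aux Star_Sec -cluster ITSO-CLS4 -name STARREL1

Because Star_Pri is used here as the master, it cannot be placed in any other remote copy relationship at the same time.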

Triangle configuration

Figure 7-5 shows the triangle configuration.

Figure 7-5 Triangle configuration

Example 7-158 shows the sequence of mkpartnership commands to execute to create a triangle configuration.

Example 7-158 Creating a triangle configuration
From ITSO-CLS1 to ITSO-CLS2 and ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS3 to ITSO-CLS1 and ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2

From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS3
IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.

Fully connected configuration

Figure 7-6 shows the fully connected configuration.

Figure 7-6 Fully connected configuration

Example 7-159 shows the sequence of mkpartnership commands to execute to create a fully connected configuration.

Example 7-159 Creating a fully connected configuration
From ITSO-CLS1 to ITSO-CLS2, ITSO-CLS3 and ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS2 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS3 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS4 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS3
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS3
IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.


Daisy-chain configuration

Figure 7-7 shows the daisy-chain configuration.

Figure 7-7 Daisy-chain configuration

Example 7-160 shows the sequence of mkpartnership commands to execute to create a daisy-chain configuration.

Example 7-160 Creating a daisy-chain configuration
From ITSO-CLS1 to ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2

From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS3 to ITSO-CLS2 and ITSO-CLS4
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS4 to ITSO-CLS3
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS1
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

From ITSO-CLS2
IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS3
IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS4
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.

7.13 Global Mirror operation

In the following scenario, we set up an intercluster Global Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS4 at the secondary site.

Note: This example is for an intercluster Global Mirror operation only. If you want to set up an intracluster operation, we highlight the parts of the following procedure that you do not need to perform.

Table 7-4 shows the details of the VDisks.

Table 7-4 Details of VDisks for Global Mirror relationship scenario
Content of VDisk       VDisks at primary site   VDisks at secondary site
Database files         GM_DB_Pri                GM_DB_Sec
Database log files     GM_DBLog_Pri             GM_DBLog_Sec
Application files      GM_App_Pri               GM_App_Sec

Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a consistency group to handle the Global Mirror relationships for them. Because, in this scenario, the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. Figure 7-8 illustrates the Global Mirror relationship setup.


Figure 7-8 Global Mirror scenario. The figure shows the primary site (SVC cluster ITSO-CLS1), where GM_DB_Pri and GM_DBLog_Pri are mirrored to GM_DB_Sec and GM_DBLog_Sec at the secondary site (SVC cluster ITSO-CLS4) through GM Relationships 1 and 2 in consistency group CG_W2K3_GM, and GM_App_Pri is mirrored to GM_App_Sec through the stand-alone GM Relationship 3.

7.13.1 Setting up Global Mirror

In the following section, we assume that the source and target VDisks have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up the Global Mirror, perform the following steps:
1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS4, on both SVC clusters:
   Bandwidth 10 MBps
2. Create a Global Mirror consistency group:
   Name CG_W2K3_GM
3. Create the Global Mirror relationship for GM_DB_Pri:
   – Master GM_DB_Pri
   – Auxiliary GM_DB_Sec
   – Auxiliary SVC cluster ITSO-CLS4
   – Name GMREL1
   – Consistency group CG_W2K3_GM
4. Create the Global Mirror relationship for GM_DBLog_Pri:
   – Master GM_DBLog_Pri
   – Auxiliary GM_DBLog_Sec
   – Auxiliary SVC cluster ITSO-CLS4
   – Name GMREL2
   – Consistency group CG_W2K3_GM
5. Create the Global Mirror relationship for GM_App_Pri:
   – Master GM_App_Pri
   – Auxiliary GM_App_Sec
   – Auxiliary SVC cluster ITSO-CLS4
   – Name GMREL3

In the following sections, we perform each step by using the CLI.

7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4

We create an SVC partnership between both clusters.

Note: If you are creating an intracluster Global Mirror, do not perform the next step; instead, go to 7.13.3, "Changing link tolerance and cluster delay simulation".

Pre-verification

To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. Example 7-161 confirms that our clusters are communicating: ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa.

Example 7-161 Listing the available SVC clusters for partnership
IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id configured cluster_name
0000020068603A42 no ITSO-CLS4

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured cluster_name
0000020060C06FCA no ITSO-CLS1

In Example 7-162, we show the output of the svcinfo lscluster command before setting up the SVC clusters' partnership for Global Mirror. We show this output for comparison after we have set up the SVC partnership.

Example 7-162 Pre-verification of cluster configuration
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020063E03A38:ITSO-CLS4:local:::10.64.210.246:10.64.210.247:::0000020063E03A38


Partnership between clusters

In Example 7-163, we create the partnership from ITSO-CLS1 to ITSO-CLS4, specifying a 10 MBps bandwidth to use for the background copy.

To verify the status of the newly created partnership, we issue the svcinfo lscluster command. Notice that the new partnership is only partially configured. It will remain partially configured until we run the mkpartnership command on the other cluster.

Example 7-163 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote partially_configured_local 10 0000020063E03A38

In Example 7-164, we create the partnership from ITSO-CLS4 back to ITSO-CLS1, specifying a 10 MBps bandwidth to be used for the background copy.

After creating the partnership, verify that the partnership is fully configured by reissuing the svcinfo lscluster command.

Example 7-164 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 10 000002006AE04FC4

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 10 0000020063E03A38

7.13.3 Changing link tolerance and cluster delay simulation

The gm_link_tolerance parameter defines the sensitivity of the SVC to inter-link overload conditions. The value is the number of seconds of continuous link difficulties that will be tolerated before the SVC stops the remote copy relationships to prevent affecting host I/O at the primary site. To change the value, use the following command:

svctask chcluster -gmlinktolerance link_tolerance

The link_tolerance value is between 60 and 86,400 seconds in increments of 10 seconds. The default value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.

Recommendation: We strongly recommend that you use the default value. If the link is overloaded for a period that affects host I/O at the primary site, the relationships will be stopped to protect those hosts.
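As a brief illustration of the syntax above, the following hedged example lowers the link tolerance for a test and then restores the default of 300 seconds; the value 60 is simply an arbitrary choice at the bottom of the permitted 60 to 86,400 second range:

IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 60
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 300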

Intercluster and intracluster delay simulation

This Global Mirror feature permits a simulation of a delayed write to a remote VDisk. It allows testing that detects colliding writes, so you can use it to test an application before the full deployment of the Global Mirror feature. The delay simulation can be enabled separately for intracluster or intercluster Global Mirror. To enable this feature, run the appropriate one of the following commands:

► For intercluster:
svctask chcluster -gminterdelaysimulation inter_cluster_delay_simulation

► For intracluster:
svctask chcluster -gmintradelaysimulation intra_cluster_delay_simulation

The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express the amount of time (in milliseconds) by which secondary I/Os are delayed for intercluster and intracluster relationships, respectively. That is, they specify the number of milliseconds that I/O activity, copying a primary VDisk to a secondary VDisk, is delayed. You can set a value from 0 to 100 milliseconds in 1 millisecond increments. A value of zero (0) disables the feature.

To check the current settings for the delay simulation, use the following command:

svcinfo lscluster cluster_name

In Example 7-165, we modify the delay simulation values, change the Global Mirror link tolerance parameter, and then display the changed values.

Example 7-165 Delay simulation and link tolerance modification
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gminterdelaysimulation 20
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmintradelaysimulation 40
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 200
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster 000002006AE04FC4
id 000002006AE04FC4
name ITSO-CLS1
location local
partnership
bandwidth
total_mdisk_capacity 160.0GB
space_in_mdisk_grps 160.0GB
space_allocated_to_vdisks 19.00GB
total_free_space 141.0GB
statistics_status off
statistics_frequency 15
required_memory 8192
cluster_locale en_US
time_zone 520 US/Pacific
code_level 5.1.0.0 (build 17.1.0908110000)
FC_port_speed 2Gb
console_IP
id_alias 000002006AE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_state invalid
inventory_mail_interval 0
total_vdiskcopy_capacity 19.00GB
total_used_capacity 19.00GB
total_overallocation 11
total_vdisk_capacity 19.00GB
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
relationship_bandwidth_limit 25

7.13.4 Creating a Global Mirror consistency group

In Example 7-166, we create the Global Mirror consistency group using the svctask mkrcconsistgrp command. We will use this consistency group for the Global Mirror relationships for the database VDisks. The consistency group is named CG_W2K3_GM.

Example 7-166 Creating the Global Mirror consistency group CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type
0 CG_W2K3_GM 000002006AE04FC4 ITSO-CLS1 0000020063E03A38 ITSO-CLS4 empty 0 empty_group

7.13.5 Creating Global Mirror relationships

In Example 7-167, we create the GMREL1 and GMREL2 Global Mirror relationships for the GM_DB_Pri and GM_DBLog_Pri VDisks. We also make them members of the CG_W2K3_GM Global Mirror consistency group.

We use the svcinfo lsvdisk command to list all of the VDisks in the ITSO-CLS1 cluster and then use the svcinfo lsrcrelationshipcandidate command to show the possible VDisk candidates for GM_DB_Pri in ITSO-CLS4.

After checking all of these conditions, we use the svctask mkrcrelationship command to create the Global Mirror relationships.

To verify the newly created Global Mirror relationships, list them with the svcinfo lsrcrelationship command.

Example 7-167 Creating GMREL1 and GMREL2 Global Mirror relationships
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=GM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
16 GM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000013 0 1 empty
17 GM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000014 0 1 empty
18 GM_DBLog_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000015 0 1 empty

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master GM_DB_Pri
id vdisk_name
0 MM_DB_Sec
1 MM_Log_Sec
2 MM_App_Sec
3 GM_App_Sec
4 GM_DB_Sec
5 GM_DBLog_Sec
6 SEV

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [17], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [18], successfully created

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type
17 GMREL1 000002006AE04FC4 ITSO-CLS1 17 GM_DB_Pri 0000020063E03A38 ITSO-CLS4 4 GM_DB_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global
18 GMREL2 000002006AE04FC4 ITSO-CLS1 18 GM_DBLog_Pri 0000020063E03A38 ITSO-CLS4 5 GM_DBLog_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global


7.13.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri

In Example 7-168, we create the stand-alone Global Mirror relationship GMREL3 for GM_App_Pri. After it is created, we check the status of each of our Global Mirror relationships.

Notice that the status of GMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) VDisk is already synchronized with the primary (master) VDisk. The initial background synchronization is skipped when this option is used.

GMREL1 and GMREL2 are in the inconsistent_stopped state, because they were not created with the -sync option, so their auxiliary VDisks need to be synchronized with their primary VDisks.

Example 7-168 Creating a stand-alone Global Mirror relationship and verifying it
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS4 -sync -name GMREL3 -global
RC Relationship, id [16], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority:progress:copy_type
16:GMREL3:000002006AE04FC4:ITSO-CLS1:16:GM_App_Pri:0000020063E03A38:ITSO-CLS4:3:GM_App_Sec:master:::consistent_stopped:50:100:global
17:GMREL1:000002006AE04FC4:ITSO-CLS1:17:GM_DB_Pri:0000020063E03A38:ITSO-CLS4:4:GM_DB_Sec:master:0:CG_W2K3_GM:inconsistent_stopped:50:0:global
18:GMREL2:000002006AE04FC4:ITSO-CLS1:18:GM_DBLog_Pri:0000020063E03A38:ITSO-CLS4:5:GM_DBLog_Sec:master:0:CG_W2K3_GM:inconsistent_stopped:50:0:global

7.13.7 Starting Global Mirror

Now that we have created the Global Mirror consistency group and relationships, we are ready to use the Global Mirror relationships in our environment.

When implementing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy in case a hardware failure occurs that affects the SAN at the production site.

In this section, we show how to start the stand-alone Global Mirror relationships and the consistency group.

7.13.8 Starting a stand-alone Global Mirror relationship

In Example 7-169, we start the stand-alone Global Mirror relationship named GMREL3. Because the Global Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state.

Example 7-169 Starting the stand-alone Global Mirror relationship
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

7.13.9 Starting a Global Mirror consistency group

In Example 7-170, we start the CG_W2K3_GM Global Mirror consistency group. Because the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships that are in the consistency group.

Upon completion of the background copy, the CG_W2K3_GM Global Mirror consistency group enters the Consistent synchronized state, as shown later in Example 7-172.

Example 7-170 Starting the Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2


7.13.10 Monitoring background copy progress

To monitor the background copy progress, use the svcinfo lsrcrelationship command. Used without any parameters, this command shows all of the defined Global Mirror relationships. In the command output, the progress field indicates the current background copy progress. Example 7-171 shows our Global Mirror relationships.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror consistency groups or relationships change state.
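Because progress is reported as a plain numeric field, it can also be polled from a management workstation. The following loop is only a sketch, not part of the scenario in this book: it assumes SSH key authentication to the cluster as user admin and uses relationship GMREL1 from this chapter. The progress field is blank once the relationship is synchronized, which the loop uses as its exit condition.

# Poll the background copy progress of GMREL1 every 60 seconds (sketch)
while true; do
  progress=$(ssh admin@ITSO-CLS1 "svcinfo lsrcrelationship -delim : GMREL1" | awk -F: '/^progress/ {print $2}')
  if [ -z "$progress" ]; then
    echo "GMREL1 background copy complete"   # blank progress: relationship is synchronized
    break
  fi
  echo "GMREL1 background copy: ${progress}%"
  sleep 60
done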

Example 7-171 Monitoring background copy progress example
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL1
id 17
name GMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 17
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 4
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL2
id 18
name GMREL2
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 18
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 5
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 40
freeze_time
status online
sync
copy_type global


When all of <strong>the</strong> Global Mirror relationships complete <strong>the</strong> background copy, <strong>the</strong> consistency<br />

group enters <strong>the</strong> Consistent synchronized state, as shown in Example 7-148 on page 421.<br />

Example 7-172 Listing the Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

7.13.11 Stopping and restarting Global Mirror

Now that the Global Mirror consistency group and relationships are running, we describe how to stop, restart, and change the direction of the stand-alone Global Mirror relationships, as well as the consistency group.

First, we show how to stop and restart the stand-alone Global Mirror relationships and the consistency group.

7.13.12 Stopping a stand-alone Global Mirror relationship

In Example 7-173, we stop the stand-alone Global Mirror relationship while enabling access (write I/O) to both the primary and the secondary VDisk; as a result, the relationship enters the Idling state.

Example 7-173 Stopping the stand-alone Global Mirror relationship
IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global

7.13.13 Stopping a Global Mirror consistency group

In Example 7-174, we stop the Global Mirror consistency group without specifying the -access parameter; therefore, the consistency group enters the Consistent stopped state.

Example 7-174 Stopping a Global Mirror consistency group without specifying -access
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

If, afterward, we want to enable access (write I/O) for the secondary VDisks, we can reissue the svctask stoprcconsistgrp command, specifying the -access parameter, and the consistency group transitions to the Idling state, as shown in Example 7-175.

Example 7-175 Stopping a Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

7.13.14 Restarting a Global Mirror relationship in the Idling state

When restarting a Global Mirror relationship in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk, consistency will be compromised. Therefore, we must supply the -force parameter to restart the relationship; if the -force parameter is not used, the command fails. Example 7-176 shows the relationship being restarted with the -force parameter and the master VDisk as the primary.

Example 7-176 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

7.13.15 Restarting a Global Mirror consistency group in the Idling state

When restarting a Global Mirror consistency group in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, consistency will be compromised. Therefore, we must supply the -force parameter to start the consistency group; if the -force parameter is not used, the command fails.

In Example 7-177, we restart the consistency group and change the copy direction by specifying the auxiliary VDisks to become the primaries.


Example 7-177 Restarting a Global Mirror consistency group while changing the copy direction
IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

7.13.16 Changing direction for Global Mirror

In this section, we show how to change the copy direction of the stand-alone Global Mirror relationships and the consistency group.

7.13.17 Switching copy direction for a Global Mirror relationship

When a Global Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship by using the svctask switchrcrelationship command and specifying the primary VDisk.

If the VDisk that is specified as the primary when issuing this command is already a primary, the command has no effect.

In Example 7-178, we change the copy direction for the stand-alone Global Mirror relationship, specifying the auxiliary VDisk to become the primary.

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transitions from primary to secondary, because all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.

Example 7-178 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL3

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

7.13.18 Switching copy direction for a Global Mirror consistency group

When a Global Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the consistency group by using the svctask switchrcconsistgrp command and specifying which side (master or auxiliary) is to become the primary.

If the side that is specified as the primary when issuing this command is already the primary, the command has no effect.

In Example 7-179, we change the copy direction for the Global Mirror consistency group, specifying the auxiliary to become the primary.

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks that transition from primary to secondary, because all I/O will be inhibited when those VDisks become the secondaries. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.


Example 7-179 Switching the copy direction for a Global Mirror consistency group
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

7.14 Service and maintenance

This section details the various service and maintenance tasks that you can execute within the SVC environment.


7.14.1 Upgrading software

This section explains how to upgrade the SVC software.

Package numbering and version
The format for software upgrade packages is four positive integers that are separated by periods, for example, 5.1.0.0. Each software package is given a unique version number.

Requirement: You must be running SVC 4.3.1.7 cluster code before upgrading to SVC 5.1.0.0 cluster code.

Check the recommended software levels at this Web site:

http://www.ibm.com/storage/support/2145

SVC software upgrade test utility
The SAN Volume Controller Software Upgrade Test Utility, which resides on the Master Console, checks the software levels in the system against the recommended levels, which are documented on the support Web site. You are informed if the software levels are up-to-date, or if you need to download and install newer levels. You can download the utility and installation instructions from this link:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

After the software file has been uploaded to the cluster (to the /home/admin/upgrade directory), you can select the software and apply it to the cluster by using either the Web GUI or the svctask applysoftware command. When a new code level is applied, it is automatically installed on all of the nodes within the cluster.

The underlying command-line tool runs the sw_preinstall script, which checks the validity of the upgrade file and whether it can be applied over the current level. If the upgrade file is unsuitable, the pre-install script deletes the files, which prevents the buildup of invalid files on the cluster.

Precaution before upgrade
Software installation is normally considered to be a client's task. The SVC supports concurrent software upgrade: you can perform the software upgrade concurrently with user I/O operations and certain management activities. However, only limited CLI commands are operational from the time that the install command starts until the upgrade operation has either terminated successfully or been backed out. Certain commands fail with a message indicating that a software upgrade is in progress.

Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs are working. Otherwise, the applications might have I/O failures during the software upgrade. You can verify the I/O paths by using the Subsystem Device Driver (SDD) datapath query commands. Example 7-180 shows the output.

Example 7-180 Query adapter
#datapath query adapter
Active Adapters :2
Adpt#  Name    State   Mode    Select  Errors  Paths  Active
0      fscsi0  NORMAL  ACTIVE  1445    0       4      4
1      fscsi1  NORMAL  ACTIVE  1888    0       4      4



#datapath query device
Total Devices : 2
DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path#  Adapter/Hard Disk  State  Mode    Select  Errors
0      fscsi0/hdisk3      OPEN   NORMAL  0       0
1      fscsi1/hdisk7      OPEN   NORMAL  972     0
DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path#  Adapter/Hard Disk  State  Mode    Select  Errors
0      fscsi0/hdisk4      OPEN   NORMAL  784     0
1      fscsi1/hdisk8      OPEN   NORMAL  0       0

Write-through mode: During a software upgrade, there are periods when not all of the nodes in the cluster are operational, and as a result, the cache operates in write-through mode. Write-through mode has an effect on the throughput, latency, and bandwidth aspects of performance.

Verify that your uninterruptible power supply unit configuration is also set up correctly (even if your cluster is running without problems). Specifically, make sure that the following conditions are true:

► Your uninterruptible power supply units are all getting their power from an external source, and they are not daisy chained. Make sure that each uninterruptible power supply unit is not supplying power to another node's uninterruptible power supply unit.

► The power cable and the serial cable, which come from each node, go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, during the upgrade, while one node is shut down, another node might also be mistakenly shut down.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.

You must also ensure that all I/O paths are working for each host that runs I/O operations to the SAN during the software upgrade. You can check the I/O paths by using the datapath query commands.

You do not need to check hosts that have no active I/O operations to the SAN during the software upgrade.

Procedure
To upgrade the SVC cluster software, perform the following steps:

1. Before starting the upgrade, back up the configuration (see 7.14.9, “Backing up the SVC cluster configuration” on page 466) and save the backup config file in a safe place.

2. Also, save the data collection for support diagnosis in case of problems, as shown in Example 7-181 on page 452.



Example 7-181 svc_snap command
IBM_2145:ITSO-CLS1:admin>svc_snap
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.104643.080617.002427.tgz

3. List the dump that was generated by the previous command, as shown in Example 7-182.

Example 7-182 svcinfo ls2145dumps command
IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
id 2145_filename
0 svc.config.cron.bak_node3
1 svc.config.cron.bak_SVCNode_2
2 svc.config.cron.bak_node1
3 dump.104643.070803.015424
4 dump.104643.071010.232740
5 svc.config.backup.bak_ITSOCL1_N1
6 svc.config.backup.xml_ITSOCL1_N1
7 svc.config.backup.tmp.xml
8 svc.config.cron.bak_ITSOCL1_N1
9 dump.104643.080609.202741
10 104643.080610.154323.ups_log.tar.gz
11 104643.trc.old
12 dump.104643.080609.212626
13 104643.080612.221933.ups_log.tar.gz
14 svc.config.cron.bak_Node1
15 svc.config.cron.log_Node1
16 svc.config.cron.sh_Node1
17 svc.config.cron.xml_Node1
18 dump.104643.080616.203659
19 104643.trc
20 ups_log.a
21 snap.104643.080617.002427.tgz
22 ups_log.b

4. Save the generated dump in a safe place using the pscp command, as shown in Example 7-183.

Example 7-183 pscp -load command
C:\>pscp -load ITSOCL1 admin@9.43.86.117:/dumps/snap.104643.080617.002427.tgz c:\
snap.104643.080617.002427 | 597 kB | 597.7 kB/s | ETA: 00:00:00 | 100%

5. Upload the new software package using PuTTY Secure Copy. Enter the command, as shown in Example 7-184 on page 453.



Example 7-184 pscp -load command
C:\>pscp -load ITSOCL1 IBM2145_INSTALL_4.3.0.0 admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_4.3.0.0-0 | 103079 kB | 9370.8 kB/s | ETA: 00:00:00 | 100%

6. Upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the command, as shown in Example 7-185.

Example 7-185 Upload utility
C:\>pscp -load ITSOCL1 IBM2145_INSTALL_svcupgradetest_1.11 admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%

7. Verify that the packages were successfully delivered through the PuTTY command-line application by entering the svcinfo lssoftwaredumps command, as shown in Example 7-186.

Example 7-186 svcinfo lssoftwaredumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwaredumps
id software_filename
0 IBM2145_INSTALL_4.3.0.0
1 IBM2145_INSTALL_svcupgradetest_1.11

8. Now that the packages are uploaded, first install the SAN Volume Controller Software Upgrade Test Utility, as shown in Example 7-187.

Example 7-187 svctask applysoftware command
IBM_2145:ITSO-CLS1:admin>svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_1.11
CMMVC6227I The package installed successfully.

9. Using the svcupgradetest command, test the upgrade for known issues that might prevent a software upgrade from completing successfully, as shown in Example 7-188.

Example 7-188 svcupgradetest command
IBM_2145:ITSO-CLS1:admin>svcupgradetest
svcupgradetest version 1.11. Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

Important: If the svcupgradetest command produces any errors, troubleshoot the errors using the maintenance procedures before continuing further.

10. Now, use the svctask applysoftware command to apply the software upgrade, as shown in Example 7-189.

Example 7-189 Apply upgrade command example
IBM_2145:ITSOSVC42A:admin>svctask applysoftware -file IBM2145_INSTALL_4.3.0.0



While the upgrade runs, you can check the status, as shown in Example 7-190.

Example 7-190 Check update status
IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwareupgradestatus
status
upgrading

11. The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted one at a time. If a node does not restart automatically during the upgrade, you must repair it manually.

Solid-state drives: If you use solid-state drives, the data of the solid-state drive within the restarted node will not be available during the reboot.

12. Eventually, both nodes display Cluster: on line one of the SVC front panel and the name of your cluster on line two of the SVC front panel. Be prepared for a wait (in our case, we waited approximately 40 minutes).

Performance: During this process, both your CLI and GUI vary from sluggish (slow) to unresponsive. The important thing is that I/O to the hosts can continue throughout this process.

13. To verify that the upgrade was successful, you can perform either of the following options:

– Run the svcinfo lscluster and svcinfo lsnodevpd commands, as shown in Example 7-191. We have truncated the lscluster and lsnodevpd information for this example.

Example 7-191 svcinfo lscluster and svcinfo lsnodevpd commands
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
id 0000020060806FCA
name ITSO-CLS1
location local
partnership
bandwidth
cluster_IP_address 9.43.86.117
cluster_service_IP_address 9.43.86.118
total_mdisk_capacity 756.0GB
space_in_mdisk_grps 756.0GB
space_allocated_to_vdisks 156.00GB
total_free_space 600.0GB
statistics_status off
statistics_frequency 15
required_memory 8192
cluster_locale en_US
SNMP_setting none
SNMP_community
SNMP_server_IP_address 0.0.0.0
subnet_mask 255.255.252.0
default_gateway 9.43.85.1
time_zone 522 UTC
email_setting
email_id



code_level 4.3.0.0 (build 8.15.0806110000)
FC_port_speed 2Gb
console_IP 9.43.86.115:9080
id_alias 0000020060806FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
email_server 127.0.0.1
email_server_port 25
email_reply itsotest@ibm.com
email_contact ITSO User
email_contact_primary 555-1234
email_contact_alternate
email_contact_location ITSO
email_state running
email_user_count 1
inventory_mail_interval 0
cluster_IP_address_6
cluster_service_IP_address_6
prefix_6
default_gateway_6
total_vdiskcopy_capacity 156.00GB
total_used_capacity 156.00GB
total_overallocation 20
total_vdisk_capacity 156.00GB
IBM_2145:ITSO-CLS1:admin>

IBM_2145:ITSO-CLS1:admin>svcinfo lsnodevpd 1
id 1
system board: 24 fields
part_number 31P0906
system_serial_number 13DVT31
number_of_processors 4
number_of_memory_slots 8
number_of_fans 6
number_of_FC_cards 1
number_of_scsi/ide_devices 2
BIOS_manufacturer IBM
BIOS_version -[GFE136BUS-1.09]-
BIOS_release_date 02/08/2008
system_manufacturer IBM
system_product IBM System x3550 -[21458G4]-
.
.
software: 6 fields
code_level 4.3.0.0 (build 8.15.0806110000)
node_name Node1
ethernet_status 1
WWNN 0x50050768010037e5
id 1



– Copy the error log to your management workstation, as explained in 7.14.2, “Running maintenance procedures” on page 456. Open the error log in WordPad and search for Software Install completed.

You have now completed the required tasks to upgrade the SVC software.

7.14.2 Running maintenance procedures

Use the svctask finderr command to generate a list of any unfixed errors in the system. This command analyzes the last generated log that resides in the /dumps/elogs/ directory on the cluster.

If you want to generate a new log before analyzing unfixed errors, run the svctask dumperrlog command (Example 7-192).

Example 7-192 svctask dumperrlog command
IBM_2145:ITSO-CLS2:admin>svctask dumperrlog

This command generates an errlog_timestamp file, such as errlog_100048_080618_042419, where:

► errlog is part of the default prefix for all error log files.
► 100048 is the panel name of the current configuration node.
► 080618 is the date (YYMMDD).
► 042419 is the time (HHMMSS).

You can add the -prefix parameter to your command to change the default prefix of errlog to something else (Example 7-193).

Example 7-193 svctask dumperrlog -prefix command
IBM_2145:ITSO-CLS2:admin>svctask dumperrlog -prefix svcerrlog

This command creates a file called svcerrlog_timestamp.

To list the generated file names, enter the svcinfo lserrlogdumps command (Example 7-194).

Example 7-194 svcinfo lserrlogdumps command
IBM_2145:ITSO-CLS2:admin>svcinfo lserrlogdumps
id filename
0 errlog_100048_080618_042049
1 errlog_100048_080618_042128
2 errlog_100048_080618_042355
3 errlog_100048_080618_042419
4 errlog_100048_080618_175652
5 errlog_100048_080618_175702
6 errlog_100048_080618_175724
7 errlog_100048_080619_205900
8 errlog_100048_080624_170214
9 svcerrlog_100048_080624_170257



Maximum number of error log dump files: A maximum of ten error log dump files per node is kept on the cluster. When the eleventh dump is made, the oldest existing dump file for that node is overwritten. Note that the directory might also hold log files retrieved from other nodes; these files are not counted. The SVC deletes the oldest file (when necessary) for this node in order to maintain the maximum number of files. The SVC does not delete files from other nodes unless you issue the cleardumps command.

After you generate your error log, you can issue the svctask finderr command to scan the error log for any unfixed errors, as shown in Example 7-195.

Example 7-195 svctask finderr command
IBM_2145:ITSO-CLS2:admin>svctask finderr
Highest priority unfixed error code is [1230]

As you can see, we have one unfixed error on our system. To analyze this error in more detail, download the error log onto your own PC. Use the PuTTY Secure Copy process to copy the file from the cluster to your local management workstation, as shown in Example 7-196.

Example 7-196 pscp command: Copy error logs off of the SVC
In W2K3: Start -> Run -> cmd
C:\Program Files\PuTTY>pscp -load SVC_CL2 admin@9.43.86.119:/dumps/elogs/svcerrlog_100048_080624_170257 c:\temp\svcerrlog.txt
svcerrlog.txt | 6390 kB | 3195.1 kB/s | ETA: 00:00:00 | 100%

In order to use the Run option, you must know where your pscp.exe is located. In this case, it is in the C:\Program Files\PuTTY\ folder.

This command copies the file called svcerrlog_100048_080624_170257 to the C:\temp directory on our local workstation and names the local file svcerrlog.txt.

Open the file in WordPad (Notepad does not format the window as well). You will see information similar to what is shown in Example 7-197. We truncated this list for the purposes of this example.

Example 7-197 errlog in WordPad
Error Log Entry 400
Node Identifier : Node2
Object Type : device
Object ID : 0
Copy ID :
Sequence Number : 37404
Root Sequence Number : 37404
First Error Timestamp : Sat Jun 21 00:08:21 2008
: Epoch + 1214006901
Last Error Timestamp : Sat Jun 21 00:11:36 2008
: Epoch + 1214007096
Error Count : 2
Error ID : 10013 : Login Excluded



Error Code : 1230 : Login excluded
Status Flag : UNFIXED
Type Flag : TRANSIENT ERROR
03 00 00 00 03 00 00 00 31 44 17 B8 A0 00 04 20
33 44 17 B8 A0 00 05 20 00 11 01 00 00 00 01 00
33 00 33 00 05 00 0B 00 00 00 01 00 00 00 01 00
04 00 04 00 00 00 01 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Scrolling through, or searching for the term unfixed, you can find more detail about the problem. You might see more entries in the error log that have the status of unfixed.

After you take the necessary steps to rectify the problem, you can mark the error as fixed in the log by issuing the svctask cherrstate command against its sequence number (Example 7-198).

Example 7-198 svctask cherrstate command
IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37404

If you accidentally mark the wrong error as fixed, you can mark it as unfixed again by entering the same command and appending the -unfix flag to the end, as shown in Example 7-199.

Example 7-199 unfix flag
IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37406 -unfix

7.14.3 Setting up SNMP notification

To set up error notification, use the svctask mksnmpserver command. Example 7-200 shows an example of the mksnmpserver command.

Example 7-200 svctask mksnmpserver command
IBM_2145:ITSO-CLS2:admin>svctask mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC
SNMP Server id [1] successfully created

This command sends error, warning, and informational event notifications to the SVC community on the SNMP manager with the IP address 9.43.86.160.
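To verify the setting, you can list the defined SNMP servers, in the same way that we verify syslog servers in the next section. This is a minimal sketch; we assume here that the svcinfo lssnmpserver command is the listing counterpart of svctask mksnmpserver, and the output will reflect whatever servers you have defined:

IBM_2145:ITSO-CLS2:admin>svcinfo lssnmpserver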

7.14.4 Set syslog event notification

Starting with SVC 5.1, you can send syslog messages to a defined syslog server. The SVC now provides support for syslog in addition to e-mail and SNMP traps.

The syslog protocol is a client/server standard for forwarding log messages from a sender to a receiver on an IP network. You can use syslog to integrate log messages from various types of systems into a central repository. You can configure SVC 5.1 to send information to up to six syslog servers.



You use the svctask mksyslogserver command to configure the SVC using the CLI, as shown in Example 7-201. Using this command with the -h parameter gives you information about all of the available options. In our example, we only configure the SVC to use the default values for our syslog server.

Example 7-201 Configuring the syslog
IBM_2145:ITSO-CLS2:admin>svctask mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [1] successfully created

When we have configured our syslog server, we can display the current syslog server configurations in our cluster, as shown in Example 7-202.

Example 7-202 svcinfo lssyslogserver command
IBM_2145:ITSO-CLS2:admin>svcinfo lssyslogserver
id name        IP_address    facility error warning info
0  Syslogsrv   10.64.210.230 4        on    on      on
1  Syslogserv1 10.64.210.231 0        on    on      on

7.14.5 Configuring error notification using an e-mail server

The SVC can use an e-mail server to send event notification and inventory e-mails to e-mail users. It can transmit any combination of error, warning, and informational notification types. The SVC supports up to six e-mail servers to provide redundant access to the external e-mail network. The SVC uses the e-mail servers in sequence until the e-mail is successfully sent from the SVC. The attempt is successful when the SVC gets a positive acknowledgement from an e-mail server that the e-mail has been received by the server.

Important: Before the SVC can start sending e-mails, we must run the svctask startemail command, which enables this service.

If no port is specified, port 25 is the default, as shown in Example 7-203.

Example 7-203 The mkemailserver command syntax
IBM_2145:ITSO-CLS1:admin>svctask mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created

IBM_2145:ITSO-CLS1:admin>svcinfo lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25
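As the Important note earlier in this section states, the e-mail service itself must be enabled before any notifications flow. A minimal sketch of enabling it follows; we assume, based on the description above, that svctask startemail takes no parameters in this basic form, and that the corresponding svctask stopemail command disables the service again:

IBM_2145:ITSO-CLS1:admin>svctask startemail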

We can configure e-mail users that will receive e-mail notifications from the SVC cluster. We can define up to 12 users to receive e-mails from our SVC.



Using the svcinfo lsemailuser command, we can verify who is already registered and what type of information is sent to that user, as shown in Example 7-204.

Example 7-204 svcinfo lsemailuser command
IBM_2145:ITSO-CLS2:admin>svcinfo lsemailuser
id name               address              user_type error warning info inventory
0  IBM_Support_Center callhome0@de.ibm.com support   on    off     off  on

We can also create a new user, as shown in Example 7-205, for a SAN administrator.

Example 7-205 svctask mkemailuser command
IBM_2145:ITSO-CLS2:admin>svctask mkemailuser -address SANadmin@ibm.com -error on -warning on -info on -inventory on
User, id [1], successfully created

7.14.6 Analyzing the error log

The following types of events and errors are logged in the error log:

► Events: State changes are detected by the cluster software and are logged for informational purposes. Events are recorded in the cluster error log.

► Errors: Hardware or software problems are detected by the cluster software and require repair. Errors are recorded in the cluster error log.

► Unfixed errors: Errors were detected and recorded in the cluster error log and have not yet been corrected or repaired.

► Fixed errors: Errors were detected and recorded in the cluster error log and have subsequently been corrected or repaired.

To display the error log, use the svcinfo lserrlog command or the svcinfo caterrlog command, as shown in Example 7-206 (the output is the same).

Example 7-206 svcinfo caterrlog command
IBM_2145:ITSOSVC42A:admin>svcinfo caterrlog -delim :
id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_number:first_timestamp:last_timestamp:number_of_errors:error_code
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094858:070606094858:1:00990145
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094539:070606094539:1:00990173
0:internal:no:no:5:SVCNode_1:0:0:070606094507:070606094507:1:00990219
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094208:070606094208:1:00990148
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094139:070606094139:1:00990145
.........

IBM_2145:ITSO-CLS1:admin>svcinfo caterrlog -delim ,
id,type,fixed,SNMP_trap_raised,error_type,node_name,sequence_number,root_sequence_number,first_timestamp,last_timestamp,number_of_errors,error_code,copy_id
0,cluster,no,yes,6,n4,171,170,080624115947,080624115947,1,00981001,
0,cluster,no,yes,6,n4,170,170,080624115932,080624115932,1,00981001,
0,cluster,no,no,5,n1,0,0,080624105428,080624105428,1,00990101,
0,internal,no,no,5,n1,0,0,080624095359,080624095359,1,00990219,



0,internal,no,no,5,n1,0,0,080624094301,080624094301,1,00990220,
0,internal,no,no,5,n1,0,0,080624093355,080624093355,1,00990220,
11,vdisk,no,no,5,n1,0,0,080623150020,080623150020,1,00990183,
4,vdisk,no,no,5,n1,0,0,080623145958,080623145958,1,00990183,
5,vdisk,no,no,5,n1,0,0,080623145934,080623145934,1,00990183,
11,vdisk,no,no,5,n1,0,0,080623145017,080623145017,1,00990182,
6,vdisk,no,no,5,n1,0,0,080623144153,080623144153,1,00990183,
.

These commands display the most recently generated error log. Use the method that is described in 7.14.2, “Running maintenance procedures” on page 456 to upload and analyze the error log in more detail.

To clear the error log, issue the svctask clearerrlog command, as shown in Example 7-207.

Example 7-207 svctask clearerrlog command
IBM_2145:ITSO-CLS1:admin>svctask clearerrlog
Do you really want to clear the log? y

Using the -force flag stops any confirmation requests from appearing.

When executed, this command clears all of the entries from the error log. This process proceeds even if there are unfixed errors in the log. It also clears any status events that are in the log.

This command is destructive to the error log. Only use this command when you have either rebuilt the cluster or fixed a major problem that caused many entries in the error log that you do not want to fix manually.

7.14.7 License settings

To change the licensing feature settings, use the svctask chlicense command. Before you change the licensing, you can display the licenses that you already have by issuing the svcinfo lslicense command, as shown in Example 7-208.

Example 7-208 svcinfo lslicense command
IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 20
license_virtualization 80

The current license settings for the cluster are displayed in the viewing license settings log window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features. They also show the storage capacity that is licensed for virtualization. Typically, the license settings log contains entries, because feature options must be set as part of the Web-based cluster creation process.



Consider, for example, that you have purchased an additional 5 TB of licensing for the Metro Mirror and Global Mirror feature. Example 7-209 shows the command that you enter.

Example 7-209 svctask chlicense command
IBM_2145:ITSO-CLS1:admin>svctask chlicense -remote 25

To turn a feature off, add 0 TB as the capacity for the feature that you want to disable.
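For example, to disable the FlashCopy license, a minimal sketch follows. We assume the -flash parameter here, matching the license_flash field that svcinfo lslicense displays; the -remote parameter shown in Example 7-209 follows the same pattern:

IBM_2145:ITSO-CLS1:admin>svctask chlicense -flash 0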

To verify that the changes you have made are reflected in your SVC configuration, you can issue the svcinfo lslicense command as before (see Example 7-210).

Example 7-210 svcinfo lslicense command: Verifying changes
IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 25
license_virtualization 80

7.14.8 Listing dumps

Several commands are available for you to list the dumps that were generated over a period of time. You can use the lsxxxxdumps commands, where xxxx indicates the object type, to return a list of the dumps in the appropriate directory. These object dump commands are available:

► lserrlogdumps
► lsfeaturedumps
► lsiotracedumps
► lsiostatsdumps
► lssoftwaredumps
► ls2145dumps

If no node is specified, the command lists the dumps that are available on the configuration node.

Error or event dump
The dumps that are contained in the /dumps/elogs directory are dumps of the contents of the error and event log at the time that the dump was taken. You create an error or event log dump by using the svctask dumperrlog command. This command dumps the contents of the error or event log to the /dumps/elogs directory. If you do not supply a file name prefix, the system uses the default errlog_ file name prefix. The full, default file name is errlog_NNNNNN_YYMMDD_HHMMSS. In this file name, NNNNNN is the node front panel name. If the command is used with the -prefix option, the value that is entered for the -prefix is used instead of errlog.

The svcinfo lserrlogdumps command lists all of the dumps in the /dumps/elogs directory (Example 7-211).

Example 7-211 svcinfo lserrlogdumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lserrlogdumps



id filename
0 errlog_104643_080617_172859
1 errlog_104643_080618_163527
2 errlog_104643_080619_164929
3 errlog_104643_080619_165117
4 errlog_104643_080624_093355
5 svcerrlog_104643_080624_094301
6 errlog_104643_080624_120807
7 errlog_104643_080624_121102
8 errlog_104643_080624_122204
9 errlog_104643_080624_160522

Featurization log dump
The dumps that are contained in the /dumps/feature directory are dumps of the featurization log. A featurization log dump is created by using the svctask dumpinternallog command. This command dumps the contents of the featurization log to a file called feature.txt in the /dumps/feature directory. Only one of these files exists, so every time that the svctask dumpinternallog command is run, this file is overwritten.

The svcinfo lsfeaturedumps command lists all of the dumps in the /dumps/feature directory (Example 7-212).

Example 7-212 svcinfo lsfeaturedumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lsfeaturedumps
id feature_filename
0 feature.txt
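For completeness, a minimal sketch of creating and then listing the featurization log dump, using only the two commands described above (the svctask dumpinternallog command needs no parameters for the default behavior that this section describes):

IBM_2145:ITSO-CLS1:admin>svctask dumpinternallog
IBM_2145:ITSO-CLS1:admin>svcinfo lsfeaturedumps
id feature_filename
0 feature.txt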

I/O trace dump
Dumps that are contained in the /dumps/iotrace directory are dumps of I/O trace data. The type of data that is traced depends on the options that are specified by the svctask settrace command. The collection of the I/O trace data is started by using the svctask starttrace command. The I/O trace data collection is stopped when the svctask stoptrace command is used. When the trace is stopped, the data is written to the file.

The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name, and prefix is the value that is entered by the user for the -filename parameter in the svctask settrace command.

The command to list all of the dumps in the /dumps/iotrace directory is the svcinfo lsiotracedumps command (Example 7-213).

Example 7-213 svcinfo lsiotracedumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lsiotracedumps
id iotrace_filename
0 tracedump_104643_080624_172208
1 iotrace_104643_080624_172451

I/O statistics dump
The dumps that are contained in the /dumps/iostats directory are the dumps of the I/O statistics for the disks on the cluster. An I/O statistics dump is created by using the svctask startstats command. As part of this command, you can specify a time interval at which you want the statistics to be written to the file (the default is 15 minutes). Each time that the interval elapses, the I/O statistics that have been collected up to this point are written to a file in the /dumps/iostats directory.

The file names that are used for storing I/O statistics dumps are m_stats_NNNNNN_YYMMDD_HHMMSS or v_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether the statistics are for MDisks or VDisks. In these file names, NNNNNN is the node front panel name.

The command to list all of the dumps that are in the /dumps/iostats directory is the svcinfo lsiostatsdumps command (Example 7-214).

Example 7-214 svcinfo lsiostatsdumps command
IBM_2145:ITSO-CLS1:admin>svcinfo lsiostatsdumps
id iostat_filename
0 Nm_stats_104603_071115_020054
1 Nn_stats_104603_071115_020054
2 Nv_stats_104603_071115_020054
3 Nv_stats_104603_071115_022057
........

Software dump
The svcinfo lssoftwaredumps command lists the contents of the /home/admin/upgrade directory. Any files in this directory are copied there at the time that you perform a software upgrade. Example 7-215 shows the command.

Example 7-215 svcinfo lssoftwaredumps
IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwaredumps
id software_filename
0 IBM2145_INSTALL_4.3.0.0

Other node dumps

All of the svcinfo lsxxxxdumps commands can accept a node identifier as input (for example, append the node name to the end of any of the node dump commands). If this identifier is not specified, the list of files on the current configuration node is displayed. If the node identifier is specified, the list of files on that node is displayed.
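For example, to list the error log dumps that are held on node n4 (the same node name that Example 7-216 uses later in this section), a minimal sketch:

IBM_2145:ITSO-CLS1:admin>svcinfo lserrlogdumps n4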

However, files can only be copied from the current configuration node (using PuTTY Secure Copy). Therefore, you must issue the svctask cpdumps command to copy the files from a non-configuration node to the current configuration node. Subsequently, you can copy them to the management workstation using PuTTY Secure Copy.

For example, you discover a dump file and want to copy it to your management workstation for further analysis. In this case, you must first copy the file to your current configuration node. To copy dumps from other nodes to the configuration node, use the svctask cpdumps command.

In addition to the directory, you can specify a file filter. For example, if you specify /dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.



Wildcards: The following rules apply to the use of wildcards with the SAN Volume Controller CLI:

► The wildcard character is an asterisk (*).
► The command can contain a maximum of one wildcard.
► When you use a wildcard, you must surround the filter entry with double quotation marks (""), for example:
>svctask cleardumps -prefix "/dumps/elogs/*.txt"

Example 7-216 shows an example of the cpdumps command.

Example 7-216 svctask cpdumps command
IBM_2145:ITSO-CLS1:admin>svctask cpdumps -prefix /dumps/configs n4

Now that you have copied the configuration dump file from node n4 to your configuration node, you can use PuTTY Secure Copy to copy the file to your management workstation for further analysis.

To clear the dumps, you can run the svctask cleardumps command. Again, you can append the node name if you want to clear dumps off of a node other than the current configuration node (the default for the svctask cleardumps command).

The commands in Example 7-217 clear all logs or dumps from the SVC node n1.

Example 7-217 svctask cleardumps command
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/iostats n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/iotrace n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/feature n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/config n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/elog n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /home/admin/upgrade n1

Application abends dump
The dumps that are contained in the /dumps directory result from application abends (abnormal ends). These dumps are written to the /dumps directory. The default file names are dump.NNNNNN.YYMMDD.HHMMSS, where NNNNNN is the node front panel name. In addition to the dump files, trace files can be written to this directory. These trace files are named NNNNNN.trc.

The command to list all of the dumps in the /dumps directory is the svcinfo ls2145dumps command (Example 7-218).

Example 7-218 svcinfo ls2145dumps command
IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
id 2145_filename
0 svc.config.cron.bak_node3
1 svc.config.cron.bak_SVCNode_2
2 dump.104643.070803.015424
3 dump.104643.071010.232740
4 svc.config.backup.bak_ITSOCL1_N1



7.14.9 Backing up the SVC cluster configuration

You can back up your cluster configuration by using the Backing Up a Cluster Configuration window or the CLI svcconfig command. In this section, we describe the overall procedure for backing up your cluster configuration and the conditions that must be satisfied to perform a successful backup.

The backup command extracts configuration data from the cluster and saves it to the svc.config.backup.xml file in the /tmp directory. This process also produces an svc.config.backup.sh file. You can study this file to see what other commands were issued to extract information.

An svc.config.backup.log file is also produced. You can study this log for the details of what was done and when it was done. This log also includes information about the other commands that were issued.

Any pre-existing svc.config.backup.xml file is archived as the svc.config.backup.bak file. The system only keeps one archive. We recommend that you immediately move the .xml file and related key files (see the following limitations) off of the cluster for archiving. Then, erase the files from the /tmp directory using the svcconfig clear -all command.

We also recommend that you change all of the objects having default names to non-default names. Otherwise, a warning is produced for each object with a default name. Also, an object with a default name is restored with its original name with an “_r” appended. The underscore (_) prefix is reserved for backup and restore command usage; do not use this prefix in any object names.

Important: The tool backs up logical configuration data only, not client data. It does not replace a traditional data backup and restore tool; rather, it supplements such a tool with a way to back up and restore the client's configuration.

To provide a complete backup and disaster recovery solution, you must back up both user (non-configuration) data and configuration (non-user) data. After the restoration of the SVC configuration, you must fully restore user (non-configuration) data to the cluster's disks.

Prerequisites
You must have the following prerequisites in place:

► All nodes must be online.
► No object name can begin with an underscore.
► All objects must have non-default names, that is, names that are not assigned by the SVC.

Although we recommend that objects have non-default names at the time that the backup is taken, this prerequisite is not mandatory. Objects with default names are renamed when they are restored.

Example 7-219 shows an example of the svcconfig backup command.

Example 7-219 svcconfig backup command
IBM_2145:ITSO-CLS1:admin>svcconfig backup
......
CMMVC6130W Inter-cluster partnership fully_configured will not be restored
...................
CMMVC6112W io_grp io_grp0 has a default name
CMMVC6112W io_grp io_grp1 has a default name
CMMVC6112W mdisk mdisk18 has a default name
CMMVC6112W mdisk mdisk19 has a default name
CMMVC6112W mdisk mdisk20 has a default name



................
CMMVC6136W No SSH key file svc.config.admin.admin.key
CMMVC6136W No SSH key file svc.config.admincl1.admin.key
CMMVC6136W No SSH key file svc.config.ITSOSVCUser1.admin.key
.......................
CMMVC6112W vdisk vdisk7 has a default name
...................
CMMVC6155I SVCCONFIG processing completed successfully

Example 7-220 shows the pscp command.

Example 7-220 pscp command
C:\Program Files\PuTTY>pscp -load SVC_CL1 admin@9.43.86.117:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%

The following scenario illustrates the value of configuration backup:

1. Use the svcconfig command to create a backup file on the cluster that contains details about the current cluster configuration.

2. Store the backup configuration on a form of tertiary storage. You must copy the backup file from the cluster, or it becomes lost if the cluster crashes.

3. If a sufficiently severe failure occurs, the cluster might be lost. Both the configuration data (for example, the cluster definitions of hosts, I/O Groups, MDGs, and MDisks) and the application data on the virtualized disks are lost. In this scenario, it is assumed that the application data can be restored from normal client backup procedures. However, before you can perform this restoration, you must reinstate the cluster as it was configured at the time of the failure. Therefore, you restore the same MDGs, I/O Groups, host definitions, and VDisks that existed prior to the failure. Then, you can copy the application data back onto these VDisks and resume operations.

4. Recover the hardware: hosts, SVCs, disk controller systems, disks, and SAN fabric. The hardware and SAN fabric must physically be the same as the hardware and SAN fabric that were used before the failure.

5. Re-initialize the cluster with the configuration node; the other nodes will be recovered when you restore the configuration.

6. Restore your cluster configuration using the backup configuration file that was generated prior to the failure.

7. Restore the data on your VDisks using your preferred restoration solution or with help from IBM Service.

8. Resume normal operations.

7.14.10 Restoring the SVC cluster configuration

It is extremely important that you always consult IBM Support before you restore the SVC cluster configuration from the backup. IBM Support can assist you in analyzing the root cause of why the cluster configuration was lost.

After the svcconfig restore -execute command is started, consider any prior user data on the VDisks destroyed. The user data must be recovered through your usual application data backup and restore process.
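For orientation only, a hedged sketch of the two-phase restore that IBM Support will guide you through (do not run these commands on your own): a prepare phase validates the backup file against the cluster, and an execute phase applies it:

IBM_2145:ITSO-CLS1:admin>svcconfig restore -prepare
IBM_2145:ITSO-CLS1:admin>svcconfig restore -execute

Both phases expect the svc.config.backup.xml file to have been copied back to the /tmp directory on the configuration node.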



See IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line Interface User's Guide, SC26-7544, for more information about this topic.

For a detailed description of the SVC configuration backup and restore functions, see IBM TotalStorage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543.

7.14.11 Deleting configuration backup

In this section, we describe the tasks that you can perform to delete the configuration backup that is stored in the configuration file directory on the cluster. Never clear this configuration without having a backup of your configuration stored in a separate, secure place.

The svcconfig clear command erases the files in the /tmp directory. It does not clear the running configuration or prevent the cluster from working; it clears only the configuration backup files that are stored in the /tmp directory (Example 7-221).

Example 7-221 svcconfig clear command

IBM_2145:ITSO-CLS1:admin>svcconfig clear -all
.
CMMVC6155I SVCCONFIG processing completed successfully

7.15 SAN troubleshooting and data collection

When we encounter a SAN issue, the SVC is often extremely helpful in troubleshooting the SAN, because the SVC sits at the center of the environment through which the communication travels.

Chapter 14 in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, contains a detailed description of how to troubleshoot and collect data from the SVC:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
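As a first data point when you troubleshoot, a hedged sketch: the svcinfo lsfabric command lists the Fibre Channel logins that the cluster sees, which helps you confirm whether hosts and back-end controllers are correctly zoned to the SVC:

IBM_2145:ITSO-CLS1:admin>svcinfo lsfabric -delim :

Missing or unexpected entries in this output frequently point to zoning or cabling problems rather than to the SVC itself.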

7.16 T3 recovery process

A procedure called "T3 recovery" has been tested and used in select cases where the cluster has been completely destroyed. (One example is simultaneously pulling the power cords from all nodes to their uninterruptible power supply units; in this case, all nodes boot up to node error 578 when the power is restored.)

This procedure, in certain circumstances, is able to recover most user data. However, this procedure is not to be used by the client or an IBM service representative without direct involvement from IBM level 3 technical support. The procedure is not published; we refer to it here only to indicate that the loss of a cluster can be recoverable without total data loss, although it requires a restoration of application data from the backup. It is an extremely sensitive procedure, which is only to be used as a last resort, and it cannot recover any data that was unstaged from cache at the time of the total cluster failure.



Chapter 8. SAN Volume Controller operations using the GUI

In this chapter, we show IBM System Storage SAN Volume Controller (SVC) operational management by using the SVC GUI. We have divided this chapter into normal operations and advanced operations.

We describe the basic configuration procedures that are required to get your SVC environment up and running as quickly as possible by using the Master Console and its associated GUI.

Chapter 2, "IBM System Storage SAN Volume Controller" on page 7 describes the features in greater depth. In this chapter, we focus on the operational aspects.



8.1 SVC normal operations using the GUI

In this topic, we discuss several of the operations that we have defined as normal, day-to-day activities.

Many users can be logged in to the GUI at any given time. However, no locking mechanism exists, so if two users change the same object at the same time, the last action entered from the GUI is the one that takes effect.

8.1.1 Organizing window content

In the following sections, there are several windows within the SVC GUI where you can perform filtering (to minimize the amount of data that is shown in the window) and sorting (to organize the content of the window). This section provides a brief overview of these functions.

Important: Data entries made through the GUI are case sensitive.

The SVC Welcome window (Figure 8-1) is an important window and is referred to as the Welcome window throughout this chapter. We expect users to be able to locate this window without us having to show it each time.

Figure 8-1 The Welcome window

From the Welcome window, select Work with Virtual Disks, and select Virtual Disks.

Table filtering

When you are in the Viewing Virtual Disks list, you can use the table filter option to filter the visible list, which is useful if the list of entries is too large to work with. You can change the filtering here as many times as you like, to further reduce the lists or for separate views.

Perform these steps to use table filtering:

1. Click the Show Filter Row icon, as shown in Figure 8-2 on page 471, or select Show Filter Row in the list, and click Go.

Figure 8-2 Show Filter Row icon

2. This function enables you to filter based on the column names, as shown in Figure 8-3. The word Filter under each column name shows that no filter is in effect for that column.

Figure 8-3 Show Filter Row

3. If you want to filter on a column, click the word Filter, which opens a filter window, as shown in Figure 8-4 on page 472.

Figure 8-4 Filter option on Name

After you apply a filter (in our example, on names that contain 01), a list of virtual disks (VDisks) is displayed whose names include 01 somewhere in the name, as shown in Figure 8-5. (Notice the filter line under each column heading, showing that our filter is in place.) If you want, you can perform additional filtering on the other columns to further narrow your view.

Figure 8-5 Filtered on Name containing 01 in the name

4. To reset the filters, click the Clear All Filters icon, as shown in Figure 8-6 on page 473, or select Clear All Filters in the list, and click Go.

Figure 8-6 Clear All Filter options

Sorting

Regardless of whether you use the pre-filter or additional filter options, when you are in the Viewing Virtual Disks window, you can sort the displayed data by selecting Edit Sort from the list and clicking Go, or you can click the small Edit Sort icon highlighted by the mouse pointer in Figure 8-7.

Figure 8-7 Selecting Edit Sort icon

As shown in Figure 8-8 on page 474, you can sort based on up to three criteria, including Name, State, I/O Group, Managed Disk Group (MDisk Group), Capacity (MB), Space-Efficient, Type, Hosts, FlashCopy Pair, FlashCopy Map Count, Relationship Name, UID, and Copies.

Sort criteria: The actual sort criteria differ based on the information that you are sorting.

Figure 8-8 Sorting criteria

When you finish making your choices, click OK to regenerate the display based on your sorting criteria. Look at the icons next to each column name to see the sort criteria currently in use, as shown in Figure 8-9.

If you want to clear the sort, simply select Clear All Sorts from the list and click Go, or click the Clear All Sorts icon that is highlighted by the mouse pointer in Figure 8-9.

Figure 8-9 Selecting to clear all sorts


8.1.2 Documentation

If you need to access the online documentation, click the information icon in the upper-right corner of the window. This action opens the Help Assistant pane on the right side of the window, as shown in Figure 8-10.

Figure 8-10 Online help using the i icon

8.1.3 Help

If you need to access the online help, click the question mark icon in the upper-right corner of the window. This action opens a new window called the information center. Here, you can search on any item for which you want help (see Figure 8-11 on page 476).

Figure 8-11 Online help using the ? icon

8.1.4 General housekeeping

If, at any time, the content in the right side of the frame is abbreviated, you can collapse the My Work column by clicking the icon at the top of the My Work column. When collapsed, the small arrow changes from pointing to the left to pointing to the right. Clicking the small arrow that points right expands the My Work column back to its original size.

In addition, each time that you open a configuration or administration window using the GUI in the following sections, it creates a link for that window along the top of your Web browser beneath the banner graphic. As a general housekeeping task, we recommend that you close each window when you finish using it by clicking the close icon to the right of the window name. Be careful not to close the entire browser.

8.1.5 Viewing progress

With this view, you can see the status of activities, such as VDisk Migration, MDisk Removal (Figure 8-12 on page 477), Image Mode Migration, Extent Migration, FlashCopy, Metro Mirror and Global Mirror, VDisk Formatting, Space-Efficient copy repair, VDisk copy verification, and VDisk copy synchronization.

You can see detailed information about an item by clicking the underlined (progress) number in the Progress column.

Figure 8-12 Showing possible processes to view where the MDisk is being removed from the MDG
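If you prefer the CLI, a hedged equivalent sketch: the svcinfo lsmigrate command reports the progress of the ongoing migration activities as a percentage:

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate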

8.2 Working with managed disks

This section describes the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment, starting with the tasks that you can perform at a disk controller level.

8.2.1 Viewing disk controller details

Perform the following steps to view information about a back-end disk controller that is in use by the SVC environment:

1. Select Work with Managed Disks, and then, select Disk Controller Systems.

2. The Viewing Disk Controller Systems window (Figure 8-13) opens. For more detailed information about a specific controller, click its ID (highlighted by the mouse cursor in Figure 8-13).

Figure 8-13 Disk controller systems

3. When you click the controller Name (Figure 8-13), the Viewing General Details for Name window (Figure 8-14 on page 478) opens for the controller (where Name is the controller that you selected). Review the details, and click Close to return to the previous window.

Figure 8-14 Viewing general details about a disk controller

8.2.2 Renaming a disk controller

Perform the following steps to rename a disk controller that is used by the SVC cluster:

1. Select the controller that you want to rename. Then, select Rename a Disk Controller System from the list, and click Go.

2. In the Renaming Disk Controller System controllername window (where controllername is the controller that you selected in the previous step), type the new name that you want to assign to the controller, and click OK. See Figure 8-15.

Figure 8-15 Renaming a controller

3. You return to the Disk Controller Systems window, where the new name of your controller is displayed.

Controller name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash, or the word "controller" (because this prefix is reserved for SVC assignment only).
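For reference, a hedged CLI sketch of the same rename, assuming a controller with ID 0 and a hypothetical new name:

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO_DS4500 0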



8.2.3 Discovery status

You can view the status of a managed disk (MDisk) discovery from the Viewing Discovery Status window. This status tells you whether there is an ongoing MDisk discovery. A running MDisk discovery is displayed with a status of Active.

Perform the following steps to view the status of an MDisk discovery:

1. Select Work with Managed Disks, and then, select Discovery Status. The Viewing Discovery Status window is displayed, as shown in Figure 8-16.

Figure 8-16 Discovery status view

2. Click Close to close this window.

8.2.4 Managed disks

This section details the tasks that can be performed at an MDisk level. You perform each of the following tasks from the Viewing Managed Disks window (Figure 8-17). To access this window, from the SVC Welcome window, click Work with Managed Disks, and then, click Managed Disks.

Figure 8-17 Viewing Managed Disks window

8.2.5 MDisk information

To retrieve information about a specific MDisk, perform the following steps:

1. In the Viewing Managed Disks window (Figure 8-18 on page 480), click the underlined name of any MDisk in the list to reveal more detailed information about the specified MDisk.

Figure 8-18 Managed disk details

2. Review the details, and then, click Close to return to the previous window.

8.2.6 Renaming an MDisk

Tip: If, at any time, the content in the right side of the frame is abbreviated, you can minimize the My Work column by clicking the arrow to the right of the My Work heading at the top right of the column (highlighted with the mouse pointer in Figure 8-17 on page 479). After you minimize the column, you see an arrow in the far left position in the same location where the My Work column formerly appeared.

Perform the following steps to rename an MDisk that is controlled by the SVC cluster:

1. Select the MDisk that you want to rename in the window that is shown in Figure 8-17 on page 479. Select Rename an MDisk from the list, and click Go.

2. In the Renaming Managed Disk MDiskname window (where MDiskname is the MDisk that you selected in the previous step), type the new name that you want to assign to the MDisk, and click OK. See Figure 8-19 on page 481.

MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash, or the word "MDisk" (because this prefix is reserved for SVC assignment only).

Figure 8-19 Renaming an MDisk

8.2.7 Discovering MDisks

Perform the following steps to discover newly assigned MDisks:

1. Select Discover MDisks from the drop-down list that is shown in Figure 8-17 on page 479, and click Go.

2. Any newly assigned MDisks are displayed in the window that is shown in Figure 8-20.

Figure 8-20 Newly discovered managed disks
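The Discover MDisks action corresponds to the following CLI command, shown here as a hedged sketch; it rescans the FC network for newly assigned MDisks:

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk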

8.2.8 Including an MDisk

If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These errors can result from a hardware problem, a storage area network (SAN) zoning problem, or poorly planned maintenance. If it is a hardware fault, you will receive Simple Network Management Protocol (SNMP) alerts about the state of the hardware (before the disk was excluded) and about the preventive maintenance that has been undertaken. If not, the hosts that were using VDisks, which used the excluded MDisk, now have I/O errors.

After you take the necessary corrective action to repair the MDisk (for example, replace the failed disk and repair the SAN zones), you can tell the SVC to include the MDisk again.
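A hedged CLI sketch of that final step, assuming the repaired MDisk has ID 5:

IBM_2145:ITSO-CLS1:admin>svctask includemdisk 5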

8.2.9 Showing the VDisks that use a certain MDisk

To display information about the VDisks that reside on an MDisk, perform the following steps:

1. As shown in Figure 8-21, select the MDisk about which you want to obtain VDisk information. Select Show VDisks using this MDisk from the list, and click Go.

Figure 8-21 Show VDisk using an MDisk

2. You now see a subset (specific to the MDisk that you chose in the previous step) of the Viewing VDisks using MDisk window (Figure 8-22). We cover the Viewing VDisks window in more detail in "VDisk information" on page 505.

Figure 8-22 VDisk list from a selected MDisk



8.3 Working with Managed Disk Groups

In this section, we describe the tasks that can be performed with Managed Disk Groups (MDGs). From the Welcome window that is shown in Figure 8-1 on page 470, select Work with Managed Disks.

8.3.1 Viewing MDisk group information

We perform each of the following tasks from the Viewing Managed Disk Groups window (Figure 8-23). To access this window, from the SVC Welcome window, click Work with Managed Disks, and then, click Managed Disk Groups.

Figure 8-23 Viewing Managed Disk Groups window

To retrieve information about a specific MDG, perform the following steps:

1. In the Viewing Managed Disk Groups window (Figure 8-23), click the underlined name of any MDG in the list.

2. In the View Managed Disk Group Details for MDGname window (where MDGname is the MDG that you selected in the previous step), as shown in Figure 8-24, you see more detailed information about the specified MDG: the number of MDisks and VDisks, as well as the capacity (both total and free space) within the MDG. When you finish viewing the details, click Close to return to the previous window.

Figure 8-24 MDG details



8.3.2 Creating MDGs

Perform the following steps to create an MDG:

1. From the SVC Welcome window (Figure 8-1 on page 470), select Work with Managed Disks, and then, select Managed Disk Groups.

2. The Viewing Managed Disk Groups window opens (see Figure 8-25). Select Create an MDisk Group from the list, and click Go.

Figure 8-25 Selecting the option to create an MDisk group

3. In the Create a Managed Disk Group window, the wizard provides an overview of the steps that will be performed. Click Next.

4. In the Name the group and select the managed disks window (Figure 8-26 on page 485), follow these steps:

a. Type a name for the MDG.

MDG name: If you do not provide a name, the SVC automatically generates the name MDiskgrpx, where x is the ID sequence number that is assigned by the SVC internally. If you want to provide a name (as we have done), you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_). The name can be between one and 15 characters in length and is case sensitive, but it cannot start with a number or the word "MDiskgrp" (because this prefix is reserved for SVC assignment only).

b. From the MDisk Candidates box, as shown in Figure 8-26 on page 485, select, one at a time, the MDisks that you want to put into the MDG. Click Add to move them to the Selected MDisks box. More than one page of disks might exist; you can navigate between the windows (the MDisks that you have selected will be preserved).

c. You can specify a threshold so that a warning is sent to the error log when the capacity is first exceeded. The threshold can be either a percentage or a specific amount.

d. Click Next.

Figure 8-26 Name the group and select the managed disks window

5. From the list that is shown in Figure 8-27, select the extent size to use. The typical value is 512; when you select a specific extent size, the corresponding maximum total cluster size is shown in TB. Click Next.

Figure 8-27 Select Extent Size window

6. In the Verify Managed Disk Group window (Figure 8-28 on page 486), verify that the information that you have specified is correct. Click Finish.

Figure 8-28 Verify Managed Disk Group wizard

7. You return to the Viewing Managed Disk Groups window (Figure 8-29), where the new MDG is displayed.

Figure 8-29 A new MDG was added successfully

You have now completed the tasks that are required to create an MDG.
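A hedged CLI sketch of the same creation, assuming hypothetical MDisks mdisk4 and mdisk5, a hypothetical group name, and the typical 512 MB extent size:

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ITSO -ext 512 -mdisk mdisk4:mdisk5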

8.3.3 Renaming a managed disk group

To rename an MDG, perform the following steps:

1. In the Viewing Managed Disk Groups window (Figure 8-30), select the MDG that you want to rename. Select Modify an MDisk Group from the list, and click Go.

Figure 8-30 Renaming an MDG

2. In the Modifying Managed Disk Group MDisk Group Name window (where MDisk Group Name is the MDG that you selected in the previous step), type the new name that you want to assign, and click OK (see Figure 8-31). You can also set or change the usage threshold from this window.

MDG name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, a dash (-), and the underscore (_). The new name can be between one and 15 characters in length, but it cannot start with a number, a dash, or the word "mdiskgrp" (because this prefix is reserved for SVC assignment only).

Figure 8-31 Renaming an MDG

It is considered a best practice to enable the capacity warning for your MDGs. You must address the range to be used in the planning phase of the SVC installation, although this range can always be changed without interruption.

8.3.4 Deleting a managed disk group

To delete an MDG, perform the following steps:

1. Select the MDG that you want to delete. Select Delete an MDisk Group from the list, and click Go.

2. In the Deleting a Managed Disk Group MDGname window (where MDGname is the MDG that you selected in the previous step), click OK to confirm that you want to delete the MDG (see Figure 8-32).

Figure 8-32 Deleting an MDG

3. If there are MDisks and VDisks within the MDG that you are deleting, you are required to click Forced delete for the MDG (Figure 8-33 on page 488); see the Important note that follows.

Important: If you delete an MDG with the Forced Delete option, and VDisks were associated with that MDG, you will lose the data on your VDisks, because they are deleted before the MDG. If you want to save your data, migrate or mirror the VDisks to another MDG before you delete the MDG previously assigned to the VDisks.

Figure 8-33 Confirming forced deletion of an MDG

8.3.5 Adding MDisks

If you created an empty MDG, or if you simply assign additional MDisks to your SVC environment later, you can add MDisks to existing MDGs by performing the following steps:

Note: You can only add unmanaged MDisks to an MDG.

1. In Figure 8-34, select the MDG to which you want to add MDisks. Select Add MDisks from the list, and click Go.

Figure 8-34 Adding an MDisk to an existing MDG

2. From the Adding Managed Disks to Managed Disk Group MDGname window (where MDGname is the MDG that you selected in the previous step), select the desired MDisk or MDisks from the MDisk Candidates list (Figure 8-35 on page 489). After you select all of the desired MDisks, click OK.

Figure 8-35 Adding MDisks to an MDG
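A hedged CLI sketch of the same operation, with a hypothetical MDisk name and group name:

IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk mdisk6 MDG_ITSO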

8.3.6 Removing MDisks

To remove an MDisk from an MDG, perform the following steps:

1. In Figure 8-36, select the MDG from which you want to remove an MDisk. Select Remove MDisks from the list, and click Go.

Figure 8-36 Viewing MDGs

2. From the Deleting Managed Disks from Managed Disk Group MDGname window (where MDGname is the MDG that you selected in the previous step), select the desired MDisk or MDisks from the list (Figure 8-37 on page 490). After you select all of the desired MDisks, click OK.

Figure 8-37 Removing MDisks from an MDG

3. If VDisks are using the MDisks that you are removing from the MDG, you are required to click Forced Delete to confirm the removal of the MDisk, as shown in Figure 8-38.

4. An error message is displayed if there is insufficient space to migrate the VDisk data to other extents on other MDisks in that MDG.

Figure 8-38 Confirming forced deletion of MDisks from an MDG

8.3.7 Displaying MDisks

If you want to view the MDisks that are configured on your system, perform the following steps to display MDisks.

From the SVC Welcome window (Figure 8-1 on page 470), select Work with Managed Disks, and then, select Managed Disks. In the Viewing Managed Disks window (Figure 8-39 on page 491), if your MDisks are not displayed, rescan the Fibre Channel (FC) network. Select Discover MDisks from the list, and click Go.

Figure 8-39 Discover MDisks

Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem are properly assigned to the SVC (for example, by using storage partitioning with a DS4000) and that appropriate zoning is in place (for example, so that the SVC can see the disk subsystem).

8.3.8 Showing MDisks in this group

To show a list of the MDisks within an MDG, perform the following steps:

1. Select the MDG from which you want to retrieve MDisk information (Figure 8-40). Select Show MDisks in This Group from the list, and click Go.

Figure 8-40 Viewing Managed Disk Groups

2. You now see a subset (specific to the MDG that you chose in the previous step) of the Viewing Managed Disks window (Figure 8-41 on page 492) that was shown in 8.2.4, "Managed disks" on page 479.

Figure 8-41 Viewing MDisks in an MDG

Note: Remember, you can collapse the column entitled My Work at any time by clicking the arrow to the right of the My Work column heading.

8.3.9 Showing the VDisks that are associated with an MDisk group

To show a list of the VDisks that are associated with the MDisks within an MDG, perform the following steps:

1. In Figure 8-42, select the MDG from which you want to retrieve VDisk information. Select Show VDisks using this group from the list, and click Go.

Figure 8-42 Viewing Managed Disk Groups

2. You see a subset (specific to the MDG that you chose in the previous step) of the Viewing Virtual Disks window (Figure 8-43 on page 493). We describe the Viewing Virtual Disks window in more detail in "VDisk information" on page 505.

Figure 8-43 VDisks belonging to selected MDG

You have now completed the required tasks to manage the disk controller systems, MDisks, and MDGs within the SVC environment.
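A hedged CLI sketch of the same query, assuming the hypothetical group name MDG_ITSO; the -filtervalue option restricts the listing to the VDisks in that MDG:

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=MDG_ITSO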

8.4 Working with hosts

In this section, we describe the various configuration and administration tasks that you can perform on the hosts that are connected to your SVC.

For more details about connecting hosts to an SVC in a SAN environment, see IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905-05.

Starting with SVC 5.1, iSCSI is introduced as an additional method for connecting your host to the SVC. With this option, the host can now choose between FC or iSCSI as the connection method. After the connection type has been selected, all further work with the host is identical for the FC-attached host and the iSCSI-attached host.

To access the Viewing Hosts window from the SVC Welcome window (Figure 8-1 on page 470), click Work with Hosts, and then, click Hosts. The Viewing Hosts window opens, as shown in Figure 8-44. You perform each task that is shown in the following sections from the Viewing Hosts window.

Figure 8-44 Viewing Hosts window


8.4.1 Host information

To retrieve information about a specific host, perform the following steps:

1. In the Viewing Hosts window (see Figure 8-44 on page 493), click the underlined name of any host in the displayed list.

2. Next, you can obtain details for the host that you requested:

a. In the Viewing General Details window (Figure 8-45), you can see more detailed information about the specified host.

Figure 8-45 Host details

b. You can click Port Details (Figure 8-46) to see the attachment information, such as the worldwide port names (WWPNs) that are defined for this host or the iSCSI qualified name (IQN) that is defined for this host.

Figure 8-46 Host port details

c. You can click Mapped I/O Groups (Figure 8-47 on page 495) to see which I/O Groups this host can access.

Figure 8-47 Host mapped I/O Groups

d. A new feature in SVC 5.1 is the capability to create hosts that use either FC connections or iSCSI connections. If we select iSCSI for our host in this example, we do not see any iSCSI parameters (as shown in Figure 8-48), because this host is already configured with an FC port, as shown in Figure 8-46 on page 494.

Figure 8-48 iSCSI parameters

When you are finished viewing the details, click Close to return to the previous window.

8.4.2 Creating a host

Because we have two types of connection methods from which to choose for our host, iSCSI or FC, we show both methods.

8.4.3 Fibre Channel-attached hosts

To create a new host that uses the FC connection type, perform the following steps:

1. As shown in Figure 8-49 on page 496, select Create a Host from the list, and click Go.

Figure 8-49 Create a host

2. In the Creating Hosts window (Figure 8-50 on page 497), type a name for your host (Host Name).

Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The host name can be between one and 15 characters in length. However, the name cannot start with a number or the word "host" (because this prefix is reserved for SVC assignment only). Although using an underscore might work in certain circumstances, it violates the request for change (RFC) 2396 definition of Uniform Resource Identifiers (URIs) and can cause problems, so we recommend that you do not use the underscore in host names.

3. Select the mode (Type) for the host. The default type is Generic. Use Generic for all hosts, except if you use Hewlett-Packard UNIX (HP-UX) or Sun; in that case, select HP_UX (to have more than eight LUNs supported for HP-UX machines) or TPGS for Sun hosts that use MPxIO.

4. The connection type is either Fibre Channel or iSCSI. If you select Fibre Channel, you are asked for the port mask and the WWPN of the server that you are creating. If you select iSCSI, you are asked for the iSCSI initiator, which is commonly called the IQN, and the Challenge Handshake Authentication Protocol (CHAP) authentication secret to ensure authentication of the target host and volume access.

5. You can use a port mask to control the node target ports that a host can access. The port mask applies to the logins from the host initiator port that are associated with the host object.

Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object of which the HBA is a member and determines whether access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown.

The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). The rightmost bit in the mask corresponds to the lowest numbered SVC port (1, not 4) on a node.

As shown in Figure 8-50 on page 497, our port mask is 1111; the HBA port can access all node ports. If, for example, a port mask is set to 0011, only port 1 and port 2 are enabled for this host's access.

6. Select and add the WWPNs that correspond to your HBA or HBAs. Click OK.

In certain cases, your WWPNs might not be displayed, although you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. In this case, you can manually type the WWPN of your HBA or HBAs into the Additional Ports field (type the WWPNs one per line) at the bottom of the window and select Do not validate WWPN before you click OK.

Figure 8-50 Creating a new FC-connected host

This action brings you back to the Viewing Hosts window (Figure 8-51), where you can see the newly added host.

Figure 8-51 Create host results
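A hedged CLI sketch of the same FC host definition, with a hypothetical host name and WWPN:

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -hbawwpn 210000E08B054CAA -mask 1111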

8.4.4 iSCSI-attached hosts

Now, we show you the steps to configure a host that is connected by using iSCSI. Prior to starting to use iSCSI, we must configure our cluster to use the iSCSI option.

When creating an iSCSI-attached host from the Welcome window, select Work with Hosts, and select Hosts. From the drop-down list, select Create a Host, as shown in Figure 8-52.

Figure 8-52 Creating an iSCSI host

In the Creating Hosts window (Figure 8-53 on page 499), type a name for your host (Host Name). Then, follow these steps:

1. Select the mode (Type) for the host. The default type is Generic. Use Generic for all hosts, except for HP-UX or Sun. For HP or Sun, select HP_UX (to have more than eight LUNs supported for HP-UX machines) or TPGS for Sun hosts that use MPxIO.

2. The connection type is iSCSI.

3. The iSCSI initiator, or IQN, is iqn.1991-05.com.microsoft:freyja. This IQN is obtained from the server and generally has the same purpose as the WWPN.

4. The CHAP secret is the authentication method that is used to restrict access by other iSCSI hosts that use the same connection. You can set the CHAP for the whole cluster under cluster properties or for each host definition. The CHAP must be identical on the server and in the cluster/host definition. You can create an iSCSI host definition without using a CHAP.

In Figure 8-53 on page 499, we set the parameters for our host called Freyja.



Figure 8-53 iSCSI parameters
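A hedged CLI sketch of the same iSCSI host definition, reusing the IQN from step 3:

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Freyja -iscsiname iqn.1991-05.com.microsoft:freyja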

The iSCSI host is now created.

8.4.5 Modifying a host

To modify a host, perform the following steps:

1. Select the host that you want to rename (Figure 8-54). Select Modify a Host from the list, and click Go.

Figure 8-54 Modifying a host

2. In the Modifying Host window (Figure 8-55 on page 500), type the new name that you want to assign or change the Type parameter, and click OK.

Name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length. However, it cannot start with a number or the word "host" (because this prefix is reserved for SVC assignment only). While using an underscore might work in certain circumstances, it violates the RFC 2396 definition of Uniform Resource Identifiers (URIs) and thus can cause problems, so we recommend that you do not use the underscore in host names.


Figure 8-55 Modifying a host (choosing a new name)

8.4.6 Deleting a host

To delete a host, perform the following steps:

1. Select the host that you want to delete (Figure 8-56). Select Delete a Host from the list, and click Go.

Figure 8-56 Deleting a host

2. In the Deleting Host host name window (where host name is the host that you selected in the previous step), click OK if you are sure that you want to delete the host. See Figure 8-57.

Figure 8-57 Deleting a host


3. If you still have VDisks associated with the host, you will see a window (Figure 8-58) requesting confirmation for the forced deletion of the host. Click OK, and all of the mappings between this host and its VDisks are deleted before the host is deleted.

Figure 8-58 Forcing a deletion

8.4.7 Adding ports

If you add an HBA or a network interface controller (NIC) to a server that is already defined within the SVC, you can simply add additional ports to your host definition by performing the following steps:

Note: A host definition can only have FC ports or an iSCSI port defined, but not both.

1. Select the host to which you want to add ports, as shown in Figure 8-59. Select Add Ports from the list, and click Go.

Figure 8-59 Adding ports to a host

2. From the Adding ports window, you can select whether to add an FC port (WWPN) or an iSCSI port (IQN initiator) for the connection type. Select either the desired WWPN from the Available Ports list and click Add, or enter the new IQN in the iSCSI window. After adding the WWPN or IQN, click OK. See Figure 8-60 on page 502.

If your WWPNs are not in the list of Available Ports and you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up, you can manually type the WWPN of your HBAs into the Add Additional Ports field at the bottom of the window before you click OK.


Figure 8-60 Adding WWPN ports to a host

Figure 8-61 shows where the IQN is added to our host called Thor.

Figure 8-61 Adding IQN port to a host
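A hedged CLI sketch of adding a port to an existing host definition, with hypothetical values; the first command adds an FC port to one host, and the second adds an iSCSI port to another (remember that a single host definition cannot mix both port types):

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAB Baldur
IBM_2145:ITSO-CLS1:admin>svctask addhostport -iscsiname iqn.1991-05.com.microsoft:thor Thor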

8.4.8 Deleting ports

To delete a port from a host, perform the following steps:

1. Select the host from which you want to delete a port (Figure 8-62). Select Delete Ports from the list, and click Go.

Figure 8-62 Delete ports from a host

2. In the Deleting Ports From host name window (where host name is the host that you selected in the previous step), start by selecting the connection type of the port that you want to delete. If you select Fibre Channel, select the port that you want to delete from the Available Ports list, and click Add. When you have selected all of the ports that you want to delete from your host and have added them to the column on the right, click OK. If you select the connection type iSCSI, select the ports from the available iSCSI initiators, click Add, and then, click OK. Figure 8-63 shows selecting a WWPN port to delete. Figure 8-64 shows that we have selected an iSCSI initiator to delete.

Figure 8-63 Deleting WWPN port from a host

Figure 8-64 Deleting iSCSI initiator from a host

3. If you have VDisks that are associated with the host, you receive a warning about deleting a host port. You need to confirm your action when prompted, as shown in Figure 8-65 on page 504. A similar warning message appears if you delete an iSCSI port.


Figure 8-65 Port deletion confirmation<br />

8.5 Working with VDisks

In this section, we describe the tasks that you can perform at a VDisk level.

8.5.1 Using the Viewing VDisks window

You perform each of the following tasks from the Viewing VDisks window (Figure 8-66). To access this window, from the SVC Welcome window, click Work with Virtual Disks, and then, click Virtual Disks. The list contains all of the actions that you can perform in the Viewing VDisks window.

Figure 8-66 Viewing VDisks



8.5.2 VDisk information

To retrieve information about a specific VDisk, perform the following steps:

1. In the Viewing Virtual Disks window, click the underlined name of the desired VDisk in the list.

2. The next window (Figure 8-67) that opens shows detailed information. Review the information. When you are finished, click Close to return to the Viewing VDisks window.

Figure 8-67 VDisk details
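The same details are available from the CLI with svcinfo lsvdisk, shown here against a hypothetical VDisk named vdisk_A:

   svcinfo lsvdisk vdisk_A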

8.5.3 Creating a VDisk

To create a new VDisk, perform the following steps:

1. Select Create a VDisk from the list (Figure 8-66 on page 504), and click Go.

2. The Create Virtual Disks wizard launches. Click Next.

3. The Choose an I/O Group and a Preferred Node window opens. Choose an I/O Group, and then, select a preferred node (see Figure 8-68 on page 506). In our case, we let the system choose. Click Next.



Figure 8-68 Creating a VDisk: Select Groups

4. The Set Attributes window opens (Figure 8-69):

a. Choose the type of VDisk that you want to create: striped or sequential.
b. Select the cache mode: Read/Write or None.
c. If you want, enter a unit device identifier.
d. Enter the number of VDisks that you want to create.
e. You can select the Space-efficient or Mirrored Disk check box, which will expand the respective sections with extra options.
f. Optionally, format the new VDisk by selecting the Format VDisk before use check box (write zeros to its MDisk extents).
g. Click Next.

Figure 8-69 Creating a VDisk: Set Attributes

5. Select the MDG of which you want the VDisk to be a member:

a. If you selected Striped, you will see the window that is shown in Figure 8-70 on page 507. You must select the MDisk group, and then, the Managed Disk Candidates window will appear. You can optionally add MDisks to be striped.



Figure 8-70 Selecting an MDG

b. If you selected Sequential mode, you see the window that is shown in Figure 8-71. You must select the MDisk group, and then, a list of managed disks appears. You must choose at least one MDisk as a managed disk.

Figure 8-71 Creating a VDisk wizard: Select attributes for sequential mode VDisks

c. Enter the size of the VDisk that you want to create and select the capacity measurement (MB or GB) from the list.

Capacity: An entry of 1 GB uses 1,024 MB.

d. Click Next.

6. You can enter the VDisk name if you want to create a single VDisk, or you can enter the naming prefix if you want to create multiple VDisks. Click Next.

VDisk naming: When you create more than one VDisk, the wizard does not ask you for a name for each VDisk to be created. Instead, the name that you use here will be a prefix and have a number, starting at zero, appended to it as each VDisk is created.



Figure 8-72 Creating a VDisk wizard: Name the VDisks

Note: If you do not provide a name, the SVC automatically generates the name VDiskn (where n is the ID sequence number that is assigned by the SVC internally).

If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number or the word “VDisk” (because this prefix is reserved for SVC assignment only).

7. In the Verify Attributes window (see Figure 8-73 for striped mode and Figure 8-74 on page 509 for sequential mode), check whether you are satisfied with the information that is shown, and then, click Finish to complete the task. Otherwise, click Back to return to make any corrections.

Figure 8-73 Creating a VDisk wizard: Verify the VDisk striped type



Figure 8-74 Creating a VDisk wizard: Verify the VDisk sequential type

8. Figure 8-75 shows the progress of the creation of your VDisks on the storage and the final results.

Figure 8-75 Creating a VDisk wizard: Final result
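For scripting, the whole wizard collapses into a single svctask mkvdisk call. A minimal sketch, assuming an MDG named MDG_DS45 and the default I/O Group io_grp0 (all names here are examples):

   svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp io_grp0 -vtype striped -size 10 -unit gb -name vdisk_A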

8.5.4 Creating a Space-Efficient VDisk with autoexpand

Using Space-Efficient VDisks allows you to commit the minimal amount of space while promising an allocation that might be larger than the available free storage.



In this section, we create a Space-Efficient VDisk step-by-step. This process allows you to create VDisks with a much higher capacity than is physically available (this approach is called thin provisioning).

As the host fills the VDisk toward the level of the real allocation, the SVC can dynamically grow the real capacity (when you enable the autoexpand feature) until it reaches the virtual capacity limit or until the MDG physically runs out of free space. In the latter scenario, running out of space takes the growing VDisk offline, affecting the host that is using that VDisk. Therefore, enabling threshold warnings is important and recommended.

Perform <strong>the</strong> following steps to create a Space-Efficient VDisk with autoexpand:<br />

1. Select Create a VDisk from <strong>the</strong> list (Figure 8-66 on page 504), and click Go.<br />

2. The Create Virtual Disks wizard launches. Click Next.<br />

3. The Choose an I/O Group and a Preferred Node window opens. Choose an I/O Group,<br />

and <strong>the</strong>n, choose a preferred node (see Figure 8-76). In our case, we let <strong>the</strong> system<br />

choose. Click Next.<br />

Figure 8-76 Creating a VDisk wizard: Select Groups<br />

4. The Set Attributes window opens (Figure 8-69 on page 506). Perform <strong>the</strong>se steps:<br />

a. Choose <strong>the</strong> type of VDisk that you want to create: striped or sequential.<br />

b. Select <strong>the</strong> cache mode: Read/Write or None.<br />

c. Enter a unit device identifier (optional).<br />

d. Enter <strong>the</strong> number of VDisks that you want to create.<br />

e. Select Space-efficient, which expands this section with <strong>the</strong> following options:<br />

i. Type <strong>the</strong> size of <strong>the</strong> VDisk Capacity (remember, this size is <strong>the</strong> virtual size).<br />

ii. Type a percentage or select a specific size for <strong>the</strong> usage threshold warning.<br />

iii. Select Auto expand, which allows <strong>the</strong> real disk size to grow as required.<br />

iv. Select <strong>the</strong> Grain size (choose 32 KB normally, but match <strong>the</strong> FlashCopy grain size,<br />

which is 256 KB, if <strong>the</strong> VDisk will be used for FlashCopy).<br />

f. Optionally, format <strong>the</strong> new VDisk by selecting Format VDisk before use (write zeros to<br />

its managed disk extents).<br />

g. Click Next.<br />



Figure 8-77 Creating a VDisk wizard: Set Attributes

5. On the Select MDisk(s) and Size for a Striped-Mode VDisk window, as shown in Figure 8-78, follow these steps:

a. Select the Managed Disk Group from the list.
b. Optionally, choose the MDisk Candidates upon which to create the VDisk. Click Add to move them to the Managed Disks Striped in this Order box.
c. Type the Real size that you want to allocate. This size is the amount of disk space that will actually be allocated. It can either be a percentage of the virtual size or a specific number.

Figure 8-78 Creating a VDisk wizard: Selecting MDisks and sizes

6. In the Name the VDisk(s) window (Figure 8-79 on page 512), type a name for the VDisk that you are creating. In our case, we used vdisk_sev2. Click Next.



Figure 8-79 Name the VDisk(s) window

7. In the Verify Attributes window (Figure 8-80), verify the selections. We can select Back at any time to make changes.

Figure 8-80 Verifying Space-Efficient VDisk Attributes window

8. After selecting Finish, we are presented with a window (Figure 8-81 on page 513) that tells us the result of the action.
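On the CLI, the space-efficient options map to the -rsize, -autoexpand, -grainsize, and -warning parameters of svctask mkvdisk. A sketch with example values (a 100 GB virtual disk backed by a 10% real allocation, using the same placeholder names as before):

   svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp io_grp0 -vtype striped -size 100 -unit gb -rsize 10% -autoexpand -grainsize 32 -warning 80% -name vdisk_sev2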



Figure 8-81 Space-Efficient VDisk creation success

8.5.5 Deleting a VDisk

To delete a VDisk, perform the following steps:

1. Select the VDisk that you want to delete (Figure 8-66 on page 504). Select Delete a VDisk from the list, and click Go.

2. In the Deleting Virtual Disk VDiskname window (where VDiskname is the VDisk that you just selected), click OK to confirm your desire to delete the VDisk. See Figure 8-82.

Figure 8-82 Deleting a VDisk

If the VDisk is currently assigned to a host, you receive a secondary message where you must click Forced Delete to confirm your decision. See Figure 8-83 on page 514. This action deletes the VDisk-to-host mapping before deleting the VDisk.

Important: Deleting a VDisk is a destructive action for user data residing in that VDisk.
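The CLI equivalent is svctask rmvdisk, where the -force flag plays the role of the Forced Delete confirmation (placeholder VDisk name again). Treat the second form with the same caution as the GUI warning, because it removes any host mappings first:

   svctask rmvdisk vdisk_A
   svctask rmvdisk -force vdisk_A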



Figure 8-83 Deleting a VDisk: Forcing a deletion

8.5.6 Deleting a VDisk-to-host mapping

To unmap (unassign) a VDisk from a host, perform the following steps:

1. Select the VDisk that you want to unmap. Select Delete a VDisk-to-host mapping from the list, and click Go.

2. In the Deleting a VDisk-to-host Mapping window (Figure 8-84), from the Host Name list, select the host from which to unassign the VDisk. Click OK.

Tip: Make sure that the host is no longer using that disk. Unmapping a disk from a host does not destroy the disk’s contents.

Unmapping a disk has the same effect as powering off the computer without first performing a clean shutdown and, thus, might leave the data in an inconsistent state. Also, any running application that was using the disk will start to receive I/O errors.

Figure 8-84 Deleting a VDisk-to-host Mapping window
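From the CLI, the same unmapping is done with svctask rmvdiskhostmap (example host and VDisk names):

   svctask rmvdiskhostmap -host Thor vdisk_A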

8.5.7 Expanding a VDisk

Expanding a VDisk presents a larger capacity disk to your operating system. Although you can expand a VDisk easily using the SVC, you must ensure that your operating system is prepared for it and supports the volume expansion before you use this function.

Dynamic expansion of a VDisk is only supported when the VDisk is in use by one of the following operating systems:

► AIX 5L V5.2 and higher

► Microsoft Windows 2000 Server and Windows Server 2003 for basic disks

► Microsoft Windows 2000 Server and Windows Server 2003 with a hot fix from Microsoft (Q327020) for dynamic disks



Assuming that your operating system supports it, to expand a VDisk, perform the following steps:

1. Select the VDisk that you want to expand, as shown in Figure 8-68 on page 506. Select Expand a VDisk from the list, and click Go.

2. The Expanding Virtual Disks VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens. See Figure 8-85. Follow these steps:

a. Select the new size of the VDisk. This size is the increment to add. For example, if you have a 5 GB disk and you want it to become 10 GB, you specify 5 GB in this field.
b. Optionally, select the MDisk candidates from which to obtain the additional capacity. The default for a striped VDisk is to use equal capacity from each MDisk in the MDG.

VDisk expansion notes:

► With sequential VDisks, you must specify the MDisk from which you want to obtain space.

► No support exists for the expansion of image mode VDisks.

► If there are insufficient extents to expand your VDisk to the specified size, you receive an error message.

► If you use VDisk Mirroring, all copies must be synchronized before expanding.

c. Optionally, you can format the extra space with zeros by selecting the Format Additional Managed Disk Extents check box. This option does not format the entire VDisk, only the newly expanded space.

When you are finished, click OK.

Figure 8-85 Expanding a VDisk
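The CLI form takes the increment in the same way; this sketch grows our placeholder VDisk by 5 GB:

   svctask expandvdisksize -size 5 -unit gb vdisk_A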



8.5.8 Assigning a VDisk to a host

When we map a VDisk to a host, it does not matter whether the host is attached using either an iSCSI or FC connection type. The SVC treats the VDisk mapping in the same way for both connection types.

Perform the following steps to map a VDisk to a host:

1. From the SVC Welcome window (Figure 8-1 on page 470), select Work with Virtual Disks, and then, select Virtual Disks.

2. In the Viewing VDisks window (Figure 8-86), from the list, select Map VDisks to a host, and click Go.

Figure 8-86 Assigning VDisks to a host

3. In the Creating Virtual Disk-to-Host Mappings window (Figure 8-87), select the target host. We have the option to specify the SCSI LUN ID. (This field is optional. Use this field to specify an ID for the SCSI LUN. If you do not specify an ID, the next available SCSI LUN ID on the host adapter is automatically used.) Click OK.

Figure 8-87 Creating VDisk-to-Host Mappings window

4. You are presented with an information window that displays the status, as shown in Figure 8-88 on page 517.



Figure 8-88 VDisk to host mapping successful

5. You now return to the Viewing Virtual Disks window (Figure 8-86 on page 516).

You have now completed all of the tasks that are required to assign a VDisk to an attached host, and the VDisk is ready for use by the host.
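The matching CLI command, with the hypothetical host Thor and an explicit SCSI LUN ID of 0 (omit -scsi to take the next free ID):

   svctask mkvdiskhostmap -host Thor -scsi 0 vdisk_A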

8.5.9 Modifying a VDisk

The Modifying Virtual Disk menu item allows you to rename the VDisk, reassign the VDisk to another I/O Group, and set throttling parameters.

To modify a VDisk, perform the following steps:

1. Select the VDisk that you want to modify (Figure 8-66 on page 504). Select Modify a VDisk from the list, and click Go.

2. The Modifying Virtual Disk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens. See Figure 8-89 on page 518. You can perform the following steps separately or in combination:

a. Type a new name for your VDisk.

New name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length. However, it cannot start with a number or the word “VDisk” (because this prefix is reserved for SVC assignment only).

b. Select an alternate I/O Group from the list to alter the I/O Group to which it is assigned.

c. Set performance throttling for a specific VDisk. In the I/O Governing field, type a number and select either I/O or MB from the list. Note the following items:

I/O governing effectively throttles the amount of I/Os per second (or MBs per second) to and from a specific VDisk. You might want to use I/O governing if you have a VDisk that has an access pattern that adversely affects the performance of other VDisks on the same set of MDisks, for example, if it uses most of the available bandwidth.

If this application is highly important, migrating the VDisk to another set of MDisks might be advisable. However, in certain cases, it is an issue with the I/O profile of the application rather than a measure of its use or importance.

Base your choice between I/O and MB as the I/O governing throttle on the disk access profile of the application. Database applications generally issue large amounts of I/O, but they only transfer a relatively small amount of data. In this case, setting an I/O governing throttle that is based on MBs per second does not achieve much. It is better for you to use an I/Os per second throttle.

At the other extreme, a streaming video application generally issues a small amount of I/O, but it transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle based on I/Os per second does not achieve much. Therefore, it is better for you to use an MBs per second throttle.

Additionally, you can specify a unit device identifier.

The Primary Copy is used to select which VDisk copy is going to be used as the preferred copy for read operations.

The Mirror Synchronization rate is the I/O governing rate, as a percentage, during the initial synchronization. A zero value disables synchronization.

The Copy ID section is used for Space-Efficient VDisks. If you only have a single Space-Efficient VDisk, the Copy ID drop-down list will be grayed out, and you can change the warning thresholds and whether the copy will autoexpand. If you have a VDisk mirror and one or more of the copies are space-efficient, you can select a copy, or all copies, and change the warning thresholds and autoexpand setting individually.

Click OK when you have finished making changes.

Figure 8-89 Modifying a VDisk
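On the CLI, these modifications are individual svctask chvdisk invocations. Sketches with placeholder values; -rate without -unitmb is interpreted as I/Os per second:

   svctask chvdisk -name vdisk_B vdisk_A
   svctask chvdisk -rate 2000 vdisk_B
   svctask chvdisk -rate 40 -unitmb vdisk_B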

8.5.10 Migrating a VDisk

To migrate a VDisk, perform the following steps:

1. Select the VDisk that you want to migrate (Figure 8-66 on page 504). Select Migrate a VDisk from the list, and click Go.



2. The Migrating Virtual Disk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens, as shown in Figure 8-90. From the MDisk Group Name list, perform these steps:

a. Select the MDG to which you want to reassign the VDisk. You will only be presented with a list of MDisk groups with the same extent size.
b. Specify the number of threads to devote to this process (a value from 1 to 4). The optional threads parameter allows you to assign a priority to the migration process. A setting of 4 is the highest priority setting. If you want the process to take a lower priority relative to other types of I/O, you can specify 3, 2, or 1.

When you have finished making your selections, click OK to begin the migration process.

Important: After a migration starts, you cannot stop it. Migration continues until it is complete, unless it is stopped or suspended by an error condition, or unless the VDisk that is being migrated is deleted.

3. You must manually refresh or close your browser. Return to the Viewing Virtual Disks window periodically to see the MDisk Group Name column in the Viewing Virtual Disks window update to reflect the new MDG name.

Figure 8-90 Migrating a VDisk
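The CLI counterpart is svctask migratevdisk; this sketch moves a VDisk into a hypothetical group MDG_DS47 at the highest priority:

   svctask migratevdisk -mdiskgrp MDG_DS47 -threads 4 -vdisk vdisk_A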

8.5.11 Migrating a VDisk to an image mode VDisk

Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path. This action might be useful where the SVC is used as a data mover appliance.

To migrate a VDisk to an image mode VDisk, the following rules apply:

► The destination MDisk must be greater than or equal to the size of the VDisk.

► The MDisk that is specified as the target must be in an unmanaged state.

► Regardless of the mode in which the VDisk starts, it is reported as being in managed mode during the migration.

► Both of the MDisks involved are reported as being in image mode during the migration.

► If the migration is interrupted by a cluster recovery, or by a cache problem, the migration resumes after the recovery completes.

To accomplish the migration, perform the following steps:

1. Select a VDisk from the list, choose Migrate to an Image Mode VDisk from the drop-down list (Figure 8-91 on page 520), and click Go.



2. The Migrate to Image Mode VDisk wizard launches (it is not shown here). Read the steps in this window, and click Next.

3. Select the MDisk to which the data will be migrated (Figure 8-91). Click Next.

Figure 8-91 Migrate to image mode VDisk wizard: Select the Target MDisk

4. Select the MDG that the MDisk will join (Figure 8-92). Click Next.

Figure 8-92 Migrate to image mode VDisk wizard: Select MDG

5. Select the priority of the migration by selecting the number of threads (Figure 8-93). Click Next.

Figure 8-93 Migrate to image mode VDisk wizard: Select the Threads



6. Verify that the information that you specified is correct (Figure 8-94). If you are satisfied, click Finish. If you want to change something, use the Back option.

Figure 8-94 Migrate to image mode VDisk wizard: Verify Migration Attributes

7. Figure 8-95 displays the details of the VDisk that you are migrating.

Figure 8-95 Migrate to image mode VDisk wizard: Progress of migration
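A single svctask migratetoimage call performs the same migration; mdisk12 and MDG_IMAGE are placeholder names for the unmanaged target MDisk and its future group:

   svctask migratetoimage -vdisk vdisk_A -mdisk mdisk12 -mdiskgrp MDG_IMAGE -threads 2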

8.5.12 Creating a VDisk Mirror from an existing VDisk

You can create a mirror from an existing VDisk, that is, you can obtain two copies of the underlying disk extents.

Tip: You can also create a new mirrored VDisk by selecting an option during the VDisk creation, as shown in Figure 8-69 on page 506.

You can use a VDisk mirror for any operation for which you can use a VDisk. It is transparent to higher level operations, such as Metro Mirror, Global Mirror, or FlashCopy.

Creating a VDisk mirror from an existing VDisk is not restricted to the same MDG, which makes it an ideal method to protect your data from a disk system or an array failure. If one copy of the mirror fails, the mirror provides continuous data access through the other copy. When the failed copy is repaired, the copies automatically resynchronize.

You can also use a VDisk mirror as an alternative migration tool, where you can synchronize the mirror before splitting off the original side of the mirror. The VDisk stays online, and it can be used normally, while the data is being synchronized. The copies can also have separate structures (that is, striped, image, sequential, or space-efficient) and separate extent sizes.



To create a mirror copy from within a VDisk, perform the following steps:

1. Select a VDisk from the list, choose Add a Mirrored VDisk Copy from the drop-down list (see Figure 8-66 on page 504), and click Go.

2. The Add Copy to VDisk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens. See Figure 8-96. You can perform the following steps separately or in combination:

a. Choose the type of VDisk Copy that you want to create: striped or sequential.
b. Select the MDG in which you want to put the copy. We recommend that you choose a separate group to maintain higher availability.
c. Click Select MDisk(s) manually, which expands the section that has a list of MDisks that are available for adding.
d. Choose the Mirror synchronization rate, which is the I/O governing rate, as a percentage, during initial synchronization. A zero value disables synchronization. You can also select Synchronized, but only use this option when the VDisk has never been used or is going to be formatted by the host.
e. You can make the copy space-efficient. This section will expand, giving you options to allocate the virtual size, warning thresholds, autoexpansion, and grain size. See 8.5.4, “Creating a Space-Efficient VDisk with autoexpand” on page 509 for more information.
f. Optionally, format the new VDisk by selecting the “Format the new VDisk copy and mark the VDisk synchronized” check box. Use this option with care, because if the primary copy goes offline, you might not have the data replicated on the other copy.
g. Click OK.

Figure 8-96 Add Copy to VDisk window

You can monitor the MDisk copy synchronization progress by selecting the Manage Progress menu option and, then, by selecting the View Progress link.
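The equivalent CLI operation is svctask addvdiskcopy; this sketch adds a striped copy in a second, hypothetical MDG:

   svctask addvdiskcopy -mdiskgrp MDG_DS47 vdisk_A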



8.5.13 Creating a mirrored VDisk

In this section, we create a mirrored VDisk step-by-step. This process creates a highly available VDisk.

Refer to 8.5.3, “Creating a VDisk” on page 505, perform steps 1 to 4, and then, perform the following steps:

1. In the Set Attributes window (Figure 8-97), follow these steps:

a. Select the type of VDisk to create (striped or sequential) from the list.
b. Select the cache mode (read/write or none) from the list.
c. Select a Unit device identifier (a numerical value) for this VDisk.
d. Select the number of VDisks to create.
e. Select the Mirrored Disk check box. Certain mirror disk options will appear.
f. Type the Mirror Synchronization rate, as a percentage. It is set to 50%, by default.
g. Optionally, you can check the Synchronized check box. Select this option when MDisks are already formatted or when read stability to unwritten areas of the VDisk is not required.
h. Click Next.

Figure 8-97 Select the attributes for the VDisk

2. In the Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0) window, as shown in Figure 8-98 on page 524, follow these steps:

a. Select the MDG from the list.
b. Type the capacity of the VDisk. Select the unit of capacity from the list.
c. Click Next.



Figure 8-98 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0) window

3. In the Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1) window, as shown in Figure 8-99, select an MDG for Copy 1 of the mirror. You can define Copy 1 within the same MDG or on another MDG. Click Next.

Figure 8-99 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1) window

4. In the Name the VDisk(s) window (Figure 8-100), type a name for the VDisk that you are creating. In this case, we used MirrorVDisk1. Click Next.

Figure 8-100 Name the VDisk(s) window



5. In <strong>the</strong> Verify Mirrored VDisk Attributes window (Figure 8-101), verify <strong>the</strong> selections. We<br />

can select <strong>the</strong> Back button at any time to make changes.<br />

Figure 8-101 Verifying Mirrored VDisk Attributes window<br />

6. After selecting Finish, we are presented with <strong>the</strong> window, which is shown in Figure 8-102,<br />

that informs us of <strong>the</strong> result of <strong>the</strong> action.<br />

Figure 8-102 Mirrored VDisk creation success<br />

We click Close again, and by clicking our newly created VDisk, we can see more detailed<br />

information about that VDisk, as shown in Figure 8-103 on page 526.<br />



Figure 8-103 List of created mirrored VDisks
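From the CLI, a mirrored VDisk is created in one step by passing -copies 2 and a colon-separated pair of MDGs to svctask mkvdisk. A sketch; both group names are examples:

   svctask mkvdisk -mdiskgrp MDG_DS45:MDG_DS47 -iogrp io_grp0 -vtype striped -size 10 -unit gb -copies 2 -syncrate 50 -name MirrorVDisk1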

8.5.14 Creating a VDisk in image mode

An image mode disk is a VDisk that has an exact one-to-one (1:1) mapping of VDisk extents with the underlying MDisk. For example, extent 0 on the VDisk contains the same data as extent 0 on the MDisk, and so on. Without this 1:1 mapping (for example, if extent 0 on the VDisk mapped to extent 3 on the MDisk), there is little chance that the data on a newly introduced MDisk is still readable.

Image mode is intended for the purpose of migrating data from an environment without the SVC to an environment with the SVC. A LUN that was previously directly assigned to a SAN-attached host can now be reassigned to the SVC (during a short outage) and returned to the same host as an image mode VDisk, with the user’s data intact. During the same outage, the host, cables, and zones can be reconfigured to access the disk, now through the SVC.

After access is re-established, the host workload can resume while the SVC manages the transparent migration of the data to other SVC managed MDisks on the same or another disk subsystem.

We recommend that, during the migration phase of the SVC implementation, you add one image mode VDisk at a time to the SVC environment. This approach reduces the risk of error. It also means that the short outages that are required to reassign the LUNs from the subsystem or subsystems and to reconfigure the SAN and host can be staggered over a period of time to minimize the effect on the business.

As of SVC Version 4.3, you have the ability to create a VDisk mirror or a Space-Efficient VDisk while you are creating an image mode VDisk.

You can use the mirroring option, while making the image mode VDisk, as a storage array migration tool, because the Copy1 MDisk will also be in image mode.

To create a space-efficient image mode VDisk, you must have the same amount of real disk space as the original MDisk, because the SVC is unable to detect how much physical space a host utilizes on a LUN.



Important: You can create an image mode VDisk only by using an unmanaged disk, that is, you must create an image mode VDisk before you add the MDisk that corresponds to your original logical volume to an MDG.

To create an image mode VDisk, perform the following steps:

1. From the My Work window on the left side of your GUI, select Work with Virtual Disks.

2. From the Work with Virtual Disks window, select Virtual Disks.

3. From the list, select Create Image Mode VDisk.

4. From the overview for the creation of an image mode VDisk, select Next.

5. The “Set the attributes for the image mode Virtual Disk you are creating” window opens (Figure 8-104), where you enter the name of the VDisk that you want to create.

Figure 8-104 Set attributes for the VDisk

6. You can also select whether you want to have read and write operations stored in cache by specifying a cache mode. Additionally, you can specify a unit device identifier. You can optionally choose to have a mirrored or Space-Efficient VDisk. Click Next to continue.

Cache mode: You must specify the cache mode when you create the VDisk. After the VDisk is created, you cannot change the cache mode.

We describe the VDisk cache modes in Table 8-1.

Table 8-1 VDisk cache modes

Read/Write   All read and write I/O operations that are performed by the VDisk are stored in cache. Read/Write cache mode is the default cache mode for all VDisks.

None         All read and write I/O operations that are performed by the VDisk are not stored in cache.



Note: If you do not provide a name, the SVC automatically generates the name VDiskn (where n is the ID sequence number that is assigned by the SVC internally).

If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, a dash, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number, a dash, or the word “VDisk” (because this prefix is reserved for SVC assignment only).

7. Next, choose the MDisk to use for your image mode VDisk, as shown in Figure 8-105.

Figure 8-105 Select your MDisk to use for your image mode VDisk

8. Select your I/O Group and preferred node to handle the I/O traffic for the VDisk that you are creating, or have the system choose for you, as shown in Figure 8-106.

Figure 8-106 Select the I/O Group and preferred node

9. Figure 8-107 on page 529 shows you the characteristics of the new image VDisk. Click Finish to complete this task.



Figure 8-107 Verify image VDisk attributes

You can now map the newly created VDisk to your host.
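On the CLI, image mode is selected with -vtype image plus the unmanaged source MDisk; in our understanding, the capacity then defaults to the size of that MDisk. A sketch with placeholder names:

   svctask mkvdisk -mdiskgrp MDG_IMAGE -iogrp io_grp0 -vtype image -mdisk mdisk12 -name ImageVDisk1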

8.5.15 Creating an image mode mirrored VDisk

This procedure adds a mirror copy to the image mode VDisk creation process. The second copy (Copy1) will also be an image mode MDisk. You can use this mirror copy as a storage array migration tool, using the SVC as the data mover. Follow these steps:

1. From the My Work window on the left side of your GUI, select Work with Virtual Disks.

2. From the Work with Virtual Disks window, select Virtual Disks.

3. From the drop-down menu, select Create Image Mode VDisk.

4. After selecting Next on the overview window, you see the attribute selection window, as shown in Figure 8-108 on page 530. Follow these steps:

a. Enter the name of the VDisk that you want to create.
b. Select the Mirrored Disk check box, and a subsection expands. The mirror synchronization rate is a percentage of the peak rate. The Synchronized option is only available when the original disk is unused (or going to be otherwise formatted by the host).



Figure 8-108 Set attributes for the VDisk

5. Figure 8-109 enables you to choose on which of the available MDisks your Copy 0 and Copy 1 will be stored. Notice that we have selected a second MDisk that is larger than the original MDisk. Click Next to proceed.

Figure 8-109 Select MDisks

6. Now, you can optionally select an I/O Group and a preferred node, and you can select an MDG for each of the MDisk copies, as shown in Figure 8-110 on page 531. Click Next to proceed.



Figure 8-110 Choose an I/O Group and an MDG for each of the MDisk copies

7. Figure 8-111 shows you the characteristics of the new image mode VDisk. Click Finish to complete this task.

Figure 8-111 Verify image VDisk attributes

You can monitor the MDisk copy synchronization progress by selecting Manage Progress and then View Progress, as shown in Figure 8-112 on page 532.



Figure 8-112 VDisk copy synchronization status

Optionally, you can assign the VDisk to the host now, or you can wait until it is synchronized and, after deleting the MDisk mirror Copy 1, map the MDisk copy to the host.

8.5.16 Migrating to a Space-Efficient VDisk using VDisk Mirroring

In this scenario, we migrate from a fully allocated (or an image mode) VDisk to a Space-Efficient VDisk using VDisk Mirroring. We repeat the procedure as described and shown in 8.5.12, “Creating a VDisk Mirror from an existing VDisk” on page 521, but here we select the Space-Efficient VDisk as the mirrored copy. Follow these steps:

1. Select a VDisk from the list, choose Add a Mirrored VDisk Copy from the drop-down list (see Figure 8-66 on page 504), and click Go.

2. The Add Copy to VDisk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens. See Figure 8-113 on page 533. You can perform the following steps separately or in combination:

a. Choose the type of VDisk copy that you want to create: striped or sequential.
b. Select the MDG in which you want to put the copy.
c. Click Select MDisk(s) manually, which will expand the section with a list of MDisks that are available for adding.
d. Specify a percentage for the Mirror synchronization rate, which is the I/O governing rate used during initial synchronization. A zero value disables synchronization. You can also select Synchronized, but only when the VDisk has never been used or is going to be formatted by the host.
e. Select Space-efficient. This section will expand. Perform these steps:
i. Type 100 in the % box for the real size to initially allocate. The SVC will see Copy 0 as 100% utilized, so Copy 1 must be defined as the same size.
ii. Clear the Warn when used capacity of VDisk reaches check box.
iii. Check Auto expand.
iv. Set the Grain size. See 8.5.4, “Creating a Space-Efficient VDisk with autoexpand” on page 509 for more information.
f. Click OK.



Figure 8-113 Add a space-efficient copy to VDisk

You can monitor the VDisk copy synchronization progress by selecting the Manage Progress menu option and, then, the View Progress link, as shown in Figure 8-114 on page 534.
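The CLI version adds the space-efficient copy with svctask addvdiskcopy; here, -rsize 100% mirrors the GUI guidance that the new copy must start with a fully allocated real size (names are placeholders):

   svctask addvdiskcopy -mdiskgrp MDG_DS47 -rsize 100% -autoexpand vdisk_A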



Figure 8-114 Two ongoing VDisk copies in the system

8.5.17 Deleting a VDisk copy from a VDisk mirror

After the VDisk copy has finished synchronizing, you can remove the original VDisk copy (Copy 0):

1. In the Viewing Virtual Disks window, select the mirrored VDisk from the list, choose Delete a Mirrored VDisk Copy from the drop-down list (Figure 8-115), and click Go.

Figure 8-115 Viewing Virtual Disks: Deleting a mirrored VDisk copy

2. Figure 8-116 on page 535 displays both copies of the VDisk mirror. Select the original copy (Copy ID 0), and click OK.



Figure 8-116 Deleting VDisk Copy 0

The VDisk is now a single Space-Efficient VDisk copy.

To migrate a Space-Efficient VDisk to a fully allocated VDisk, follow the same scenario, but add a normal (fully allocated) VDisk as the second copy.
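The same removal from the CLI, deleting copy 0 of our placeholder VDisk:

   svctask rmvdiskcopy -copy 0 vdisk_A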

8.5.18 Splitting a VDisk copy

To split off a synchronized VDisk copy to a new VDisk, perform the following steps:

1. Select a mirrored VDisk from the list, choose Split a VDisk Copy from the drop-down list (Figure 8-66 on page 504), and click Go.

2. The Split a Copy from VDisk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens (see Figure 8-117 on page 536). Perform the following steps:

a. Select which copy you want to split.
b. Type a name for the new VDisk.
c. You can optionally force-split the copies even if the copy is not synchronized. However, the split copy will not be point-in-time consistent.
d. Choose an I/O Group and then a preferred node. In our case, we let the system choose.
e. Select the cache mode: Read/Write or None.
f. If you want, enter a unit device identifier.
g. Click OK.



Figure 8-117 Split a copy from a VDisk

This new VDisk is available to be mapped to a host.

Important: After you split a VDisk mirror, you cannot resynchronize or recombine the copies. You must create a VDisk copy from scratch.
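The CLI equivalent, splitting copy 1 off into a new VDisk with a hypothetical name:

   svctask splitvdiskcopy -copy 1 -name vdisk_A_split vdisk_A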

8.5.19 Shrinking a VDisk

The method that the SVC uses to shrink a VDisk is to remove the required number of extents from the end of the VDisk. Depending on where the data actually resides on the VDisk, this action can be quite destructive. For example, you might have a VDisk that consists of 128 extents (0 to 127) of 16 MB (2 GB capacity), and you want to decrease the capacity to 64 extents (1 GB capacity). In this case, the SVC simply removes extents 64 to 127. Depending on the operating system, there is no easy way to ensure that your data resides entirely on extents 0 through 63, so be aware that you might lose data.

Although shrinking is easily done using the SVC, you must ensure that your operating system supports shrinking, either natively or by using third-party tools, before using this function.

In addition, we recommend that you always have a good current backup before you execute this task.

Shrinking a VDisk is useful in certain circumstances, such as:

► Reducing the size of a candidate target VDisk of a copy relationship to make it the same size as the source

► Releasing space from VDisks to have free extents in the MDG, provided that you do not use that space any more and take precautions with the remaining data



Assuming that your operating system supports it, perform the following steps to shrink a VDisk:

1. Perform any necessary steps on your host to ensure that you are not using the space that you are about to remove.

2. Select the VDisk that you want to shrink (Figure 8-66 on page 504). Select Shrink a VDisk from the list, and click Go.

3. The Shrinking Virtual Disks VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens, as shown in Figure 8-118. In the Reduce Capacity By field, enter the capacity by which you want to reduce the VDisk. Select B, KB, MB, GB, TB, or PB. The final capacity of the VDisk is the Current Capacity minus the capacity that you specify.

Capacity: Be careful with the capacity information. The Current Capacity field shows the capacity in MBs, while you can specify a capacity to reduce in GBs. The SVC calculates 1 GB as 1,024 MB.

When you are finished, click OK. The changes become visible on your host.

Figure 8-118 Shrinking a VDisk
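The CLI form expresses the same reduction; this sketch removes 1 GB from our placeholder VDisk:

   svctask shrinkvdisksize -size 1 -unit gb vdisk_A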

8.5.20 Showing the MDisks that are used by a VDisk

To show the MDisks that are used by a specific VDisk, perform the following steps:

1. Select the VDisk for which you want to view MDisk information (Figure 8-66 on page 504). Select Show MDisks This VDisk is Using from the list, and click Go.

2. You will see a subset (specific to the VDisk that you chose in the previous step) of the Viewing Managed Disks window (Figure 8-119).

Figure 8-119 Showing MDisks that are used by a VDisk
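From the CLI, svcinfo lsvdiskmember lists the IDs of the MDisks that provide extents to a VDisk:

   svcinfo lsvdiskmember vdisk_A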



For information about what actions you can perform in this window, see 8.2.4, “Managed<br />

disks” on page 479.<br />

8.5.21 Showing the MDG to which a VDisk belongs

To show the MDG to which a specific VDisk belongs, perform the following steps:

1. Select the VDisk for which you want to view MDG information (Figure 8-66 on page 504). Select Show MDisk Group This VDisk Belongs To from the list, and click Go.

2. You will see a subset (specific to the VDisk that you chose in the previous step) of the Viewing Managed Disk Groups Belonging to VDiskname window (Figure 8-120).

Figure 8-120 Showing an MDG for a VDisk
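From the CLI, the detailed VDisk view includes the owning MDG. A minimal sketch, with VDISK_NAME as a placeholder (look for the mdisk_grp_name field in the output):

   svcinfo lsvdisk VDISK_NAME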

8.5.22 Showing the host to which the VDisk is mapped

To show the host to which a specific VDisk is mapped, select the VDisk for which you want to view host mapping information (Figure 8-66 on page 504). Select Show Hosts This VDisk is Mapped To from the list, and click Go, which shows you the host to which the VDisk is attached (Figure 8-121). Alternatively, you can use the procedure that is described in 8.5.24, “Showing VDisks mapped to a particular host” on page 539 to see all of the VDisk-to-host mappings.

Figure 8-121 Show host to VDisk mapping
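The CLI equivalent is a single command. A sketch, with VDISK_NAME as a placeholder; it lists every host to which the VDisk is mapped, together with the SCSI LUN ID:

   svcinfo lsvdiskhostmap VDISK_NAME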

8.5.23 Showing capacity information

To show the capacity information of the cluster, from the VDisk overview drop-down list, select Show Capacity Information, as shown in Figure 8-122 on page 539.

Figure 8-122 Selecting capacity information for a VDisk

Figure 8-123 shows you the total MDisk capacity, the space in the MDGs, the space allocated to the VDisks, and the total free space.

Figure 8-123 Show capacity information
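Similar figures are available per MDG from the CLI. A sketch (no parameters are required; the capacity and free_capacity columns carry the sizes, although column names can vary slightly by code level):

   svcinfo lsmdiskgrp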

8.5.24 Showing VDisks mapped to a particular host

To show the VDisks that are assigned to a specific host, perform the following steps:

1. From the SVC Welcome window, click Work with Virtual Disks and, then, Virtual Disk to Host Mappings (Figure 8-124).

Figure 8-124 VDisk to host mapping

2. Now you can see to which host each VDisk belongs. If this is a long list, you can use the Additional Filtering and Sort option from 8.7.1, “Organizing on window content” on page 543.
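To produce the same view from the CLI for one host, a minimal sketch (HOST_NAME is a placeholder; omitting the host name lists the mappings for all hosts):

   svcinfo lshostvdiskmap HOST_NAME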

8.5.25 Deleting VDisks from a host

Perform these steps to delete a mapping:

1. From the same window where you can view the VDisk-to-host mappings (Figure 8-124 on page 539), you can also delete a mapping. Select the host and VDisk combination that you want to delete. Ensure that Delete a Mapping is selected from the list. Click Go.

2. Confirm the selection that you made in Figure 8-125 by clicking Delete.

Figure 8-125 Deleting VDisk to Host mapping

3. Now you are back at the window that is shown in Figure 8-124 on page 539. You can now assign this VDisk to another host, as described in 8.5.8, “Assigning a VDisk to a host” on page 516.

You have now completed the required tasks to manage VDisks within an SVC environment.
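The CLI equivalent of deleting a mapping is shown in the following sketch; HOST_NAME and VDISK_NAME are placeholders for the combination that you selected in the GUI:

   svctask rmvdiskhostmap -host HOST_NAME VDISK_NAME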

8.6 Working with solid-state drives

In SVC 5.1, solid-state drives are introduced as part of each SVC node. During our operational work on the SVC cluster, it is necessary to know how to identify where our solid-state drives are located and how they are configured. In this section, we describe the basic operational tasks related to the solid-state drives. Note that storing the quorum disk on solid-state drives is not supported.

More detailed information about solid-state drives and internal controllers is in 2.5, “Solid-state drives” on page 49.

8.6.1 Solid-state drive introduction

If you have solid-state drives installed in your nodes, they appear as unmanaged MDisks, which are controlled by an internal controller. This controller is only used for the solid-state drives, and each controller is dedicated to a single node; therefore, we can have eight internal controllers in a single cluster configuration. Those controllers are automatically assigned as owners of the solid-state drives, and the controllers have the same worldwide node name (WWNN) as the node to which they belong. An internal controller is identified in Figure 8-126 on page 541.

Figure 8-126 SVC internal controller

The unmanaged MDisks (solid-state drives) are owned by the internal controllers. When these MDisks are added to an MDG, we recommend that you create a dedicated MDG for the solid-state drives. When those MDisks are added to an MDG, they become managed and are treated as any other MDisks in an MDG.

If we look closer at one of the selected controllers, as shown in Figure 8-127, we can verify the SVC node that owns this controller, and we can verify that this controller is an internal SVC controller.

Figure 8-127 Shows internal solid-state drive controller

We can now check which MDisks (sourced from our solid-state drives) are provisioned from that controller, as shown in Figure 8-128 on page 542.

Figure 8-128 Our solid-state drives

From this view, we can see all of the relevant information, such as the status, the MDG, and the size. To see more detailed information about a single MDisk (a single solid-state drive), we click that MDisk and its details are displayed, as shown in Figure 8-129.

Figure 8-129 Showing details for a solid-state drive MDisk

Notice the controller type (6), which is an identifier for the internal controller type.

When you have your solid-state drives in full operation and you want to see the VDisks that use your solid-state drives, the easiest way is to locate the MDG that contains your solid-state drives as MDisks, and select Show VDisks Using This Group, as shown in Figure 8-130 on page 543.

Figure 8-130 Showing VDisks using our solid-state drives

This action displays the VDisks that use your solid-state drives.
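The same information can be gathered from the CLI. A sketch, assuming the internal controller appears as controller0 in your configuration (substitute the name from your own lscontroller output):

   svcinfo lscontroller
   svcinfo lsmdisk -filtervalue controller_name=controller0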

8.7 SVC advanced operations using the GUI

In the following topics, we describe the more advanced activities.

8.7.1 Organizing on window content

Detailed information about filtering and sorting the content that is displayed in the GUI is available in 8.1.1, “Organizing on window content” on page 470.

If you need to access the online help, click the question mark icon in the upper-right corner of the window. This icon opens an information center window where you can search on any item for which you want help (see Figure 8-131 on page 544).

Figure 8-131 Online help using the question mark icon

General maintenance

If, at any time, the content in the right side of the frame is abbreviated, you can collapse the My Work column by clicking the small arrow at the top of the My Work column. When collapsed, the small arrow changes from pointing to the left to pointing to the right. Clicking the small arrow that points right expands the My Work column back to its original size.

In addition, each time that you open a configuration or administrative window using the GUI in the following sections, it creates a link for that window along the top of your Web browser beneath the banner graphic. As a general maintenance task, we recommend that you close each window when you finish using it by clicking the close icon to the right of the window name. Be careful not to close the entire browser.

8.8 Managing the cluster using the GUI

This section explains the various configuration and administrative tasks that you can perform on the cluster.

8.8.1 Viewing cluster properties

Perform the following steps to display the cluster properties:

1. From the SVC Welcome window, select Manage Cluster and, then, View Cluster Properties.

2. The Viewing General Properties window (Figure 8-132 on page 545) opens. Click IP Addresses, Remote Authentication, Space, Statistics, Metro & Global Mirror, iSCSI, SNMP, Syslog, E-mail server, and E-mail user to see additional information about your cluster's configuration.

Figure 8-132 View Cluster Properties: General properties
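The CLI shows the same properties in one detailed view. A minimal sketch, assuming a cluster named ITSO_CLS3 as in our examples:

   svcinfo lscluster ITSO_CLS3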

8.8.2 Modifying IP addresses

In SVC 5.1, a new function enables us to use both IP ports of each node; there are now two active cluster ports on each node. We describe the two active cluster ports on each node in further detail in 2.2.11, “Usage of IP addresses and Ethernet ports” on page 28.

If the cluster IP address is changed, the open command-line shell closes during the processing of the command. You must reconnect to the new IP address if the cluster is connected through that port.

In this section, we discuss the modification of IP addresses.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the GUI is lost. You need to relaunch the SAN Volume Controller Application from the GUI Welcome window.

Modifying the IP address of the cluster, although quite simple, requires reconfiguration of other items within the SVC environment, including reconfiguring the central administration GUI by adding the cluster again with its new IP address.

Perform the following steps to modify the cluster and service IP addresses of our SVC configuration:

1. From the SVC Welcome window, select Manage Cluster and, then, Modify IP Addresses.

2. The Modify IP Addresses window (Figure 8-133 on page 546) opens.

Figure 8-133 Modify cluster IP address

Select the port that you want to modify, select Modify Port Setting, and click Go. Notice that you can configure both ports on the SVC node, as shown in Figure 8-134.

Figure 8-134 Modify cluster IP addresses

We enter the new information, as shown in Figure 8-135 on page 547.

Figure 8-135 Entering the new cluster IP address

3. You advance to the next window, which shows a message indicating that the IP addresses were updated.

You have now completed the required tasks to change the IP addresses (cluster, service, gateway, and Master Console) for your SVC environment.
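On the CLI, we understand that the SVC 5.1 chclusterip command modifies the cluster IP configuration per port; the following sketch is therefore an assumption to verify against your code level (all addresses are placeholders):

   svctask chclusterip -port 1 -clusterip 9.64.210.64 -gw 9.64.210.1 -mask 255.255.255.0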

8.8.3 Starting the statistics collection

Perform the following steps to start the statistics collection on your cluster:

1. From the SVC Welcome window, select Manage Cluster and Start Statistics Collection.

2. The Starting the Collection of Statistics window (Figure 8-136) opens. Make an interval change, if desired. The interval that you specify (minimum 1, maximum 60) is in minutes. Click OK.

Figure 8-136 Starting the Collection of Statistics

3. Although the window does not state the current status, clicking OK turns on the statistics collection. To verify, click Cluster Properties, as you did in 8.8.1, “Viewing cluster properties” on page 544. Then, click Statistics. You see the interval as specified in step 2 and a status of On, as shown in Figure 8-137 on page 548.

Figure 8-137 Verifying that statistics collection is on

You have now completed the required tasks to start statistics collection on your cluster.
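The CLI equivalents are the startstats and stopstats commands. A sketch that sets a 15-minute interval (the matching svctask stopstats command stops the collection again, as in the next section):

   svctask startstats -interval 15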

8.8.4 Stopping the statistics collection

Perform the following steps to stop statistics collection on your cluster:

1. From the SVC Welcome window, select Manage Cluster and Stop Statistics Collection.

2. The Stopping the Collection of Statistics window (Figure 8-138) opens, and you see a message asking whether you are sure that you want to stop the statistics collection. Click Yes to stop the ongoing task.

Figure 8-138 Stopping the collection of statistics

3. The window closes. To verify that the collection has stopped, click Cluster Properties, as you did in 8.8.1, “Viewing cluster properties” on page 544. Then, click Statistics. Now, you see that the status has changed to Off, as shown in Figure 8-139.

Figure 8-139 Verifying that statistics collection is off

You have now completed the required tasks to stop statistics collection on your cluster.

8.8.5 Metro Mirror and Global Mirror

From the Manage Cluster window, we can see how Metro Mirror or Global Mirror is configured. In Figure 8-140, we can see the overview of partnership properties and which clusters are currently in partnership with our cluster.

Figure 8-140 Metro Mirror and Global Mirror overview

8.8.6 iSCSI

From the View Cluster Properties window, we can select iSCSI to see the iSCSI overview. The iSCSI properties show whether the iSNS server and CHAP are configured and what type, if any, of authentication is supported (Figure 8-141).

Figure 8-141 iSCSI overview from cluster properties window

8.8.7 Setting the cluster time and configuring the Network Time Protocol server

Perform the following steps to configure time settings:

1. From the SVC Welcome window, select Manage Cluster and Set Cluster Time.

2. The Cluster Date and Time Settings window (Figure 8-142 on page 550) opens. At the top of the window, you can see the current settings.

3. If you are using a Network Time Protocol (NTP) server, enter the IP address of the NTP server and select Set NTP Server. From then on, the cluster uses that server's settings as its time reference.

4. If it is necessary to change the cluster time, select Update Cluster Time.

Figure 8-142 Changing cluster date and time

You have now completed the necessary tasks to configure an NTP server and to set the cluster time zone and time.
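From the CLI, the time zone can be set with settimezone (the numeric ID comes from the lstimezones output; 520 below is only a placeholder). The NTP parameter shown is our assumption for this code level, so verify it against your documentation:

   svcinfo lstimezones
   svctask settimezone -timezone 520
   svctask chcluster -ntpip 9.64.210.10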

8.8.8 Shutting down a cluster

If all input power to a SAN Volume Controller cluster is to be removed for more than a few minutes (for example, if the machine room power is shut down for maintenance), it is important that you shut down the cluster before you remove the power. Shutting down the cluster while it is still connected to the main power ensures that the uninterruptible power supply unit batteries are still fully charged when power is restored.

If you remove the main power while the cluster is still running, the uninterruptible power supply unit detects the loss of power and instructs the nodes to shut down. This shutdown can take several minutes to complete, and although the uninterruptible power supply unit has sufficient power to perform the shutdown, you will be unnecessarily draining its batteries.

When power is restored, the SVC nodes start; however, one of the first checks that the SVC nodes make is to ensure that the uninterruptible power supply unit's batteries have sufficient power to survive another power failure, enabling the node to perform a clean shutdown. (We do not want the uninterruptible power supply unit to run out of power while the node's shutdown activities have not yet completed.) If the uninterruptible power supply unit's batteries are not sufficiently charged, the node will not start. It can take up to three hours to charge the batteries sufficiently for a node to start.

Note: When a node shuts down due to loss of power, the node dumps the cache to an internal hard drive so that the cached data can be retrieved when the cluster starts. With the 8F2/8G4 nodes, the cache is 8 GB and can take several minutes to dump to the internal drive.

SVC uninterruptible power supply units are designed to survive at least two power failures in a short time before nodes will refuse to start until the batteries have sufficient power (to survive another immediate power failure). If, during your maintenance activities, the uninterruptible power supply unit detected power and a loss of power multiple times (and thus the nodes started and shut down more than one time in a short time frame), you might find that you have unknowingly drained the uninterruptible power supply unit batteries. You will have to wait until they are charged sufficiently before the nodes will start.

Important: Before shutting down a cluster, quiesce all I/O operations that are destined for this cluster, because you will lose access to all of the VDisks that are provided by this cluster. Failure to do so might result in failed I/O operations being reported to your host operating systems. There is no need to quiesce all I/O operations if you are only shutting down one SVC node.

Begin the process of quiescing all I/O to the cluster by stopping the applications on your hosts that are using the VDisks that are provided by the cluster. If you are unsure which hosts are using the VDisks that are provided by the cluster, follow the procedure in 8.5.22, “Showing the host to which the VDisk is mapped” on page 538, and repeat this procedure for all VDisks.

Perform the following steps to shut down your cluster:

1. From the SVC Welcome window, select Manage Cluster and Shut Down Cluster.

2. The Shutting Down cluster window (Figure 8-143) opens. You will get a message asking you to confirm whether you want to shut down the cluster. Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data migration operations, and forced deletions before continuing. Click Yes to begin the shutdown process.

Note: At this point, you will lose administrative contact with your cluster.

Figure 8-143 Shutting down the cluster
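The CLI equivalent of this GUI shutdown is the stopcluster command; run without parameters, it shuts down the entire cluster (a sketch; heed the same quiesce warnings given above):

   svctask stopcluster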

You have now completed the required tasks to shut down the cluster. Now, you can shut down the uninterruptible power supply units by pressing the power buttons on their front panels.

Tip: When you shut down the cluster, it will not automatically restart; you must manually start the cluster. If the cluster shuts down because the uninterruptible power supply unit has detected a loss of power, it automatically restarts when the uninterruptible power supply unit detects that the power has been restored (and the batteries have sufficient power to survive another immediate power failure).

Note: To restart the SVC cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. After they are on, go to the service panel of one of the nodes within your SVC cluster and press the power-on button, releasing it quickly. After the node is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the SVC front panel), you can start the other nodes in the same way.

As soon as all of the nodes are fully booted and you have re-established administrative contact using the GUI, your cluster is fully operational again.

8.9 Manage authentication

Users are managed from within the Manage Authentication window in the SAN Volume Controller Console GUI (see Figure 8-146 on page 554).

Each user account has a name, a role, and a password assigned to it, which differs from the Secure Shell (SSH) key-based role approach that is used by the CLI. We describe authentication in detail in 2.3.5, “User authentication” on page 40.

The role-based security feature organizes the SVC administrative functions into groups, which are known as roles, so that permissions to execute the various functions can be granted differently to the separate administrative users. There are four major roles and one special role. Table 8-2 on page 553 shows the user roles.

Table 8-2 Authority roles

User group: Security Admin
Role: All commands
User: Superusers

User group: Administrator
Role: All commands except these svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
User: Administrators that control the SVC

User group: Copy Operator
Role: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
User: For those users that control all copy functionality of the cluster

User group: Service
Role: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
User: For those users that perform service maintenance and other hardware tasks on the cluster

User group: Monitor
Role: All svcinfo commands, the following svctask commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser, and the svcconfig command: backup
User: For those users only needing view access

The superuser user is a built-in account that has the Security Admin user role permissions. You cannot change permissions or delete this superuser account; you can only change the password. You can also change this password manually on the front panels of the cluster nodes.

8.9.1 Modify current user

From the SVC Welcome window, select Manage authentication in the My Work pane, and select Modify Current User, as shown in Figure 8-144 on page 554.

Figure 8-144 Modifying current user

Toward the upper-left side of the window, you can see the name of the user that you are modifying. We enter our new password, as shown in Figure 8-145.

Figure 8-145 Changing password for the current user

8.9.2 Creating a user

Perform the following steps to view and create a user:

1. From the SVC Welcome window, select Users in the My Work pane, as shown in Figure 8-146.

Figure 8-146 Viewing users

2. Select Create a User from the list, as shown in Figure 8-147.

Figure 8-147 Create a user

3. Enter a name for your user and the desired password. Because we are not connected to a Lightweight Directory Access Protocol (LDAP) server, we select Local for the authentication type; therefore, we can choose to which user group our user belongs. In our scenario, we are creating a user for SAN administrative purposes, so it is appropriate to add this user to the Administrator group. We attach the SSH key, as well, so that a CLI session can be opened. We view the attributes, as shown in Figure 8-148.

Figure 8-148 Creating attributes for new user called qwerty

We see the result of our creation in Figure 8-149.

Figure 8-149 Overview of users that we have created
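A minimal CLI sketch of the same user creation; the password and key file path are placeholders, and the -keyfile parameter reflects our understanding of how an SSH key is attached at this code level:

   svctask mkuser -name qwerty -usergrp Administrator -password Passw0rd -keyfile /tmp/qwerty.pub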

8.9.3 Modifying a user role

Perform the following steps to modify a role:

1. Select the user, as shown in Figure 8-150, to change the assigned role. Select Modify a User from the list, and click Go.

Figure 8-150 Modify a user

2. You have the option of changing the password, assigning a new role, or changing the SSH key for the given user name. Click OK (Figure 8-151).

Figure 8-151 Modifying a user window
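The equivalent CLI change uses chuser. A sketch that moves our example user to another group (the group and user names are from our scenario):

   svctask chuser -usergrp CopyOperator qwerty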

8.9.4 Deleting a user role

Perform the following steps to delete a user role:

1. Select the user that you want to delete. Select Delete Users from the drop-down list (Figure 8-152 on page 557), and click Go.

Figure 8-152 Delete a user

2. Click OK to confirm that you want to delete the user, as shown in Figure 8-153.

Figure 8-153 Confirming deleting a user

8.9.5 User groups

We have several options to change and modify our user groups. We have five roles to assign to our user groups. Those roles cannot be modified, but a new user group can be created and linked to an already configured role. In Figure 8-154, we select Create a Group.

Figure 8-154 Create a new user group

Here, we have several options for our user group, and we find detailed information about the available groups. In Figure 8-155 on page 558, we can see the options, which are the same options with which we are presented when we select Modify User group.

Figure 8-155 Create user group or modify a user group
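Creating a group from the CLI is a one-liner; the following sketch assumes a mkusergrp -role parameter that names one of the five roles (verify against your code level; the group name is a placeholder):

   svctask mkusergrp -name ITSO_Admins -role Administrator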

8.9.6 Cluster password

To change the cluster password, select Manage authentication from the Welcome window, and then, select Cluster Passwords, as shown in Figure 8-156.

Figure 8-156 Change cluster password

8.9.7 Remote authentication

To enable remote authentication using LDAP, we configure our SVC cluster by selecting Manage Authentication from My Work and selecting Remote Authentication, as shown in Figure 8-157.

Figure 8-157 Configuring Remote Authentication Services window

We have now completed the tasks that are required to create, modify, and delete users and user groups within the SVC cluster.

8.10 Working with nodes using the GUI

This section discusses the various configuration and administrative tasks that you can perform on the nodes within an SVC cluster.

8.10.1 I/O Groups

This section details the tasks that can be performed at an I/O Group level.

8.10.2 Renaming an I/O Group

Perform the following steps to rename an I/O Group:

1. From the SVC Welcome window, select Work with Nodes and I/O Groups.

2. The Viewing Input/Output Groups window (Figure 8-158) opens. Select the I/O Group that you want to rename. In this case, we select io_grp1. Ensure that Rename an I/O Group is selected from the drop-down list. Click Go.

Figure 8-158 Viewing I/O Groups

3. On the Renaming I/O Group I/O Group name window (where I/O Group name is the I/O Group that you selected in the previous step), type the new name that you want to assign to the I/O Group. Click OK, as shown in Figure 8-159. Our new name is PROD_IO_GRP.

Figure 8-159 Renaming the I/O Group

I/O Group name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number, the dash, or the word “iogrp” (because this prefix is reserved for SVC assignment only).

SVC also uses “io_grp” as a reserved word prefix. An I/O Group name therefore cannot be changed to io_grpn, where n is a numeric; however, io_grpny or io_grpyn, where y is any non-numeric character that is used in conjunction with n, is acceptable.

We have now completed the required tasks to rename an I/O Group.
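The rename can also be done with a single CLI command. A sketch using the names from our example:

   svctask chiogrp -name PROD_IO_GRP io_grp1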

8.10.3 Adding nodes to the cluster

After cluster creation is completed through the service panel (the front panel of one of the SVC nodes) and the cluster Web interface, only one node (the configuration node) is set up. To be a fully functional SVC cluster, you must add at least a second node to the configuration.

Perform the following steps to add nodes to the cluster:

1. Open the GUI using one of the following methods:

– Double-click the SAN Volume Controller Console icon on your System Storage Productivity Center desktop.

– Open a Web browser on the System Storage Productivity Center console and point to this address:
http://localhost:9080/ica

– Open a Web browser on a separate workstation and point to this address:
http://sspcconsoleipaddress:9080/ica

2. The GUI Welcome window opens, as shown in Figure 8-160, and we select Clusters from the My Work window. This window contains several useful links and information: My Work (top left), the GUI version and build level information (on the right, under the graphic), and a hypertext link to the SVC download page:
http://www.ibm.com/storage/support/2145

Figure 8-160 GUI Welcome window

3. The Viewing Clusters window opens, as shown in Figure 8-161. On the Viewing Clusters window, select the cluster on which you want to perform actions (in our case, ITSO_CLS3). Click Go.

Figure 8-161 Launch the SAN Volume Controller application

4. The SAN Volume Controller Console Application launches in a separate browser window (Figure 8-162). In this window, as with the Welcome window, you can see several links under My Work (top left), a Recent Tasks list (bottom left), the SVC Console version and build level information (on the right, under the graphic), and a hypertext link that takes you to the SVC download page:
http://www.ibm.com/storage/support/2145

Under My Work, click Work with Nodes and, then, Nodes.

Figure 8-162 SVC Console Welcome window

5. The Viewing Nodes window (Figure 8-163 on page 562) opens. Note the input/output (I/O) group name (for example, io_grp0). Select the node that you want to add. Ensure that Add a node is selected from the drop-down list, and click Go.

Figure 8-163 Viewing Nodes

Node name: You can rename the existing node to your own naming convention standards (we show you how to rename the existing node later). In your window, it appears as node1, by default.

6. The next window (Figure 8-164) displays the available nodes. Select the node from the Available Candidate Nodes drop-down list. Associate it with an I/O Group and provide a name (for example, SVCNode2). Click OK.

Figure 8-164 Adding a Node to a Cluster window

Note: If you do not provide a name, the SVC automatically generates the name noden, where n is the ID sequence number that is assigned by the SVC internally. If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name must be between one and 15 characters in length, but it cannot start with a number or the word “node” (because this prefix is reserved for SVC assignment only).

In our case, we only have enough nodes to complete the formation of one I/O Group. Therefore, we added our new node to the I/O Group that node1 was already using, io_grp0 (you can rename the I/O Group from the default of io_grp0 using your own naming convention standards).

If this window does not display any available nodes (which is indicated by the message “CMMVC1100I There are no candidate nodes available”), check whether your second node is powered on and whether zones are appropriately configured in your switches. It is also possible that a pre-existing cluster's configuration data is stored on the second node. If you are sure that this node is not part of another active SVC cluster, use the service panel to delete the existing cluster information. When this action is complete, return to this window and you will see the node listed.

7. Return to the Viewing Nodes window (Figure 8-165). It shows the status change of the node from Adding to Online.

Figure 8-165 Node added and currently has status “adding”

Refresh: This window does not automatically refresh. Therefore, you continue to see the Adding status until you click Refresh.

We have completed the cluster configuration. Now, you have a fully redundant SVC environment.
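On the CLI, candidate nodes are listed and added with two commands. A sketch; the WWNN shown is a placeholder taken from your own lsnodecandidate output:

   svcinfo lsnodecandidate
   svctask addnode -wwnodename 50050768010027E2 -iogrp io_grp0 -name SVCNode2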

8.10.4 Configuring iSCSI ports

In this topic, we show how to configure a cluster for use with iSCSI.

We will configure our nodes to use the primary and secondary Ethernet ports for iSCSI, as well as to contain the cluster IP. While we are configuring our nodes to be used with iSCSI, we are not affecting our cluster IP. The cluster IP is changed as shown in 8.8, “Managing the cluster using the GUI” on page 544.

It is important to know that you can have more than a one-to-one relationship between IP addresses and a physical connection. The capability exists to have a four-to-one (4:1) relationship, consisting of two IPv4 addresses plus two IPv6 addresses (four total) to one physical connection, per port, per node.

Important: When reconfiguring IP ports, be aware that you must reconnect already configured iSCSI connections if changes are made to the IP addresses of the nodes.

You can perform iSCSI authentication, or CHAP, in either of two ways: for the whole cluster or per host connection. We show configuring CHAP for the entire cluster in 8.8.6, “iSCSI” on page 549.

In our scenario, we have a cluster IP of 9.64.210.64, as shown in Figure 8-166 on page 564. That cluster IP will not be impacted during our configuration of the nodes' IP addresses.

Figure 8-166 Cluster IP address shown

Perform these steps:

1. We start by selecting Work with nodes from our Welcome window and by selecting Node Ethernet Ports, as shown in Figure 8-167.

Figure 8-167 Configuring node Ethernet ports

We can see that we have four (two per node) connections to use. They are all physically connected with a 100 Mb link, but they are not configured yet.

From the list, we select Configure a Node Ethernet Port and insert the IP address that we intend to use for iSCSI, as shown in Figure 8-168.

Figure 8-168 IP parameters for iSCSI

2. We can now see that one of our Ethernet ports is configured and online, as shown in Figure 8-169 on page 565. We perform the same task to configure the three remaining IP addresses.

Figure 8-169 Ethernet port successfully configured and online

We configure the remaining ports and use a unique IP address for each port. When finished, all of our Ethernet ports are configured, as shown in Figure 8-170.

Figure 8-170 All Ethernet ports are online

Now, both physical ports on each node are configured for iSCSI.
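The per-node iSCSI port addresses can also be set with the cfgportip command. A sketch, assuming node1, port 1, and placeholder addressing (repeat for each port and node):

   svctask cfgportip -node node1 -ip 9.64.210.70 -mask 255.255.255.0 -gw 9.64.210.1 1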

We can see the iSCSI identifier (iSCSI name) for our SVC nodes by selecting Working with nodes from our Welcome window. Then, by selecting Nodes, under the iSCSI Name column, we see our iSCSI identifier, as shown in Figure 8-171.

Each node has a unique iSCSI name associated with two IP addresses. After the host has initiated the iSCSI connection to a target node, this IQN from the target node will be visible in the iSCSI configuration tool on the host.

Figure 8-171 iSCSI identifier for our nodes

You can also enter an iSCSI alias for the iSCSI name on the node, as shown in Figure 8-172 on page 566.

Figure 8-172 Entering an iSCSI alias name

We change the alias to a name that is easier to recognize, as shown in Figure 8-173.

Figure 8-173 Changing the iSCSI alias name

We have now finished configuring iSCSI for our SVC cluster.

8.11 Managing Copy Services

See Chapter 6, “Advanced Copy Services” on page 255 for more information about the functionality of Copy Services in the SVC environment.

8.12 FlashCopy operations using the GUI

It is often easier to work with FlashCopy by using the GUI, as long as you have a small number of mappings. When using many mappings, we recommend that you use the CLI to execute your commands.

8.13 Creating a FlashCopy consistency group

To create a FlashCopy consistency group in the SVC GUI, perform these steps:

1. Expand Manage Copy Services in the Task pane, and select FlashCopy Consistency Groups (Figure 8-174 on page 567).

Figure 8-174 Select FlashCopy Consistency Groups

2. Then, from the list, select Create a Consistency Group, and click Go (Figure 8-175).

Figure 8-175 Create a FlashCopy consistency group

3. Enter the desired FlashCopy consistency group name, and click OK, as shown in Figure 8-176.

Figure 8-176 Create consistency group

Autodelete: If you choose to use the Automatically Delete Consistency Group When Empty feature, you can only use this consistency group for mappings that are marked for autodeletion. A non-autodelete consistency group can contain both autodelete FlashCopy mappings and non-autodelete FlashCopy mappings.

4. Click Close when the new name has been entered. Figure 8-177 on page 568 shows the result.

Figure 8-177 View consistency group

Repeat the previous steps to create another FlashCopy consistency group (Figure 8-178). The FlashCopy consistency groups are now ready to use.

Figure 8-178 Viewing FlashCopy Consistency Groups
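A CLI sketch of the same creation, using our group name (add -autodelete if the group is to delete itself when it becomes empty):

   svctask mkfcconsistgrp -name FC_SIGNA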

8.13.1 Creating a FlashCopy mapping

In this section, we create the FlashCopy mappings for each of our VDisks for their respective targets. Follow these steps:

1. In the SVC GUI, expand Manage Copy Services in the Task pane, and select FlashCopy mappings.

2. When prompted for filtering, select Bypass Filter to show all of the defined FlashCopy mappings, if any FlashCopy mappings were created previously.

3. As shown in Figure 8-179, select Create a Mapping from the list, and click Go to start the creation process for a FlashCopy mapping.

Figure 8-179 Create a FlashCopy mapping

4. We are then presented with the FlashCopy creation wizard overview of the creation process for a FlashCopy mapping, and we click Next to proceed.

5. We name the first FlashCopy mapping PROD_1, select the previously created consistency group FC_SIGNA, set the background copy priority to 50 and the grain size to 64, and click Next to proceed, as shown in Figure 8-180 on page 569.

Figure 8-180 Define FlashCopy mapping properties

6. The next step is to select the source VDisk. If there are many source VDisks that are not already defined in a FlashCopy mapping, we can filter that list here. In Figure 8-181, we define the filter * (the asterisk shows us all of our VDisks) for the source VDisk, and click Next to proceed.

Figure 8-181 Filter source VDisk candidates

7. We select Galtarey_01 from the available VDisks as our source disk, and click Next to proceed.

8. The next step is to select our target VDisk. The FlashCopy mapping wizard only presents a list of the VDisks that are the same size as the source VDisk, that are not already in a FlashCopy mapping, and that are not already defined in a Metro Mirror relationship. In Figure 8-182 on page 570, we select the target Hrappsey_01 and click Next to proceed.

Figure 8-182 Select target VDisk

In the next step, we select an I/O Group for this mapping.

9. Finally, we verify our FlashCopy mapping (Figure 8-183) and click Finish to create it.

Figure 8-183 FlashCopy mapping verification

We check the result of this FlashCopy mapping, as shown in Figure 8-184.

Figure 8-184 View FlashCopy mapping

We repeat the procedure to create a second FlashCopy mapping for the source VDisk named Galtarey_01:

1. We give this mapping another FlashCopy mapping name and choose a separate FlashCopy consistency group, as shown in Figure 8-185 on page 571.

2. As you can see in this example, we changed the background copy rate to 30, which slows down the background copy process. The clearing rate of 60 affects how long the stopping process takes if we have to stop the mapping during a copy process. An incremental mapping copies only the parts of the source or target VDisk that have changed since the last FlashCopy process.

Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target VDisk.

Figure 8-185 Creating a FlashCopy mapping type of incremental

Consistency groups: If no consistency group is defined, the mapping is a stand-alone mapping, and it can be prepared and started without affecting other mappings. All mappings in the same consistency group must have the same status to maintain the consistency of the group.

In Figure 8-186 on page 572, you can see that Galtarey_01 is still available.

Figure 8-186 Viewing FlashCopy mapping

3. We select Heimaey_02 as the destination VDisk, as shown in Figure 8-187.

Figure 8-187 Select a second target VDisk

4. On the final page of the wizard, as shown in Figure 8-188 on page 573, we select Finish after verifying all of the parameters.

Figure 8-188 Verification of FlashCopy mapping<br />

The background copy rate specifies <strong>the</strong> priority to give to complete <strong>the</strong> copy. If 0 is specified,<br />

<strong>the</strong> copy does not proceed in <strong>the</strong> background. A default value is 50.<br />

Tip: You can invoke FlashCopy from <strong>the</strong> SVC GUI, but using <strong>the</strong> SVC GUI might not make<br />

much sense if you plan to handle a large number of FlashCopy mappings or consistency<br />

groups periodically, or at varying times. In this case, creating a script by using <strong>the</strong> CLI<br />

might be more convenient.<br />
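
As a sketch of that scripting approach, the following CLI commands create mappings like the two in this example. The svctask mkfcmap command and its -consistgrp, -copyrate, -cleanrate, and -incremental parameters are from the SVC 5.1 CLI, but the source VDisk and the mapping and group names are placeholders, because they are not named in the text; verify the names and syntax on your own cluster before scripting:

   # simple mapping to the first target VDisk
   svctask mkfcmap -source <source_vdisk> -target Hrappsey_01 -name <map1_name>

   # incremental mapping to the second target, with the rates used above
   svctask mkfcmap -source <source_vdisk> -target Galtarey_01 -name <map2_name> \
      -consistgrp <group_name> -copyrate 30 -cleanrate 60 -incremental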

8.13.2 Preparing (pre-triggering) the FlashCopy

When performing the FlashCopy on the VDisks that hold the database, we want to control the point in time at which the FlashCopy is triggered, to keep our quiesce time to a minimum and to preserve data integrity. We put the VDisks in a consistency group, and then we prepare the consistency group to flush the cache for all of the source VDisks.

If you select only one mapping to be prepared, the cluster asks whether you want all of the volumes in that consistency group to be prepared, as shown in Figure 8-189.

Figure 8-189 FlashCopy messages

When you have assigned several mappings to a FlashCopy consistency group, you only have to issue a single prepare command for the whole group to prepare all of the mappings at one time.

We select the FlashCopy consistency group, select Prepare a consistency group from the list, and click Go. The status changes to Preparing and then, finally, to Prepared. Click Refresh several times until the FlashCopy consistency group is in the Prepared state.

Figure 8-190 on page 574 shows how we check the result. The status of the consistency group has changed to Prepared.

Figure 8-190 View Prepared state of consistency groups
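
As a CLI sketch of the same prepare step: the GUI's Prepare action corresponds to the svctask prestartfcconsistgrp command (or svctask prestartfcmap for a stand-alone mapping). Assuming the FC_DONA consistency group name that is used later in this section:

   svctask prestartfcconsistgrp FC_DONA
   # poll until the group reaches the prepared state
   svcinfo lsfcconsistgrp FC_DONA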

8.13.3 Starting (triggering) FlashCopy mappings

When the FlashCopy mapping enters the Prepared state, we can start the copy process. Only mappings that are not a member of a consistency group, or the only mapping in a consistency group, can be started individually. As shown in Figure 8-191, we select the FlashCopy mapping that we want to start, select Start a Mapping from the menu, and click Go to proceed.

Figure 8-191 Start a FlashCopy mapping

Because we have already prepared the FlashCopy mapping, we are ready to start the mapping right away. Notice that this mapping is not a member of any consistency group. An overview message with information about the mapping that we are about to start is shown in Figure 8-192, and we select Start to start the FlashCopy mapping.

Figure 8-192 Starting a FlashCopy mapping

After we have selected Start, we are automatically shown the copy process view, which shows the progress of our copy mappings.
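
In the CLI, a hedged equivalent of this action is the svctask startfcmap command; the mapping name here is a placeholder. The -prep variant prepares and starts the mapping in one step, which is convenient when no explicit quiesce window is needed:

   # start a mapping that is already in the Prepared state
   svctask startfcmap <map_name>

   # or prepare and start in a single step
   svctask startfcmap -prep <map_name>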

8.13.4 Starting (triggering) a FlashCopy consistency group

As shown in Figure 8-193 on page 575, the FlashCopy consistency group enters the Prepared state. All of the mappings in this group will be brought to the same state. To start the FlashCopy consistency group, we select the consistency group, select Start a Consistency Group from the list, and click Go.

Figure 8-193 Start the consistency group

In Figure 8-194, we are prompted to confirm starting the FlashCopy consistency group. We now flush the database and OS buffers and quiesce the database. Then, we click OK to start the FlashCopy consistency group.

Note: Because we have already prepared the FlashCopy consistency group, this option is grayed out when you are prompted to confirm starting the FlashCopy consistency group.

Figure 8-194 Start consistency group message

As shown in Figure 8-195, we verify that the consistency group is in the Copying state, and subsequently, we resume the database I/O.

Figure 8-195 Consistency group status
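
The same sequence can be scripted with the CLI, which makes it easier to keep the quiesce window short. A minimal sketch, assuming the FC_DONA group name from this section and leaving the application quiesce as a placeholder step:

   svctask prestartfcconsistgrp FC_DONA
   # quiesce the database and flush the OS buffers here
   svctask startfcconsistgrp FC_DONA
   # resume database I/O after the group enters the Copying state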

8.13.5 Monitoring the FlashCopy progress

To monitor the copy progress, you can click Refresh, or you can select Manage Progress and then FlashCopy. Then, you can monitor the progress (Figure 8-196 on page 576).

Figure 8-196 FlashCopy background copy progress

When the background copy is completed for all FlashCopy mappings in the consistency group, the status changes to "Idle or Copied".
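
From the CLI, the same information is available with the svcinfo listing commands; the mapping name is a placeholder:

   # background copy progress of one mapping, as a percentage
   svcinfo lsfcmapprogress <map_name>

   # status of all mappings, including their consistency groups
   svcinfo lsfcmap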

8.13.6 Stopping the FlashCopy consistency group

When a FlashCopy consistency group is stopped, the target VDisks become invalid and are set offline by the SVC. The FlashCopy mapping or consistency group must be prepared again or retriggered to bring the target VDisks online again.

Tip: If you want to stop a mapping or group in a Multiple Target FlashCopy environment, consider whether you want to keep any of the dependent mappings. If not, issue the stop command with the force parameter, which stops all of the dependent maps, too, and negates the need for stopping the copy process.

Important: Only stop a FlashCopy mapping when the data on the target VDisk is useless, or if you want to modify the FlashCopy mapping.

When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC.

As shown in Figure 8-197 on page 577, we stop the FC_DONA consistency group. All of the mappings belonging to that consistency group are currently in the Copying state.

Figure 8-197 Stop FlashCopy consistency group

Perform these steps:

1. We select the FC_DONA FlashCopy consistency group, and from the list, we select Stop a Consistency Group, as shown in Figure 8-198.

Figure 8-198 Stopping the FlashCopy consistency group

2. When selecting the method to use to stop the mapping, we have three options, as shown in Figure 8-199.

Figure 8-199 Stopping FlashCopy consistency group options

3. Because we want to stop the mapping immediately, we select Forced Stop. The status of the FlashCopy consistency group changes from Copying to Stopping to Stopped, as shown in Figure 8-200 on page 578.

Figure 8-200 FlashCopy consistency group status
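
A CLI sketch of the same forced stop, using the FC_DONA group from this example; the -force parameter corresponds to the Forced Stop option in the GUI:

   svctask stopfcconsistgrp -force FC_DONA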

8.13.7 Deleting the FlashCopy mapping

We have two options to delete a FlashCopy mapping: automatic deletion of the mapping or manual deletion.

When we initially create a mapping, we can select the "Automatically delete mapping when the background copy completes" function, as shown in Figure 8-201.

Figure 8-201 Selecting the function to automatically delete the mapping

Or, if this option was not selected initially, you can delete the mapping manually, as shown in Figure 8-202 on page 579.

Figure 8-202 Manually deleting a FlashCopy mapping
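
In the CLI, manual deletion maps to svctask rmfcmap, and the GUI's automatic deletion check box corresponds to the -autodelete parameter of svctask mkfcmap. A sketch with a placeholder mapping name:

   # delete a mapping whose background copy is complete
   svctask rmfcmap <map_name>

   # force the deletion if the background copy has not completed
   svctask rmfcmap -force <map_name>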

8.13.8 Deleting the FlashCopy consistency group

If you delete a consistency group with active mappings in it, all of the mappings in that group become stand-alone mappings.

Tip: If you want to use the target VDisks in a consistency group as normal VDisks, you can monitor the background copy progress until it is complete (100% copied) and then delete the FlashCopy mapping.

When deleting a consistency group, we start by selecting a group. From the list, select Delete a Consistency Group and click Go, as shown in Figure 8-203.

Figure 8-203 Deleting a FlashCopy consistency group

We can still delete a FlashCopy consistency group, even if the consistency group has a status of Copying, as shown in Figure 8-204, by forcing the deletion.

Figure 8-204 Deleting a consistency group with a mapping in the Copying state

Because there is an active mapping in the Copying state, we see a warning message, as shown in Figure 8-205 on page 580.

Figure 8-205 Warning message
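
A CLI sketch of the same deletion, again using the FC_DONA group name; the -force parameter deletes the group even while it still contains mappings, which then become stand-alone mappings:

   svctask rmfcconsistgrp -force FC_DONA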

8.13.9 Migrating between a fully allocated VDisk and a Space-Efficient VDisk

If you want to migrate from a fully allocated VDisk to a Space-Efficient VDisk, follow the same procedure that is described in 8.13.1, "Creating a FlashCopy mapping" on page 568, but make sure that you select a Space-Efficient VDisk that has already been created as your target volume. You can use this same method to migrate from a Space-Efficient VDisk to a fully allocated VDisk.

Create a FlashCopy mapping with the fully allocated VDisk as the source and the Space-Efficient VDisk as the target. We describe creating a Space-Efficient VDisk in detail in 8.5.4, "Creating a Space-Efficient VDisk with autoexpand" on page 509.

Important: The copy process overwrites all of the data on the target VDisk. You must back up all of the data before you start the copy process.

8.13.10 Reversing and splitting a FlashCopy mapping

Starting with SVC 5.1, you can perform a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning.

You can start a FlashCopy mapping whose target is the source of another FlashCopy mapping. This capability enables you to reverse the direction of a FlashCopy map without having to remove existing maps, and without losing the data from the target.

When you prepare either a stand-alone mapping or a consistency group, you are prompted with a message, as shown in Figure 8-206.

Figure 8-206 FlashCopy restore option

Splitting a cascaded FlashCopy mapping allows the source of a map that is 100% complete to be removed from the head of the cascade when the map is stopped.

For example, if you have four VDisks in a cascade (A → B → C → D), and the map A → B is 100% complete, as shown in Figure 8-207, clicking Split Stop, as shown in Figure 8-208, results in FCMAP_AB becoming idle_copied, and the remaining cascade becomes B → C → D.

Figure 8-207 Stopping a FlashCopy mapping

Figure 8-208 Selecting the Split Stop option

Without the split option, VDisk A remains at the head of the cascade (A → C → D). Consider this sequence of steps:

► The user takes a backup using the mapping A → B. A is the production VDisk; B is a backup.

► At a later point, the user experiences corruption on A and, therefore, reverses the mapping (B → A).

► The user then takes another backup from the production disk A and, therefore, has the cascade B → A → C.

Stopping A → B without using the Split Stop option results in the cascade B → C. Note that the backup disk B is now at the head of this cascade.

When the user next wants to take a backup to B, the user can still start the mapping A → B (using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).

Stopping A → B with the Split Stop option results in the cascade A → C. This option does not cause the same problem, because the production disk A is at the head of the cascade instead of the backup disk B.
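
As a CLI sketch of the split stop that is described above, using the FCMAP_AB name from the example; the -split parameter of svctask stopfcmap was introduced with SVC 5.1, so it is not available on earlier code levels:

   svctask stopfcmap -split FCMAP_AB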



8.14 Metro Mirror operations

Next, we show how to set up Metro Mirror using the GUI.

Note: This example is for intercluster Metro Mirror operations only. If you want to set up intracluster Metro Mirror operations, we highlight those parts of the following procedure that you do not need to perform.

8.14.1 Cluster partnership

Starting with SVC 5.1, you now have the opportunity to create more than a one-to-one cluster partnership.

You can now have a cluster partnership among multiple SVC clusters, which allows you to create four types of configurations, using a maximum of four connected clusters:

► Star configuration, as shown in Figure 8-209

Figure 8-209 Star configuration

► Triangle configuration, as shown in Figure 8-210

Figure 8-210 Triangle configuration

► Fully connected configuration, as shown in Figure 8-211

Figure 8-211 Fully connected configuration

► Daisy-chain configuration, as shown in Figure 8-212

Figure 8-212 Daisy-chain configuration

Important: All SVC clusters must be at level 5.1 or higher.

In the following scenario, we set up an intercluster Metro Mirror relationship between the ITSO-CLS1 SVC cluster at the primary site and the ITSO-CLS2 SVC cluster at the secondary site. Table 8-3 shows the details of the VDisks.

Table 8-3 VDisk details

Content of VDisk      VDisks at primary site    VDisks at secondary site
Database files        MM_DB_Pri                 MM_DB_Sec
Database log files    MM_DBLog_Pri              MM_DBLog_Sec
Application files     MM_App_Pri                MM_App_Sec

Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri VDisks, a consistency group named CG_W2K3_MM is created to handle the Metro Mirror relationships for them. Because, in this scenario, the application files are independent of the database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri VDisk. Figure 8-213 on page 584 illustrates the Metro Mirror setup.

Figure 8-213 Metro Mirror scenario

8.14.2 Setting up Metro Mirror


In the following section, we assume that the source and target VDisks have already been created and that the inter-switch links (ISLs) and zoning are in place, enabling the SVC clusters to communicate.

To set up the Metro Mirror, perform the following steps:

1. Create the SVC partnership between ITSO-CLS1 and ITSO-CLS2, on both SVC clusters.

2. Create a Metro Mirror consistency group:
   – Name: CG_W2K3_MM

3. Create the Metro Mirror relationship for MM_DB_Pri:
   – Master: MM_DB_Pri
   – Auxiliary: MM_DB_Sec
   – Auxiliary SVC cluster: ITSO-CLS2
   – Name: MMREL1
   – Consistency group: CG_W2K3_MM

4. Create the Metro Mirror relationship for MM_DBLog_Pri:
   – Master: MM_DBLog_Pri
   – Auxiliary: MM_DBLog_Sec
   – Auxiliary SVC cluster: ITSO-CLS2
   – Name: MMREL2
   – Consistency group: CG_W2K3_MM



5. Create the Metro Mirror relationship for MM_App_Pri:
   – Master: MM_App_Pri
   – Auxiliary: MM_App_Sec
   – Auxiliary SVC cluster: ITSO-CLS2
   – Name: MMREL3

8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2

We perform this operation to create the partnership on both clusters.

Note: If you are creating an intracluster Metro Mirror, do not perform this next step to create the SVC cluster Metro Mirror partnership. Instead, skip to 8.14.4, "Creating a Metro Mirror consistency group" on page 587.

To create a Metro Mirror partnership between the SVC clusters using the GUI, perform these steps:

1. We launch the SVC GUI for ITSO-CLS1. Then, we select Manage Copy Services from the Welcome window and click Metro & Global Mirror Cluster Partnerships. The window opens, as shown in Figure 8-214.

Figure 8-214 Creating a cluster partnership

2. After we have selected Go for the creation of a cluster partnership, as shown in Figure 8-214, the SVC cluster shows us the available options to select a partner cluster, as shown in Figure 8-215 on page 586. We have multiple cluster candidates from which to choose. In our scenario, we choose ITSO-CLS2.

Select ITSO-CLS2, specify the available bandwidth for the background copy (in this case, 50 MBps), and then click OK. Two options are available during creation:

– Intercluster Delay Simulation, which simulates the Global Mirror round-trip delay between the two clusters, in milliseconds. The default is 0, and the valid range is 0 to 100 milliseconds.

– Intracluster Delay Simulation, which simulates the Global Mirror round-trip delay within the cluster, in milliseconds. The default is 0, and the valid range is 0 to 100 milliseconds.

Figure 8-215 Showing available cluster candidates

As shown in Figure 8-216, our partnership is in the Partially Configured state, because we have only performed the work on one side of the partnership so far.

Figure 8-216 Viewing cluster partnerships

3. To fully configure the Metro Mirror cluster partnership, we must perform the same steps on ITSO-CLS2 as we did on ITSO-CLS1. For simplicity and brevity, only the two most significant windows are shown when the partnership is fully configured.

4. Launching the SVC GUI for ITSO-CLS2, we select ITSO-CLS1 for the Metro Mirror cluster partnership, specify the available bandwidth for the background copy (again, 50 MBps), and then click OK, as shown in Figure 8-217 on page 587.

Figure 8-217 Selecting the cluster partner on the secondary cluster

Now that both sides of the SVC cluster partnership are defined, the resulting window shown in Figure 8-218 confirms that our Metro Mirror cluster partnership is in the Fully Configured state.

Figure 8-218 Fully configured cluster partnership

The GUI for ITSO-CLS2 is no longer necessary. Close this GUI, and use the GUI for the ITSO-CLS1 cluster for all further steps.
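
The same partnership can be created from the CLI with svctask mkpartnership, which must also be run on both clusters; the -bandwidth value is the background copy bandwidth in MBps. A sketch using the names and value from this scenario:

   # on ITSO-CLS1
   svctask mkpartnership -bandwidth 50 ITSO-CLS2

   # on ITSO-CLS2
   svctask mkpartnership -bandwidth 50 ITSO-CLS1

   # verify that the partnership shows as fully configured
   svcinfo lscluster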

8.14.4 Creating a Metro Mirror consistency group

To create the consistency group to use for the Metro Mirror relationships of the VDisks with database and database log files, select Manage Copy Services and click Metro Mirror Consistency Groups from the Welcome window.

To create a Metro Mirror consistency group, perform the following steps:

1. Select Create a Consistency Group from the list, and click Go, as shown in Figure 8-219.

Figure 8-219 Creating a consistency group

2. A wizard opens that helps us create the Metro Mirror consistency group. First, the wizard introduces the steps that are involved in the creation of a Metro Mirror consistency group, as shown in Figure 8-220. Click Next to proceed.

Figure 8-220 Introduction to the Metro Mirror consistency group creation wizard

3. As shown in Figure 8-221, specify the name for the consistency group, and select the remote cluster, which we have already defined in 8.14.3, "Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2" on page 585. If you are planning to use this consistency group for internal mirroring, that is, mirroring within the same cluster, select the intracluster consistency group option. In our scenario, we selected Create an inter-cluster consistency group with the remote cluster ITSO-CLS2. Click Next.

Figure 8-221 Specifying the consistency group name and type

4. In Figure 8-222, we can see the Metro Mirror relationships that have already been created and that can be included in our Metro Mirror consistency group. Because we do not have any existing relationships to include in the Metro Mirror consistency group at this point, we create a blank group by clicking Next to proceed.

Figure 8-222 Empty list

5. Verify the settings for the consistency group, and click Finish to create the Metro Mirror consistency group, as shown in Figure 8-223.

Figure 8-223 Verifying settings for Metro Mirror consistency group

After creating the consistency group, the GUI returns to the Viewing Metro & Global Mirror Consistency Groups window, as shown in Figure 8-224. This page lists the newly created consistency group. Notice that the newly created consistency group is "empty", because no relationships have been added to the group.

Figure 8-224 Viewing the newly created consistency group
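
A CLI sketch of the same step: svctask mkrcconsistgrp creates both Metro Mirror and Global Mirror consistency groups, and the -cluster parameter makes the group intercluster (omit it for an intracluster group):

   svctask mkrcconsistgrp -name CG_W2K3_MM -cluster ITSO-CLS2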



8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri

To create the Metro Mirror relationships for the VDisks MM_DB_Pri and MM_DBLog_Pri, perform the following steps:

1. Select Manage Copy Services and click Metro Mirror Cluster Relationships from the SVC Welcome window.

2. To start the creation process, select Create a Relationship from the list, and click Go, as shown in Figure 8-225.

Figure 8-225 Create a relationship

3. We are presented with the wizard that will help us create the Metro Mirror relationship. First, the wizard introduces the steps that are involved in the creation of the Metro Mirror relationship, as shown in Figure 8-226. Click Next to proceed.

Figure 8-226 Introduction to the Metro Mirror relationship creation wizard

4. As shown in Figure 8-227 on page 591, we name the first Metro Mirror relationship MMREL1 and specify the type of cluster relationship (in this case, intercluster, as per the scenario that is shown in Figure 8-213 on page 584). The wizard also gives us the option to select the type of copy service, which, in our case, is Metro Mirror Relationship.

Figure 8-227 Naming the Metro Mirror relationship and selecting the type of cluster relationship

5. Next, we select a master VDisk. Because the list of VDisks can be large, the Filtering Master VDisk Candidates window opens, which allows us to reduce the list of eligible VDisks based on a defined filter.

In Figure 8-228, you can use the asterisk character (*) as a filter to list all of the VDisks, and click Next.

Tip: In our scenario, we use MM* as a filter to avoid listing all of the VDisks.

Figure 8-228 Define filter for VDisk candidates

6. As shown in Figure 8-229 on page 592, we select MM_DB_Pri to be the master VDisk for this relationship, and we click Next to proceed.

Figure 8-229 Selecting the master VDisk

7. The next step requires us to select an auxiliary VDisk. The Metro Mirror relationship wizard automatically filters this list so that only eligible VDisks are shown. Eligible VDisks are VDisks that have the same size as the master VDisk and that are not already part of a Metro Mirror relationship.

As shown in Figure 8-230, we select MM_DB_Sec as the auxiliary VDisk for this relationship and click Next to proceed.

Figure 8-230 Selecting the auxiliary VDisk

8. As shown in Figure 8-231, we select the consistency group that we created, and our relationship is immediately added to that group. Click Next to proceed.

Figure 8-231 Selecting the relationship to be a part of the consistency group

9. Finally, in Figure 8-232, we verify the attributes of our Metro Mirror relationship and click Finish to create it.

Figure 8-232 Verifying the Metro Mirror relationship

After the successful creation of the relationship, the GUI returns to the Viewing Metro & Global Mirror Relationships window, as shown in Figure 8-233. This window lists the newly created relationship. Notice that we have not started the copy process; we have only established the connection between these two VDisks.

Figure 8-233 Viewing the Metro Mirror relationship

By following a similar process, we create the second Metro Mirror relationship, MMREL2, which is shown in Figure 8-234.

Figure 8-234 Viewing the second Metro Mirror relationship MMREL2
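
Both relationships can also be created with svctask mkrcrelationship; a sketch using the names from this scenario (without the -global parameter, the command creates a Metro Mirror relationship):

   svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec \
      -cluster ITSO-CLS2 -consistgrp CG_W2K3_MM -name MMREL1

   svctask mkrcrelationship -master MM_DBLog_Pri -aux MM_DBLog_Sec \
      -cluster ITSO-CLS2 -consistgrp CG_W2K3_MM -name MMREL2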



8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri

To create a stand-alone Metro Mirror relationship, perform the following steps:

1. We start the creation process by selecting Create a Relationship from the menu, and we click Go.

2. Next, we are presented with the wizard that shows the steps that are involved in the process of creating a relationship, and we click Next to proceed.

3. As shown in Figure 8-235, we name the relationship (MMREL3), specify that it is an intercluster relationship with ITSO-CLS2, and click Next.

Figure 8-235 Specifying the Metro Mirror relationship name and auxiliary cluster

4. As shown in Figure 8-236, we are prompted for a filter to use to present the master VDisk candidates. We enter the MM* filter and click Next.

Figure 8-236 Filtering VDisk candidates

5. As shown in Figure 8-237 on page 595, we select MM_App_Pri to be the master VDisk of the relationship, and we click Next to proceed.

Figure 8-237 Selecting the master VDisk

6. As shown in Figure 8-238, we select MM_App_Sec as the auxiliary VDisk of the relationship, and we click Next to proceed.

Figure 8-238 Selecting the auxiliary VDisk

7. As shown in Figure 8-239, we do not select a consistency group, because we are creating a stand-alone Metro Mirror relationship.

Figure 8-239 Selecting options for the Metro Mirror relationship

Note: To add a Metro Mirror relationship to a consistency group, the relationship must be in the same state as the consistency group.

As shown in Figure 8-240, we cannot select a consistency group, because we marked our relationship as "synchronized", which is not the same state as that of the consistency group that we created earlier.

Figure 8-240 The consistency group must have the same state as the relationship

8. Finally, Figure 8-241 shows the actions that will be performed. We click Finish to create this new relationship.

Figure 8-241 Verifying the Metro Mirror relationship

After the successful creation, we are returned to the Metro Mirror relationship window. Figure 8-242 now shows all of our defined Metro Mirror relationships.

Figure 8-242 Viewing Metro Mirror relationships



8.14.7 Starting Metro Mirror

Now that we have created the Metro Mirror consistency group and relationships, we are ready to use the Metro Mirror relationships in our environment.

When performing Metro Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy if a failure occurs that affects the SAN at the production site.

In the following sections, we show how to stop and start a stand-alone Metro Mirror relationship and a consistency group.

8.14.8 Starting a stand-alone Metro Mirror relationship

In Figure 8-243, we select the MMREL3 stand-alone Metro Mirror relationship, and from the list, we select Start Copy Process and click Go.

Figure 8-243 Starting a stand-alone Metro Mirror relationship

In Figure 8-244, we do not need to change the Forced start, Mark as clean, or Copy direction parameter, because we are invoking this Metro Mirror relationship for the first time (and we have defined the relationship as already synchronized). We click OK to start the MMREL3 stand-alone Metro Mirror relationship.

Figure 8-244 Selecting options and starting the copy process

Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state, as shown in Figure 8-245.

Figure 8-245 Viewing Metro Mirror relationships
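
A CLI sketch of the same start: the -primary parameter sets the copy direction, and the -force and -clean parameters correspond to the GUI's Forced start and Mark as clean options (neither is needed on this first start):

   svctask startrcrelationship -primary master MMREL3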

8.14.9 Starting a Metro Mirror consistency group

To start the CG_W2K3_MM Metro Mirror consistency group, we select Manage Copy Services and click Metro Mirror Consistency Groups from our SVC Welcome window.

In Figure 8-246, we select the CG_W2K3_MM Metro Mirror consistency group, and from the list, we select Start Copy Process and click Go.

Figure 8-246 Starting copy process for the consistency group

As shown in Figure 8-247, we click OK to start the copy process. We cannot select the Forced start, Mark as clean, or Copy Direction option, because our consistency group is currently in the Inconsistent stopped state.

Figure 8-247 Selecting options and starting the copy process

As shown in Figure 8-248 on page 599, we are returned to the Metro Mirror consistency group list, and the CG_W2K3_MM consistency group has changed to the Inconsistent copying state.

Figure 8-248 Viewing Metro Mirror consistency groups

Because the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships in the consistency group. Upon the completion of the background copy for all of the relationships in the consistency group, the consistency group enters the Consistent synchronized state.

8.14.10 Monitoring background copy progress

You can view the status of the background copy progress either in the last column of the Viewing Metro Mirror Relationships window, or by opening the Manage Progress view under My Work and clicking View progress. This option allows you to view the Metro Mirror progress, as shown in Figure 8-249.

Figure 8-249 Viewing background copy progress for Metro Mirror relationships

Note: Setting up SNMP traps for the SVC enables automatic notification when the Metro Mirror consistency group or relationships change state.

8.14.11 Stopping and restarting Metro Mirror

Now that the Metro Mirror consistency group and relationships are running, in the following sections, we describe how to stop, restart, and change the direction of both the stand-alone Metro Mirror relationships and the consistency group.



8.14.12 Stopping a stand-alone Metro Mirror relationship

To stop a Metro Mirror relationship while enabling access (write I/O) to both the primary and the secondary VDisk, we select the relationship, select Stop Copy Process from the list, and click Go, as shown in Figure 8-250.

Figure 8-250 Stopping a stand-alone Metro Mirror relationship

As shown in Figure 8-251, we select Enable write access to the secondary VDisk, if it is consistent with the primary VDisk and click OK to stop the Metro Mirror relationship.

Figure 8-251 Enable write access to the secondary VDisk while stopping the relationship

As shown in Figure 8-252, the Metro Mirror relationship transitions to the Idling state when it is stopped while enabling access to the secondary VDisk.

Figure 8-252 Viewing the Metro Mirror relationships
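
In the CLI, the -access parameter of the stop commands corresponds to the "enable write access to the secondary" option; a sketch using the names from this scenario:

   # stop a stand-alone relationship and allow write I/O to the secondary
   svctask stoprcrelationship -access MMREL3

   # the equivalent for a consistency group
   svctask stoprcconsistgrp -access CG_W2K3_MM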

8.14.13 Stopping a Metro Mirror consistency group

As shown in Figure 8-253 on page 601, we select the Metro Mirror consistency group, select Stop Copy Process from the list, and click Go.

Figure 8-253 Selecting the Metro Mirror consistency group to be stopped

As shown in Figure 8-254, we click OK without specifying "Enable write access to the secondary VDisks, if they are consistent with the primary VDisks".

Figure 8-254 Stopping consistency group without enabling access to secondary VDisks

As shown in Figure 8-255, the consistency group enters the Consistent stopped state when it is stopped without enabling access to the secondary VDisks.

Figure 8-255 Viewing Metro Mirror consistency groups

Afterward, if we want to enable write access (write I/O) to the secondary VDisks, we can reissue the Stop Copy Process action and, this time, specify that we want to enable write access to the secondary VDisks.

In Figure 8-256 on page 602, we select the Metro Mirror consistency group, select Stop Copy Process from the list, and click Go.

Figure 8-256 Stopping the Metro Mirror consistency group

As shown in Figure 8-257, we check Enable write access to the secondary VDisks, if they are consistent with the primary VDisks and click OK.

Figure 8-257 Enabling access to secondary VDisks

When applying the "Enable write access to the secondary VDisks, if they are consistent with the primary VDisks" option, the consistency group transitions to the Idling state, as shown in Figure 8-258.

Figure 8-258 Viewing Metro Mirror consistency group in the Idling state

8.14.14 Restarting a Metro Mirror relationship in the Idling state

When restarting a Metro Mirror relationship in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in the Metro Mirror relationship, consistency is compromised. In this situation, we must check the Force option to start the copy process; otherwise, the command fails.

As shown in Figure 8-259 on page 603, we select the Metro Mirror relationship, select Start Copy Process from the list, and click Go.

Figure 8-259 Starting a stand-alone Metro Mirror relationship in the Idling state

As shown in Figure 8-260, we check the Force option, because write I/O has been performed while in the Idling state, and we select the copy direction by defining the master VDisk as the primary. Then, we click OK.

Figure 8-260 Specifying options while starting copy process

The Metro Mirror relationship enters the Consistent copying state, and when the background copy is complete, the relationship transitions to the Consistent synchronized state, as shown in Figure 8-261.

Figure 8-261 Viewing Metro Mirror relationship

8.14.15 Restarting a Metro Mirror consistency group in the Idling state

When restarting a Metro Mirror consistency group in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the Metro Mirror relationships in the consistency group, consistency is compromised. In this situation, we must check the Force option to start the copy process; otherwise, the command fails.

As shown in Figure 8-262, we select the Metro Mirror consistency group, select Start Copy Process from the list, and click Go.

Figure 8-262 Starting the copy process for the consistency group

As shown in Figure 8-263, we check the Force option and set the copy direction by selecting the master as the primary.

Figure 8-263 Specifying the options while starting the copy process in the consistency group

When the background copy completes, the Metro Mirror consistency group enters the Consistent synchronized state, as shown in Figure 8-264.

Figure 8-264 Viewing Metro Mirror consistency groups

8.14.16 Changing copy direction for Metro Mirror

In this section, we show how to change the copy direction of the stand-alone Metro Mirror relationships and the consistency group.



8.14.17 Switching copy direction for a Metro Mirror consistency group

When a Metro Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the Metro Mirror consistency group.

In Figure 8-265, we select the CG_W2K3_MM consistency group, click Switch Copy Direction from the list, and click Go.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the VDisks that will change from primary to secondary, because all of the I/O will be inhibited when the VDisks become secondary. Therefore, careful planning is required prior to switching the copy direction.

Figure 8-265 Selecting the consistency group for which the copy direction is to change

In Figure 8-266, we see that the current primary VDisks are the master VDisks. So, to change the copy direction for the Metro Mirror consistency group, we specify the auxiliary VDisks to become the primary, and we click OK.

Figure 8-266 Selecting the auxiliary VDisks to become the primary, to switch the copy direction

The copy direction is now switched, and we are returned to the Metro Mirror consistency group list, where we see that the copy direction has switched, as shown in Figure 8-267 on page 606.

Figure 8-267 Viewing Metro Mirror consistency group after changing the copy direction

In Figure 8-268, we show the new copy direction for the individual relationships within that consistency group.

Figure 8-268 Viewing Metro Mirror relationship after changing the copy direction
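
A CLI sketch of the same switch: the switch commands take a -primary parameter naming the side that becomes the primary, so switching the direction of this consistency group (and, as in 8.14.18 that follows, of a stand-alone relationship) looks like this:

   svctask switchrcconsistgrp -primary aux CG_W2K3_MM

   # stand-alone equivalent
   svctask switchrcrelationship -primary aux MMREL3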

8.14.18 Switching the copy direction for a Metro Mirror relationship

When a Metro Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship.

In Figure 8-269, we select the MMREL3 relationship, click Switch Copy Direction from the list, and click Go.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the VDisk that transitions from primary to secondary, because all of the I/O to that VDisk will be inhibited when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Metro Mirror relationship.

Figure 8-269 Selecting the relationship whose copy direction needs to be changed

In Figure 8-270, we see that the current primary VDisk is the master, so to change the copy direction for the stand-alone Metro Mirror relationship, we specify the auxiliary VDisk to become the primary, and we click OK.

Figure 8-270 Selecting the auxiliary VDisk to become the primary, to switch the copy direction

The copy direction is now switched. We are returned to the Metro Mirror relationship list, where we see that the copy direction has been switched and that the auxiliary VDisk has become the primary, as shown in Figure 8-271.

Figure 8-271 Viewing Metro Mirror relationships

8.15 Global Mirror operations

Next, we show how to set up Global Mirror.

Note: This example is for intercluster Global Mirror operations only. If you want to set up intracluster Global Mirror operations, we highlight those parts of the following procedure that you do not need to perform.

Starting with SVC 5.1, we can install multiple clusters in a partnership. We show this capability in 8.14.1, "Cluster partnership" on page 582. In the following scenario, we set up an intercluster Global Mirror relationship between the ITSO-CLS1 SVC cluster at the primary site and the ITSO-CLS2 SVC cluster at the secondary site. Table 8-4 on page 608 shows the details of the VDisks.

Table 8-4 Details of VDisks for Global Mirror relationship

Content of VDisk      VDisks at primary site    VDisks at secondary site
Database files        GM_DB_Pri                 GM_DB_Sec
Database log files    GM_DBLog_Pri              GM_DBLog_Sec
Application files     GM_App_Pri                GM_App_Sec

Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a consistency group to handle the Global Mirror relationships for them. Because, in this scenario, the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. Figure 8-272 illustrates the Global Mirror setup.

Figure 8-272 Global Mirror scenario using the GUI

8.15.1 Setting up Global Mirror


In the following section, we assume that the source and target VDisks have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up the Global Mirror, you must perform the following steps (a CLI sketch follows this list):

1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS2, on both SVC clusters:
   – Bandwidth: 10 MBps

2. Create a Global Mirror consistency group:
   – Name: CG_W2K3_GM

3. Create the Global Mirror relationship for GM_DB_Pri:
   – Master: GM_DB_Pri
   – Auxiliary: GM_DB_Sec



– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL1
– Consistency group CG_W2K3_GM

4. Create the Global Mirror relationship for GM_DBLog_Pri:
– Master GM_DBLog_Pri
– Auxiliary GM_DBLog_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL2
– Consistency group CG_W2K3_GM

5. Create the Global Mirror relationship for GM_App_Pri:
– Master GM_App_Pri
– Auxiliary GM_App_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL3
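For reference, the same configuration can also be built from the CLI. The following is a minimal sketch using the names from this scenario; verify each command against the command-line help for your installed code level before use:

svctask mkpartnership -bandwidth 10 ITSO-CLS2      (run on ITSO-CLS1)
svctask mkpartnership -bandwidth 10 ITSO-CLS1      (run on ITSO-CLS2)
svctask mkrcconsistgrp -cluster ITSO-CLS2 -name CG_W2K3_GM
svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS2 -global -consistgrp CG_W2K3_GM -name GMREL1
svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS2 -global -consistgrp CG_W2K3_GM -name GMREL2
svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS2 -global -name GMREL3

The -global flag is what distinguishes a Global Mirror relationship from a Metro Mirror relationship; without it, mkrcrelationship creates a Metro Mirror relationship.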

8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2

In this section, we create the SVC partnership on both clusters.

Note: If you are creating an intracluster Global Mirror, do not perform the next step; instead, go to 8.15.4, "Creating a Global Mirror consistency group" on page 614.

To create a Global Mirror partnership between the SVC clusters using the GUI, perform these steps:

1. We launch the SVC GUI for ITSO-CLS1. Then, we select Manage Copy Services and click Metro & Global Mirror Cluster Partnerships, as shown in Figure 8-273.

Figure 8-273 Selecting Global Mirror Cluster Partnership on ITSO-CLS1

2. Figure 8-274 on page 610 shows the cluster partnerships that are defined for this cluster. Because no partnership exists yet between the two clusters in this scenario, none is listed for them, although notice that we already have another partnership running. Figure 8-274 on page 610 also gives a warning stating that, for any type of copy relationship between VDisks across two separate clusters, the partnership must exist between those clusters. Click Go to continue creating your partnership.


Figure 8-274 Creating a new partnership

3. Figure 8-275 lists the available SVC cluster candidates. In our case, we select ITSO-CLS4 and specify the available bandwidth for the background copy; we enter 10 MBps and then click OK.

Figure 8-275 Selecting SVC cluster partner and specifying bandwidth for background copy

In the resulting window, which is shown in Figure 8-276 on page 611, the newly created Global Mirror cluster partnership is shown as Partially Configured.


Figure 8-276 Viewing the newly created Global Mirror partnership

To fully configure the Global Mirror cluster partnership, we must perform the same steps on ITSO-CLS4 that we performed on ITSO-CLS1. For simplicity, only the last two windows are shown in the following figures.

4. Launching the SVC GUI for the partner cluster, we select ITSO-CLS1 for the Global Mirror cluster partnership, specify the available bandwidth for the background copy, which again is 10 MBps, and then click OK, as shown in Figure 8-277.

Figure 8-277 Selecting SVC cluster partner and specifying bandwidth for background copy

5. Now that we have defined both sides of the SVC cluster partnership, the window that is shown in Figure 8-278 on page 612 confirms that our Global Mirror cluster partnership is in the Fully Configured state.


Figure 8-278 Global Mirror cluster partnership is fully configured

Note: Link tolerance, intercluster delay simulation, and intracluster delay simulation are introduced with the Global Mirror feature.

8.15.3 Global Mirror link tolerance and delay simulations

In this section, we describe link tolerance and delay simulations.

Global Mirror link tolerance
The gm_link_tolerance parameter defines the SVC's sensitivity to interlink overload conditions. The value is the number of seconds of continuous link difficulties that will be tolerated before the SVC stops the remote copy relationships to prevent affecting host I/O at the primary site. To change the value, refer to "Changing link tolerance and delay simulation values for Global Mirror" on page 613.

The link tolerance values are between 60 and 86,400 seconds, in increments of 10 seconds. The default value for the link tolerance is 300 seconds.

Recommendation: We strongly recommend using the default value. If the link is overloaded for a period that affects host I/O at the primary site, the relationships will be stopped to protect those hosts.

Global Mirror intercluster and intracluster delay simulation
This Global Mirror feature permits the simulation of a delayed write to a remote VDisk. It allows you to perform testing that detects colliding writes, so it can be used to test an application before the full deployment of the Global Mirror feature. You can enable delay simulation separately for either intracluster or intercluster Global Mirror. To enable this feature and change its value, refer to "Changing link tolerance and delay simulation values for Global Mirror" on page 613.

The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express the amount of time by which secondary I/Os are delayed for intercluster and intracluster relationships, respectively. They specify the number of milliseconds by which I/O activity (that is, copying the primary VDisk to the secondary VDisk) is delayed. You can set a value from 0 to 100 milliseconds in 1 millisecond increments; a value of zero disables the feature.

To check the current settings for the delay simulation, refer to "Changing link tolerance and delay simulation values for Global Mirror" on page 613.
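These parameters can also be viewed and changed from the CLI. The following is a minimal sketch, assuming the chcluster parameter names of the SVC 5.1 command set (verify with svctask chcluster -h on your code level):

svcinfo lscluster ITSO-CLS1                      (shows the current settings)
svctask chcluster -gmlinktolerance 300           (seconds)
svctask chcluster -gminterdelaysimulation 20     (milliseconds)
svctask chcluster -gmintradelaysimulation 40     (milliseconds)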

Changing link tolerance and delay simulation values for Global Mirror
Here, we show how to modify the Global Mirror link tolerance and delay simulation values, and we show the resulting changed parameter values.

Launching the SVC GUI for ITSO-CLS1, we select Global Mirror Cluster Partnership to view and modify the parameters, as shown in Figure 8-279 and Figure 8-280.

Figure 8-279 View and modify Global Mirror link tolerance and delay simulation parameters

Figure 8-280 Set Global Mirror link tolerance and delay simulations parameters

After performing the steps, the GUI returns to the Global Mirror Partnership window and lists the new parameter settings, as shown in Figure 8-281 on page 614.


Figure 8-281 View modified parameters

8.15.4 Creating a Global Mirror consistency group

To create the consistency group for use by the Global Mirror relationships for the VDisks with the database and database log files, perform these steps:

1. We select Manage Copy Services and click Global Mirror Consistency Groups, as shown in Figure 8-282.

Figure 8-282 Selecting Global Mirror consistency groups

2. To start the creation process, we select Create Consistency Group from the list and click Go, as shown in Figure 8-283 on page 615. In our list, we already have one Metro Mirror consistency group that was created between ITSO-CLS1 and ITSO-CLS2, but now we are creating a new Global Mirror consistency group.


Figure 8-283 Creating a consistency group

3. We are presented with a wizard that helps us to create the Global Mirror consistency group. First, the wizard introduces the steps that are involved in its creation, as shown in Figure 8-284. Click Next to proceed.

Figure 8-284 Introduction to Global Mirror consistency group creation wizard

4. As shown in Figure 8-285, we specify the consistency group name and whether it will be used for intercluster or intracluster relationships. In our scenario, we select Create an inter-cluster consistency group, and then we need to select our remote cluster partner. In Figure 8-285, we select ITSO-CLS4, because it is our Global Mirror partner, and click Next.

Figure 8-285 Specifying the consistency group name and type


5. Figure 8-286 shows any existing Global Mirror relationships that can be included in the Global Mirror consistency group. Because we do not have any existing Global Mirror relationships at this time, we create an empty group by clicking Next to proceed, as shown in Figure 8-286.

Figure 8-286 Selecting the existing Global Mirror relationship

6. Verify the settings for the consistency group, and click Finish to create the Global Mirror consistency group, as shown in Figure 8-287.

Figure 8-287 Verifying the settings for the Global Mirror consistency group

When the Global Mirror consistency group is created, we are returned to the Viewing Metro & Global Mirror Consistency Groups window, which shows our newly created Global Mirror consistency group, as shown in Figure 8-288.

Figure 8-288 Viewing Global Mirror consistency groups
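As a quick cross-check from the CLI, the new group can also be listed; a sketch (output field names vary slightly by code level):

svcinfo lsrcconsistgrp CG_W2K3_GM

Because no relationships have been added yet, the group state is reported as empty.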



8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri

To create the Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri, perform these steps:

1. We select Manage Copy Services and click Global Mirror Cluster Relationships from the Welcome window.

2. To start the creation process, we select Create a Relationship from the list and click Go, as shown in Figure 8-289.

Figure 8-289 Creating a relationship

3. We are presented with a wizard that helps us to create Global Mirror relationships. First, the wizard introduces the steps that are involved in the creation of the Global Mirror relationship, as shown in Figure 8-290. Click Next to proceed.

Figure 8-290 Introduction to the Global Mirror relationship creation wizard

4. As shown in Figure 8-291 on page 618, we name our first Global Mirror relationship GMREL1, click Global Mirror Relationship, and select the type of cluster relationship. In this case, it is an intercluster relationship toward ITSO-CLS4, as shown in Figure 8-272 on page 608.


Figure 8-291 Naming the Global Mirror relationship and selecting the type of the cluster relationship

5. The next step enables us to select a master VDisk. Because this list can be large, the Filtering Master VDisk Candidates window opens, which enables us to define a filter to reduce the list of eligible VDisks. In Figure 8-292, we use the filter GM* (you can use the asterisk character (*) to list all VDisks) and click Next.

Figure 8-292 Defining the filter for master VDisk candidates

6. As shown in Figure 8-293, we select GM_DB_Pri to be the master VDisk of the relationship, and we click Next to proceed.

Figure 8-293 Selecting the master VDisk


The next step requires us to select an auxiliary VDisk. The Global Mirror relationship wizard automatically filters this list so that only eligible VDisks are shown. Eligible VDisks are those VDisks that have the same size as the master VDisk and that are not already part of a Global Mirror relationship.

7. As shown in Figure 8-294, we select GM_DB_Sec as the auxiliary VDisk for this relationship, and we click Next to proceed.

Figure 8-294 Selecting the auxiliary VDisk

8. As shown in Figure 8-295, we select the relationship to be part of the consistency group that we have created, and click Next to proceed.

Figure 8-295 Selecting the relationship to be part of a consistency group

Consistency groups: It is not mandatory to make the relationship part of a consistency group at this stage. You can make the relationship part of a consistency group later, after the creation of the relationship, by modifying that relationship.

9. Finally, in Figure 8-296 on page 620, we verify the Global Mirror relationship attributes and click Finish to create it.


Figure 8-296 Verifying the Global Mirror relationship

After the successful creation of the relationship, the GUI returns to the Viewing Metro & Global Mirror Relationships window, which lists the newly created relationship.

Using the same process, we create the second Global Mirror relationship, GMREL2. Figure 8-297 shows both relationships.

Figure 8-297 Viewing Metro & Global Mirror relationships

8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri

To create the stand-alone Global Mirror relationship, perform these steps:

1. We start the creation process by selecting Create a Relationship from the list and clicking Go, as shown in Figure 8-298.

Figure 8-298 Creating a Global Mirror relationship

2. Next, we are presented with the wizard that shows the steps that are involved in the process of creating a Global Mirror relationship, as shown in Figure 8-299 on page 621. Click Next to proceed.


Figure 8-299 Introduction to the Global Mirror relationship creation wizard

3. In Figure 8-300, we name the Global Mirror relationship GMREL3, specify that it is an intercluster relationship, and click Next.

Figure 8-300 Naming the Global Mirror relationship and selecting the type of cluster relationship

4. As shown in Figure 8-301, we are prompted for a filter before the master VDisk candidates are presented. We use the asterisk character (*) to list all of the candidates and click Next.

Figure 8-301 Filtering master VDisk candidates


5. As shown in Figure 8-302, we select GM_App_Pri to be the master VDisk for the relationship and click Next to proceed.

Figure 8-302 Selecting the master VDisk

6. As shown in Figure 8-303, we select GM_App_Sec as the auxiliary VDisk for the relationship and click Next to proceed.

Figure 8-303 Selecting auxiliary VDisk

As shown in Figure 8-304 on page 623, we do not select a consistency group, because we are creating a stand-alone Global Mirror relationship.


Figure 8-304 Selecting options for the Global Mirror relationship

7. We also specify that the master and auxiliary VDisks are already synchronized; for the purpose of this example, we can assume that they are pristine (Figure 8-305).

Figure 8-305 Selecting the synchronized option for the Global Mirror relationship

Note: To add a Global Mirror relationship to a consistency group, the Global Mirror relationship must be in the same state as the consistency group.

Even if we intend to make the GMREL3 Global Mirror relationship part of the CG_W2K3_GM consistency group, we are not offered the option, as shown in Figure 8-305, because the states differ. The state of the GMREL3 relationship is Consistent Stopped, because we selected the synchronized option, while the state of the CG_W2K3_GM consistency group is currently Inconsistent Stopped.

8. Finally, Figure 8-306 on page 624 prompts us to verify the relationship information. We click Finish to create this new relationship.


Figure 8-306 Verifying the Global Mirror relationship

After the successful creation, we are returned to the Viewing Metro & Global Mirror Relationships window. Figure 8-307 now shows all of our defined Global Mirror relationships.

Figure 8-307 Viewing Global Mirror relationships
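The CLI equivalent of this stand-alone, already synchronized relationship is, as a sketch (ITSO-CLS2 is the remote cluster name from the scenario definition; the -sync flag suppresses the initial background copy):

svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS2 -global -sync -name GMREL3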

8.15.7 Starting Global Mirror

Now that we have created the Global Mirror consistency group and relationships, we are ready to use the Global Mirror relationships in our environment. When performing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy if a hardware failure occurs that affects the SAN at the production site.

In this section, we show how to start the stand-alone Global Mirror relationship and the consistency group.

8.15.8 Starting a stand-alone Global Mirror relationship

Perform these steps to start a stand-alone Global Mirror relationship:

1. In Figure 8-308 on page 625, we select the stand-alone Global Mirror relationship GMREL3, and from the list, we select Start Copy Process and click Go.


Figure 8-308 Starting the stand-alone Global Mirror relationship

2. In Figure 8-309, we do not need to change the parameters Forced start, Mark as clean, or Copy Direction, because we are invoking this Global Mirror relationship for the first time (and we have already defined the relationship as being synchronized in Figure 8-305 on page 623). We click OK to start the stand-alone Global Mirror relationship GMREL3.

Figure 8-309 Selecting options and starting the copy process

3. Because the Global Mirror relationship was in the Consistent Stopped state and no updates have been made on the primary VDisk, the relationship quickly enters the Consistent Synchronized state, as shown in Figure 8-310.

Figure 8-310 Viewing Global Mirror relationship
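From the CLI, the same start and state check are, as a sketch:

svctask startrcrelationship GMREL3
svcinfo lsrcrelationship GMREL3      (the state field reports consistent_synchronized)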

8.15.9 Starting a Global Mirror consistency group

Perform these steps to start the CG_W2K3_GM Global Mirror consistency group:

1. Select Global Mirror Consistency Groups from the SVC Welcome window.

2. In Figure 8-311 on page 626, we select the Global Mirror consistency group CG_W2K3_GM, and from the list, we select Start Copy Process and click Go.


Figure 8-311 Selecting the Global Mirror consistency group and starting the copy process

3. As shown in Figure 8-312, we click OK to start the copy process. We cannot select the options Forced start, Mark as clean, or Copy Direction, because we are invoking this Global Mirror relationship for the first time.

Figure 8-312 Selecting options and starting the copy process

4. We are returned to the Viewing Metro & Global Mirror Consistency Groups window, and the CG_W2K3_GM consistency group has changed to the Inconsistent Copying state. Because the consistency group was in the Inconsistent Stopped state, it enters the Inconsistent Copying state until the background copy has completed for all of the relationships in the consistency group. Upon completion of the background copy for all of the relationships in the consistency group, it enters the Consistent Synchronized state, as shown in Figure 8-313.

Figure 8-313 Viewing Global Mirror consistency groups
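The CLI equivalent, as a sketch:

svctask startrcconsistgrp CG_W2K3_GM
svcinfo lsrcconsistgrp CG_W2K3_GM    (the state moves from inconsistent_copying to consistent_synchronized)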

8.15.10 Monitoring background copy progress

You can see the status of the background copy progress in the Viewing Global Mirror Relationships window, as shown in Figure 8-314 on page 627. Alternatively, use the Manage Progress section under My Work and select Viewing Global Mirror Progress, as shown in Figure 8-315 on page 627.

Figure 8-314 Monitoring background copy process for Global Mirror relationships

Figure 8-315 Monitoring background copy process for Global Mirror relationships

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror consistency groups or relationships change state.
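The background copy progress can also be read from the CLI; a sketch (the exact field name depends on the code level):

svcinfo lsrcrelationship GMREL1      (the progress field shows the percentage of the background copy that is complete)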

8.15.11 Stopping and restarting Global Mirror

Now that the Global Mirror consistency group and relationships are running, we describe how to stop, restart, and change the direction of the stand-alone Global Mirror relationships, as well as the consistency group.

8.15.12 Stopping a stand-alone Global Mirror relationship

Perform these steps to stop a Global Mirror relationship while enabling access (write I/O) to the secondary VDisk:

1. We select the relationship, click Stop Copy Process from the list, and click Go, as shown in Figure 8-316 on page 628.


Figure 8-316 Stopping a stand-alone Global Mirror relationship

2. As shown in Figure 8-317, we select Enable write access to the secondary VDisk, if it is consistent with the primary VDisk and click OK to stop the Global Mirror relationship.

Figure 8-317 Enable access to the secondary VDisk while stopping the relationship

3. As shown in Figure 8-318, the Global Mirror relationship transitions to the Idling state when stopped, while enabling write access to the secondary VDisk.

Figure 8-318 Viewing Global Mirror relationships
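The CLI equivalent, as a sketch (the -access flag both stops the relationship and enables write access to the secondary VDisk):

svctask stoprcrelationship -access GMREL3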

8.15.13 Stopping a Global Mirror consistency group

Perform these steps to stop a Global Mirror consistency group:

1. As shown in Figure 8-319 on page 629, we select the Global Mirror consistency group, click Stop Copy Process from the list, and click Go.


Figure 8-319 Selecting the Global Mirror consistency group to be stopped

2. As shown in Figure 8-320, we click OK without specifying “Enable write access to the secondary VDisks, if they are consistent with the primary VDisks”.

Figure 8-320 Stopping the consistency group without enabling access to the secondary VDisks

The consistency group enters the Consistent Stopped state when stopped. Afterward, if we want to enable access (write I/O) to the secondary VDisks, we can reissue the Stop Copy Process command and specify that access to the secondary VDisks is to be enabled.

3. In Figure 8-321, we select the Global Mirror consistency group, select Stop Copy Process from the list, and click Go.

Figure 8-321 Selecting the Global Mirror consistency group

4. As shown in Figure 8-322 on page 630, we select Enable write access to the secondary VDisks, if they are consistent with the primary VDisks and click OK.


Figure 8-322 Enabling access to the secondary VDisks

When we apply the Enable write access to the secondary VDisks, if they are consistent with the primary VDisks option, the consistency group transitions to the Idling state, as shown in Figure 8-323.

Figure 8-323 Viewing the Global Mirror consistency group after write access to the secondary VDisk
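The CLI equivalents, as a sketch:

svctask stoprcconsistgrp CG_W2K3_GM             (stop only; the group enters consistent_stopped)
svctask stoprcconsistgrp -access CG_W2K3_GM     (stop and enable write access; the group enters idling)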

8.15.14 Restarting a Global Mirror relationship in the Idling state

When restarting a Global Mirror relationship in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk, consistency is compromised. In this situation, we must check Force to start the copy process; otherwise, the command will fail.

Perform these steps to restart a Global Mirror relationship in the Idling state:

1. As shown in Figure 8-324, we select the Global Mirror relationship, click Start Copy Process from the list, and click Go.

Figure 8-324 Starting stand-alone Global Mirror relationship in the Idling state

2. As shown in Figure 8-325 on page 631, we check Force, because write I/O has been performed while in the Idling state. We select the copy direction by defining the master VDisk as the primary, and click OK.


Figure 8-325 Restarting the copy process

The Global Mirror relationship enters the Consistent Copying state. When the background copy is complete, the relationship transitions to the Consistent Synchronized state, as shown in Figure 8-326.

Figure 8-326 Viewing the Global Mirror relationship
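The CLI equivalent, as a sketch (-force acknowledges that consistency might be compromised, and -primary sets the copy direction):

svctask startrcrelationship -force -primary master GMREL3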

8.15.15 Restarting a Global Mirror consistency group in the Idling state

When restarting a Global Mirror consistency group in the Idling state, we must specify the copy direction. If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, consistency is compromised. In this situation, we must check Force to start the copy process; otherwise, the command will fail.

Perform these steps:

1. As shown in Figure 8-327, we select the Global Mirror consistency group, select Start Copy Process from the list, and click Go.

Figure 8-327 Starting the copy process for Global Mirror consistency group


2. As shown in Figure 8-328, we check Force and set the copy direction by selecting the auxiliary as the primary. Click OK.

Figure 8-328 Restarting the copy process for the consistency group

3. When the background copy completes, the Global Mirror consistency group enters the Consistent Synchronized state, as shown in Figure 8-329.

Figure 8-329 Viewing Global Mirror consistency groups

The individual relationships within that consistency group are also shown in Figure 8-330.

Figure 8-330 Viewing Global Mirror relationships
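The CLI equivalent for the consistency group, as a sketch:

svctask startrcconsistgrp -force -primary aux CG_W2K3_GM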

8.15.16 Changing copy direction for Global Mirror

When a stand-alone Global Mirror relationship is in the Consistent Synchronized state, we can change the copy direction for the relationship. Perform these steps:

1. In Figure 8-331 on page 633, we select the GMREL3 relationship, click Switch Copy Direction from the list, and click Go.


Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transitions from primary to secondary, because all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required before switching the copy direction for a Global Mirror relationship.

Figure 8-331 Selecting the relationship for which the copy direction is to be changed

2. In Figure 8-332, we see that the current primary VDisk is the master. To change the copy direction for the stand-alone Global Mirror relationship, we specify the auxiliary VDisk to become the primary, and click OK.

Figure 8-332 Selecting the primary VDisk as auxiliary to switch the copy direction

3. The copy direction is now switched, and we are returned to the Viewing Global Mirror Relationships window, which confirms that the copy direction has been switched, as shown in Figure 8-333.

Figure 8-333 Viewing Global Mirror relationship after changing the copy direction
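The CLI equivalent, as a sketch (specifying -primary aux makes the auxiliary VDisk the primary, which reverses the replication direction):

svctask switchrcrelationship -primary aux GMREL3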

8.15.17 Switching copy direction for a Global Mirror consistency group

When a Global Mirror consistency group is in the Consistent Synchronized state, we can change the copy direction for the Global Mirror consistency group. Perform these steps:

1. In Figure 8-334, we select the CG_W2K3_GM consistency group, click Switch Copy Direction from the list, and click Go.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks that transition from primary to secondary, because all I/O will be inhibited when they become the secondary. Therefore, careful planning is required before switching the copy direction.

Figure 8-334 Selecting the consistency group for which the copy direction is to be changed

2. In Figure 8-335, we see that currently the primary VDisks are also the master. So, to change the copy direction for the Global Mirror consistency group, we specify the auxiliary VDisks to become the primary, and click OK.

Figure 8-335 Selecting the primary VDisk as auxiliary to switch the copy direction

The copy direction is now switched, and we are returned to the Viewing Global Mirror Consistency Group window, where we see that the copy direction has been switched. Figure 8-336 on page 635 shows that the auxiliary is now the primary.


Figure 8-336 Viewing Global Mirror consistency groups after changing the copy direction

Figure 8-337 shows the new copy direction for the individual relationships within that consistency group.

Figure 8-337 Viewing Global Mirror Relationships, after changing copy direction for consistency group
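The CLI equivalent for the consistency group, as a sketch:

svctask switchrcconsistgrp -primary aux CG_W2K3_GM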

Because everything has been completed to our expectations, we are now finished with Global Mirror.

8.16 Service and maintenance

This section discusses the various service and maintenance tasks that you can perform within the SVC environment. To perform all of the following activities, select the Service and Maintenance option in the SVC Welcome window (Figure 8-338 on page 636).

Note: You are prompted for a cluster user ID and password for several of the following tasks.


Figure 8-338 Service and Maintenance functions

8.17 Upgrading software

This section explains how to upgrade the SVC software.

8.17.1 Package numbering and version

The software upgrade package name ends in four positive integers that are separated by dots. For example, a software upgrade package might have the name IBM_2145_INSTALL_5.1.0.0.

8.17.2 Upgrade status utility

A function of the Master Console is to check the software levels in the system against the recommended levels, which are documented on the support Web site. You are informed whether the software levels are up-to-date or whether you need to download and install newer levels. This information is provided after you log in to the SVC GUI: in the middle of the Welcome window, you will see a notice when new software is available. Use the link that is provided there to download the new software and to get more information about it.

Important: To use this feature, the System Storage Productivity Center/Master Console must be able to access the Internet.

If the System Storage Productivity Center cannot access the Internet because of restrictions, such as a local firewall, you will see the message “The update server cannot be reached at this time.” Use the Web link that is provided in the message for the latest software information.


8.17.3 Precautions before upgrade

In this section, we describe the precautions that you must take before attempting an upgrade.

Important: Before attempting any SVC code update, read and understand the SVC concurrent compatibility and code cross-reference matrix. Go to the following site and click the link for Latest SAN Volume Controller code:

http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001707

During the upgrade, each node in your cluster is automatically shut down and restarted by the upgrade process. Because each node in an I/O Group provides an alternate path to VDisks, use the Subsystem Device Driver (SDD) to make sure that all I/O paths between all hosts and SANs are working. If you have not performed this check, certain hosts might lose connectivity to their VDisks and experience I/O errors when the SVC node that provides that access is shut down during the upgrade process (Example 8-1).

Example 8-1 Using datapath query commands to check that all paths are online

C:\Program Files\IBM\SDDDSM>datapath query adapter

Active Adapters :2

Adpt#  Name             State   Mode    Select  Errors  Paths  Active
    0  Scsi Port2 Bus0  NORMAL  ACTIVE     167       0      4       4
    1  Scsi Port3 Bus0  NORMAL  ACTIVE     137       0      4       4

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#            Adapter/Hard Disk  State  Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk1 Part0  OPEN   NORMAL      37       0
    1  Scsi Port2 Bus0/Disk1 Part0  OPEN   NORMAL       0       0
    2  Scsi Port3 Bus0/Disk1 Part0  OPEN   NORMAL       0       0
    3  Scsi Port3 Bus0/Disk1 Part0  OPEN   NORMAL      29       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#            Adapter/Hard Disk  State  Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk2 Part0  OPEN   NORMAL       0       0
    1  Scsi Port2 Bus0/Disk2 Part0  OPEN   NORMAL     130       0
    2  Scsi Port3 Bus0/Disk2 Part0  OPEN   NORMAL     108       0
    3  Scsi Port3 Bus0/Disk2 Part0  OPEN   NORMAL       0       0

You can check the I/O paths by using datapath query commands, as shown in Example 8-1. You do not need to check hosts that have no active I/O operations to the SANs during the software upgrade.


Tip: See the Subsystem Device Driver User’s Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540, for more information about datapath query commands.

It is well worth double-checking that your uninterruptible power supply unit power configuration is set up correctly (even if your cluster is running without problems). Specifically, double-check these areas:

► Ensure that your uninterruptible power supply units are all getting their power from an external source and that they are not daisy-chained. Make sure that each uninterruptible power supply unit is not supplying power to another node’s uninterruptible power supply unit.

► Ensure that the power cable and the serial cable coming from the back of each node go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, then, during the upgrade, as one node is shut down, another node might also be mistakenly shut down.

8.17.4 SVC software upgrade test utility

The SVC software upgrade test utility checks for known issues that can cause problems during an SVC software upgrade. You can run it on any SVC cluster running level 4.1.0.0 or higher. It is available from the following location:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

You can use the svcupgradetest utility to check for known issues that might cause problems during a SAN Volume Controller software upgrade, and for potential problems when upgrading from V4.1.0.0 and all later releases to the latest available level.

You can run the utility multiple times on the same cluster to perform a readiness check in preparation for a software upgrade. We strongly recommend running this utility a final time immediately before applying the SVC upgrade, making sure that there have not been any new releases of the utility since it was originally downloaded.

After you install the utility, you can obtain its version information by running the svcupgradetest -h command.

The installation and use of this utility are nondisruptive and do not require restarting any SVC nodes, so there is no interruption to host I/O. The utility is installed only on the current configuration node.

System administrators must continue to check whether the version of code that they plan to install is the latest version. You can obtain information about the latest code at this Web site:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code

This utility is intended to supplement, rather than duplicate, the existing tests that are carried out by the SVC upgrade procedure (for example, checking for unfixed errors in the error log). The upgrade test utility includes command-line parameters.

Prerequisites
You can install this utility only on clusters running SVC V4.1.0.0 or later.


Installation Instructions
To use the upgrade test utility, follow these steps:

1. Download the latest version of the upgrade test utility (IBM2145_INSTALL_svcupgradetest_V.R) using the download link:
http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

2. Install the utility package by using the standard SVC Console (GUI) or command-line interface (CLI) software upgrade procedures that are used to install any new software onto the cluster.

3. An example CLI command to install the package, after it has been uploaded to the cluster, is svcservicetask applysoftware -file IBM2145_INSTALL_svcupgradetest_n.nn.

4. Run the upgrade test utility by logging on to the SVC CLI and running svcupgradetest -v V.R.M.F, where V.R.M.F is the version number of the SVC release being installed.

5. For example, if you are upgrading to SVC V5.1.0.0, the command is svcupgradetest -v 5.1.0.0.

6. The output from the command either states that no problems have been found or directs you to details about any known issues that have been discovered on this cluster.

Example 8-2 shows the command to test an upgrade.

Example 8-2 Run an upgrade test

IBM_2145:ITSO-CLS2:admin>svcupgradetest
svcupgradetest version 4.11. Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

8.17.5 Upgrade procedure

To upgrade the SVC cluster software, perform the following steps:

1. Use the Run Maintenance Procedure option in the GUI and correct all open problems first, as described in 8.17.6, “Running maintenance procedures” on page 645.

2. Back up the SVC configuration, as described in 8.18.1, “Backup procedure” on page 669.

3. Back up the support data in case a problem during the upgrade renders a node unusable. This information can assist IBM Support in determining why the upgrade might have failed and can help with a resolution. Example 8-3 shows the necessary command; this command is only available in the CLI.

Example 8-3 Creating an SVC snapshot

IBM_2145:ITSO-CLS2:admin>svc_snap
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.100047.080617.002334.tgz

Note: You can ignore the error message “No such file or directory”.


4. Select Software Maintenance → List Dumps → Software Dumps, download the dump that was created in Example 8-3 on page 639, and store it in a safe place with the SVC configuration backup that you created previously (see Figure 8-339 and Figure 8-340).

Figure 8-339 Getting software dumps

Figure 8-340 Downloading software dumps

5. From the SVC Welcome window, click Service and Maintenance, and then click the Upgrade Software link.

6. In the Software Upgrade window that is shown in Figure 8-341 on page 641, you can either upload a new software upgrade file or list the upgrade files. Click Upload to upload the latest SVC cluster code.


Figure 8-341 Software Upgrade window

7. In the Software Upgrade (file upload) window (Figure 8-342), type or browse to the directory on your management workstation (for example, the Master Console) where you stored the latest code level, and click Upload.

Figure 8-342 Software upgrade (file upload)

8. The File Upload window (Figure 8-343) is displayed when the file has been uploaded. Click Continue.

Figure 8-343 File Upload window

9. The Select Upgrade File window (Figure 8-344 on page 642) lists the available software packages. Make sure that the package that you want to apply is selected. Click Apply.


Figure 8-344 Select Upgrade File window

10. In the Confirm Upgrade File window (Figure 8-345), click Confirm.

Figure 8-345 Confirm Upgrade File window

11. After this confirmation, the SVC checks whether there are any outstanding errors. If there are no errors, click Continue, as shown in Figure 8-346, to proceed to the next upgrade step. Otherwise, the Run Maintenance button is displayed, which you use to check the errors. For more information about how to use the maintenance procedures, see 8.17.6, “Running maintenance procedures” on page 645.

Figure 8-346 Check Outstanding Errors window

12. The Check Node Status window shows the in-use nodes with their current status, as shown in Figure 8-347 on page 643. Click Continue to proceed.


Figure 8-347 Check Node Status window

13. The Start Upgrade window opens. Click Start Software Upgrade to start the software upgrade, as shown in Figure 8-348.

Figure 8-348 Start Upgrade window

The upgrade starts by upgrading one node in each I/O Group.

14. The Software Upgrade Status window (Figure 8-349 on page 644) opens. Click Check Upgrade Status periodically. This process might take a while to complete. When the software upgrade has completed, you get a completion message, and the code level of the cluster and nodes shows the newly applied software level.


Figure 8-349 Software Upgrade Status window

15. During the upgrade process, you can only issue informational commands. All task
commands, such as the creation of a VDisk (as shown in Figure 8-350), are denied in
both the GUI and the CLI. All tasks, such as creating, modifying, mapping, and
deleting, are denied.

Figure 8-350 Denial of a task command during the software update

16. The new code is distributed and applied to each node in the SVC cluster. After installation,
each node is automatically restarted in turn.

Although unlikely, if the concurrent code load (CCL) fails, for example, if one node fails to
accept the new code level, the update on that one node is backed out, and the node
reverts to the original code level.

From 4.1.0 onward, the update simply waits for user intervention. For example, if there
are two nodes (A and B) in an I/O Group, node A has been upgraded successfully,
and then node B experiences a hardware failure, the upgrade ends with an I/O Group
that has a single node at the higher code level. After the hardware failure is repaired on node
B, the CCL completes the code upgrade process.

644 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>



Tip: Be patient. After the software update is applied, the first SVC node in the cluster
updates and installs the new SVC code version shortly afterward. If multiple I/O Groups (up
to four I/O Groups are possible) exist in an SVC cluster, the second node of the second I/O
Group loads the new SVC code and restarts with a 10-minute delay after the first node. A
30-minute delay between the update of the first node and the second node in an I/O Group
ensures that all paths, from a multipathing point of view, are available again.

An SVC cluster update with one I/O Group takes approximately one hour.

17. If you run into an error, go to the Analyze Error Log window. Search for Software Install
completed. Select Sort by date with the newest first, and then click Perform to list the
software entries near the top. For more information about working with the Analyze Error Log
window, see 8.17.10, "Analyzing the error log" on page 655.

It might also be worthwhile to capture information for IBM Support to help you diagnose
what went wrong.

You have now completed the tasks that are required to upgrade the SVC software. Click the X
icon in the upper-right corner of the display area to close the Software Upgrade window. Be
careful not to close the browser by mistake.
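The upgrade can also be driven from the CLI. The following sketch is illustrative only: the package file name is hypothetical, and the upload directory and the command prefix (svcservicetask as opposed to svctask) vary by code level, so verify the exact syntax in the Command-Line Interface User's Guide for your level before you use it:

scp IBM2145_INSTALL_5.1.0.0 admin@svccluster1:/home/admin/upgrade
svcservicetask applysoftware -file IBM2145_INSTALL_5.1.0.0

The first command copies the upgrade package from the management workstation to the cluster; the second command, run in an SSH session on the cluster, starts the same rolling node-by-node upgrade that the GUI performs.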

8.17.6 Running maintenance procedures

To run the maintenance procedures on the SVC cluster, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click Run
Maintenance Procedures.

2. Click Start Analysis, as shown in Figure 8-351, to analyze the cluster log and to guide
you through the maintenance procedures.

Figure 8-351 Maintenance Procedures window

3. This action generates a new error log file in the /dumps/elogs/ directory and displays the
list of errors, as shown in Figure 8-352 on page 646.



Figure 8-352 Maintenance error log with unfixed errors

4. Click the error number in the Error Code column in Figure 8-352 to see the explanation for
this error, as shown in Figure 8-353.

Figure 8-353 Maintenance: Error code description

5. To perform problem determination, click Continue. The details for the error appear and
might provide options to diagnose and repair the problem. In this case, it asks you to
check an external configuration and then to click Continue (Figure 8-354 on page 647).

Figure 8-354 Maintenance procedures: Fixing error

6. The SVC maintenance procedure has completed, and the error is fixed, as shown in
Figure 8-355.

Figure 8-355 Maintenance procedure: Fixing error

7. The discovery reported no new errors, so the entry in the error log is now marked as fixed
(as shown in Figure 8-356). Click OK.

Figure 8-356 Maintenance procedure: Fixed

8.17.7 Setting up error notification

To set up error notification, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click Set SNMP
Error Notifications (Figure 8-357).

Figure 8-357 Setting SNMP error notification

2. Select Add a Server and click Go.

3. In Figure 8-358, add the Server Name, the IP address of your SNMP Manager, the
(optional) Port, and the Community string to use.

IP address: Depending on which IP protocol addressing is configured, the window
displays options for IPV4, IPV6, or both.

Figure 8-358 Set the SNMP settings

4. The next window displays confirmation that the settings have been updated, as shown in
Figure 8-359 on page 649.

648 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Figure 8-359 Error notification settings confirmation<br />

5. The next window now displays <strong>the</strong> current status, as shown in Figure 8-360.<br />

Figure 8-360 Current event notification settings<br />

6. You can now click <strong>the</strong> X icon in <strong>the</strong> upper-right corner of <strong>the</strong> Set SNMP Event Notification<br />

window to close <strong>the</strong> window.<br />
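On SVC 5.1, the SNMP server definition is also available from the CLI. As a minimal sketch (the server name, IP address, and community string here are hypothetical; confirm the available flags with svctask mksnmpserver -h on your cluster):

svctask mksnmpserver -name snmp1 -ip 9.43.86.160 -community public
svcinfo lssnmpserver

The first command defines the trap destination, and the second command lists the configured SNMP servers so that you can verify the settings that Figure 8-360 shows in the GUI.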

8.17.8 Setting syslog event notification

Starting with SVC 5.1, you can send event notifications to a defined syslog server. The SVC
provides support for syslog, in addition to e-mail and SNMP traps.

Figure 8-361 on page 650, Figure 8-362 on page 650, and Figure 8-363 on page 651 show
the sequence of windows to use to define a syslog server.



Figure 8-361 Adding a syslog server

Figure 8-362 shows the syslog server definition window.

Figure 8-362 Syslog server definition

Figure 8-363 Syslog server confirmation

The syslog messages can be sent in either compact message format or full message format.
Example 8-4 shows a compact format syslog message.

Example 8-4 Compact syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100

Example 8-5 shows a full format syslog message.

Example 8-5 Full format syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070 #Description=Node
CPU fan failed #ClusterName=SVCCluster1 #Timestamp=Wed Jul 02 08:00:00 2008 BST
#ObjectType=Node #ObjectName=Node1 #CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2
#NodeID=2 #MachineType=21454F2 #SerialNumber=1234567 #SoftwareVersion=5.1.0.0
(build 8.14.0805280000) #FRU=fan 24P1118, system board 24P1234
#AdditionalData(0->63)=00000000210000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000
#AdditionalData(64-127)=000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000
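A syslog server can also be defined from the CLI on SVC 5.1. As a minimal sketch (the name and IP address are hypothetical; check svctask mksyslogserver -h for the full set of options on your code level):

svctask mksyslogserver -name syslog1 -ip 9.43.86.161
svcinfo lssyslogserver

The first command creates the same definition as the GUI sequence in Figure 8-361 through Figure 8-363, and the second command verifies it.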

8.17.9 Set e-mail features

The SVC GUI supports the SVC e-mail error notification service. The SVC uses the e-mail
server to send event notification and inventory e-mails to e-mail users. It can transmit any
combination of error, warning, and informational notification types.

When you run the e-mail service for the first time, the Web pages guide you through the
required steps:

► Set the e-mail server and contact details
► Test the e-mail service

Figure 8-364 on page 652 shows the set e-mail notification window.



Figure 8-364 Setting e-mail notification window

Figure 8-365 shows how to insert contact details.

Figure 8-365 Inserting contact details

Figure 8-366 shows the confirmation window for the e-mail contact details.

Figure 8-366 Contact details confirmation

Figure 8-367 shows how to configure the Simple Mail Transfer Protocol (SMTP) server in the
SVC cluster.

Figure 8-367 SMTP server definition

Figure 8-368 shows the SMTP server definition confirmation.

Figure 8-368 SMTP definition confirmation

Figure 8-369 on page 654 shows how to define the support e-mail user to which SVC
notifications will be sent.

Figure 8-369 E-mail notification user

Figure 8-370 shows how to start the e-mail service.

Figure 8-370 Starting the e-mail service

Figure 8-371 shows how to start the test e-mail process.

Figure 8-371 Sending test e-mail

Figure 8-372 on page 655 shows how to send a test e-mail to all users.

Figure 8-372 Sending a test e-mail to all users

Figure 8-373 shows how to confirm the test e-mail notification.

Figure 8-373 Confirming the test e-mail notification
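The e-mail service can also be configured from the CLI on SVC 5.1. The following sketch mirrors the GUI sequence (the SMTP server IP address and the e-mail address are hypothetical, and flag names can vary by code level, so confirm each command with its -h help before use):

svctask mkemailserver -ip 9.43.86.162
svctask mkemailuser -address support@example.com -error on -warning on -info off
svctask startemail
svctask testemail -all

The first two commands define the SMTP server and a notification user, startemail activates the service, and testemail sends the test message that Figure 8-371 and Figure 8-372 show in the GUI.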

8.17.10 Analyzing the error log

The following types of events and errors are logged in the error log:

► Events: State changes that are detected by the cluster software and logged for
informational purposes. Events are recorded in the cluster error log.

► Errors: Hardware or software problems that are detected by the cluster software and that
require repair. Errors are recorded in the cluster error log.

► Unfixed errors: Errors that were detected and recorded in the cluster error log and that
have not yet been corrected or repaired.

► Fixed errors: Errors that were detected and recorded in the cluster error log and that were
subsequently corrected or repaired.

To display the error log for analysis, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click Analyze
Error Log.

2. From the Error Log Analysis window (Figure 8-374 on page 656), you can choose either
Process or Clear Log:



Figure 8-374 Analyzing the error log

a. Select the appropriate radio buttons and click Process to display the log for analysis.
The Analysis Options and Display Options allow you to filter the results of your log
inquiry to reduce the output.

b. You can display the whole log, or you can filter the log so that only errors, events, or
unfixed errors are displayed. You can also sort the results by selecting the appropriate
display options. For example, you can sort the errors by error priority (lowest number =
most serious error) or by date. If you sort by date, you can specify whether the newest
or oldest error displays at the top of the table. You can also specify the number of
entries that you want to display on each page of the table.

Figure 8-375 on page 657 shows an example of the error logs listed.

Figure 8-375 Analyzing Error Log: Process

c. Click an underlined sequence number to see the detailed log of this error (Figure 8-376
on page 658).



Figure 8-376 Analyzing Error Log: Detailed Error Analysis window

d. You can optionally display detailed sense code data by clicking Sense Expert, as
shown in Figure 8-377 on page 659. Click Return to go back to the Detailed Error
Analysis window.

Figure 8-377 Decoding Sense Data window

e. If the log entry is an error, you can optionally mark the error as fixed, which does not
cause any other checks or processes to run. We recommend that you perform this action
as a maintenance task instead (see 8.17.6, "Running maintenance procedures" on
page 645).

f. Click Clear Log at the bottom of the Error Log Analysis window (see Figure 8-374 on
page 656) to clear the log. If the error log contains unfixed errors, a warning message
is displayed when you click Clear Log.

3. You can now click the X icon in the upper-right corner of the Analyze Error Log window.
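Comparable error log tasks exist in the CLI. As a minimal sketch (the sequence number is hypothetical; confirm each command with its -h help on your code level):

svctask dumperrlog -prefix errlog

This command writes a new error log file to the /dumps/elogs/ directory. To mark the error with sequence number 120 as fixed, or to clear the entire error log, commands similar to the following commands apply:

svctask cherrstate -sequencenumber 120
svctask clearerrlog -force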

8.17.11 License settings

To change the license settings, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click License
Settings, as shown in Figure 8-378 on page 660.



Figure 8-378 License setting

2. Now, you can choose between Capacity Licensing and Physical Disk Licensing.

Figure 8-379 shows the Physical Disk Licensing Settings window.

Figure 8-379 Physical Disk Licensing Settings window

Figure 8-380 on page 661 shows the Capacity Licensing Settings window.

Figure 8-380 Capacity License Setting window

3. Consult your license before you make changes in the License Settings window
(Figure 8-381). If you have purchased additional features (for example, FlashCopy or
Global Mirror) or if you have increased the capacity of your license, make the appropriate
changes. Then, click Update License Settings.

Figure 8-381 License Settings window

4. You now see a license confirmation window, as shown in Figure 8-382 on page 662.
Review this window and ensure that you are in compliance. If you are in compliance, click
I Agree to make the requested changes take effect.



Figure 8-382 License agreement

5. You return to the License Settings window to review your changes (Figure 8-383). Make
sure that your changes are reflected.

Figure 8-383 Feature settings update

6. You can now click the X icon in the upper-right corner of the License Settings window.

8.17.12 Viewing the license settings log

To view the feature log, which registers the events that are related to the SVC-licensed
features, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click View
License Settings Log.

2. The View License Settings Log window (Figure 8-384 on page 663) opens. It displays the
current license settings and a log of when changes were made.

Figure 8-384 Feature log

3. You can now click the X icon in the upper-right corner of the View License Settings Log
window.

8.17.13 Dumping the cluster configuration

To dump your cluster configuration, click Service and Maintenance and, then, click Dump
Configuration, as shown in Figure 8-385.

Figure 8-385 Dumping Cluster Configuration window

8.17.14 Listing dumps

To list the dumps that were generated, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click List Dumps.

2. In the List Dumps window (Figure 8-386 on page 664), you see several dumps and log
files that were generated over time on this node. They include the configuration dump that
we generated in Example 8-3 on page 639. Click any of the available links (the underlined
text in the table under the Dump Type heading) to go to another window that displays the
available dumps. To see the dumps on the other node, you must click Check Other Nodes.

Note: By default, the dump and log information that is displayed is available from the
configuration node. In addition to these files, each node in the SVC cluster keeps a
local software dump file. Occasionally, other dumps are stored on the nodes. Click Check
Other Nodes at the bottom of the List Dumps window (Figure 8-386) to see which
dumps or logs exist on other nodes in your cluster.

Figure 8-386 List Dumps

3. Figure 8-387 shows the list of dumps from the partner node. You can see a list of the
dumps by clicking one of the Dump Types.

Figure 8-387 List Dumps from the partner node

4. To copy a file from this partner node to the configuration node, click the dump type, and
then click the file that you want to copy, as shown in Figure 8-388 on page 665.

664 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Figure 8-388 Copy dump files

5. You see a confirmation window that shows that the dumps are being retrieved. You can
either click Continue to continue working with the other node or click Cancel to go back to
the original node (Figure 8-389).

Figure 8-389 Retrieve dump confirmation

6. After all of the necessary files are copied to the SVC configuration node, click Cancel to
finish the copy operation, and click Cancel again to return to the SVC configuration node.
Now, for example, if you click the Error Logs link, you see information similar to that shown
in Figure 8-390 on page 666.

Figure 8-390 List Dumps: Error Logs

7. From this window, you can perform either of the following tasks:

– Click any of the available log file links (indicated by the underlined text) to display the
log in complete detail.

– Delete one or all of the dump or log files. To delete all of them, click Delete All. To delete
several error log files, select the check boxes to the right of the files, and click Delete.

8. You can now click the X icon in the upper-right corner of the List Dumps window.
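Dump handling is also possible from the CLI. As a minimal sketch (the node ID is hypothetical, and the listing command differs by dump type, so check the svcinfo command reference for your level):

svcinfo ls2145dumps
svctask cpdumps -prefix /dumps 2

The first command lists the dump files on the configuration node, and the second command copies the dump files from node 2 to the configuration node, which is the CLI equivalent of the Check Other Nodes and copy steps that were shown previously.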

8.17.15 Setting up a quorum disk

After the process of node discovery, the SVC cluster automatically chooses three MDisks as
quorum disks. Each disk is assigned an index number of 0, 1, or 2.

In the event that half of the nodes in a cluster are missing for any reason, the other half of the
cluster nodes cannot simply assume that the nodes are "dead". It might mean that the cluster
state information is not being successfully passed between nodes for a reason (a network
failure, for example). For this reason, if half of the cluster disappears from the view of the
other half, each surviving half attempts to lock the first quorum disk (index 0). If quorum disk
index 0 is not available to any node, the next disk (index 1) becomes the quorum, and so on.

The half of the cluster that is successful in locking the quorum disk becomes the exclusive
processor of I/O activity. It attempts to reform the cluster with any nodes that it can still see.
The other half stops processing I/O, which provides a tie-breaker solution and ensures that
both halves of the cluster do not continue to operate.

In the case that both halves can see the quorum disk, they use the quorum disk to
communicate with each other, and they decide which half becomes the exclusive processor
of I/O activity.

666 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


If, for any reason, you want to set your own quorum disks (for example, if you have installed
additional back-end storage and you want to move one or two quorum disks onto this newly
installed back-end storage subsystem), complete the following tasks:

1. From the Welcome window, select Work with Managed Disks, and then select Quorum
Disks, which takes you to the window that is shown in Figure 8-391.

Figure 8-391 Selecting the quorum disks

2. We can now select our quorum disks and identify which disk will be the active quorum
disk.

3. To change the active quorum disk, as shown in Figure 8-392, we start by selecting another
MDisk to be the active quorum disk. We select Set Active Quorum Disk and click Go.

Figure 8-392 Selecting a new active quorum disk

4. We confirm that we want to change the active quorum disk by clicking Set Active Quorum
Disk, as shown in Figure 8-393.

Figure 8-393 Confirming the change of the active quorum disk

5. After we have changed the active quorum disk, we can see that our previous active
quorum disk is in the initializing state, as shown in Figure 8-394 on page 668.

Figure 8-394 Quorum disk initializing

6. Shortly afterward, the change completes successfully, as shown in Figure 8-395.

Figure 8-395 New quorum disk is now active

Quorum disks are only created if at least one MDisk is in managed mode (that is, it was
formatted by the SVC with extents in it). Otherwise, a 1330 cluster error message is displayed
on the SVC front panel. You can correct this error only by placing MDisks in managed mode.

8.18 Backing up the SVC configuration

The SVC configuration data is stored on all of the nodes in the cluster. It is specially hardened
so that, in normal circumstances, the SVC never loses its configuration settings. However, in
exceptional circumstances, this data can become corrupted or lost.

This section details the tasks that you can perform to save the configuration data from an
SVC configuration node and restore it. The following configuration information is backed up:

► Storage subsystems
► Hosts
► Managed disks (MDisks)
► MDGs
► SVC nodes
► VDisks
► VDisk-to-host mappings
► FlashCopy mappings
► FlashCopy consistency groups
► Mirror relationships
► Mirror consistency groups

668 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Backing up the cluster configuration enables you to restore your cluster configuration in the
event that it is lost. Only the data that describes the cluster configuration is backed up; to
back up your application data, you need to use the appropriate backup methods.

To begin the restore process, consult IBM Support to determine the cause or reason why you
cannot access your original configuration data.

Perform or verify these prerequisites to have a successful backup:

► All nodes in the cluster must be online.

► No object name can begin with an underscore (_).

► Do not run any independent operations that might change the cluster configuration while
the backup command runs.

► Do not make any changes to the fabric or cluster between backup and restore. If changes
are made, back up your configuration again, or you might not be able to restore it later.

Important: We recommend that you make a backup of the SVC configuration data after
each major change in the environment, such as defining or changing a VDisk,
VDisk-to-host mappings, and so on.

The svc.config.backup.xml file is stored in the /tmp folder on the configuration node and
must be copied to an external and secure place for backup purposes.

Important: We strongly recommend that you change the default names of all objects to
non-default names. For objects with a default name, a warning is issued, and the object is
restored with its original name with "_r" appended to it.

8.18.1 Backup procedure

To back up the SVC configuration data, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click Backup
Configuration.

2. In the Backing up a Cluster Configuration window (Figure 8-396), click Backup.

Figure 8-396 Backing up Cluster Configuration data

3. After the configuration backup is successful, you see messages similar to the messages
that are shown in Figure 8-397 on page 670. Make sure that you read, understand, act
upon, and document the warning messages, because they can affect the restore
procedure.



Figure 8-397 Configuration backup successful messages and warnings

4. You can now click the X icon in the upper-right corner of the Backing up a Cluster
Configuration window.

Change the default names: To avoid getting the CMMVC messages that are shown in
Figure 8-397, you need to replace all of the default names, for example, mdisk1, vdisk1,
and so on.
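The same backup can be produced from the CLI, which is convenient for scripting. As a minimal sketch (the cluster name and target folder are hypothetical; pscp is the PuTTY secure copy client, and an equivalent scp command also works):

svcconfig backup
pscp admin@svccluster1:/tmp/svc.config.backup.xml C:\SVCbackups\

The first command, run in an SSH session on the cluster, writes svc.config.backup.xml to the /tmp folder on the configuration node. The second command, run on the management workstation, copies the file to an external and secure place, as recommended previously.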

8.18.2 Saving the SVC configuration

To save the SVC configuration in a safe place, follow these steps:

► From the List Dumps window, select Software Dumps, select the configuration dump that
you want to save, and right-click to save it.

Figure 8-398 on page 671 shows saving a software dump on the Software Dumps window.

670 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Figure 8-398 Software Dumps list with options

After you have saved your configuration file, it is presented to you as an XML file.
Figure 8-399 shows an SVC backup configuration file example.

Figure 8-399 SVC backup configuration file example


8.18.3 Restoring the SVC configuration

It is extremely important that you perform the configuration backup that is described in 8.18.1,
"Backup procedure" on page 669 periodically and every time that you change the
configuration of your cluster.

Carry out the restore procedure only under the direction of IBM Level 3 support.

8.18.4 Deleting the configuration backup files

This section details the tasks that you can perform to delete the configuration backup files
from the default folder on the SVC Master Console. You can perform this task if you have
already copied the files to another external and secure place.

To delete the SVC configuration backup files, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click Delete
Backup.

2. In the Deleting a Cluster Configuration window (Figure 8-400), click OK to confirm the
deletion of the C:\Program Files\IBM\svcconsole\cimom\backup\SVCclustername folder
(where SVCclustername is the SVC cluster name on which you are working) on the SVC
Master Console and all of its contents.

Figure 8-400 Deleting a Cluster Configuration window

3. Click Delete to confirm the deletion of the configuration backup data. See Figure 8-401.

Figure 8-401 Deleting a Cluster Configuration confirmation message

4. The cluster configuration backup is now deleted.

8.18.5 Fabrics

From the Fabrics link in the Service and Maintenance window, you can view the fabrics from
the SVC's point of view. This function might be useful when you debug a SAN problem.

Figure 8-402 on page 673 shows a Viewing Fabrics example.

672 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Figure 8-402 Viewing Fabrics example

8.18.6 Common Information Model object manager log configuration

Because the Common Information Model object manager (CIMOM) has been moved from the
Hardware Management Console (System Storage Productivity Center) to the SVC cluster
starting with SVC 5.1, you can configure the SVC CIMOM log by using the GUI to set the
detail logging level.

Figure 8-403 shows the Configuring CIMOM Log window.

Figure 8-403 CIMOM Configuration Log window

We have completed our discussion of the service and maintenance operational tasks.


674 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Chapter 9. Data migration

In this chapter, we explain how to migrate from a conventional storage infrastructure to a
virtualized storage infrastructure by applying the IBM System Storage SAN Volume Controller
(SVC). We also explain how the SVC can be phased out of a virtualized storage
infrastructure, for example, after a trial period, when you want to use the SVC only as a data
mover because it best meets your data migration performance requirements, or because it
gives the best service level agreement (SLA) to your application during the data migration.

Moreover, we show how to migrate from a fully allocated virtual disk (VDisk) to a
space-efficient VDisk by using the VDisk Mirroring feature and the space-efficient volume
together.

We also show an example of using intracluster Metro Mirror to migrate data.


9.1 Migration overview

The SVC allows you to change the mapping of VDisk extents to managed disk (MDisk)
extents without interrupting host access to the VDisk. This functionality is utilized when
performing VDisk migrations, and it can be performed for any VDisk that is defined on the
SVC.

This functionality can be used for these tasks:

► Redistributing VDisks, and therefore the workload, within an SVC cluster across back-end
storage:

– Moving workload onto newly installed storage
– Moving workload off of old or failing storage, ahead of decommissioning it
– Moving workload to rebalance a changed workload

► Migrating data from older back-end storage to SVC-managed storage

► Migrating data from one back-end controller to another back-end controller by using the
SVC as a data block mover and afterward removing the SVC from the SAN

► Migrating data from managed mode back into image mode prior to removing the SVC from
a SAN

9.2 Migration operations

You can perform migration at either the VDisk or the extent level, depending on the purpose
of the migration. These migration activities are supported:

► Migrating extents within a Managed Disk Group (MDG), redistributing the extents of a
given VDisk on the MDisks in the MDG

► Migrating extents off of an MDisk, which is removed from the MDG, to other MDisks in the
MDG

► Migrating a VDisk from one MDG to another MDG

► Migrating a VDisk to change the virtualization type of the VDisk to image

► Migrating a VDisk between I/O Groups

9.2.1 Migrating multiple extents (within an MDG)

You can migrate a number of VDisk extents at one time by using the migrateexts command.
When executed, this command migrates a given number of extents from the source MDisk,
where the extents of the specified VDisk reside, to a defined target MDisk that must be part of
the same MDG.

You can specify the number of migration threads to be used in parallel (from 1 to 4).

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is
migrated, while the MDisk access mode transitions from image to managed.
676 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


The syntax of the command-line interface (CLI) command is:

svctask migrateexts -source src_mdisk_id | src_mdisk_name -exts num_extents
-target target_mdisk_id | target_mdisk_name [-threads number_of_threads]
-vdisk vdisk_id | vdisk_name

The parameters for the CLI command are defined this way:

► -vdisk: Specifies the VDisk ID or name to which the extents belong.

► -source: Specifies the source MDisk ID or name on which the extents currently reside.

► -exts: Specifies the number of extents to migrate.

► -target: Specifies the target MDisk ID or name onto which the extents are to be migrated.

► -threads: An optional parameter that specifies the number of threads to use while
migrating these extents, from 1 to 4.

9.2.2 Migrating extents off of an MDisk that is being deleted

When an MDisk is deleted from an MDG by using the rmmdisk -force command, any
occupied extents on the MDisk are migrated off of the MDisk (to other MDisks in the MDG)
prior to its deletion.

In this case, the extents that need to be migrated are moved onto the set of MDisks that are
not being deleted, and the extents are distributed. This statement holds true if multiple
MDisks are being removed from the MDG at the same time, and MDisks that are being
removed are not candidates for supplying free extents to the free extent allocation algorithm.

If a VDisk uses one or more extents that need to be moved as a result of an rmmdisk
command, the virtualization type for that VDisk is set to striped (if it was previously sequential
or image).

If the MDisk is operating in image mode, the MDisk transitions to managed mode while the
extents are being migrated, and upon deletion, it transitions to unmanaged mode.

The syntax of the CLI command follows this format:

svctask rmmdisk -mdisk mdisk_id_list | mdisk_name_list [-force]
mdisk_group_id | mdisk_group_name

The parameters for the CLI command are defined this way:

► -mdisk: Specifies one or more MDisk IDs or names to delete from the group.

► -force: Migrates any data that belongs to other VDisks before removing the MDisk.

Using the -force flag: If the -force flag is not supplied and VDisks occupy extents on one
or more of the MDisks that are specified, the command fails.

When the -force flag is supplied and VDisks exist that are made from extents on one
or more of the specified MDisks, all extents on the MDisks are migrated to the other
MDisks in the MDG, if there are enough free extents in the MDG. The deletion of the
MDisks is postponed until all extents are migrated, which can take time. If there are
insufficient free extents in the MDG, the command fails.

When the -force flag is supplied, the command completes asynchronously.



9.2.3 Migrating a VDisk between MDGs

An entire VDisk can be migrated from one MDG to another MDG by using the migratevdisk
command. A VDisk can be migrated between MDGs regardless of the virtualization type
(image, striped, or sequential), although it transitions to the striped virtualization type. The
command varies depending on the type of migration, as shown in Table 9-1.

Table 9-1 Migration type

MDG-to-MDG type        Command
Managed to managed     migratevdisk
Image to managed       migratevdisk
Managed to image       migratetoimage
Image to image         migratetoimage

The syntax of the migratevdisk CLI command is this format:

svctask migratevdisk -mdiskgrp mdisk_group_id | mdisk_group_name [-threads
number_of_threads -copy_id] -vdisk vdisk_id | vdisk_name

The parameters for the CLI command are defined this way:

► -vdisk: Specifies the VDisk ID or name to migrate into another MDG.

► -mdiskgrp: Specifies the target MDG ID or name.

► -threads: An optional parameter that specifies the number of threads to use while
migrating these extents, from 1 to 4.

► -copy_id: Required if the specified VDisk has more than one copy.

The syntax of the migratetoimage CLI command is this format:

svctask migratetoimage -copy_id -vdisk source_vdisk_id | name -mdisk
unmanaged_target_mdisk_id | name -mdiskgrp managed_disk_group_id | name
[-threads number_of_threads]

The parameters for the CLI command are:

► -vdisk: Specifies the name or ID of the source VDisk to be migrated.

► -copy_id: Required if the specified VDisk has more than one copy.

► -mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk
must be unmanaged and large enough to contain the data of the disk that is being
migrated.)

► -mdiskgrp: Specifies the MDG into which the MDisk must be placed after the migration has
completed.

► -threads: An optional parameter that specifies the number of threads to use while
migrating these extents, from 1 to 4.

678 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


In Figure 9-1, we illustrate the V3 VDisk migrating from MDG 1 to MDG 2.

Rule: For the migration to be acceptable, the source and destination MDGs must have the
same extent size.

Figure 9-1 Managed VDisk migration to another MDG

Extents are allocated to the migrating VDisk, from the set of MDisks in the target MDG, by
using the extent allocation algorithm.

The process can be prioritized by specifying the number of threads to use while migrating;
using only one thread puts the least background load on the system. If a large number of
extents are being migrated, you can specify the number of threads that will be used in
parallel (from 1 to 4).

The offline rules apply to both MDGs; therefore, referring back to Figure 9-1, if any of the M4,
M5, M6, or M7 MDisks go offline, the V3 VDisk goes offline. If the M4 MDisk goes offline, V3
and V5 go offline, but V1, V2, V4, and V6 remain online.

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is
migrated, while the MDisk access mode transitions from image to managed.

For the duration of the move, the VDisk is listed as being a member of the original MDG. For
the purposes of configuration, the VDisk moves to the new MDG instantaneously at the end
of the migration.



9.2.4 Migrating the VDisk to image mode

The facility to migrate a VDisk to an image mode VDisk can be combined with the ability to
migrate between MDGs. The source for the migration can be a managed mode or an image
mode VDisk. This leads to four possibilities:

► Migrate image mode to image mode within an MDG.
► Migrate managed mode to image mode within an MDG.
► Migrate image mode to image mode between MDGs.
► Migrate managed mode to image mode between MDGs.

These conditions must apply to be able to migrate:

► The destination MDisk must be greater than or equal to the size of the VDisk.

► The MDisk that is specified as the target must be in an unmanaged state at the time that
the command is run.

► If the migration is interrupted by a cluster recovery, the migration resumes after the
recovery completes.

► If the migration involves moving between MDGs, the VDisk behaves as described in 9.2.3,
"Migrating a VDisk between MDGs" on page 678.

The syntax of the CLI command is this format:

svctask migratetoimage -copy_id -vdisk source_vdisk_id | name -mdisk
unmanaged_target_mdisk_id | name -mdiskgrp managed_disk_group_id | name
[-threads number_of_threads]

The parameters for the CLI command are defined this way:

► -copy_id: Required if the specified VDisk has more than one copy.

► -vdisk: Specifies the name or ID of the source VDisk to be migrated.

► -mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk
must be unmanaged and large enough to contain the data of the disk that is being
migrated.)

► -mdiskgrp: Specifies the MDG into which the MDisk must be placed after the migration has
completed.

► -threads: An optional parameter that specifies the number of threads to use while
migrating these extents, from 1 to 4.

Regardless of the mode in which the VDisk starts, it is reported as being in managed mode
during the migration. Also, both of the MDisks that are involved are reported as being in
image mode during the migration. Upon completion of the command, the VDisk is classified
as an image mode VDisk.
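For example (the object names are hypothetical), the following command migrates a single-copy VDisk named VDisk_B onto the unmanaged MDisk mdisk9 and places mdisk9 into MDG_IMG when the migration completes:

svctask migratetoimage -vdisk VDisk_B -mdisk mdisk9 -mdiskgrp MDG_IMG

Because VDisk_B has only one copy in this example, the copy ID parameter is omitted.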

9.2.5 Migrating a VDisk between I/O Groups

A VDisk can be migrated between I/O Groups by using the svctask chvdisk command. This
command is only supported if the VDisk is not in a FlashCopy mapping or a Remote Copy
relationship.

To move a VDisk between I/O Groups, the cache must be flushed. The SVC attempts to
destage all write data for the VDisk from the cache during the I/O Group move. This flush
fails if data has been pinned in the cache for any reason (such as an MDG being offline). By
default, this failed flush causes the migration between I/O Groups to fail, but this behavior
can be overridden by using the -force flag. If the -force flag is used and the SVC is unable to
destage all write data from the cache, the result is that the contents of the VDisk are
corrupted by the loss of the cached data. During the flush, the VDisk operates in cache
write-through mode.

Important: Do not move a VDisk to an offline I/O Group under any circumstance. You must<br />

ensure that <strong>the</strong> I/O Group is online before you move <strong>the</strong> VDisks to avoid any data loss.<br />

You must quiesce host I/O before the migration for two reasons:
► If there is significant data in the cache that takes a long time to destage, the command line will time out.
► Subsystem Device Driver (SDD) vpaths that are associated with the VDisk are deleted before the VDisk move takes place in order to avoid data corruption. Data corruption can therefore occur if I/O is still ongoing at a particular logical unit number (LUN) ID when it is reused for another VDisk.

When migrating a VDisk between I/O Groups, you cannot specify the preferred node; the preferred node is assigned by the SVC.

The syntax of the CLI command is:

svctask chvdisk [-name new_name_arg] [-iogrp io_group_id | io_group_name [-force]] [-node node_id | node_name] [-rate throttle_rate [-unitmb]] [-udid vdisk_udid] [-warning disk_size | disk_size_percentage] [-autoexpand on | off [-copy id]] [-primary copy_id] [-syncrate percentage] [-unit b | kb | mb | gb | tb | pb] vdisk_name | vdisk_id

For detailed information about the chvdisk command parameters, refer to the SVC command-line interface help by typing this command:

svctask chvdisk -h

Or, refer to the Command-Line Interface User's Guide, SC26-7903-05.

The chvdisk command modifies a single property of a VDisk. To change the VDisk name and to modify the I/O Group, for example, you must issue the command twice. A VDisk that is a member of a FlashCopy or Remote Copy relationship cannot be moved to another I/O Group, and you cannot override this restriction by using the -force flag.
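For example, renaming a VDisk and then moving it to another I/O Group takes two separate invocations. This sketch uses hypothetical VDisk and I/O Group names:

svctask chvdisk -name W2k8_Data_new W2k8_Data
svctask chvdisk -iogrp io_grp1 W2k8_Data_new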

9.2.6 Monitoring the migration progress

To monitor the progress of ongoing migrations, use the CLI command:

svcinfo lsmigrate

To determine the extent allocation of MDisks and VDisks, use the following commands:
► To list the VDisk IDs and the corresponding number of extents that the VDisks occupy on the queried MDisk, use the following CLI command:
svcinfo lsmdiskextent <mdiskname>
► To list the MDisk IDs and the corresponding number of extents that the queried VDisks occupy on the listed MDisks, use the following CLI command:
svcinfo lsvdiskextent <vdiskname>
► To list the number of available free extents on an MDisk, use the following CLI command:
svcinfo lsfreeextents <mdiskname>
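For illustration, the output of svcinfo lsmigrate for a single in-flight MDG migration resembles the following; these field names are taken from the SVC V5.1 CLI documentation, but the values shown here are hypothetical:

migrate_type MDisk_Group_Migration
progress 32
migrate_source_vdisk_index 3
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0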



9.3 Functional overview of migration

This section describes the functional view of data migration.

Important: After a migration has been started, there is no way for you to stop the migration. The migration runs to completion unless it is stopped or suspended by an error condition, or unless the VDisk being migrated is deleted.

9.3.1 Parallelism

You can perform several of the following activities in parallel.

Per cluster
An SVC cluster supports up to 32 active concurrent instances of members of the set of migration activities:
► Migrate multiple extents
► Migrate between MDGs
► Migrate off of a deleted MDisk
► Migrate to image mode

These high-level migration tasks operate by scheduling single extent migrations:
► Up to 256 single extent migrations can run concurrently. This number is made up of the single extent migrations that result from the operations previously listed.
► The Migrate Multiple Extents and Migrate Between MDGs commands support a flag that allows you to specify the number of "threads" to use, between 1 and 4. This parameter affects the number of extents that will be concurrently migrated for that migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that operation, subject to other resource constraints (see the example that follows).
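For example, the following sketch migrates extents between two MDisks with up to four concurrent single extent migrations; the object names are hypothetical, and -exts specifies the number of extents to migrate:

svctask migrateexts -source mdisk0 -exts 64 -target mdisk1 -threads 4 -vdisk W2k8_Data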

Per MDisk
The SVC supports up to four concurrent single extent migrations per MDisk. This limit does not take into account whether the MDisk is the source or the destination. If more than four single extent migrations are scheduled for a particular MDisk, further migrations are queued, pending the completion of one of the currently running migrations.


9.3.2 Error handling

The migration is suspended or stopped if a medium error occurs on a read from the source while the destination's medium error table is full, if repeated I/O errors occur on reads from the source, or if the MDisks go offline repeatedly.

The migration will be suspended if any of the following conditions exist; otherwise, it will be stopped:
► The migration is between MDGs and has progressed beyond the first extent. These migrations are always suspended rather than stopped, because stopping a migration in progress leaves a VDisk spanning MDGs, which is not a valid configuration other than during a migration.
► The migration is a Migrate to Image Mode (even if it is processing the first extent). These migrations are always suspended rather than stopped, because stopping a migration in progress leaves the VDisk in an inconsistent state.
► A migration is waiting for a metadata checkpoint that has failed.

If a migration is stopped, any migrations that are queued awaiting the use of the MDisk for migration are now considered. If, however, a migration is suspended, the migration continues to use resources, and so another migration is not started.

The SVC attempts to resume the migration if the error log entry is marked as fixed by using the CLI or the GUI. If the error condition no longer exists, the migration will proceed. The migration might resume on a node other than the node that started the migration.

9.3.3 Migration algorithm

This section describes the effect of the migration algorithm.

Chunks
Regardless of the extent size for the MDG, data is migrated in units of 16 MB. In this description, this unit is referred to as a chunk.

The following algorithm is used to migrate an extent:
1. Pause all I/O on the source MDisk on all nodes in the SVC cluster (pause means to queue all new I/O requests in the virtualization layer in the SVC and to wait for all outstanding requests to complete). The I/O to other extents is unaffected.
2. Unpause (resume) I/O on all of the source MDisk extents, apart from writes to the specific chunk that is being migrated. Writes to the extent are mirrored to the source and the destination.
3. On the node that is performing the migration, for each 256 KB section of the chunk:
– Synchronously read 256 KB from the source.
– Synchronously write 256 KB to the target.
4. After the entire chunk has been copied to the destination, repeat the process for the next chunk within the extent.
5. After the entire extent has been migrated, pause all I/O to the extent being migrated, checkpoint the extent move to on-disk metadata, redirect all further reads to the destination, and stop mirroring writes (write only to the destination).
6. If the checkpoint fails, the I/O is unpaused.


During the migration, the extent can be divided into three regions, as shown in Figure 9-2. Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the virtualization layer, waiting for the chunk to be copied. Reads to Region A are directed to the destination, because this data has already been copied. Writes to Region A are written to both the source and the destination extent in order to maintain the integrity of the source extent. Reads and writes to Region C are directed to the source, because this region has yet to be migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During this time, all writes to the chunk from higher layers in the software stack (such as cache destages) are held back. If the back-end storage is operating with significant latency, it is possible that this operation might take minutes to complete, which can have an adverse effect on the overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is still active after one minute, the migration is paused for 30 seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the chunk is resumed. This algorithm is repeated as many times as necessary to complete the migration of the chunk.

Figure 9-2 Migrating an extent (managed disk extents, not to scale). Region A has already been copied, and its reads and writes go to the destination; Region B is the 16 MB chunk that is copying, and its reads and writes are paused; Region C has yet to be copied, and its reads and writes go to the source.

The SVC guarantees read stability during data migrations, even if the data migration is stopped by a node reset or a cluster shutdown. This read stability is possible because the SVC disallows writes on all nodes to the area being copied, and upon a failure, the extent migration is restarted from the beginning.

At the conclusion of the operation, we will have these results:
► Extents are migrated in 16 MB chunks, one chunk at a time.
► Chunks are either copied, in progress, or not copied.
► When the extent is finished, its new location is saved.

Figure 9-3 on page 685 shows the data migration and write operation relationship.

Figure 9-3 Migration and write operation relationship

9.4 Migrating data from an image mode VDisk

This section describes migrating data from an image mode VDisk to a fully managed VDisk.

9.4.1 Image mode VDisk migration concept

First, we describe the concepts associated with this operation.

MDisk modes
There are three MDisk modes:
► Unmanaged MDisk: An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged MDisk is not associated with any VDisks and has no metadata stored on it. The SVC will not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.
► Image mode MDisk: Image mode provides a direct block-for-block translation from the MDisk to the VDisk with no virtualization. Image mode VDisks have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is associated with exactly one VDisk.
► Managed mode MDisk: Managed mode MDisks contribute extents to the pool of available extents in the MDG. Zero or more managed mode VDisks might use these extents.
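The mode of each MDisk is reported in the mode column of svcinfo lsmdisk. As a sketch, the following filtered query lists only the unmanaged MDisks, which are the candidates for creating image mode VDisks; the exact output depends on your configuration:

svcinfo lsmdisk -filtervalue mode=unmanaged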

Transitions between the modes
The following state transitions can occur to an MDisk (see Figure 9-4 on page 686):
► Unmanaged mode to managed mode. This transition occurs when an MDisk is added to an MDisk group, which makes the MDisk eligible for the allocation of data and metadata extents.
► Managed mode to unmanaged mode. This transition occurs when an MDisk is removed from an MDisk group.
► Unmanaged mode to image mode. This transition occurs when an image mode MDisk is created on an MDisk that was previously unmanaged. It also occurs when an MDisk is used as the target for a migration to image mode.
► Image mode to unmanaged mode. There are two distinct ways in which this transition can happen:
– When an image mode VDisk is deleted. The MDisk that supported the VDisk becomes unmanaged.
– When an image mode VDisk is migrated in image mode to another MDisk. The MDisk that is being migrated from remains in image mode until all data has been moved off of it. It then transitions to unmanaged mode.
► Image mode to managed mode. This transition occurs when the image mode VDisk that is using the MDisk is migrated into managed mode.
► Managed mode to image mode is impossible. There is no operation that will take an MDisk directly from managed mode to image mode. You can achieve this transition by performing operations that convert the MDisk to unmanaged mode and then to image mode.

Figure 9-4 Various states of a VDisk (transitions among the Not in group, Image mode, Managed mode, and Migrating to image mode states: add to group, remove from group, create image mode vdisk, delete image mode vdisk, start migrate to image mode, complete migrate, and start migrate to managed mode)

Image mode VDisks have the special property that the last extent in the VDisk can be a partial extent. Managed mode disks do not have this property.

To perform any type of migration activity on an image mode VDisk, the image mode disk must first be converted into a managed mode disk. If the image mode disk has a partial last extent, this last extent in the image mode VDisk must be the first extent to be migrated. This migration is handled as a special case.



9.4.2 Migration tips

After this special migration operation has occurred, the VDisk becomes a managed mode VDisk and is treated in the same way as any other managed mode VDisk. If the image mode disk does not have a partial last extent, no special processing is performed; the image mode VDisk is simply changed into a managed mode VDisk and is treated in the same way as any other managed mode VDisk.

After data is migrated off of a partial extent, there is no way to migrate data back onto the partial extent.

You have several methods to migrate an image mode VDisk to a managed mode VDisk:
► If your image mode VDisk is in the same MDG as the MDisks to which you want to migrate the extents, you can perform one of these migrations:
– Migrate a single extent. You have to migrate the last extent of the image mode VDisk (number N-1).
– Migrate multiple extents.
– Migrate all of the in-use extents from an MDisk, or migrate extents off of an MDisk that is being deleted.
► If you have two MDGs, one MDG for the image mode VDisk and one MDG for the managed mode VDisks, you can migrate the VDisk from one MDG to the other.

The recommended method is to have one MDG for all of the image mode VDisks and other MDGs for the managed mode VDisks, and to use the migrate VDisk facility (see the sketch that follows). Be sure to verify that enough extents are available in the target MDG.
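On the CLI, this recommended MDG-to-MDG method is a single command per VDisk. The following sketch, which reuses names from the examples in this chapter, moves an image mode VDisk into the MDG that holds the managed mode VDisks and converts it to a striped, managed mode VDisk:

svctask migratevdisk -mdiskgrp MDG_DS45 -threads 4 -vdisk W2k8_Log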

9.5 Data migration for Windows using the SVC GUI

In this section, we move two LUNs from a Windows Server 2008 server that is currently attached to a DS4700 storage subsystem over to the SVC.

We then manage those LUNs with the SVC, migrate them from an image mode VDisk to a VDisk, migrate one of them back to an image mode VDisk, and finally, move it to another image mode VDisk on another storage subsystem, so that those LUNs can then be masked and mapped back to the host directly. This approach, of course, also works if we move the LUN back to the same storage subsystem.

Using this example will help you perform any one of the following activities in your environment:
► Move a Microsoft server's SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. Perform this activity first when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. We describe this step in detail in 9.5.2, "Adding the SVC between the host system and the DS4700" on page 690.
► Migrate your image mode VDisk to a VDisk while your host is still running and servicing your business application. You might perform this activity if you were removing a storage subsystem from your SAN environment, or if you wanted to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 9.5.4, "Migrating the VDisk from image mode to managed mode" on page 700.
► Migrate your VDisk to an image mode VDisk. You might perform this activity if you were removing the SVC from your SAN environment after a trial period. We describe this step in detail in 9.5.5, "Migrating the VDisk from managed mode to image mode" on page 702.
► Move an image mode VDisk to another image mode VDisk. Use this procedure to migrate data from one storage subsystem to another storage subsystem. We describe this step in detail in 9.6.6, "Migrate the VDisks to image mode VDisks" on page 728.

You can use these activities individually, or together, to migrate your server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.

9.5.1 Windows Server 2008 host system connected directly to the DS4700

In our example configuration, we use a Windows Server 2008 host, a DS4700, and a DS4500. The host has two LUNs (drives X and Y). The two LUNs are part of one DS4700 array. Before the migration, LUN masking is defined in the DS4700 to give the Windows Server 2008 host system access to the volumes from the DS4700 that are labeled X and Y (see Figure 9-6 on page 689).

Figure 9-5 shows the starting zoning scenario.

Figure 9-5 Starting zoning scenario

Figure 9-6 on page 689 shows the two LUNs (drives X and Y).

Figure 9-6 Drives X and Y

Figure 9-7 shows the properties of one of the DS4700 disks using the Subsystem Device Driver DSM (SDDDSM). The disk appears as an IBM 1814 Fast Multipath Device.

Figure 9-7 Disk properties


9.5.2 Adding the SVC between the host system and the DS4700

Figure 9-8 shows the new environment with the SVC and a second storage subsystem attached to the SAN. The second storage subsystem is not required to migrate to the SVC, but in the following examples, we show that it is possible to move data across storage subsystems without any host downtime.

Figure 9-8 Add SVC and second storage subsystem

To add the SVC between the host system and the DS4700 storage subsystem, perform the following steps:
1. Check that you have installed supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS4700. Mask the LUNs to the SVC, and remove the masking for the host. Figure 9-9 on page 691 shows the two LUNs with LUN IDs 12 and 13 remapped to the SVC ITSO-CLS3.

Figure 9-9 LUNs remapped

5. Log on to your SVC Console, open Work with Managed Disks → Managed Disks, select Discover Managed Disks in the drop-down list, and click Go (Figure 9-10).

Figure 9-10 Discover managed disks

Figure 9-11 on page 692 shows the two LUNs discovered as mdisk12 and mdisk13.

Figure 9-11 mdisk12 and mdisk13 discovered

6. Now, we create one new, empty MDG for each MDisk that we want to use to create an image mode VDisk later. Open Work with Managed Disks → Managed Disk Groups, select Create an MDisk Group in the drop-down list, and click Go. Figure 9-12 shows the MDisk Group creation.

Figure 9-12 MDG creation

7. Click Next.
8. Type the MDG name, MDG_img_1. Do not select any MDisk, as shown in Figure 9-13 on page 693, and then click Next.

Figure 9-13 MDG for image VDisk creation

9. Choose the extent size that you want to use, as shown in Figure 9-14, and then click Next. Remember that the extent size that you choose must be the same as the extent size of the MDG to which you will migrate your data later.

Figure 9-14 Extent size selection

10. Now, click Finish to complete the MDG creation. Figure 9-15 shows the completion window.

Figure 9-15 Completion window


11. Now, we create the new VDisks, named W2k8_Log and W2k8_Data, by using the two newly discovered MDisks in the MDG0 MDG.
12. Expand Work with Virtual Disks and click Virtual Disks. As shown in Figure 9-16, select Create an Image Mode VDisk from the drop-down list, and click Go.

Figure 9-16 Image VDisk creation

13. The Create Image Mode Virtual Disk window (Figure 9-17) opens. Click Next.

Figure 9-17 Create Image Mode Virtual Disk window

14. Type the name that you want to use for the VDisk, and select the attributes; in our case, the name is W2k8_Log. Click Next (Figure 9-18 on page 695).

Figure 9-18 Set the attributes for the image mode VDisk

15. Select the MDisk to use to create the image mode VDisk, and click Next (Figure 9-19).

Figure 9-19 Select the MDisk to use to create your image mode VDisk

16. Select an I/O Group, the preferred node, and the MDisk Group that you previously created. Optionally, you can let the system choose these settings (Figure 9-20 on page 696). Click Next.

Figure 9-20 Select I/O Group and MDisk Group

Multiple nodes: If you have more than two nodes in the cluster, select the I/O Group of the nodes to evenly share the load.

17. Review the summary, and click Finish to create the image mode VDisk. Figure 9-21 shows the image VDisk summary and attributes.

Figure 9-21 Verify Attributes window

18. Repeat steps 6 through 17 for each LUN that you want to migrate to the SVC.
19. In the Viewing Virtual Disks view, we see the two newly created VDisks, as shown in Figure 9-22 on page 697. In our example, they are named W2k8_Log and W2k8_Data.

Figure 9-22 Viewing Virtual Disks

20. In the Viewing Managed Disks window (Figure 9-23), we see that the two new MDisks are now shown as image mode disks. In our example, they are named mdisk12 and mdisk13.

Figure 9-23 Viewing Managed Disks

21. Map the VDisks again to the Windows Server 2008 host system.
22. Expand Work with Virtual Disks, and click Virtual Disks. Select the VDisks, select Map VDisks to a Host, and click Go (Figure 9-24).

Figure 9-24 Mapping VDisks to a host

23. Choose the host, and enter the Small Computer System Interface (SCSI) LUN IDs. Click OK (Figure 9-25 on page 698).

Figure 9-25 Creating Virtual Disk-to-Host Mappings window
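The GUI steps in this section can also be condensed into one CLI command per LUN, plus one for the host mapping. This sketch reuses the names from this example, although the exact MDisk-to-VDisk pairing and the host object name are assumptions:

svctask mkvdisk -mdiskgrp MDG_img_1 -iogrp io_grp0 -vtype image -mdisk mdisk12 -name W2k8_Log
svctask mkvdiskhostmap -host W2K8 -scsi 0 W2k8_Log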

9.5.3 Putting the migrated disks onto an online Windows Server 2008 host

Perform these steps:
1. Start the Windows Server 2008 host system again, and expand Computer Management to see that the new disk properties have changed to a 2145 Multi-Path Disk Device (Figure 9-26).

Figure 9-26 Disk Management

2. Figure 9-27 shows the Disk Management window.

Figure 9-27 Migrated disks are available

3. Select Start → All Programs → Subsystem Device Driver DSM → Subsystem Device Driver DSM to open the SDDDSM command-line utility (Figure 9-28).

Figure 9-28 Subsystem Device Driver DSM CLI


4. Enter the datapath query device command to check that all paths are available, as planned in your SAN environment (Example 9-1).

Example 9-1 datapath query device command

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#: 0 DEVICE NAME: Disk0 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000007
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select   Errors
0        Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL   180      0
1        Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL   0        0
2        Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL   145      0
3        Scsi Port2 Bus0/Disk0 Part0  OPEN    NORMAL   0        0

DEV#: 1 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000005
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select   Errors
0        Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL   25       0
1        Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL   164      0
2        Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL   0        0
3        Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL   136      0

C:\Program Files\IBM\SDDDSM>

9.5.4 Migrating the VDisk from image mode to managed mode

Perform these steps to migrate the VDisk from image mode to managed mode:
1. As shown in Figure 9-29 on page 701, select the VDisk. Then, select Migrate a VDisk from the drop-down list, and click Go.

Figure 9-29 Migrate a VDisk

2. Select the MDG to which to migrate the disk, and select the number of threads to use for this process, as shown in Figure 9-30. Click OK.

Figure 9-30 Migrating virtual disks

Extent sizes: If you migrate the VDisks to another MDG, the extent size of the source MDG and the extent size of the target MDG must be equal.

3. The Viewing VDisk Migration Progress window opens and enables you to monitor the migration progress (Figure 9-31).

Figure 9-31 Viewing VDisk Migration Progress window

4. Click the percentage to show more detailed information about this VDisk. During the migration process, the VDisks are still in the old MDG, and your server can still access the data. After the migration is complete, the VDisk is in the new MDG_DS45 MDG and is a striped VDisk. Figure 9-32 shows the migrated VDisk in the new MDG.

Figure 9-32 VDisk W2k8_Log in the new MDG

9.5.5 Migrating the VDisk from managed mode to image mode

You can migrate the VDisk from managed mode to image mode. In this example, we migrate a managed VDisk to an image mode VDisk. Follow these steps:
1. Create an empty MDG, following the same procedure as shown previously, one time for each VDisk that you want to migrate to image mode. These MDGs will host the target MDisks that we will map to our server at the end of the migration.
2. Select the VDisk that you want to migrate, and select Migrate to an Image Mode VDisk from the list (Figure 9-33 on page 703). Click Go.

Figure 9-33 Migrate to an Image Mode VDisk

3. The Introduction window opens. Click Next (Figure 9-34).

Figure 9-34 Introduction to migrating to an image mode VDisk

4. Select the source VDisk copy, and click Next (Figure 9-35).

Figure 9-35 Migrating to an image mode VDisk

5. Select a target MDisk (Figure 9-36). Click Next.

Figure 9-36 Select the Target MDisk window

6. Select an MDG (Figure 9-37). Click Next.

Figure 9-37 Selecting the target MDG

Extent sizes: If you migrate the VDisks to another MDG, the extent size of the source MDG must equal the extent size of the target MDG.

7. Select the number of threads (1 to 4) to use for this migration process. The higher the number, the higher the priority (Figure 9-38). Click Next.

Figure 9-38 Selecting the number of threads

8. Verify the migration attributes (Figure 9-39), and click Finish.

Figure 9-39 Verify Migration Attributes window

9. The progress window opens.
10. Repeat these steps for every VDisk that you want to migrate to an image mode VDisk.
11. Free the data from the SVC by using the procedure that is described in 9.5.7, "Free the data from the SVC" on page 709.

9.5.6 Migrating the VDisk from image mode to image mode

Use the process of migrating a VDisk from image mode to image mode to move image mode VDisks from one storage subsystem to another storage subsystem without going through fully managed mode. The data stays available for the applications during this migration. This procedure is nearly the same procedure as the procedure in 9.5.5, "Migrating the VDisk from managed mode to image mode" on page 702.

In this section, we describe how to migrate an image mode VDisk to another image mode VDisk. In our example, we migrate the W2k8_Log VDisk to another disk subsystem as an image mode VDisk. The second storage subsystem is a DS4500; a new LUN is configured on the storage subsystem and mapped to the SVC cluster. The LUN is available in the SVC as unmanaged mdisk11.

Figure 9-40 shows mdisk11.


Figure 9-40 Unmanaged disk on a DS4500 storage subsystem

To migrate the image mode VDisk to another image mode VDisk, perform the following steps:
1. Check the VDisk to migrate, and select Migrate to an Image Mode VDisk from the list (Figure 9-41). Click Go.

Figure 9-41 Migrate to an image mode VDisk

2. The Introduction window opens, as shown in Figure 9-42 on page 707. Click Next.

Figure 9-42 Migrating data to an image mode VDisk

3. Select the VDisk source copy, and click Next (Figure 9-43).

Figure 9-43 Select copy

4. Select a target MDisk, as shown in Figure 9-44 on page 708. Click Next.

Figure 9-44 Select Target MDisk

5. Select a target MDG for the MDisk to join, as shown in Figure 9-45. Click Next.

Figure 9-45 Select MDisk Group window

6. Select the number of threads (1 to 4) to devote to this process, as shown in Figure 9-46. The higher the number, the higher the priority. Click Next.

Figure 9-46 Select the Threads window

7. Verify the migration attributes, as shown in Figure 9-47 on page 709, and click Finish.

Figure 9-47 Verify Migration Attributes window

8. Check the progress window (Figure 9-48), and click Close.

Figure 9-48 Progress window

9. Repeat these steps for all of the image mode VDisks that you want to migrate.
10. If you want to free the data from the SVC, use the procedure that is described in 9.5.7, "Free the data from the SVC" on page 709.

9.5.7 Free the data from the SVC

If your data resides in an image mode VDisk inside the SVC, you can free the data from the SVC. The following sections show how to migrate data to an image mode VDisk. Depending on your environment, you might have to follow these procedures before freeing the data from the SVC:
► 9.5.5, "Migrating the VDisk from managed mode to image mode" on page 702
► 9.5.6, "Migrating the VDisk from image mode to image mode" on page 705

To free the data from the SVC, we use the delete VDisk command.

If the command succeeds on an image mode VDisk, the underlying back-end storage controller will be consistent with the data that a host might previously have read from the image mode VDisk; that is, all fast write data will have been flushed to the underlying LUN. Deleting an image mode VDisk causes the MDisk that is associated with the VDisk to be ejected from the MDG. The mode of the MDisk is returned to unmanaged.
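On the CLI, this deletion is performed with the svctask rmvdisk command. The following sketch uses a VDisk name from this example; the command fails if cached write data cannot be destaged, and the -force flag overrides that check at the cost of losing the cached data:

svctask rmvdisk W2k8_Log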

Note: This situation only applies to image mode VDisks. If you delete a normal VDisk, all of the data will also be deleted.

As shown in Example 9-1 on page 700, the SAN disks currently reside on the SVC 2145 device. Check that you have installed the supported device drivers on your host system.

To switch back to the storage subsystem, perform the following steps:
1. Shut down your host system.
2. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN masking, and add the host to the masking.
3. Open the Viewing Virtual Disk-to-Host Mappings window in the SVC Console, mark your host, select Delete a Mapping, and click Go (Figure 9-49).

Figure 9-49 Delete a mapping

4. Confirm the task by clicking Delete (Figure 9-50).

Figure 9-50 Delete a mapping

5. The VDisk is removed from the SVC.
6. Repeat steps 3 and 4 for every disk that you want to free from the SVC.
7. Power on your host system.


9.5.8 Put the freed disks online on Windows Server 2008

Put the disks, which have been freed from the SVC, online on Windows Server 2008:
1. Using your DS4500 Storage Manager interface, remap the two LUNs that were MDisks back to your Windows Server 2008 server.
2. Open your Computer Management window. Figure 9-51 shows that the LUNs are now back to an IBM 1814 type.

Figure 9-51 IBM 1814 type

3. Open your Disk Management window, and you will see that the disks have appeared (Figure 9-52). You might need to reactivate each disk by using the right-click option on it.

Figure 9-52 Windows Server 2008 Disk Management

9.6 Migrating Linux SAN disks to SVC disks

In this section, we move two LUNs from a Linux server that is currently booting directly off of our DS4000 storage subsystem over to the SVC. We then manage those LUNs with the SVC, move them between other managed disks, and then, finally, move them back to image mode disks, so that those LUNs can be masked and mapped back to the Linux server directly.

Using this example can help you to perform any of the following activities in your environment:
► Move a Linux server's SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. Perform this activity first when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. We describe this step in detail in 9.6.2, "Preparing your SVC to virtualize disks" on page 715.
► Move data between storage subsystems while your Linux server is still running and servicing your business application. You might perform this activity if you are removing a storage subsystem from your SAN environment, or if you want to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking availability, performance, and redundancy into account. We describe this step in 9.6.4, "Migrate the image mode VDisks to managed MDisks" on page 722.
► Move your Linux server's LUNs back to image mode VDisks so that they can be remapped and remasked directly back to the Linux server. We describe this step in 9.6.5, "Preparing to migrate from the SVC" on page 725.

You can use these three activities individually, or together, to migrate your Linux server's LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. If you do not use all three activities, you can introduce or remove the SVC from your environment. The only downtime required for these activities is the time that it takes to remask and remap the LUNs between the storage subsystems and your SVC.

In Figure 9-53, we show our Linux environment.

Figure 9-53 Linux SAN environment (zoning for migration scenarios: the LINUX host and the IBM or OEM storage subsystem are connected through the SAN in the Green zone)

Figure 9-53 shows our Linux server connected to our SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem:
► The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem. The operating system identifies it as /dev/mapper/VolGroup00-LogVol00.

SCSI LUN ID 0: To successfully boot a host off of the SAN, you must have assigned the LUN as SCSI LUN ID 0.

Linux sees this LUN as our /dev/sda disk.
► We have also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB in size, and it is mounted in the /data folder on the /dev/dm-2 disk.

Example 9-2 shows our disks that are directly attached to the Linux host.

Example 9-2 Directly attached disks

[root@Palau data]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      10093752   1971344   7601400  21% /
/dev/sda1               101086     12054     83813  13% /boot
tmpfs                  1033496         0   1033496   0% /dev/shm
/dev/dm-2              5160576    158160   4740272   4% /data
[root@Palau data]#

Our Linux server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 9-53 on page 713:
► The Linux server's host bus adapter (HBA) cards are zoned so that they are in the Green zone with our storage subsystem.
► The two LUNs that have been defined on the storage subsystem, using LUN masking, are directly available to our Linux server.

9.6.1 Connecting the SVC to your SAN fabric

This section describes the basic steps that you take to introduce the SVC into your SAN environment. While this section only summarizes these activities, you can introduce the SVC into your SAN environment without any downtime to any host or application that also uses your storage area network.

If you have an SVC that is already connected, skip to 9.6.2, "Preparing your SVC to virtualize disks" on page 715.

Connecting the SVC to your SAN fabric requires that you perform these tasks:
► Assemble your SVC components (nodes, uninterruptible power supply unit, and Master Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN. We describe these tasks in much greater detail in Chapter 3, "Planning and configuration" on page 65.
► Create and configure your SVC cluster.
► Create these additional zones:
– An SVC node zone (our Black zone in Figure 9-54 on page 715). This zone contains only the ports (or worldwide names (WWNs)) for each of the SVC nodes in your cluster. Our SVC is made up of a two-node cluster, where each node has four ports. So, our Black zone has eight defined WWNs.
– A storage zone (our Red zone). This zone also has all of the ports/WWNs from the SVC node zone, as well as the ports/WWNs for all of the storage subsystems that the SVC will virtualize.
– A host zone (our Blue zone). This zone contains the ports/WWNs for each host that will access the VDisks, together with the ports that are defined in the SVC node zone.

Important: Do not put your storage subsystems in the host (Blue) zone. That is an unsupported configuration, and putting your storage subsystems in the host zone can lead to data loss.

We set up our environment in this manner. Figure 9-54 on page 715 shows our environment.

Figure 9-54 SAN environment with SVC attached (zoning per migration scenarios: the LINUX host, the IBM or OEM storage subsystem, and the SVC nodes of I/O grp0 are connected through the SAN, with Green, Red, Blue, and Black zones)

9.6.2 Preparing your SVC to virtualize disks

This section describes the preparation tasks that we performed before taking our Linux server offline. These activities are all nondisruptive. They do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a managed disk group
When we move the two Linux LUNs to the SVC, we use them initially in image mode. Therefore, we need a Managed Disk Group (MDG) to hold those disks.

First, we create an empty MDG for each of the disks, using the commands in Example 9-3. We name one MDG Palau-MDG0 to hold our boot LUN, and we name the second MDG Palau-MDG1 to hold the data LUN.

Example 9-3 Create an empty MDG

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Data -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name       status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
6  Palau_SANB online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
7  Palau_Data online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS1:admin>

Creating your host definition
If you have prepared your zones correctly, the SVC can see the Linux server's HBA adapters on the fabric (our host only had one HBA).

The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs that the SVC can see on the SAN fabric but that have not yet been allocated to a host. Example 9-4 shows the output of the nodes that it found on our SAN fabric. (If a port does not show up, it indicates a zone configuration problem.)

Example 9-4 Display HBA port candidates

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

If you do not know the WWN of your Linux server, you can look at which WWNs are currently configured on your storage subsystem for this host. Figure 9-55 shows our configured ports on an IBM DS4700 storage subsystem.

Figure 9-55 Display port WWNs


After verifying that the SVC can see our host (linux2), we create the host entry and assign the WWNs to this entry. Example 9-5 shows these commands.

Example 9-5 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
IBM_2145:ITSO-CLS1:admin>

Verify that we can see our storage subsystem

If we set up our zoning correctly, the SVC can see the storage subsystem with the svcinfo lscontroller command (Example 9-6).

Example 9-6 Discover storage controller

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814           FAStT
IBM_2145:ITSO-CLS1:admin>

You can rename the storage subsystem controller to a more meaningful name with the svctask chcontroller -name command. If multiple storage subsystems are connected to your SAN fabric, renaming them makes it considerably easier to identify them.
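As a brief illustration, a rename of the DS4700 controller (id 1 in Example 9-6) might look like the following sketch; the new name is our own choice:

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO_DS4700 1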

Get the disk serial numbers

To help avoid the risk of creating the wrong VDisks from all of the available, unmanaged MDisks (in case the SVC sees many available, unmanaged MDisks), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode VDisks.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are shown in Figure 9-56 on page 718 and in Figure 9-57 on page 718.

Figure 9-56 Obtaining the disk serial number

Figure 9-57 Obtaining the disk serial number


Before we move the LUNs to the SVC, we must configure the host multipath configuration for the SVC. Edit your multipath.conf file and restart the multipath daemon, as shown in Example 9-7, adding the content of Example 9-8 to the file.

Example 9-7 Edit the multipath.conf file

[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon: [ OK ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon: [ OK ]
[root@Palau ~]#

Example 9-8 Data to add to the multipath.conf file

# SVC
device {
        vendor "IBM"
        product "2145CF8"
        path_grouping_policy group_by_serial
}
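After the daemon restarts, you can verify that the multipath layer picked up the configuration. This check is our addition and is not part of the original example; it lists the multipath topology that the device-mapper currently knows about:

# List all multipath devices and their paths (SVC VDisks appear here once mapped)
[root@Palau ~]# multipath -ll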

We are now ready to move the ownership of the disks to the SVC, to discover them as MDisks, and to give them back to the host as VDisks.

9.6.3 Move the LUNs to the SVC

In this step, we move the LUNs that are assigned to the Linux server and reassign them to the SVC.

Our Linux server has two LUNs: one LUN is for our boot disk and operating system file systems, and the other LUN holds our application and data files. Moving both LUNs at one time requires shutting down the host.

If we only wanted to move the LUN that holds our application and data files, we would not have to reboot the host. The only requirement is that we unmount the file system and vary off the Volume Group to ensure data integrity during the reassignment.

The following steps are required, because we intend to move both LUNs at the same time:

1. Confirm that the multipath.conf file is configured for the SVC.

2. Shut down the host.

   If you are only moving the LUNs that contain the application and data, follow this procedure instead:

   a. Stop the applications that are using the LUNs.

   b. Unmount those file systems with the umount MOUNT_POINT command.

   c. If the file systems are a logical volume manager (LVM) volume, deactivate that Volume Group with the vgchange -a n VOLUMEGROUP_NAME command.

   d. If possible, also unload your HBA driver using the rmmod DRIVER_MODULE command. This command removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is also possible to tell the Linux SCSI subsystem to rescan for new disks without unloading the HBA driver; a minimal sketch follows this list item.
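The following sketch shows the sysfs rescan mechanism that is commonly used on Linux 2.6 kernels; the host number is an assumption and varies by system:

# List the FC HBA hosts that the kernel knows about
ls /sys/class/scsi_host
# Trigger a rescan of all channels, targets, and LUNs on host0 (adjust the host number)
echo "- - -" > /sys/class/scsi_host/host0/scan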



3. Using Storage Manager (our storage subsystem management tool), we unmap and unmask the disks from the Linux server and remap and remask the disks to the SVC.

   LUN IDs: Even though we are using boot from SAN, you can map the boot disk to the SVC with any LUN number. It only has to be LUN 0 later, when we configure the mapping in the SVC to the host.

4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks are discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 9-9 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.

Example 9-9 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name    status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
26 mdisk26 online unmanaged                             12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 mdisk27 online unmanaged                             5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Important: Match your discovered MDisk serial numbers (the UID column in the svcinfo lsmdisk display) with the serial numbers that you recorded earlier (in Figure 9-56 and Figure 9-57 on page 718).

5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-10).

Example 9-10 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name      status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
26 md_palauS online unmanaged                             12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online unmanaged                             5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>


6. We create our image mode VDisks with the svctask mkvdisk command and the -vtype image option (Example 9-11). This command virtualizes the disks in the exact same layout as though they were not virtualized.

Example 9-11 Create the image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_SANB -iogrp 0 -vtype image -mdisk md_palauS -name palau_SANB
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Data -iogrp 0 -vtype image -mdisk md_palauD -name palau_Data
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name      status mode  mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
26 md_palauS online image 6            Palau_SANB     12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image 7            Palau_Data     5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name       IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type  FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count
29 palau_SANB 0           io_grp0       online 4            Palau_SANB     12.0GB   image                             60050768018301BF280000000000002B 0            1
30 palau_Data 0           io_grp0       online 4            Palau_Data     5.0GB    image                             60050768018301BF280000000000002C 0            1

7. Map the new image mode VDisks to the host (Example 9-12).

   Important: Make sure that you map the boot VDisk with SCSI ID 0 to your host. The host must be able to identify the boot volume during the boot process.

Example 9-12 Map the VDisks to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
id name  SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
0  Palau 0       29       palau_SANB 210000E08B89C1CD 60050768018301BF280000000000002B
0  Palau 1       30       palau_Data 210000E08B89C1CD 60050768018301BF280000000000002C
IBM_2145:ITSO-CLS1:admin>

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image VDisks onto other VDisks. You do not need to wait until the FlashCopy process has completed before starting your application.
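As a hedged illustration of that option (the target VDisk palau_SANB_copy is hypothetical and must already exist with the same capacity as the source), a FlashCopy mapping can be created and started like this:

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source palau_SANB -target palau_SANB_copy -name palau_fcmap0
IBM_2145:ITSO-CLS1:admin>svctask startfcmap -prep palau_fcmap0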

8. Power on your host server and enter your Fibre Channel (FC) HBA adapter BIOS before booting the operating system. Make sure that you change the boot configuration so that it points to the SVC. In our example, we performed the following steps on a QLogic HBA:

   a. Press Ctrl+Q to enter the HBA BIOS.
   b. Open Configuration Settings.
   c. Open Selectable Boot Settings.
   d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.
   e. Exit the menu and save your changes.

9. Boot up your Linux operating system.

   If you only moved the application LUN to the SVC and left your Linux server running, you only need to follow these steps to see the new VDisk:

   a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new VDisks (a minimal sketch appears in 9.6.3).

   b. Check your syslog, and verify that the kernel found the new VDisks. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.

   c. If your application and data are on an LVM volume, run the vgscan command to rediscover the Volume Group, and then run the vgchange -a y VOLUME_GROUP command to activate the Volume Group.

10. Mount your file systems with the mount /MOUNT_POINT command (Example 9-13). The df output shows us that all of the disks are available again.

Example 9-13 Mount data disk

[root@Palau data]# mount /dev/dm-2 /data
[root@Palau data]# df
Filesystem            1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       10093752 1938056   7634688  21% /
/dev/sda1                101086   12054     83813  13% /boot
tmpfs                   1033496       0   1033496   0% /dev/shm
/dev/dm-2               5160576  158160   4740272   4% /data
[root@Palau data]#

11. You are now ready to start your application.
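If you moved only the application LUN, the reattachment commands from steps 9 and 10 can be run as one short sequence. The following sketch assumes a QLogic driver module and a hypothetical Volume Group name; substitute the names from your own environment:

# Reload the HBA driver so the kernel rediscovers the SCSI disks (QLogic module assumed)
[root@Palau ~]# modprobe qla2xxx
# Rediscover and activate the Volume Group that holds the data file system (VG name assumed)
[root@Palau ~]# vgscan
[root@Palau ~]# vgchange -a y VolGroup01
# Remount the data file system
[root@Palau ~]# mount /dev/dm-2 /data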

9.6.4 Migrate the image mode VDisks to managed MDisks

While the Linux server is still running, and while our file systems are in use, we migrate the image mode VDisks onto striped VDisks, with the extents being spread over the other three MDisks. In our example, the three new LUNs are located on a DS4500 storage subsystem, so we also move to another storage subsystem in this example.


Preparing MDisks for striped mode VDisks

From our second storage subsystem, we have performed these tasks:
► Created and allocated three new LUNs to the SVC
► Discovered them as MDisks
► Renamed these LUNs to more meaningful names
► Created a new MDG
► Placed all of these MDisks into this MDG

You can see the output of our commands in Example 9-14.

Example 9-14 Create a new MDG

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512
MDisk Group, id [8], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name      status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 md_palauS online image     6            Palau_SANB     12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image     7            Palau_Data     5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 mdisk28   online unmanaged                             8.0GB    0000000000000010 DS4500          600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 mdisk29   online unmanaged                             8.0GB    0000000000000011 DS4500          600a0b80001744310000010f48776bae00000000000000000000000000000000
30 mdisk30   online unmanaged                             8.0GB    0000000000000012 DS4500          600a0b8000174233000000bb487778d900000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name      status mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
26 md_palauS online image   6            Palau_SANB     12.0GB   0000000000000008 DS4700          600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image   7            Palau_Data     5.0GB    0000000000000009 DS4700          600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8            MD_palauVD     8.0GB    0000000000000010 DS4500          600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8            MD_palauVD     8.0GB    0000000000000011 DS4500          600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8            MD_palauVD     8.0GB    0000000000000012 DS4500          600a0b8000174233000000bb487778d900000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Migrate the VDisks

We are now ready to migrate the image mode VDisks onto striped VDisks in the MD_palauVD MDG with the svctask migratevdisk command (Example 9-15). While the migration is running, our Linux server continues to run.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 9-15. Listing the MDG with the svcinfo lsmdiskgrp command shows that the free capacity on the old MDGs slowly increases as extents are moved to the new MDG.

Example 9-15 Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>
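Because svcinfo lsmigrate reports only a point-in-time snapshot, you can poll it from a management workstation. The following loop is a hedged sketch that assumes SSH access to the cluster with the admin user:

# Poll the migration progress every 30 seconds from a management host
while true; do
  ssh admin@ITSO-CLS1 svcinfo lsmigrate
  sleep 30
done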

After this task has completed, Example 9-16 shows that the VDisks are now spread over three MDisks.

Example 9-16 Migration complete

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>

Our migration to striped VDisks on another storage subsystem (DS4500) is now complete. The original MDisks (md_palauS and md_palauD) can now be removed from the SVC, together with their now-empty MDGs (Palau_SANB and Palau_Data), and these LUNs can be removed from the storage subsystem.

If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can remove it from our SAN fabric.
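A hedged sketch of that cleanup, reusing the names from our examples, follows; run it only after the migration completes and the old MDisks show no used extents:

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk md_palauS Palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk md_palauD Palau_Data
IBM_2145:ITSO-CLS1:admin>svctask rmmdiskgrp Palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmmdiskgrp Palau_Data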

9.6.5 Preparing to migrate from the SVC

Before we move the Linux server's LUNs from being accessed by the SVC as VDisks to being directly accessed from the storage subsystem, we must convert the VDisks into image mode VDisks.

You might want to perform this activity for any one of these reasons:
► You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
► You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no longer need that host connected to the SVC.
► You want to ship a host, and its data, that is currently connected to the SVC to a site where there is no SVC.
► Changes to your environment no longer require this host to use the SVC.

There are also other preparation activities that we can perform before we have to shut down the host and reconfigure the LUN masking and mapping. This section covers those activities.

If you are moving the data to a new storage subsystem, it is assumed that the storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, which is shown in Figure 9-58 on page 726.


Figure 9-58 Environment with SVC

Making fabric zone changes

The first step is to set up the SAN configuration so that all of the zones are created. You must add the new storage subsystem to the Red zone so that the SVC can talk to it directly.

We also need a Green zone for our host to use when we are ready for it to directly access the disk after it has been removed from the SVC.

It is assumed that you have created the necessary zones.

After your zone configuration is set up correctly, the SVC sees the new storage subsystem's controller with the svcinfo lscontroller command, as shown in Figure 9-10 on page 691. It is also a good idea to rename the new storage subsystem's controller to a more useful name, which you can do with the svctask chcontroller -name command.

Creating new LUNs

On our storage subsystem, we created two LUNs and masked the LUNs so that the SVC can see them. Eventually, we will give these two LUNs directly to the host, removing the VDisks that the host currently has. To check that the SVC can use these two LUNs, issue the svctask detectmdisk command, as shown in Example 9-17.

Example 9-17 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name      status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
0  mdisk0    online managed
600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed   8            MD_palauVD     8.0GB    0000000000000010 DS4500          600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed   8            MD_palauVD     8.0GB    0000000000000011 DS4500          600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed   8            MD_palauVD     8.0GB    0000000000000012 DS4500          600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31   online unmanaged                             6.0GB    0000000000000013 DS4500          600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32   online unmanaged                             12.5GB   0000000000000014 DS4500          600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. Also, we create the MDG that will hold our new MDisks, as shown in Example 9-18.

Example 9-18 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name         status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
8  MD_palauVD   online 3           2           24.0GB   512         7.0GB         17.00GB          17.00GB       17.00GB       70             0
9  MDG_Palauivd online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS1:admin>
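Example 9-18 shows only the rename of mdisk32. Based on the MDisk names that appear in Example 9-19, mdisk31 was presumably renamed in the same way, as sketched here:

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd1 mdisk31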

Our SVC environment is now ready for the VDisk migration to image mode VDisks.


9.6.6 Migrate the VDisks to image mode VDisks

While our Linux server is still running, we migrate the managed VDisks onto the new MDisks by using image mode VDisks. The command to perform this action is the svctask migratetoimage command, which is shown in Example 9-19.

Example 9-19 Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_SANB -mdisk mdpalau_ivd -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name         status mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
28 palau-md1    online managed 8            MD_palauVD     8.0GB    0000000000000010 DS4500          600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2    online managed 8            MD_palauVD     8.0GB    0000000000000011 DS4500          600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3    online managed 8            MD_palauVD     8.0GB    0000000000000012 DS4500          600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image   8            MD_palauVD     6.0GB    0000000000000013 DS4500          600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd  online image   8            MD_palauVD     12.5GB   0000000000000014 DS4500          600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

During the migration, our Linux server is unaware that its data is being physically moved between storage subsystems.

After the migration has completed, the image mode VDisks are ready to be removed from the Linux server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's tool.


9.6.7 Removing the LUNs from the SVC

The next step requires downtime on the Linux server, because we remap and remask the disks so that the host sees them directly through the Green zone, as shown in Figure 9-58 on page 726.

Our Linux server has two LUNs: one LUN is our boot disk and operating system file systems, and the other LUN holds our application and data files. Moving both LUNs at one time requires shutting down the host.

If we only want to move the LUN that holds our application and data files, we can move that LUN without rebooting the host. The only requirement is that we unmount the file system and vary off the Volume Group to ensure data integrity during the reassignment.

Before you start: Moving LUNs to another storage subsystem might need an additional entry in the multipath.conf file. Check with the storage subsystem vendor to see which content you must add to the file. You might be able to install and modify the file ahead of time.

Because we intend to move both LUNs at the same time, the following steps are required:

1. Confirm that your operating system is configured for the new storage.

2. Shut down the host.

   If you are only moving the LUNs that contain the application and data, you can follow this procedure instead:

   a. Stop the applications that are using the LUNs.
   b. Unmount those file systems with the umount MOUNT_POINT command.
   c. If the file systems are an LVM volume, deactivate that Volume Group with the vgchange -a n VOLUMEGROUP_NAME command.
   d. If you can, unload your HBA driver using the rmmod DRIVER_MODULE command. This command removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is also possible to tell the Linux SCSI subsystem to rescan for new disks without unloading the HBA driver (a minimal sketch appears in 9.6.3).

3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-20). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the Linux server.

Example 9-20 Remove the VDisks from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This step makes them unmanaged, as seen in Example 9-21 on page 730.


Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the VDisk that is being removed. If there is still uncommitted cached data, the command fails with the following error message:

CMMVC6212E The command failed because data in the cache has not been committed to disk

You must wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk.

The SVC automatically destages uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to destage, and how busy the I/O subsystem is, determine how long this command takes to complete.

You can check whether the VDisk has uncommitted data in the cache by using the svcinfo lsvdisk command and checking the fast_write_state attribute. This attribute has the following meanings:

empty      No modified data exists in the cache.
not_empty  Modified data might exist in the cache.
corrupt    Modified data might have existed in the cache, but any data has been lost.
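A minimal sketch of that check follows; the output is abbreviated to the relevant attribute:

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk palau_Data
...
fast_write_state empty
...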

Example 9-21 Remove the VDisks from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name         status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
31 mdpalau_ivd1 online unmanaged                             6.0GB    0000000000000013 DS4500          600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd  online unmanaged                             12.5GB   0000000000000014 DS4500          600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC, and map and mask them directly to the Linux server.

   Important: If one of the disks is used to boot your Linux server, you must make sure that it is presented back to the host as SCSI ID 0, so that the FC adapter BIOS finds that disk during its initialization.

6. Power on your host server and enter your FC HBA BIOS before booting the OS. Make sure that you change the boot configuration so that it points to your storage subsystem. In our example, we performed the following steps on a QLogic HBA:

   a. Press Ctrl+Q to enter the HBA BIOS.
   b. Open Configuration Settings.
   c. Open Selectable Boot Settings.
   d. Change the entry from the SVC to your storage subsystem LUN with SCSI ID 0.
   e. Exit the menu and save your changes.

Important: This is the last step that you can perform and still safely back out of everything that you have done so far.

Up to this point, you can reverse all of the actions that you have performed to get the server back online without data loss:

► Remap and remask the LUNs back to the SVC.
► Run the svctask detectmdisk command to rediscover the MDisks.
► Recreate the VDisks with the svctask mkvdisk command.
► Remap the VDisks back to the server with the svctask mkvdiskhostmap command (a sketch of this back-out sequence follows).

After you start the next step, you might not be able to turn back without the risk of data loss.
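A hedged sketch of that back-out, reusing the commands and names from the earlier examples, follows:

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MD_palauVD -iogrp 0 -vtype image -mdisk mdpalau_ivd -name palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MD_palauVD -iogrp 0 -vtype image -mdisk mdpalau_ivd1 -name palau_Data
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data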

7. We are now ready to restart the Linux server.

   If all of the zoning and LUN masking and mapping were done successfully, our Linux server boots as though nothing has happened.

   If you only moved the application LUN away from the SVC and left your Linux server running, you must follow these steps to see the new disk:

   a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new disks (a minimal sketch appears in 9.6.3).

   b. Check your syslog and verify that the kernel found the new disks. On Red Hat Enterprise Linux, the syslog is stored in the /var/log/messages file.

   c. If your application and data are on an LVM volume, run the vgscan command to rediscover the Volume Group, and then run the vgchange -a y VOLUME_GROUP command to activate the Volume Group.

8. Mount your file systems with the mount /MOUNT_POINT command (Example 9-22). The df output shows us that all of the disks are available again.

Example 9-22 File system after migration

[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem            1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       10093752 1938124   7634620  21% /
/dev/sda1                101086   12054     83813  13% /boot
tmpfs                   1033496       0   1033496   0% /dev/shm
/dev/dm-2               5160576  158160   4740272   4% /data
[root@Palau ~]#

9. You are ready to start your application.

10. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline, and then they are automatically removed when the SVC determines that there are no VDisks associated with them.


9.7 Migrating ESX SAN disks to SVC disks

In this section, we move the two LUNs from our VMware ESX server to the SVC. The ESX operating system is installed locally on the host, but the two SAN disks are connected, and the virtual machines are stored there.

We then manage those LUNs with the SVC, move them between other managed disks, and then finally move them back to image mode disks, so that those LUNs can be masked and mapped back to the VMware ESX server directly.

This example can help you perform any one of the following activities in your environment:

► Move your ESX server's data LUNs (the VMware VMFS file systems where your virtual machines might be stored), which are directly accessed from a storage subsystem, to virtualized disks under the control of the SVC.

► Move LUNs between storage subsystems while your VMware virtual machines are still running. You might perform this activity to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 9.7.4, "Migrating the image mode VDisks" on page 742.

► Move your VMware ESX server's LUNs back to image mode VDisks so that they can be remapped and remasked directly back to the server. This step starts in 9.7.5, "Preparing to migrate from the SVC" on page 745.

You can use these activities individually, or together, to migrate your VMware ESX server's LUNs from one storage subsystem to another storage subsystem, using the SVC as your migration tool. If you do not use all three activities, you can simply introduce the SVC into your environment, or move the data between your storage subsystems.

The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.

In Figure 9-59 on page 733, we show our starting SAN environment.


Figure 9-59 ESX environment before migration

Figure 9-59 shows our ESX server connected to the SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem.

Our ESX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 9-59:

► The ESX server's HBA cards are zoned so that they are in the Green zone with our storage subsystem.

► The two LUNs that have been defined on the storage subsystem and that use LUN masking are directly available to our ESX server.

9.7.1 Connecting the SVC to your SAN fabric

This section describes the steps to take to introduce the SVC into your SAN environment. Although we only summarize these activities here, you can introduce the SVC into your SAN environment without any downtime to any host or application that also uses your storage area network.

If you have an SVC already connected, skip to the instructions that are given in 9.7.2, "Preparing your SVC to virtualize disks" on page 735.


Be extremely careful when connecting the SVC to your storage area network, because doing so requires you to connect cables to your SAN switches and to alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions.

Connecting the SVC to your SAN fabric requires you to perform these tasks:

► Assemble your SVC components (nodes, uninterruptible power supply unit, and Master Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your storage area network.

► Create and configure your SVC cluster.

► Create these additional zones:

– An SVC node zone (the Black zone in our picture in Example 9-45 on page 757). This zone contains only the ports (or WWNs) for each of the SVC nodes in your cluster. Our SVC is made up of a two-node cluster where each node has four ports, so our Black zone has eight WWNs defined.

– A storage zone (our Red zone). This zone also has all of the ports or WWNs from the SVC node zone, as well as the ports or WWNs for all of the storage subsystems that the SVC will virtualize.

– A host zone (our Blue zone). This zone contains the ports or WWNs for each host that will access VDisks, together with the ports that are defined in the SVC node zone.

Important: Do not put your storage subsystems in the host (Blue) zone. This is an unsupported configuration and can lead to data loss.

Figure 9-60 on page 735 shows the environment that we set up.


Figure 9-60 SAN environment with SVC attached

9.7.2 Preparing your SVC to virtualize disks

This section describes the preparatory tasks that we perform before taking our ESX server or virtual machines offline. These tasks are all nondisruptive activities, which do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a managed disk group

When we move the two ESX LUNs to the SVC, they are first used in image mode, and therefore, we need an MDG to hold those disks.

We create an empty MDG for the disks by using the command in Example 9-23. We name it MDG_Nile_VM; it holds the LUNs on which our virtual machines are stored.

Example 9-23 Creating an empty MDG

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created

Creating the host definition

If you prepared the zones correctly, the SVC can see the ESX server's HBA adapters on the fabric (our host only had one HBA).


First, we get <strong>the</strong> WWN for our ESX server’s HBA, because we have many hosts connected to<br />

our <strong>SAN</strong> fabric and in <strong>the</strong> Blue zone. We want to make sure that we have <strong>the</strong> correct WWN to<br />

reduce our ESX server’s downtime.<br />

Log in to your VMware management console as root, navigate to Configuration, and <strong>the</strong>n,<br />

select <strong>Storage</strong> Adapter. The <strong>Storage</strong> Adapters are shown on <strong>the</strong> right side of this window<br />

and display all of <strong>the</strong> necessary information. Figure 9-61 shows our WWNs, which are<br />

210000E08B89B8C0 and 210000E08B892BCD.<br />

Figure 9-61 Obtain your WWN using <strong>the</strong> VMware Management Console<br />

Use the svcinfo lshbaportcandidate command on the SVC to list all of the WWNs that the SVC can see on the SAN fabric and that have not yet been allocated to a host. Example 9-24 shows the output of the nodes that it found on our SAN fabric. (If a port does not show up, it indicates a zone configuration problem.)

Example 9-24 Add the host to the SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

After verifying that the SVC can see our host, we create the host entry and assign the WWNs to this entry. Example 9-25 shows these commands.

Example 9-25 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>

Verify that you can see your storage subsystem

If our zoning has been performed correctly, the SVC can also see the storage subsystem with the svcinfo lscontroller command (Example 9-26).

Example 9-26 Available storage controllers

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814           FAStT

Get your disk serial numbers

To help avoid the risk of creating the wrong VDisks from all of the available unmanaged MDisks (in case the SVC sees many available unmanaged MDisks), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode VDisks.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive, and choose Properties. Figure 9-62 on page 738 and Figure 9-63 on page 738 show our serial numbers.


Figure 9-62 Obtaining the disk serial number

Figure 9-63 Obtaining the disk serial number

Now, we are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as VDisks.


9.7.3 Move <strong>the</strong> LUNs to <strong>the</strong> SVC<br />

In this step, we move <strong>the</strong> LUNs that are assigned to <strong>the</strong> ESX server and reassign <strong>the</strong>m to <strong>the</strong><br />

SVC.<br />

Our ESX server has two LUNs, as shown in Figure 9-64.<br />

Figure 9-64 VMWare LUNs<br />

The virtual machines are located on <strong>the</strong>se LUNs. So, in order to move <strong>the</strong>se LUNs under <strong>the</strong><br />

control of <strong>the</strong> SVC, we do not need to reboot <strong>the</strong> entire ESX server, but we have to stop and<br />

suspend all VMware guests that are using <strong>the</strong>se LUNs.<br />

Move VMware guest LUNs

To move the VMware LUNs to the SVC, perform the following steps:

1. Using Storage Manager, identify the LUN number that has been presented to the ESX
   server. Make sure to record which LUN had which LUN number (Figure 9-65).

Figure 9-65   Identify LUN numbers in IBM DS4000 Storage Manager

2. Next, identify all of the VMware guests that are using this LUN and shut them down. One
   way to identify them is to highlight the virtual machine and open the Summary tab. The
   datastore that is used is displayed under Datastore. Figure 9-66 on page 740 shows a
   Linux virtual machine using the datastore named SLES_Costa_Rica.


Figure 9-66   Identify the LUNs that are used by virtual machines

3. If you have several ESX hosts, also check the other ESX hosts to make sure that no
   guest operating system is running that uses this datastore.

4. Repeat steps 1 to 3 for every datastore that you want to migrate.

5. After the guests are suspended, we use Storage Manager (our storage subsystem
   management tool) to unmap and unmask the disks from the ESX server and to remap and
   remask the disks to the SVC.

6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks
   are discovered and named mdiskN, where N is the next available MDisk number (starting
   from 0). Example 9-27 shows the commands that we used to discover our MDisks and to
   verify that we have the correct MDisks.

Example 9-27   Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name    status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name
UID
21 mdisk21 online unmanaged                             60.0GB   0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged                             70.0GB   0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>
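
If the cluster already sees many candidate LUNs, you can narrow this listing before you pick
the UIDs. The following is a minimal sketch that uses the -filtervalue option of the svcinfo
lsmdisk command; the option exists in the SVC CLI, but the output columns shown here are
abbreviated for illustration:

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -filtervalue mode=unmanaged
21 mdisk21 online unmanaged 60.0GB 0000000000000008 DS4700 600a0b8000...
22 mdisk22 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b8000...

Filtering on mode=unmanaged hides the MDisks that are already in use, which reduces the
chance of picking the wrong UID when many LUNs are mapped to the cluster at the same time.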

740 <strong>Implementing</strong> <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> <strong>V5.1</strong>


Important: Match your discovered MDisk serial numbers (the UID column of the svcinfo
lsmdisk command output) with the serial numbers that you obtained earlier (in Figure 9-62
and Figure 9-63 on page 738).

7. After we have verified that we have the correct MDisks, we rename them to avoid
   confusion in the future when we perform other MDisk-related tasks (Example 9-28).

Example 9-28   Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
21 ESX_SLES online unmanaged 60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged 70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

8. We create our image mode VDisks with the svctask mkvdisk command (Example 9-29).
   The -vtype image parameter ensures that image mode VDisks are created, which means
   that each virtualized disk has exactly the same block layout as the underlying MDisk.

Example 9-29   Create the image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype
image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype
image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>

9. Finally, we can map the new image mode VDisks to the host. Use the same SCSI LUN IDs
   as on the storage subsystem for the mapping (Example 9-30).

Example 9-30   Map the VDisks to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
1  Nile 0       30       ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1  Nile 1       29       ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029


10.Using the VMware management console, rescan to discover the new VDisks. Open the
   Configuration tab, select Storage Adapters, and click Rescan. During the rescan, you
   might receive geometry errors as ESX discovers that the old disks have disappeared.
   Your VDisks appear with new vmhba devices. (A command-line alternative is sketched
   after these steps.)

11.We are ready to restart the VMware guests again.

You have migrated the VMware LUNs successfully to the SVC.
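
As a command-line alternative to the Rescan button in step 10, you can rescan from the ESX
service console. This is a minimal sketch; esxcfg-rescan is a standard ESX 3.x service
console utility, but the adapter names are placeholders that depend on your hardware:

# Rescan each Fibre Channel HBA so that the newly mapped VDisks are discovered
esxcfg-rescan vmhba1
esxcfg-rescan vmhba2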

9.7.4 Migrating the image mode VDisks

While the VMware server and its virtual machines are still running, we migrate the image
mode VDisks onto striped VDisks, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode VDisks

In this example, we migrate the image mode VDisks to striped VDisks and move the data to
another storage subsystem in one step.

Adding a new storage subsystem to SVC

If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 9-67.

Figure 9-67   ESX SVC SAN environment

Make fabric zone changes

The first step is to set up the SAN configuration so that all of the zones are created. Add the
new storage subsystem to the Red zone so that the SVC can talk to it directly.


We also need a Green zone for our host to use when we are ready for it to directly access the
disk, after it has been removed from the SVC.

We assume that you have created the necessary zones.

In our environment, we have performed these tasks:

► Created three LUNs on another storage subsystem and mapped them to the SVC
► Discovered them as MDisks
► Created a new MDG
► Renamed these LUNs to more meaningful names
► Put all of these MDisks into this MDG

You can see the output of our commands in Example 9-31.

Example 9-31   Create a new MDisk group

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name     status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name
UID
21 ESX_SLES online image     3            MDG_Nile_VM    60.0GB   0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image     3            MDG_Nile_VM    70.0GB   0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23  online unmanaged                             55.0GB   000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24  online unmanaged                             55.0GB   000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25  online unmanaged                             55.0GB   000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name       status mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name
UID
21 ESX_SLES   online image   3            MDG_Nile_VM    60.0GB   0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3   online image   3            MDG_Nile_VM    70.0GB   0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4            MDG_ESX_VD     55.0GB   000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4            MDG_ESX_VD     55.0GB   000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4            MDG_ESX_VD     55.0GB   000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Migrating the VDisks

We are ready to migrate the image mode VDisks onto striped VDisks in the new MDG
(MDG_ESX_VD) with the svctask migratevdisk command (Example 9-32).

While the migration is running, our VMware ESX server, as well as our VMware guests,
remains running.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as
shown in Example 9-32. Listing the MDGs with the svcinfo lsmdiskgrp command shows that
the free capacity on the old MDG slowly increases as extents are moved to the new MDG.

Example 9-32   Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name        status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3  MDG_Nile_VM online 2           2           130.0GB  512         1.0GB         130.00GB         130.00GB      130.00GB      100            0
4  MDG_ESX_VD  online 3           0           165.0GB  512         35.0GB        0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS1:admin>
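
The svctask migratevdisk command also accepts an optional -threads parameter (1 to 4) if
you need to limit the background I/O load of the migration. To watch the progress without
retyping svcinfo lsmigrate, a small loop such as the following sketch can help; it assumes
that you issue the SVC CLI over SSH from a management workstation and that
admin@ITSO-CLS1 is a valid SSH target for our cluster:

# Poll the cluster every 60 seconds until no migrations remain;
# grep returns nonzero (ending the loop) when lsmigrate prints nothing
while ssh admin@ITSO-CLS1 svcinfo lsmigrate | grep '^progress'; do
  sleep 60
done
echo "All migrations have completed"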

If you compare the svcinfo lsmdiskgrp output after the migration, as shown in
Example 9-33, you can see that all of the virtual capacity has now been moved from the old
MDG (MDG_Nile_VM) to the new MDG (MDG_ESX_VD). The mdisk_count column shows
that the capacity is now spread over three MDisks.

Example 9-33   List MDisk group

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name        status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3  MDG_Nile_VM online 2           0           130.0GB  512         130.0GB       0.00MB           0.00MB        0.00MB        0              0
4  MDG_ESX_VD  online 3           2           165.0GB  512         35.0GB        130.00GB         130.00GB      130.00GB      78             0
IBM_2145:ITSO-CLS1:admin>

Our migration to the SVC is complete. You can remove the original MDisks from the SVC, and
you can remove these LUNs from the storage subsystem (a cleanup sketch follows).

If these LUNs are the last LUNs that were used on the old storage subsystem, you can also
remove it from your SAN fabric.
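
A minimal cleanup sketch follows, assuming that both source MDisks have returned to
managed mode with no extents in use after the migration; the rmmdisk and rmmdiskgrp
commands exist in the SVC CLI, but verify the MDisk state with svcinfo lsmdisk before
running them:

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk ESX_SLES:ESX_W2k3 MDG_Nile_VM
IBM_2145:ITSO-CLS1:admin>svctask rmmdiskgrp MDG_Nile_VM

After the rmmdisk command completes, the MDisks become unmanaged, and the
corresponding LUNs can then be unmapped from the SVC at the storage subsystem.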

9.7.5 Preparing to migrate from the SVC

Before we move the ESX server's LUNs from being accessed through the SVC as VDisks to
being accessed directly from the storage subsystem, we need to convert the VDisks into
image mode VDisks.

You might want to perform this activity for any one of these reasons:

► You purchased a new storage subsystem, and you were using the SVC as a tool to
  migrate from your old storage subsystem to this new storage subsystem.
► You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no
  longer need that host connected to the SVC.
► You want to ship a host, and its data, that is currently connected to the SVC to a site
  where there is no SVC.
► Changes to your environment no longer require this host to use the SVC.


There are also other preparatory activities that we can perform before we shut down the host
and reconfigure the LUN masking and mapping. This section describes those activities. In our
example, we move VDisks that are located on a DS4500 to image mode VDisks that are
located on a DS4700.

If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as described in "Adding a new
storage subsystem to SVC" on page 742 and "Make fabric zone changes" on page 742.

Creating new LUNs

On our storage subsystem, we create two LUNs and mask them so that the SVC can see
them. These two LUNs will eventually be given directly to the host, replacing the VDisks that it
currently has. To check that the SVC can use them, issue the svctask detectmdisk
command, as shown in Example 9-34.

Example 9-34   Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name       status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name
UID
23 IBMESX-MD1 online managed   4            MDG_ESX_VD     55.0GB   000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed   4            MDG_ESX_VD     55.0GB   000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed   4            MDG_ESX_VD     55.0GB   000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26    online unmanaged                             120.0GB  000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27    online unmanaged                             100.0GB  000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename
them to more meaningful names so that they do not get confused with other MDisks that are
being used by other activities. Also, we create the MDG to hold our new MDisks.
Example 9-35 shows these tasks.

Example 9-35   Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name        status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
4  MDG_ESX_VD  online 3           2           165.0GB  512         35.0GB        130.00GB         130.00GB      130.00GB      78             0
5  MDG_IVD_ESX online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS1:admin>

Our SVC environment is ready for the VDisk migration to image mode VDisks.

9.7.6 Migrating the managed VDisks to image mode VDisks

While our ESX server is still running, we migrate the managed VDisks onto the new MDisks
using image mode VDisks. The command to perform this action is the svctask
migratetoimage command, which is shown in Example 9-36.

Example 9-36   Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk
ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk
ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name         status mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name
UID
23 IBMESX-MD1   online managed 4            MDG_ESX_VD     55.0GB   000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2   online managed 4            MDG_ESX_VD     55.0GB   000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3   online managed 4            MDG_ESX_VD     55.0GB   000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image   5            MDG_IVD_ESX    120.0GB  000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image   5            MDG_IVD_ESX    100.0GB  000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

During the migration, our ESX server is unaware that its data is being physically moved
between storage subsystems. We can continue to run and continue to use the virtual
machines that are running on the server.

You can check the migration status with the svcinfo lsmigrate command, as shown in
Example 9-37 on page 748.


Example 9-37   The svcinfo lsmigrate command and output

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 2
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

After the migration has completed, the image mode VDisks are ready to be removed from the
ESX server, and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem's tool.

9.7.7 Remove the LUNs from the SVC

Your ESX server's configuration determines the order in which your LUNs are removed from
the control of the SVC, whether you need to reboot the ESX server, and whether you need to
suspend the VMware guests.

In our example, we have moved the virtual machine disks, so in order to remove these LUNs
from the control of the SVC, we have to shut down or suspend all of the VMware guests that
are using them. Perform the following steps:

1. Check which SCSI LUN IDs are assigned to the migrated disks by using the svcinfo
   lshostvdiskmap command, as shown in Example 9-38. Match the vdisk_UID values
   against the svcinfo lsvdisk output to correlate each mapping with its VDisk.

Example 9-38   Note SCSI LUN IDs

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
1  Nile 0       30       ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1  Nile 1       29       ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name         IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    vdisk_UID                        fc_map_count copy_count
0  vdisk_A      0           io_grp0       online 2            MDG_Image      36.0GB   image
29 ESX_W2k3_IVD 0           io_grp0       online 4            MDG_ESX_VD     70.0GB   striped 60050768018301BF2800000000000029 0            1
30 ESX_SLES_IVD 0           io_grp0       online 4            MDG_ESX_VD     60.0GB   striped 60050768018301BF280000000000002A 0            1
IBM_2145:ITSO-CLS1:admin>

2. Shut down or suspend all of the guests that are using these LUNs. You can use the same
   method that is used in "Move VMware guest LUNs" on page 739 to identify the guests that
   are using each LUN.

3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command
   (Example 9-39). To double-check that you have removed the VDisks, use the svcinfo
   lshostvdiskmap command, which shows that these VDisks are no longer mapped to the
   ESX server.

Example 9-39   Remove the VDisks from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD

4. Remove the VDisks from the SVC by using the svctask rmvdisk command, which makes
   the MDisks unmanaged, as shown in Example 9-40.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks
that there is no outstanding dirty cached data for the VDisk that is being removed. If there
is still uncommitted cached data, the command fails with this error message:

CMMVC6212E The command failed because data in the cache has not been
committed to disk

You have to wait for this cached data to be committed to the underlying storage
subsystem before you can remove the VDisk.

The SVC automatically destages uncommitted cached data two minutes after the last
write activity for the VDisk. The amount of data to destage and how busy the I/O
subsystem is determine how long this command takes to complete.

You can check whether the VDisk has uncommitted data in the cache by using the svcinfo
lsvdisk command and checking the fast_write_state attribute. This attribute has the
following meanings:

empty       No modified data exists in the cache.
not_empty   Modified data might exist in the cache.
corrupt     Modified data might have existed in the cache, but the data has been lost.
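
For example, the following sketch checks the attribute from a management workstation; the
filtering happens locally with grep, because the SVC's restricted shell does not provide
general-purpose utilities, and admin@ITSO-CLS1 is an assumed SSH alias for our cluster:

ssh admin@ITSO-CLS1 svcinfo lsvdisk ESX_W2k3_IVD | grep fast_write_state
# Expected before the VDisk can be removed:
# fast_write_state empty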

Example 9-40   Remove the VDisks from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name         status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name
UID
26 ESX_IVD_SLES online unmanaged                             120.0GB  000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online unmanaged                             100.0GB  000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask
   the disks from the SVC back to the ESX server. Remember that in Example 9-38 on
   page 748 we recorded the SCSI LUN IDs. To map your LUNs on the storage subsystem,
   use the same SCSI LUN IDs that you used in the SVC.

Important: This is the last step that you can perform and still safely back out of everything
you have done so far.

Up to this point, you can reverse all of the actions that you have performed so far to get the
server back online without data loss:

► Remap and remask the LUNs back to the SVC.
► Run the svctask detectmdisk command to rediscover the MDisks.
► Recreate the VDisks with the svctask mkvdisk command.
► Remap the VDisks back to the server with the svctask mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data
loss.

6. Using the VMware management console, rescan to discover the new VDisks. Figure 9-68
   shows the view before the rescan. Figure 9-69 on page 751 shows the view after the
   rescan. Note that the size of the LUN has changed, because we have moved to another
   LUN on another storage subsystem.

Figure 9-68   Before adapter rescan


Figure 9-69   After adapter rescan

During the rescan, you might receive geometry errors when ESX discovers that the old disk
has disappeared. Your VDisk appears with a new vmhba address, and VMware recognizes it
as our VMWARE-GUESTS disk.

7. We are now ready to restart the VMware guests.

8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask
   detectmdisk command. The MDisks are discovered as offline and then automatically
   removed when the SVC determines that there are no VDisks associated with these
   MDisks.

9.8 Migrating AIX SAN disks to SVC disks

In this section, we move two LUNs from an AIX server that is directly attached to our DS4000
storage subsystem over to the SVC.

We then manage those LUNs with the SVC, move them between other managed disks, and
finally move them back to image mode disks, so that those LUNs can be masked and mapped
back to the AIX server directly.

If you use this example, it can help you perform any of the following activities in your
environment:

► Move an AIX server's SAN LUNs from a storage subsystem and virtualize those same
  LUNs through the SVC, which is the first activity that you perform when introducing the
  SVC into your environment. This section shows that your host downtime is only a few
  minutes while you remap and remask disks using your storage subsystem LUN
  management tool. This step starts in 9.8.2, "Preparing your SVC to virtualize disks" on
  page 754.

► Move data between storage subsystems while your AIX server is still running and
  servicing your business application. You might perform this activity if you are removing a
  storage subsystem from your SAN environment, or if you want to move the data onto
  LUNs that are more appropriate for the type of data that is stored on those LUNs, taking
  into account availability, performance, and redundancy. We describe this step in 9.8.4,
  "Migrating image mode VDisks to VDisks" on page 761.

► Move your AIX server's LUNs back to image mode VDisks, so that they can be remapped
  and remasked directly back to the AIX server. This step starts in 9.8.5, "Preparing to
  migrate from the SVC" on page 763.

Use these activities individually or together to migrate your AIX server's LUNs from one
storage subsystem to another storage subsystem, using the SVC as your migration tool. If
you do not need all three activities, you can use a subset of them simply to introduce the SVC
into, or remove it from, your environment.

The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.

We show our AIX environment in Figure 9-70.

Figure 9-70   AIX SAN environment (an IBM or OEM storage subsystem and an AIX host,
connected through the SAN; the host-to-storage zoning is the Green zone)

Figure 9-70 shows our AIX server connected to our SAN infrastructure. It has two LUNs
(hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.

The hdisk3 disk makes up the itsoaixvg LVM volume group, and the hdisk4 disk makes up the
itsoaixvg1 LVM volume group, as shown in Example 9-41 on page 753.


Example 9-41   AIX SAN configuration

#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02      1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02      1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg     active
hdisk1 0009cdda43c9dfd5 rootvg     active
hdisk2 0009cddabaef1d99 rootvg     active
hdisk3 0009cdda0a4c0dd5 itsoaixvg  active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active
#

Our AIX server represents a typical SAN environment with a host directly using LUNs that
were created on a SAN storage subsystem, as shown in Figure 9-70 on page 752:

► The AIX server's HBA cards are zoned so that they are in the Green (dotted line) zone
  with our storage subsystem.
► The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem and,
  using LUN masking, are directly available to our AIX server.

9.8.1 Connecting the SVC to your SAN fabric

This section describes the steps to take to introduce the SVC into your SAN environment.
While this section only summarizes these activities, you can accomplish this task without any
downtime to any host or application that also uses your storage area network.

If you already have an SVC connected, skip to 9.8.2, "Preparing your SVC to virtualize disks"
on page 754.

Be extremely careful, because connecting the SVC to your storage area network requires you
to connect cables to your SAN switches and to alter your switch zone configuration.
Performing these activities incorrectly can render your SAN inoperable, so make sure that
you fully understand the effect of your actions.

Connecting the SVC to your SAN fabric requires you to perform these tasks:

► Assemble your SVC components (nodes, uninterruptible power supply unit, and Master
  Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on
  your SAN.

► Create and configure your SVC cluster.

► Create these additional zones:

  – An SVC node zone (our Black zone in Example 9-54 on page 763). This zone only
    contains the ports (or WWNs) of the SVC nodes in your cluster. Our SVC is a
    two-node cluster, where each node has four ports, so our Black zone has eight
    defined WWNs.

  – A storage zone (our Red zone). This zone also has all of the ports and WWNs from the
    SVC node zone, as well as the ports and WWNs for all of the storage subsystems that
    the SVC will virtualize.


  – A host zone (our Blue zone). This zone contains the ports and WWNs for each host
    that will access the VDisks, together with the ports that are defined in the SVC node
    zone.

Important: Do not put your storage subsystems in the host (Blue) zone, which is an
unsupported configuration and can lead to data loss.
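
For reference, the zone creation itself happens on your SAN switches, not on the SVC. The
following is a minimal Brocade-style sketch (the syntax varies by switch vendor, and the
WWPNs and configuration name shown are placeholders rather than our lab values):

zonecreate "SVC_Black_Zone", "50:05:07:68:01:40:xx:xx; 50:05:07:68:01:30:xx:xx"
cfgadd "ITSO_cfg", "SVC_Black_Zone"
cfgenable "ITSO_cfg"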

Figure 9-71 shows our environment.

Figure 9-71   SAN environment with SVC attached (the storage subsystem, the AIX host, and
the SVC I/O group on the SAN, with the Green, Red, Blue, and Black zones)

9.8.2 Preparing your SVC to virtualize disks

This section describes the preparatory tasks that we perform before taking our AIX server
offline. These tasks are all nondisruptive activities and do not affect your SAN fabric or your
existing SVC configuration (if you already have a production SVC in place).

Creating a managed disk group

When we move the two AIX LUNs to the SVC, they are first used in image mode; therefore,
we must create an MDG to hold those disks. We create an empty MDG for the disks, using
the command in Example 9-42 on page 755. We name the MDG that holds our LUNs
aix_imgmdg.



Example 9-42   Create empty MDisk group

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name       status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
7  aix_imgmdg online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS2:admin>

Creating our host definition

If you have prepared the zones correctly, the SVC can see the AIX server's HBA adapters on
the fabric (our host has two HBAs).

First, we get the WWNs for our AIX server's HBAs, because we have many hosts that are
connected to our SAN fabric and in the Blue zone. We want to make sure that we have the
correct WWNs to reduce our AIX server's downtime. Example 9-43 shows the commands to
get the WWNs; our adapters have the WWNs 10000000C932A7FB and 10000000C932A800.

Example 9-43   Discover your WWN

#lsdev -Ccadapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
fcs0 U0.1-P2-I4/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter

Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1

PLATFORM SPECIFIC

Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
#

The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs that the SVC
can see on the SAN fabric and that have not yet been allocated to a host. Example 9-44
shows the output of the ports that it found on our SAN fabric. (If a port does not show up, it
indicates a zone configuration problem.)

Example 9-44   Add the host to the SVC

IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
IBM_2145:ITSO-CLS2:admin>


After verifying that the SVC can see our host (Kanaga), we create the host entry and assign
the WWNs to this entry, as shown with the commands in Example 9-45.

Example 9-45   Create the host entry

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn
10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:admin>

Verifying that we can see our storage subsystem

If we performed the zoning correctly, the SVC can see the storage subsystem with the
svcinfo lscontroller command (Example 9-46).

Example 9-46   Discover the storage controller

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id  controller_name  ctrl_s/n  vendor_id  product_id_low  product_id_high
0   DS4500                     IBM        1742-900
1   DS4700                     IBM        1814
IBM_2145:ITSO-CLS2:admin>

Names: The svctask chcontroller command enables you to change the discovered
storage subsystem name in the SVC. In complex SANs, we recommend that you rename
your storage subsystems to more meaningful names.
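
For example, a minimal sketch of such a rename follows; the controller ID (1) corresponds to
the DS4700 in Example 9-46, and the new name ITSO_DS4700 is simply an illustrative choice:

IBM_2145:ITSO-CLS2:admin>svctask chcontroller -name ITSO_DS4700 1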

Getting the disk serial numbers

To avoid the risk of creating the wrong VDisks when many unmanaged MDisks are visible to
the SVC, we obtain the LUN serial numbers from our storage subsystem administration tool
(Storage Manager) before we start.

When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode VDisks.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive, and choose Properties. Figure 9-72 on
page 758 and Figure 9-73 on page 758 show our serial numbers.


Figure 9-72   Obtaining the disk serial number
Figure 9-73   Obtaining the disk serial number

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as VDisks.


9.8.3 Moving the LUNs to the SVC

In this step, we move the LUNs that are assigned to the AIX server and reassign them to the
SVC.

Because we only want to move the LUNs that hold our application and data files, we can
move them without rebooting the host. The only requirement is that we unmount the file
systems and vary off the Volume Groups to ensure data integrity after the reassignment.

Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver
(SDD) is installed on the AIX server. You can install the SDD ahead of time; however, it
might require an outage of your host to do so.

The following steps are required, because we intend to move both LUNs at the same time:

1. Confirm that the SDD is installed (a verification sketch follows Example 9-47).

2. Unmount and vary off the Volume Groups:

   a. Stop the applications that are using the LUNs.
   b. Unmount those file systems with the umount MOUNT_POINT command.
   c. If the file systems are on an LVM volume, deactivate that Volume Group with the
      varyoffvg VOLUMEGROUP_NAME command.

Example 9-47 shows the commands that we ran on Kanaga.

Example 9-47   AIX command sequence

#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg
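
For step 1, the following sketch shows one way to confirm that the SDD is installed and
healthy on AIX; the fileset name varies with the AIX release level, so the devices.sdd.*
pattern is used here, and datapath query device is the standard SDD utility for listing the
multipath (vpath) devices:

# Confirm that the SDD fileset is installed
lslpp -l "devices.sdd.*"
# List the SDD vpath devices and the state of their paths
datapath query device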

3. Using Storage Manager (our storage subsystem management tool), we unmap and
   unmask the disks from the AIX server and remap and remask the disks to the SVC.

4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks
   are discovered and named mdiskN, where N is the next available MDisk number (starting
   from 0). Example 9-48 shows the commands that we used to discover our MDisks and to
   verify that we have the correct MDisks.

Example 9-48   Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name    status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name
UID
24 mdisk24 online unmanaged                             5.0GB    0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged                             8.0GB    0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Important: Match your discovered MDisk serial numbers (the UID column of the svcinfo
lsmdisk command output) with the serial numbers that you discovered earlier (in
Figure 9-72 and Figure 9-73 on page 758).

5. After we have verified that we have the correct MDisks, we rename them to avoid
   confusion in the future when we perform other MDisk-related tasks (Example 9-49).

Example 9-49   Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name
UID
24 Kanaga_AIX  online unmanaged                             5.0GB    0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged                             8.0GB    0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

6. We create our image mode VDisks with the svctask mkvdisk command and the -vtype
   image option (Example 9-50). This option virtualizes the disks with exactly the same
   layout as though they were not virtualized.

Example 9-50   Create the image mode VDisks

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype
image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype
image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:admin>

7. Finally, we can map the new image mode VDisks to the host (Example 9-51).

Example 9-51   Map the VDisks to the host

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy
to copy the new image mode VDisks onto other VDisks. You do not need to wait until the
FlashCopy process has completed before starting your application.


Now, we are ready to perform the following steps to put the image mode VDisks online (a
consolidated sketch follows this list):

1. Remove the old disk definitions, if you have not done so already.

2. Run the cfgmgr -vs command to rediscover the available LUNs.

3. If your application and data are on an LVM volume, rediscover the Volume Group, and
   then run the varyonvg VOLUME_GROUP command to activate the Volume Group.

4. Mount your file systems with the mount /MOUNT_POINT command.

5. You are ready to start your application.
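
A consolidated sketch of steps 2 through 4 follows; the Volume Group names are taken from
our example, but /itsoaixfs is an assumed mount point, so substitute your own file systems:

cfgmgr -vs            # rediscover the LUNs that are now presented by the SVC
lspv                  # verify that the new devices are visible
varyonvg itsoaixvg    # reactivate the Volume Groups
varyonvg itsoaixvg1
mount /itsoaixfs      # remount the file system, then restart the application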

9.8.4 Migrating image mode VDisks to VDisks

While the AIX server is still running, and our file systems are in use, we migrate the image mode VDisks onto striped VDisks, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode VDisks

From our storage subsystem, we have performed these tasks:
► Created and allocated three LUNs to the SVC
► Discovered them as MDisks
► Renamed these LUNs to more meaningful names
► Created a new MDG
► Put all these MDisks into this MDG

You can see the output of our commands in Example 9-52.

Example 9-52 Create a new MDisk group

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd



IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX online image 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Migrating the VDisks

We are ready to migrate the image mode VDisks onto striped VDisks with the svctask migratevdisk command (Example 9-15 on page 724).

While the migration is running, our AIX server is still running, and we can continue accessing the files.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 9-53. Listing the MDG with the svcinfo lsmdiskgrp command shows that the free capacity on the old MDG is slowly increasing while those extents are moved to the new MDG.

Example 9-53 Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

After this task has completed, Example 9-54 on page 763 shows that the VDisks are spread over three MDisks in the aix_vd MDG. The old MDG is empty.



Example 9-54 Migration complete

IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>

Our migration to the SVC is complete. You can remove the original MDisks from the SVC, and you can remove these LUNs from the storage subsystem.

If these LUNs were the last LUNs in use on our storage subsystem, we can also remove that subsystem from our SAN fabric.
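A minimal sketch of that cleanup, using the names from our examples and assuming that no VDisks remain in the group, might look like this:

IBM_2145:ITSO-CLS2:admin>svctask rmmdisk -mdisk Kanaga_AIX:Kanaga_AIX1 aix_imgmdg
IBM_2145:ITSO-CLS2:admin>svctask rmmdiskgrp aix_imgmdg

After the LUNs are unmapped at the storage subsystem, running the svctask detectmdisk command removes the stale MDisk entries.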

9.8.5 Preparing to migrate from the SVC

Before we change the AIX servers' LUNs from being accessed by the SVC as VDisks to being directly accessed from the storage subsystem, we need to convert the VDisks into image mode VDisks.

You might want to perform this activity for one of these reasons:
► You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.
► You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no longer need that host connected to the SVC.
► You want to ship a host, and its data, that is currently connected to the SVC to a site where there is no SVC.
► Changes to your environment no longer require this host to use the SVC.



There are other preparatory activities to complete before we shut down the host and reconfigure the LUN masking and mapping. This section covers those activities.

If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as shown in Figure 9-74.

[Figure 9-74 shows the AIX host and the SVC I/O Group connected through the SAN to the IBM or OEM storage subsystems, with the Green, Red, Blue, and Black zones.]

Figure 9-74 Environment with SVC

Zoning for migration scenarios


Making fabric zone changes

The first step is to set up the SAN configuration so that all of the zones are created. Add the new storage subsystem to the Red zone, so that the SVC can communicate with it directly.

Create a Green zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC.

It is assumed that you have created the necessary zones.

After your zone configuration is set up correctly, the SVC sees the new storage subsystem's controller by using the svcinfo lscontroller command, as shown in Example 9-55 on page 765. It is also a good idea to rename the controller to a more meaningful name. You can use the svctask chcontroller -name command.
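For example, a sketch of such a rename (the new name is hypothetical, and we assume the new subsystem was discovered as controller id 0; take the actual id from the lscontroller output):

IBM_2145:ITSO-CLS2:admin>svctask chcontroller -name ITSO_NEWSUB 0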



Example 9-55 Discovering the new storage subsystem

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT
IBM_2145:ITSO-CLS2:admin>

Creating new LUNs

On our storage subsystem, we created two LUNs and masked them so that the SVC can see them. We will eventually give these LUNs directly to the host, removing the VDisks that it currently has. To check that the SVC can use the LUNs, issue the svctask detectmdisk command, as shown in Example 9-56. In our example, we use two 10 GB LUNs that are located on the DS4500 subsystem, so in this step, we migrate back to image mode VDisks and to another subsystem in one step. We have already deleted the old LUNs on the DS4700 storage subsystem, which is the reason why they appear offline here.

Example 9-56 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29 online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. Also, we create the MDG to hold our new MDisks, as shown in Example 9-57.

Example 9-57 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30



IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
6 aix_vd online 3 2 18.0GB 512 5.0GB 13.00GB 13.00GB 13.00GB 72 0
7 aix_imgmdg offline 2 0 13.0GB 512 13.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS2:admin>

Our SVC environment is ready for the VDisk migration to image mode VDisks.

9.8.6 Migrating the managed VDisks

While our AIX server is still running, we migrate the managed VDisks onto the new MDisks using image mode VDisks. The command to perform this action is the svctask migratetoimage command, which is shown in Example 9-58.

Example 9-58 Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
24 Kanaga_AIX offline managed 7 aix_imgmdg 5.0GB 0000000000000008 DS4700 600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7 aix_imgmdg 8.0GB 0000000000000009 DS4700 600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6 aix_vd 6.0GB 000000000000000A DS4700 600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6 aix_vd 6.0GB 000000000000000B DS4700 600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6 aix_vd 6.0GB 000000000000000C DS4700 600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG online image 3 KANAGA_AIXMIG 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000



30 AIX_MIG1 online image 3 KANAGA_AIXMIG 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

During the migration, our AIX server is unaware that its data is being physically moved between storage subsystems.

After the migration is complete, the image mode VDisks are ready to be removed from the AIX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's management tool.

9.8.7 Removing the LUNs from the SVC

The next step requires downtime, while we remap and remask the disks so that the host sees them directly through the Green zone.

Because our LUNs only hold data files, and because we use a separate Volume Group, we can remap and remask the disks without rebooting the host. The only requirement is that we unmount the file system and vary off the Volume Group to ensure data integrity after the reassignment.

Before you start: Moving LUNs to another storage subsystem might require a different driver than SDD. Check with the storage subsystem's vendor to see which driver you will need. You might be able to install this driver ahead of time.

Follow these required steps to remove the SVC:
1. Confirm that the correct device driver for the new storage subsystem is loaded. Because we are moving to a DS4500, we can continue to use the SDD.
2. Shut down any applications and unmount the file systems:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems with the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that Volume Group with the varyoffvg VOLUMEGROUP_NAME command.



3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-59). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX server.

Example 9-59 Remove the VDisks from the host

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command, which will make the MDisks unmanaged, as shown in Example 9-60.

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the VDisk being removed. If uncommitted cached data still exists, the command fails with the following error message:

CMMVC6212E The command failed because data in the cache has not been committed to disk

You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk.

The SVC will automatically destage uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to destage and how busy the I/O subsystem is will determine how long this command takes to complete.

You can check whether the VDisk has uncommitted data in the cache by using the svcinfo lsvdisk command and checking the fast_write_state attribute. This attribute has the following meanings:

empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might have existed in the cache, but any modified data has been lost.
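For example, a quick check before removal might look like this (a sketch; the output is abridged to the relevant line):

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk IVD_Kanaga
...
fast_write_state empty
...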

Example 9-60 Remove the VDisks from the SVC

IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
29 AIX_MIG online unmanaged 10.0GB 0000000000000010 DS4500 600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged 10.0GB 0000000000000011 DS4500 600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>



5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the AIX server.

Important: This step is the last step that you can perform and still safely back out of everything you have done so far.

Up to this point, you can reverse all of the actions that you have performed so far to get the server back online without data loss:
► Remap and remask the LUNs back to the SVC.
► Run the svctask detectmdisk command to rediscover the MDisks.
► Recreate the VDisks with the svctask mkvdisk command.
► Remap the VDisks back to the server with the svctask mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data loss.

We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and mapping were done successfully, our AIX server will see the LUNs as though nothing has happened:
1. Run the cfgmgr -vs command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disks.
3. Remove the references to all of the old disks. Example 9-61 shows the removal using SDD, and Example 9-62 on page 770 shows the removal using SDDPCM.

Example 9-61 Remove references to old paths using SDD

#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk5 Defined 1Z-08-02 SAN Volume Controller Device
hdisk6 Defined 1Z-08-02 SAN Volume Controller Device
hdisk7 Defined 1D-08-02 SAN Volume Controller Device
hdisk8 Defined 1D-08-02 SAN Volume Controller Device
hdisk10 Defined 1Z-08-02 SAN Volume Controller Device
hdisk11 Defined 1Z-08-02 SAN Volume Controller Device
hdisk12 Defined 1D-08-02 SAN Volume Controller Device
hdisk13 Defined 1D-08-02 SAN Volume Controller Device
vpath0 Defined Data Path Optimizer Pseudo Device Driver
vpath1 Defined Data Path Optimizer Pseudo Device Driver
vpath2 Defined Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done



vpath0 deleted
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02 1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02 1742-900 (900) Disk Array Device
#

Example 9-62 Remove references to old paths using SDDPCM

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined 1D-08-02 MPIO FC 2145
hdisk4 Defined 1D-08-02 MPIO FC 2145
hdisk5 Available 1D-08-02 MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02 MPIO FC 2145

4. If your application and data are on an LVM volume, rediscover the Volume Group, and then run the varyonvg VOLUME_GROUP command to activate the Volume Group.
5. Mount your file systems with the mount /MOUNT_POINT command.
6. You are ready to start your application.

Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then they will automatically be removed after the SVC determines that there are no VDisks associated with these MDisks.

9.9 Using SVC for storage migration

The primary use of the SVC is not as a storage migration tool. However, the advanced capabilities of the SVC enable us to use the SVC as a "storage migration tool"; therefore, you can add the SVC temporarily to your SAN environment to copy the data from one storage subsystem to another storage subsystem. The SVC enables you to copy image mode VDisks directly from one subsystem to another subsystem while host I/O is running. The only downtime that is required is when the SVC is added to and removed from your SAN environment.

To use the SVC for migration purposes only, perform the following steps:
1. Add the SVC to your SAN environment.
2. Prepare the SVC.



3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the host.
8. Remove the SVC from your SAN.
9. Mount the LUNs, or start the host again.
10. The migration is complete.

As you can see, extremely little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled by the SVC, so the host does not hinder the performance while the migration progresses.

To use the SVC for storage migrations, perform the steps that are described in the following sections:
► 9.5.2, "Adding the SVC between the host system and the DS4700" on page 690
► 9.5.6, "Migrating the VDisk from image mode to image mode" on page 705
► 9.5.7, "Free the data from the SVC" on page 709

9.10 Using VDisk Mirroring and Space-Efficient VDisks together

In this section, we show that you can use the VDisk Mirroring feature and Space-Efficient VDisks together to move data from a fully allocated VDisk to a Space-Efficient VDisk.

9.10.1 Zero detect feature

SVC 5.1 introduced the zero detect feature for Space-Efficient VDisks. This feature enables clients to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk using VDisk Mirroring.

To migrate from a fully allocated VDisk to a Space-Efficient VDisk, perform these steps (a condensed sketch follows the list):
1. Add the target space-efficient copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.
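In CLI terms, and using the VDisk name and parameters from the examples in 9.10.2 (the MDG id, -rsize value, and grain size are illustrative), the three steps might look like this:

IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2    (repeat until progress reaches 100)
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full

Section 9.10.2 walks through each of these commands in detail.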

By using this feature, clients can easily free up managed disk space and make better use of their storage, without needing to purchase any additional function for the SVC.

VDisk Mirroring and Space-Efficient VDisk functions are included in the base virtualization license. Clients with thin-provisioned storage on an existing storage system can migrate their data under SVC management using Space-Efficient VDisks without having to allocate additional storage space.

Zero detect only works if the disk actually contains zeros; an uninitialized disk can contain anything, unless the disk has been formatted (for example, using the -fmtdisk flag on the mkvdisk command).



Figure 9-75 shows the Space-Efficient VDisk zero detect concept.

Figure 9-75 Space-Efficient VDisk zero detect feature

Figure 9-76 on page 773 shows the Space-Efficient VDisk organization.



Figure 9-76 Space-Efficient VDisk organization

As shown in Figure 9-76, a Space-Efficient VDisk has these components:
► Used capacity: This term specifies the portion of real capacity that is being used to store data. For non-space-efficient copies, this value is the same as the VDisk capacity. If the VDisk copy is space-efficient, the value increases from zero to the real capacity value as more of the VDisk is written to.
► Real capacity: This capacity is the real allocated space in the MDG. In a Space-Efficient VDisk, this value can differ from the total capacity.
► Free capacity: This value specifies the difference between the real capacity and the used capacity values. The SVC keeps this capacity available as contingency. If the used capacity grows to consume the free capacity, and the VDisk has been configured with the -autoexpand option, the SVC automatically expands the allocated space (the real capacity) for this VDisk to maintain the contingency.
► Grains: This value is the smallest unit into which the allocated space can be divided.
► Metadata: This value is allocated in the real capacity, and it tracks the used capacity, real capacity, and free capacity.
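As a worked check, using the values that the CLI reports in Example 9-64 on page 774: free_capacity = real_capacity - used_capacity = 323.57 MB - 0.41 MB, which the CLI reports as 323.17 MB (each value is rounded independently, so the subtraction is approximate at the last digit).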

9.10.2 VDisk Mirroring with Space-Efficient VDisks

In this section, we show an example of using the VDisk Mirroring feature with Space-Efficient VDisks:

1. We create a fully allocated VDisk of 15 GB named VD_Full, as shown in Example 9-63.

Example 9-63 VD_Full creation example

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk 0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full



id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

2. We then add a Space-Efficient VDisk copy with the VDisk Mirroring option by using the addvdiskcopy command and the autoexpand parameter, as shown in Example 9-64.

Example 9-64 addvdiskcopy command example

IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
VDisk [2] copy [1] successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full



IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on



warning 80
grainsize 32

As you can see in Example 9-64 on page 774, VD_Full has a copy_id 1 where the used_capacity is 0.41 MB, which is equal to the metadata, because only zeros exist on the disk. The real_capacity is 323.57 MB, which is equal to the -rsize 2% value that is specified in the addvdiskcopy command. The free_capacity is 323.17 MB, which is equal to the real capacity minus the used capacity.

If zeros are written on the disk, the Space-Efficient VDisk does not consume space. Example 9-65 shows that the Space-Efficient VDisk does not consume space even when the copies are in sync.

Example 9-65 Space-Efficient VDisk display

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 VD_Full 0 100
2 VD_Full 1 100

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty



used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

3. We can split the VDisk Mirror, or remove one of the copies, keeping the space-efficient copy as our valid copy, by using the splitvdiskcopy command or the rmvdiskcopy command:
– If you need your copy as a space-efficient clone, we suggest that you use the splitvdiskcopy command, because that command generates a new VDisk that you can map to any server that you want.
– If you are migrating from a fully allocated VDisk to a Space-Efficient VDisk without any effect on server operations, we suggest that you use the rmvdiskcopy command. In this case, the original VDisk name is kept, and it remains mapped to the same server.

Example 9-66 shows the splitvdiskcopy command.

Example 9-66 splitvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 0 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000D 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7



name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Example 9-67 shows the rmvdiskcopy command.

Example 9-67 rmvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2



id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

9.10.3 Metro Mirror and Space-Efficient VDisk

In this section, we show how to use Metro Mirror with a Space-Efficient VDisk as the target VDisk. Using Metro Mirror in an intracluster configuration is one way to migrate data.

Remember that VDisk Mirroring and VDisk migration are concurrent operations, while Metro Mirror must be considered disruptive to data access, because at the end of the migration, we must map the Metro Mirror target VDisk to the server.

With this example, we show how you can migrate data with intracluster Metro Mirror using a Space-Efficient VDisk as the target VDisk. We also show how the real capacity and the free capacity change as the used capacity changes during the Metro Mirror synchronization. Follow these steps:

1. We use a fully allocated VDisk named VD_Full, and we create a Metro Mirror relationship with a Space-Efficient VDisk named VD_SEV. Example 9-68 shows the two VDisks and the rcrelationship creation.

Example 9-68 VDisks and rcrelationship

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 1 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000F 0 1 empty

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id 2
RC_name
vdisk_UID 60050768018401BF2800000000000010
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name



fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000F
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 307.20MB
free_capacity 306.79MB
overallocation 5000
autoexpand off
warning 1
grainsize 32
IBM_2145:ITSO-CLS2:admin>svctask mkrcrelationship -cluster 0000020061006FCA -master VD_Full -aux VD_SEV -name MM_SEV_rel
RC Relationship, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsrcrelationship MM_SEV_rel



id 2
name MM_SEV_rel
master_cluster_id 0000020061006FCA
master_cluster_name ITSO-CLS2
master_vdisk_id 2
master_vdisk_name VD_Full
aux_cluster_id 0000020061006FCA
aux_cluster_name ITSO-CLS2
aux_vdisk_id 7
aux_vdisk_name VD_SEV
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type metro

2. We start the rcrelationship and observe how the space allocation in the target VDisk changes until it reaches the total of the used capacity. Example 9-69 shows how to start the rcrelationship and shows the space allocation changing.

Example 9-69 rcrelationship and space allocation

IBM_2145:ITSO-CLS2:admin>svctask startrcrelationship MM_SEV_rel
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
.
.
type striped
mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 3.64GB
real_capacity 3.95GB
free_capacity 312.89MB
overallocation 380
autoexpand on
warning 80
grainsize 32



IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.02GB
real_capacity 15.03GB
free_capacity 11.97MB
overallocation 99
autoexpand on
warning 80
grainsize 32

3. In conclusion, it is possible to use Metro Mirror to migrate data, and we can use a Space-Efficient VDisk as the target VDisk. However, this approach offers little benefit, because at the end of the initial data synchronization, the Space-Efficient VDisk allocates as much space as the source (in our case, VD_Full). If you want to use Metro Mirror to migrate your data, we suggest that you use fully allocated VDisks for both the source and the target.



Appendix A. Scripting

In this appendix, we present a high-level overview of how to automate various tasks by creating scripts using the IBM System Storage SAN Volume Controller (SVC) command-line interface (CLI).



Scripting structure

When creating scripts to automate tasks on the SVC, use the structure that is illustrated in Figure A-1.

[Figure A-1 shows the flow: scheduled or manual activation, create an SSH connection to the SVC, run the command or commands, and perform logging.]

Figure A-1 Scripting structure for SVC task automation

Creating a Secure Shell connection to the SVC

When creating a connection to the SVC, the user who runs the script must have access to a private key that corresponds to a public key that has been previously uploaded to the SVC. The private key is used to establish the Secure Shell (SSH) connection that is needed to use the CLI on the SVC. If the SSH key pair is generated without a passphrase, you can connect without the need for special scripting to parse in the passphrase.

On UNIX systems, you can use the ssh command to create an SSH connection with the SVC. On Windows systems, you can use a utility called plink.exe, which is provided with the PuTTY tool, to create an SSH connection with the SVC. In the following examples, we use plink to create the SSH connection to the SVC.
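On a UNIX system, for example, an equivalent call might look like the following sketch (the private key path is hypothetical; the cluster IP address is the one used later in this appendix):

ssh -i /home/admin/.ssh/icat admin@9.43.36.117 svcinfo lsvdisk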

Executing the commands

When using the CLI, you can use the examples in Chapter 7, "SAN Volume Controller operations using the command-line interface" on page 339 for inspiration, or refer to the IBM System Storage SAN Volume Controller Command-Line Interface User's Guide, which you can download from the SVC documentation page for each SVC code level at this Web site:

http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329743&taskind=1

Performing logging

When using the CLI, not all commands provide a usable response to determine the status of the invoked command. Therefore, we recommend that you always create checks that can be logged for monitoring and troubleshooting purposes.
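As a minimal sketch (assuming the predefined PuTTY session SVC1 that is used in the examples that follow), plink returns the exit status of the remote command, so a bat script can test the ERRORLEVEL value and write a timestamped entry to a log file:

plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3 >> C:\DirectoryPath\VDiskScript.log 2>&1
if errorlevel 1 (echo %date% %time% mkvdisk %2 FAILED >> C:\DirectoryPath\VDiskScript.log) else (echo %date% %time% mkvdisk %2 OK >> C:\DirectoryPath\VDiskScript.log)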




Automated virtual disk creation

In the following example, we create a simple bat script to automate virtual disk (VDisk) creation to illustrate how scripts are created. Creating scripts to automate SVC administrative tasks is not limited to bat scripting; in principle, you can encapsulate the CLI commands in scripts using any programming language that you prefer, or you can use program applets to perform routine tasks.

Connecting to the SVC using a predefined SSH connection

The easiest way to create an SSH connection to the SVC is when plink can call a predefined PuTTY session, as shown in Figure A-2 on page 788.

Define a session, including this information:

► The auto-login user name, set to your SVC admin user name (for example, admin). This parameter is set under the Connection → Data category.
► The private key for authentication (for example, icat.ppk). This key is the private key that you have already created. This parameter is set under the Connection → SSH → Auth category.
► The IP address of the SVC cluster. This parameter is set under the Session category.
► A session name. Our example uses SVC:cluster1.

Your version of PuTTY might have these parameters set in other categories.



Figure A-2 Using a predefined SSH connection with plink

To use this predefined PuTTY session, use this syntax:

plink SVC1:cluster1

If a predefined PuTTY session is not used, use this syntax:

plink admin@9.43.36.117 -i "C:\DirectoryPath\KeyName.PPK"

Using a CLI command to create VDisks

In our example, we decided that the following parameters are variables when creating the VDisks:

► VDisk size (in GB): %1
► VDisk name: %2
► Managed Disk Group (MDG): %3

Use the following command:

svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3



Listing created VDisks

To log the fact that our script created the VDisk that we defined when executing the script, we use the -filtervalue parameter:

svcinfo lsvdisk -filtervalue 'name=%2' >> C:\DirectoryPath\VDiskScript.log

Invoking the VDiskScript.bat sample script

Finally, putting it all together, we create our sample bat script for creating a VDisk, as shown in Figure A-3.

-------------------------------------VDiskScript.bat---------------------------
plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3
plink SVC1 -l admin svcinfo lsvdisk -filtervalue 'name=%2' >> E:\SVC_Jobs\VDiskScript.log
-------------------------------------------------------------------------------
Figure A-3 VDiskScript.bat

Using the script, we create a VDisk with the following parameters:

► VDisk size (in GB): 4 (%1)
► VDisk name: Host1_E_Drive (%2)
► MDG: 1 (%3)

Example A-1 shows executing the script to create a VDisk.

Example: A-1 Executing the script to create the VDisk

E:\SVC_Jobs>VDiskScript 4 Host1_E_Drive 1

E:\SVC_Jobs>plink SVC1:Cluster1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size 4 -unit gb -name Host1_E_Drive -mdiskgrp 1
Virtual Disk, id [32], successfully created

E:\SVC_Jobs>plink SVC1:Cluster1 -l admin svcinfo lsvdisk -filtervalue 'name=Host1_E_Drive' 1>>E:\SVC_Jobs\VDiskScript.log

From the output of the log, as shown in Example A-2, we verify that the VDisk is created as intended.

Example: A-2 Log file output from VDiskScript.bat

id name          IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type    FC_id FC_name RC_id RC_name vdisk_UID                        fc_map_count copy_count
32 Host1_E_Drive 0           io_grp0       online 1            MDG_DS47       4.0GB    striped                             60050768018301BF280000000000002E 0            1



SVC tree

We provide another example of using scripting to communicate with the SVC. This script displays a tree-like structure for the SVC, as shown in Example A-3.

We have written this script in Perl to work without modification using Perl on UNIX systems (such as AIX or Linux), Perl for Windows, or Perl in a Windows Cygwin environment.

Example: A-3 SVC tree script output

$ ./svctree.pl 10.0.1.119 admin /cygdrive/c/Keys/icat.ssh
+ ITSO-CLS2 (10.0.1.119)
  + CONTROLLERS
    + DS4500 (0)
      + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
      + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
      + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
      + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
      + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
      + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
    + DS4700 (1)
      + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
      + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
      + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
      + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
      + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
      + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
  + MDISK GROUPS
    + MDG_0_DS45 (ID: 0 CAP: 144.0GB FREE: 120.0GB)
      + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
      + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
      + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
      + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
    + aix_imgmdg (ID: 7 CAP: 13.0GB FREE: 3.0GB)
      + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
      + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
  + iogrp0 (0)
    + NODES
      + Node2 (5)
      + Node1 (2)
    + HOSTS
      + W2k8 (0)
      + Senegal (1)
      + VSS_FREE (2)
        + msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped STAT: online)
        + msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped STAT: online)
      + VSS_RESERVED (3)
      + Kanaga (5)
        + A_Kanaga_VD_IM1 (ID: 9 CAP: 10.0GB TYPE: many STAT: online)
    + VDISKS
      + MDG_SE_VDisk3 (ID: 0 CAP: 10.2GB TYPE: many)
        + mdisk2 (ID: 10 CAP: 36.0GB MODE: managed CONT: DS4500)
        + mdisk_3 (ID: 12 CAP: 36.0GB MODE: managed CONT: DS4500)
      + A_Kanaga_VD_IM1 (ID: 9 CAP: 10.0GB TYPE: many)
        + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed CONT: DS4700)
        + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed CONT: DS4700)



      + msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped)
        + mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT: DS4500)
        + mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT: DS4500)
      + msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped)
        + mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT: DS4500)
        + mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT: DS4500)
  + iogrp1 (1)
    + NODES
    + HOSTS
    + VDISKS
  + iogrp2 (2)
    + NODES
    + HOSTS
    + VDISKS
  + iogrp3 (3)
    + NODES
    + HOSTS
    + VDISKS
  + recovery_io_grp (4)
    + NODES
    + HOSTS
    + VDISKS
  + recovery_io_grp (4)
    + NODES
    + HOSTS
      + itsosvc1 (2200642269468)
    + VDISKS

Example A-4 shows the coding for our script.

Example: A-4 svctree.pl

#!/usr/bin/perl
$SSHCLIENT = "ssh"; # (plink or ssh)
$HOST = $ARGV[0];
$USER = ($ARGV[1] ? $ARGV[1] : "admin");
$PRIVATEKEY = ($ARGV[2] ? $ARGV[2] : "/path/to/privatekey");
$DEBUG = 0;

die(sprintf("Please call script with cluster IP address. The syntax is:\n%s ipaddress loginname privatekey\n",$0))
  if (! $HOST);

sub TalkToSVC() {
  my $COMMAND = shift;
  my $NODELIM = shift;
  my $ARGUMENT = shift;
  my @info;

  if ($SSHCLIENT eq "plink" || $SSHCLIENT eq "ssh") {
    $SSH = sprintf('%s -i %s %s@%s ',$SSHCLIENT,$PRIVATEKEY,$USER,$HOST);
  } else {
    die ("ERROR: Unknown SSHCLIENT [$SSHCLIENT]\n");
  }

  if ($NODELIM) {
    $CMD = "$SSH svcinfo $COMMAND $ARGUMENT\n";
  } else {
    $CMD = "$SSH svcinfo $COMMAND -delim : $ARGUMENT\n";
  }
  print "Running $CMD" if ($DEBUG);

  # Run the svcinfo command over SSH and collect each output line.
  open SVC,"$CMD|";
  while (<SVC>) {
    print "Got [$_]\n" if ($DEBUG);
    chomp;
    push(@info,$_);
  }
  close SVC;
  return @info;
}

sub DelimToHash() {
  my $COMMAND = shift;
  my $MULTILINE = shift;
  my $NODELIM = shift;
  my $ARGUMENT = shift;
  my %hash;

  @details = &TalkToSVC($COMMAND,$NODELIM,$ARGUMENT);
  print "$COMMAND: Got [",join('|',@details),"]\n" if ($DEBUG);

  # The first output line holds the column headings; every following line
  # is split on the delimiter and stored as $hash{row,heading}.
  my $linenum = 0;
  foreach (@details) {
    print "$linenum, $_" if ($DEBUG);
    if ($linenum == 0) {
      @heading = split(':',$_);
    } else {
      @line = split(':',$_);
      $counter = 0;
      foreach $id (@heading) {
        printf("$COMMAND: ID [%s], value [%s]\n",$id,$line[$counter]) if ($DEBUG);
        if ($MULTILINE) {
          $hash{$linenum,$id} = $line[$counter++];
        } else {
          $hash{$id} = $line[$counter++];
        }
      }
    }
    $linenum++;
  }
  return %hash;
}

sub TreeLine() {
  my $indent = shift;
  my $line = shift;
  my $last = shift;

  # Indent two spaces per level, then print the entry as a branch.
  # (The loop body was truncated in the source listing; this is the
  # obvious intent, matching the output in Example A-3.)
  for ($tab=1;$tab<=$indent;$tab++) { print "  "; }
  print "+ $line\n";
}
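# ------------------------------------------------------------------
# NOTE: A portion of the original listing (the TreeData helper and the
# initial data-gathering calls) was lost in reproduction. What follows is
# a reconstruction, inferred from how TreeData is called in the rest of
# the script: it walks a multiline hash (passed as a glob), skips rows
# whose DST column does not match SRC, and prints the named fields
# through TreeLine.
# ------------------------------------------------------------------
sub TreeData() {
  my $indent = shift;
  my $format = shift;
  local *data = shift;
  my $fields = shift;
  my $match = shift;
  my $lastnum = "";

  foreach $key (sort keys %data) {
    ($num,$detail) = split($;,$key);
    next if ($num eq $lastnum);
    $lastnum = $num;
    # Only print rows that belong to the parent object.
    next if ("$data{$num,$$match{'DST'}}" ne "$$match{'SRC'}");
    @values = ();
    foreach $field (@{$fields}) {
      push(@values,$data{$num,$field});
    }
    &TreeLine($indent,sprintf($format,@values),0);
  }
}

# Gather the cluster-wide inventory up front (reconstructed; each object
# type is retrieved with DelimToHash in the same way as lsmdiskgrp below).
%clusters = &DelimToHash('lscluster',1);
%controllers = &DelimToHash('lscontroller',1);
%mdisks = &DelimToHash('lsmdisk',1);
%vdisks = &DelimToHash('lsvdisk',1);
%iogrps = &DelimToHash('lsiogrp',1);
%nodes = &DelimToHash('lsnode',1);
%hosts = &DelimToHash('lshost',1);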


%mdiskgrps = &DelimToHash('lsmdiskgrp',1);

# We are now ready to display it.
# CLUSTER
$indent = 0;
foreach $cluster (sort keys %clusters) {
  ($numcluster,$detail) = split($;,$cluster);
  next if ($numcluster == $lastnumcluster);
  $lastnumcluster = $numcluster;
  next if ("$clusters{$numcluster,'location'}" =~ /remote/);
  &TreeLine($indent,sprintf('%s (%s)',$clusters{$numcluster,'name'},$clusters{$numcluster,'cluster_IP_address'}),0);

  # CONTROLLERS
  &TreeLine($indent+1,'CONTROLLERS',0);
  $lastnumcontroller = "";
  foreach $controller (sort keys %controllers) {
    $indentcontroller = $indent+2;
    ($numcontroller,$detail) = split($;,$controller);
    next if ($numcontroller == $lastnumcontroller);
    $lastnumcontroller = $numcontroller;
    &TreeLine($indentcontroller,
              sprintf('%s (%s)',
                      $controllers{$numcontroller,'controller_name'},
                      $controllers{$numcontroller,'id'})
              ,0);

    # MDISKS
    &TreeData($indentcontroller+1,
              '%s (ID: %s CAP: %s MODE: %s)',
              *mdisks,
              ['name','id','capacity','mode'],
              {"SRC"=>$controllers{$numcontroller,'controller_name'},"DST"=>"controller_name"});
  }

  # MDISKGRPS
  &TreeLine($indent+1,'MDISK GROUPS',0);
  $lastnummdiskgrp = "";
  foreach $mdiskgrp (sort keys %mdiskgrps) {
    $indentmdiskgrp = $indent+2;
    ($nummdiskgrp,$detail) = split($;,$mdiskgrp);
    next if ($nummdiskgrp == $lastnummdiskgrp);
    $lastnummdiskgrp = $nummdiskgrp;
    &TreeLine($indentmdiskgrp,
              sprintf('%s (ID: %s CAP: %s FREE: %s)',
                      $mdiskgrps{$nummdiskgrp,'name'},
                      $mdiskgrps{$nummdiskgrp,'id'},
                      $mdiskgrps{$nummdiskgrp,'capacity'},
                      $mdiskgrps{$nummdiskgrp,'free_capacity'})
              ,0);

    # MDISKS
    &TreeData($indentmdiskgrp+1,
              '%s (ID: %s CAP: %s MODE: %s)',
              *mdisks,
              ['name','id','capacity','mode'],
              {"SRC"=>$mdiskgrps{$nummdiskgrp,'id'},"DST"=>"mdisk_grp_id"});
  }

  # IOGROUP
  $lastnumiogrp = "";
  foreach $iogrp (sort keys %iogrps) {
    $indentiogrp = $indent+1;
    ($numiogrp,$detail) = split($;,$iogrp);
    next if ($numiogrp == $lastnumiogrp);
    $lastnumiogrp = $numiogrp;
    &TreeLine($indentiogrp,sprintf('%s (%s)',$iogrps{$numiogrp,'name'},$iogrps{$numiogrp,'id'}),0);
    $indentiogrp++;

    # NODES
    &TreeLine($indentiogrp,'NODES',0);
    &TreeData($indentiogrp+1,
              '%s (%s)',
              *nodes,
              ['name','id'],
              {"SRC"=>$iogrps{$numiogrp,'id'},"DST"=>"IO_group_id"});

    # HOSTS
    &TreeLine($indentiogrp,'HOSTS',0);
    $lastnumhost = "";
    %iogrphosts = &DelimToHash('lsiogrphost',1,0,$iogrps{$numiogrp,'id'});
    foreach $host (sort keys %iogrphosts) {
      my $indenthost = $indentiogrp+1;
      ($numhost,$detail) = split($;,$host);
      next if ($numhost == $lastnumhost);
      $lastnumhost = $numhost;
      &TreeLine($indenthost,
                sprintf('%s (%s)',$iogrphosts{$numhost,'name'},$iogrphosts{$numhost,'id'}),
                0);

      # HOSTVDISKMAP
      %vdiskhostmap = &DelimToHash('lshostvdiskmap',1,0,$hosts{$numhost,'id'});
      $lastnumvdisk = "";
      foreach $vdiskhost (sort keys %vdiskhostmap) {
        ($numvdisk,$detail) = split($;,$vdiskhost);
        next if ($numvdisk == $lastnumvdisk);
        $lastnumvdisk = $numvdisk;
        next if ($vdisks{$numvdisk,'IO_group_id'} != $iogrps{$numiogrp,'id'});
        &TreeData($indenthost+1,
                  '%s (ID: %s CAP: %s TYPE: %s STAT: %s)',
                  *vdisks,
                  ['name','id','capacity','type','status'],
                  {"SRC"=>$vdiskhostmap{$numvdisk,'vdisk_id'},"DST"=>"id"});
      }
    }

    # VDISKS
    &TreeLine($indentiogrp,'VDISKS',0);
    $lastnumvdisk = "";
    foreach $vdisk (sort keys %vdisks) {
      my $indentvdisk = $indentiogrp+1;
      ($numvdisk,$detail) = split($;,$vdisk);
      next if ($numvdisk == $lastnumvdisk);
      $lastnumvdisk = $numvdisk;
      &TreeLine($indentvdisk,
                sprintf('%s (ID: %s CAP: %s TYPE: %s)',
                        $vdisks{$numvdisk,'name'},
                        $vdisks{$numvdisk,'id'},
                        $vdisks{$numvdisk,'capacity'},
                        $vdisks{$numvdisk,'type'}),
                0)
        if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'});

      # VDISKMEMBERS
      if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'}) {
        %vdiskmembers = &DelimToHash('lsvdiskmember',1,1,$vdisks{$numvdisk,'id'});
        foreach $vdiskmember (sort keys %vdiskmembers) {
          &TreeData($indentvdisk+1,
                    '%s (ID: %s CAP: %s MODE: %s CONT: %s)',
                    *mdisks,
                    ['name','id','capacity','mode','controller_name'],
                    {"SRC"=>$vdiskmembers{$vdiskmember},"DST"=>"id"});
        }
      }
    }
  }
}



Scripting alternatives

For an alternative to scripting, visit the Tivoli Storage Manager for Advanced Copy Services product page:

http://www.ibm.com/software/tivoli/products/storage-mgr-advanced-copy-services/

Additionally, IBM provides a suite of scripting tools that is based on Perl. You can download these scripting tools from this Web site:

http://www.alphaworks.ibm.com/tech/svctools



Appendix B. Node replacement

In this appendix, we discuss the process to replace nodes. For the latest information about replacing a node, refer to the development page at one of the following Web sites:

► IBM employees:
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437
► IBM Business Partners (login required):
http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD104437
► Clients:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437



Replacing nodes nondisruptively

You can replace the IBM System Storage SAN Volume Controller (SVC) 2145-4F2, SAN Volume Controller 2145-8F2, and SAN Volume Controller 2145-8F4 nodes with SAN Volume Controller 2145-8G4 nodes in an existing, active cluster without an outage on the SVC or on your host applications. This procedure does not require that you change your storage area network (SAN) environment, because the replacement (new) node uses the same worldwide node name (WWNN) as the node that you replace. In fact, you can use this procedure to replace any model node with another model node.

This task assumes that the following conditions exist:

► The cluster software is at a level that supports both the old and the new node models; the 2145-8G4 model node, for example, requires that the cluster is running V4.2.0 or later.
► The new nodes that are configured are not powered on and not connected.
► All nodes that are configured in the cluster are present.
► All errors in the cluster error log are fixed.
► There are no virtual disks (VDisks), managed disks (MDisks), or controllers with a status of degraded or offline.
► The SVC configuration has been backed up through the CLI or GUI, and the file has been saved to the Master Console.
► You have downloaded, installed, and run the latest "SVC Software Upgrade Test Utility" from this Web site to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585
► You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.

Recommendation: If you are planning to redeploy the old nodes in your environment to create a test cluster or to add to another cluster, you must ensure that the WWNN of each of these old nodes is set to a unique number on your SAN. We recommend that you document the factory WWNN of the new nodes that you use to replace the old nodes and, in effect, swap the WWNNs so that each node still has a unique number. Failure to do so can lead to a duplicate WWNN and worldwide port name (WWPN), causing unpredictable SAN problems.

Perform the following steps to replace the nodes:

1. Determine the node_name or node_id of the node that you want to replace, the iogroup_id or iogroup_name to which it belongs, and which of the nodes is the configuration node. If the configuration node is to be replaced, we recommend that you upgrade it last. If you can already identify which physical node equates to a node_name or node_id, the iogroup_id or iogroup_name to which it belongs, and which node is the configuration node, you can skip this step and proceed to step 2:

a. Issue the following command from the command-line interface (CLI):

svcinfo lsnode -delim :

b. Under the config node column, look for the status of yes and record the node_name or node_id of this node for later use.

c. Under the id and name columns, record the node_name or node_id of all of the other nodes in the cluster.



d. Under the IO_group_id and IO_group_name columns, record the iogroup_id or iogroup_name for all of the nodes in the cluster.

e. Issue the following command from the CLI for each node_name or node_id to determine the front_panel_id for each node and record the ID. This front_panel_id is physically located on the front of every node (it is not the serial number), and you can use this front_panel_id to determine which physical node equates to the node_name or node_id that you plan to replace:

svcinfo lsnodevpd node_name or node_id

2. Perform the following steps to record the WWNN of the node that you want to replace:

a. Issue the following command from the CLI, where node_name or node_id is the name or ID of the node for which you want to determine the WWNN:

svcinfo lsnode -delim : node_name or node_id

b. Record the WWNN of the node that you want to replace.

3. Verify that all VDisks, MDisks, and disk controllers are online and that none are in a state of "degraded". If any VDisks, MDisks, or controllers are in this state, resolve the issue before going forward, or loss of access to data might occur when you perform step 4. This step is especially important if this node is the second node in the I/O Group to be replaced.

Issue the following commands from the CLI, where object_id or object_name is the controller ID or controller name that you want to view. Verify that each disk controller shows its status as "degraded no":

svcinfo lsvdisk -filtervalue "status=degraded"
svcinfo lsmdisk -filtervalue "status=degraded"
svcinfo lscontroller object_id or object_name

4. Issue the following CLI command to shut down the node that will be replaced, where node_name or node_id is the name or ID of the node that you want to delete:

svctask stopcluster -node node_name or node_id

Important:
► Do not power off the node through the front panel instead of using this command.
► Be careful that you do not issue the stopcluster command without the -node node_name or node_id parameter, because you will shut down the entire cluster if you do.

Issue the following CLI command to ensure that the node is shut down and that the status is "offline", where node_name or node_id is the name or ID of the original node. The node status must be "offline":

svcinfo lsnode node_name or node_id

5. Issue the following CLI command to delete this node from the cluster and the I/O Group, where node_name or node_id is the name or ID of the node that you want to delete:

svctask rmnode node_name or node_id

6. Issue the following CLI command to ensure that the node is no longer a member of the cluster, where node_name or node_id is the name or ID of the original node. The node must no longer be listed in the command output:

svcinfo lsnode node_name or node_id



7. Perform the following steps to change the WWNN of the node that you just deleted to FFFFF:

Important recommendation:
► Record and mark the Fibre Channel (FC) cables with the SVC node port number (1-4) before removing them from the back of the node that is being replaced. You must reconnect the cables on the new node exactly as they were connected on the old node. Looking at the back of the node, the FC ports on the SVC nodes are numbered 1-4 from left to right and must be reconnected in the same order, or the port IDs will change, which can affect the hosts' access to VDisks or cause problems with adding the new node back into the cluster. The SVC Hardware Installation Guide for your model shows the port numbering of the various node models.
► Failure to disconnect the FC cables now will likely cause SAN devices and SAN management software to discover the new WWPNs that are generated when the WWNN is changed to FFFFF in the following steps. This discovery might cause ghost records to be seen after the node is powered down. These ghost records do not necessarily cause a problem, but you might have to reboot a SAN device to clear out the record.
► In addition, the ghost records might cause problems with AIX dynamic tracking functioning correctly, assuming that it is enabled, so we highly recommend disconnecting the node's FC cables as instructed in the following step before continuing to any other steps.

a. Disconnect the four FC cables from this node before powering the node on in the next step.
b. Power on this node using the power button on the front panel and wait for it to boot up before going to the next step.
c. From the front panel of the node, press the down button until the Node: panel is displayed, and then use the right and left navigation buttons to display the Status: panel.
d. Press and hold the down button, press and release the select button, and then release the down button. The WWNN of the node is displayed.
e. Press and hold the down button, press and release the select button, and then release the down button to enter the WWNN edit mode. The first character of the WWNN is highlighted.
f. Press the up or down button to increment or decrement the character that is displayed.

Note: The characters wrap F to 0 or 0 to F.

g. Press the left navigation button to move to the next field or the right navigation button to return to the previous field and repeat step f for each field. At the end of this step, the characters that are displayed must be FFFFF.
h. Press the select button to retain the characters that you have updated and return to the WWNN window.
i. Press the select button again to apply the characters as the new WWNN for the node.



Note: You must press the select button twice as steps h and i instruct you to do. After step h, it might appear that the WWNN has been changed, but step i actually applies the change.

8. Power off this node using the power button on the front panel and remove the node from the rack, if desired.

9. Install the replacement node and its uninterruptible power supply unit in the rack and connect the node to the uninterruptible power supply unit cables according to the SVC Hardware Installation Guide, which is available at this Web site:

http://www.ibm.com/storage/support/2145

Note: Do not connect the FC cables to the new node during this step.

10. Power on the replacement node from the front panel with the FC cables disconnected. After the node has booted, ensure that the node displays Cluster: on the front panel and nothing else. If a word other than Cluster: is displayed, contact IBM Support for assistance before continuing.

11. Record the WWNN of this new node, because you will need the WWNN if you plan to redeploy the old nodes that are being replaced. Perform the following steps to change the WWNN of the replacement node to match the WWNN that you recorded in step 2 on page 801:

a. From the front panel of the node, press the down button until the Node: panel is displayed, and then use the right and left navigation buttons to display the Status: panel.
b. Press and hold the down button, press and release the select button, and then release the down button. The WWNN of the node is displayed. Record this number for use in the redeployment of the old nodes.
c. Press and hold the down button, press and release the select button, and then release the down button to enter the WWNN edit mode. The first character of the WWNN is highlighted.
d. Press the up or down button to increment or decrement the character that is displayed.
e. Press the left navigation button to move to the next field or the right navigation button to return to the previous field and repeat step d for each field. At the end of this step, the characters that are displayed must be the same as the WWNN that you recorded in step 2 on page 801.
f. Press the select button to retain the characters that you have updated, and return to the WWNN panel.
g. Press the select button to apply the characters as the new WWNN for the node.

Press select twice: You must press the select button twice as steps f and g instruct you to do. After step f, it might appear that the WWNN has been changed, but step g actually applies the change.

h. The node displays Cluster: on the front panel and is now ready to begin the process of adding the node to the cluster. If another word is displayed, contact IBM Support for assistance before continuing.

12. Connect the FC cables to the same port numbers on the new node that they were connected to originally on the old node. See step 7 on page 802.



Important: Do not connect the new nodes to other ports at the switch or at the director, because using other ports will cause port IDs to change, which can affect the hosts' access to VDisks or cause problems with adding the new node back into the cluster.

The new nodes have 4 Gbps host bus adapters (HBAs) in them. The temptation is to move them to 4 Gbps switch or director ports at the same time, but we do not recommend moving them while performing the hardware node upgrade. Moving the node cables to faster ports on the switch or director is a separate process that needs to be planned independently of upgrading the nodes in the cluster.

13. Issue the following CLI command to verify that the last five characters of the WWNN are correct:

svcinfo lsnodecandidate

Note: If the WWNN does not match the original node's WWNN exactly as recorded in step 2 on page 801, you must repeat step 11 on page 803.

14. Add the node to the cluster and ensure that it is added back to the same I/O Group as the original node. Use the following command, where wwnn_arg and iogroup_name or iogroup_id are the items that you recorded in steps 1 on page 800 and 2 on page 801:

svctask addnode -wwnodename wwnn_arg -iogrp iogroup_name or iogroup_id

15. Verify that all of the VDisks for this I/O Group are back online and are no longer degraded. If you perform the node replacement process disruptively, so that no I/O occurs to the I/O Group, you still must wait a certain period of time (we recommend 30 minutes in this case, too) to make sure that the new node is back online and available to take over before you replace the next node in the I/O Group. See step 3 on page 801.

Both nodes in the I/O Group cache data; however, the cache sizes are asymmetric if the remaining partner node in the I/O Group is a SAN Volume Controller 2145-4F2 node. The replacement node is limited by the cache size of the partner node in the I/O Group in this case. Therefore, the replacement node does not utilize the full 8 GB cache size until the other 2145-4F2 node in the I/O Group is replaced.

You do not have to reconfigure the host multipathing device drivers, because the replacement node uses the same WWNN and WWPNs as the previous node. The multipathing device drivers detect the recovery of paths that are available to the replacement node.

The host multipathing device drivers take approximately 30 minutes to recover the paths. Therefore, do not upgrade the other node in the I/O Group for at least 30 minutes after successfully upgrading the first node in the I/O Group. If you have other nodes in other I/O Groups to upgrade, you can perform other upgrades while you wait the 30 minutes for the host multipathing device drivers to recover the paths.

16. Repeat steps 2 on page 801 to 15 for each node that you want to replace.

Expanding an existing SVC cluster

In this section, we describe how to expand an existing SVC cluster with new nodes. You can expand an SVC cluster only in node pairs, which means that you always have to add at least two nodes to your existing cluster. The maximum number of nodes is eight.



This task assumes the following situation:

► Your cluster contains six or fewer nodes.
► All nodes that are configured in the cluster are present.
► All errors in the cluster error log are fixed.
► All managed disks (MDisks) are online.
► You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.
► There are no VDisks, MDisks, or controllers with a status of degraded or offline.
► The SVC configuration has been backed up through the CLI or GUI, and the file has been saved to the Master Console.
► You have downloaded, installed, and run the latest "SVC Software Upgrade Test Utility" from this Web site to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure:
http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

Perform the following steps to add nodes to an existing cluster:

1. Depending on the model of the node that is being added, it might be necessary to upgrade the existing SVC cluster software to a level that supports the hardware model:
– The model 2145-8G4 requires Version 4.2.x or later.
– The model 2145-8F4 requires Version 4.1.x or later.
– The model 2145-8F2 requires Version 3.1.x or later.
– The 2145-4F2 is the original model and thus is supported by Version 1 through Version 4. We highly recommend that you upgrade the existing cluster to the latest level of SVC software that is available; however, the minimum level of SVC cluster software that is recommended for the 4F2 is Version 3.1.0.5.

2. Install additional nodes and uninterruptible power supply units in a rack. Do not connect them to the SAN at this time.

3. Ensure that each node that is being added has a unique WWNN. Duplicate WWNNs can cause serious problems on a SAN and must be avoided. This example shows how this problem might occur:

The nodes came from cluster ABC, where they were replaced by brand new nodes. The procedure to replace these nodes in cluster ABC required changing each brand new node's WWNN to the old node's WWNN. Adding these nodes now to the same SAN causes duplicate WWNNs to appear, with unpredictable results. You will need to power up each node separately while it is disconnected from the SAN and use the front panel to view the current WWNN. If necessary, change the WWNN to a unique name on the SAN. If required, contact IBM Support for assistance before continuing.

4. Power up additional uninterruptible power supply units and nodes. Do not connect them to the SAN at this time.

5. Ensure that each node displays Cluster: on the front panel and nothing else. If another word is displayed, contact IBM Support for assistance before continuing.

6. Connect additional nodes to the LAN.

7. Connect additional nodes to the SAN fabrics.



Important: Do not add the additional nodes to the existing cluster before the following zoning and masking steps are completed, or the SVC will enter a degraded mode and log errors with unpredictable results.

8. Zone additional node ports in the existing SVC-only zones. You must have an SVC zone in each fabric with nothing but the ports from the SVC nodes in it. These zones are necessary for the initial formation of the cluster, because nodes need to see each other to form a cluster. This zone might not exist, and the only way that the SVC nodes see each other is through a storage zone that includes all of the node ports. However, we highly recommend that you have a separate zone in each fabric with only the SVC node ports included to avoid the risk of the nodes losing communication with each other if the storage zones are changed or deleted.

9. Zone new node ports in the existing SVC/Storage zones. You must have an SVC/Storage zone in each fabric for each disk subsystem that is used with the SVC. Each zone must have all of the SVC ports in that fabric, along with all of the disk subsystem ports in that fabric that will be used by the SVC to access the physical disks.

Exceptions: There are exceptions when EMC DMX/Symmetrix or HDS storage is involved. For further information, review the SVC Software Installation and Configuration Guide, which is available at this Web site:

http://www.ibm.com/storage/support/2145

10. On each disk subsystem that is seen by the SVC, use its management interface to map the LUNs that are currently used by the SVC to all of the new WWPNs of the new nodes that will be added to the SVC cluster. This step is critical, because the new nodes must see the same LUNs that the existing SVC cluster nodes see before you add the new nodes to the cluster; otherwise, problems might arise. Also, note that all of the SVC ports that are zoned with the back-end storage must see all of the LUNs that are presented to the SVC through all of those same storage ports, or the SVC will mark the devices as degraded.

11. After all of these activities have been completed, you can add the additional nodes to the cluster by using the SVC GUI or CLI (an example addnode command follows this list). The cluster does not mark any devices as degraded, because the new nodes will see the same cluster configuration, the same storage zoning, and the same LUNs as the existing nodes.

12. Check the status of the controllers and MDisks to ensure that nothing is marked degraded. If a controller or MDisk is marked degraded, it is not configured properly, and you must fix the configuration immediately before performing any other action on the cluster. If you cannot determine fairly quickly what is wrong, remove the newly added nodes from the cluster until the problem is resolved. You can contact IBM Support for assistance.
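For the CLI path in step 11, the addnode command takes the WWNN of the new node and the target I/O Group, using the same syntax that is shown in step 14 of "Replacing nodes nondisruptively". This is a minimal sketch with placeholder arguments:

svctask addnode -wwnodename wwnn_of_new_node -iogrp io_grp0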

Moving VDisks to a new I/O Group

After the new nodes are added to a cluster, you might want to move VDisk ownership from one I/O Group to another I/O Group to balance the workload. This action is currently a disruptive process: the host applications must be quiesced during the process. The actual moving of the VDisk in SVC is simple and quick; however, certain host operating systems might need to have their file systems and Volume Groups varied off or removed, along with their disks and multiple paths to the VDisks deleted and rediscovered. In effect, it is the equivalent of discovering the VDisks again, as when they were initially brought under SVC control. This task is not a difficult process, but it can take time to complete, so you must plan accordingly.

This task assumes the following situation:

► All of the steps that are described in "Expanding an existing SVC cluster" on page 804 are completed.
► All nodes that are configured in the cluster are present.
► All errors in the cluster error log are fixed.
► All MDisks are online.
► There are no VDisks, MDisks, or controllers with a status of degraded or offline.
► The SVC configuration has been backed up through the CLI or GUI, and the file has been saved to the Master Console.

Perform the following steps to move the VDisks:

1. Stop the host I/O.
2. Vary off your file system or shut down your host, depending on your operating system.
3. Move all of the VDisks from the I/O Group of the nodes that you are replacing to the new I/O Group (see the example command after this list).
4. If you had your host shut down, start it again.
5. From each host, issue a rescan of the multipathing software to discover the new paths to the VDisks.
6. See the documentation that is provided with your multipathing device driver for information about how to query paths to ensure that all paths have been recovered.
7. Vary on your file system.
8. Restart the host I/O.
9. Repeat steps 1 to 8 for each VDisk in the cluster that you want to move.
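The CLI command for the move in step 3 is chvdisk with the -iogrp parameter. This is a minimal sketch with placeholder names (a VDisk Host1_F_Drive moved to I/O Group io_grp1):

svctask chvdisk -iogrp io_grp1 Host1_F_Drive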

Replacing nodes disruptively (rezoning the SAN)

You can replace SAN Volume Controller 2145-4F2, SAN Volume Controller 2145-8F2, or SAN Volume Controller 2145-8F4 nodes with SAN Volume Controller 2145-8G4 nodes. This task disrupts your environment, because you must rezone your SAN, and the host multipathing device drivers must discover new paths. Access to VDisks is lost during this task. In fact, you can use this procedure to replace any model node with a different model node.

This task assumes that the following conditions exist:

► The cluster software is at V4.2.0 or later.
► All nodes that are configured in the cluster are present.
► The new nodes that are configured are not powered on and not connected.
► All errors in the cluster error log are fixed.
► All MDisks are online.
► You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.
► There are no VDisks, MDisks, or controllers with a status of degraded or offline.



► The SVC configuration has been backed up through <strong>the</strong> CLI or GUI, and <strong>the</strong> file has been<br />

saved to <strong>the</strong> Master Console.<br />

► Download, install, and run <strong>the</strong> latest “SVC Software Upgrade Test Utility” from this Web<br />

site to verify that <strong>the</strong>re are no known issues with <strong>the</strong> current cluster environment before<br />

beginning <strong>the</strong> node upgrade procedure.<br />

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585<br />

Perform <strong>the</strong> following steps to replace <strong>the</strong> nodes:<br />

1. Quiesce all I/O from <strong>the</strong> hosts that access <strong>the</strong> I/O Group of <strong>the</strong> node that you are<br />

replacing.<br />

2. Delete <strong>the</strong> node that you want to replace from <strong>the</strong> cluster and I/O Group. The node is not<br />

deleted until <strong>the</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> cache is destaged to disk. During this time, <strong>the</strong><br />

partner node in <strong>the</strong> I/O Group transitions to write-through mode.<br />

3. You can use <strong>the</strong> CLI or <strong>the</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Console to verify that <strong>the</strong> deletion<br />

process has completed.<br />

4. Ensure that <strong>the</strong> node is no longer a member of <strong>the</strong> cluster.<br />

5. Power off <strong>the</strong> node, and remove it from <strong>the</strong> rack.<br />

6. Install <strong>the</strong> replacement (new) node in <strong>the</strong> rack, and connect <strong>the</strong> uninterruptible power<br />

supply unit cables and <strong>the</strong> FC cables.<br />

7. Power on <strong>the</strong> node.<br />

8. Rezone your switch zones to remove <strong>the</strong> ports of <strong>the</strong> node that you are replacing from <strong>the</strong><br />

host and storage zones. Replace <strong>the</strong>se ports with <strong>the</strong> ports of <strong>the</strong> replacement node.<br />

9. Add <strong>the</strong> replacement node to <strong>the</strong> cluster and <strong>the</strong> I/O Group.<br />

Important: Both nodes in <strong>the</strong> I/O Group cache data; however, <strong>the</strong> cache sizes are<br />

asymmetric. The replacement node is limited by <strong>the</strong> cache size of <strong>the</strong> partner node in<br />

<strong>the</strong> I/O Group. Therefore, <strong>the</strong> replacement node does not utilize <strong>the</strong> full size of its<br />

cache.<br />

10.From each host, issue a rescan of <strong>the</strong> multipathing software to discover <strong>the</strong> new paths to<br />

VDisks. If your system is inactive, you can perform this step after you have replaced all of<br />

<strong>the</strong> nodes in <strong>the</strong> cluster. The host multipathing device drivers take approximately 30<br />

minutes to recover <strong>the</strong> paths.<br />

11.Refer to <strong>the</strong> documentation that is provided with your multipathing device driver for<br />

information about how to query paths to ensure that all of <strong>the</strong> paths have been recovered<br />

before proceeding to <strong>the</strong> next step.<br />

12.Repeat steps 1 to 10 for <strong>the</strong> partner node in <strong>the</strong> I/O Group.<br />

Symmetric cache sizes: After you have upgraded both nodes in <strong>the</strong> I/O Group, <strong>the</strong><br />

cache sizes are symmetric, and <strong>the</strong> full 8 GB of cache is utilized.<br />

13.Repeat steps 1 to 11 for each node in <strong>the</strong> cluster that you want to replace.<br />

14.Resume host I/O.<br />
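For reference, the following minimal CLI sketch shows what steps 2, 4, and 9 might look like. The node name (node7), panel name (104603), and I/O Group name (io_grp0) are hypothetical examples; substitute the values for your own cluster:

svctask rmnode node7
svcinfo lsnode
svctask addnode -panelname 104603 -name node7 -iogrp io_grp0

The svcinfo lsnode output lets you confirm that the deleted node is no longer a member of the cluster before you power it off, and, later, that the replacement node has joined the I/O Group after you add it.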



Appendix C. Performance data and statistics gathering

It is not the intent of this book to describe performance data and statistics gathering in depth; instead, this appendix shows a method to process the statistics that have been gathered. For a more comprehensive look at the performance of the IBM System Storage SAN Volume Controller (SVC), we recommend SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this Web site:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open<br />

Although that publication was written at the SVC V4.3.x level, many of its underlying principles remain applicable to SVC 5.1.



SVC performance overview<br />

While storage virtualization with the SVC improves flexibility and simplifies the management of a storage infrastructure, it can also provide a substantial performance advantage for a variety of workloads. The SVC's caching capability and its ability to stripe VDisks across multiple disk arrays can yield a significant performance improvement, particularly when the SVC is deployed in front of midrange disk subsystems, because these capabilities are often provided only with high-end enterprise disk subsystems.

To ensure that your storage infrastructure delivers the desired performance and capacity, we recommend that you conduct a performance and capacity analysis from time to time and compare the results against the business requirements of your storage environment.

Performance considerations<br />

SVC<br />

When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby the limiting factor, of that system. At the same time, keep in mind for which workload you identify a limiting factor, because the component that limits one workload is not necessarily the component that limits other workloads.

When you design a storage infrastructure that uses the SVC, or operate an existing SVC storage infrastructure, you must therefore consider the performance and capacity of the infrastructure as a whole. Monitoring the SVC is a key part of ensuring that you obtain the desired performance.

The SVC cluster scales up to eight nodes, and performance increases almost linearly as you add nodes to the cluster, until other components in the storage infrastructure become the limiting factor. While virtualization with the SVC provides a great deal of flexibility, it does not diminish the need for a storage area network (SAN) and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many managed disks (MDisks) as possible, which creates a greater level of concurrent I/O to the back end without overloading a single disk or array.
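As an illustration, a VDisk that is created with the striped virtualization type spreads its extents, and therefore its I/O, across all of the MDisks in its Managed Disk Group. The following command is a sketch only; the MDisk Group, I/O Group, and VDisk names are hypothetical:

svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp io_grp0 -vtype striped -size 100 -unit gb -name VDISK_APP1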

In <strong>the</strong> following sections, we discuss <strong>the</strong> performance of <strong>the</strong> SVC and assume that <strong>the</strong>re are<br />

no bottlenecks in <strong>the</strong> <strong>SAN</strong> or on <strong>the</strong> disk subsystem.<br />

Performance monitoring<br />

In this section, we discuss several performance monitoring techniques.<br />

Collecting performance statistics<br />

By default, performance statistics are not collected. You can start or stop performance statistics collection by using the svctask startstats and svctask stopstats commands, as described in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339. You can also start or stop performance statistics collection by using the SVC GUI, as described in Chapter 8, “SAN Volume Controller operations using the GUI” on page 469.



Statistics ga<strong>the</strong>ring is enabled or disabled on a cluster basis. When ga<strong>the</strong>ring is enabled, all<br />

of <strong>the</strong> nodes in <strong>the</strong> cluster ga<strong>the</strong>r statistics.<br />

The SVC supports statistics sampling periods of 1 to 60 minutes, in steps of one minute.
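For example, to gather statistics at 15-minute samples, and to stop gathering again later, you use commands of the following form (the interval value shown here is purely illustrative):

svctask startstats -interval 15
svctask stopstats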

Previous versions of the SVC provided per-cluster statistics. Those statistics were later superseded by per-node statistics, which provide a greater range of information. From SVC 5.1.0 onward, only per-node statistics are generated; per-cluster statistics are no longer produced, so clients must use the per-node statistics instead.

Statistics file naming<br />

The files that are generated are written to <strong>the</strong> /dumps/iostats/ directory.<br />

The file name is of the following format:

► Nm_stats_<node_frontpanel_id>_<date>_<time> for MDisk statistics

► Nv_stats_<node_frontpanel_id>_<date>_<time> for VDisk statistics

► Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics

The node_frontpanel_id is that of the node on which the statistics were collected. The date is in the form <yymmdd> and the time is in the form <hhmmss>.

An example of an MDisk statistics file name is:

Nm_stats_1_020808_105224

Example 9-70 shows typical MDisk, VDisk, and node statistics file names.

Example 9-70 File names of per-node statistics

<strong>IBM</strong>_2145:ITSO-CLS2:admin>svcinfo lsiostatsdumps<br />

id iostat_filename<br />

0 Nm_stats_110775_090904_064337<br />

1 Nv_stats_110775_090904_064337<br />

2 Nn_stats_110775_090904_064337<br />

3 Nm_stats_110775_090904_064437<br />

4 Nv_stats_110775_090904_064437<br />

5 Nn_stats_110775_090904_064437<br />

6 Nm_stats_110775_090904_064537<br />

7 Nv_stats_110775_090904_064537<br />

8 Nn_stats_110775_090904_064537<br />

Tip: You can use pscp.exe, which is installed with PuTTY, from a Windows command prompt to copy these files to a local drive, where you can open them with WordPad. For example:

C:\Program Files\PuTTY>pscp -load ITSO-CLS1 admin@10.64.210.242:/dumps/iostats/* c:\temp\iostats

Use the -load parameter to specify the session that is defined in PuTTY.
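If you collect these files regularly, you can wrap the same pscp command in a small Windows batch file. The following sketch assumes a saved PuTTY session named ITSO-CLS1 that authenticates with an SSH key; the paths and session name are examples only:

@echo off
rem Copy the current SVC I/O statistics files to a local folder
if not exist c:\temp\iostats mkdir c:\temp\iostats
"C:\Program Files\PuTTY\pscp.exe" -load ITSO-CLS1 admin@10.64.210.242:/dumps/iostats/* c:\temp\iostats

Because the cluster keeps only a limited number of statistics files per node, scheduling such a script preserves copies before older files are overwritten.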

Because the saved performance statistics data files are in .xml format, you can format and merge the data to get more detail about the performance of your SVC environment.



An example of an unsupported tool that is provided “as is” is svcmon, an SVC performance monitor. You can obtain the SVC Performance Monitor svcmon User’s Guide from this Web site:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177<br />

You can also process your statistics data with a spreadsheet application to get a report like the one that is shown in Figure C-1.

Figure C-1 Spreadsheet example

Performance data collection and Total<strong>Storage</strong> Productivity Center for Disk<br />

Although the performance statistics data files are readable as standard .xml files, TotalStorage Productivity Center for Disk is the official, supported IBM tool for collecting and analyzing statistics data and providing performance reports for storage subsystems.

Total<strong>Storage</strong> Productivity Center for Disk comes preinstalled on your <strong>System</strong> <strong>Storage</strong><br />

Productivity Center Console and can be made available by activating <strong>the</strong> specific licensing for<br />

Total<strong>Storage</strong> Productivity Center for Disk.<br />

By activating this license, you upgrade your running Total<strong>Storage</strong> Productivity Center-Basic<br />

Edition to a Total<strong>Storage</strong> Productivity Center for Disk edition.<br />

You can obtain more information about using Total<strong>Storage</strong> Productivity Center to monitor your<br />

storage subsystem in <strong>SAN</strong> <strong>Storage</strong> Performance Management Using Total<strong>Storage</strong><br />

Productivity Center, SG24-7364, at this Web site:<br />

http://www.redbooks.ibm.com/abstracts/sg247364.html?Open<br />



IBM TotalStorage Productivity Center Reporter for Disk (a utility for anyone running IBM TotalStorage Productivity Center) provides more information about creating a performance report. This utility is available at this Web site:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS2618

Note that IBM is withdrawing TotalStorage Productivity Center Reporter for Disk for Tivoli Storage Productivity Center Version 4.1. The replacement function for this utility is packaged with Tivoli Storage Productivity Center Version 4.1 in Business Intelligence and Reporting Tools (BIRT).





Related publications<br />

The publications listed in this section are considered particularly suitable for a more detailed<br />

discussion of <strong>the</strong> topics covered in this book.<br />

<strong>IBM</strong> Redbooks publications<br />

For information about ordering <strong>the</strong>se publications, see “How to get <strong>IBM</strong> Redbooks<br />

publications” on page 817. Note that several of <strong>the</strong> documents referenced here might be<br />

available in softcopy only.<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>, SG24-6423-05<br />

► Get More Out of Your <strong>SAN</strong> with <strong>IBM</strong> Tivoli <strong>Storage</strong> Manager, SG24-6687<br />

► <strong>IBM</strong> Tivoli <strong>Storage</strong> Area Network Manager: A Practical Introduction, SG24-6848<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong>: <strong>Implementing</strong> an <strong>IBM</strong> <strong>SAN</strong>, SG24-6116<br />

► Introduction to <strong>Storage</strong> Area Networks, SG24-5470<br />

► <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> V4.3.0 Advanced Copy Services, SG24-7574<br />

► <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: Best Practices and Performance Guidelines, SG24-7521<br />

► Using <strong>the</strong> SVC for Business Continuity, SG24-7371<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Business Continuity: Part 1 Planning Guide, SG24-6547<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Business Continuity: Part 2 Solutions Guide, SG24-6548<br />

O<strong>the</strong>r publications<br />

These publications are also relevant as fur<strong>the</strong>r information sources:<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Open Software Family <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: Planning Guide,<br />

GA22-1052<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Master Console: Installation and User’s Guide, GC30-4090<br />

► Subsystem Device Driver User’s Guide for <strong>the</strong> <strong>IBM</strong> Total<strong>Storage</strong> Enterprise <strong>Storage</strong><br />

Server and <strong>the</strong> <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>, SC26-7540<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Open Software Family <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: Installation Guide,<br />

SC26-7541<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Open Software Family <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: Service Guide,<br />

SC26-7542<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Open Software Family <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: Configuration Guide,<br />

SC26-7543<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Open Software Family <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: Command-Line<br />

Interface User’s Guide, SC26-7544<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Open Software Family <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: CIM Agent<br />

Developers Reference, SC26-7545<br />

► <strong>IBM</strong> Total<strong>Storage</strong> Multipath Subsystem Device Driver User’s Guide, SC30-4096<br />




► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> Open Software Family <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong>: Host Attachment<br />

Guide, SC26-7563<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Model 2145-CF8 Hardware Installation<br />

Guide, GC52-1356<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Model 2145-8A4 Hardware Installation<br />

Guide, GC27-2219<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Model 2145-8G4 Hardware Installation<br />

Guide, GC27-2220<br />

► <strong>IBM</strong> <strong>System</strong> <strong>Storage</strong> <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> Models 2145-8F2 and 2145-8F4 Hardware<br />

Installation Guide, GC27-2221<br />

► IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SC26-7905-05

► Command-Line Interface User’s Guide, SC26-7903-05

Online resources

These Web sites are also relevant as further information sources:

► <strong>IBM</strong> Total<strong>Storage</strong> home page:<br />

http://www.storage.ibm.com<br />

► <strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> supported platform:<br />

http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html<br />

► Download site for Windows Secure Shell (SSH) freeware:<br />

http://www.chiark.greenend.org.uk/~sgtatham/putty<br />

► <strong>IBM</strong> site to download SSH for AIX:<br />

http://oss.software.ibm.com/developerworks/projects/openssh<br />

► Open source site for SSH for Windows and Mac:<br />

http://www.openssh.com/windows.html<br />

► Cygwin Linux-like environment for Windows:<br />

http://www.cygwin.com<br />

► <strong>IBM</strong> Tivoli <strong>Storage</strong> Area Network Manager site:<br />

http://www-306.ibm.com/software/sysmgmt/products/support/<strong>IBM</strong>Tivoli<strong>Storage</strong>AreaNe<br />

tworkManager.html<br />

► Microsoft Knowledge Base Article 131658:<br />

http://support.microsoft.com/support/kb/articles/Q131/6/58.asp<br />

► Microsoft Knowledge Base Article 149927:<br />

http://support.microsoft.com/support/kb/articles/Q149/9/27.asp<br />

► Sysinternals home page:<br />

http://www.sysinternals.com<br />

► Subsystem Device Driver download site:<br />

http://www-1.ibm.com/servers/storage/support/software/sdd/index.html<br />



► <strong>IBM</strong> Total<strong>Storage</strong> Virtualization home page:<br />

http://www-1.ibm.com/servers/storage/software/virtualization/index.html<br />

► SVC support page:<br />

http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&<br />

brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continu<br />

e.x=1<br />

► SVC online documentation:<br />

http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp<br />

► IBM Redbooks publications about SVC:

http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC<br />

How to get IBM Redbooks publications

You can search for, view, or download IBM Redbooks publications, Redpapers, Webdocs, draft publications and additional materials, as well as order hardcopy IBM Redbooks publications, at this Web site:

ibm.com/redbooks

Help from IBM

IBM Support and downloads

ibm.com/support

IBM Global Services

ibm.com/services





Index<br />

Numerics<br />

64-bit kernel 57<br />

A<br />

abends 465<br />

abends dump 465<br />

access pattern 517<br />

active quorum disk 36<br />

active SVC cluster 563<br />

add a new volume 167, 172<br />

add a node 389<br />

add additional ports 501<br />

add an HBA 354<br />

Add SSH Public Key 129<br />

administration tasks 493, 559<br />

Advanced Copy Services 93<br />

AIX host system 181<br />

AIX specific information 162<br />

AIX toolbox 181<br />

AIX-based hosts 162<br />

alias 27<br />

alias string 158<br />

aliases 27<br />

analysis 100, 655, 810<br />

application server guidelines 92<br />

application testing 257<br />

assign VDisks 369<br />

assigned VDisk 167, 172<br />

asynchronous 309<br />

asynchronous notifications 280–281<br />

Asynchronous Peer-to-Peer Remote Copy 309<br />

asynchronous remote 310<br />

asynchronous remote copy 32, 283, 309, 311<br />

asynchronous replication 331<br />

asynchronously 309<br />

attributes 527<br />

audit log 40<br />

Au<strong>the</strong>ntication 160<br />

au<strong>the</strong>ntication 41, 58, 132<br />

au<strong>the</strong>ntication service 44<br />

Autoexpand 25<br />

automate tasks 786<br />

automatic Linux system 225<br />

automatic update process 226<br />

automatically discover 342<br />

automatically formatted 52<br />

automatically restarted 644<br />

automation 378<br />

auxiliary 320, 328, 425, 447<br />

auxiliary VDisk 310, 321, 328<br />

available managed disks 343<br />

B<br />

back-end application 60<br />

background copy 301, 309, 321, 328<br />

background copy bandwidth 333<br />

background copy progress 420, 443<br />

background copy rate 276–277<br />

backup 257<br />

of data with minimal impact on production 262<br />

backup speed 257<br />

backup time 257<br />

bandwidth 66, 95, 319, 585, 610<br />

bandwidth impact 333<br />

basic setup requirements 130<br />

bat script 787<br />

bind 252<br />

bitmaps 261<br />

boot 99<br />

boss node 35<br />

bottleneck 49<br />

bottlenecks 100, 102, 810<br />

budget 26<br />

budget allowance 26<br />

business requirements 100, 810<br />

C<br />

cable connections 73<br />

cable length 48<br />

cache 37, 269, 310<br />

caching 101<br />

caching capability 100, 810<br />

candidate node 389<br />

capacity 90, 180<br />

capacity information 538<br />

capacity measurement 507<br />

CDB 27<br />

challenge message 30<br />

Challenge-Handshake Au<strong>the</strong>ntication Protocol 30, 160,<br />

352, 496<br />

change <strong>the</strong> IP addresses 384<br />

Channel extender 59<br />

channel extender 62<br />

channels 317<br />

CHAP 30, 160, 352, 496<br />

CHAP au<strong>the</strong>ntication 30, 160<br />

CHAP secret 30, 160<br />

check software levels 636<br />

chpartnership 333<br />

chrcconsistgrp 335<br />

chrcrelationship 335<br />

chunks 88, 683<br />

CIM agent 38<br />

CIM Client 38<br />

CIMOM 28, 38, 125, 159<br />

CLI 125, 434<br />



commands 181<br />

scripting for SVC task automation 378<br />

Cluster 59<br />

cluster 34<br />

adding nodes 560<br />

creation 388, 560<br />

error log 460<br />

IP address 114<br />

shutting down 342, 386, 396, 550<br />

time zone 384<br />

viewing properties 380, 544<br />

cluster error log 655<br />

Cluster management 38<br />

cluster nodes 34<br />

cluster overview 34<br />

cluster partnership 291, 317<br />

cluster properties 385<br />

clustered e<strong>the</strong>rnet port 160<br />

clustered server resources 34<br />

clusters 66<br />

colliding writes 312

Colliding writes 311<br />

Command Descriptor Block 27<br />

command syntax 378<br />

COMPASS architecture 46<br />

compression 98<br />

concepts 7<br />

concurrent instances 682<br />

concurrent software upgrade 450<br />

configurable warning capacity 25<br />

configuration 153<br />

restoring 672<br />

configuration node 35, 48, 59, 160, 388, 560<br />

configuration rules 52<br />

configure AIX 162<br />

configure SDD 252<br />

configuring <strong>the</strong> GUI 117<br />

connected 295, 322<br />

connected state 298, 323, 325<br />

connectivity 36<br />

consistency 284, 311, 324<br />

consistency freeze 298, 307, 325<br />

Consistency Group 59<br />

consistency group 262, 264–265<br />

limits 265<br />

consistent 32, 296–297, 323–324<br />

consistent data set 256<br />

Consistent Stopped state 294, 321<br />

Consistent Synchronized state 294, 321, 599, 626<br />

ConsistentDisconnected 300, 327<br />

ConsistentStopped 298, 325<br />

ConsistentSynchronized 299, 326<br />

constrained link 320<br />

container 88<br />

contingency capacity 25<br />

controller, renaming 341<br />

conventional storage 675<br />

cookie crumbs recovery 468<br />

cooling 67<br />

Copied 59<br />


copy bandwidth 95, 333<br />

copy operation 33<br />

copy process 306, 335<br />

copy rate 267, 277<br />

copy rate parameter 93<br />

Copy Services<br />

managing 397, 566<br />

COPY_COMPLETED 280<br />

copying state 403<br />

corruption 257<br />

Counterpart <strong>SAN</strong> 59<br />

counterpart <strong>SAN</strong> 59, 62, 102<br />

CPU cycle 49<br />

create a FlashCopy 399<br />

create a new VDisk 505<br />

create an SVC partnership 585, 609<br />

create mapping command 398–399, 566, 568<br />

create New Cluster 119<br />

create SVC partnership 415, 436<br />

creating a VDisk 356<br />

creating managed disk groups 484<br />

credential caching 45<br />

current cluster state 35<br />

Cygwin 214<br />

D<br />

data<br />

backup with minimal impact on production 262<br />

moving and migration 256<br />

data change rates 98<br />

data consistency 309, 397<br />

data corruption 323<br />

data flow 77<br />

data migration 67, 682<br />

data migration and moving 256<br />

data mining 257<br />

data mover appliance 371<br />

database log 315<br />

database update 314<br />

degraded mode 86<br />

delete<br />

a FlashCopy 406<br />

a host 354<br />

a host port 356<br />

a port 502<br />

a VDisk 367, 513, 540<br />

ports 355<br />

Delete consistency group command 407, 579<br />

Delete mapping command 578<br />

dependent writes 264, 288–289, 314–315<br />

destaged 37<br />

destructive 461<br />

detect <strong>the</strong> new MDisks 342<br />

detected 342<br />

device specific modules 188<br />

differentiator 50<br />

directory protocol 44<br />

dirty bit 302, 328<br />

disconnected 295, 322<br />

disconnected state 323


discovering assigned VDisk 167, 172, 190<br />

discovering newly assigned MDisks 481<br />

disk access profile 365<br />

disk controller<br />

renaming 478<br />

systems 340<br />

viewing details 340, 477<br />

disk internal controllers 50<br />

disk timeout value 246<br />

disk zone 76<br />

Diskpart 196<br />

display summary information 343<br />

displaying managed disks 490<br />

distance 61, 282<br />

distance limitations 282<br />

documentation 66, 475<br />

DSMs 188<br />

dump<br />

I/O statistics 463<br />

I/O trace 463<br />

listing 462, 663<br />

o<strong>the</strong>r nodes 464<br />

durability 50<br />

dynamic pathing 249–250<br />

dynamic shrinking 536<br />

dynamic tracking 163<br />

E<br />

elapsed time 93<br />

empty MDG 346<br />

empty state 301, 328<br />

Enterprise <strong>Storage</strong> Server (ESS) 281<br />

entire VDisk 262<br />

error 298, 322, 325, 345, 460, 655<br />

Error Code 59, 646<br />

error handling 278<br />

Error ID 60<br />

error log 460, 655<br />

analyzing 655<br />

file 645<br />

error notification 458, 647<br />

error number 646<br />

error priority 656<br />

ESS 44<br />

ESS (Enterprise <strong>Storage</strong> Server) 281<br />

ESS server 44<br />

ESS to SVC 687<br />

ESS token 44<br />

eth0 48<br />

eth0 port 48<br />

eth1 48<br />

E<strong>the</strong>rnet 73<br />

E<strong>the</strong>rnet connection 74<br />

event 460, 655<br />

event log 462<br />

events 293, 320<br />

Excluded 60<br />

excludes 481<br />

Execute Metro Mirror 419, 441<br />

expand<br />

a VDisk 177, 195, 367<br />

a volume 196<br />

expand a space-efficient VDisk 367<br />

expiry timestamp 44<br />

expiry timestamps 45<br />

extended distance solutions 282<br />

Extent 60<br />

extent 88, 676<br />

extent level 676<br />

extent sizes 88<br />

F<br />

fabric<br />

remote 102<br />

fabric interconnect 61<br />

factory WWNN 800<br />

failover 60, 249, 311<br />

failover only 229<br />

failover situation 282<br />

fan-in 60<br />

fast fail 163<br />

fast restore 257<br />

FAStT 281<br />

FC optical distance 48<br />

feature log 662<br />

feature, licensing 659<br />

features, licensing 461<br />

featurization log 463<br />

Featurization Settings 122<br />

Fibre Channel interfaces 47<br />

Fibre Channel port fan in 62, 102<br />

Fibre Channel Port Login 28<br />

Fibre Channel port logins 60<br />

Fibre Channel ports 73<br />

file system 232<br />

filtering 379, 470<br />

filters 379<br />

fixed error 460, 655<br />

FlashCopy 33, 256<br />

bitmap 266<br />

how it works 257, 261<br />

image mode disk 270<br />

indirection layer 266<br />

mapping 257<br />

mapping events 271<br />

rules 270<br />

serialization of I/O 278<br />

syn<strong>the</strong>sis 278<br />

FlashCopy indirection layer 266<br />

FlashCopy mapping 262, 271<br />

FlashCopy mapping states 274<br />

Copying 274<br />

Idling/Copied 274<br />

Prepared 275<br />

Preparing 275<br />

Stopped 274<br />

Suspended 274<br />

FlashCopy mappings 265<br />

FlashCopy properties 265<br />

FlashCopy rate 93<br />



flexibility 100, 810<br />

flush <strong>the</strong> cache 573<br />

forced deletion 501<br />

foreground I/O latency 333<br />

format 506, 510, 515, 522<br />

free extents 367<br />

front-end application 60<br />

FRU 60<br />

Full Feature Phase 28<br />

G<br />

gateway IP address 114<br />

GBICs 61<br />

general housekeeping 476, 544<br />

generating output 379<br />

generator 128<br />

geographically dispersed 281<br />

Global Mirror guidelines 96<br />

Global Mirror protocol 32<br />

Global Mirror relationship 313<br />

Global Mirror remote copy technique 310<br />

gminterdelaysimulation 330<br />

gmintradelaysimulation 330<br />

gmlinktolerance 330–331<br />

governing 26<br />

governing rate 26<br />

governing throttle 517<br />

graceful manner 391<br />

grain 60, 266, 278<br />

grain sizes 93<br />

grains 93, 277<br />

granularity 262<br />

GUI 117, 131<br />

H<br />

Hardware Management Console 38<br />

hardware nodes 46, 56<br />

hardware overview 46<br />

hash function 30<br />

HBA 60, 350<br />

HBA fails 86<br />

HBA ports 92<br />

heartbeat signal 36<br />

heartbeat traffic 95<br />

help 475, 543<br />

high availability 34, 66<br />

home directory 181<br />

host<br />

and application server guidelines 92<br />

configuration 153<br />

creating 350<br />

deleting 500<br />

information 494<br />

showing 375<br />

systems 76<br />

host adapter configuration settings 183<br />

host bus adapter 350<br />

Host ID 60<br />

host workload 526<br />


housekeeping 476, 544<br />

HP-UX support information 249–250<br />

I<br />

I/O budget 26<br />

I/O Governing 26<br />

I/O governing 26, 365, 517<br />

I/O governing rate 365<br />

I/O Group 61<br />

I/O group 37, 61–62<br />

name 473<br />

renaming 392, 559<br />

viewing details 391<br />

I/O pair 69<br />

I/O per secs 66<br />

I/O statistics dump 463<br />

I/O trace dump 463<br />

ICAT 38–39<br />

identical data 320<br />

idling 299, 326<br />

idling state 306, 335<br />

IdlingDisconnected 300, 326<br />

Image Mode 61<br />

image mode 526, 685<br />

image mode disk 270<br />

image mode MDisk 685<br />

image mode to image mode 705<br />

image mode to managed mode 700<br />

image mode VDisk 680<br />

image mode virtual disks 91<br />

inappropriate zoning 84<br />

inconsistent 296, 323<br />

Inconsistent Copying state 294, 321<br />

Inconsistent Stopped state 294, 321, 598–599, 626<br />

InconsistentCopying 298, 325<br />

InconsistentDisconnected 300, 327<br />

InconsistentStopped 298, 324<br />

index number 666<br />

Index/Secret/Challenge 30<br />

indirection layer 266<br />

indirection layer algorithm 267<br />

informational error logs 280<br />

initiator 158<br />

initiator name 27<br />

input power 386<br />

install 65<br />

insufficient bandwidth 278<br />

integrity 264, 289, 315<br />

interaction with <strong>the</strong> cache 269<br />

intercluster communication and zoning 317<br />

intercluster link 291, 317<br />

intercluster link bandwidth 333<br />

intercluster link maintenance 291–292, 317<br />

intercluster Metro Mirror 282, 309<br />

intercluster zoning 291–292, 317<br />

Internet <strong>Storage</strong> Name Service 30, 61, 159<br />

interswitch link (ISL) 62<br />

interval 385<br />

intracluster Metro Mirror 281, 309<br />

IP address


modifying 383, 545<br />

IP addresses 66, 545<br />

IP subnet 74<br />

ipconfig 137<br />

IPv4 136<br />

ipv4 and 48<br />

IPv4 stack 141<br />

IPv6 136<br />

IPv6 address 140<br />

IPv6 addresses 137<br />

IQN 27, 60, 158<br />

iSCSI 26, 49, 66, 159<br />

iSCSI Address 27<br />

iSCSI client 158<br />

iSCSI IP address failover 160<br />

iSCSI Multipathing 31<br />

iSCSI Name 27<br />

iSCSI node 27<br />

iSCSI protocol 57<br />

iSCSI Qualified Name 27, 60<br />

iSCSI support 57–58<br />

iSCSI target node failover 160<br />

ISL (interswitch link) 62<br />

ISL hop count 282, 309<br />

iSNS 30, 61, 159<br />

issue CLI commands 214<br />

ipv6 48

J<br />

Jumbo Frames 30<br />

K<br />

kernel level 226<br />

key 160<br />

key files on AIX 181<br />

L<br />

LAN Interfaces 48<br />

last extent 686<br />

latency 32, 95<br />

LBA 302, 328<br />

license 114<br />

license feature 659<br />

licensing feature 461<br />

licensing feature settings 461, 659<br />

limiting factor 100, 810<br />

link errors 47<br />

Linux 181<br />

Linux kernel 35<br />

Linux on Intel 225<br />

list dump 462<br />

list of MDisks 491<br />

list of VDisks 492<br />

list <strong>the</strong> dumps 663<br />

listing dumps 462, 663<br />

Load balancing 229<br />

Local au<strong>the</strong>ntication 40<br />

local cluster 303, 330<br />

Local fabric 61<br />

local fabric interconnect 61<br />

Local users 42<br />

log 315<br />

logged 460<br />

Logical Block Address 302, 328<br />

logical configuration data 466<br />

Login Phase 28<br />

logs 314<br />

lsrcrelationshipcandidate 334<br />

LU 61<br />

LUNs 61<br />

M<br />

magnetic disks 50<br />

maintenance levels 183<br />

maintenance procedures 645<br />

maintenance tasks 449, 635<br />

Managed 61<br />

Managed disk 61<br />

managed disk 61, 479<br />

displaying 490<br />

working with 477<br />

managed disk group 347<br />

creating 484<br />

viewing 486<br />

Managed Disks 61<br />

managed mode MDisk 685<br />

managed mode to image mode 702<br />

managed mode virtual disk 91<br />

management 100, 810<br />

map a VDisk 516<br />

map a VDisk to a host 368<br />

mapping 261<br />

mapping events 271<br />

mapping state 271<br />

Master 62<br />

master 320, 328<br />

master console 67<br />

master VDisk 321, 328<br />

maximum supported configurations 58<br />

MC 62<br />

MD5 checksum hash 30<br />

MDG 61<br />

MDG information 538<br />

MDG level 347<br />

MDGs 67<br />

MDisk 61, 67, 479, 490<br />

adding 346, 488<br />

discovering 342, 481<br />

including 345, 481<br />

information 479<br />

modes 685<br />

name parameter 343<br />

removing 349, 489<br />

renaming 344, 480<br />

showing 374, 491, 537<br />

showing in group 346<br />

MDisk group<br />

creating 348, 484<br />



deleting 349, 487<br />

name 473<br />

renaming 348, 486<br />

showing 346, 374, 482, 538<br />

viewing information 348<br />

MDiskgrp 61<br />

Metro Mirror 281<br />

Metro Mirror consistency group 304, 306–308, 334–337<br />

Metro Mirror features 283, 311<br />

Metro Mirror process 292, 319<br />

Metro Mirror relationship 305–306, 308, 313, 334–335,<br />

337, 597, 624<br />

microcode 36<br />

Microsoft Active Directory 43<br />

Microsoft Cluster 195<br />

Microsoft Multi Path Input Output 188<br />

migrate 675<br />

migrate a VDisk 680<br />

migrate between MDGs 680<br />

migrate data 685<br />

migrate VDisks 370<br />

migrating multiple extents 676<br />

migration<br />

algorithm 683<br />

functional overview 682<br />

operations 676<br />

overview 676<br />

tips 687<br />

migration activities 676<br />

migration phase 526<br />

migration process 371<br />

migration progress 681<br />

migration threads 676<br />

mirrored 310<br />

mirrored copy 309<br />

mirrored VDisks 54<br />

mkpartnership 333<br />

mkrcconsistgrp 334<br />

mkrcrelationship 334<br />

MLC 49<br />

modify a host 353<br />

modifying a VDisk 364<br />

mount 232<br />

mount point 232<br />

moving and migrating data 256<br />

MPIO 92, 188<br />

MSCS 195<br />

MTU sizes 30, 159<br />

multi layer cell 49<br />

multipath configuration 165<br />

multipath I/O 92<br />

multipath storage solution 188<br />

multipathing device driver 92<br />

Multipathing drivers 31<br />

multiple disk arrays 100, 810<br />

multiple extents 676<br />

multiple paths 31<br />

multiple virtual machines 240<br />

N<br />

network bandwidth 98<br />

Network Entity 158<br />

Network Portals 158<br />

new code 644<br />

new disks 169, 175<br />

new mapping 368<br />

Node 62<br />

node 35, 61, 387<br />

adding 388<br />

adding to cluster 560<br />

deleting 390<br />

failure 278<br />

port 60<br />

renaming 390<br />

shutting down 390<br />

using <strong>the</strong> GUI 559<br />

viewing details 388<br />

node details 388<br />

node discovery 666<br />

node dumps 464<br />

node level 387<br />

Node Unique ID 35<br />

nodes 66<br />

non-preferred path 249<br />

non-redundant 59<br />

non-zero contingency 25<br />

N-port 62<br />


O<br />

offline rules 679<br />

offload features 30<br />

older disk systems 101<br />

on screen content 379, 470, 543<br />

online help 475, 543<br />

on-screen content 379<br />

OpenSSH 181<br />

OpenSSH client 214<br />

operating system versions 183<br />

ordering 32, 264<br />

organizing on-screen content 379<br />

o<strong>the</strong>r node dumps 464<br />

overall performance needs 66<br />

Oversubscription 62<br />

oversubscription 62<br />

overwritten 261, 457<br />

P<br />

package numbering and version 450, 636<br />

parallelism 682<br />

partial extents 25<br />

partial last extent 686<br />

partnership 291, 317, 330<br />

passphrase 128<br />

path failover 249<br />

path failure 279<br />

path offline 279<br />

path offline for source VDisk 279<br />

path offline for target VDisk 280


path offline state 279<br />

path-selection policy algorithms 229<br />

peak 333<br />

peak workload 95<br />

pended 26<br />

per cluster 682<br />

per managed disk 682<br />

performance 90<br />

performance advantage 100, 810<br />

performance boost 45<br />

performance considerations 810<br />

performance improvement 100, 810<br />

performance monitoring tool 96<br />

performance requirements 66<br />

performance scalability 34<br />

performance statistics 96<br />

performance throttling 517<br />

physical location 67<br />

physical planning 67<br />

physical rules 69<br />

physical site 67<br />

Physical <strong>Volume</strong> Links 250<br />

PiT 34<br />

PiT consistent data 257<br />

PiT copy 266<br />

PiT semantics 264<br />

planning rules 66<br />

plink 786<br />

PLOGI 28<br />

Point in Time 34<br />

point in time 33<br />

point-in-time copy 297, 324<br />

policy decision 302, 328<br />

port<br />

adding 354, 501<br />

deleting 355, 502<br />

port binding 252<br />

port mask 93<br />

Power <strong>System</strong>s 181<br />

PPRC<br />

background copy 301, 309, 328<br />

commands 303, 329<br />

configuration limits 329<br />

detailed states 298, 324<br />

preferred access node 91<br />

preferred path 249<br />

pre-installation planning 66<br />

Prepare 62<br />

prepare (pre-trigger) FlashCopy mapping command 401<br />

PREPARE_COMPLETED 280<br />

preparing volumes 172, 177<br />

pre-trigger 401<br />

primary 311, 425, 447<br />

primary copy 328<br />

priority 371<br />

priority setting 371<br />

private key 125, 128, 181, 786<br />

production VDisk 328<br />

provisioning 333<br />

pseudo device driver 165<br />

public key 125, 128, 181, 786<br />

PuTTY 39, 125, 130, 387<br />

CLI session 134<br />

default location 128<br />

security alert 135<br />

PuTTY application 134, 390<br />

PuTTY Installation 214<br />

PuTTY Key Generator 128–129<br />

PuTTY Key Generator GUI 126<br />

PuTTY Secure Copy 452<br />

PuTTY session 129, 135<br />

PuTTY SSH client software 214<br />

PVLinks 250<br />

Q<br />

QLogic HBAs 226<br />

Queue Full Condition 26<br />

quiesce 387<br />

quiesce time 573<br />

quiesced 806<br />

quorum 35<br />

quorum candidates 36<br />

Quorum Disk 35<br />

quorum disk 35, 666<br />

setting 666<br />

quorum disk candidate 36<br />

quorum disks 25<br />

R<br />

RAID 62<br />

RAID controller 76<br />

RAMAC 50<br />

RAS 62<br />

read workload 53<br />

real capacity 25<br />

real-time synchronized 281<br />

reassign <strong>the</strong> VDisk 370<br />

recall commands 340, 379<br />

recommended levels 636<br />

Redbooks Web site 817<br />

Contact us xxiii<br />

redundancy 48, 96<br />

redundant 59<br />

Redundant <strong>SAN</strong> 62<br />

redundant <strong>SAN</strong> 62<br />

redundant SVC 563<br />

relationship 262, 309, 319<br />

relationship state diagram 293, 320<br />

reliability 90<br />

Reliability Availability and Serviceability 62<br />

Remote 62<br />

Remote au<strong>the</strong>ntication 40<br />

remote cluster 61<br />

remote fabric 61, 102<br />

interconnect 61<br />

Remote users 43<br />

remove a disk 211<br />

remove a VDisk 181<br />

remove an MDG 349<br />



remove WWPN definitions 355

rename a disk controller 478<br />

rename an MDG 486<br />

rename an MDisk 480<br />

renaming an I/O group 559<br />

repartitioning 90<br />

rescan disks 193<br />

restart <strong>the</strong> cluster 387<br />

restart <strong>the</strong> node 391<br />

restarting 424, 446<br />

restore points 258<br />

restore procedure 672<br />

Reverse FlashCopy 34, 258<br />

reverse FlashCopy 57<br />

RFC3720 27<br />

rmrcconsistgrp 337<br />

rmrcrelationship 337<br />

round robin 91, 229, 249<br />

S<br />

sample script 789<br />

<strong>SAN</strong> Boot Support 249, 251<br />

<strong>SAN</strong> definitions 102<br />

<strong>SAN</strong> fabric 76<br />

<strong>SAN</strong> planning 74<br />

<strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> 62<br />

documentation 475<br />

general housekeeping 476, 544<br />

help 475, 543<br />

virtualization 38<br />

<strong>SAN</strong> <strong>Volume</strong> <strong>Controller</strong> (SVC) 62<br />

<strong>SAN</strong> zoning 125<br />

SATA 97<br />

scalable 102, 810<br />

scalable architecture 51<br />

SCM 50<br />

scripting 302, 329, 378<br />

scripts 196, 785<br />

SCSI 62<br />

SCSI Disk 61<br />

SCSI primitives 342<br />

SDD 91–92, 162, 165, 170, 176, 251<br />

SDD (Subsystem Device Driver) 170, 176, 226, 251, 689<br />

SDD Dynamic Pathing 249<br />

SDD installation 165<br />

SDD package version 165, 185<br />

SDDDSM 188<br />

secondary 311<br />

secondary copy 328<br />

secondary site 66<br />

secure data flow 125<br />

secure session 390<br />

Secure Shell (SSH) 125<br />

Secure Shell connection 38<br />

separate physical IP networks 48<br />

sequential 91, 356, 506, 510, 522, 532<br />

serial numbers 167, 174<br />

serialization 278<br />

serialization of I/O by FlashCopy 278<br />

Service Location Protocol 30, 63, 159<br />


service, maintenance using <strong>the</strong> GUI 635<br />

set attributes 527<br />

set <strong>the</strong> cluster time zone 549<br />

set up Metro Mirror 413, 434, 583, 607<br />

SEV 365<br />

shells 378<br />

show <strong>the</strong> MDG 538<br />

show <strong>the</strong> MDisks 537<br />

shrink a VDisk 536<br />

shrinking 536<br />

shrinkvdisksize 372<br />

shut down 195<br />

shut down a single node 390<br />

shut down <strong>the</strong> cluster 386, 550<br />

Simple Network Management Protocol 302, 329, 345<br />

single layer cell 49<br />

single point of failure 62<br />

single sign on 58<br />

single sign-on 39, 44<br />

site 67<br />

SLC 49<br />

SLP 30, 63, 159<br />

SLP daemon 30<br />

SNIA 2<br />

SNMP 302, 329, 345<br />

SNMP alerts 481<br />

SNMP manager 458<br />

SNMP trap 280<br />

software upgrade 450, 636–637<br />

software upgrade packages 636<br />

Solid State Disk 57<br />

Solid State Drive 34<br />

Solid State Drives 46<br />

solution 100<br />

sort 473<br />

sort criteria 473<br />

sorting 473<br />

source 277, 328<br />

space-efficient 359<br />

Space-efficient background copy 319<br />

space-efficient VDisk 372, 526<br />

space-efficient VDisks 509<br />

Space-Efficient Virtual Disk 57<br />

space-efficient volume 372<br />

special migration 687<br />

split per second 93<br />

splitting <strong>the</strong> <strong>SAN</strong> 62<br />

SPoF 62<br />

spreading <strong>the</strong> load 90<br />

SSD 51<br />

SSD market 50<br />

SSD solution 50<br />

SSD storage 52<br />

SSH 38, 125, 786<br />

SSH (Secure Shell) 125<br />

SSH Client 39<br />

SSH client 181, 214<br />

SSH client software 125<br />

SSH key 41<br />

SSH keys 125, 130


SSH server 125<br />

SSH-2 125<br />

SSO 44<br />

stack 684<br />

stand-alone Metro Mirror relationship 418, 441<br />

start (trigger) FlashCopy mapping command 402, 404,<br />

574<br />

start a PPRC relationship command 306, 335<br />

startrcrelationship 335<br />

state 298, 324–325<br />

connected 295, 322<br />

consistent 296–297, 323–324<br />

ConsistentDisconnected 300, 327<br />

ConsistentStopped 298, 325<br />

ConsistentSynchronized 299, 326<br />

disconnected 295, 322<br />

empty 301, 328<br />

idling 299, 326<br />

IdlingDisconnected 300, 326<br />

inconsistent 296, 323<br />

InconsistentCopying 298, 325<br />

InconsistentDisconnected 300, 327<br />

InconsistentStopped 298, 324<br />

overview 293, 322<br />

synchronized 297, 324<br />

state fragments 296, 323<br />

state overview 295, 329<br />

state transitions 280, 322<br />

states 271, 277, 293, 320<br />

statistics 385<br />

statistics collection 547<br />

starting 547<br />

stopping 386, 548<br />

statistics dump 463<br />

stop 322<br />

stop FlashCopy consistency group 406, 576<br />

stop FlashCopy mapping command 405<br />

STOP_COMPLETED 280<br />

stoprcconsistgrp 336<br />

stoprcrelationship 335<br />

storage cache 37<br />

storage capacity 66<br />

<strong>Storage</strong> Class Memory 50<br />

stripe VDisks 100, 810<br />

striped 506, 510, 522, 532<br />

striped VDisk 356<br />

subnet mask IP address 114<br />

Subsystem Device Driver (SDD) 170, 176, 226, 251, 689<br />

Subsystem Device Driver DSM 188<br />

SUN Solaris support information 249<br />

superuser 381<br />

surviving node 390<br />

suspended mapping 405<br />

SVC<br />

basic installation 111<br />

task automation 378<br />

SVC cluster 560<br />

SVC cluster candidates 585, 610<br />

SVC cluster partnership 303, 330<br />

SVC cluster software 639<br />

SVC configuration 66<br />

backing up 668<br />

deleting <strong>the</strong> backup 672<br />

restoring 672<br />

SVC Console 38<br />

SVC device 63<br />

SVC GUI 39<br />

SVC installations 86<br />

SVC master console 125<br />

SVC node 37, 86<br />

SVC PPRC functions 283<br />

SVC setup 154<br />

SVC SSD storage 52<br />

SVC superuser 41<br />

svcinfo 340, 344, 378<br />

svcinfo lsfreeextents 681<br />

svcinfo lshbaportcandidate 354<br />

svcinfo lsmdiskextent 681<br />

svcinfo lsmigrate 681<br />

svcinfo lsVDisk 373<br />

svcinfo lsVDiskextent 681<br />

svcinfo lsVDiskmember 374<br />

svctask 340, 344, 378, 381<br />

svctask chlicense 461<br />

svctask finderr 456<br />

svctask mkfcmap 303–306, 330, 333–335, 398–399,<br />

566, 568<br />

switching copy direction 425, 447, 606, 632<br />

switchrcconsistgrp 338<br />

switchrcrelationship 337<br />

symmetrical 1<br />

symmetrical network 62<br />

symmetrical virtualization 1<br />

synchronized 297, 320, 324<br />

synchronized clocks 45<br />

synchronizing 319<br />

synchronous data mirroring 57<br />

synchronous reads 684<br />

synchronous writes 684<br />

syn<strong>the</strong>sis 278<br />

Syslog error event logging 58<br />

<strong>System</strong> <strong>Storage</strong> Productivity Center 63<br />

T<br />

T0 34<br />

target 158, 328<br />

target name 27<br />

test new applications 257<br />

threads parameter 519<br />

threshold level 26<br />

throttles 517<br />

throttling parameters 517<br />

tie breaker 35<br />

tie-break situations 35<br />

tie-break solution 666<br />

tie-breaker 35<br />

time 384<br />

time zone 384<br />

timeout 246<br />

timestamp 44–45<br />



Time-Zero copy 34<br />

Tivoli Directory Server 43<br />

Tivoli Embedded Security Services 40, 44<br />

Tivoli Integrated Portal 39<br />

Tivoli <strong>Storage</strong> Productivity Center 39<br />

Tivoli <strong>Storage</strong> Productivity Center for Data 39<br />

Tivoli <strong>Storage</strong> Productivity Center for Disk 39<br />

Tivoli <strong>Storage</strong> Productivity Center for Replication 39<br />

Tivoli <strong>Storage</strong> Productivity Center Standard Edition 39<br />

token 44–45<br />

token expiry timestamp 45<br />

token facility 44<br />

trace dump 463<br />

traffic 95<br />

traffic profile activity 66<br />

transitions 685<br />

trigger 402, 404, 574<br />

U<br />

unallocated capacity 198<br />

unallocated region 319<br />

unassign 514<br />

unconfigured nodes 389<br />

undetected data corruption 323<br />

unfixed error 460, 655<br />

uninterruptible power supply 73, 86, 386, 451<br />

unmanaged MDisk 685<br />

unmap a VDisk 370<br />

up2date 225<br />

updates 225<br />

upgrade 636–637<br />

upgrade precautions 450<br />

upgrading software 636<br />

use of Metro Mirror 301, 328<br />

used capacity 25<br />

used free capacity 25<br />

User account migration 38<br />

using SDD 170, 176, 226, 251<br />

V<br />

VDisk 490<br />

assigning 516<br />

assigning to host 368<br />

creating 356, 358, 505<br />

creating in image mode 359, 526<br />

deleting 367, 509, 513<br />

discovering assigned 167, 172, 190<br />

expanding 367<br />

I/O governing 364<br />

image mode migration concept 685<br />

information 358, 505<br />

mapped to this host 369<br />

migrating 92, 370, 518<br />

modifying 364, 517<br />

path offline for source 279<br />

path offline for target 280<br />

showing 492<br />

showing for MDisk 373, 482<br />

showing map to a host 539<br />


showing using group 373<br />

shrinking 371, 519<br />

working with 356<br />

VDisk discovery 159<br />

VDisk mirror 526<br />

VDisk Mirroring 53<br />

VDisk-to-host mapping 370<br />

deleting 514<br />

Veritas <strong>Volume</strong> Manager 249<br />

View I/O Group details 391<br />

viewing managed disk groups 486<br />

virtual disk 262, 356, 468, 504<br />

Virtual Machine File <strong>System</strong> 238, 240<br />

virtualization 38<br />

VLUN 61<br />

VMFS 238, 240–242<br />

VMFS datastore 244<br />

volume group 177<br />

Voting Set 35<br />

voting set 35<br />

vpath configured 169, 175<br />

W<br />

warning capacity 25<br />

warning threshold 372<br />

Web interface 252<br />

Windows 2000 based hosts 182<br />

Windows 2000 host configuration 182, 238<br />

Windows 2003 188<br />

Windows host system CLI 214<br />

Windows NT and 2000 specific information 182<br />

working with managed disks 477<br />

workload cycle 96<br />

worldwide node name 800<br />

worldwide port name 164<br />

Write data 37<br />

Write ordering 324<br />

write ordering 288, 313, 323<br />

write through mode 86<br />

write workload 96<br />

writes 314<br />

write-through mode 37<br />

WWNN 800<br />

WWPNs 164, 350, 355, 497<br />

Y<br />

YaST Online Update 225<br />

Z<br />

zero buffer 319<br />

zero contingency 25<br />

Zero Detection 57<br />

zero-detection algorithm 25<br />

zone 76<br />

zoning capabilities 76<br />

zoning recommendation 194, 208


Back cover

Implementing the IBM System Storage SAN Volume Controller V5.1

Install, use, and troubleshoot the SAN Volume Controller

Learn about and how to attach iSCSI hosts

Understand what solid-state drives have to offer

This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC), a virtualization appliance solution that maps virtualized volumes that are visible to hosts and applications to physical volumes on storage devices. Each server within the SAN has its own set of virtual storage addresses, which are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. This capability means that volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves the management of information at the “block” level in a network, enabling applications and servers to share storage devices on a network.

This book is intended to allow you to implement the SVC at a 5.1.0 release level with a minimum of effort.

SG24-6423-07 ISBN 0738434035<br />

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION®

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks

