Cluster Server Installation Guide for Solaris x64 5.0
I/O fencing operations

I/O fencing, provided by the kernel-based fencing module (vxfen), performs identically on node failures and communications failures. When the fencing module on a node is informed of a change in cluster membership by the GAB module, it immediately begins the fencing operation. The node attempts to eject the keys of departed nodes from the coordinator disks using the SCSI-3 preempt and abort command. When the node successfully ejects the departed nodes from the coordinator disks, it also ejects them from the data disks. In a split-brain scenario, both sides of the split race for control of the coordinator disks. The side that wins a majority of the coordinator disks wins the race and fences the loser; the losing node then panics and reboots.

Preparing to configure I/O fencing

Make sure you have performed the following tasks before configuring I/O fencing for VCS:

■ Install the correct operating system.
■ Install the VRTSvxfen package when you install VCS.
■ Install a version of Veritas Volume Manager (VxVM) that supports SCSI-3 persistent reservations (SCSI-3 PR). Refer to the installation guide that accompanies the Storage Foundation product you are using.

The shared storage that you add for use with VCS software must support SCSI-3 persistent reservations, a functionality that enables the use of I/O fencing.
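You can confirm the first two software prerequisites with the standard Solaris packaging tools. A minimal sketch; VRTSvxvm is assumed here to be the VxVM package name on your systems (verify with pkginfo | grep VRTS):

# pkginfo -l VRTSvxfen
# pkginfo -l VRTSvxvm

Each command reports the package VERSION and STATUS fields; STATUS should read "completely installed". Check the reported VxVM version against the Storage Foundation documentation to confirm that it supports SCSI-3 PR.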
Checking shared disks for I/O fencing

The shared storage for VCS must support SCSI-3 persistent reservations to enable I/O fencing. VCS uses two types of shared storage:

Data disks: Store shared data.
Coordinator disks: Act as a global lock during membership changes. Coordinator disks are small LUNs (typically three per cluster).

See "Setting up shared storage" on page 34.

Perform the following checks for I/O fencing disks:

■ Identify three SCSI-3 PR compliant shared disks as coordinator disks. List the disks on each node and pick three disks as coordinator disks. For example, run the format utility to list the disks:

# echo | format

■ Test the shared disks using the vxfentsthdw script.
See "Testing the shared disks for SCSI-3" on page 101.

Testing the shared disks for SCSI-3

Use the vxfentsthdw utility to test whether the shared storage arrays support SCSI-3 persistent reservations and I/O fencing. Review the guidelines for running the vxfentsthdw program, verify that the systems see the same disk, and proceed to test the disks. Make sure to test the disks that will serve as coordinator disks.
See "Setting up coordinator disk groups" on page 104.

The vxfentsthdw utility has additional options suitable for testing many disks. Review the options for testing disk groups (-g) and disks listed in a file (-f). You can also test disks without destroying data by using the -r option.

Review these guidelines for using vxfentsthdw:

■ Verify the connection of the shared storage for data to two of the nodes on which you installed VCS.
Warning: The tests overwrite and destroy data on the disks unless you use the -r option.
■ The two nodes must have ssh (default) or rsh communication. If you use rsh, launch the vxfentsthdw utility with the -n option.
See "Enabling communication between systems" on page 35.
■ After completing the testing process, remove permissions for communication and restore public network connections.
See "Removing permissions for communication" on page 109.
■ To ensure that both nodes are connected to the same disk during the testing, use the vxfenadm -i diskpath command to verify the disk serial number.
See "Verifying the nodes see the same disk" on page 101.

Verifying the nodes see the same disk

To confirm whether a disk (or LUN) supports SCSI-3 persistent reservations, two nodes must simultaneously have access to the same disks. Because a shared disk is likely to have a different name on each node, check the serial number to verify the identity of the disk. Use the vxfenadm command with the -i option to verify that the same serial number for the LUN is returned on all paths to the LUN.

For example, suppose an EMC disk is accessible by the /dev/rdsk/c2t13d0s2 path on node A and the /dev/rdsk/c2t11d0s2 path on node B. From node A, enter:

# vxfenadm -i /dev/rdsk/c2t13d0s2
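Then run the same check from node B against its path to the disk (a continuation of the example above; the exact output fields vary by array):

# vxfenadm -i /dev/rdsk/c2t11d0s2

If both commands return the same serial number for the LUN, the two paths refer to the same disk.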
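Once you have confirmed that the nodes see the same disk, you can test it with vxfentsthdw. A minimal sketch of a non-destructive run, assuming the default 5.0 install location (verify the path on your systems) and ssh communication between the nodes; add the -n option if you use rsh:

# /opt/VRTSvcs/vxfen/bin/vxfentsthdw -r

The utility prompts for the names of the two nodes and the disk path on each node, then checks that SCSI-3 registration, reservation, and preempt-and-abort operations behave as expected on that disk. Omitting -r runs the full test, which destroys any data on the disk as described in the warning above.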