Veritas Storage Foundation™ and High Availability Solutions ...
Storage Foundation and High Availability Solutions support for Oracle VM Server for SPARC: Known issues

Workaround: Export VxVM volumes using their block device nodes instead. Oracle is investigating this issue.

Oracle (Sun) bug id: 6716365 (disk images on volumes should be exported using the ldi interface)

This Oracle (Sun) bug is fixed in Oracle (Sun) patch 139562-02.
See “Solaris patch requirements” on page 80.

Resizing a Veritas Volume Manager volume (exported as a slice or full disk) does not dynamically reflect the new size of the volume in the guest

On resizing a VxVM volume exported to a guest, the virtual disk still shows the old size of the volume. The virtual disk drivers do not update the size of the backend volume after the volume is resized.

Oracle has an RFE for this issue (CR 6699271 Dynamic virtual disk size management).

Workaround: The guest must be stopped and rebound for the new size to be reflected.

This Oracle (Sun) bug is fixed in Oracle (Sun) patch 139562-02.
See “Solaris patch requirements” on page 80.

Known issues

The following section describes some of the known issues of the Oracle VM Server for SPARC software and how those known issues affect the functionality of the Veritas Storage Foundation products.

Guest-based known issues

The following are new known issues in this release of Veritas Storage Foundation and High Availability Solutions Support for Oracle VM Server for SPARC.

Encapsulating a non-SCSI disk may fail

Trying to encapsulate a non-SCSI disk that is a slice of a disk, or a disk exported as a slice, may fail with the following error:

VxVM vxslicer ERROR V-5-1-599 Disk layout does not support swap shrinking
VxVM vxslicer ERROR V-5-1-5964 Unsupported disk layout.
Encapsulation requires at least 0 sectors of unused space either at the
beginning or end of the disk drive.

This is because, while installing the OS on such a disk, the entire size of the backend device must be specified as the size of slice "s0", leaving no free space on the disk. Boot disk encapsulation requires free space at the beginning or the end of the disk in order to proceed.

Guest domain node shows only one PGR key instead of two after rejecting the other node in the cluster

For configuration information concerning this issue, see Figure 4-4 on page 76.

This was observed while performing a series of reboots of the primary and alternate I/O domains on both the physical hosts housing the two guests. At some point, one key is reported missing on the coordinator disk.

This issue is under investigation. The vxfen driver can still function as long as there is one PGR key. This is a low-severity issue, as it will not cause any immediate interruption.
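One way to confirm the symptom is to count the PGR keys registered on each coordinator disk. The sketch below is illustrative only: the read-keys invocation shown in the comment (vxfenadm -s all -f /etc/vxfentab; some older releases use -g instead of -s) and the sample output shape are assumptions, not taken from this document.

```shell
# Count registered PGR keys in vxfenadm-style output.
# Assumption: the utility prints one "key[N]:" line per registered key,
# e.g. from: vxfenadm -s all -f /etc/vxfentab
count_keys() {
    grep -c 'key\['
}

# Illustrative captured output for the degraded state described above,
# where only one of the two expected keys is present:
SAMPLE='Device Name: /dev/rdsk/c1t1d0s2
Total Number Of Keys: 1
key[0]:
        Key Value [Numeric Format]: 86,70,66,69,65,68,48,48'

printf '%s\n' "$SAMPLE" | count_keys   # prints 1
```

In a healthy two-node cluster, each coordinator disk should report one key per node; a count of one on a two-node cluster matches the missing-key symptom described above.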
Symantec will update this issue when the root cause is found for the missing key.

Disk paths intermittently go offline while performing I/O on a mirrored volume

This was observed while testing the SFCFS stack inside a four-node guest cluster where each node gets its network and virtual disk resources from multiple I/O domains within the same host.

See “Supported configurations with SFCFS and multiple I/O Domains” on page 75.

While performing I/O on a mirrored volume inside a guest, a vdisk would go offline intermittently even when at least one I/O domain that provided a path to that disk was still up and running.

Symantec recommends that you install Solaris 10 Update 7, which contains the fix for Oracle (Sun) bug id 6742587 (vds can ACK a request twice).

Deadlock between DMP kernel and VDC driver during volume creation

This was observed by Oracle during their interoperability testing of 5.0 MP3. The deadlock happens when creating a mirrored VxVM volume, while there is