Veritas Storage Foundation and High Availability Solutions Release Notes


If an application resource is configured to run in a container and has the PidFiles attribute configured for monitoring the resource, the value expected for this attribute is the pathname of the PID file relative to the zone root.

In releases prior to SF 6.0, the value expected for the attribute was the pathname of the PID file including the zone root.

For example, a configuration extract of an application resource configured in SF 5.0MP3 to run in a container would appear as follows:

Application apptest (
        User = root
        StartProgram = "/ApplicationTest/app_test_start"
        StopProgram = "/ApplicationTest/app_test_stop"
        PidFiles = { "/zones/testzone/root/var/tmp/apptest.pid" }
        ContainerName = testzone
        )

The same resource, if configured in SF 6.0 and later releases, would be configured as follows:

Application apptest (
        User = root
        StartProgram = "/ApplicationTest/app_test_start"
        StopProgram = "/ApplicationTest/app_test_stop"
        PidFiles = { "/var/tmp/apptest.pid" }
        )

Note: The container information is set at the service group level.

Workaround: Modify the PidFiles pathname to be relative to the zone root, as shown in the latter part of the example (a fuller command sequence is sketched below, after the next issue):

# hares -modify apptest PidFiles /var/tmp/apptest.pid

Mounting a / file system of NFS server is not supported [2847999]

The Mount agent does not support the BlockDevice attribute when the / file system of the NFS server is used as the NFS file system.

Workaround: No workaround.
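As a supplement to the PidFiles workaround above, a minimal sketch of the full change on a running cluster might look like the following. The haconf steps are an assumption for the common case where the cluster configuration is read-only; apptest is the resource name from the example:

# haconf -makerw
# hares -modify apptest PidFiles /var/tmp/apptest.pid
# haconf -dump -makero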

SambaShare agent clean entry point fails when access to configuration file on shared storage is lost [2858183]

When the Samba server configuration file is on shared storage and access to the shared storage is lost, the SambaShare agent clean entry point fails.

Workaround: No workaround.

SambaShare agent fails to offline resource in case of cable pull or on unplumbing of IP [2848020]

When the IP is unplumbed, or in a cable-pull scenario, the agent fails to offline the SambaShare resource.

Workaround: No workaround.

Entry points that run inside a zone are not cancelled cleanly [1179694]

Cancelling entry points results in the cancellation of only the zlogin process. The script entry points that run inside a zone are forked off using the zlogin command. However, the zlogin command forks off an sh command, which runs in the context of the Solaris zone. This shell process and its family do not inherit the group ID of the zlogin process, and instead get a new group ID. Thus, it is difficult for the agent framework to trace the children or grandchildren of the shell process, which translates to the cancellation of only the zlogin process.

Workaround: Oracle must provide an API or a mechanism to kill all the children of the zlogin process that was started to run the entry point script in the local zone.

The zpool command runs into a loop if all storage paths from a node are disabled

The Solaris Zpool agent runs zpool commands to import and export zpools. If all paths to the storage are disabled, the zpool command does not respond. Instead, the zpool export command goes into a loop and attempts to export the zpool. This continues until the storage paths are restored and the zpool is cleared. As a result, the offline and clean procedures of the Zpool agent fail and the service group cannot fail over to the other node.

Workaround: You must restore the storage paths and run the zpool clear command for all the pending commands to succeed. This causes the service group to fail over to another node. (A command sketch follows below.)

Zone remains stuck in down state if tried to halt with file system mounted from global zone [2326105]

If the zone is halted without unmounting the file system, the zone goes to the down state and does not halt with the zoneadm commands.
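As a rough illustration of the Zpool workaround above: once the storage paths have been physically restored, clear the pool so that the pending zpool commands can complete. The pool name testpool and the final verification step are assumptions for the example, not part of the release notes:

# zpool clear testpool
# hagrp -state

After the clear succeeds, the pending offline and clean procedures finish and the service group fails over to another node.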

