HP OpenVMS Systems: Ask the Wizard
The Question is:

When one node of our cluster crashes (or is shut down), any devices that were local to that machine and are mounted on the other nodes in the cluster (via MSCP) invariably end up in a "MntVerifyTimeout Dismount" state. There is no way, other than rebooting the node on which the device shows this status, to regain access to the disc. I know a variation of this question has been asked before, but you pointed the questioner back to Compaq. I have in the past asked the question of support, who could not help. However, rather than trying to regain access, is it possible to alter the behaviour of VMS when the node leaves the cluster? Are there timeout parameters that could be increased, or possibly the number of waiting I/Os allowed before timing out?

The Answer is:

You will likely find one or more processes with files open on the target disk spindle(s). When the disk is in the mount verify timeout state, you can use the DCL command SHOW DEVICE/FILES, or traverse the WCB chains (via SDA), to locate the open files, and then delete the process(es) holding them. When the disk is in the mount verify dismount state, I/O, SHOW DEVICE/FILES, and similar operations will hang. (Once the mount verification operation has timed out, these commands should be able to proceed, though they will return somewhat less information on the open files.) The OpenVMS Wizard will assume you are issuing the dismount.

You will want to ensure that the MVTIMEOUT system parameter is set to a value appropriate for your local requirements.

You will also want to find out what is causing the crash, or the failure to clean up on shutdown. You will also want to review the applications that have files open on these disks, and evaluate the application error handling when the disk device(s) have failed. You will also want to evaluate the disk storage configuration, potentially moving the environment to multi-path devices.
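As a sketch of the cleanup sequence described in the answer, the DCL steps might look like the following. The device name and process identification shown here are placeholders for illustration, not values from the original report:

```
$ ! Identify processes with files open on the stuck device
$ SHOW DEVICE/FILES $1$DGA100:
$ ! Stop an offending process, using the PID from the display above
$ STOP/IDENTIFICATION=2040010C
$ ! Once the open files are cleared, retry the dismount
$ DISMOUNT/ABORT $1$DGA100:
```

Once mount verification has timed out, SHOW DEVICE/FILES will display less detail on the open files, so it is easier to run this before issuing the dismount.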
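To adjust the mount verification timeout itself, MVTIMEOUT (a dynamic system parameter, in units of seconds) can be examined and changed via SYSGEN. The value 7200 below is only an example; choose a value appropriate for your site:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW MVTIMEOUT
SYSGEN> SET MVTIMEOUT 7200
SYSGEN> WRITE ACTIVE
SYSGEN> EXIT
```

To make the change persist across reboots, add the parameter to SYS$SYSTEM:MODPARAMS.DAT and run AUTOGEN, rather than relying on WRITE ACTIVE alone.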