HP OpenVMS Systems: Ask the Wizard
The Question is:

Can you recommend a quorum file scheme to allow 1 of my 3 clustered nodes to remain up, while 2 are down for maintenance? Would a quorum file vote of 3, system votes of 1, and expected votes of 4 do the trick and prevent partitioning?

Many thanks!
Ron

The Answer is:

What you seek is a three-node cluster that can be reduced to one node through the orderly shutdown of two of its members. There is no way to shut down two arbitrary voting nodes in a three-voting-node cluster without direct operator intervention.

This is a variant of the classic two-node cluster configuration, of course: when you move from three to two nodes, the cluster will continue operating. In a two-node configuration, you must have a primary-secondary configuration, a quorum disk (if you have a shared interconnect), or a plan for determining which lobe should be manually resumed through operator intervention.

The two-node configuration is a matter of making sure that the cluster quorum value is set to the number of votes of the remaining node after the other two nodes are removed. (For details on EXPECTED_VOTES and VOTES, please see the cluster documentation and the OpenVMS FAQ.) Also see the SET CLUSTER/EXPECTED_VOTES command and the Availability Manager (or AMDS) quorum adjustment mechanism.

When you perform an orderly shutdown of a cluster member, specifying the REMOVE_NODE option causes the quorum value to be adjusted downward. See the OpenVMS Cluster Systems manual, where this is explained in detail. The OpenVMS documentation is available at:

http://www.hp.com/go/openvms/doc

Giving all members EXPECTED_VOTES of 1 would allow any single member to boot while the others are down. As the other nodes boot into the cluster, the quorum value will automatically increase. (This is effectively a partitioned cluster, and it carries the inherent corruption-related risks of such configurations.)

Giving all members EXPECTED_VOTES of 2 would require two VOTES to be present at boot time, so that a single node with a bad cluster interconnect could not form its own single-node cluster. This still allows you to use the REMOVE_NODE option during an orderly shutdown to get down to a single-node cluster for maintenance.

The OpenVMS Wizard would tend to configure the cluster using the expected and documented settings for VOTES and EXPECTED_VOTES (see the OpenVMS manuals and the OpenVMS FAQ), and would configure and use Availability Manager or AMDS to manually resume the remaining node in the specific case of the two-node shutdown. If this process must be performed unattended and you know which lobe will remain, bootstrap that lobe with the VOTES. You can then reset the votes and roll the lobes using the REMOVE_NODE and/or SET CLUSTER/EXPECTED_VOTES commands.
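As a sketch only, and assuming the EXPECTED_VOTES-of-2 scheme described above (the quorum-disk device name shown is hypothetical), the relevant parameters would be set through SYS$SYSTEM:MODPARAMS.DAT on each member and applied with AUTOGEN:

    ! SYS$SYSTEM:MODPARAMS.DAT on each of the three members
    VOTES = 1
    EXPECTED_VOTES = 2
    !
    ! If a quorum disk on a shared interconnect is configured,
    ! name it and assign its votes (hypothetical device name):
    !   DISK_QUORUM = "$1$DGA100"
    !   QDSKVOTES = 1

    $ ! Apply the parameter changes and reboot:
    $ @SYS$UPDATE:AUTOGEN GETDATA REBOOT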
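For the maintenance operation itself, the orderly shutdown of each of the two departing members would include the REMOVE_NODE option; a minimal sketch:

    $ ! On each of the two nodes being taken down for maintenance,
    $ ! run the standard shutdown procedure and enter REMOVE_NODE
    $ ! at the shutdown-options prompt, so that the quorum value
    $ ! is adjusted downward as the member leaves the cluster.
    $ @SYS$SYSTEM:SHUTDOWN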
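Should quorum need manual adjustment afterward (for instance, when rolling the lobes), the expected-votes value can be reset from a surviving member, provided that member still has quorum; a node already hung waiting for quorum must instead be resumed through the Availability Manager or AMDS quorum adjustment mentioned above. A sketch:

    $ ! Recompute quorum from the votes of the current members
    $ ! (requires appropriate privilege; see HELP SET CLUSTER):
    $ SET CLUSTER/EXPECTED_VOTES
    $ ! Or specify the value explicitly, here for a lone
    $ ! one-vote member:
    $ SET CLUSTER/EXPECTED_VOTES=1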