Hi managers,
I've got a three-node 5.1B Memory Channel cluster, one node of which is
deliberately down.
So that the remaining two nodes can be shut down independently, I've
adjusted the expected votes and quorum disk votes such that I don't lose
quorum, as shown by the following clu_quorum output.
# clu_quorum
Cluster Quorum Data for: cluster01 as of Thu Jul 28 14:08:12 BST 2005

Cluster Common Quorum Data
  Quorum disk: dsk4h
  File: /etc/sysconfigtab.cluster
    Attribute         File Value
    expected votes    3

Member 1 Quorum Data
  Host name: node01.lynx.co.uk   Status: UP
  File: /cluster/members/member1/boot_partition/etc/sysconfigtab
    Attribute         Running Value   File Value
    current votes     3               N/A
    quorum votes      2               N/A
    expected votes    3               3
    node votes        1               1
    qdisk votes       1               1
    qdisk major       19              19
    qdisk minor       155             155

Member 2 Quorum Data
  Host name: node02.lynx.co.uk   Status: UP
  File: /cluster/members/member2/boot_partition/etc/sysconfigtab
    Attribute         Running Value   File Value
    current votes     3               N/A
    quorum votes      2               N/A
    expected votes    3               3
    node votes        1               1
    qdisk votes       1               1
    qdisk major       19              19
    qdisk minor       155             155

Member 3 Quorum Data
  Host name: node03.lynx.co.uk   Status: DOWN
  File: /cluster/members/member3/boot_partition/etc/sysconfigtab
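For reference, those numbers line up with the quorum formula as I
understand it, quorum votes = trunc((expected votes + 2) / 2). Here's a
quick sketch of that arithmetic (plain Python, purely my own
illustration of the formula, not output from any cluster tool):

def quorum_votes(expected):
    # TruCluster (as I understand it) rounds (expected + 2) / 2 down
    return (expected + 2) // 2

def has_quorum(current, expected):
    return current >= quorum_votes(expected)

# My setup: expected votes = 3;
# current votes = 2 node votes + 1 quorum disk vote = 3
print(quorum_votes(3))    # 2
print(has_quorum(3, 3))   # True  - both nodes and the qdisk up
print(has_quorum(2, 3))   # True  - one node shut down, qdisk still voting
print(has_quorum(1, 3))   # False - a single remaining vote loses quorum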
I now want to remove the third member from the cluster altogether.
Does clu_delete_member take account of the manual vote adjustments I've
made, or will it try to adjust the expected votes based on the file
value of the third node, which would (I think) make me lose quorum?
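To make the worry concrete, here's the same arithmetic for a few
expected-votes values clu_delete_member might conceivably write back.
These are hypothetical what-ifs on my part, using the same
trunc((expected + 2) / 2) assumption as above:

for expected in (2, 3, 4):
    quorum = (expected + 2) // 2
    for current in (3, 2, 1):
        state = "quorum OK" if current >= quorum else "quorum LOST"
        print(f"expected={expected} quorum={quorum} "
              f"current={current}: {state}")

If it were to write expected votes back up to 4, quorum becomes 3, and
shutting down either of the two remaining nodes (leaving 2 votes) would
lose it.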
In case it's relevant, it's not an option to bring the third node back
up, but its boot_partition is mountable.
Cheers,
Rob