Hello,
I have another problem with a Tru64 Unix 5.0A cluster.
I have a two-node cluster consisting of two ES40 systems
running Tru64 Unix 5.0A and an external RAID system.
The OS and the cluster software are not yet patched.
Tru64 Unix 5.0A was installed as an upgrade
from Tru64 Unix 4.0F and TruCluster 1.5.
Member1 can be booted on its own, forming a one-node cluster.
Member2 can be booted together with member1.
The problem is that member2 cannot be booted
to form a standalone one-node cluster.
With both machines booted, the cluster seems to work.
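(For clarity: the standalone boot attempt is nothing special. Member1 is halted,
and member2 is then booted from its own boot disk at the SRM console, roughly
like this -- the device name below is only a placeholder for member2's boot disk:
    >>> boot dkb0
It is during such a boot that the error quoted under "Result" below appears.)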
===========================================
Using
/usr/sbin/clu_check_config -s clu_check
Information on each cluster member
Cluster memberid = 1
Hostname = mars
Cluster interconnect IP name = marsmc
Cluster interconnect IP address = 10.0.0.1
Member state = UP
Member base O/S version = Compaq Tru64 UNIX V5.0A (Rev. 1094)
Member cluster version = TruCluster Server V5.0A (Rev. 354)
Member running version = INSTALLED
Member name = mars
Member votes = 1
csid = 0x10001
Cluster memberid = 2
Hostname = luna
Cluster interconnect IP name = lunamc
Cluster interconnect IP address = 10.0.0.2
Member state = UP
Member base O/S version = Compaq Tru64 UNIX V5.0A (Rev. 1094)
Member cluster version = TruCluster Server V5.0A (Rev. 354)
Member running version = INSTALLED
Member name = luna
Member votes = 1
csid = 0x10002
*****
***** Output from running cfsmgr -v
*****
Domain or filesystem name = cluster_root#root
Mounted On = /
Server Name = mars
Server Status : OK
Domain or filesystem name = root1_domain#root
Mounted On = /cluster/members/member1/boot_partition
Server Name = mars
Server Status : OK
Domain or filesystem name = cluster_usr#usr
Mounted On = /usr
Server Name = mars
Server Status : OK
..........
Domain or filesystem name = root2_domain#root
Mounted On = /cluster/members/member2/boot_partition
Server Name = luna
Server Status : OK
..........
=====================================================
It is possible to use the cluster functionality to relocate serving of a
file system, for example cluster_root#root, to member2:
cfsmgr -f -h mars -r -a SERVER=luna /
Domain or filesystem name = cluster_root#root
Mounted On = /
Server Name = luna
Server Status : OK
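(To verify the relocation, the file system can be queried again, either with
cfsmgr -v as above or -- assuming cfsmgr also accepts a single mount point as
its argument -- simply with:
    cfsmgr /
The output above shows the result: cluster_root#root is now served by luna.)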
Result:
-------
1.) The second member of the cluster cannot be booted
to form a standalone one-node cluster.
I get the following message:
"cfs_perform_glroot_mount: cfs_mountroot_local failed to mount the cluster root fs with error = 6"
2.) If, while the two-node cluster is running, the first member is taken out of
the cluster by a shutdown, the second member crashes, even if
cluster_root#root, cluster_usr#usr and cluster_var#var
are served by the second member.
What is the reason for this configuration problem?
YS, Barbara Loehle
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Address: Barbara Loehle e-mail: Barbara.Loehle@uni-konstanz.de
University of Constance Phone: +49 7531 882542