We're replacing our 4.0F 8200s and HSZ50 storage with 5.1 ES40s
and HSG80 storage. I've battled through the setup of the Fibre
Channel switches and the storage arrays, installed V5.1 of the O/S
and TruCluster (after the hassle of trading in the "old" ASE
licenses for the "new" TCS ones), applied patch kit 3, and said
good-bye to the device names I know and love (e.g. rmt0h). Now I'm
really wondering whether LSM is set up correctly.

The first node (node1) has an internal disk, and rootdg points to
that device. With the cluster file system, node2 also points to
node1's internal disk for rootdg, and both nodes share the same
/etc/vol/volboot file. Is this the way it's supposed to be, rather
than having a CDSL for that file? The reason I ask is that when I
issue a command on node2 like 'voldisk list', I get this message:
lsm:voldisk: ERROR: Cannot get records from vold: Record not in disk group
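For context, these are the checks I've been running on node2 to see
whether vold is actually up and attached to rootdg. The commands are
standard LSM, but the expected outputs in the comments are my
assumption of what a healthy node shows:

```shell
# Probe the LSM volume daemon state on node2 (Tru64 UNIX V5.1).
# On a healthy node, 'voldctl mode' should report "mode: enabled".
voldctl mode

# Show what vold was bootstrapped with -- this reflects the
# /etc/vol/volboot contents (hostid and disk list).
voldctl list

# Once vold is enabled, this should enumerate the disks instead of
# failing with "Cannot get records from vold".
voldisk list
```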
I'm thinking the volume daemons didn't come up right because they
didn't like the rootdg, but this could be a lack of education on my
part, as version 5.1 is very different from what I'm used to. There
were CDSLs for /etc/vol/vold_diag and /etc/vol/vold_request, but
not for volboot.
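In case it matters, this is how I checked which of those files are
CDSLs (my assumption being that a CDSL shows up as a symlink into
the member-specific area):

```shell
# List the /etc/vol files in question; a CDSL appears as a symlink
# pointing into ../cluster/members/{memb}/...
ls -l /etc/vol/volboot /etc/vol/vold_diag /etc/vol/vold_request

# Verify the system's CDSL inventory as a whole
cdslinvchk
```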
Jeff Beck
jbeck_at_carewiseinc.com
206.749.1878
CareWise Inc.
1501 4th Ave.
Suite 800
Seattle, WA 98101-1629
Received on Mon Oct 15 2001 - 20:18:25 NZDT