How to retain LSM config info after de-encapsulate and copy root disks?

From: John Speakman <speakman_at_biost.mskcc.org>
Date: Tue, 11 Jan 2000 11:24:24 -0500

Hi!

Smallish panic - this is something I have been bugging you guys about forever,
and tonight we finally get our precious downtime window, and of course now I
have just realized something.

Background - 2 node 4100 Production Server cluster, 4.0Epk3, TCR 1.5.
The system disks (root, /usr, /var, swaps) of each system are local, each
on a KZPDA adapter - all other disks are on a shared SCSI bus. Right
now we are using LSM to mirror the system disks (i.e. they have been
encapsulated) and we also use LSM to cut some little tiny disks for drds
(distributed raw disks) that sit on the shared SCSI bus and are used for
our Oracle Parallel Server database.

What we are doing tonight is replacing LSM mirroring of the system disks
with hardware mirroring using KZPAC-XF (aka RAID Array 230/Plus)
controllers. So what we have to do on each system is this (approximately):
1) Break the mirror (volplex -o rm ...)
2) Un-encapsulate the LSM system disk (volunroot -a)
3) Vdump all the information on the system disks to either tape or a disk
on the shared bus
4) Take out the disks, shelf, cable and KZPDA
5) Put in the KZPAC, cable and new shelf, then put the old system disks
back in the (very) vain hope of being able to use RCU to configure without
initializing, boot, see the data on the old system disks, and thus avoid
having to vrestore
6) When (5) fails, take out the old system disks and put them in a safe
place, put new blank disks in the shelf, use RCU to initialize and
configure, then boot off CD-ROM and vrestore the data.
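For the record, steps (1)-(3) would look something like this on each node. The plex and device names below are guesses (check the real plex names with volprint -ht first), and the vdump flags should be verified against the man page before the window opens:

```shell
# Dissociate and remove the mirror plexes from the system volumes
# (plex names rootvol-02 / swapvol-02 are hypothetical)
volplex -o rm dis rootvol-02
volplex -o rm dis swapvol-02

# Un-encapsulate root, swap and /usr, returning them to plain partitions
volunroot -a

# Level-0 vdumps of each system filesystem to tape
# (-0 = full dump, -u = record in /etc/vdumpdates, -f = dump device;
#  the tape device name /dev/nrmt0h is a guess for your drive)
vdump -0 -u -f /dev/nrmt0h /
vdump -0 -u -f /dev/nrmt0h /usr
vdump -0 -u -f /dev/nrmt0h /var
```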

So far so good (sort of). But here's the problem: assuming step (5) fails,
which all of you think it will because of the proprietary way in which the
KZPAC stores its metadata, where does that leave the LSM information?
When you encapsulate the system disks using LSM, it creates a little
"simple"-type partition on the system disk that stores all the LSM
configuration information (i.e., it is in the rootdg disk group).

As recommended by Digital, we have of course got a couple of backup
partitions holding this information (as we can see in /etc/volboot). But
they are all on the system disks that we are going to rebuild, so we'd
lose the LSM information. This would be OK if we were no longer going to
use LSM, but we will still be using it to slice disks for DRDs, and all
that information is also in rootdg. If we lose it, we will (presumably,
right?) lose our Oracle database and thus be in big trouble.
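One belt-and-braces option (assuming I am remembering the tool right) is LSM's volsave/volrestore pair, which writes a description of the whole LSM configuration out to a directory you choose, so a copy could be stashed on a shared-bus filesystem before anything is touched. The path below is hypothetical:

```shell
# Save the current LSM configuration (all disk groups, including rootdg)
# to a directory on a filesystem that survives the rebuild.
volsave -d /shared/lsm.d

# Later, if the worst happens, the saved description could be re-applied:
#   volrestore -d /shared/lsm.d
```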

What we think we can do is set up a disk on the shared SCSI bus and add it
to rootdg. Is this the easiest way to make sure the information is
retained?
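If that is the right approach, the sequence would presumably be something like the following (the device name rz16 and the LSM disk name shared01 are made up; substitute whatever the shared-bus disk is actually called):

```shell
# Initialize the shared-bus disk for LSM use
voldisksetup -i rz16

# Add it to rootdg so it carries a copy of the configuration database
voldg -g rootdg adddisk shared01=rz16

# Register it in /etc/volboot so vold can find a config copy at boot
voldctl add disk rz16
```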

Bonus question: Does the fact that this is a cluster, and that we will do
this upgrade one machine at a time, mean the other system will keep the
information, even if the non-root disk groups are only defined in the
rootdg of one system and not the other? (Come to think of it, I'm not
quite sure how these LSM drd things would manage to fail over at all if
they are only in the rootdg of one system.)

Extra bonus for Digital types only - what is a KZPAC-XF anyway? The
documentation only mentions KZPAC-AA, -CA and -CB.

Thanks - I will gather all this information into one step-by-step summary.

John
Received on Tue Jan 11 2000 - 16:25:26 NZDT
