SUMMARY: disaster recovery question

From: Izzet Ergas <izzet.ergas_at_citrix.com>
Date: Thu, 28 Sep 2000 18:01:51 -0400

Many thanks to Philip.Ordinario_at_bmo.com & Oisin McGuinness for responding.
Further, I bow down to the superior knowledge of
randy.rodgers_at_ci.ft-wayne.in.us and Paul.Sinclair_at_KSCL.com, who were kind
enough to send step-by-step instructions. I haven't had a chance to test
Paul Sinclair's procedure yet, but it looks good.

The resolution was to modify the root filesystem while still running off the
install CD shell, so that the boot disk encapsulation was removed. I was then
able to boot into single-user mode off the restored disk and restore the rest
of the filesystems.

Randy Rodgers' extremely detailed instructions on removing the encapsulation
follow:

Once you have restored the root file system, you have to do the
following to remove indications of LSM system disk encapsulation:

* Edit /etc/fstab, changing any references to LSM volumes to the
corresponding system disk partitions:

Example:
    change: /dev/vol/rootdg/swapvol
    to:     /dev/rz0b

Also remove any secondary swap entries that reference LSM volumes.
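
For reference, a full swap line would change from something like this (field
layout assumed from a typical 4.0F /etc/fstab; yours may differ):

    /dev/vol/rootdg/swapvol swap1 ufs sw 0 2

to:

    /dev/rz0b swap1 ufs sw 0 2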

* Remove the following LSM entries from /etc/sysconfigtab (should be
near the end of the file):

lsm_rootdev_is_volume = 1
lsm_swapdev_is_volume = 1

(Changing the settings to 0 also works)
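
In sysconfigtab these attributes normally live under the lsm: stanza, so
after that edit the end of the file would look something like this (a sketch
of the set-to-0 variant, assuming the usual stanza layout):

lsm:
        lsm_rootdev_is_volume = 0
        lsm_swapdev_is_volume = 0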

* Change the AdvFs links that reference LSM volumes to system disk
partitions:

# cd /mnt/etc/fdmns/root_domain
# rm rootvol
# ln -s /dev/rz0a rz0a
# cd ../usr_domain
# rm vol-rz0g
# ln -s /dev/rz0g rz0g

(The above assumes the system disk is rz0...substitute the
appropriate disk)

* Change the swapdefault link that references an LSM volume to the
system disk partition:

# cd /mnt/sbin
# rm swapdefault
# ln -s /dev/rz0b swapdefault
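
A quick sanity check on both sets of links before rebooting (plain ls,
nothing LSM-specific) — each symlink should now point at a /dev/rz0*
partition rather than at anything under /dev/vol:

# ls -l /mnt/etc/fdmns/root_domain /mnt/etc/fdmns/usr_domain /mnt/sbin/swapdefault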

* If rootdg ONLY contains the mirrored system disks, remove the
volboot file:

# rm /mnt/etc/vol/volboot

Once you have done the above, you should be able to boot the new
system disk to single-user mode and restore the /usr file system. Then
you can follow the normal procedure to encapsulate and mirror the
system disk.
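
A sketch of that boot-and-restore step, mirroring the root restore shown at
the end of this message; the console disk name dka0 and dump file usr.dmp are
assumptions to substitute, and if usr_domain already has an entry under
/etc/fdmns you may need to remove it before mkfdmn will run:

>>> boot -fl s dka0
# mount -u /
# mkfdmn /dev/rz0g usr_domain
# mkfset usr_domain usr
# mount -t advfs usr_domain#usr /usr
# vrestore -vxf usr.dmp -D /usr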

Paul Sinclair wrote in with detailed instructions for another way to attack
this problem. I did not use this method, but it does seem useful, so I'll
include it.

Boot off the CD:
>>> boot dka400 (or whatever)

Make device files for the disks, tapes, floppy, etc.:
# cd /dev
# MAKEDEV rz0 rz1 fd0 tz5

Make the character special files for the LSM driver (major number 41):

# mknod volconfig c 41 0
# mknod volevent c 41 1
# mknod voliod c 41 2
# mknod volinfo c 41 3
Then you have to do a little mfs trick to make the /etc/vol directory
writable, since the CD's /etc is read-only:
# cd /
# mkdir /var/mnt1
# mfs -s 10000 /var/mnt1
# cd /etc/vol
# tar -cvf - . | ( cd /var/mnt1; tar -xvf -)

# cd /
# mfs -s 10000 /etc/vol
# cd /var/mnt1
# tar -cvf - . | ( cd /etc/vol; tar -xvf -)

So you have just copied /etc/vol onto a memory file system, mounted another
memory file system over the original /etc/vol directory, and copied
everything back.

Now you have to set the host name, as LSM needs the host name:

# hostname mybox

So now you can start LSM:

# vold -m disable
# voliod set 2

Initialise the volboot file in memory:
# voldctl init
# voldctl add disk rz0d
# voldctl add disk rz1d

Now you can enable vold:

# voldctl enable

Start the root volume:

# volume start rootvol &

After the resync completes, the root volume should be OK.
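
While the resync runs, volume state can be watched with volprint, the same
tool mentioned below for capturing the configuration (flags worth verifying
against the man page):

# volprint -ht rootvol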

Now, these instructions are really for booting off the CD and starting LSM,
but you could use them for your predicament; after this you would be able to
mount the other volumes.
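
A sketch of what mounting one of those volumes might look like, assuming
/usr is an AdvFS fileset in usr_domain on volume vol-rz0g (both names are
assumptions), and repeating the mfs trick on /etc/fdmns since the CD's /etc
is read-only:

# mfs -s 10000 /etc/fdmns
# mkdir /etc/fdmns/usr_domain
# ln -s /dev/vol/rootdg/vol-rz0g /etc/fdmns/usr_domain/vol-rz0g
# volume start vol-rz0g
# mkdir /var/usr
# mount -t advfs usr_domain#usr /var/usr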


I know this is not exactly what you asked, but it is close, so I hope this
helps.
You would also have to restore the volume configuration using volrestore;
the configuration could be saved using volprint -htb (or something like
that; you will have to man it for the actual syntax!).
Your backup would not include the volume configuration unless you had saved
it to the default location!
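
That argues for capturing the LSM configuration as part of the DR plan while
the system is healthy. A sketch, taking volsave's default save area and the
volprint flags above on trust (check both man pages):

# volsave
# volprint -htb > lsm-config.txt

Copy the output somewhere off the box along with your dumps.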

-----Original Message-----
From: Izzet Ergas [mailto:izzet.ergas_at_citrix.com]
Sent: Thursday, September 28, 2000 12:34 PM
To: 'tru64-unix-managers_at_ornl.gov'
Subject: disaster recovery question


I have a 4.0F system with the boot disk mirrored using AdvFS & LSM
encapsulation. I'm trying to put together a disaster recovery plan for this
system and I've hit a stumbling block. I have to be able to restore the
entire system to a different server with identical hardware. I've gotten to
the point where I've been able to restore the root partition by booting off
the install CD. Here's what I've done so far:

Boot from 4.0F install CD & go into shell
# disklabel -Rr -t advfs rz0 BACKUP_COPY_OF_LABEL RZ2DD-LS
(restore disk label)
# mkfdmn /dev/rz0a root_domain
# mkfset root_domain root
# mount -t advfs root_domain#root /mnt
# vrestore -vxf root.dmp -D /mnt

And now I'm stuck. I boot into single-user mode from the freshly restored
root partition to restore the rest of the filesystems. I can't mount the
other filesystems in rootdg since they're in LSM and LSM isn't running. I
can't unencapsulate them, since the only procedure I've found to do so
requires the system to be in multi-user mode.

Can anyone help? I need to either a) restore dumps without unencapsulating
the root disk, or b) unencapsulate the root disk in single-user mode.

Thanks in advance.