SUMMARY: ADVFS JBOD -->RAID convert ??

From: Rohn Wood <rohn_at_selway.umt.edu>
Date: Mon, 25 Jan 1999 10:50:49 -0700

et al,

The original question was:

> We are working through the migration of an Alpha 2100 4/200 with a
> Mylex SWXCR controller from its present JBOD configuration to a RAID
> config. The most salient question I have is reconciling the old AdvFS
> JBOD config with the new RAID config. The /etc/fdmns directory shows
> the deployment of AdvFS domains across the multiple /dev/reXX devices
> comprising the JBOD configuration. When/if I reimplement the RAID
> config (probably mirrored system devices and RAID 5 on the rest), my
> /etc/fdmns and /etc/fstab are obviously going to be worthless in
> terms of just trying to do a vrestore to the new disk configuration.
>
>     Is there a way to reconcile this situation to allow a vrestore of
> the system from tape, or is this going to require reinstallation of
> DU from scratch, rebuilding the AdvFS domains/filesets and then
> selectively restoring to bring the system back up?
--------------------------------------------------
As would be expected, there is no single method for accomplishing the
task.  Since it is hard to summarize procedures, I have included the
full text of responses below.  I would like to thank the following for
their assistance in this regard:
Bill Anderson
Tom Blinn
Dejan Muhamedagic
Kevin Partin
alan_at_nabeth.cxo.dec.com
Kris Chandresekhar
---------------------------------------------------
Bill Anderson's reply:
There is actually a very good way to do this, either with the
combination of AdvFS and LSM along with the RAID controller, or just
with AdvFS and the RAID controller.  Let's say, for example, you have
the following configuration with AdvFS and JBOD devices: 3 4GB JBOD
drives mounted at 3 different mount points using AdvFS.
Original config:
dom_A#fs_A              /dataA
dom_B#fs_B              /dataB
dom_C#fs_C              /dataC
Now let's say I want to add 3 more drives and make one big RAID 5 set
and keep my same configuration.  Physically I would have one big 20GB
drive ((N-1) x 4GB = 20GB for RAID 5).  What I would now do is create
a single AdvFS domain, dom_New, using the entire 20GB drive (i.e.,
/dev/rz1c).  Now create 3 separate filesets in the SAME domain,
dom_New, and mount them at the 3 mount points you had before.  What
you have is logically the same as you had before, but physically the
data is laid out over the entire RAID 5 set.  Now just restore your
data and off you go.
dom_New#fs_A            /dataA  }
dom_New#fs_B            /dataB  }---- all three of these will share
dom_New#fs_C            /dataC  }     the 20GB unless you hard-code limits
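A hedged sketch of the single-domain approach just described, using
the AdvFS commands mkfdmn and mkfset (the /dev/rz1c device name is
from the example above; exact options may differ by Digital UNIX
version):

    # one domain on the RAID 5 logical drive, then one fileset
    # per old JBOD domain
    mkfdmn /dev/rz1c dom_New
    mkfset dom_New fs_A
    mkfset dom_New fs_B
    mkfset dom_New fs_C
    mount -t advfs dom_New#fs_A /dataA
    mount -t advfs dom_New#fs_B /dataB
    mount -t advfs dom_New#fs_C /dataC

After mounting, each fileset can then be reloaded from its vdump tape
with vrestore.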
Now the second alternative is to use the AdvFS and LSM combination,
which is the solution I prefer, but it is a bit more complex and it
requires you to have a pretty good grasp of LSM.  We will start with
the same example as above.  The first step is to put the entire 20GB
under LSM control.  Using LSM you would partition the 20GB and create
logical volumes (not AdvFS volumes) based on the sizes you require.
For example, I want 10GB for /dataA, 5GB for /dataB, and 5GB for
/dataC.  These devices will show up as /dev/vol/volA, /dev/vol/volB,
/dev/vol/volC (I just made up the volA, B, and C names; you can pick
whatever you want).  Now just make AdvFS domains out of these devices
just like you did with JBOD.
dom_A#fs_A              /dataA
dom_B#fs_B              /dataB     ---- again, all three of these file
dom_C#fs_C              /dataC          systems are spread over the RAID 5 set.
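The LSM variant might look like the following sketch.  The command
names follow Tru64's Veritas-derived LSM, but the disk group handling
and the volume sizes are assumptions to be checked against the LSM
manual for your release:

    # initialize the 20GB RAID logical drive for LSM and carve it up
    voldisksetup -i rz1
    volassist make volA 10g        # 10GB for /dataA
    volassist make volB 5g         # 5GB for /dataB
    volassist make volC 5g         # 5GB for /dataC
    # one AdvFS domain per LSM volume, keeping the old names
    mkfdmn /dev/vol/volA dom_A
    mkfset dom_A fs_A
    mount -t advfs dom_A#fs_A /dataA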
Now let's work on mirroring the system disk.  This should be easier,
since the mount structure needs to be exactly the same.  Again we
have two ways to do this: the first is just using the RAID
controller, and the second is using LSM.
In the first case, just MIRROR the system disk and restore
accordingly; /, /usr and /usr/var will be the same.
For the second case, you can use LSM to MIRROR the system drive.
Just add the disk as another JBOD on the SWXCR, and follow the steps
for mirroring the system disk in the LSM manual.
Since the mirror is going to happen on a single controller, i.e. the
SWXCR, I would go with letting the SWXCR do the mirroring.  However,
if you wanted higher availability and you had another controller or
SWXCR, I would use software mirroring and mirror the disks across
both controllers; that way you can survive not only a disk failure
but also a controller failure.
---------------------------------------------------
Tom P. Blinn's reply:
For each domain that is presently a set of JBOD disks (that is, there
is a domain name directory under /etc/fdmns full of symlinks to
/dev/re device names), you can simply remove the directory that has
the same name as the domain (e.g., if the domain is "jbod", then
rm -rf /etc/fdmns/jbod as root) and the domain (and all the
information about how to find the volumes and thus find the filesets)
will be gone.
Then, once you've established the RAID set, and got the right device
name for it set up in the system, and made sure it's got a valid disk
label, and so on as necessary to make it a valid device to hold an
AdvFS domain, you can just use mkfdmn to create a new domain, and
mkfset to create the filesets, then mount each fileset and use
vrestore to reload the data into it.
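Put together, the teardown and rebuild described here would look
roughly like this.  This is a sketch only: the domain name "jbod",
the rz1 device and the disklabel type are placeholders, not details
from the original posting:

    rm -rf /etc/fdmns/jbod              # forget the old JBOD-backed domain
    disklabel -rw rz1 SWXCR             # give the RAID device a valid label
    mkfdmn /dev/rz1c jbod               # recreate the domain on the RAID set
    mkfset jbod data                    # recreate the fileset
    mount -t advfs jbod#data /data
    cd /data && vrestore -x -f /dev/tape   # reload from the vdump tape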
This might even work for a root domain, although I personally don't
know for sure, and anyway, the root domain must have only one
fileset, and I have no idea whether you could restore it.  But it
will certainly work for filesets that contain application data.
Tom
 Dr. Thomas P. Blinn + UNIX Software Group + Compaq Computer Corporation
  110 Spit Brook Road, MS ZKO3-2/U20   Nashua, New Hampshire 03062-2698
   Technology Partnership Engineering           Phone:  (603) 884-0646
    Internet: tpb_at_zk3.dec.com           Digital's Easynet: alpha::tpb
     ACM Member: tpblinn_at_acm.org         PC_at_Home: tom_at_felines.mv.net
----------------------------------------------------
alan_at_nabeth.cxo.dec.com's reply:
        What you're doing is little different than recovering
        from total loss of the data.  You can boot from the
        CDROM, create a root file system on the appropriate
        partition, restore from the backup, adjust the links
        to reflect the new configuration of the root, boot
        from that as verification, recreate the remaining file
        systems, mount them and restore.
        The procedure should be described in one of the system
        management documents.  All the core documentation is on
        the documentation CDROM.
        Now, if you happen to have enough spare disk space or can
        borrow spare disks, are using AdvFS for some of the file
        systems and have the AdvFS utilities, you might be able
        to migrate much of the data online.  Run verify on the
        file system you intend to move, add the target device to
        the domain and remove the old device(s) from the domain.
        That will migrate all the data to the new device.  You
        can do this for everything but the root file system.
        If you can keep the root and old root online at the same
        time you can create the target domain, shutdown to single
        user and vdump/vrestore across a pipe to move the data.
        Then you just need to update /etc/fstab and/or the domain
        links to reflect the new configuration.
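The two techniques Alan mentions correspond to commands along these
lines.  This is a hedged sketch: the domain and device names are
invented, and addvol/rmvol belong to the separately licensed AdvFS
utilities:

    # online migration: add the RAID device to the domain, then
    # removing the old volume migrates its data onto the new one
    verify dom_A                  # sanity-check the domain first
    addvol /dev/rz1c dom_A
    rmvol  /dev/rz8c dom_A
    # copying a file system across a vdump/vrestore pipe,
    # run in single-user mode
    vdump -0 -f - /dataA | (cd /new_dataA && vrestore -x -f -)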
-----------------------------------------------------
Kevin Partin's reply:
If I follow you correctly, then all of your disks are JBOD with one
AdvFS domain#fileset per disk.  I assume you are trying to go to a
single RAID disk.  If that is the case, then create one AdvFS domain
for the entire RAID disk and create multiple filesets in the one
domain.  If you are trying to go to one fileset in a single domain,
then you can also create links to simulate the old disk layout.
However, if none of the high-level directories in each of the
existing JBOD filesystems are duplicated, then you can simply
vrestore all the data into the new directory structure without fear
of overwriting anything.  I also believe vrestore has a switch to
override the directory where the data gets restored to.
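The switch in question is most likely vrestore's -D option (an
assumption on my part; check vrestore(8) on your system), which
names the directory the saveset is restored into:

    # restore a vdump saveset into a different destination directory
    vrestore -x -f /dev/tape -D /newraid/dataA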
--------------------------------------------------------
Dejan Muhamedagic's reply:
You should proceed like this:
1.  Make a backup of existing filesystems, those which reside on JBOD
drives; and make sure the backup is OK.
2.  Reconfigure the SWXCR the way you need and create filesystems on
the new volumes (these will be called /dev/reXX as well, but they
will refer to different storage); you'll have to update/change
/etc/fstab and /etc/fdmns accordingly, either by hand or through the
AdvFS tools.
3.  Restore from the backup.
You can do all of this booted from the CD, but it will be faster to
do it with the system disk and then, as a last step, make a copy of
it on the mirrored RAID group.  You should find summaries about
creating a system disk in this list's archive.
------------------------------------------------------
Kris Chandresekhar's reply:
Since you are re-creating your RAID structure, the data on the disks
will be destroyed.  What you want to do is back up the data, delete
the domains, re-create the RAID structure from the SWXCR menu,
re-create the domains and restore the data.  If you want to be
extremely safe, make two backups to tape.
Received on Mon Jan 25 1999 - 18:07:17 NZDT

This archive was generated by hypermail 2.4.0 : Wed Nov 08 2023 - 11:53:38 NZDT