AdvFS for root and usr, LSM over RAID, and other abominations

From: Richard Sharpe <sharpe_at_ns.aus.com>
Date: Fri, 19 Mar 1999 23:39:06 +1000

Hi,

I recently came across a machine that seems to have been set up very badly.

This machine had two RZ1D (9GB) disks on a SCSI controller, and a KZPSC or
something like that with two re devices configured with about 25GB of disk
each.

Root and usr were set up as AdvFS file systems on one of the 9GB disks. It
looks like someone started setting up the other disk with LSM so they could
mirror the root disk, but could not figure out how to do it and gave up.

Then LSM was applied to the two re disks, which were sliced into 12 volumes
(no mirroring), and an application was added that seems to use all 12
slices.

I would like feedback on this and the other opinions herein.

1. Using AdvFS for the root and usr file systems seems like total nonsense
in a production system.

Its much-vaunted advantages gain you little, because root and usr are
slowly changing file systems, and AdvFS is slower than UFS, though perhaps
not in ways that make a difference.

The biggest problem, however, seems to me to be recovery. Ever had to
recover a root file system that is on AdvFS? You have to boot the CD, make
the devices, create /etc/fdmns and all its subdirectories, and make the
links. Then you can mount the root file system, if it is still OK. You may
actually have to remake it, and so on it goes ... probably more than your
average operations person can handle.
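From memory, the bare minimum to get an AdvFS root remountable from the CD
looks something like the following (device names are examples only; adjust
rz0a to match the partition that actually holds the root domain, and check
advfs(4) before trusting any of this):

  # After booting the installation CD into the shell:
  cd /dev && ./MAKEDEV rz0           # recreate the device special files
  mkdir -p /etc/fdmns/root_domain    # recreate the domain directory
  cd /etc/fdmns/root_domain
  ln -s /dev/rz0a rz0a               # relink the domain's partition
  mount -t advfs root_domain#root /mnt   # mount the root fileset, if intact

Compare that to UFS, where a single fsck and mount gets you there.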

It seems to me that using UFS for root and usr makes for easier recovery,
and using LSM and mirroring root, swap and usr, while making the root disk
more complex, gives you advantages that outweigh the complexity. Lose a
disk, simply plug a replacement in (perhaps after a reboot).

2. LSM should not be used on top of RAID, as it only adds to the path
lengths in the OS without achieving anything.

Even if one needs to slice the storage up into 12 partitions, one could
simply set up two logical disks and then partition each into a, b, d, e, f
and g until one had 12 partitions.
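For example (hypothetical device names, from memory), each hardware RAID
logical disk could be carved up with disklabel, leaving six usable slices
per disk since c is conventionally the whole-disk partition:

  disklabel -e /dev/rre0c    # edit the label: define a, b, d, e, f, g
  disklabel -e /dev/rre1c    # repeat on the second logical disk
  newfs /dev/rre0a           # then make a UFS file system on each slice

No LSM required, and two fewer layers between the file system and the disk.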

However, unless there is a real need for lots of smaller slices, it would
generally be better (and easier to manage) to have one or two large volumes
and use the space as one big pool. Since UFS is in use on each of the
slices, there is nothing dynamic about any of it: need to expand anything,
and you get to rebuild the whole lot.

I would be tempted to use AdvFS here for its ability to add partitions to
an AdvFS file domain, especially with RAID-5 under it. That is, unless
databases have real problems with AdvFS on RAID-5. There is a lot to be
said for proper planning of storage requirements, and for scheduling an
outage where everything is backed up and the file systems are rebuilt on a
larger number of disks.
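Growing an AdvFS domain later is then close to a one-liner. A sketch, with
hypothetical device and domain names (and note that addvol, if I recall
correctly, is part of the separately licensed AdvFS utilities):

  mkfdmn /dev/re0c data_domain           # create the file domain
  mkfset data_domain data                # create a fileset in it
  mount -t advfs data_domain#data /data

  # Later, when more space is needed, add a volume to the live domain:
  addvol /dev/re1c data_domain

No backup/rebuild cycle, no downtime, which is exactly what the 12-slice
UFS layout cannot offer.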


Regards
-------
Richard Sharpe, sharpe_at_ns.aus.com, NIC-Handle:RJS96
NS Computer Software and Services P/L,
Ph: +61-8-8281-0063, FAX: +61-8-8250-2080,
Samba (Team member), Linux, Apache, Digital UNIX, AIX, C, ...
Received on Fri Mar 19 1999 - 12:03:09 NZDT
