SUMMARY: LSM vs StorageWorks RAID 0+1

From: Judith Reed <jreed_at_AppliedTheory.com>
Date: Mon, 04 Aug 1997 14:36:13 -0400

My query:
  --------------------------------------------------------
We have a StorageWorks array of mirrored disks that we want to use
for large (larger than a single disk) filesystems. Does anyone know of
any informative whitepapers or other online resources discussing the
management of a RAID 0+1 configuration, or of similar resources about
using LSM to create large partitions across multiple disks?

The speed advantage of striping is a given. What I'm wondering about
is manageability, backup and recovery, and reconfigurability in a
production environment with minimal downtime allowances.
  ---------------------------------------------------------
I received a number of informative responses. We decided to go with RAID 0+1
on the SWXCR controller for speed and ease of use, but other options made
equally good sense, as described below. Thanks to all...
------------------------------------------------------------------------------
* From my experience (and I worked for Digital for a long time installing
large systems), I would recommend doing the striping on the controller.
If you have multiple SCSI buses, then I would recommend doing the
mirroring through LSM across SCSI buses. This removes the controller
as a single point of failure for the storage. The performance penalty
for using LSM rather than hardware for mirroring is quite low. We tend
to use hardware striping to make large striped disks, then use LSM for
mirroring, carving the disk space into multiple LSM volumes to break
the storage into smaller chunks where needed (e.g. in a database
environment).
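
As a rough sketch of that layout, assuming LSM is already initialized
and the two hardware stripe sets have been added to rootdg as disk01
and disk02 (hypothetical names), each sitting on a different SCSI bus:

    # Volume on the first bus, mirrored by LSM onto the second bus.
    volassist make dbvol 4g disk01
    volassist mirror dbvol disk02

    # Carve further volumes out of the same mirrored pair as needed.
    volassist make dbvol2 2g disk01
    volassist mirror dbvol2 disk02
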
* The key differences between RAID 0+1 under LSM and
        under an array controller are:

        o One is host based and the other is controller
           based.
        o The array controller does striped mirror sets, while
           LSM does mirrored stripe sets.

        The controller's advantage on the first point is that
        the CPU load of mirroring is handled entirely by the
        controller, and the data is moved to the controller only
        once. The disadvantages are that the controller's
        performance is limited by the single I/O adapter that
        connects it to the host, and that single adapter is a
        potential point of failure.

        The host-based advantages are that you can spread the
        devices among more I/O adapters, both for performance
        and to limit failure points. The disadvantage is that
        whatever CPU load mirroring requires falls on the host,
        and the host must move a copy of the data for every
        member of the array, so the bandwidth load scales with
        the number of members.

        The controller has the additional advantage of supporting
        hot spares, which I don't think LSM supports. Having used
        both, I find them equally inconvenient to use from a
        management standpoint. LSM's GUI does make many of the
        management tasks easy, but getting started is hard.

        I've never had to use either in a disaster, though I came
        close with LSM once. The documentation on disaster recovery
        is adequate, though barely so.

        The second point is the more interesting one. To build a
        RAID 0+1 on the HSZ family of StorageWorks controllers, you
        build mirrored devices and then create a stripe set out of
        those. If any single device fails, only part of the stripe
        set is at risk and only that part needs to be copied.
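
        As a sketch of that sequence at the HSZ CLI (the disk,
        container, and unit names here are hypothetical):

            ADD MIRRORSET MIRR1 DISK100 DISK200
            ADD MIRRORSET MIRR2 DISK110 DISK210
            ADD STRIPESET STRIPE1 MIRR1 MIRR2
            INITIALIZE STRIPE1
            ADD UNIT D0 STRIPE1

        If DISK100 fails, only MIRR1 is degraded, and only its
        contents need to be copied to a replacement disk.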

        LSM takes the opposite approach. It creates stripe sets at
        the lowest layer and then mirrors those. If one member of
        a mirror fails, then you have to copy all the data in the
        stripe set to regenerate the mirror. I don't completely
        understand the advantage (if any) of the LSM approach.
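
        For comparison, a mirrored stripe set under LSM might be
        built roughly like this (disk media names and the volume
        size are hypothetical):

            volassist make bigvol 8g layout=stripe nstripe=2 disk01 disk02
            volassist mirror bigvol disk03 disk04

        Here, if disk03 fails, the entire second plex is detached,
        and regenerating it means recopying all 8 GB.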

* Given the current Digital products, the best combination of availability
and performance is to mirror with LSM across different SCSI adapters and
use redundant HSZ controllers to perform the striping.