SUMMARY: HSG80/43x4 Performance

From: Thomas, Douglas L. <dthomas_at_glgt.com>
Date: Wed, 06 Mar 2002 12:59:01 -0500

Thanks to all for your contributions. Below I have included my original
question and the replies I received, since paraphrasing them would only
dilute their content.

The consensus is that the more SCSI busses and disk spindles you use, the
better the performance will be. In my scenario, the 4354 option would appear
best, but it risks losing a RAID set if a chassis holding two of its members
fails, even though those disks may be on different SCSI busses. Since I have
room in the budget, I will get six 4314's and fill them with only 7 disks
each. If I reach a point where I need more capacity, I can evaluate
performance then and determine whether I can add disks to this SAN or need to
build a new one. This offers me the most flexibility and security all around.
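
To make sure I understood the layout I am aiming for, I sketched it in a few
lines of Python. The shelf and disk names are made up for illustration; they
are not real HSG80 device names.

# Rough sketch of the planned layout: six single-bus 4314 shelves with
# seven disks each, grouped into six-member RAID-5 sets so that no shelf
# (and therefore no back-end SCSI bus) holds more than one member of any
# set.  Shelf/disk names are illustrative only, not HSG80 device names.

SHELVES = 6          # one 4314 shelf per back-end SCSI bus
DISKS_PER_SHELF = 7  # leaves 7 slots per shelf free for future growth

shelves = [[f"shelf{s + 1}-disk{d + 1}" for d in range(DISKS_PER_SHELF)]
           for s in range(SHELVES)]

# Take one disk from each shelf per RAID set: seven sets of six members.
raidsets = [[shelves[s][d] for s in range(SHELVES)]
            for d in range(DISKS_PER_SHELF)]

for i, members in enumerate(raidsets, start=1):
    print(f"RAID set {i}: {', '.join(members)}")

# Sanity check: losing any single shelf costs each set at most one member.
for members in raidsets:
    assert len({m.split('-')[0] for m in members}) == SHELVES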

Original question:

I'm putting together an HSG80/43x4 SAN for use by our Tru64 5.1a servers and
WinNT/2K servers. Filling three 4314's with 36G disks should provide more
than enough capacity for our environment, but I am more concerned with
performance of the SAN.

Would I see a great performance gain by using Dual Bus 4354's rather than the
Single Bus 4314's?

Compaq makes it sound like you could connect a ton of servers to the SAN and
it would still be faster than Direct Attached Storage. I have mostly Oracle
running on Tru64 and a mix of custom apps, as well as Exchange, Citrix, and
such running on WinX. Most of the WinX servers will only require a few
Gigs, so I foresee several WinX boxes hitting the same disks. I am
concerned that if I go with the 4314's I will generate too much bus
activity and cause access degradation.

Any real world perspectives would be appreciated.

************************************************************************************************
Replies:

alan_at_nabeth.cxo.cpqcorp.net said:

        It depends on your I/O load. It always does.

        The back-end SCSI adapter used in the HSG80 is UltraFast
        and wide. In a moment of perfect data transfer it can
        move data at a rate that resembles 40 MB/sec. There are
        six such adapters on the back end, which if all had such
        perfect moments at the same time would resemble 240 MB/sec.

        One modern SCSI disk may be able to transfer data at that
        speed in a burst, and might be able to sustain close to it.
        A couple of disks would have an easier time saturating the
        back-end bus. 7 in the 14-slot shelf would probably be
        fighting over the limited bandwidth. 10 or 14, going with
        single-bus shelves, would be in worse shape than 7.
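
[A back-of-the-envelope version of that argument, in Python. The arithmetic
is mine, not Alan's; it uses the roughly 40 MB/sec-per-bus figure above and
the 20-30 MB/sec per-disk streaming figure quoted later in this summary.]

# Bus saturation check: ~40 MB/s per back-end SCSI bus versus the
# aggregate streaming demand of the disks sharing that bus.
BUS_MB_S = 40.0

for disks in (7, 14):          # half of a dual-bus 4354 vs. a full single-bus 4314
    for per_disk in (20.0, 30.0):
        demand = disks * per_disk
        print(f"{disks:2d} disks at {per_disk:.0f} MB/s -> "
              f"{demand:.0f} MB/s offered vs {BUS_MB_S:.0f} MB/s bus "
              f"({demand / BUS_MB_S:.1f}x oversubscribed)")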

        Now, *bandwidth* isn't the only I/O load of interest. Your
        application mix might be more interested in lots of relatively
        small I/Os that won't come close to saturating the bandwidth
        available from the back-end busses. But all those I/Os will
        produce a lot of spindle contention, which might be better
        served by more disks. Having twice as many disks available,
        such as on a single-bus shelf, might be able to support the
        I/O load with reasonable response times. Having the extra
        capacity might allow using less of it to improve seek times
        on the individual disks even more (short stroking).
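
[To put a rough number on the spindle-count side of this: the per-spindle
rate below is my own assumption for 10K/15K rpm disks, not a figure from
Alan or Compaq.]

# Rough aggregate random-I/O capacity from spindle count alone, assuming
# roughly 100-150 small random IOPS per spindle (illustrative assumption).
for spindles in (14, 28, 42):
    low, high = spindles * 100, spindles * 150
    print(f"{spindles:2d} spindles: roughly {low}-{high} random IOPS "
          f"before queueing and seek contention dominate")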

        If an HSG80 pair could use the full bandwidth of all four
        fabric side ports at the same time (I don't know if it can
        or not), this would be on the order of 400 MB/sec. This
        ignores the possibility that the cache and internal data
        paths may not have that much bandwidth available. On the
        other hand, the HSG80 has been well tuned for I/O loads
        involving lots of relatively small I/Os (that can't saturate
        the fabric side ports or back-end busses).

        There are corners of Compaq that stopped paying attention to
        the performance capacity of parallel SCSI when mere UltraFast
        was current. That was in the 40 MB/sec days. Today, Ultra-3
        and faster make 1 Gb Fibre Channel look downright slow.

************************************************************************************************
Matt Morris said:

It depends on how you set up your RAIDsets. Ideally, each RAIDset should
have a maximum of one disk per SCSI bus to maximize performance. This isn't
always possible, but there are other things you can do to help performance.


Maximizing the number of SCSI busses is definitely desired.

Also note that you get much better performance from 15K rpm disks than from
10K rpm disks. All the Compaq storage classes say that you get more of a
performance gain by increasing your drive speed than by anything else (I have
never seen hard numbers outside the Compaq class books). This assumes that
your performance bottleneck is at the SCSI interface between the drives and
the HSG80 controllers; with Fibre Channel on the host side (I assume FC-SW,
not FC-AL) this is probably correct.
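
[The rotational-latency part of the drive-speed argument is easy to quantify
with simple arithmetic; this is basic disk math, not a Compaq benchmark.]

# Average rotational latency is half a revolution: 60,000 ms per minute
# divided by the spindle speed, divided by two.
for rpm in (10_000, 15_000):
    latency_ms = (60_000 / rpm) / 2
    print(f"{rpm:>6} rpm: {latency_ms:.1f} ms average rotational latency")
# 10K rpm -> 3.0 ms, 15K rpm -> 2.0 ms per random access, before seek time.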

Also keep in mind that you should not place all your high-I/O StorageSets on
the same SCSI buses if you can help it. Try to mix them up.

How many RAIDsets are we talking about? What size are they? What RAID
level? Is this mostly read, mostly write, or an even read/write mix?

************************************************************************************************
Raul Sossa said:

Well, I can tell you that I have a 3 x 4354R MA8000 local SAN with 42 x 36GB
hard drives (about 1.5TB of data). I have a TruCluster 5.x configuration, and
performance is really very good.

Two AlphaServers ES40.
Oracle8i with two instances (running at the same time, one on each node).
1GB Oracle SGA.
4GB RAM on each node.
4 x 500MHz processors.
Tru64 UNIX 5.1 + PK4.

I really don't like 42xxx single channel configurations.

If you have a dual-channel configuration, you can dedicate one controller
to a Tru64 UNIX SAN servers zone and the other controller to a Windows SAN
servers zone (this will be faster).

************************************************************************************************
Blair Phillips said:

I'd go for 6 4314 shelves.
Reason: you can then use 6 member RAID-5 sets with no more than 1 member per
shelf.
If you lose a shelf for any reason, you haven't lost a RAID set.

I would avoid putting more than 1 member of a RAID-5 set on a single SCSI
bus.
The only disabling failures I've seen in HSx series controller based systems
have been as a result of a SCSI glitch taking out an entire SCSI bus, which
came good after the bus was reset, but only after much reconstruction of
RAID sets. And a 6 x 36GB RAID set takes quite a while to reconstruct!
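
[To put a rough number on "quite a while": the rebuild rates below are my
assumptions for illustration, not measured HSG80 figures; real rebuilds
compete with host I/O and can be slower.]

# Rough reconstruction-time estimate for one failed 36 GB member of a
# RAID-5 set, at a few assumed background rebuild rates.
MEMBER_GB = 36
for rebuild_mb_s in (2, 5, 10):
    hours = MEMBER_GB * 1024 / rebuild_mb_s / 3600
    print(f"at {rebuild_mb_s:2d} MB/s: about {hours:.1f} hours")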

A solution using 3 x 4354 shelves is better, but while an individual shelf
is unlikely to go entirely dead, it does represent a single point of failure
if it contains more than 1 member of a RAID-5 set.

The incremental cost of six 4314 vs. three 4354 shelves should be justified by
the extra redundancy provided.

************************************************************************************************
Martin Petder said:

> Would I see a great performance gain by using Dual Bus 4354's rather than
> the Single Bus 4314's?

Nope. What matters is the number of disks you use per SCSI channel, not
the number of empty slots. Using more than 4-5 disks per channel would
probably (depending on RAID configuration and your applications)
saturate the channel and start hindering performance.

> Compaq makes it sound like you could connect a ton of servers to the SAN and
> would still be faster than Direct Attached Storage. I have mostly Oracle

Again - it depends. It's definitely faster than UW-SCSI (40MB/s), which is
the default on most AlphaServers. It might be faster than U2W-SCSI (80MB/s),
which newer AlphaServers are equipped with. It's certainly faster than the
old Mylex (KZPAC)... But newer RAID controllers (KZPCC and the like) and
quite possibly U3W SCSI controllers (not supported by Compaq at the
time) would definitely give you better throughput performance and
possibly better I/O-rate performance.

With a heavily tuned Oracle and a good RAID configuration on the HSGs you
should expect about 60MB/s read performance, 40MB/s write performance, and
about 7-8k IOPS. And that's _total_ per HSG80 controller.
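
[Taking those per-controller figures at face value, a quick budget check.
The per-server demand numbers below are placeholders for illustration, not
measurements from my environment.]

# Quick sanity check against the quoted per-HSG80 ceilings of roughly
# 60 MB/s reads and 7-8k total IOPS.  Per-server demands are placeholders.
CONTROLLER_READ_MB_S = 60
CONTROLLER_IOPS = 7000

servers = {
    "oracle-tru64": {"read_mb_s": 25, "iops": 2500},  # placeholder demand
    "exchange":     {"read_mb_s": 5,  "iops": 800},   # placeholder demand
    "citrix-misc":  {"read_mb_s": 5,  "iops": 500},   # placeholder demand
}

total_read = sum(s["read_mb_s"] for s in servers.values())
total_iops = sum(s["iops"] for s in servers.values())
print(f"offered read load: {total_read} MB/s of {CONTROLLER_READ_MB_S} MB/s")
print(f"offered IOPS:      {total_iops} of {CONTROLLER_IOPS}")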

> running on Tru64 and a mix of custom apps, as well as Exchange, Citrix, and
> such running on WinX. Most of the WinX servers will only require a few
> Gigs, so I foresee several WinX boxes hitting the same disks. I am
> concerned that if I go with the 4314's I will generate too much bus
> activity and cause access degradation.

Again, it all depends on how many actual disks you use... :)
But using the same disks for different servers' heavily used partitions
(esp. on file servers and probably Exchange) might not be such a good
idea... :)

If funding allows, you might be interested in getting the newer MSA1000 to
handle your file serving and other minor Wintel applications, and leave the
HSG80-based storage to serve your databases.

************************************************************************************************
Udo de Boer said:

Each HSG80 has six SCSI busses, and spreading the load over these six
busses will make it faster. So use the dual-bus feature or six single-bus
shelves.

Performance, hmmmm. A single SCSI bus inside the HSG80 can deliver about
40 megabytes per second. A single FC channel can deliver about 100
megabytes per second. A single modern SCSI disk can do about 20 to 30
megabytes per second. All these figures are maximums while streaming large
files. But everything changes when doing small reads or writes. Then
maximum throughput is not what matters; it is transactions per second. The
HSG80 is fast at that, especially when you use RAID. It has a large
cache for both reading and writing. Also, it can use two of its four
ports to serve the disks to the SAN. (Or was it four ???)


It will probably be more than fast enough for you. It will be faster
than a bunch of SCSI disks on each system.

PS: How do you want to share the disks with Windows? I don't think there
is a solution where one disk can really be shared the way TruCluster does it.
With TruCluster, one disk can be accessed directly over Fibre Channel (or
SCSI) by more than one node. In Windows, as far as I know, the disk is
accessed by one node, which serves the disk to the other nodes.

************************************************************************************************

Douglas L. Thomas
Unix Administrator
Great Lakes Gas Transmission
dthomas_at_glgt.com
Received on Wed Mar 06 2002 - 18:02:05 NZDT
