[Original (partially wrong) query was how to add disks to an HZ70 RAID
array and which procedure to follow]
Well,
first of all thanks to all those who responded:
"John F. Harvey" <jharvey_at_eccnet.eccnet.com>
Alan Davis <Davis_at_Tessco.Com>
Alan Rollow <alan_at_nabeth.cxo.dec.com>
"Eliahu, Ronny" <Ronny.Eliahu_at_disney.com>
Peter Reynolds <PReynolds_at_synstar.com>
anthony.miller_at_vf.vodafone.co.uk
amongst them they clarified the issue, but sadly the first thing that
got clarified was that I had one piece of hardware and the manuals
for a different one... Basically I don't have an HZ70 (even though I
have all the manuals in the world for it) but a BA310, which is a
different beast, and no manuals for it. No wonder I couldn't figure
out the examples; I should have continued enjoying my 3000/300s...
This beast is a stand-alone tower with seven slots, all of which are
now populated.
Adding disks to it is trivial: with the array on, just slot them
in. Then you have to tell the RAID array what to do with them but,
once you get the correct manuals, it is relatively easy. I used
RUN CFMENU
from the serial console (other maintenance was happening on the 8200
so I couldn't use hszterm).
Fundamentally I had no requirement to keep the service up so I simply
backed up the previous data onto DLT, deleted the RAIDset and
re-created it with seven disks instead of four. All this is rather
trivial using CFMENU and the introductory manual since it just walks
you through the menus.
Once that was done there was nothing more to do except disklabel and
restore from DLT.
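For the record, the same rebuild can be driven from the bare CLI
instead of CFMENU. A rough sketch of the commands I mean (the RAIDset
name R1, the unit number D100 and the DISKxyz device names are just
examples; SHOW DEVICES tells you what your shelf actually calls them):

```
HSZ> SHOW DEVICES
HSZ> DELETE R1
HSZ> ADD RAIDSET R1 DISK100 DISK110 DISK120 DISK200 DISK210 DISK220 DISK300
HSZ> INITIALIZE R1
HSZ> ADD UNIT D100 R1
```

INITIALIZE destroys whatever was on the container, which is why the
DLT backup comes first.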
Now, a couple of interesting issues were raised by Alan Davis and Alan
Rollow concerning the performance and data-integrity of my
configuration. First of all, with seven out of seven disks in the
array I have no spareset defined, i.e. if a disk dies there is no
hot-spare ready for the controller to start using. Secondly, the
seven disks are actually split four-three across the two internal
SCSI buses of the BA310. In the concise words of Alan Rollow:
re: Performance of multiple array members on a single bus.
For most applications, probably not bad. SCSI can only have a
single transfer from/to a device active at a time (per bus). If
the load is such that two (or more) devices could have transferred
at the same time, one has to wait for the other to finish. I'm
sure one could find a work-load where the difference was noticeable
and another where it wasn't. The single-point-of-failure argument
(two array members on the same bus can't protect from a bus failure)
is a better reason not to have arrays with more than one member per
bus.
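Alan's point about one transfer at a time per bus can be put into
rough numbers. A toy model (the 20 MB/s fast-wide SCSI figure and the
even spread of members over buses are my assumptions, not
measurements):

```python
# Toy model: members sharing a SCSI bus serialize their transfers,
# members on separate buses proceed in parallel.

BUS_MBPS = 20.0  # assumed fast-wide SCSI bandwidth, MB/s


def transfer_time(chunk_mb, members, buses):
    """Worst-case time for every member to move chunk_mb, assuming
    members are spread evenly over the buses and transfers on a
    shared bus run one after another."""
    per_bus = -(-members // buses)        # ceiling division
    return per_bus * chunk_mb / BUS_MBPS  # seconds


# Seven members split 4-3 over two buses vs. one member per bus:
print(transfer_time(1.0, 7, 2))  # busier bus serializes 4 transfers
print(transfer_time(1.0, 7, 7))  # fully parallel
```

As Alan says, whether this shows up depends entirely on whether the
workload actually keeps several members busy at once.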
Alan Davis:
Your performance over one SCSI interface to the RAID box is mostly
dependent on what type of controller you have. The KZPSAs are slow
compared to the capability of the HSZs with cache. The newer KZPBA-CB
has much better throughput. The dual-channel characteristics of the
HSZ simply increase the total number of disks that you can hang off
one and allow the dual-HSZ failover to work. It doesn't increase the
overall throughput of the single SCSI interface from the system.
Configuring redundant systems is fairly complex, with tradeoffs made
for speed vs. safety. For maximum safety with the least performance
degradation, you would use two pairs of HSZs, two SCSI interfaces to
the system, stripe the disks on each pair of HSZs and mirror across
the HSZs using LSM. This eliminates the single points of failure from a
hardware perspective. It still doesn't address software or filesystem
corruption, however. That's a topic for another time.
You might look into using the partition command to create different
targets for the OS from a single RAID set; that saves one disk per
target vs. creating different RAID sets, where you lose a disk each
for parity.
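The saving is easy to check with RAID-5 arithmetic (assuming
equal-size disks and one disk's worth of parity per RAIDset):

```python
def usable_disks(disks_per_set, sets):
    """Usable capacity, measured in disks, of RAID-5 sets: each set
    gives up one disk's worth of space to parity."""
    return sets * (disks_per_set - 1)


# Seven disks as one RAIDset (then partitioned into several units)
# versus split into a 4-disk and a 3-disk RAIDset:
print(usable_disks(7, 1))                       # one set: 6 usable
print(usable_disks(4, 1) + usable_disks(3, 1))  # two sets: 3 + 2 = 5
```

So partitioning one big set keeps a disk of capacity that two
separate sets would spend on a second parity stripe.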
You should allocate at least two spare disks for a large RAID set,
and one each for smaller ones. Without allocating spares you are
still at risk of downtime, even with RAID.
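On the HSZ CLI the spareset is just another container you add disks
to; a sketch along these lines (disk names are again examples, and
the AUTOSPARE setting depends on your HSOF version):

```
HSZ> ADD SPARESET DISK310
HSZ> ADD SPARESET DISK320
HSZ> SET FAILEDSET AUTOSPARE
```

In my case that would have meant stopping at six array members, which
is exactly the trade-off Alan is pointing out.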
Hope this helps...
Arrigo
Received on Fri Apr 16 1999 - 18:03:40 NZST