Dear managers,
Michael Polnick and Ole Nielsen pointed out that some tools in 5.0A had
problems with disks larger than 1 TB. In fact, disklabel in V5.0A shows a
negative number of blocks in the C slice.
However, I found out that the file system was in fact created with V5.1,
not V5.0A as I reported in my last post. V5.1 reports disk sizes correctly.
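For anyone wondering why the count goes negative: with 512-byte sectors,
anything above 1 TB overflows a signed 32-bit block count. A quick C sketch
of my own (purely illustrative, not the actual disklabel source):

/* Why a >1 TB slice can show a negative block count when the size is
 * kept in a signed 32-bit field: 1.7 TB at 512 bytes per sector is well
 * past the 2^31 - 1 sector limit (~1 TB). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t bytes   = 1700ULL * 1000 * 1000 * 1000;  /* ~1.7 TB disk    */
    uint64_t sectors = bytes / 512;                   /* real block count */
    int32_t  as32    = (int32_t)sectors;              /* wraps on two's-
                                                         complement CPUs  */

    printf("real sectors : %llu\n", (unsigned long long)sectors);
    printf("as 32-bit int: %d\n", (int)as32);         /* negative number  */
    return 0;
}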
Since there was a discussion about the general use of SCSI-to-IDE RAID
systems, I would like to share my own experience. We use four SCSI-to-IDE
RAID systems, all configured as RAID 5 with a hot spare and each with 1.7 TB
capacity. We run Oracle databases on them and the disk I/O is heavy.
However, the content of the databases is publicly available genome data, so
in case of a crash we simply recreate the Oracle instance and re-import the
genome data from the Internet.
Performance: The Infortrend IFT 7200 reads and writes at bus saturation,
about 37 MB/sec, on a 16-bit UW SCSI bus. The same hardware, when attached
to an LVD bus, reads at more than 80 MB/sec and writes at about 50 MB/sec.
All figures are for sequential I/O.
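If anyone wants to reproduce such a sequential measurement, a rough read
test could look like the sketch below (the device path and transfer sizes
are just examples, adjust them for your own setup):

/* seqread.c - crude sequential-read throughput check.
 * Build and run, e.g.:  cc -o seqread seqread.c && ./seqread /dev/rdisk/dsk10c */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define BLOCK (1024 * 1024)        /* 1 MB per read()            */
#define TOTAL (512LL * BLOCK)      /* read 512 MB, then report   */

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <raw device>\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char *buf = malloc(BLOCK);
    if (buf == NULL) { perror("malloc"); return 1; }

    long long done = 0;
    time_t start = time(NULL);

    while (done < TOTAL) {
        ssize_t n = read(fd, buf, BLOCK);
        if (n <= 0) break;          /* EOF or error: stop and report */
        done += n;
    }

    double secs = difftime(time(NULL), start);
    if (secs < 1) secs = 1;         /* avoid divide-by-zero on fast runs */
    printf("%.1f MB in %.0f s -> %.1f MB/sec\n",
           done / 1048576.0, secs, done / 1048576.0 / secs);

    close(fd);
    free(buf);
    return 0;
}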
Reliability: Some batches of IDE disks have terrible failure rates, others
none at all. We had a problem with Maxtor because they did not want to
replace disks under warranty for SMART errors that are handled internally by
the disk. The controller, however, will fail a disk after repeated SMART
errors, because the disk does not respond quickly enough and triggers a
timeout error.
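To make the mechanism clearer: the disk is busy with its internal SMART
handling, the command times out, and after a few such timeouts the
controller drops the disk and rebuilds onto the hot spare. In rough C-like
form (purely illustrative, all names and thresholds are made up, this is
not the Infortrend firmware):

#include <stdbool.h>

#define IO_TIMEOUT_LIMIT 3          /* consecutive timeouts before failing */

struct member_disk {
    int  timeouts;                  /* consecutive timed-out commands      */
    bool failed;                    /* dropped from the array              */
};

/* Called after every command completion; timed_out says whether it expired. */
void account_io(struct member_disk *d, bool timed_out)
{
    if (!timed_out) {
        d->timeouts = 0;            /* a healthy reply resets the counter  */
        return;
    }
    if (++d->timeouts >= IO_TIMEOUT_LIMIT)
        d->failed = true;           /* rebuild onto the hot spare starts   */
}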
Finally, Dr. Thomas Blinn pointed out the following:
"Once you lose enough of the on-disk file system metadata that maps
to the various parts of the domain including the filesets which are
the repositories for the file naming, you are in big trouble, and
the failure you experienced almost certainly was in a part of the
storage that just happened to contain critical domain metadata.
It sounds like there's enough metadata left for salvage to find a
lot of the file data, but not enough to recover the filesets and
the file system data they contain (like directories). Bad luck."
Best regards and have a nice weekend!
Armin