Thanks to Tom Blinn, who got me thinking a bit harder about the problem
with my AlphaPC.
Turns out that the KZPSC works just fine with the AlphaPC 64 in every
respect! I had created my own problems by upgrading the firmware on the
controller and trying to use the latest Windows-based management tools.
After re-flashing the controller to firmware v2.42 and switching off its
fault management, my error messages went away. The system now boots and
runs properly from a RAID 0 group of two older SCSI drives.
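For completeness, booting from the array needs nothing more than pointing
the console at the logical drive; on my box that looks roughly like this
(dra0 is the unit name from my setup, and the osflags value is just the
usual multi-user setting):

>>> set bootdef_dev dra0
>>> set boot_osflags A
>>> boot dra0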
Throughput is fairly good. The 'a' partition (closest to the outer tracks
of the disks) clocks sustained write speeds of 7 MB/sec. Not bad
considering these are 5400 rpm drives.
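(A quick sanity check for a number like that is a timed dd onto the mounted
partition, something along the lines of:

# time dd if=/dev/zero of=/testfile bs=64k count=2048
# rm /testfile

where /testfile is just a scratch file; 128 MB written in about 18 seconds
works out to roughly 7 MB/sec.)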
Although I've been warned repeatedly about potential incompatibilities
between the AlphaPC, Tru64 and various bits of hardware, a bit of
perseverance has gotten me through all of it :-).
The only real drawback to running the KZPSC RAID on this box is that I
have to either re-flash the firmware to ARC or move the board to an Intel
machine in order to configure the drives. Or, is there some way to
kickstart ARC and/or the RCU utility from an older SRM console?
Steve
----------------------------------------------------
My original post:
All,
I recently purchased a used KZPSC controller with the intent of using it
on my AlphaPC 64 system under Tru64 5.0.
By plugging the board into an Intel box, I was able to upgrade the
firmware to v2.49 and configure a JBOD arrangement using an IBM brand SCSI
drive. Once this was transferred over to the Alpha and the kernel rebuilt
to support the 'xcr' device, I was able to partition the disk (with UFS
filesystem), mount it and access it normally.
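For anyone trying the same thing, the Tru64 side of that amounts to
rebuilding the kernel with xcr support and then labelling, newfs'ing and
mounting the disk, something like the following (dsk3 matches the device
name used further down; the label and newfs details will vary with the drive):

# doconfig                        (add the xcr option, install the new vmunix, reboot)
# disklabel -rw dsk3 <disktype>   (disktype from /etc/disktab, or edit with 'disklabel -e dsk3')
# newfs /dev/rdisk/dsk3c
# mount /dev/disk/dsk3c /mnt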
Next, I wanted to migrate my boot partition from the existing primary
drive (on a clone NCR 53C875 controller) over to the RAID controller. In
the process, the boot partition would change from an AdvFS filesystem to
UFS. The files were copied using a pair of GNU 'tar' processes (which move
special files, links, etc. properly).
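(Concretely, that amounts to something like

# cd / && tar --one-file-system -cf - . | (cd /mnt && tar xpf -)

with /mnt standing in for wherever the new UFS root partition is mounted;
--one-file-system keeps tar from descending into other mounts such as /usr
and the target itself, so /usr gets its own, similar pair of commands.)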
The admin guide for Tru64 suggests that the hardware database will continue
to regard the drive as the device it was under the original boot
environment: /dev/disk/dsk3a, 3b and 3g (root, swap, usr). I modified fstab
and sysconfigtab accordingly and erased links from /etc/fdmns/* to the old
AdvFS device.
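(For the curious, the relevant fstab lines end up looking roughly like

/dev/disk/dsk3a  /     ufs  rw  1 1
/dev/disk/dsk3g  /usr  ufs  rw  1 2

while the swap device moves into the vm stanza of /etc/sysconfigtab:

vm:
    swapdevice=/dev/disk/dsk3b

The stale AdvFS links are just symlinks under /etc/fdmns/root_domain,
usr_domain and so on that can be rm'd; the exact domain names depend on
how the original AdvFS setup was created.)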
Disk label was written to the JBOD drive using:
# disklabel -Rr -t ufs /dev/disk/dsk3a disk3 rz
where 'disk3' was an ASCII save of the partition information.
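(A proto file like that is just an existing label dumped in ASCII, e.g.

# disklabel -r dsk3 > disk3

edited as needed before being written back.)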
From the console, a boot starts off fine with:
>>> boot dra0
However, immediately after initializing the NCR controller, the process
fails with a series of fast-scrolling messages of the form:
XCR_COM
re_getdrive
Cmd should always work
...
then dies with a message that it cannot mount the root filesystem.
I see these messages during a normal (successful) boot as well, but
everything ends up working properly.
I must be overlooking something in the configuration, but what? Is the
driver complaining because I don't have an actual Storageworks shelf
connected to the controller?
In desperation, I tried to make a new install from the distribution CD,
but the kernel fails to recognize the RAID controller or the attached
device! I only get the option of targeting the two drives attached to the
NCR controller.
Any help appreciated at this point.
Steve