I apologize for the delay. Six hours after sending my original email
request, I came down with the flu and spent four days in bed recuperating.
Thanks so much to everyone for their quick responses. The following
is my original email request (plea) and the responses I received.
I was unable to run the commands myself, but a co-worker told me
that he did the following to fix the problem:
boot from genvmunix
bcheckrc
update
sizer -n SWXCR
and ran MAKEDEV
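For the record, here is a consolidated sketch of what he ran. The re4
name is my guess for the new logical drive; see Dr. Blinn's response
below for how to confirm it from /tmp/SWXCR.devs:
>>> boot -fi genvmunix
# bcheckrc
# update
# sizer -n SWXCR
# cd /dev
# ./MAKEDEV re4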
Thanks again for all the great recommendations! It is really
appreciated!
Annette Hearn
hearn_a_at_hccs.cc.tx.us
Houston Community College System
ORIGINAL EMAIL:
--------------------------
Successfully converted one 2100 server from OpenVMS to Digital UNIX
4.0b. Thanks again for all your help reminding me about the firmware,
base license, and SMP license.
Ran into a RAID 1 problem. We used SWXCRMGR to set up 4 logical
drives across 2 BA350 StorageWorks cabinets. Today, we added
another drive to each cabinet to create a 5th logical drive. We used
SWXCRMGR to define the group, format it, etc.
Upon boot, at the console prompt, a SHOW DEV shows all 5 logical
drives. However, once the system is booted under DU 4.0b, the
OS doesn't recognize the new drive, i.e., we cannot see it to
mount it. We had no problems with the original 4. The system
has been completely powered off and on, etc. We even broke the
5th logical set apart, regrouped it, and reinitialized it, with
no luck.
RECOMMENDATIONS:
----------------------
Susan Rodriguez <SUSROD_at_HBSI.COM>
Did you boot your system with genvmunix and build a new kernel?
b -fi genvmunix
cp vmunix vmunix.sav
doconfig
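After doconfig finishes, it prints the location of the new kernel;
copying it into place and rebooting is the usual last step. The
/usr/sys path below is an assumption; check doconfig's output:
# cp /usr/sys/HOSTNAME/vmunix /vmunix
# shutdown -r now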
----------------------
Tom Webster <webster_at_ssdpdc.lgb.cal.boeing.com>
What does the SWXCR report when DU is booting? There should be a
bit that looks like:
----- snip ----- snip ----- snip ----- snip ----- snip -----
Initializing xcr0. Please wait.
Initializing xcr0. Please wait.
Initializing xcr0. Please wait.
Initializing xcr0. Please wait.
xcr0 at pci1 slot 7
re0 at xcr0 unit 0 (unit status = _ONLINE, raid level = 5)
re1 at xcr0 unit 1 (unit status = _ONLINE, raid level = 5)
re2 at xcr0 unit 2 (unit status = _ONLINE, raid level = 5)
----- snip ----- snip ----- snip ----- snip ----- snip -----
You can use the following command to review your bootup messages:
uerf -R -r 300 | more
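If the log is long, a pipe through egrep narrows it to the controller
lines (the pattern here is just a suggestion):
# uerf -R -r 300 | egrep 'xcr|re[0-9]'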
I seem to recall having to add SWXCR drives to the system configuration
files, either by hand or by booting to genvmunix and rebuilding the
config file and the kernel.
In any event, I'd suggest booting genvmunix and seeing if it can
pick up your new drive sets. If it can, build a new kernel --
don't use the old config, have doconfig generate a new one with the
new drive info. Once upon a time I did things like editing the
config files by hand to add drives, but building a kernel while
booted from genvmunix does it nicely and it will also automatically
add the needed device files.
To boot to genvmunix, halt the system down to the SRM console and
issue the following:
boot -fi genvmunix
----------------------
"Dr. Tom Blinn, 603-884-0646" <tpb_at_zk3.dec.com
Is the result any different if you boot the /genvmunix kernel?
Running /genvmunix, booted to single-user mode, do this:
# bcheckrc
# update
# sizer -n SWXCR
and then look at the file /tmp/SWXCR.devs and see what new drive
names you should have on the system.
I believe the problem is you don't have the device special names
for the new logical drives, and the above procedure will create
a MAKEDEV command that should create the right files. Once you
know the command,
# cd /dev
# ./MAKEDEV ren
where "n" identifies the missing logical drive name; I'd guess
it will be re4, but I'm not 100% sure, since I don't have a SWXCR
to deal with in most of the work I do.
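To verify the result afterwards, list the new device special files
(again, re4 is a guess):
# cd /dev
# ./MAKEDEV re4
# ls -l re4* rre4*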
----------------------
Paul Kitwin <PAUKIT_at_HBSI.COM>
These are the steps that have worked for me.
1) format the drives with SWXCRMGR
2) boot -file genvmunix
3) doconfig
4) reboot
5) disklabel -rw reX SWXCR
6) newfs /dev/rreXc
7) mount /dev/reXc /mountpoint
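A concrete pass through steps 5 to 7, assuming the new set is re4 and
using /newdisk as a made-up mount point; add a matching /etc/fstab
entry if it should mount at boot:
# disklabel -rw re4 SWXCR
# newfs /dev/rre4c
# mkdir /newdisk
# mount /dev/re4c /newdisk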
----------------------
"Sunil Kumar" <ksunilk_at_hotmail.com>
Did you try rebuilding a new kernel from genvmunix? When the new
kernel is built, the new devices (re*) would automatically be created
for you.
----------------------
"Doug Pajtas" <Doug.Pajtas_at_MCI.Com>
Is the drive built into the kernel? Is there a /dev/rreXX file?
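Two quick ways to check, with HOSTNAME and re4 as placeholders:
# grep re4 /sys/conf/HOSTNAME
# ls -l /dev/rre4*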
----------------------
Gert Maaskant <gertm_at_cvi.ns.nl>
What you can do is boot from the genvmunix kernel like:
P00>>> b -fl i
The machine asks you which kernel to boot.
During startup you can see the devices it recognized.
Start to multi-user mode (run level 3) and run sizer -n.
Now the machine makes a kernel config file of the new configuration.
Diff the two files (the new sizer file and the
/sys/conf/<machine> file).
Edit the original kernel config file and make a new kernel (doconfig -c).
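A sketch of that sequence; NEWCFG is a made-up name, and the exact
file sizer writes under /tmp may differ (Dr. Blinn's note above
mentions a .devs file as well):
# sizer -n NEWCFG
# diff /tmp/NEWCFG /sys/conf/HOSTNAME
# doconfig -c HOSTNAME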
----------------------
Knut Hellebø <Knut.Hellebo_at_nho.hydro.com>
Are there correct disk entries in the kernel config file?
----------------------
"Jason Neil"<Jason_Neil_at_CITYMAX.CO.UK>
You need to build the fifth drive into the kernel. Just edit the
/sys/conf/HOSTNAME file and add another line like the below...
device disk re0 at xcr0 drive 0
device disk re1 at xcr0 drive 1
device disk re2 at xcr0 drive 2
device disk re3 at xcr0 drive 3
device disk re4 at xcr0 drive 4
device disk re5 at xcr0 drive 5
Say your logical drive is 5 -- you would have added the re5 line.
Then you have to make the device files.
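Per Dr. Blinn's response above, that last step would look something
like:
# cd /dev
# ./MAKEDEV re5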
----------------------
Paul Grant <paul.grant_at_kscl.com>
If you go through "uerf -R | pg" and look at the startup, is the
XCR initialising OK? It should then tell you if the disks have been
found, and what they are. If you can't see them, then I suggest you
look at the firmware of the disks. We had a problem with a rev 7 disk
being added to a DU 4.0b setup. We also had a problem with the seating
of the personality modules in the BA350 cabinets...
Is the device you have added a fast wide or ultra SCSI device, and does
your cabinet support them?
Have you tried removing the new disks and rebooting?
Are any of the disks failing on power up? This may indicate termination
problems.
----------------------
Mike Grau <m.grau_at_kcc.state.ks.us>
I had a similar problem - the devices showed up at the firmware level
but not at boot up. It turned out that I had to have fault management
disabled.
I think, but don't really know yet (I just configured the box yesterday,
and I'm new to RAID), that the problem was as described below, from
"Rel.txt" off the Standalone RAID Array Software disk:
SCSI Termination and Fault Management
The controller must be at the end of the SCSI bus for proper termination
of the SCSI signals. Consult the appropriate documentation for your
storage enclosure to ensure it is properly terminated. When using Fault
Management the terminators used must support Fault Management. The
SWXCR controllers are supplied with termination that supports Fault
Management. Check with your system supplier to be sure that system/device
termination supports Fault Management, otherwise the Fault Management
option must be DISABLED on the controller. If a terminator that does
not support Fault Management is attached to any channel of the SWXCR,
the results are unpredictable when Fault Management is ENABLED.