SUMMARY: disk failure - which procedure to replace?

From: Alex Harkema <HarkemaA_at_vertis.nl>
Date: Tue, 24 Oct 2000 15:15:10 +0200

Hi Admins,

Well, that "disk crash" turned out to be my mistake. No faulty hardware,
nothing in (LSM) distress.
After I detached the plex from the volume and removed the subdisk etc.,
I noticed I couldn't access rz13 (the disk) at all...

I did a show dev on the console, and according to what I've been taught,
rz13 should be LUN 0 / bus 1 / target 5.
show dev showed a "sequential access device" in that exact spot, so it
turned out I had attached the DLT device at the same SCSI ID as that
particular disk.
All I had to do was re-create the subdisks, re-create the plex, and attach
the plex to the volume, in which the mirror plex had taken over the
original's duties.

Well, we all know that the 'easy' things always give us the most headaches ;)
While doing
# volplex -g diskgroupname att volumename plexname_to_be_attached
I got this strange error:
fsgen/volplex: Volume vol06, plex vol06-01, block 0: Plex write:
        Error: Write failure
fsgen/volplex: I/O error on plex vol06-01, not attached to volume vol06

I found some info on the Internet (included below, before the original
message).
I remembered something like that...
The (correct) solution came from Compaq Support: the offset of the first
subdisk in the plex should not be 0 but 16.

Use
# voldisk list rz13
to check the offset value for the "public" region.

So I had to initialize disk rz13 again.
The steps I took:
I dragged the subdisk out of the plex (I'm using the command line in
combination with dxlsm)
then
# voldg -g diskgroupname rmdisk disk06

Now re-initialize the disk, using the correct offset:
# voldisk -f init rz13 puboffset=16
After this I created a new subdisk on disk06
I dragged this subdisk into the plex
Now I had to attach this plex to the volume, which by then contained only
the mirror plex.
# volplex -g diskgroupname att volumename plexname
e.g. # volplex -g datadg att vol06 vol06-pl01

And... this seems to work fine. At the moment the volume is busy
re-mirroring the attached original plex...
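For the archives, the whole recovery sequence can be condensed into a short sketch. All names (datadg, disk06, rz13, vol06, vol06-pl01) are the examples from above; the adddisk step is my assumption of the command-line equivalent of the dxlsm drag-and-drop I actually used. Double-check the device name before the destructive init step.

```shell
# Sketch: re-initialize an LSM disk with block 0 mapped out and
# re-attach its plex. Substitute your own disk group / disk / volume
# names; run as root on the Tru64 host.

# 1. Remove the disk from the disk group (its subdisk must already be
#    dissociated from the plex).
voldg -g datadg rmdisk disk06

# 2. Re-initialize the disk, reserving block 0 (puboffset=16 was the
#    fix suggested by Compaq Support).
voldisk -f init rz13 puboffset=16

# 3. Put the disk back into the disk group under its old media name.
voldg -g datadg adddisk disk06=rz13

# 4. After re-creating the subdisk and plex (sizes per your volume),
#    attach the plex; LSM resynchronizes it from the surviving mirror.
volplex -g datadg att vol06 vol06-pl01
```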

BTW: I found another bus to connect the DLT device to:)

Many thanx to all that responded! I'm adding the info about preserving
block 0 below, since I couldn't find anything in the ornl.gov archives
about it.

Regards,
Alex Harkema

---------- info - original message is below -----------
 

Block 0 on a Digital UNIX disk device is read-only by default. UFS does not
use block 0 when putting data on device partitions. To preserve the LBN
mapping, the LSM nopriv disk must start at LBN block 0. As long as this
disk is used for UFS volumes, this does not present a problem. However, if
the disk is reused for other applications which write to block 0, then a
write failure will occur. To help avoid such failures, earlier releases of
LSM labeled the LSM nopriv disk with the unique administration name
device-name_blk0. If the volume is no longer needed, remove this nopriv
disk from the LSM disk group and redefine the disk without block 0.

Starting with Version 4.0, the voldiskadd and voldisksetup utilities
automatically map out block 0. Digital recommends that you
use these utilities to add disks to LSM. Note that if volencap is used to
add a disk to LSM, it will not preserve block 0. This can cause
problems if an application writes to that part of the disk.
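Building on that, a quick way to check whether a given LSM disk has block 0 mapped out is to look at the offset of its public region; the exact output layout below is a sketch, not verbatim Tru64 output.

```shell
# Show the region layout for the disk; the "public" line carries the
# offset. A public offset of 0 means block 0 is exposed, and an
# application writing there will hit the read-only block-0 failure.
voldisk list rz13 | grep -E 'public|private'
```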
-------



> -----Original Message-----
> From: Alex Harkema
> Sent: Tuesday, October 17, 2000 4:27 PM
> To: 'tru64-unix-managers_at_ornl.gov'
> Subject: disk failure - which procedure to replace?
>
> Hi,
>
> Although I see no orange LEDs on any of the disks indicating
> problems/failure, I get the message "disk06 failed, was rz13"
> failed disks:
> disk06
>
> failed plexes:
> vol06-01
>
> I guess I have to replace the disk; what is the procedure to follow? I
> searched the archives, but could not find anything satisfying. I'm running
> Tru64 4.0F on an AlphaServer 1200, patch kit 0003.
> Disks are configured using LSM
>
> Thanx in advance,
> Alex
>
>
>
Received on Tue Oct 24 2000 - 13:17:25 NZDT

This archive was generated by hypermail 2.4.0 : Wed Nov 08 2023 - 11:53:41 NZDT