Gidday again;
Thought you might like to know the results of the upgrade and the path we
eventually took. (See below for original question and summary)
We ended up removing a directory and its contents; these were due to be
restored on Monday anyway. That, plus a couple of free 9GB drives that were
already in the rack, left enough free space to do an rmvol of one of the
RAID 5 sets. This was done before the engineers arrived on the floor. They
exchanged the disks and rebuilt the RAID 5 set, and I addvol'd the new RAID
set back in. There was then enough room to remove all the other RAID sets,
but they went one at a time, and slowly.
This took a little over 4 hours, with a total time for the job of 7 hours.
The engineers were on site for a stretch of this, but as soon as the last
RAID 5 set was rebuilt they were on their way. The RAID sets were addvol'd
back in, and we went from 180GB capacity to 604GB, with 144GB spare for
other activities we are planning.
Upgrade completed, no downtime recorded.
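For anyone doing the same thing, the cycle for each RAID 5 set looked
roughly like the commands below. The device and domain names are made up
for illustration only - substitute your own - and this is a sketch of the
approach, not a tested script.

    showfdmn data_dmn           # check the domain has enough free space left
                                # for rmvol to migrate the data onto
    rmvol /dev/rz12c data_dmn   # migrate extents off the old RAID 5 set and
                                # drop the volume from the domain
    # ... engineers swap the disks and rebuild the (larger) RAID 5 set ...
    addvol /dev/rz12c data_dmn  # add the rebuilt set back into the domain
    balance data_dmn            # optional: rebalance usage across the volumes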
BTW: the RAID sets removed were LSM volumes, but we decided to put back
plain AdvFS volumes only. It seemed silly to add the extra layer when we
weren't using the functionality.
One tip from this exercise: allow the RAID sets to finish synchronising
first, otherwise any rmvol gets drastically slowed down.
Wayne Blom
Systems Specialist
IT_at_HEALTHCARE
F H Faulding & Co Limited
email: wayne.blom_at_au.faulding.com
pmail: Wayne Blom, 115 Sherriff St, Underdale, SA, 5032
earmail: +61 8 84083656
========================================================================
Thanks to all for the many replies. It turns out the disks we need to
upgrade are not mirrored (a decision taken some time back, before I came on
the scene), although there are other disks which are. The second thing I
forgot to mention was that the disks are part of an ASE cluster service.
Thanks to Toby Shalless for mentioning this.
Out of all the options, we have come down to only two that will work on our
setup:
1) Trash the lot and rebuild with the new disks (note: take backups first).
This is the simplest option, but perhaps the longest, and it does require
downtime.
2) Manipulate the AdvFS domain with addvol/rmvol so the disks can be
upgraded. This is more complicated and does require a rebuild of the ASE
service, but it means the other cluster member can continue to run (a rough
pre-check is sketched below).
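For option 2, the pre-check we have in mind looks something like the
commands below. The domain name is hypothetical and this is only a sketch
of the idea, not a tested procedure: confirm the remaining volumes can
absorb what rmvol migrates off, then plan the ASE side.

    showfsets data_dmn          # list the filesets living in the domain
    showfdmn data_dmn           # per-volume size and free figures; the
                                # volumes left behind must have room for the
                                # data rmvol migrates off the outgoing volume
    # the ASE service that uses the domain then gets modified/rebuilt through
    # asemgr (menu driven) so the other cluster member can keep running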
We have decided to go with option 2 (mainly to test the option for future
knowledge ;-)), with option 1 as a standby.
People asked what version the system was - 4.0F, cluster V1.6.
People asked if we had the needed licenses - we do.
People suggested breaking the mirror and rebuilding one side first,
vdump|vrestore the data, rebuild the second side and resync the mirror.
People suggested adding a third plex to the mirror and breaking the old
plexes out one at a time.
Each of the above would work if we had space in the cabinet, I am sure.
Unfortunately she's a little bit full.
Thanks again to all who offered their support. Will give a further summary
after the outcome.
Wayne Blom
Systems Specialist
IT_at_HEALTHCARE
F H Faulding & Co Limited
email: wayne.blom_at_au.faulding.com
pmail: Wayne Blom, 115 Sherriff St, Underdale, SA, 5032
earmail: +61 8 84083656