SUMMARY: "Reattaching" ADVFS and 5.1 upgrade sanity check

From: Rick Beebe <richard.beebe_at_yale.edu>
Date: Tue, 24 Apr 2001 17:03:09 -0400

Thanks as usual for the great responses. The general answer was that it
should work. The unanswered question: can TruCluster 5.1 use my existing
AdvFS 4.x partition, or will I need to reformat it? Anyone know for sure?
It will take most of a day to reinstall all that data and if I don't
have to I'd sure rather not. Barring good news, I'll plan accordingly
(and I'll try calling Compaq). A conversion program would be _real_
handy.

My original question and the responses I received are below.

_______________________________________________________________________

    Rick Beebe (203) 785-6416
    Manager, Systems & Network Engineering FAX: (203) 785-3978
    ITS-Med Production Services Richard.Beebe_at_yale.edu
    Yale University School of Medicine
    Suite 214, 100 Church Street South, New Haven, CT 06519
 
_______________________________________________________________________

-------- Original Question --------

Our primary email system is currently two DS20Es running DU 4.0D and
TruCluster 1.5. They are connected to dual-redundant HSZ70s. They talk
to each other over an internal Ethernet, i.e. there are no Memory
Channel cards in them.

They've been running great but I want to get them to DU 5.1 and
TruCluster 5.1. I have memory channel cards for them sitting in my
office. Because of that and because of the big differences between
TruCluster 1.5 and TruCluster 5.1 I want to do a full install rather
than an upgrade. I've formulated a rolling install plan (because, of
course, mail can't be down) and am looking for a sanity check on it.

Step one is to remove one of the nodes from the cluster. I'll power that
node down and install the memory channel card (will it work with nothing
on the other end?). Then I'll install DU 5.1 on its internal 9GB hard
drive. I have a 'spare' 9 GB mirror set on the HSZ70s; neither node is
using it today. I'll partition that mirror set into 5 pieces. Next I'll
install TruCluster 5.1 and tell it to use one of the partitions for the
cluster boot disk. I'll use another partition for the cluster root and
one for the quorum disk. The remaining partition will be the cluster
root for the other node. The 5 partitions on a mirror set, btw, worked
great when I set up a new web server recently. I can't go beyond a
2-node cluster with it but if I need to do that I'll just install more
disks.
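
For reference, the carve-up on the new 5.1 node would look something
like this (the disk name and disktab type are illustrative, and
clu_create will later ask which partitions to use):

    # Sketch only -- substitute whatever name the 5.1 node assigns
    # to the shared mirror set; "HSZ70" as a disktab type is assumed.
    disklabel -rw dsk5 HSZ70   # write a default label
    disklabel -e dsk5          # edit: carve partitions for cluster root,
                               # member boot disks, and the quorum disk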

My hope is that all of the above can be done during working hours and
that it won't affect the other node, which is still pumping mail. Is
that a reasonable assumption?

On "the big day" I'll shutdown the other node, install the memory
channel card and bring it back up as a member of the cluster. Voila,
instant DU5.1 (I _love_ TC5.1). My next hitch is how to 'reattach' the
150GB mail volumes to the new cluster. I would really like to be able to
do it without reformatting them, because a full reload takes forever.
Will advscan do it? Or is DU5.1 going to insist on its own version of
advfs on those drives? The mail volumes are, of course, all on the
HSZ70's.
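
For the member-add step on "the big day" I expect something like this
(clu_add_member is interactive; the console disk name is purely
illustrative):

    # Run on the node that is already a 5.1 cluster member; it prompts
    # for the new member's ID, boot partition, and votes.
    clu_add_member
    # Then boot the new member from the boot disk it was assigned:
    # >>> boot dkb100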

Anything I've forgotten or overlooked?

---------------------------------------------------------------

You can easily 'reattach' the old AdvFS volumes:

Record the AdvFS domain links (ls -lR /etc/fdmns).

Once you have built the 5.1 cluster, it is connected to the HSZ70s,
and it can see the disks as /dev/disk/dskNN, recreate the symbolic
links.

Then a showfsets on the AdvFS domain will show you which filesets are
where, and you can edit the fstab.
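
In command form, that's roughly the following (domain and disk names
are examples only - use what your fdmns listing shows):

    # On the old 4.0D node, save the domain-to-device mapping:
    ls -lR /etc/fdmns > /somewhere/safe/fdmns.before

    # On the new 5.1 cluster, recreate each domain by hand:
    mkdir /etc/fdmns/mail_domain
    ln -s /dev/disk/dsk10c /etc/fdmns/mail_domain/dsk10c

    # Verify, then add the filesets to /etc/fstab:
    showfsets mail_domain
    #   e.g.  mail_domain#mail  /var/spool/mail  advfs rw 0 2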

It's that simple! I did it on a move from an AS8400 ASE 1.6 cluster to
a GS160 5.1 cluster.

HTH.

Gary

P.S. If you want more detail, reply to me and I'll gladly help....

--
Gary Phipps
Unix Systems Manager
BT Group Finance
ph: 0121 230 4204
e: gary.phipps_at_bt.com
----------------------------------------------------------------
>My hope is that all of the above can be done during working hours and
>that it won't affect the other node, which is still pumping mail. Is
>that a reasonable assumption?
Yup, it sounds like a good plan... I'm actually up in Nashua this week,
teaching a TCS 5.1 class, so I just might show them your plan, because
it's similar to some migration ideas we've kicked around this week...
>On "the big day" I'll shutdown the other node, install the memory
>channel card and bring it back up as a member of the cluster. Voila,
>instant DU5.1 (I _love_ TC5.1). My next hitch is how to 'reattach' the
>150GB mail volumes to the new cluster. I would really like to be able to
>do it without reformatting them, because a full reload takes forever.
>Will advscan do it? Or is DU5.1 going to insist on its own version of
>advfs on those drives?

advscan will do fine.  HOWEVER, you might want to consider doing a
vdump or other backup and doing a mkfdmn over again, since the current
domain is AdvFS V3 format.  You might want to go to V4 at some point
and get the benefits of the new file format.  That would mean absolute
downtime, though, as you do the restore.
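
In outline, that dump-and-recreate path would be something like the
following sketch (device, domain, and fileset names are examples):

    # This is the "absolute downtime" route: back up, rebuild the
    # domain under 5.1 (new domains get V4 format), then restore.
    vdump -0 -f /backup/mail.vdump /var/spool/mail
    umount /var/spool/mail
    rmfdmn mail_domain                     # remove the old V3 domain
    mkfdmn /dev/disk/dsk10c mail_domain    # recreate it under 5.1
    mkfset mail_domain mail
    mount mail_domain#mail /var/spool/mail
    vrestore -x -f /backup/mail.vdump -D /var/spool/mail
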
Cheers,
Ed.
---
Edward J. Branley   elendil_at_yatcom.com, ed_at_softadv.com
Seashell Software, New Orleans, LA
Home Page/PGP public key at www.yatcom.com/ejb
Tru64/TruCluster Instructor, Institute for Software Advancement,
www.softadv.com
----------------------------------------------------------------
        V5 changed the on-disk format for AdvFS.  While it supports
        both the old and new format, I don't know whether the old
        format can be used for a shared cluster file system.  I
        would expect that sort of information to be documented
        somewhere.  While advscan is enough to get the old domain
        set up on the new system, it may not be enough to use it
        as a shared file system.
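
        In command form, the advscan part is roughly this (disk
        names are examples; see advscan(8)):

            advscan dsk10 dsk11      # report AdvFS domains on the disks
            advscan -r dsk10 dsk11   # recreate missing /etc/fdmns entries
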
From: alan_at_nabeth.cxo.dec.com
-------------------------------------------------------------------
Rick, a few probably obvious things.

The memory channel card has a couple of jumpers - make sure they are
set for Tru64.

After splitting one node off the existing cluster you will need to
change the unit access to prevent inadvertent access by your new 5.1
node. Similarly, you don't want the 4.x system looking at the new
volume initially.

I assume you are running transparent failover, as you can't partition
HSZs in multibus failover mode.
Obvious need to check firmware revs (system and KZPBA's) and HSZ
software rev.

I'm not sure whether they've changed the on-disk structure (ODS)
format for AdvFS; they needed a lot of fixes to AdvFS to get the
cluster file system to work.

You can't use UFS file systems other than read-only in a 5.x cluster.

There was an article recently about mounting existing AdvFS partitions
on a newly created system disk. There should be no difference there.
You need to modify disk access for your 5.1 disk before you add
member 2 into your cluster. After you bring node 2 into your 5.1
cluster, you need to allow access to your disks from both systems.

Use hwmgr -scan scsi to get your devices added to the device database.
Use hwmgr -refresh component just to refresh the component list.
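
For example (run on the new member; a sketch only):

    hwmgr -scan scsi           # probe the SCSI buses for new devices
    hwmgr -refresh component   # refresh the component list
    hwmgr -view devices        # confirm the dskNN names that were assigned
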
Application note - at 5.1, ensure that if one node is predominantly
using some disks, the device locks are owned by that node; otherwise
you get a lot of memory channel traffic. Best to look after this in
the CAA failover scripts.
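
A starting point (the mount point is an example; the relocation
options are in cfsmgr(8)):

    # Show which member currently serves a file system via CFS;
    # relocate it in the CAA start script if it's the wrong node.
    cfsmgr /var/spool/mail
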
Oracle caution - there is a patch for direct I/O on non-clustered 5.1
systems; otherwise database corruption will occur. Your upgrade
roadmap doesn't show any application testing on the 5.1 system before
it runs as a cluster, so you should be fine anyway.
That's all I can think of.
Good luck,
Greg Palmer
Queensland University of Technology.