--
Gary Phipps
Unix Systems Manager
BT Group Finance
ph: 0121 230 4204
e: gary.phipps_at_bt.com
----------------------------------------------------------------
>My hope is that all of the above can be done during working hours and
>that it won't affect the other node, which is still pumping mail. Is
>that a reasonable assumption?
Yup, it sounds like a good plan... I'm actually up in Nashua this week,
teaching a TCS 5.1 class, so I just might show them your plan, because
it's similar to some migration ideas we've kicked around this week...
>On "the big day" I'll shutdown the other node, install the memory
>channel card and bring it back up as a member of the cluster. Voila,
>instant DU5.1 (I _love_ TC5.1). My next hitch is how to 'reattach' the
>150GB mail volumes to the new cluster. I would really like to be able to
>do it without reformatting them, because a full reload takes forever.
>Will advscan do it? Or is DU5.1 going to insist on its own version of
advscan will do fine. HOWEVER, you might want to consider doing a
vdump or other backup and doing a mkfdmn over again, since the current
domain is AdvFS V3 format. You might want to go to V4 at some point
and get the benefits of the new file format. That would mean absolute
downtime, though, as you do the restore.
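A sketch of that backup-and-rebuild path (the tape device, disk
partition, and domain/fileset names are placeholders; check vdump(8),
mkfdmn(8) and vrestore(8) on your system before running anything):

```shell
# Level-0 backup of the existing V3 domain (fileset mounted on /mail)
vdump -0 -u -f /dev/tape/tape0_d1 /mail

# Remove the old domain and recreate it; mkfdmn on a V5 system
# creates a V4-format domain by default
umount /mail
rmfdmn mail_dmn
mkfdmn /dev/disk/dsk5c mail_dmn
mkfset mail_dmn mail_fs
mount mail_dmn#mail_fs /mail

# Restore the saveset into the new V4 domain
cd /mail
vrestore -x -f /dev/tape/tape0_d1
```

The restore is where the "absolute downtime" comes from: with 150GB of
mail, the vrestore pass is what takes forever.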
Cheers,
Ed.
---
Edward J. Branley elendil_at_yatcom.com, ed_at_softadv.com
Seashell Software, New Orleans, LA
Home Page/PGP public key at www.yatcom.com/ejb
Tru64/TruCluster Instructor, Institute for Software Advancement,
www.softadv.com
----------------------------------------------------------------
V5 changed the on-disk format for AdvFS. While it supports
both the old and new format, I don't know whether the old
format can be used for a shared cluster file system. I
would expect that sort of information to be documented
somewhere. While advscan is enough to get the old domain
setup on the new system, it may not be enough to use it
as a shared file system.
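For the advscan route, the usual two-pass sketch looks like this (disk
names are examples; see advscan(8) for the exact options on your
version):

```shell
# Scan the disks that hold the old mail domain and report
# what AdvFS partitions/domains advscan finds
advscan -g dsk5 dsk6

# Recreate the /etc/fdmns entries for the domain so it can
# be mounted on the new system without a reformat
advscan -r dsk5 dsk6
```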
From: alan_at_nabeth.cxo.dec.com
-------------------------------------------------------------------
Rick, a few probably obvious things:
The memory channel card has a couple of jumpers - make sure they are
set for Tru64.
After splitting one node off the existing cluster, you will need to
change the unit access to prevent inadvertent access by your new 5.1
node. Similarly, you don't want the 4.x system looking at the new
volume initially.
I assume you are running transparent failover, as you can't partition
HSZs in multibus failover mode.
You obviously need to check firmware revs (system and KZPBAs) and the
HSZ software rev.
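On the HSZ side, unit access is restricted per connection with the
controller CLI; roughly like this (the unit number and connection name
are examples, and the exact syntax varies between ACS versions, so
check the documentation for your controller firmware):

```
HSZ> SET D100 DISABLE_ACCESS_PATH=ALL
HSZ> SET D100 ENABLE_ACCESS_PATH=NEW_NODE_CONN
HSZ> SHOW THIS_CONTROLLER
```

SHOW THIS_CONTROLLER also reports the ACS software rev, which covers
the firmware check mentioned above.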
I'm not sure whether they have changed the ODS format for AdvFS. They
needed a lot of fixes to AdvFS to get the cluster file system to work.
You can't use UFS file systems other than read-only in a 5.x cluster.
There was an article recently about mounting existing AdvFS partitions
on a newly created system disk. There should be no difference there.
You need to modify disk access for your 5.1 disk before you add member
2 into your cluster. After you bring node 2 into your 5.1 cluster, you
need to allow access to your disks from both systems.
Use hwmgr -scan scsi to get your devices added to the device database.
Use hwmgr -refresh component to refresh the component list.
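The hwmgr pass for one member looks roughly like this (run it on each
member after the storage is visible; see hwmgr(8)):

```shell
# Probe the SCSI buses for new devices and register them
# in the device database
hwmgr -scan scsi

# Rebuild the component list after the scan
hwmgr -refresh component

# Verify the new devices actually show up
hwmgr -view devices
```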
Application note - at 5.1, if one node predominantly uses some disks,
make sure the device locks are owned by that node; otherwise you get a
lot of memory channel traffic. Best to look after this in the CAA
failover scripts.
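One way to handle that in a CAA start script is via the device request
dispatcher. The sketch below is from memory, not a tested recipe - the
disk name is a placeholder and the exact drdmgr attribute syntax should
be confirmed against drdmgr(8) on your system:

```shell
# In the CAA start script for the mail service: make this member
# the server for the disks the service uses, so I/O stays local
# instead of crossing the memory channel
# (attribute name/syntax is an assumption - verify with drdmgr(8))
drdmgr -a server=$(hostname -s) dsk5
```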
Oracle caution - there is a patch for direct I/O on non-clustered 5.1
systems; without it, database corruption will occur. Your upgrade
roadmap doesn't show any application testing on the 5.1 system before
it runs as a cluster, so you should be fine anyway.
That's all I can think of.
Good luck,
Greg Palmer
Queensland University of Technology.
Received on Tue Apr 24 2001 - 21:03:15 NZST
This archive was generated by hypermail 2.4.0 : Wed Nov 08 2023 - 11:53:42 NZDT