Upgrade to 4.0D (w/TCR 1.5) AdvFS verify question

From: Todd V. Minnella <minnella_at_fas.harvard.edu>
Date: Thu, 31 Dec 1998 17:24:34 -0500 (EST)

Ultra Short Question Summary:
  What do we risk by not running AdvFS verify on our DU 4.0B file domains
when upgrading to DU 4.0D with TruCluster ASE v1.5?

Related Parallel Question:
  How long does verify typically take to check a large (33GB/600000 files)
AdvFS domain?

Details:
  We have three AlphaServer 4000s running Digital Unix v4.0B with TruCluster
ASE v1.4A. These systems provide NFS service for several large AdvFS
domains (see the list below). We're doing a rolling upgrade to Digital Unix
v4.0D with TruCluster ASE v1.5 in a little over a week. Due to the nature
of our services, an extended downtime (> 6 hours) is politically
impossible.

The Digital Unix 4.0D Release Notes (section 2.2.2) explicitly recommend
running AdvFS verify on all local file systems before upgrading. Given
the size of our domains, we are facing the possibility that we will not
have an opportunity to take our services off-line to verify them before
our scheduled upgrade. What do we risk by not doing this step?
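
For reference, the per-domain check we believe we would need to run looks
roughly like the following. This is only a sketch: the domain and mount-point
names are made up, and (if memory serves) verify lives under /sbin/advfs and
wants the domain's filesets unmounted first, per its man page.

    # Example sequence for one domain -- names below are placeholders
    umount /nfs/export1              # unmount the domain's fileset(s)
    /sbin/advfs/verify nfs_dom1      # check the domain's on-disk metadata
    mount /nfs/export1               # remount once the check completes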

If verify is likely to complete within a 6-hour window, we may be able to
verify our file systems. Has anyone run v4.0B AdvFS verify on file
systems similar in layout to ours? And if so, how long did it take?

We've heard that it is safe to terminate a running verify. If this is so,
would we gain anything by running it for as long as possible and then
terminating it when our downtime window ends?
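
If it really is safe to interrupt, one idea is to wrap verify in a crude
watchdog so it cannot overrun the window. An untested Bourne-shell sketch
(the domain name is again a placeholder; 6 hours = 21600 seconds):

    # Run verify in the background and kill it if the window expires
    /sbin/advfs/verify nfs_dom1 &
    VERIFY_PID=$!
    ( sleep 21600; kill $VERIFY_PID 2>/dev/null ) &
    TIMER_PID=$!
    wait $VERIFY_PID
    kill $TIMER_PID 2>/dev/null   # discard the timer if verify finished early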

Here are our filesystems:

Available Space   Used Space   # of Files
---------------   ----------   ----------
           21GB         15GB       430000
           21GB          6GB        27000
           21GB         15GB          260
           89GB         34GB       540000
           89GB         30GB       615000
           89GB         21GB       430000
           89GB         23GB       450000
           89GB         19GB       276000

All of the filesystems in question have been defragmented weekly, and were
created with plenty of preallocated BMT extents and large (2048 pages)
transaction logs. The 21GB filesystems are 6-member RAID5 sets all on one
redundant pair of HSZ50s connected with KZPSAs. The 89GB filesystems are
6-member RAID5 sets on two independent redundant pairs of HSZ70s connected
with KZPSAs (3 filesystems on one HSZ70 pair, 2 filesystems on the other
HSZ70 pair).
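
In case it helps anyone compare notes, the domain parameters above can be
confirmed with the usual AdvFS display commands; the domain name below is
once again a placeholder:

    showfdmn nfs_dom1    # domain attributes, including LogPgs and volume layout
    showfsets nfs_dom1   # filesets in the domain, with file and block counts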

We appreciate any and all advice about our dilemma. Thanks in advance for
your assistance!

Todd V. Minnella
Senior Unix Systems Analyst, Unix Systems Group
Faculty of Arts and Sciences Computer Services, Harvard University