SUMMARY: balancing AdvFS domain (INN related)

From: Nebojsa Hrmic <nebojsa_at_alf.tel.hr>
Date: Fri, 31 Jan 1997 10:34:02 +0100 (MET)

Hello!
        The question was why balance was not working on the AdvFS domain where
INN lives. All of the original answers are below. What I tried was running
defragment before balance, which temporarily balanced all the volumes, but
this morning I found volume 3 much fuller again, as if new articles had NOT
been distributed proportionally. Running defragment and balance out of cron
looks like postponing the problem rather than solving it, so rebuilding is
what is left.
        Thanx to all
                
        Nebojsa Hrmic
        nebojsa_at_alf.tel.hr
        Croatian Post and Telecommunications

--------------------------------------------------------------------------
Harald Lundberg <hl_at_tekla.fi> wrote:

AdvFS inconsistencies have been reported. You have the log on volume 2;
personally, I've had interesting things happen when the device where the
log originally lived is removed, i.e. when the log has to be moved to
another disk. So rebuilding the domain might be the only way out if your
filesets aren't full.
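
If the log does have to live somewhere else, showfdmn will tell you which
volume currently holds it (the log volume is flagged with an "L"), and the
AdvFS utilities include a switchlog command for moving it. The lines below
are only a sketch; "news_domain" and the target volume index are
placeholders, and the switchlog reference page should be checked for the
exact argument order before trying it:

# showfdmn news_domain
# switchlog news_domain 1

Here "1" would be the index of the volume that is to receive the log.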

--------------------------------------------------------------------------
Sean O'Connell <sto_at_stat.Duke.EDU> wrote:

Have you tried using defragment?

Run the following:

# defragment -n -v news_domain

This will show the level of fragmentation and give you a good
picture of how much space is wasted to fragmentation. Also, if
you defragment and then balance, it works nicely.

# defragment news_domain

will actually perform the defragmentation (it takes quite a while,
depending on the speed of the bus/machine). I've seen cases where a
balance on its own doesn't balance things, but a balance following a
defragment will. I suspect that defragment will fix a lot of your
troubles because of the large number of small files being
created/destroyed. We incorporate a defragment and balance into our
level 0 backup routine (monthly).
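
A monthly cron entry along these lines would do the defragment-then-balance
pass automatically. The schedule, domain name and paths below are only
placeholders, so adjust them for your system:

# crontab -e      (as root, add a line like the following)
0 3 1 * * /sbin/defragment news_domain && /sbin/balance news_domain

That runs a defragment at 03:00 on the first of every month and, only if it
succeeds, a balance straight afterwards.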
-------------------------------------------------------------------------

Alan Rollow <alan_at_nabeth.cxo.dec.com> wrote:

News and AdvFS are well known for not getting along. The metadata space
used to store information about files is allocated dynamically as
medium-to-large extents when needed. This data must be contiguous. News
tends to fragment the free space quickly, and often there isn't enough
contiguous space to allocate a new extent. Running defragment will often
fix this part of the problem.

The second part of the problem is that the number of extents that can be
allocated for this space is limited, which limits the number of files
on the file system. When the file system is created, a large extent
size can be selected, but that makes it harder to find contiguous space
when the space needs to grow. The release notes or administration
guide may have recommended extent sizes for various numbers of files.
-------------------------------------------------------------------------

Esther Filderman <moose+_at_andrew.cmu.edu> wrote:

At the last LISA [systems administrator] conference I took classes about
INN administration. They said, very emphatically, do NOT run a news
server on AdvFS. It's -very- fast when you're getting news. When you go
to do the expires, the whole thing tends to blow up.
------------------------------------------------------------------------

Christian Miranda <cmiranda_at_gmd.com.pe> wrote:

Have you tried the defragment command first?

# defragment [domain]
-----------------------------------------------------------------------

Jeff Penfold <Jeff.Penfold_at_comunion.demon.co.uk> wrote:

Here at CU we have had the same problem arise rather "suddenly". There are no
apparent warnings or signs that you can look for. DEC gave us the same advice:
back up the data, recreate the domain, and restore. Since our systems were
running base OSF/1 3.2 (they have been "in production", "in the field" for
some time), DEC also suggested that we apply their jumbo-mega-patch-from-hell
for base 3.2, which amongst other things fixes some problems with AdvFS. We
applied the patch and did the backup/restore cycle (in our case we found
enough spare disk space in another disk array and copied the data across;
28 GBytes takes a long time to back up and restore). All works OK now.
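
For anyone facing the same rebuild, the cycle amounts to roughly the
following. Every device name, mount point and dump file below is just a
placeholder, and the vdump/vrestore options should be checked against
their reference pages before you rely on them:

# vdump -0 -f /backup/news.vdump /news        (level 0 dump of the fileset)
# umount /news
# rmfdmn news_domain                          (destroy the old domain)
# mkfdmn /dev/rz8c news_domain                (recreate it ...)
# addvol /dev/rz9c news_domain                (... and its extra volumes)
# addvol /dev/rz10c news_domain
# mkfset news_domain news                     (recreate the fileset)
# mount -t advfs news_domain#news /news
# vrestore -x -f /backup/news.vdump -D /news  (restore the articles)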

Currently we believe that the problem lies with a combination of a very large
number of files (our fileset has in excess of 500,000 files) and a heavily
fragmented file system. I believe it is caused by the system's inability to
handle the number of free disk fragments, and its response to this is
unpredictable. Our problem came down to two separate causes:

1/ Our ignorance. Just because DEC say that you don't need the AdvFS license
to use AdvFS (and they imply that it is quite reasonable to base a whole
machine's file systems upon AdvFS: root, usr, var, all of it) doesn't mean
that you *should* do it. What the manuals don't tell you is that routine
maintenance is essential if you want the system to run reliably for any
period of time. To this end you should run defragment and balance (if
necessary) on some sort of regular basis (based, I'd guess, on the rate at
which data is modified, added or deleted from the domain).

2/ Base OSF/1 3.2 was broken with respect to AdvFS. More specifically, the
mkfdmn command made some basic assumptions about the size of a domain and the
number of files it should contain. If you exceed those assumptions (and we
did on both counts), trouble is only just over the horizon. The solution is
to use some of the "extra" arguments which arrive with the
jumbo-mega-patch-from-hell version of mkfdmn when you recreate the file
domain. You'll have trouble finding the new version of the manual page for
the patched command (the patch doesn't contain it!), but if you run mkfdmn
*with no arguments* it prints a new manual page!
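
For what it is worth, on the patched mkfdmn the relevant "extra" arguments
appear to be -p (the number of metadata pages to preallocate) and -x (the
number of pages the metadata table grows by when it has to be extended).
Treat the option letters, their meanings and the figures below as guesses
to be checked against the usage text the patched command prints, and size
the numbers from the recommendations in the release notes for the number
of files you expect; the device name is a placeholder too:

# mkfdmn                                        (no arguments: prints usage)
# mkfdmn -p 16384 -x 2048 /dev/rz8c news_domain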

Well, that's the end of my story. As far as I am aware you now have few
options. We found that trying to use defragment to work your way out of this
doesn't work; the problem is too big for it. Following DEC's advice is really
the only option left, and when you remake the domain, pick your parameters
carefully.