First, apologies for not posting this summary sooner.
I got one reply, from Tom Blinn.
The original post:
>
> Hi
> After running a defragment on one of my AdvFS domains I got the standard message
> from the -v option.
>
> The first pass gave
>
> defragment: Defragmenting domain 'variation'
>
> Pass 1; Clearing
> Volume 1: area at block 31687632 ( 1814288 blocks): 43% full
> Volume 2: area at block 26025280 ( 1865680 blocks): 2% full
> Volume 3: area at block 22574288 ( 6624432 blocks): 25% full
> Volume 4: area at block 11761088 ( 2129408 blocks): 36% full
> Domain data as of the start of this pass:
> Extents: 119954
> Files w/extents: 119640
> Avg exts per file w/exts: 1.00
> Aggregate I/O perf: 100%
> Free space fragments: 7313
>                      <100K     <1M    <10M    >10M
>   Free space:           0%      3%     12%     85%
>   Fragments:          3771    2029    1127     386
>
> Which all looks well and good.
>
> However the second pass and subsequent passes gave me this
>
> Pass 2; Clearing
> Volume 1: area at block 18449184 ( 946560 blocks): 23% full
> Volume 2: area at block 18292784 ( 4946688 blocks): 75% full
> Volume 3: area at block 4546880 ( 193312 blocks): 69% full
> Volume 4: area at block 7312496 ( 73632 blocks): 35% full
> Domain data as of the start of this pass:
> Extents: 119640
> Files w/extents: 119640
> Avg exts per file w/exts: 1.00
> Aggregate I/O perf: 100%
> Free space fragments: 6633
>                      <100K     <1M    <10M    >10M
>   Free space:          -8%      3%     14%     91%
>   Fragments:          3193    1979    1100     361
>
> A <100K free space of -8%!!!!!! I haven't seen this before. What is going on?
>
> This was run on an AS8400 running 4.0B with patchkit 9 applied.
>
Tom's answer:
I am only guessing about this, I haven't bothered to go look at the code (and
since very few of the people who will ever see your message have access to the
source code for the AdvFS defragment utility, the best you are likely to get
is going to be informed guesses), but I'd bet that there are rounding errors
(perhaps large ones) in the free space percentage calculations for the larger
buckets, and that the value computed for the <100K case is the sum of the %s
for the large buckets subtracted from 100%. -8% = 100% - (3%+14%+91%). Why
would there be such a large error? Well, if the free space percentages are
based on some kind of summing of the sizes of the fragments, with rounding or
some other calculation anomaly involved, you might get weird data.
Two questions: (1) what are you going to do next year, or even later during
this year, when V4.0B is NO LONGER SUPPORTED and is NOT YEAR 2000 COMPLIANT?
(2) What does it matter if those percentages are meaningless, as long as you
get the defragmentation you seek and the file system doesn't get trashed by
the defragger?
If it really matters to you, then (a) upgrade to at least V4.0D to get to a
supported release, and (b) file a formal problem report against the problem
if you see it again in the V4.0D version of the utilities.
Tom
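Tom's guess (that the <100K figure is simply 100% minus the larger buckets, and goes negative when those buckets over-count) can be sketched as follows. This is purely a hypothetical reconstruction of his guess, not the actual AdvFS code; the function name and the sample numbers are made up for illustration:

```python
def bucket_percentages(free_by_bucket, reported_total):
    """Hypothetical reconstruction of Tom's guess: each larger bucket's
    percentage is computed (and rounded) independently, and the <100K
    figure is derived as whatever is left of 100%."""
    large = [round(100 * size / reported_total) for size in free_by_bucket[1:]]
    # If the larger buckets over-count -- a stale total, rounding, or
    # blocks counted twice while the defragger is moving them -- the
    # derived smallest-bucket percentage goes negative.
    smallest = 100 - sum(large)
    return [smallest] + large

# A reported total smaller than the true sum of the buckets (e.g. a
# snapshot taken mid-pass) reproduces the -8% from the second pass:
print(bucket_percentages([20, 30, 140, 910], 1000))  # -> [-8, 3, 14, 91]
```

Under these assumed inputs the buckets sum to 1100 blocks against a reported total of 1000, so the three larger percentages come out as 3%, 14%, and 91%, and the remainder is -8%, matching the anomalous output.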
When we ran another defragment session after I posted this, we got a
sensible 0% free space reported.
As Tom correctly points out, 4.0B will soon no longer be supported, and there
are the Y2K issues. I am slowly working up my courage to do my first Tru64
upgrade (in fact, two of them), so these issues will not be relevant.
--
Gwen Pettigrew
Computer Officer
Institute of Theoretical Geophysics
Department of Earth Sciences
Downing Street
Cambridge
CB2 3EQ
UK
Tel 01223 333464
E-mail gwen_at_itg.cam.ac.uk
W3 http://www.itg.cam.ac.uk/ITG/members/gwen/
Received on Tue Jul 20 1999 - 15:16:25 NZST