Dear Managers,
Here is a summary of my posting asking about increasing the
aggregate I/O performance for a domain holding lots of small files.
I got only a couple of responses.
I may have to live with slow I/O for some time before I can recreate
the domain.
Pat O'Brien, who had similar problems, suggested "vfast", since he
is on 5.1B. He apparently had to destroy the domains and recreate
them to solve the issue.
As I'm on 5.1A, I still have to use the good old defragment utility.
The other response was from Alan Rollow; I'm just doing a cut-and-paste
job of his comments ;).
How big are the small files? If I remember correctly,
the basic allocation size for AdvFS is 8 KB (a page).
Files smaller than that can either be packed together
as fragments of the basic allocation size or each be
given their own page. How the small files are being
handled may affect how the defragmentation index
is reported.
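To put rough numbers on Alan's questions, here is a small Python
sketch of my own (not part of his reply; the 8 KB page size is an
assumption taken from his comment) that walks a tree, counts files
smaller than one page, and compares fragment packing against giving
each small file its own page:

    #!/usr/bin/env python
    # Survey file sizes under a directory tree. The 8 KB page size
    # below is assumed from Alan's comment, not queried from AdvFS.
    import os
    import sys

    PAGE = 8 * 1024  # assumed AdvFS basic allocation unit (one page)

    def survey(root):
        total = 0        # regular files seen
        small = 0        # files smaller than one page
        small_bytes = 0  # actual bytes held in those small files
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    size = os.lstat(os.path.join(dirpath, name)).st_size
                except OSError:
                    continue  # vanished or unreadable; skip it
                total += 1
                if size < PAGE:
                    small += 1
                    small_bytes += size
        # Packed as fragments, the small files need about small_bytes;
        # with one page each, they would consume small * PAGE bytes.
        print("%d of %d files are smaller than %d bytes" % (small, total, PAGE))
        print("bytes in small files (packed): %d" % small_bytes)
        print("bytes at one page per file:    %d" % (small * PAGE))

    if __name__ == "__main__":
        survey(sys.argv[1] if len(sys.argv) > 1 else ".")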
At the other end of things, how much free space does the
file system have? Below the page size, small files are
contiguous; it is large files that tend to be scattered
around. Defragment needs free space to reorganize and make
the large files contiguous. Without enough free space, it
may not do as well as it could if it had more.
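Since defragment's effectiveness depends on free space, it is worth
measuring that first. A minimal sketch using Python's os.statvfs;
the /data mount point is only a placeholder for your fileset:

    #!/usr/bin/env python
    # Report total and available space for a mounted fileset so you
    # can judge whether defragment has room to work.
    import os

    def free_space(mount_point):
        st = os.statvfs(mount_point)
        total = st.f_blocks * st.f_frsize
        free = st.f_bavail * st.f_frsize  # available to non-root users
        pct = 100.0 * free / total if total else 0.0
        print("%s: %d MB total, %d MB free (%.1f%%)"
              % (mount_point, total // 2**20, free // 2**20, pct))

    if __name__ == "__main__":
        free_space("/data")  # placeholder; substitute your mount point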
You might also want to check the AdvFS documentation to
see what it suggests for handling lots of small files.
If the AdvFS documentation isn't on the base system
documentation CDROM, check the documentation directory
of the AdvFS utilities on the Associated Product CDROMs.