SUMMARY: AdvFS reporting wrong space used

From: Dougal Scott <dwagon_at_aaii.oz.au>
Date: Mon, 29 Jan 1996 11:36:39 +1100

Original problem:
> We have a problem with AdvFS behavior. It reports different amounts of
> disk usage between df and vdump.
> % df -k /var
> Filesystem 1024-blocks Used Avail Capacity Mounted on
> alpha#var 456080 24449 272680 8% /var
>
> yet:
>
> % vdump 9f - /var >\! /dev/null
> ...
> vdump: Dumping 47108952 bytes, 50 directories, 308 files

I received three answers:
* Benoit Maillard <maillard_at_fgt.dec.com> thought there might have been sparse
  files hanging around, especially in /var/adm/crash. This was not the case:
  the discrepancy shows up on every filesystem, not just /var. (A quick way to
  check for sparse files is sketched below.)
* Saul Tannenbaum <stannenb_at_emerald.tufts.edu> said that all the AdvFS tools
  report usage inconsistently, and that he had been unable to get a useful
  response from Digital.
* Dr. Tom Blinn <tpb_at_zk3.dec.com> came through with the best answer, which
  I'll quote large chunks of:

  On an AdvFS file system, the "cluster size" (that is, the minimum
  amount of disk space allocated to a file) is 8K -- 8192 bytes. This
  differs from the UFS allocation of 1K (1024 bytes) as the smallest
  "chunk".

  Thus, if your file system contains a lot of small files, the amount
  of data that vdump needs to write to the backup medium may be
  substantially less than the total space allocated on the disk (as
  reported by df -k). If most of the files are less than 1K in size, for
  example, you will overallocate the space on the disk by about a factor
  of 8 -- there will be about 8 times as much disk space allocated as
  there actually is data in the files.
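
As an aside on the sparse-file theory: a sparse file's logical length (what
ls -l shows) can far exceed the disk actually allocated to it (what du
counts), which would also make df and vdump disagree. A minimal Python sketch
of the effect -- the path is a throwaway placeholder, and st_blocks is
Unix-specific:

  #!/usr/bin/env python
  # Minimal sketch: make a file with a large hole, then compare its
  # logical length (st_size) with the disk actually allocated to it
  # (st_blocks, reported in 512-byte units on Unix).
  import os

  path = "/tmp/sparse_demo"        # throwaway path, purely illustrative
  f = open(path, "wb")
  f.seek(10 * 1024 * 1024)         # logical length: 10 MB ...
  f.write(b"x")                    # ... but only one block of real data
  f.close()

  st = os.stat(path)
  print("logical size   : %d bytes" % st.st_size)
  print("allocated space: %d bytes" % (st.st_blocks * 512))
  os.remove(path)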
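
A rough way to observe the effect Dr. Blinn describes is to total, over a
whole fileset, the logical bytes in files (close to the figure vdump prints)
against the bytes actually allocated to them (close to what df counts). A
minimal Python sketch, not one of the AdvFS tools; it assumes st_blocks is
in 512-byte units, as on most Unixes:

  #!/usr/bin/env python
  # Minimal sketch: walk a fileset and total logical bytes (roughly the
  # figure vdump prints) against allocated bytes (roughly what df counts).
  # On AdvFS the smallest allocation is an 8K cluster, so a tree of tiny
  # files drives the allocated total well above the logical one.
  import os, sys

  def usage(root):
      logical = allocated = 0
      for dirpath, dirnames, filenames in os.walk(root):
          for name in filenames:
              try:
                  st = os.lstat(os.path.join(dirpath, name))
              except OSError:
                  continue                      # unreadable or vanished
              logical += st.st_size
              allocated += st.st_blocks * 512   # st_blocks: 512-byte units
      return logical, allocated

  root = sys.argv[1] if len(sys.argv) > 1 else "/var"
  logical, allocated = usage(root)
  print("%s: %d bytes of data, %d bytes allocated" % (root, logical, allocated))

On a fileset dominated by sub-1K files, the allocated total should come out
near 8 times the logical one, which is the factor-of-8 overallocation
estimated above.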

Thank you to all who responded.



Dougal Scott                   Australian Artificial Intelligence Institute
dwagon_at_aaii.oz.au           6/171 La Trobe St, Melbourne 3000
Programmer and Tech Support    Australia
Phone: +61 3 9663 7922         Fax: +61 3 9663 7937