vdump on node gecko (OSF/1 3.2c) is reporting over 1 GB of "bytes to dump" on a
65 MB filesystem containing less than 50 MB of data.
The bytes-to-dump figure from a dummy run of vdump (as shown below) is used by a
dump-management tool (Amanda) to schedule dumps. The size error causes the
dump to fail, because the utility concludes the dump won't fit on a single
volume on the DAT drive.
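For context, the scheduler presumably gets its number by scraping the
"Dumping ... bytes" line from vdump's diagnostic output. A minimal sketch of
that parsing (the helper name is mine, not Amanda's; the sample line is taken
from the log below):

```python
import re

def parse_vdump_estimate(line):
    """Extract the byte count from a vdump 'Dumping ... bytes' line.
    Illustrative helper only -- not Amanda's actual code."""
    m = re.search(r"Dumping (\d+) bytes, (\d+) directories, (\d+) files", line)
    if not m:
        return None
    return int(m.group(1))

# Sample diagnostic line, as produced by `/sbin/vdump -0 -f - / >/dev/null`:
line = "vdump: Dumping 1212253279 bytes, 107 directories, 1067 files"
print(parse_vdump_estimate(line))  # -> 1212253279
```

Any tool scheduling off that line inherits whatever error vdump makes in the
estimate, which is why the bogus figure matters here.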
The value is pretty close to correct on a variety of other nodes, and on this
node for a ~1 GB fileset in another domain. This is the only small fileset I
have available for testing. The domain was defragmented today, and no errors
have shown up in binary.errorlog for the last month or so.
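For what it's worth, comparing vdump's claim against the df output in the log
below puts the overestimate at roughly 24-25x (figures copied from the log;
the arithmetic is just a sanity check, not a diagnosis):

```python
reported_bytes = 1_212_253_279   # vdump's "Dumping ... bytes" figure
df_used_kb = 48_319              # 1024-byte blocks used, from `df -k /`
actual_bytes = df_used_kb * 1024

ratio = reported_bytes / actual_bytes
print(f"overestimate: {ratio:.1f}x")  # -> overestimate: 24.5x
```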
Any good guesses for a fix are appreciated.
--Grant
====Log=================
gecko* /sbin/vdump -V
$Date: 1995/06/12 13:18:39 $ Version 1.3
gecko* uname -a
OSF1 gecko.rsmas.miami.edu V3.2 148 alpha
gecko* /sbin/vdump -0 -f - / >/dev/null
path : /
dev/fset : root_domain#root
type : advfs
advfs id : 0x3065ee79.000d5060.1
vdump: Date of last level 0 dump: the start of the epoch
vdump: Dumping directories
vdump: Dumping 1212253279 bytes, 107 directories, 1067 files
vdump: Dumping regular files
gecko* df -k /
Filesystem 1024-blocks Used Avail Capacity Mounted on
root_domain#root 65536 48319 11720 80% /
gecko* showfdmn root_domain
Id Date Created LogPgs Domain Name
3065ee79.000d5060 Sun Sep 24 19:49:13 1995 512 root_domain
Vol 512-Blks Free % Used Cmode Rblks Wblks Vol Name
1L 131072 23440 82% on 128 128 /dev/rz0a
gecko* showfsets root_domain
root
Id : 3065ee79.000d5060.1.8001
Files : 1072, SLim= 0, HLim= 0
Blocks (512) : 96640, SLim= 0, HLim= 0
Quota Status : user=on group=on
spare
Id : 3065ee79.000d5060.2.8001
Files : 2, SLim= 0, HLim= 0
Blocks (512) : 32, SLim= 0, HLim= 0
Quota Status : user=on group=on
--
Grant Basham (305)361-4026 University of Miami
grant_at_rsmas.miami.edu RSMAS Computer Facility/Systems
Received on Tue May 21 1996 - 20:34:40 NZST