Hi Managers,
I thought I should share this with the list as it might be useful.
I had a mail from Bob Vickers, who had experienced a similar problem
to mine in the past. As I said in my summary (but not in the
original message), I had done a vrestore from a different fileset
(a 9GB /usr partition with soft limits set to 9GB) onto a temp dir in
the problem fileset a little while before I noticed the problem. The
df output corresponded to the limits on /usr, and we both think that
quota metadata restored from the backup overwrites the quotas set on
the fileset that I restored to.
I guess you should either do the vrestore to the original fileset (in
a temp dir) or make sure you run chfsets on the fileset you restored
to after the restore completes. What I don't know is whether the new
quota comes into effect immediately or only shows up after a reboot.
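For reference, the two commands involved look roughly like this (a sketch using the domain/fileset name from my case below; showfsets and chfsets are Tru64 AdvFS utilities, and I'm quoting the flags from Tom's fix, so check the man pages and substitute your own names):

```shell
# List the filesets in the domain and their current quota limits
# (usr_0 is the domain from my case; use your own):
showfsets usr_0

# Clear the hard (-B) and soft (-b) block limits so no quota is enforced:
chfsets -B 0 -b 0 usr_0
```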
Good luck.
Dp.
------- Forwarded message follows -------
Thanx to Lawrie Smith for his help. The cigar goes to Tom Swigg for
the resolution. I don't know how, but the soft limit on the domain
was set to 18418400 (I wonder if this was due to a vrestore of a file
I did a week back from a different fileset). You can get this info
from `showfsets domain`. You change the soft (or hard) limit with
chfsets; in my case I zeroed the limits, as I don't want to enforce
any, so `chfsets -B 0 -b 0 usr_0`.
Again a big thanx to Tom for his help.
Dp.
------- Original message follows -------
Hi managers,
SYS: DS10 with external SCSI pedestal and 6 x 36GB disks. Tru64 5.1.
AdvFS filesystems throughout.
I did a df today and noticed that one domain was at 100% capacity. I
was a bit shocked because this domain was just over half full (about
14GB free) at the end of last week.
I did a find with mtime -7 and there are only a dozen files added in
the last seven days. So I did a showfdmn on the domain and it shows:
Id Date Created LogPgs Version Domain Name
3b04fd12.000b9595 Fri May 18 11:44:34 2001 512 4 usr_0
Vol 512-Blks Free % Used Cmode Rblks Wblks Vol Name
1L 71132000 34372560 52% on 256 256
/devices/disk/dsk1c
I figure that the showfdmn output is correct, but I am baffled by the
sudden discrepancy. I can't see anything that exactly replicates my
problem in the archives, but I saw something about quotacheck -v. I
did this and got:
root fixed: blocks 121 -> 41
system fixed: blocks 5671969 -> 5671889
Still df (and vdf) is reporting the wrong usage:
Filesystem 1024-blocks Used Available Capacity Mounted on
spl_0#usr0 9209200 9209200 0 100% /usr0
Id Date Created LogPgs Version Domain Name
3b04fd12.000b9595 Fri May 18 11:44:34 2001 512 4 usr_0
Vol 512-Blks Free % Used Cmode Rblks Wblks Vol Name
1L 71132000 34365504 52% on 256 256
/devices/disk/dsk1c
I wonder if df is looking at a completely different disk, as I have
an internal 9GB disk? I also ran verify with -a and then with -f, but
still no change.
Has anyone seen this before? I am almost certain that I have about
14GB free, as files are still being added there without trouble.
Thanx.
Dp.
------- End of forwarded message -------
------- End of forwarded message -------
--
Dermot Paikkos * dermot_at_sciencephoto.com
Network Administrator _at_ Science Photo Library
Phone: 0207 432 1100 * Fax: 0207 286 8668
Received on Mon Jul 12 2004 - 09:31:26 NZST