Hello all. I have some more information on the error I was getting from
fsck on a 400 GB disk set. After a power failure, it was failing with
this error:
    /sbin/ufs_fsck /dev/rrz0c
    ** /dev/rrz0c
    cannot alloc 200256002 bytes for lncntp
As mentioned in the last summary, I ran newfs on the disk set after
setting per-proc datasize up to 350 MB.
Many thanks to Steve Hancock, Dr. Tom Blinn, Chang Song and Alan Nabeth
for help on this issue.
This morning we took an opportunity to unmount and test fsck on the disk
set with data, and found that fsck again failed with the same error. I
set per-proc datasize up from 350 MB to 512 MB and rebooted, and after that
fsck worked fine (though it took a long time).
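For anyone else who hits this: on 4.0F the per-process data limits live in
the proc subsystem. A sketch of the change we made (the attribute names here
are from memory, so verify them with sysconfig -q proc on your own system):

```shell
# Query the current limits (attribute names assumed -- verify locally):
sysconfig -q proc per_proc_data_size max_per_proc_data_size

# Raise them by adding a stanza to /etc/sysconfigtab and rebooting;
# values are in bytes (512 MB shown here):
#
#   proc:
#       per_proc_data_size = 536870912
#       max_per_proc_data_size = 536870912
```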
This is version 4.0F on an ES40 with 1 GB of RAM, with two stripe sets planned
for news, 12x36 GB (binaries) and 6x18 GB (non-binaries) on redundant
HSZ70s.
Chang Song and Alan Nabeth pointed out that my original 350 MB was not
enough for fsck's allocations.
Chang Song wrote:
>
> Hi. It was a shortage of memory available to fsck. fsck allocates,
> up front, twice (maxinode+1) bytes plus the size of the largest block
> on the disk, so 350 MB is still not enough for fsck to run (twice
> 200256002 bytes plus the maximum block size).
>
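Working through Chang Song's arithmetic with the number from the error
message makes it clear why 350 MB was too small:

```shell
# Double the lncntp allocation reported in the fsck error, per the
# estimate above (the max-block term is small by comparison):
lncntp=200256002
need=$((2 * lncntp))
echo "$need bytes"                    # 400512004 bytes
echo "$((need / 1024 / 1024)) MB"     # about 381 MB -- over the 350 MB limit
```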
Alan Nabeth wrote:
>
> I wouldn't be surprised if fsck allocates more tables than
> just "lncntp". The size of many data structures that fsck
> needs is dependent on the size of the file system. I think
> this is one of the reasons that they stopped qualifying UFS
> larger than 128 MB (and the fact that AdvFS was around).
> Looking at the sources of the ULTRIX version of fsck (I don't
> have Tru64 UNIX sources), I'd guess that fsck may allocate
> another space the same size as the "lncntp" and one other
> large space for the fragment map. Assuming it uses one bit
> per fragment, it will need over 50 MB for a 400 GB file
> system.
>
> So, a rough guess is that fsck will need nearly 500 MB of
> virtual memory for a 400 GB file system. To run decently
> it will want that much physical memory.
>
> So, unless you intend to recreate the file system every time
> something happens to it, you may want to figure out just
> how much fsck will need or break up the file system.
>
>
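Alan's fragment-map figure checks out. A quick sanity check, assuming the
default 1 KB UFS fragment size (an assumption on my part; your newfs
parameters may differ):

```shell
# One bit per fragment (1 KB fragment size assumed) on a 400 GB
# file system:
fs_frags=$((400 * 1024 * 1024))         # file system size in 1 KB fragments
map_bytes=$((fs_frags / 8))             # one bit per fragment
echo "$((map_bytes / 1024 / 1024)) MB"  # 50 MB for the fragment map alone
```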
There was speculation of a superblock problem. Had that been the case,
the tools to use would have been fsck with the -b option to specify an
alternate superblock, and probably newfs -N to list the alternate
superblock locations. Today's test points to memory, though.
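For the record, that recovery would have looked something like this (a
sketch only; block 32 is the traditional first UFS backup superblock, but
use a location that newfs -N actually reports for your disk geometry):

```shell
# newfs -N prints what newfs would do -- including the backup
# superblock locations -- without writing anything to the disk:
newfs -N /dev/rrz0c

# Then point fsck at one of the reported alternates:
/sbin/ufs_fsck -b 32 /dev/rrz0c
```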
Thanks again!
Ann Cantelow
Received on Mon Mar 06 2000 - 22:10:23 NZDT