My original question was about not being able to allocate a large
amount of memory from a program. Thanks to Alan Rollow and Fabrice Cuq
for their incredibly fast responses, I was able to check the one thing
that I had missed: the per-process limits. (This is something that makes
you feel kinda stupid once you realize the obvious.)
While my default data size limit was already at 128 MB, apparently
enough of it was used up by something or other that I could not
allocate 80 MB more of data space within the program. Raising the data
size limit allowed me to run the program.
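For what it's worth, the same per-process limit can also be checked from
inside a program with getrlimit(); a minimal sketch (plain POSIX, nothing
Digital-specific assumed):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* RLIMIT_DATA is the per-process data segment limit that
           malloc/sbrk run up against; RLIM_INFINITY shows up as a
           very large number here. */
        if (getrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("data size limit: soft %lu, hard %lu (bytes)\n",
               (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
        return 0;
    }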
Alan's reply:
V1.3 and V1.2 used a doubling allocation algorithm for
virtual memory. For example, if malloc uses sbrk to get
1 MB initially, it will want 2 MB next, then 4 MB, and
so on. Your 80 MB is probably up in the range of asking
for 128 MB in addition to what's already been allocated
and is running up against the data size limit.
Use the shell built-in to check the datasize limit and
see if you can raise it. The system data size looked
big enough for most things, so the dfldsiz is probably
the initial limit for the shell.
In V2.0 the doubling algorithm is replaced by one that
uses a constant size, which is much more friendly for
large allocations.
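To make the arithmetic concrete, here is a toy model of the doubling
growth Alan describes. It is not the real V1.2/V1.3 allocator, just the
back-of-the-envelope version of why satisfying roughly 80 MB can mean
asking the system for a 128 MB break:

    #include <stdio.h>

    #define MB (1024L * 1024L)

    int main(void)
    {
        long request = 80 * MB;   /* what the program asks malloc for    */
        long grabbed = 1 * MB;    /* toy model: heap starts at 1 MB ...  */

        /* ... and doubles every time it needs more from sbrk. */
        while (grabbed < request)
            grabbed *= 2;

        printf("to satisfy %ld MB, a doubling allocator grows to %ld MB\n",
               request / MB, grabbed / MB);
        return 0;
    }

The sequence 1, 2, 4, ..., 64, 128 MB is why an 80 MB request lands at a
128 MB data segment under that policy.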
In my case, I boiled my problem down to a simple program with a single
malloc call and upped the number of bytes until malloc returned NULL, so
each run of the program called malloc only once. The doubling behavior is
still worth remembering for other parts of the program, though.
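For reference, the test was essentially of this shape (a sketch, not the
exact program; the size is passed on the command line so it can be upped
run by run):

    #include <stdio.h>
    #include <stdlib.h>

    #define MB (1024L * 1024L)

    int main(int argc, char **argv)
    {
        /* Size to request, in MB, taken from the command line. */
        long mb = (argc > 1) ? atol(argv[1]) : 80;

        void *p = malloc((size_t)(mb * MB));   /* the single malloc call */

        if (p == NULL)
            printf("malloc of %ld MB failed\n", mb);
        else
            printf("malloc of %ld MB succeeded\n", mb);

        free(p);   /* free(NULL) is a no-op */
        return 0;
    }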
Thanks for the responses.
eugene
Received on Thu Apr 06 1995 - 20:42:27 NZST