HP OpenVMS Systems: Ask the Wizard
The Question is:

Is there a way to determine dynamically how much memory I can allocate through calloc? I have tried using getjpiw and then calculating this amount of memory by subtracting ppgcnt and gpgcnt from either pagfilcnt or 1 GB, whichever is smaller, but I always seem to be off by an indeterminate number of pages. Also, when calling calloc and free repeatedly, there appears to be less and less memory available according to the above calculation, although after a number of calls this seems to stabilize. Is there a simple explanation for this?

Regards, Jose.

The Answer is:

There is no effective, simple, or reliable means to predict whether any particular calloc, malloc, or lib$get_vm call will succeed, other than the simplest approach: try it, and capture any errors.

With 32-bit addressing and the traditional address-space segmentation model of OpenVMS virtual addressing, you are limited to a theoretical maximum of two (2) gigabytes (GB) of process virtual memory -- with the use of 64-bit addressing, rather more address space is available. That said, there is no entirely reliable means to determine how much memory remains in the heap, and the recommended approach is simply to capture and report an allocation error. (Even if you could predict the remaining memory, things can change between the check and the corresponding call, and there are also ways to allocate virtual memory that are not charged against your pagefile quota.)

The same approach holds for checking file access security and the like -- while you can implement the security check in your code, you are duplicating the check that will be performed within OpenVMS, and your code must still contend with potential changes to the object's security profile that can occur between the application-internal security check and the actual access attempt. In simplest terms, it is best simply to try the access, and to capture any potential failure(s). If you want to be absolutely certain that you have memory available, then you will want to pre-allocate the required memory.

The OpenVMS Wizard prefers to use the RTL LIB$GET_VM and related calls, as these provide far better control over memory management than the inherently generic C memory management calls. There are capabilities available -- such as the ability to flush an entire memory zone, useful when managing a pool of temporary memory because you do not have to track the individual memory blocks for deallocation -- that are entirely lacking with the C memory management calls. With the statistics calls available in the RTL LIB$ VM facility, you can also track various memory-management activities of your application.

When you repeatedly allocate and free memory using the C calls and several sizes of memory blocks, you can run afoul of the documented and intended behaviour whereby a previous memory allocation remains available for immediate reallocation. This long-standing allocation behaviour was deliberately implemented in support of (usually older) C code that implicitly assumes it.
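By way of illustration only -- not code from this answer, and assuming HP C on OpenVMS with default zone characteristics -- the following minimal sketch creates a private zone with LIB$CREATE_VM_ZONE, allocates from it with LIB$GET_VM, tests each returned condition value rather than trying to predict success, and then releases everything in the zone with a single LIB$RESET_VM_ZONE call:

```c
/* A hedged sketch only: zone-based allocation with the RTL LIB$ VM
   routines, assuming HP C on OpenVMS and the usual system headers.
   Each call is simply attempted and its condition value tested.     */
#include <stdio.h>
#include <ssdef.h>
#include <lib$routines.h>

int main (void)
{
    unsigned int zone_id = 0;
    unsigned int size = 4096;
    void *block = 0;
    unsigned int status;

    /* Create a private zone, accepting the default characteristics. */
    status = lib$create_vm_zone (&zone_id);
    if (!(status & 1))
        return status;

    /* Allocate a block from the zone; the call can fail, so test the
       returned condition value rather than predicting the outcome.  */
    status = lib$get_vm (&size, &block, &zone_id);
    if (!(status & 1))
    {
        printf ("lib$get_vm failed, status %u\n", status);
        return status;
    }

    /* ... use the memory ... */

    /* Release every block in the zone with one call -- no need to
       track the individual allocations for deallocation.            */
    status = lib$reset_vm_zone (&zone_id);

    /* Dismantle the zone entirely when it is no longer needed.      */
    status = lib$delete_vm_zone (&zone_id);

    return SS$_NORMAL;
}
```

The full argument lists for these routines, including the zone characteristics defaulted above, are described in the OpenVMS RTL Library (LIB$) Manual.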
Virtual address space is implemented using the process working set -- that part of process virtual memory that is currently loaded into physical memory -- and backing storage, that part of virtual memory that resides on disk storage in a section file or in a system pagefile. Operating system mechanisms known as paging and swapping move the contents of physical memory in and out of backing storage, while updating the data structures known as the page tables to reflect the current state of virtual memory. (The transition itself is known as a pagefault or a swap, though the difference between the two is largely irrelevant for this discussion.)

Three process virtual address space ranges exist on OpenVMS: P0, P1, and P2 -- though OpenVMS Alpha prior to V7.0 and OpenVMS VAX lack P2 space. P0 and P1 are the traditional portions of OpenVMS process address space and are one gigabyte in size each, while P2 space is the rather larger process-specific portion of sixty-four (64) bit address space.

The amount of pagefile storage that a process can allocate is limited by the process quota PGFLQUOTA. The amount of section file storage is limited by the available disk space and (if enabled) by the user's disk quota settings. Taken together, these two quotas are the basic limit on the amount of virtual address space that can be written to and modified beyond the process working set storage.

PPGCNT is a count of pages in the process working set; that is, the amount of physical memory in use by the process. GPGCNT is a count of global pages, which is effectively physical memory that is not charged against the process pagefile quota. Neither of these is specific to the amount of remaining process virtual address space, which is the limit behind the C and RTL LIB$ VM memory management calls. The amount of process virtual address space is (coarsely) limited by the VIRTUALPAGECNT system parameter and (for the part of virtual memory that is backed there) by the amount of process pagefile quota available. You can track peak process virtual memory use with the VIRTPEAK item available from $getjpi.

The only available estimate for remaining P0 virtual address space -- and it is a poor estimate -- is based on the virtual address (hex) 3FFFFFFF (the architected top of P0 space) and the FREP0VA address (the first free address within P0 space). This potential allocation is limited by the remaining pagefile quota -- and the calculation will overestimate the remaining address space, as there is overhead within individual memory allocations, and as there is likely other code active that can be consuming and releasing heap space. Further, applications can also use P1 or P2 space.
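As a rough illustration of the estimate just described -- an upper bound at best, and assuming HP C on OpenVMS with the standard $GETJPIW item-list conventions; the variable names and arithmetic are merely illustrative -- the following sketch retrieves FREP0VA and the remaining pagefile quota for the current process:

```c
/* A rough sketch of the (admittedly poor) P0 estimate described above,
   assuming HP C on OpenVMS.  The result is an upper bound at best.    */
#include <stdio.h>
#include <ssdef.h>
#include <jpidef.h>
#include <starlet.h>

/* Classic three-longword item-list entry. */
typedef struct
{
    unsigned short length;
    unsigned short code;
    void *buffer;
    void *retlen;
} item_t;

int main (void)
{
    unsigned int frep0va = 0;    /* first free P0 virtual address          */
    unsigned int pagfilcnt = 0;  /* remaining pagefile quota, pages/pagelets */
    unsigned int p0_remaining;
    unsigned int status;
    item_t items[3];

    items[0].length = sizeof frep0va;
    items[0].code   = JPI$_FREP0VA;
    items[0].buffer = &frep0va;
    items[0].retlen = 0;

    items[1].length = sizeof pagfilcnt;
    items[1].code   = JPI$_PAGFILCNT;
    items[1].buffer = &pagfilcnt;
    items[1].retlen = 0;

    items[2].length = 0;         /* list terminator */
    items[2].code   = 0;
    items[2].buffer = 0;
    items[2].retlen = 0;

    /* Current process: the pid and process-name arguments are omitted. */
    status = sys$getjpiw (0, 0, 0, items, 0, 0, 0);
    if (!(status & 1))
        return status;

    /* Bytes of P0 space below the architected top (3FFFFFFF hex);
       an overestimate, and further capped by the pagefile quota.       */
    p0_remaining = 0x3FFFFFFF - frep0va;

    printf ("P0 bytes below 3FFFFFFF: %u\n", p0_remaining);
    printf ("remaining pagefile quota: %u\n", pagfilcnt);

    return SS$_NORMAL;
}
```

Any figure computed this way should be treated only as a hint; the allocation itself must still be attempted and its return status checked.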
Some of the related topics include (1661), (3257), and (7552). Also of interest will be (2486), (3115), (3748), (3764), and (5455). For details of virtual memory, paging, swapping, pagefaults and pagefault handlers, page tables, and other memory-related operating-system details, please see one of the Internals and Data Structures Manuals for OpenVMS.