HP OpenVMS Systems - Ask the Wizard
The Question is:

At real-time priorities (16 and above), working sets stop growing once they reach 65535 pages; set the priority to 15, for example, and they can grow beyond 65535. As a workaround we might start our real-time processes at a low priority, wait until most or all of (our) database has faulted in, and then reset the priority to a higher value. (Assume the database is much larger than 65535 pages, and that there is ample free memory.)

I'm mostly just curious whether this behaviour will change with OpenVMS Alpha V7.1/V7.2 (we are not too far along with our Alpha port). I wondered whether it has bothered anyone else, say someone running with a multi-gigabyte database, and whether this limit has any tie-in with the SWAPPER's priority.

A remark regarding our use of real-time priorities: we use them as a somewhat dodgy solution for what others might nowadays use the class scheduling routines for, and to some extent to provide absolute preemption if we run tight on CPU resources.

PS. VMS is rock-solid stuff; keep up the good work.

The Answer is:

The Wizard will start with a few assumptions:

- The system has sufficient available physical memory; AUTOGEN will set the WSMAX value to one-quarter of physical memory, with an upper limit of 4 GB (512 K pages; 8 M pagelets).
- The working set quota is set to its maximum value of 512 MB (64 K pages; 1 M pagelets).

The automatic working set adjustment (AWSA) mechanism allows a process to extend its working set above the WSQUOTA value (512 MB), provided that a sufficient WSEXTENT has been configured and a sufficient number of page faults have occurred to necessitate the increase. In the Internals and Data Structures Manual, you will find that AWSA processing is disabled for real-time processes; the check is located in the quantum-end routine (SCH$QEND).

A working set larger than the working set quota is subject to swapper trimming. However, the SWAPPER runs at the lowest real-time priority (16), and it cannot affect a process that is running at the same or a higher real-time priority.

An appropriate approach for applications requiring larger amounts of physical memory (available in OpenVMS Alpha V7.1 and later) is the use of memory-resident sections. Assuming the target system has sufficient memory, and assuming a willingness to dedicate some of that memory to the specific application, you gain the following benefits:

- Memory-resident sections are not counted against the working set.
- Using the SYSMAN> RESERVED_MEMORY ... mechanism, you can reserve the desired amount of physical memory for this application. (And AUTOGEN will take this into account.)
- Memory-resident sections can share page tables: multiple processes accessing the same memory-resident section for reading and writing can share the same physical pages for use as level 3 page tables. Otherwise, 100 processes sharing a 4 GB section would require a total of 400 MB for use as level 3 page tables.
- If this memory is allocated in a single, contiguous chunk during the system bootstrap, it will be mapped using granularity hints (GH), resulting in some access performance improvement; granularity hints reduce the number of translation buffer entries required to access the section data.

Multi-gigabyte database caches use memory-resident sections. As a starting point, the Wizard recommends reading the "OpenVMS Alpha Guide to 64-Bit Addressing and VLM Features" to learn about this area. (A sketch of the memory reservation and the system-service call involved appears below.)

The existing WSMAX limitations are currently under review for a future OpenVMS release (beyond V7.2).
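To make the above more concrete, here is a minimal C sketch (an illustration only, not part of the original reply) of creating and mapping a memory-resident global demand-zero section with the SYS$CRMPSC_GDZRO_64 service on OpenVMS Alpha V7.1 or later. The section name DB_CACHE, the 4 GB size, and the SYSMAN reservation shown in the leading comment are hypothetical; the exact flag combination and argument types should be checked against the OpenVMS System Services Reference Manual for your release, and creating memory-resident sections normally requires the VMS$MEM_RESIDENT_USER rights identifier.

/* mrsec_demo.c -- illustrative sketch only: create and map a memory-resident
 * global demand-zero section on OpenVMS Alpha V7.1 or later.
 *
 * Assumed prerequisite (hypothetical name and size), done once from a
 * privileged account and followed by AUTOGEN and a reboot:
 *   $ RUN SYS$SYSTEM:SYSMAN
 *   SYSMAN> RESERVED_MEMORY ADD DB_CACHE /SIZE=4096 /ALLOCATE /PAGE_TABLES /ZERO
 */
#include <stdio.h>
#include <descrip.h>        /* $DESCRIPTOR (32-bit descriptors are accepted)  */
#include <gen64def.h>       /* GENERIC_64                                     */
#include <secdef.h>         /* SEC$M_* section flags                          */
#include <ssdef.h>          /* SS$_ status codes                              */
#include <starlet.h>        /* sys$crmpsc_gdzro_64                            */
#include <vadef.h>          /* VA$C_P2 (assumed home of the region constant)  */

int main(void)
{
    $DESCRIPTOR(gs_name, "DB_CACHE");       /* must match the reservation name */
    GENERIC_64 region_id;
    unsigned __int64 length = (unsigned __int64)4096 * 1024 * 1024; /* 4 GB   */
    unsigned __int64 ret_length = 0;
    void *ret_va = NULL;
    int status;

    /* Map in the default 64-bit program (P2) region.                         */
    region_id.gen64$q_quadword = VA$C_P2;

    /* SEC$M_GBL and SEC$M_DZRO are implied by this service, and the section
     * is memory resident by definition, so its pages are not charged against
     * the process working set (WSQUOTA/WSEXTENT).                            */
    status = sys$crmpsc_gdzro_64(&gs_name,      /* global section name        */
                                 0,             /* ident: default version     */
                                 0,             /* prot: deny no access       */
                                 length,        /* section size in bytes      */
                                 &region_id,    /* region to map into         */
                                 0,             /* section offset             */
                                 0,             /* acmode: caller's mode      */
                                 SEC$M_EXPREG | SEC$M_WRT,
                                 &ret_va,       /* receives mapped address    */
                                 &ret_length);  /* receives mapped length     */

    if (!(status & 1))                          /* low bit clear = failure    */
    {
        printf("sys$crmpsc_gdzro_64 failed, status = %d\n", status);
        return status;
    }

    printf("Mapped %u MB at %p\n", (unsigned int)(ret_length >> 20), ret_va);
    /* ... populate the cache; pages come from the reserved physical memory ...*/
    return SS$_NORMAL;
}

Other processes can then map the same section (for example with SYS$MGBLSC_64), sharing both the data pages and, when the reservation was made with /PAGE_TABLES, the level 3 page tables.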
OpenVMS Engineering is interested in customer feedback on applications that can (or cannot) use memory-resident sections. In particular, OpenVMS Engineering is interested in learning about customers with requirements for process working sets beyond 512 MB, or beyond 4 GB. Memory-resident sections require OpenVMS Alpha V7.1 and the applicable ECO kits.