OpenVMS Performance Management
3.5.6 Voluntary Decrementing
The parameters PFRATL and WSDEC, which control voluntary decrementing, are very sensitive to the application work load. For the PFRATH and PFRATL parameters, it is possible to define values that appear to be reasonable page faulting limits but yield poor performance. The problem results from the page replacement algorithm and the time spent maintaining the operation within the page faulting limits. For example, for some values of PFRATL, you might observe that a process continuously page faults as its working set size grows and shrinks while the process attempts to keep its page fault rate within the limits imposed by PFRATH and PFRATL. However, you might observe the same process running in approximately the same size working set, without page faulting once, with PFRATL turned off (set to zero).
Oscillation occurs when a process's working set size never stabilizes.
To prevent a site from encountering this undesirable extreme of
oscillation, the system turns off voluntary decrementing by initially
setting the parameter PFRATL to zero. Voluntary decrementing occurs
only if you deliberately turn it on.
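If you decide to experiment with voluntary decrementing, the relevant parameters (PFRATL, PFRATH, and WSDEC) are dynamic and can be examined and changed with the SYSGEN utility. The following is a minimal sketch; the value shown for PFRATL is illustrative only and must be chosen for your own work load:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE              ! work with the active (in-memory) parameters
SYSGEN> SHOW PFRATL             ! 0 means voluntary decrementing is off
SYSGEN> SHOW PFRATH
SYSGEN> SHOW WSDEC
SYSGEN> SET PFRATL 10           ! illustrative value only; setting it back
SYSGEN> WRITE ACTIVE            !   to 0 disables voluntary decrementing again
SYSGEN> EXIT
```

For a change that is to survive a reboot, add the corresponding entry to SYS$SYSTEM:MODPARAMS.DAT and run AUTOGEN rather than editing the active parameters directly.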
The following table summarizes adjustments to AWSA parameters:
3.5.8 Caution
You can circumvent the AWSA feature by using the DCL command SET WORKING_SET/NOADJUST. Use caution in disabling the AWSA feature, because conditions could arise that would force the swapper to trim the process back to the value of the SWPOUTPGCNT system parameter.
Once AWSA is disabled for a process, the process cannot increase its
working set size after the swapper trims the process to the SWPOUTPGCNT
value. If the value of SWPOUTPGCNT is too low, the process is restricted
to that working set size and will page fault heavily.
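As a brief illustration of the commands involved, the following DCL sequence disables and then re-enables AWSA for the current process; SHOW WORKING_SET confirms the limit, quota, and extent before and after:

```
$ SHOW WORKING_SET            ! display the current limit, quota, and extent
$ SET WORKING_SET/NOADJUST    ! disable automatic working set adjustment
$ SHOW WORKING_SET            ! verify the setting
$ SET WORKING_SET/ADJUST      ! restore AWSA when the experiment is over
```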
By developing a strategy for performance management that considers the desired automatic working set adjustment, you will know when the AWSA parameters are out of adjustment and how to direct your tuning efforts. Sites choose one of the following general strategies for tuning AWSA parameters:
The first strategy works best in the time-sharing environment where
there can be wild fluctuations in demand for memory from moment to
moment and where there tends to be some number of idle processes
consuming memory at any moment. The second strategy works better in a
production environment where the demand tends to be more predictable
and far less volatile.
The swapper process performs two types of memory management activities: swapping and swapper trimming. Swapping is writing a process to a reserved disk file known as a swapping file, so that the remaining processes can benefit from the use of memory without excessive page faulting. To better balance the availability of memory resources among processes, the operating system normally reclaims memory through a more complicated sequence of actions known as swapper trimming. The system initiates swapper trimming whenever it detects too few pages in the free-page list.
Trimming takes place at two levels (at process level and systemwide)
and occurs before the system resorts to swapping.
The swapper performs first-level trimming by checking for processes
with outstanding loans; that is, processes that have borrowed on their
working set extent. Such processes can be trimmed, at the swapper's
discretion, back to their working set quota.
If first-level trimming fails to produce a sufficient number of free pages, the swapper can trim at the second level. With second-level trimming, the swapper refers to the systemwide trimming value SWPOUTPGCNT. The swapper selects a candidate process, trims the process back to SWPOUTPGCNT, and outswaps it. If the deficit is still not satisfied, the swapper selects another candidate.
As soon as the needed pages are acquired, the swapper stops trimming on
the second level.
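To see where your system currently stands, you can display the relevant parameters with SYSGEN. This is a read-only sketch; FREELIM is named here as an assumption about the low threshold on the free-page list, which the text above describes only as "too few pages":

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW SWPOUTPGCNT      ! second-level trimming target, in pages
SYSGEN> SHOW FREELIM          ! assumed low threshold on the free-page list
SYSGEN> SHOW FREEGOAL         ! reclamation continues until this goal is met
SYSGEN> EXIT
```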
Because the swapper does not want to trim pages needed by an active process, it selects the processes that are candidates for second-level trimming based on their states. Memory is always reclaimed from suspended processes before it is taken from any other processes. The actual algorithm used for the selection in each of these cases is complex, but those processes that are in either local event flag wait or hibernate wait state are the most likely candidates. In addition, the operating system differentiates between those processes that have been idle for some time and are likely to remain idle and those processes that have not been idle too long and might become computable sooner.
By freeing up pages through outswapping, the system should allow enough processes to satisfy their CPU requirements, so that those processes that were waiting can resume execution sooner. After suspended (SUSP) processes, dormant processes are the most likely candidates for memory reclamation by the swapper. Two criteria define a dormant process as follows:
3.6.4 Disabling Second-Level Trimming
To disable second-level trimming, increase SWPOUTPGCNT to such a large value that second-level trimming is never permitted. The swapper will still trim processes that are above their working set quotas back to SWPOUTPGCNT, as appropriate. If you encounter a situation where any swapper trimming causes excessive paging, it may be preferable to eliminate second-level trimming and initiate swapping sooner. In this case, tune the swapping with the SWPOUTPGCNT parameter.
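A minimal sketch of this approach in MODPARAMS.DAT follows; the value shown is illustrative only, not a recommendation, and must be chosen for your own configuration:

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative entry only
SWPOUTPGCNT = 500          ! example value; large enough on this system that
                           ! second-level trimming is effectively never used
```

Then run AUTOGEN (for example, $ @SYS$UPDATE:AUTOGEN SAVPARAMS SETPARAMS FEEDBACK) so the new value is applied in a way AUTOGEN can track.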
For a process with the PSWAPM privilege, you can also disable swapping
and second-level trimming with the DCL command SET PROCESS/NOSWAPPING.
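For example, the following commands sketch how a suitably privileged user might do this; the process name SERVER_1 is hypothetical:

```
$ SET PROCESS/NOSWAPPING              ! current process; requires PSWAPM
$ SET PROCESS/NOSWAPPING SERVER_1     ! hypothetical name of another process
$ SET PROCESS/SWAPPING SERVER_1       ! restore normal swapping behavior
```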
On most systems, swapper trimming is more beneficial than voluntary decrementing because:
The AUTOGEN command procedure, which establishes parameter values when
the system is first installed, provides for swapper trimming but
disables voluntary decrementing.
On VAX systems, under certain circumstances, the operating system provides an alternative to costly outswap operations. A virtual balance slot (VBS) holds the mapping for a memory resident process that currently does not own a real balance slot. Virtual balance slots allow a virtually unlimited number of concurrent memory resident processes through the timesharing of available system address space limited by the system parameter MAXPROCESSCNT. Traditionally, the sizes of system virtual address (SVA) space and the largest process supported in the system have limited the number of concurrent memory resident processes. When this limit was exceeded, processes were swapped out. Excessive swapping can seriously degrade the overall performance of the system.
When to Use Virtual Balance Slots
When is it appropriate to use virtual balance slots? Typically, you will have adequate memory but no balance slots. The types of applications most likely to be affected are those that have a large number of processes resident in memory, either because of a large number of users (for example, ALL-IN-1 applications) or because of a large number of processes per user (such as DECwindows terminals). See the OpenVMS System Management Utilities Reference Manual for information about tuning virtual balance slots using MONITOR.
Enabling and Disabling Virtual Balance Slots
Virtual balance slots and their dynamic capabilities are enabled by default. As the system manager, you can enable or disable the use of virtual balance slots by using the bit-encoded system parameters VBSS_ENABLE and VBSS_ENABLE2. Table 3-1 describes the various bit settings of VBSS_ENABLE.
You must reboot the system after setting VBSS_ENABLE. VBSS_ENABLE2 enables and disables the dynamic capabilities of VBS. It is valid only when VBS is enabled. Table 3-2 describes the various bit settings of VBSS_ENABLE2.
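As a sketch, you can inspect the current settings with SYSGEN; the specific bit values to use come from Tables 3-1 and 3-2 and are not repeated here:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW VBSS_ENABLE       ! bit settings described in Table 3-1
SYSGEN> SHOW VBSS_ENABLE2      ! bit settings described in Table 3-2
SYSGEN> EXIT
```

To change VBSS_ENABLE, add the desired value to SYS$SYSTEM:MODPARAMS.DAT, run AUTOGEN, and reboot, as noted above.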
3.7 Active Memory Reclamation from Idle Processes
The memory management subsystem includes a policy that actively reclaims memory from inactive processes when a deficit is first detected but before the memory resource is depleted. The active memory reclamation policy acts on two types of idle processes:
3.7.1 Reclaiming Memory from Long-Waiting Processes
A candidate process for this policy would be in the LEF or HIB state for longer than the number of seconds specified by the system parameter LONGWAIT. By setting FREEGOAL to a high value, memory reclamation from idle processes is triggered before a memory deficit becomes crucial, resulting in a larger pool of free pages available to active processes. When a process that has been swapped out in this way must be swapped in, it can frequently satisfy its need for pages from the large free-page list.

The system uses standard first-level trimming to reduce the working set size. Second-level trimming still occurs with active memory reclamation enabled, but with a significant difference: when shrinking the working set to the value of SWPOUTPGCNT, the active memory reclamation policy removes pages from the working set but leaves the working set size (the limit to which pages can be added to the working set) at its current value, rather than reducing it to the value of SWPOUTPGCNT. In this way, when the process is outswapped and eventually swapped in, it can readily fault the pages it needs without rejustifying its size through successive adjustments to the working set by AWSA.
Swapping Long-Waiting Processes
Long-waiting processes are swapped out when the size of the free-page list drops below the value of FREEGOAL.
A candidate long-waiting process is selected and outswapped no more
than once every 5 seconds.
The active memory reclamation policy also targets processes that do the following:
Because it wakes periodically, a watchdog process is not a candidate for swapping but might be a good candidate for memory reclamation (trimming). For this type of process, the policy tracks the relative wait-to-execution time. When the active memory reclamation policy is enabled, standard first- and second-level trimming are not used. When the size of the free-page list drops below twice the value of FREEGOAL, the system initiates memory reclamation (trimming) of processes that wake periodically.
If a periodically waking process is idle 99 percent of the time and has
accumulated 30 seconds of idle time, the policy trims 25 percent of the
pages in the process's working set as the process reenters a wait
state. As with long-waiting processes, the working set size (the limit)
remains unchanged; only the pages are removed.
The system parameter FREEGOAL controls how much memory is reclaimed from idle processes. Setting FREEGOAL to a larger value reclaims more memory; setting FREEGOAL to a smaller value reclaims less.
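A minimal MODPARAMS.DAT sketch follows; the value is illustrative only and should come from your own analysis or from AUTOGEN feedback:

```
! SYS$SYSTEM:MODPARAMS.DAT -- illustrative entry only
FREEGOAL = 3000        ! example value; a larger FREEGOAL triggers reclamation
                       ! earlier and keeps a larger pool of free pages
```

Apply the change by running AUTOGEN, as described in the reference that follows.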
For information about AUTOGEN and setting system parameters, refer to
the OpenVMS System Manager's Manual, Volume 2: Tuning, Monitoring, and Complex Systems.
Because it reclaims memory from idle processes by trimming and swapping, the active memory reclamation policy can increase paging and swapping file use. Use AUTOGEN in feedback mode to ensure that your paging and swapping files are appropriately sized for the potential increase.
For information about sizing paging and swapping files using AUTOGEN,
refer to the OpenVMS System Manager's Manual, Volume 2: Tuning, Monitoring, and Complex Systems.
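The following sketch shows one way to run AUTOGEN in feedback mode so that it recommends, but does not yet change, the page and swap file sizes, and then to review the report:

```
$ @SYS$UPDATE:AUTOGEN SAVPARAMS TESTFILES FEEDBACK
$ TYPE SYS$SYSTEM:AGEN$PARAMS.REPORT        ! review the recommended file sizes
```

If the recommendations look reasonable, a later AUTOGEN run through the GENFILES or SETPARAMS phase applies them.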
Active memory reclamation is enabled by default. By using the bit-encoded system parameter MMG_CTLFLAGS, you can enable and disable the active memory reclamation mechanisms. Table 3-3 describes the bit settings.
If MMG_CTLFLAGS equals 0, then active memory reclamation is disabled. MMG_CTLFLAGS is a dynamic parameter and is affected by AUTOGEN.
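Because MMG_CTLFLAGS is dynamic, it can be changed on the running system. This sketch disables active memory reclamation entirely, using the value 0 from the note above; other bit values come from Table 3-3:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE ACTIVE
SYSGEN> SHOW MMG_CTLFLAGS
SYSGEN> SET MMG_CTLFLAGS 0     ! 0 disables active memory reclamation
SYSGEN> WRITE ACTIVE           ! dynamic parameter; takes effect immediately
SYSGEN> EXIT
```

Add the same value to SYS$SYSTEM:MODPARAMS.DAT if the change is to persist across AUTOGEN runs and reboots.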
Memory sharing (either code or data) is accomplished using a systemwide
global page table similar in function to the system page table.
Figures 3-6 and 3-7 illustrate how memory can be conserved through the use of global (shared) pages. The three processes (A, B, and C) run the same program, which consists of two pages of read-only code and one page of writable data. Figure 3-6 shows the virtual-to-physical memory mapping required when each process runs a completely private copy of the program. Figure 3-7 illustrates the physical-memory gains possible and the data-structure linkage required when the read-only portion of the program is shared by the three processes. Note that each process must still maintain a private data area to avoid corrupting the data used by the other processes.

Figure 3-6 Example Without Shared Code

Figure 3-7 Example with Shared Code
The amount of memory saved by sharing code among several processes is
given by the following formula:

   pages saved = pages of shared code x (number of processes sharing the code - 1)

For example, if 30 users share 300 pages of code, the savings are
300 x (30 - 1) = 8700 pages.
The small amount of overhead required to obtain these memory savings consists of the data-structure space required for the (1) global page table entries and (2) global section table entries, both of which are needed to provide global mapping.
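As a hedged illustration of how shared code is typically set up, the following commands link a shareable image and install it so that its read-only pages can be mapped as global sections; the image and object file names are hypothetical:

```
$ LINK/SHAREABLE=SYS$SHARE:MATHSHR.EXE MATHSUB      ! hypothetical names
$ INSTALL ADD SYS$SHARE:MATHSHR.EXE /OPEN/SHARED/HEADER_RESIDENT
$ INSTALL LIST SYS$SHARE:MATHSHR.EXE /FULL          ! verify the installed entry
```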
For more information about global sections, see the OpenVMS Linker Utility Manual.