Thanks to all who replied:
"John P . Speno" <speno_at_isc.upenn.edu>
farley_at_Manassas1.TDS-GN.LMCO.COM (Patrick Farley)
kstran - Keith Strange <kstran_at_acxiom.com>
"Allen, Mark R (PBD)" <Mark.Allen_at_pbdir.com>
Kurt Carlson <snkac_at_java.sois.alaska.edu>
Willig Reimund <Willig.Reimund_at_gdr.de>
alan_at_nabeth.cxo.dec.com
Many people who replied suggested that I simply needed more RAM and swap
space. Alan from DEC suggested that there are other considerations. (See
his message below.) I have decided to collect system statistics in order
to make a better determination of the problem and the best solution.
Many people suggested various tools, including the following (a few sample
invocations appear after the list):
top
monitor (gatekeeper.dec.com:/pub/DEC/monitor.alpha.tar.Z)
collect, also from gatekeeper (collects data and then displays it in a GUI)
There is also a built-in tool in CDE: under System Administration, click
Tuning, then Process Tuning. From there you can configure it to show
real/virtual memory per process in descending order.
Randy Hayman has several tools available, including:
an iostat-like tool:
ftp://raven.alaska.edu/pub/sois/uaio-v2.0b.tar.Z
a ps-like tool:
ftp://raven.alaska.edu/pub/sois/uakpacct-v2.0.tar.Z
vm_mon, syd, and disk performance monitoring tools:
ftp://raven.alaska.edu/pub/randy/perf_mon_tools/
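Before installing any of the above, the stock vmstat and iostat that ship
with Digital UNIX will show paging and disk activity over time. A minimal
sketch (the BSD-style interval argument is the usual form; check your man
pages):

    # Report virtual memory statistics every 5 seconds; watch the
    # free-page count and the page-out rate.
    vmstat 5

    # Report per-disk transfer statistics every 5 seconds.
    iostat 5

    # Show configured swap space and how much is in use.
    swapon -s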
alan_at_nabeth.cxo.dec.com:
Measuring free memory on modern UNIX systems is hard, because
most of them support a unified buffer cache (UBC). Older versions
set aside a fixed percentage of memory for the file system
buffer cache and left everything else for processes. Today,
the buffer cache and processes share the same pool of
available memory, hence "unified".
Before, a low memory condition meant memory had to be taken
away from processes, which nearly always meant page-outs.
On UBC systems the first quick source for memory is cache
buffers that haven't been used recently and don't hold
dirty (not-yet-written) data. If there aren't any of those,
then dirty buffers and program data are probably a toss-up,
with dirty buffers having a slight edge since they need
to be written anyway.
So a system with lots of I/O going on may appear to always
be "low" on memory, simply because the buffer cache has most
of it. Performance problems arise when normal processes
want memory while programs doing lots of I/O are using large
amounts of the cache. In such a case,
you can limit the amount of memory that the cache will use.
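[On Digital UNIX the cache limit Alan mentions is set through vm
subsystem attributes. A sketch of how to check it; the attribute names
(ubc-maxpercent, ubc-minpercent) are from my reading of the 4.x tuning
docs, so verify them against the sys_attrs_vm man page on your version:

    # List the vm subsystem attributes and pick out the UBC limits
    # (expressed as percentages of physical memory).
    sysconfig -q vm | grep ubc

    # A persistent change goes in /etc/sysconfigtab (assumed syntax), e.g.:
    #   vm:
    #       ubc-maxpercent = 70
    # and takes effect after a reboot.
]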
One of the VM subsystem configuration parameters is the amount
of free memory that the system considers "low". When it gets
to this point, the system will actively start looking for
ways to get memory back: invalidating cache buffers, writing
dirty buffers, writing process pages out, etc. I think this
number is 128 pages by default, but I haven't paid close
attention to such things in a long time.
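[The free-page thresholds Alan mentions should also be visible as vm
attributes; the exact names vary by version (I'd expect something like
vm-page-free-target), so grepping is safer than guessing:

    # Look for the free-page thresholds (values are in pages).
    sysconfig -q vm | grep free
]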
Knowing where the threshold lies between what's acceptable
and what isn't is hard, because it depends on the applications.
Background processing may be able to withstand higher levels
of paging than interactive use that wants quick response. One
thing to keep in mind is that each page-out operation
requires an I/O request, so a high page-out rate also means
lots of I/Os, which may saturate disks. To find the upper
bound for saturation I/O, do sequential page-size reads to
your page/swap device. That gives you a best case for how
much I/O a given disk can handle. Then, when the system is
paging or slow, see where the I/O is going. If it is only the
paging device(s) and the page-out rate is above normal, then
more memory could help. If the system is slow and there is
I/O on other devices, then it may be the I/O that's the problem.
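[A crude way to run the saturation test Alan describes is dd against the
raw device behind a swap partition, with the block size set to the Alpha
page size of 8 KB. The device name below is only a placeholder; substitute
whatever your swap actually lives on:

    # Best-case sequential read rate at page-size granularity.
    # /dev/rrz1b is a placeholder for your swap partition's raw device.
    time dd if=/dev/rrz1b of=/dev/null bs=8192 count=10000

    # 10000 reads x 8 KB = ~80 MB; divide by the elapsed time for a rough
    # upper bound on page-sized I/O per second. While the system is slow,
    # run iostat alongside to see which devices the I/O is hitting.
    iostat 5
]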
Ellen Davis
Ellen.Davis_at_uc.edu