Serious Problems with ASE Configuration !!!

From: Sreekumaran Padiyath <kumar.padiyath_at_psi.ch>
Date: Tue, 08 Feb 2000 11:23:41 +0100

   Since the beginning of this year we have been having serious problems with
   our ASE cluster. There are two HSZ40 controllers connected to a RAID array
   configured as RAID level 5. We are running Tru64 V4.0D at patch level 5,
   including the cluster patches. The message I receive on the console is the
   following:

Feb 8 10:07:40 psw223 vmunix: malloc_mem_alloc: no space in map   <---------
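
   In case it helps with the diagnosis, this is how I intend to watch the
   kernel memory allocator before the next hang (flags quoted from memory;
   they may differ on V4.0D, so please treat this as a sketch):

# Kernel memory allocator (malloc) statistics, per bucket and type:
vmstat -M

# Physical memory use (free, wired, UBC and managed pages):
vmstat -P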

   Afterwards I cannot execute any commands until I reboot the system (by
   pressing the reset button on the DEC3800). The two machines in the cluster
   are DEC3800s, each with 196Mb of memory. As our Compaq support person
   suggested, I added the following values to the vm and proc parameters,
   without success:
vm:
   ubc-maxpercent = 90

proc:
   maxusers = 128

   According to the support person I should now reduce ubc-maxpercent in vm
   to 80, which I will try next time.
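
   For completeness, this is the stanza form in which the suggested change
   would go into /etc/sysconfigtab (the value 80 is only what the support
   person suggested; I have not verified that it helps):

# tune.stanza -- values suggested by support, not yet verified
vm:
   ubc-maxpercent = 80

proc:
   maxusers = 128

   From memory the stanza file is merged with something like "sysconfigdb -m
   -f tune.stanza vm" (and the same again for proc) and the new values take
   effect at the next boot; the exact sysconfigdb options may differ, so
   please check the man page.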

  The output from "sysconfig -q vm" and "sysconfig -q proc" is:
vm:
ubc-minpercent = 10
ubc-maxpercent = 90
ubc-borrowpercent = 20
ubc-maxdirtywrites = 5
ubc-nfsloopback = 0
vm-max-wrpgio-kluster = 32768
vm-max-rdpgio-kluster = 16384
vm-cowfaults = 4
vm-mapentries = 200
vm-maxvas = 1073741824
vm-maxwire = 16777216
vm-heappercent = 7
vm-vpagemax = 16384
vm-segmentation = 1
vm-ubcpagesteal = 24
vm-ubcdirtypercent = 10
vm-ubcseqstartpercent = 50
vm-ubcseqpercent = 10
vm-csubmapsize = 1048576
vm-ubcbuffers = 256
vm-syncswapbuffers = 128
vm-asyncswapbuffers = 4
vm-clustermap = 1048576
vm-clustersize = 65536
vm-zone_size = 0
vm-kentry_zone_size = 16777216
vm-syswiredpercent = 80
vm-inswappedmin = 1
vm-page-free-target = 128
vm-page-free-swap = 74
vm-page-free-hardswap = 2048
vm-page-free-min = 20
vm-page-free-reserved = 10
vm-page-free-optimal = 74
vm-page-prewrite-target = 256
dump-user-pte-pages = 0
kernel-stack-guard-pages = 1
vm-min-kernel-address = 18446744071562067968
malloc-percpu-cache = 1
contig-malloc-percent = 20
vm-aggressive-swap = 0
vm-map-index-count = 64
vm-map-index-rebalance = 128
vm-map-index-enabled = 1
vm-map-index-hiwat = 4
vm-map-index-lowat = 2
new-wire-method = 1
vm-segment-cache-max = 50
vm-page-lock-count = 0
gh-chunks = 0
gh-min-seg-size = 8388608
gh-fail-if-no-mem = 1
private-text = 0
private-cache-percent = 0

proc:
max-proc-per-user = 64
max-threads-per-user = 256
per-proc-stack-size = 2097152
max-per-proc-stack-size = 33554432
per-proc-data-size = 134217728
max-per-proc-data-size = 1073741824
max-per-proc-address-space = 1073741824
per-proc-address-space = 1073741824
autonice = 0
autonice-time = 600
autonice-penalty = 4
open-max-soft = 4096
open-max-hard = 4096
ncallout_alloc_size = 8192
round-robin-switch-rate = 0
round_robin_switch_rate = 0
sched-min-idle = 0
sched_min_idle = 0
give-boost = 1
give_boost = 1
maxusers = 128
task-max = 1045
thread-max = 2088
num-wait-queues = 128
num-timeout-hash-queues = 128
enable_extended_uids = 0
enhanced-core-name = 0
enhanced-core-max-versions = 16


  I would like to know:

  1. How can I get rid of this problem?
  2. Which system parameter do I have to tune now?
     (vm-mapentries, vm-vpagemax, ubc-nfsloopback, or something else?)
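
   In case it helps, this is how I would check and, if it turns out to be the
   right knob, raise one of the candidate attributes; the value 400 is only a
   placeholder (double the current vm-mapentries of 200):

# Query the current value of a single attribute:
sysconfig -q vm vm-mapentries

# If the attribute is runtime-tunable, try the new value without a reboot:
sysconfig -r vm vm-mapentries=400

# For a permanent change, the equivalent /etc/sysconfigtab stanza would be:
vm:
   vm-mapentries = 400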
Received on Tue Feb 08 2000 - 10:25:00 NZDT
