Admins,
I have a question about AdvFS performance and the
limitations on special device files.
SCENARIO:
We have two clusters, old and new, both running Tru64
V5.1B SRM 6.6.
Application: multiple instances of the Progress RDBMS.
The old cluster:
2 x ES40 (4 x CPU, 8GB RAM), dual MC Interconnect in
failover.
Dual HSZ80 controllers & cache, BA370 cabinets.
The data disk partition is a two-volume AdvFS domain
comprising 20 x 36GB disks in RAID 0+1, all configured
at the HSZ level; no LSM in use.
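For reference, the domain layout can be confirmed with
the standard AdvFS tools; "data_dmn" below is only a
placeholder for the real domain name:

   # volumes and sizes within the data domain
   showfdmn data_dmn
   # filesets carried by that domain
   showfsets data_dmn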
The new cluster:
2 x ES40 M1 (4 x CPU, 8GB RAM) and 2 x ES45 M2
(2 x CPU, 8GB RAM), dual MC Interconnect in failover.
Dual FCA-2354 HBAs in each server, multipathed via two
Cisco MDS9216 SAN switches; storage provided by
15k RPM 72GB disks in RAID 5 with a 4GB front-end
cache (no LSM).
The data partition is 600GB, an amalgamation of three
LUNs from three separate RAID groups presented to the
OS as a single LUN.
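For clarity, the contrast with the old cluster is that
its domain spans two volumes. If splitting the new
600GB domain across the three underlying LUNs turned
out to be the answer, it would look roughly like this
(device names are hypothetical, and addvol needs the
AdvFS Utilities licence):

   # present the three LUNs individually and build one domain
   # from them, spreading I/O across three device queues
   mkfdmn  /dev/disk/dsk30c data_dmn
   addvol  /dev/disk/dsk31c data_dmn
   addvol  /dev/disk/dsk32c data_dmn
   mkfset  data_dmn data_fs
   mount -t advfs data_dmn#data_fs /data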
PROBLEM:
The old cluster is suffering very poor performance,
apparently an I/O bottleneck, with collect
demonstrating:
# DISK Statistics
#DSK NAME    B/T/L   R/S  RKB/S   W/S  WKB/S    AVS    AVW   ACTQ    WTQ   %BSY
   0 dsk20   0/0/0     0      0     0      0   0.00   0.00   0.00   0.00   0.00
   1 dsk21   0/1/0     0      0     0      0   0.00   0.00   0.00   0.00   0.00
   2 dsk22   0/2/0     0      0     0      0   0.00   0.00   0.00   0.00   0.00
   3 dsk23   0/3/0     0      0     0      0   0.00   0.00   0.00   0.00   0.00
   4 dsk24   0/4/0     0      0     0      0   0.00   0.00   0.00   0.00   0.00
   5 dsk26   0/5/0     0      0     0      0   0.00   0.00   0.00   0.00   0.00
   6 dsk0    1/0/1     0      3     0      0  61.53   0.00   0.01   0.00   0.60
   7 dsk1    1/0/2     0      0     0      0   0.00   0.00   0.00   0.00   0.00
   8 dsk2    1/0/3     0      0     0      0   0.00   0.00   0.00   0.00   0.00
   9 dsk3    1/1/1     0      0     0      0   0.00   0.00   0.00   0.00   0.00
  10 dsk4    1/1/2     0      0     0      0   0.00   0.00   0.00   0.00   0.00
  11 dsk5    1/1/3     0      0     0      0   0.00   0.00   0.00   0.00   0.00
  12 dsk6    1/2/1     0      0     0      0   0.00   0.00   0.00   0.00   0.00
  13 dsk7    1/2/2     0      0     0      0   0.00   0.00   0.00   0.00   0.00
  14 dsk8    1/2/3     0      0     0      0   0.00   0.00   0.00   0.00   0.00
  15 dsk9    1/2/4    27   2572     0      0  72.42   0.00   1.96   0.00  99.44
  16 dsk10   1/2/5     0     25     0      0  65.92   0.00   0.01   0.00   1.29
  17 cdrom2  2/0/0     0      0     0      0   0.00   0.00   0.00   0.00   0.00
There appears to be no problem with RAM: there is no
swapping, and ~2.5GB is sitting in the UBC. CPU
utilisation is fine too.
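For anyone who wants to double-check the memory side,
this is the sort of thing I am looking at (the vmstat
flag is from memory, so please check the man page):

   # swap space utilisation
   swapon -s
   # physical memory breakdown, including UBC pages
   vmstat -P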
The new cluster:
We see the same problem, with no improvement in
performance. Performance degrades noticeably as the
user session count grows and is significant at 20
users. Disk I/O utilisation on the data partition is
much the same as on the old cluster, >90% busy.
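The disk figures on both clusters were gathered with
collect, invoked roughly as follows (the 10-second
interval is arbitrary; option letters are from memory,
see collect(8)):

   # sample the disk subsystem only, every 10 seconds
   collect -i 10 -sd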
A call has been raised with HP regarding this.
Their response was not overly helpful, along the lines
of:
"No hardware or OS configuration fault noted"
QUESTIONS:
1) Is there a limit on the I/O throughput of a single
   special device file that we might have reached?
   (A query sketch follows below these questions.)
2) Can anyone offer insight, advice, or further
   questions that might lead me to a solution?
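To save a round of questions, this is roughly what I
can query on the running systems while looking for a
per-device ceiling; if there is a more telling
attribute, please point me at it (the hwmgr option is
from memory):

   # live values of every advfs and vfs subsystem attribute
   sysconfig -q advfs
   sysconfig -q vfs
   # device and path view of the disks behind the data domain
   hwmgr -show scsi

The sysconfigtab stanzas currently in place on this
member (memberid=1) follow for reference: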
generic:
memberid=1
msgbuf_size=524288
new_vers_high=1445673072572143232
new_vers_low=268488508
act_vers_high=1445673072572143232
act_vers_low=268488508
rolls_ver_lookup=0
kmem_debug=0x40
kmem_protected_size=32
version_vendor = Compaq Computer Corporation
version_avendor = COMPAQ
version_product = Tru64 UNIX
version_banner = Compaq Tru64 UNIX
vm:
swapdevice=/dev/disk/dsk0f,/dev/disk/dsk3f
vm_page_free_reserved=20
vm_page_free_min=30
vm-swap-eager = 1
vm-page-free-target = 2048
vm-ubcseqstartpercent = 50
ubc_maxpercent = 70
vm_ubcdirtypercent = 10
ubc_minpercent = 15
vm_page_prewrite_target = 5500
vm_segmentation = 1
proc:
max_proc_per_user=2000
max_threads_per_user=4096
round_robin_switch_rate=60
autonice=1
autonice_time=300
autonice_penalty=15
per_proc_data_size=10737418240
max_per_proc_data_size=10737418240
max_per_proc_address_space=10737418240
per_proc_address_space=10737418240
inet:
udp_ttl=60
udp_recvspace=262144
tcp_keepalive_default=1
tcp_keepidle=3600
ipport_userreserved=65000
ipqmaxlen=2048
socket:
sominconn=65000
somaxconn=65000
pts:
nptys=512
vfs:
bufcache = 1
noadd_exec_access = 0
ipc:
shm_max=1073741824
shm_seg=116
sem_mni=512
sem_msl=1024
shm_mni=512
sec:
acl_mode = enable
lsm:
lsm_rootdev_is_volume=0
clubase:
cluster_expected_votes=5
cluster_name=wm-hart-cluster
cluster_node_name=CH004
cluster_node_inter_name=CH004-ics0
cluster_node_votes=1
cluster_interconnect=mct
cluster_seqdisk_major=19
cluster_seqdisk_minor=143
cluster_qdisk_major=19
cluster_qdisk_minor=383
cluster_qdisk_votes=1
advfs:
AdvfsAccessMaxPercent=35
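Lastly, should anyone suggest adjusting the one AdvFS
attribute we have set, this is how I would expect to
check and change it on the running system, assuming
the attribute is run-time configurable (the value 50
is purely illustrative):

   # current live value
   sysconfig -q advfs AdvfsAccessMaxPercent
   # raise it at run time; make it permanent in /etc/sysconfigtab
   # only if it actually helps
   sysconfig -r advfs AdvfsAccessMaxPercent=50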
Received on Tue Mar 09 2004 - 10:43:21 NZDT