Hello -
I'm having a problem where an application on a client machine (see below)
occasionally complains about not being able to open an ftp connection, or
about being unable to pull a file from the server machine (see below) once
an ftp connection has been established. The application error may not be
definitive on its own, but for what it's worth it reads: "Can't build data
connection, Address already in use."
*Vmstat shows the server box is 70% - 90% idle.
*Netstat -m shows zero requests for mbufs denied on the server box.
*Netstat -A does show quite a few TIME_WAITs on the server coming from my
client boxes.
*Netstat -i shows no collisions, Ierrs, or Oerrs.
*Our network guys assure me all is well with the network. I have to trust
them, since I have no way to check behind them.
*I have NetRAINed interfaces, and have failed them back and forth just to
make sure nothing was flaky. The interfaces are running at FastFD (100 Mb/s
full duplex).
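Since the TIME_WAITs are the one notable finding so far, a quick way to see how many there are per remote host is to tally netstat output. The pipeline below is only a sketch: the heredoc stands in for hypothetical `netstat -an` output (field positions assume BSD-style netstat formatting, which Tru64 uses), so on the real boxes you would pipe `netstat -an` into the same awk program instead.

```shell
# Count TIME_WAIT sockets per remote host.  The heredoc is sample data;
# replace it with live `netstat -an` output on the actual machines.
awk '$6 == "TIME_WAIT" {          # field 6 is the TCP state
         sub(/\.[0-9]+$/, "", $5) # strip the port off the foreign address
         n[$5]++
     }
     END { for (h in n) print h, n[h] }' <<'EOF' | sort
tcp        0      0  10.0.0.5.21     10.0.0.9.40001   TIME_WAIT
tcp        0      0  10.0.0.5.20     10.0.0.9.40002   TIME_WAIT
tcp        0      0  10.0.0.5.21     10.0.0.7.40003   ESTABLISHED
EOF
# prints: 10.0.0.9 2
```

A lopsided count pointing back at one client box would support the ephemeral-port-exhaustion theory rather than a server-side load problem.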
The sominconn and somaxconn parameters have been increased to 65535 on both
the client and server boxes, and the boxes have been rebooted to make the
changes active. So I am pretty sure I'm not running out of sockets on the
listen side. The one thing I was looking at increasing was the
ipport_userreserved parameter on the client, to give it more ephemeral
ports, based on what my application is complaining about.
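The arithmetic behind that worry can be sketched: every FTP data connection that closes leaves its client-side port in TIME_WAIT for a while, so the usable port range has to cover (new connections per second) x (TIME_WAIT hold time). The numbers below are assumptions for illustration (a stock ipport_userreserved of 5000, a user port range starting at 1024, and a 60-second TIME_WAIT, i.e. 2 x MSL), not values read off these boxes:

```shell
# Rough capacity check with assumed values, not measured ones.
userreserved=5000        # assumed stock ipport_userreserved
userreserved_min=1024    # assumed bottom of the user port range
time_wait=60             # assumed seconds a closed port sits in TIME_WAIT

ports=$((userreserved - userreserved_min + 1))
echo "$((ports / time_wait)) new connections/sec sustainable"
# prints: 66 new connections/sec sustainable
```

By the same arithmetic, raising ipport_userreserved to 65535 would lift the ceiling to roughly 1075 new connections per second per client box, which is why it looks like the right knob if the client really is burning through ports.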
My main problem is that I can't figure out whether the server is having
trouble handling the connection load, or whether the client doesn't have
enough ports available to open connections to the server. There are quite a
few ftps happening between the systems.
Any tips and hints are welcome!
Server
ES40 4.0f pk6
4 x 667 processors
4GB of memory
utx:
Device_Char_Files = utx
Device_Char_Major = ANY
Device_Dir = /dev/utx
Device_Group = 0
Device_Mode = 644
Device_User = root
Module_Config_Name = utx
Module_Path = /sys/BINARY/utx.mod
Module_Type = Static
Subsystem_Description = UNIX terminal pseudo-device
UTX_Device_Number = 32
generic:
insecure-bind = 1
message-buffer-size = 32768
msgbuf_size = 32768
ipc:
dflssiz = 8388608
num-of-sems = 200
sem-mni = 200
sem-mns = 1000
sem-msl = 1600
sem-opm = 800
sem-ume = 800
shm-max = 2147483647
shm-min = 1
shm-mni = 1000
shm-seg = 450
ssm-threshold = 0
proc:
max-per-proc-data-size = 10737418240
per-proc-data-size = 10737418240
max-per-proc-address-space = 10737418240
per-proc-address-space = 10737418240
per-proc-stack-size = 8388608
max-proc-per-user = 1024
maxusers = 2048
rt:
aio-max-num = 8192
aio-max-percent = 5
aio-max-retry = 20
aio-retry-scale = 5
aio-task-mas-num = 4096
sigqueue-max-num = 128
vm:
vm-mapentries = 20000
vm-maxvas = 137438953472
vm-vpagemax = 20971520
socket:
somaxconn = 65535
sominconn = 65535
Client
ES40 4.0f pk6
4 x 667 processors
16GB of memory
utx:
Device_Char_Files = utx
Device_Char_Major = ANY
Device_Dir = /dev/utx
Device_Group = 0
Device_Mode = 644
Device_User = root
Module_Config_Name = utx
Module_Path = /sys/BINARY/utx.mod
Module_Type = Static
Subsystem_Description = UNIX terminal pseudo-device
UTX_Device_Number = 32
generic:
insecure-bind = 1
message-buffer-size = 32768
msgbuf_size = 32768
ipc:
msg-mnb = 65536
msg-tql = 1024
num-of-sems = 500
sem-mni = 1024
shm-max = 2147483647
shm-min = 512
shm-mni = 8192
shm-seg = 8192
vm:
ubc-maxpercent = 80
vm-mapentries = 2000
vm-maxvas = 137438953472
vm-maxwire = 137438953472
vm-vpagemax = 20971520
proc:
max-per-proc-address-space = 137438953472
max-per-proc-data-size = 17179869184
max-per-proc-stack-size = 33554432
max-proc-per-user = 4096
max-threads-per-user = 16384
maxusers = 4096
per-proc-address-space = 137438953472
per-proc-data-size = 17179869184
per-proc-stack-size = 33554432
task-max = 32768
thread-max = 65534
inet:
udp_sendspace = 16384
socket:
somaxconn = 65535
sominconn = 65535
Thanks,
Thomas Kemp
tom.kemp_at_dowjones.com
Received on Fri Jul 26 2002 - 18:49:13 NZST