Hello all, hope everyone is getting ready for the wonders of a three-day
work week (hopefully for some; most of us, I assume, are like me, which
= 24x7).
Anyway, on to my question. Once again we are running into the dreaded
"vmunix: fork/procdup: task_create failed. Code: 0x11" error, which I
understand indicates a process limit. That's fine, except I don't think
I'm actually hitting that limit, and I'm trying to figure out why this
happens. Here are the specifics:
DU 3.2C
Alpha 1000A 4/266, 128M RAM, ~20 GB of disk space (with tons of swap)
Web server for a few sites
Consistently under 200 total processes
Maxusers set to 128
Memory fine "Memory: Real: 36M/123M act/tot Virtual: 37M/446M use/tot
Free: 51M"
Uptime= load average: 0.03, 0.35, 0.40
So basically it's not a very busy box; there isn't excessive swapping or
anything. However, when web usage gets above 400 or so simultaneous
connections to one of the Ethernet cards, people's custom code (CGI and
the like) spits out those task_create errors. If we run the web servers
at 200 processes each they don't complain (nor does the kernel), but the
second someone tries to fork a new process, boom, we get the error.
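In case it helps pin this down, the failures show up on the CGI side as
fork() returning -1 with errno set to EAGAIN when the kernel can't create
another process. Here's a minimal sketch of logging that (an illustrative
wrapper, not our actual CGI code):

    /* Minimal sketch: report exactly why a fork() attempt failed.
     * Illustrative only -- not the real CGI code. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == -1) {
            /* EAGAIN here generally means a process-table or per-user
             * process limit was hit -- the same shortage the kernel
             * logs as "task_create failed". */
            fprintf(stderr, "fork failed: %s (errno %d)\n",
                    strerror(errno), errno);
            exit(1);
        }

        if (pid == 0) {
            /* child: the real CGI work would go here */
            _exit(0);
        }

        /* parent: reap the child */
        waitpid(pid, NULL, 0);
        return 0;
    }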
Now, the last time this happened I upped maxusers from 64 to 128, which
should have been plenty to solve the problem (this isn't a heavily used
interactive machine; it's a web server and that's about it), but to no
avail.
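One thing I still want to rule out is the per-user process ceiling, since
all of the web-server and CGI processes run under a single UID. A quick
throwaway check (assuming DU 3.2C reports it through the standard POSIX
sysconf() call) would be:

    /* Throwaway sketch: print the per-user process ceiling the system
     * reports (POSIX CHILD_MAX), to compare against the ~200 processes
     * the web-server UID is actually running. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long child_max = sysconf(_SC_CHILD_MAX);

        if (child_max == -1)
            printf("CHILD_MAX: no fixed limit reported\n");
        else
            printf("CHILD_MAX (max processes per user): %ld\n", child_max);

        return 0;
    }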
So I am scratching my head and wondering where to turn next... If anyone
could help me out, that would be wonderful.
-Dave
dboyle_at_liquidaccess.net
Look ma no .sig, I get more mail when I post to this list concerning my sig
than I do concerning my problem... <g>
Received on Tue Nov 26 1996 - 20:24:36 NZDT