Not too long ago, I posted the following question:
>I searched the archives for this problem and did not come up with
>anything. We are experiencing a problem with cron deferring jobs. I am
>receiving the following in the /usr/adm/cron/log:
>
>! MAXRUN (25) procs reached Wed Nov 15 04:34:00 1995
>! rescheduling a cron job Wed Nov 15 04:34:00 1995
>! MAXRUN (25) procs reached Wed Nov 15 04:34:00 1995
>! rescheduling a cron job Wed Nov 15 04:34:00 1995
>! MAXRUN (25) procs reached Wed Nov 15 04:35:00 1995
>! rescheduling a cron job Wed Nov 15 04:35:00 1995
>! MAXRUN (25) procs reached Wed Nov 15 04:35:00 1995
>! rescheduling a cron job Wed Nov 15 04:35:00 1995
>! MAXRUN (25) procs reached Wed Nov 15 04:36:00 1995
>! rescheduling a cron job Wed Nov 15 04:36:00 1995
>
>After looking around in this file, I realized that we had been seeing this
>message ever since 2.0, just after putting this machine into production
>(we added several cron jobs at that point). Many cron jobs are simply
>not being run, without any errors (besides the one above) evident. I
>have searched the kernel files for a tuneable parameter called MAXRUN
>but did not find anything. SYSLOG reports nothing.
>
>Does anyone know what is causing this, and whether there is a patch
>available (and its ID)? This is causing our backup to not run, which is,
>needless to say, a critical problem.
>
>I will summarize.
First off, I wish to thank the following people for their responses:
Don Ritchey <dritchey_at_chipsi.com>
sysadmin_at_homer.bus.miami.edu
Tim Mooney mooney_at_toons.cc.ndsu.NoDak.edu
Jon Buchanan <Jonathan.Buchanan_at_ska.com>
<iwm_at_uvo.dec.com>
Steve Hancock (Digital Unix Support)
Steve Hancock explained what cron under Digital Unix was doing:
I was looking into your problem yesterday and I looked at the cron source
code for the MAXRUN parameter. The problem is the MAXRUN value is
hardcoded into cron. Here is the header entry in cron.c for it:
#define MAXRUN 25 /* max total jobs allowed in system */
So, to answer your question, max-proc-per-user has nothing to do with
the problem. It is simply that there are more simultaneous jobs queued to
cron than it was written to handle.
==========================================================
So the answer is that cron can't handle more than 25 simultaneous jobs.
Pretty limiting if you want to run more than 25, isn't it?
Tim Mooney provided a better model of cron job handling, based on his
experience with HP-UX:
I couldn't find anything in any of the pages I checked, but actually ran into
this same problem on an HP machine about a year ago, so I knew what
was going on.
Here's the deal. If you check the man page for `at', you will see that at
and batch and cron dump jobs into `queues'. It just so happens that by
default these queues have a maximum number of simultaneous jobs that
may be running -- if you hit that limit, new jobs don't get started until
you drop back down below the magic number.
Although I could find no mention of it anywhere in the man pages I looked
at, take a look at /usr/lib/cron/queuedefs. You can edit this file, and
change the number of simultaneous allowable jobs in any queue. Then
just restart cron and the problem goes away... You can also control
what `nice' level jobs run at when run with at from a certain queue, etc...
I'm assuming the HP-UX queuedefs file is the same, so I'm appending the
HP-UX man page on the subject.
=========================================================
I've omitted the man page for HP-UX queuedefs Tim kindly provided me,
as I am unsure of the legal ramifications of including such material. The
queuedefs file in Digital Unix has a pretty good description of what the
fields mean. I still haven't answered the question of what Digital Unix
actually does with the file, since at least one of the parameters is
hard-coded.
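For reference, a queuedefs entry is one line per queue letter (a for at,
b for batch, c for cron), followed by optional fields: the maximum number
of simultaneous jobs suffixed with j, the nice value suffixed with n, and
the retry wait in seconds suffixed with w. A hypothetical entry raising
the cron queue limit might look like:

    c.200j2n60w

Here 200j allows up to 200 simultaneous cron jobs, 2n runs them at nice
2, and 60w waits 60 seconds before retrying a deferred job. Check the
comments in your own queuedefs for the exact syntax, and restart cron for
any change to take effect.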
Since I posted the question, the deferring of jobs by cron has
mysteriously gone away. I did make changes to the queuedefs file (I
upped the maximum number of jobs from 100 to 200), but I don't think that
it made a difference.
I've requested that Digital Unix actually *use* the queuedefs file through
an SPR via Steve Hancock. We'll see where it goes.
Bob
haskins_at_myapc.com
Received on Tue Nov 21 1995 - 01:01:48 NZDT