user cron jobs in a disk service

From: Scott Mutchler <smutchler_at_gfs.com>
Date: Mon, 24 Aug 1998 12:30:46 -0400

Hello all,

We have two 4100s running DU 4.0D (with patch kit 2) and TruCluster 1.5 (with patch kit 2). We have a disk service that runs a database (Progress) and makes use of four AdvFS filesets. We have user accounts, with home directories in one of those filesets, that need to run cron jobs against the database, such as loading data and running reports.

When the disk service moves to a node, cron on that node needs to "wake up" and see that it now has jobs to do for these users. The cron on the other node needs to wake up and see that it should no longer be running those jobs. We have discovered that cron is different from most daemons in that kill -1 does not cause it to re-examine its files (in /var/spool/cron/crontabs). We are handling this in our failover scripts (the user-defined stop action and start action scripts).
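For what it's worth, the relevant piece of our start action logic looks roughly like the sketch below. The user names and the crontabs.inactive directory are placeholders, not our real names; since cron only reads /var/spool/cron/crontabs at startup, the daemon has to be killed and restarted rather than sent a signal:

```shell
#!/bin/sh
# Sketch of the start-action logic (user names and the
# crontabs.inactive directory are placeholders).
# Setting RUN=echo turns the destructive commands into a dry run.
RUN=${RUN:-}

CRONTABS=/var/spool/cron/crontabs
INACTIVE=/var/spool/cron/crontabs.inactive

start_service_cron() {
    # kill -1 does not make cron rescan its crontabs,
    # so stop the daemon outright.
    pid=`ps -e -o pid,comm | awk '$2 == "cron" { print $1 }'`
    [ -n "$pid" ] && $RUN kill $pid

    # Move the database users' crontabs into place on this node.
    for u in dbload dbreport; do        # placeholder user names
        [ -f "$INACTIVE/$u" ] && $RUN mv "$INACTIVE/$u" "$CRONTABS/$u"
    done

    # Start a fresh cron, which rereads everything in $CRONTABS.
    $RUN /usr/sbin/cron
}
```

The stop action script does the reverse: it moves those crontab files out of the way before restarting cron, so the jobs stop running on the node the service just left.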

We also tried the variation of completely killing and restarting cron. This works well once the nodes are booted. However, when a node boots it runs the stop action script: our stop action script looks for a cron to kill, adjusts the crontab files, then starts a new cron. The problem comes at /sbin/rc3.d/S57cron, which in turn starts up another cron daemon, leaving two crons running.
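One idea we are considering (a sketch only; I have not verified it is safe in every boot order) is to make the restart logic idempotent: rather than looking for a single cron, kill every running cron and then start exactly one, so the extra copy launched by /sbin/rc3.d/S57cron at boot does no lasting harm:

```shell
#!/bin/sh
# Sketch: an idempotent cron restart.  Killing every running cron
# before starting one means the copy started by /sbin/rc3.d/S57cron
# at boot does not matter; the last restart always wins.
RUN=${RUN:-}            # set RUN=echo for a dry run

restart_cron() {
    # Find every cron daemon, not just the first one.
    pids=`ps -e -o pid,comm | awk '$2 == "cron" { print $1 }'`
    [ -n "$pids" ] && $RUN kill $pids

    # Start the one cron that should survive.
    $RUN /usr/sbin/cron
}
```

The alternative would be to move S57cron out of /sbin/rc3.d entirely and let the cluster action scripts own cron, but then cron would not run at all until the service starts somewhere.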

So my question: how have others of you handled cron jobs in your clusters that need to move with a disk service?

Thank you

Scott Mutchler
Gordon Food Service Marketplace
smutchler_at_gfs.com
Received on Mon Aug 24 1998 - 16:31:34 NZST

This archive was generated by hypermail 2.4.0 : Wed Nov 08 2023 - 11:53:38 NZDT