Summary: Find problem on DUX 4.0D JPKT3

From: Loucks Guy <Guy.Loucks_at_det.nsw.edu.au>
Date: Wed, 11 Aug 1999 17:13:45 +1000

My many thanks to all respondents; as always, the answer arrived as soon as
I had emailed the question.

Brown, Phillip [Phillip.Brown_at_COMPAQ.com]
Phil Farrell [farrell_at_pangea.Stanford.EDU]
Grant Van Dyck [vandyck_at_zk3.dec.com]
Duncan Webbe [WebbeD_at_franklins.com.au]
Joerg Bruehe [joerg_at_sql.de]
Stephen Dennis [digikno1_at_zeta.org.au]
Nick Leonard [nickl_at_poole-tr.swest.nhs.uk]
Richard Bemrose [rb237_at_phy.cam.ac.uk]
Larry Griffith [larry_at_cs.wsc.ma.edu]
Neil 'Smokes' Fulton [fulton_at_warp6.cs.misu.nodak.edu]
Wayne Blumstengel [Wayne.Blumstengel_at_CRHA-Health.Ab.Ca]
Paul Crittenden [crittend_at_storm.simpson.edu]
rdonov - Ray Donovan [ray.donovan_at_acxiom.com]


The answer was obvious: a cron job was changing some access permissions so
that some web clients could look at some information, and this of course
updated the inode change time (ctime).

Using mtime instead therefore resolved the problem. In this instance, the
O/S was behaving properly.
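A quick way to see the distinction, as a minimal sketch (the file name is
illustrative; ls shows minute granularity, hence the sleep):

    touch /tmp/ctime-demo
    ls -l  /tmp/ctime-demo     # shows mtime (last modification)
    ls -lc /tmp/ctime-demo     # shows ctime (last inode change)
    sleep 61
    chmod o+r /tmp/ctime-demo  # a permissions change, like the cron job
    ls -l  /tmp/ctime-demo     # mtime is unchanged
    ls -lc /tmp/ctime-demo     # ctime has moved, so -ctime +5 never matches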

I will go back to being a bone headed knuckle dragging Neanderthal.

Some relevant responses below:

P.Brown:
Try this:
find /var/adm/syslog.dated -depth -type d -ctime +5 -print | xargs rm -rf

Actually, I'd test the output before adding the pipe to see what I'm
getting, i.e. is this really what I want?

find /var/adm/syslog.dated -depth -type d -ctime +5 -print

Once this is established, add the xargs pipe section. Careful, as always. By
the way, this is a much more efficient process than doing an exec for each
instance of a directory, so it should be much faster. I didn't test this, but
I think it's right. Hope this helps.
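One cautious way to stage that, as a sketch (the echo dry run is my
addition, and I've used -mtime per the resolution above):

    # Dry run: xargs echo just prints the rm command it would build
    find /var/adm/syslog.dated -depth -type d -mtime +5 -print | xargs echo rm -rf
    # Live run, once the list above looks right
    find /var/adm/syslog.dated -depth -type d -mtime +5 -print | xargs rm -rf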

P.Farrell:
I use GNU find with this command in my daily system management script,
where syslogtime is a variable defined as the number of days' worth of
logs to keep.

/local/bin/gfind /var/adm/syslog.dated -maxdepth 1 -mtime +$syslogtime \
    -exec /usr/bin/rm -rf {} \; -print

I always explicitly give the entire path for all commands in these
scripts (e.g., /usr/bin/rm instead of just rm) to avoid problems
with directories missing from the path or trojan horse programs.
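In context, the script fragment might look like this (syslogtime=5 is an
assumed value, and -mindepth 1 is my addition, to make sure GNU find never
matches /var/adm/syslog.dated itself):

    syslogtime=5
    /local/bin/gfind /var/adm/syslog.dated -mindepth 1 -maxdepth 1 \
        -mtime +$syslogtime -exec /usr/bin/rm -rf {} \; -print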

G.Van Dyck:
I've had trouble with a construct like this before. I think the problem
is with ctime, as it's defined as:

  -ctime number
      TRUE if the file inode was changed in the past number days, where
      number is interpreted as described in this reference page.

Try -mtime. I'm pretty sure it'll do what you want.

D.Webbe:
Use -mtime not -ctime.

J.Bruehe:
Some points:

1) You might need to check the manuals to find out _which_ actions
   mark the 'ctime' field of a directory for update.
   There will probably be some significant difference from 'mtime',
   but off-hand I cannot tell which one.

2) Have you tried a 'ls -lc' to see the 'ctime' field ?

3) If all else fails, you might remove (using 'find') all files
   in subdirectories which are older than five days,
   and then use 'rmdir' to remove all empty subdirectories
   (sketched after this list). IMHO it is safe to apply 'rmdir'
   to all subdirectories because it will fail on any which still
   contain entries.

4) If I were in your place, I would try to find names which sort
   in proper order: year-month-day (all numeric). Doing it that way,
   you might simply remove all but the last five listed
   ('ls -r | tail +6' should do it; see the sketch after this list).

5) Alternatively, sort them by date and then use approach 4).
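A minimal sketch of approaches 3) and 4), assuming Bourne shell and the
paths from the original question:

    # Approach 3: delete old files first, then prune directories;
    # rmdir fails on non-empty directories, so nothing live is lost.
    find /var/adm/syslog.dated -type f -mtime +5 -exec rm -f {} \;
    for d in /var/adm/syslog.dated/*; do
        rmdir "$d" 2>/dev/null
    done

    # Approach 4: with year-month-day names, 'ls -r' lists newest first;
    # 'tail +6' passes everything from the sixth entry on, keeping five.
    cd /var/adm/syslog.dated && for d in `ls -r | tail +6`; do
        rm -rf "$d"
    done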

N.Leonard:

I have had a similar problem; try rewriting the line from scratch, as this
seems to solve it. This is the current line in the root crontab (3.2c) 2100:

40 4 * * * find /var/adm/syslog.dated -depth -type d -mtime +3 -exec rm -rf {} \;

R.Bemrose:

Try replacing 'rm -rf {}' with '\rm -rf {}' just in case you have aliased
'rm' to say 'rm -i'.
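For illustration, a sketch of the situation this guards against (the alias
is hypothetical, ksh/sh syntax):

    alias rm='rm -i'      # a safety alias like this in the interactive shell
    rm -rf /tmp/testdir   # the alias turns this into 'rm -i -rf', prompting
    \rm -rf /tmp/testdir  # the backslash suppresses alias expansion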

L.Griffith:
        I checked my crontab and it says -mtime +7 (modify time rather
than status-change time; I keep 7 days' worth of logs). I'm not sure
whether there is any difference in this situation, or even why there
should be one, but it might be worth a try.

N.Fulton:
I had that kind of problem. It turned out to be our backup software (CA
ArcServe). The backup client was changing internal times on the files.
This meant that when "find ..." ran and checked the time, it didn't see
the files as being older than 5 days. It's probably a long shot that this
is your problem specifically, but maybe you have some other new program
running that is changing internal times on your files.

R.Donovan:
rm -fR


Guy R. Loucks
Senior Unix Systems Administrator
Networks Branch
NSW Department of Education & Training
Information Technology Bureau
Direct +61 2 9942 9887
Fax +61 2 9942 9600
Mobile +61 (0)18 041 186
Email guy.loucks_at_det.nsw.edu.au