SUMM: Preventing core file creation

From: Guy Dallaire <dallaire_at_total.net>
Date: Fri, 7 Mar 1997 22:32:51 -0500

Thanks to all who replied.

Here is my original post as well as the answers:

From time to time, my Oracle RDBMS shadow processes in a production database
crash and produce HUGE (~360Mb) core dumps. At this rate, my disks fill
pretty fast, and that could potentially bring my whole application DOWN
during a "shaky" weekend or night.

I was wondering if there is a way to prevent core dump creation for a
specific user (in my case the 'oracle' user). Oracle proceeds by creating a
directory called 'ora_xxxxx' (where xxxxx is the PID of the failing process)
in a parametrized location and puts the 'core' file in this directory.

I tried to tell Oracle to create its core dump directories and files in
/dev/null, but /dev/null is not a directory: you cannot create the ora_xxx
directory in /dev/null, so there is no way of throwing the dumps into the
bit bucket that way.

------------------- SOLUTIONS
-----------------------------------------------

a) In those cases where the program ALWAYS creates its core dump in
directory /x/y/z, just make a directory named /x/y/z/core and the program
will not be able to create its core dump there, because a directory named
'core' already exists.
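The trick can be sketched like this (the dump directory path here is a
hypothetical example, not anything from the original post):

```shell
# A program's fixed dump directory (hypothetical path for illustration):
DUMPDIR=/var/tmp/appdump

# Pre-create a *directory* named 'core' there. Any later attempt to
# create a plain file called 'core' at that path will fail, because the
# name is already taken by a directory.
mkdir -p "$DUMPDIR/core"
```

After this, an attempt such as `echo x > "$DUMPDIR/core"` fails with
"Is a directory", which is exactly what blocks the dump.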

In the case of Oracle, this does not work. I should have been clearer when
I posted my message. Here is how Oracle creates its core dumps:

Whenever an Oracle process dies, Oracle places the core dump under a
DIFFERENT directory each time. For example, if the Oracle parameter
core_dump_dest is /u01/oracle/dump and the Oracle shadow process with
PID=12345 crashes, Oracle will create the /u01/oracle/dump/ora_12345
directory and put the 'core' file in it.

I cannot predict the future (yet) so I just cannot create ora_xxxxxx/core
directories in advance. If I could predict the future, I would strongly
reconsider putting some money in the stock exchange...

b) Create a cron job that removes 'core' files and run it at a high
frequency. I already have one; it deletes old core files but runs only
once a day. Why waste CPU cycles? I would have to run it every 10 minutes
or so in case a bunch of processes die, and that's not a good thing IMHO.
If the frequency is not set high enough, my disk would fill too fast at
~360Mb a core dump.
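For what it's worth, such a cleanup could be sketched as below. The
dump-directory path, the 60-minute age threshold and the script name in the
crontab line are all assumptions for illustration, not the poster's actual
setup (note also that -mmin is a GNU find extension; a classic SysV find
only has -mtime, in whole days, which is too coarse for this):

```shell
# Run from cron every 10 minutes, e.g. with a crontab entry like:
#   */10 * * * * /usr/local/bin/clean_ora_cores.sh
#
# Assumed value of Oracle's core_dump_dest parameter:
DUMP_DEST="${DUMP_DEST:-/u01/oracle/dump}"

# Remove ora_<pid> dump directories older than 60 minutes.
# Errors from find descending into directories it just removed are
# harmless here, so they are discarded.
find "$DUMP_DEST" -type d -name 'ora_*' -mmin +60 \
    -exec rm -rf {} + 2>/dev/null || true
```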

c) Use the 'ulimit' shell built-in. I remember having read about that limit
before; people from the list reminded me of this function. There are
subtleties between the sh, ksh and csh versions, but I RTFM. The only
problem was that I did not remember in which FM it was...
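A minimal sketch of this approach (the suggestion to put it in the oracle
user's .profile is my assumption about where it would belong):

```shell
# sh/ksh: limit the core file size to 0 blocks for this shell and every
# process it spawns -- e.g. placed in the oracle user's .profile so that
# all Oracle background and shadow processes inherit it.
ulimit -c 0

# csh users would instead use:
#   limit coredumpsize 0

# Verify the new limit:
ulimit -c    # prints: 0
```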

                                                Thanks again
Received on Sat Mar 08 1997 - 05:16:35 NZDT
