Hello all,
I'm curious whether anyone has input on a problem we are having. One of
our developers runs a POSIX shell script that calls sqlldr (Oracle).
The program is attempting to load about 250,000 rows into a table (I
know, what has this got to do with DEC UNIX?). Oracle tech support says
to crank up the user's ulimit. Here are the user's settings:
nitman> ulimit -Ha
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 1572864
stack(kbytes) 49152
memory(kbytes) 4114136
coredump(blocks) unlimited
nofiles(descriptors) 4096
vmemory(kbytes) 1048576
nitman> ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 131072
stack(kbytes) 2048
memory(kbytes) 4114136
coredump(blocks) unlimited
nofiles(descriptors) 4096
vmemory(kbytes) 1572864
If I issue a "ulimit -d 1572864" it doesn't change; "ulimit -a" still
reports the values above. Am I missing something?
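For context, here is a minimal sketch (generic POSIX shell, not Tru64-specific, and the values are examples rather than our real settings) of how the soft and hard limits interact: the soft limit can be changed in the current shell up to the hard limit, but a ulimit issued in a subshell does not carry back to the parent, so where the command is run matters.

```shell
#!/bin/sh
# Sketch: soft vs. hard limits and subshell scope (generic POSIX
# shell, not Tru64-specific; 65536 is an example value).

# -S reads or sets the soft limit, -H the hard limit.
echo "current soft data limit: $(ulimit -Sd)"
echo "current hard data limit: $(ulimit -Hd)"

# A ulimit issued inside a subshell applies only to that subshell;
# the parent shell's limit is unchanged afterwards.
( ulimit -Sd 65536; echo "inside subshell: $(ulimit -Sd)" )
echo "back in parent:  $(ulimit -Sd)"
```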
And will the ulimit still affect the called program, or does it run in
a separate environment?
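On that second question, my understanding (a sketch assuming a generic POSIX shell, not anything Tru64-specific) is that resource limits are inherited across fork/exec, so a ulimit set in the calling script before it invokes sqlldr does apply to sqlldr itself:

```shell
#!/bin/sh
# Sketch: limits set with ulimit are inherited by child processes,
# so a limit changed in the calling script applies to the programs
# it launches (e.g. sqlldr). Lowering a soft limit needs no
# privilege, so the demo uses the descriptor limit (-Sn).

ulimit -Sn 256                      # change the soft limit in this shell
sh -c 'echo "child sees nofiles=$(ulimit -Sn)"'   # child inherits 256
```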
Received on Wed Jan 06 1999 - 21:37:52 NZDT