Thanks to all the people who responded :-
Knut.Hellebo_at_nho.hydro.COM
gupe_at_elogica.com.br
alan_at_nabeth.cxo.dec.COM
peter_at_osm.co.uk
aad_at_olympus.nwnet.NET
pct_at_atom.ansto.gov.au
ccam_at_lux.latrobe.edu.au
rioux_at_ip6480nl.ce.utexas.EDU
Below are the responses :-
****************************************************************************
On Aug 12, 7:32am, Surash Natarajan wrote:
> Subject: csh Resource buffers limitation
>
> Hi,
> We are using DU 3.2c and recently we wrote a script that does recursive
> processing on the current directory and all its subdirectories. If the
> directory structure is small the script runs okay, but if the directory
> structure is big the script aborts. The problem was traced to the
> following statement :-
> foreach file ('cat $tmpdir/clean.out')
Try `cat $tmpdir/clean.out|xargs`
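For example, with a hypothetical per-file command process_one, xargs would
batch the names so that no single command line exceeds the limit:

# xargs splits the list into chunks that fit the argument-list buffer
cat $tmpdir/clean.out | xargs process_one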
Knut.Hellebo
****************************************************************************
Hi,
I don't know whether there is a workaround, but in the csh man page you will
find:
Limitations
The following are csh limitations:
+ Words can be no longer than 1024 bytes.
+ Argument lists are limited to 38912 bytes. However, the argument
  list space is shared with the space for environment variables;
  having a large list of environment variables can severely limit
  the allowable argument list length.
+ The number of arguments to a command that involves filename
  expansion is limited to 1/6th the number of characters allowed
  in an argument list.
+ Command substitutions can substitute no more characters than are
  allowed in an argument list.
+ To detect looping, the shell restricts the number of alias
  substitutions on a single line to 20.
+ Words can also be separated by double spaces.
You'd better play with metacharacters. I had the same problem and I
decided to split the work by using [a-e]*, [f-l]*, and so on. I don't
know whether this applies to you.
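A sketch of that split in csh (assuming the files are in the current
directory):

foreach file ([a-e]*)
    # process $file here
end
foreach file ([f-l]*)
    # process $file here
end

Each smaller glob expands to fewer words, which keeps the expansion under
the argument-list limit.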
Good luck,
Gustavo.
****************************************************************************
The command-line size limits are due to the size of fixed-size
buffers used by the exec family of system calls. There isn't
a parameter that can be changed to raise it. An alternative
might be to have a version of the script that runs on a list
of file names, and then use xargs to control the list size.
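A minimal sketch of that arrangement, assuming a hypothetical helper
script process_files that acts on the file names passed as its arguments:

# xargs invokes process_files repeatedly, keeping each argument
# list within the exec buffer limit
xargs process_files < $tmpdir/clean.out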
alan
****************************************************************************
I'm not trying to be smart here, but why not write the script in Bourne
shell? Your C shell users can still use it from their C shell prompt,
provided the first line in the script reads "#! /bin/sh".
I suggest you could do something like this in Bourne shell:
while read file; do
    ### do something with the $file variable ###
done < $tmpdir/clean.out
Hope this helps.
- Peter.
****************************************************************************
Every shell is going to have a command-line length limit. To avoid
this, use xargs instead, e.g.:
xargs rm <$tmpdir/clean.out
aad
****************************************************************************
Hi.
I suspect you may not get a direct answer to this. However, if you
are using "find" earlier in your script and it appears that you are
processing each file individually, you might like to look at the "-exec"
option to find. For each match that find makes, the command that
follows the -exec option is executed. Because this is done as each match
is made, the problem of long lists of files doesn't occur.
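For example (a sketch; the name pattern and the rm action are only
illustrative):

# find runs the command once per matched file, so no long
# argument list is ever built
find . -type f -name '*.tmp' -exec rm {} \;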
Another option is to write the script for /bin/sh. While it would probably
have the same problem if written like the loop above, you could do the
following:
grep -v '^$' $tmpdir/clean.out | (
    read file
    while [ "X$file" != "X" ]
    do
        # <stuff you would do in your loop, plus the read below>
        read file
    done
)
If you can guarantee there are no empty lines, then you can change the
grep to a cat. Note, however, that while inside the ( ), you don't have
access to any other input. There may be a csh equivalent to this, but
as I don't know csh very well I couldn't find one in the manual. Note
also that this second solution relies on clean.out having one entry
per line.
Paul Tyler
****************************************************************************
Hi Surash,
I'm no csh programmer, but I thought you should be using the ` quotes
instead of the ' ones. But then again, I could be completely wrong :)
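(For reference, csh command substitution does use backquotes, so the
statement would read:)

foreach file (`cat $tmpdir/clean.out`)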
Bye
Andrew Moar : Ph (03) 9479 3928 email A.Moar_at_latrobe.edu.au
Systems Programmer, Information Technology Services
La Trobe University, Melbourne
****************************************************************************
Another solution is to use ksh:

find whatever -print > /usr/tmp/filenames.scr
while read line
do
    # whatever you want to do with ${line}
done < /usr/tmp/filenames.scr
rm -f /usr/tmp/filenames.scr
rioux
****************************************************************************
Below is my original question :-
>>Hi,
>>We are using DU 3.2c and recently we wrote a script that does recursive
>>processing on the current directory and all its subdirectories. If the
>>directory structure is small the script runs okay, but if the directory
>>structure is big the script aborts. The problem was traced to the
>>following statement :-
>>foreach file ('cat $tmpdir/clean.out')
>>In this statement, the directory structure and its file names are kept in
>>the $tmpdir/clean.out file, and the script processes this file line by
>>line. The problem is that when this file contains a large listing of
>>filenames, it aborts with the following message :-
>>Too many words from ''.
>>
>>My question is: how can we control the resources that the system allocates
>>to the environment csh operates in? To our understanding there is some
>>sort of buffer that stores the input for the "foreach" command, and when
>>this limit is hit the script aborts.
>>Any response will be much appreciated.
>>
>>Surash Natarajan
>>OTIS Engineering Center, Penang, Malaysia
>>surash_at_omc.otis.utc.com
Received on Fri Aug 16 1996 - 06:47:47 NZST