If you haven't seen this guide yet, stop now and read through it first. It will cover all the basic concepts you need to know.
http://mywiki.wooledge.org/BashGuide
Also read these three links. It's
vital in scripting to understand exactly how the shell handles arguments and whitespace:
http://mywiki.wooledge.org/Arguments
http://mywiki.wooledge.org/WordSplitting
http://mywiki.wooledge.org/Quotes
Now for some specific comments.
1) Let's start off with formatting. Clear and consistent formatting helps make your script easier to read and debug. Indent all of your loops, branches, and functions, and use blank lines to separate blocks of lines that naturally group together. Add some comments to explain your code. Putting the "do" on the same line as the for/while/until also looks a little nicer.
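For instance, the conventions above might look like this in a small hypothetical loop (the variable names are made up for illustration):

```shell
# Sum a few numbers -- "do" on the same line, body indented,
# and a comment explaining what the block does.
total=0
for n in 1 2 3; do
    # Accumulate a running total.
    (( total += n ))
done
```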
2)
Change this to...
Code:
if (( $? != 0 )); then
Numeric tests are better performed with
((..)), and for string and other complex tests the newer
[[ test is recommended.
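Here's a quick hypothetical sketch contrasting the two (the variable names are made up):

```shell
count=5
name="hello world"

# (( )) evaluates its contents as a C-style arithmetic expression;
# no "$" is needed on variable names inside it.
if (( count != 0 )); then
    result="nonzero"
fi

# [[ ]] does no word splitting on unquoted variables, and the
# right-hand side of == is treated as a glob pattern.
if [[ $name == hello* ]]; then
    match="yes"
fi
```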
3)
Code:
if [ `cat $i | grep read` ];
a.
$(..) is highly recommended over `..`
b. Always quote variable and other substitutions unless you want word splitting to occur; particularly inside
[ tests. See the links I gave above.
c. grep can read files directly, so you don't need cat.
Code:
if [[ "$( grep "read" "$i" )" ]]; then
d. But this can be simplified even further. If a successful match is made, grep will exit true, and that's all we're interested in. Now
grep -q will suppress output, meaning you can simply test the exit code only. This can even be done directly without the test brackets.
Code:
if grep -q "read" "$i" ; then
    echo "found"
fi
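To see why the quoting advice in (b) matters, consider this hypothetical sketch: with the quotes, [ sees a single argument; without them, the value gets word-split and [ chokes on the extra words.

```shell
var="two words"

# Quoted: [ receives one non-empty string, so the test succeeds.
if [ "$var" ]; then
    quoted_ok="yes"
fi

# Unquoted: [ receives "two" and "words" as separate arguments,
# which is a test syntax error (stderr suppressed here).
if [ $var ] 2>/dev/null; then
    unquoted_ok="yes"
else
    unquoted_ok="no"
fi
```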
4)
Code:
cut -d: -f1 /etc/passwd | grep "$USR" > /dev/null
This is backwards and inefficient. First use grep to get the line you want,
then cut it. Or use a tool like sed or awk instead that can do both the matching and the extracting (
grep -o would even work here).
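As a hypothetical sketch of the grep-then-cut ordering (and the awk alternative), using a small inline sample in place of the real /etc/passwd:

```shell
# Sample data standing in for /etc/passwd, just for illustration.
passwd_sample="root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin"
USR="root"

# Narrow to the matching line first, then extract field 1.
name=$(printf '%s\n' "$passwd_sample" | grep "^$USR:" | cut -d: -f1)

# Or let awk do both the matching and the extracting in one step.
name2=$(printf '%s\n' "$passwd_sample" | awk -F: -v u="$USR" '$1 == u { print $1 }')
```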
But actually, in this case, the cutting is superfluous; you already have the name, you just want to find out if it's in the file. So just grep for the line that contains the username, using
grep -q as in the previous example. You might consider anchoring the match to the beginning of the line, however, to avoid possible false matches elsewhere in the file.
Code:
grep -q "^$USR" /etc/passwd
Many tools like grep have "quiet" or "silent" options, meaning you don't have to dump the output to /dev/null. Always try to learn the capabilities of the tools you're using and see if it can do what you want internally, before turning to shell options.
5)
Code:
myscriptname=`basename $0`;
bash has
parameter expansion and other text-extraction abilities, making many tools like basename, dirname, and even cut, unnecessary much of the time.
Code:
myscriptname="${0##*/}"
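A few more parameter expansions along the same lines (the sample path here is hypothetical, just for illustration):

```shell
path="/usr/local/bin/myscript.sh"

base="${path##*/}"   # strip longest  */ prefix -> "myscript.sh" (like basename)
dir="${path%/*}"     # strip shortest /* suffix -> "/usr/local/bin" (like dirname)
noext="${base%.*}"   # strip shortest .* suffix -> "myscript"
```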
6)
Parsing ls is risky, difficult to do properly, and usually unnecessary. So don't use it. Most of the time you can use
globbing or
find instead (and use the null-separator option when you do, if possible).
In this case, the dotglob shell option will allow you to match hidden files.
Code:
shopt -s dotglob
for i in * ; do
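And when you do turn to find, the null-separator approach mentioned above might look something like this sketch (the temp-dir setup is only there to make the example self-contained; it copes even with spaces or newlines in filenames):

```shell
# Set up a throwaway directory with a few files, including a hidden
# one and one with a space in its name.
tmpdir=$(mktemp -d)
touch "$tmpdir/a.txt" "$tmpdir/.hidden" "$tmpdir/b c.txt"

# -print0 emits NUL-separated names; read -d '' consumes them safely.
count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))
done < <(find "$tmpdir" -maxdepth 1 -type f -print0)

rm -rf "$tmpdir"
```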
7)
Code:
if [ $i != $myscriptname ]
then
echo $i | xargs grep -l read
fi
xargs is a bulk command processor. As you seem to have realized, you shouldn't need to use it on a single entry like this if you're processing individual files inside a loop. But if you have a whole list of filenames from some input source, such as from a find or a file, it's a great tool to use.
Your buffering problem above is mostly caused by the use of the for loop, which expands the glob into a complete list of files to process before running. But instead of using xargs, you can probably use a
while+read loop, which takes in only a single line at a time.
The only question then is how to generate the list that it needs to read. I'm thinking that using printf to break up the globbing pattern by newlines may work, although it might encounter the same problem. We might have to switch to find.
Code:
while IFS= read -r i; do
    echo "$i"
done < <( shopt -s dotglob; printf "%s\n" * )
# or if that doesn't work:
while IFS= read -r i; do
    echo "$i"
done < <( find . -maxdepth 1 -print )
Another option may be to load the entire filelist into an
array first, then process them by index number instead. Using the index numbers in the for loop instead of names will probably bring the total line length down to less than the maximum.
Code:
shopt -s dotglob
files=( * )
for i in "${!files[@]}"; do
    echo "${files[i]}"
done
Finally, have a look at the
ulimit built-in command. You may be able to increase the size of the command buffer using it ("stack size", perhaps?).
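A sketch of how you might inspect the limits involved (the exact relationship is platform-dependent; on Linux the exec argument limit is derived from the stack limit, which is why ulimit -s can matter here):

```shell
# Current stack size limit, in kilobytes (may print "unlimited").
stack_kb=$(ulimit -s)

# Maximum total bytes of arguments plus environment for exec().
arg_max=$(getconf ARG_MAX)

echo "stack: $stack_kb KB, ARG_MAX: $arg_max bytes"
```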