for line in `cat -n $file`; do
If you don't supply a filename argument, grep reads from its standard input.
You could replace this loop with the single command: grep -n 'one' $file
If grep didn't have the -n option (as may have been the case with the original version of grep), you could use:
cat -n $file | grep one
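The grep -n alternative can be tried directly; here a small sample file is created just for illustration:

```shell
# grep -n numbers its matches itself, so no loop or cat -n is needed
printf 'one fish\ntwo fish\nred one\n' > sample.txt
grep -n 'one' sample.txt
# prints "1:one fish" and "3:red one"
```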
You are still thinking as if you were programming in BASIC instead of using the tools as filters.
At work, I produced a couple of one-liners that catalog DVD backups and create a CSV file of contents and disc label names. Another one-liner merges and sorts the CSV files (actually I changed it to a tab-separated file) and uses enscript to produce a nice-looking PostScript file. Then I use ps2pdf to convert that to a PDF, which I share on the server. Most of this is done using ls, sed, sort and uniq: simple text-based filters. By piping the output of one into the next and tweaking the arguments until the output looked right, I was able to create the scripts just by referencing the man pages and applying trial and error. I didn't have to use the "read" command once.
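The merge-and-sort step might look something like this sketch; the catalog data and filenames here are made up, since the real scripts aren't shown:

```shell
# Two hypothetical per-disc catalogs, tab-separated: filename<TAB>label
printf 'beta.avi\tDISC_02\nalpha.avi\tDISC_01\n' > disc1.tsv
printf 'alpha.avi\tDISC_01\ngamma.avi\tDISC_03\n' > disc2.tsv
# sort merges the files and orders the lines; uniq drops exact duplicates
sort disc1.tsv disc2.tsv | uniq > catalog.tsv
cat catalog.tsv
```

From there the combined catalog would be fed to enscript and ps2pdf as described.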
Also consider this construct:
for file in *.jpg; do
This is still a loop, but it uses wildcards to generate the list it iterates over.
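In full, the construct looks like this; the shell expands *.jpg before the loop starts, so the body sees one matching filename per pass (demo files are created here just so it runs):

```shell
touch photo1.jpg photo2.jpg
for file in *.jpg; do
    echo "processing $file"   # replace echo with the real command
done
```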
Another common way of doing the same thing is used when there are too many files to pass on a single command line:
find ./ -maxdepth 1 -iname "*.jpg" -print0 | xargs -0 -L 1000 ...
This uses the NUL character to separate the filenames and pipes the list to xargs, which limits how many arguments are handed to each invocation of the command.
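A runnable sketch of that pattern, with "ls" standing in for whatever command the dots elide, shows why the NUL separator matters: filenames containing spaces pass through intact.

```shell
touch 'with space.jpg' plain.jpg
# -print0 emits NUL-separated names; xargs -0 splits on NUL,
# running the command with at most 1000 names per invocation
find ./ -maxdepth 1 -iname '*.jpg' -print0 | xargs -0 -L 1000 ls
```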
When I back up files with K3b before deleting them, I save the k3b project file. It is a zip archive containing an XML file, "maindata.xml", that lists the files backed up.
I use sed to extract the filenames to delete, then pipe that through "tr" to convert newlines to NULs, and pipe the result to "xargs -0 rm" to remove the files. By piping the output of "sed" through "tr", I produce the same text stream that "find -print0" would, and use xargs the same way.
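A hedged sketch of that sed | tr | xargs chain follows. The <file name="..."/> layout below is invented for the demo; inspect your own maindata.xml first (e.g. with something like "unzip -p project.k3b maindata.xml") and adjust the sed pattern to match:

```shell
# Fake maindata.xml stand-in with one entry per line
cat > maindata.xml <<'EOF'
<file name="/tmp/demo one.txt"/>
<file name="/tmp/demo2.txt"/>
EOF
# Extract the paths, turn newlines into NULs, hand them to rm
sed -n 's/.*<file name="\([^"]*\)".*/\1/p' maindata.xml \
  | tr '\n' '\0' \
  | xargs -0 echo rm --    # drop "echo" to actually delete
```

The "echo" guard lets you see the exact rm command before running it for real.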
I would highly recommend installing the source packages for coreutils, sed, and awk: not to replace the tools you already have, but to produce PDF versions of the manuals from the .texi source files these packages supply.