Hmm, isn't perl a bit of overkill? To get the files that contain a certain searchstring I would use something like
grep -rc <searchstring> <where to search> | grep -v :0
The c flag makes grep count the occurrences of the search string in every file, so to get only the files that actually contain it, I pipe through an inverse grep on :0 to keep just the interesting files.
Now, if we want the "pure" file name, we have to strip the trailing ":<number>" that every hit has. I would perhaps do it like this:
grep -rc <searchstring> <where to search> | grep -v :0 | cut -d: -f1
It seems a bit cumbersome, but that's the best I could come up with right now.
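As an aside, grep's -l flag ("files with matches") does the same job in one step: it prints each matching file name once, with no ":<count>" suffix to strip off. A quick sketch on a throwaway directory (the file names here are made up for the demo):

```shell
# grep -l prints only the names of files that contain the pattern,
# so there is no count to filter and nothing to cut off.
tmpdir=$(mktemp -d)
printf 'path is /usr/users/me\n' > "$tmpdir/a.cgi"
printf 'no match in this one\n'  > "$tmpdir/b.cgi"

matches=$(grep -rl '/usr/users' "$tmpdir")
echo "$matches"
```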
To change a string to something else I would use sed, like this:
sed 's/<searchstring>/<replacement>/g' oldfile > newfile
I don't think the new and old files can be the same one. I've tried that and it was REALLY bad: the shell truncates the output file before sed ever gets to read it.
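A minimal sketch of what goes wrong, and the usual temp-file workaround (file names made up for the demo):

```shell
# The shell sets up the > redirection before running sed, so the
# output file is truncated to zero bytes before sed reads it.
f=$(mktemp)
printf 'old text\n' > "$f"
sed 's/old/new/' "$f" > "$f"    # same file on both sides: data is gone
wc -c < "$f"                    # file is now empty

# Safe pattern: write to a temp file, then move it into place.
printf 'old text\n' > "$f"
sed 's/old/new/' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
cat "$f"
```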
So, in your case, a script would look something like this:
Code:
#!/bin/bash
for file in $(grep -c '/usr/users' /usr/local/users/*.cgi | grep -v ':0$' | cut -d: -f1) ; do
    sed 's|/usr/users|/usr/local/users|g' "$file" > "$file.new"
done
Using | as the sed delimiter saves having to backslash-escape every slash in the paths.

Of course, with this script you'd then have to copy each .cgi.new file over the corresponding .cgi file. That can be avoided by just adding the line
cp "$file.new" "$file"
to the loop after the replacement line. Doing it in two steps, though, lets you check that it works properly first, and that it doesn't eat all your files.
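To make that "check first, then install" step concrete, here is a sketch on a throwaway directory (the path and file name are invented for the demo):

```shell
#!/bin/sh
# Run the replacement into a .new file, inspect the diff,
# and only then copy it over the original.
dir=$(mktemp -d)
printf '#!/usr/users/bin/perl\n' > "$dir/test.cgi"

sed 's|/usr/users|/usr/local/users|g' "$dir/test.cgi" > "$dir/test.cgi.new"

# diff exits nonzero when the files differ; that's expected here
diff "$dir/test.cgi" "$dir/test.cgi.new" || true

# Once the diff looks right, install the new version
cp "$dir/test.cgi.new" "$dir/test.cgi"
cat "$dir/test.cgi"
```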