
Maeltor 03-12-2007 11:57 AM

Finding files and then finding content within those files...
Hi everyone,

I'm looking for some assistance writing the correct find command:

I need to find all files named *System.cfg in a directory, and then I need to look into those files and find out which ones contain a given string, so I know which files I need to change.

We changed our time server and need to jump into all of our phone configs and change it from to

I know that I can use

find / | grep *System.cfg
That will find all the files named *System.cfg in the entire filesystem structure, but I only want to look recursively in a given directory and all directories beneath it.

Any help would be appreciated.

MensaWater 03-12-2007 12:10 PM

"find /" would find ALL files not just the *System.cfg ones.

The "/" in above tells it "root filesystem" (the base of all other filesystems).

To limit it to a specific subtree, just specify that subtree instead of "/".

Also, you don't have to grep for a name with find, because it has a flag for names called, appropriately enough, "-name".

So to get the list of files below, say, /etc named *System.cfg:


find /etc -name '*System.cfg'
(The pattern is quoted so the shell passes it to find unexpanded.)
You then want to check for specific information in these files, so you could pipe the output into xargs to grep for the string you're looking for in each file it finds.


find /etc/ -name '*System.cfg' | xargs grep <string>
Substituting your literal string value where I show <string> above.
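As a concrete illustration, here is that pipeline run against a throwaway directory (the directory name, file names, and the sntp.server string are all invented for the demo, not taken from any real phone config):

```shell
# Set up a scratch directory with two demo config files
mkdir -p /tmp/findgrep_demo/sub
echo 'sntp.server=old.example.com' > /tmp/findgrep_demo/phoneSystem.cfg
echo 'nothing relevant here' > /tmp/findgrep_demo/sub/otherSystem.cfg

# Quote the pattern so the shell hands it to find unexpanded
find /tmp/findgrep_demo -name '*System.cfg' | xargs grep 'old.example.com'
```

Because xargs hands grep more than one file, grep prefixes each matching line with its file name, which is how you spot the files that need changing.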

Maeltor 03-12-2007 12:18 PM

Hi jlightner,

Thanks for the help.

It has found the files I need. Now going off of that:

Can it just list the files containing that string? Right now it is printing the actual matching line containing the string I search for; I'd rather just have it print the name of the file containing the string.

Secondly, and I know this is getting away from the actual question, but is there a way to do a REPLACE? I'd like to just find and replace everything that says with

MensaWater 03-12-2007 12:44 PM

Sure, you can get a list of just the files:


for FILE in `find /etc -name '*System.cfg'`
do if grep <string> $FILE >/dev/null
   then echo $FILE contains the string
   fi
done
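For a plain list of matching files, there is also grep's -l flag (standard in POSIX grep), which prints only the names of files that contain the string, so the loop can be collapsed into one pipeline. A sketch against a scratch directory with made-up contents:

```shell
# Demo files (names and contents are invented)
mkdir -p /tmp/grepl_demo
echo 'timeserver=10.0.0.1' > /tmp/grepl_demo/aSystem.cfg
echo 'unrelated line' > /tmp/grepl_demo/bSystem.cfg

# -l: print only the names of files containing the string
find /tmp/grepl_demo -name '*System.cfg' | xargs grep -l 'timeserver'
```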

To do a replace (assuming these are standard ASCII text files) you can use the sed command:


for FILE in `find /etc -name '*System.cfg' 2>/dev/null`
do if grep <oldstring> $FILE >/dev/null
   then echo $FILE contains the string
        cp $FILE ${FILE}.20070312
        sed 's/<oldstring>/<newstring>/g' ${FILE}.20070312 >$FILE
   fi
done

I'd strongly urge you to try this on a test directory before doing it everywhere else to verify it has the results you want.

As always no warranty provided - try the above at your own risk.
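On systems whose sed supports -i (GNU sed does; giving it a suffix keeps a backup copy), the cp-then-sed pair can be collapsed into a single in-place edit. A sketch, with old.example.com and new.example.com standing in for the real server names:

```shell
# Demo file (path and contents invented for the sketch)
mkdir -p /tmp/sedi_demo
echo 'sntp.server=old.example.com' > /tmp/sedi_demo/demoSystem.cfg

# -i.bak edits the file in place, saving the original as demoSystem.cfg.bak
sed -i.bak 's/old\.example\.com/new.example.com/g' /tmp/sedi_demo/demoSystem.cfg
cat /tmp/sedi_demo/demoSystem.cfg
```

The same caution applies: test on a copy of one config first, since -i rewrites the file itself.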

Maeltor 03-12-2007 12:59 PM

Ok hopefully last question:

I'm assuming those were shell scripts that you gave me?

What was the >/dev/null in the first and the 2>/dev/null in the second?

Also in the second, I see you have the date. What is that used for?

MensaWater 03-13-2007 12:06 PM

Yes, shell scripts. Just create text files with the content. (You can include an interpreter line if you want.)

e.g. #!/bin/bash

That would ensure they run as bash shell scripts even if they were called by someone running csh for some reason. Not mandatory, but good practice.
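For example, a minimal standalone script with an interpreter line might look like this (the path /tmp/hello_demo.sh is just an example):

```shell
# Create a tiny script whose first line names the interpreter
cat > /tmp/hello_demo.sh <<'EOF'
#!/bin/bash
echo "hello from bash"
EOF

chmod +x /tmp/hello_demo.sh   # make it executable
/tmp/hello_demo.sh            # runs under bash regardless of the caller's shell
```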

/dev/null, a/k/a "the bit bucket", is a place you can send output you don't want. Essentially it just goes away. (Sort of like where Michael Valentine sent people in Stranger in a Strange Land.)

In the original >/dev/null I'm redirecting output of the command to null because I'm not interested in the output itself - just checking to make sure the command works.

In the 2>/dev/null I'm telling it to send stderr (Standard Error a/k/a File Handle 2) to /dev/null so that only stdout (Standard Output a/k/a File Handle 1) shows up.

The ">" by itself in the original, by the way, is sending stdout to /dev/null - the "1" is understood, so it doesn't have to be listed there, though it could be written "1>/dev/null" if I wanted.

You can see the difference by doing (assumes you do NOT have a file named ralph):
ls -l * ralph

The above will show all your files (in stdout) AND give an error that "ralph" doesn't exist (in stderr).

If you instead do:
ls -l * ralph >/dev/null

You'll see ONLY the error message that ralph doesn't exist - all of stdout (the files that DO exist) will have been sent to /dev/null.

If you then do:
ls -l * ralph 2>/dev/null
You'll see the list of files but NOT the error about file ralph not existing because you sent the error to /dev/null.

You can redirect into files if you wish instead of /dev/null. e.g.
ls -l * ralph >ls_stdout.out 2>ls_stderr.out
Would create the files "ls_stdout.out" and "ls_stderr.out". You'd see all the files that exist in the first file and the error about ralph not existing in the second file.

Finally you can do the most common usage which is redirecting BOTH stderr and stdout to a log file:
ls -l * ralph >ls.out 2>&1
The special syntax 2>&1 says to redirect file handle 2 (stderr) into whatever has been defined for file handle 1 (stdout). Since we previously redirected stdout to ls.out, stderr will also go to ls.out.

With this it is always important to define stdout BEFORE using the special syntax. If you did:
ls -l * ralph 2>&1 >ls.out
It would send your stderr to stdout, but stdout would not yet have been redefined as "ls.out", so it would use the default (your terminal session) instead. A lot of people make this mistake, so I wanted to mention it.
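You can verify the ordering rule in a scratch directory (the file names are invented, and the exact wording of ls's error message varies by version):

```shell
# Scratch directory with one real file; "ralph" does not exist
mkdir -p /tmp/redir_demo
touch /tmp/redir_demo/exists.txt
cd /tmp/redir_demo

# Correct order: stdout goes to ls.out first, then stderr follows it there
# (|| true: ls exits nonzero because ralph is missing; ignore that here)
ls exists.txt ralph >ls.out 2>&1 || true

# Wrong order: stderr joins the terminal's stdout, only the listing lands in ls2.out
ls exists.txt ralph 2>&1 >ls2.out || true
```

Afterward ls.out holds both the listing and the error, while ls2.out holds only the listing.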
