Linux - Newbie
This Linux forum is for members that are new to Linux.
The first -exec runs grep in quiet mode, yielding only an exit status (0 success, 1 fail). The second -exec command is executed based on the exit status of the preceding command. In other words, you tell find to go recursively through all files in the current directory and, if they match the pattern, remove them.
I put an echo before the rm command, for testing purposes. Check the results and when you are satisfied, strip out the echo and the files will be actually removed.
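The two-step -exec described above might look like this (the pattern, file names, and scratch directory are made up for a safe dry run):

```shell
# Scratch directory with sample files, for a safe test
tmpdir=$(mktemp -d)
printf 'hello pattern\n' > "$tmpdir/match.txt"
printf 'nothing here\n'  > "$tmpdir/keep.txt"

# First -exec: grep -q sets the exit status; the second -exec runs
# only for files where grep succeeded. The echo makes it a dry run.
find "$tmpdir" -maxdepth 1 -type f -name '*.txt' \
    -exec grep -q 'pattern' {} \; -exec echo rm {} \;

# Strip the echo to actually delete the matching files:
find "$tmpdir" -maxdepth 1 -type f -name '*.txt' \
    -exec grep -q 'pattern' {} \; -exec rm {} \;
```

After the second find, only keep.txt is left in the scratch directory.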
You could also use find to pipe the filenames to grep using xargs again.
find ./ -maxdepth 1 -type f -name "*.txt" -print0 | xargs -0 grep -l '<pattern>' | tr '\n' '\0' | xargs -0 rm
Note the use of "tr '\n' '\0'" to convert the arguments so that they are null separated. This simple trick even lets you feed the output of "ls" to xargs when filenames contain spaces or tabs (embedded newlines would still break it). This can be less clumsy than changing IFS.
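A quick illustration of the tr trick (the filenames here are invented for the demo):

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/file with spaces.txt" "$tmpdir/plain.txt"

# ls emits one name per line; tr turns newlines into NULs, so xargs -0
# keeps "file with spaces.txt" as a single argument instead of three.
count=$(ls "$tmpdir" | tr '\n' '\0' | xargs -0 -n1 printf '%s\n' | wc -l)
echo "$count"   # 2: each filename survives as one argument
```

Without the tr/-0 pair, default xargs word-splitting would turn the two files into four arguments.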
Thanks for the info. A couple of points:
1. There are about 50,000 files in the directory in question. With this approach of passing a list of filenames to rm, would it handle hundreds or possibly thousands of arguments?
2. How can I get a match if the file contains one of a selected set of words? I tried (using bathory's code)
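For point 2, one common answer (not from this thread, just a sketch) is grep's -E alternation, or equivalently several -e options:

```shell
tmpdir=$(mktemp -d)
printf 'contains foo\n' > "$tmpdir/a.txt"
printf 'contains bar\n' > "$tmpdir/b.txt"
printf 'neither word\n' > "$tmpdir/c.txt"

# -l lists matching files; -E enables alternation, so a file matches
# if it contains ANY of the words. Equivalent: grep -l -e foo -e bar
grep -lE 'foo|bar' "$tmpdir"/*.txt
```

This prints the paths of a.txt and b.txt, and skips c.txt.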
It's true that if you use a wildcard like *.txt and it expands into too many items (the limit is the kernel's ARG_MAX, not a library setting), you'll get the dreaded 'Argument list too long' error.
The usual trick is to get all the testable files into a dedicated dir and then process them in a loop:

for file in *
do
    grep -q '<pattern>' "$file" && rm "$file"   # test and remove here
done

It may be a bit slower, but it'll work.
If you truly need more speed than this, write a prog in Perl: http://perldoc.perl.org/
Look at the man page for the xargs command. At 50,000 files, the number of arguments may be too large for the shell. Remember that when a command runs, the arguments are held in an array, argv. There are some xargs options to limit the number of arguments handled at a time (the -L, -n or -s options).
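For example, -n caps the number of arguments per invocation, so the command is run in batches instead of once with a huge argv:

```shell
# With -n 2, echo is invoked three times, two arguments at a time.
printf '%s\n' one two three four five | xargs -n 2 echo
# prints:
#   one two
#   three four
#   five
```

The same batching applies when the command is rm: xargs keeps each invocation safely under the argument-size limit.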
Also, there is an option of grep that stops searching a file after the first match. This can save considerable time if you are grepping a very long file and the match is near the beginning. (edit: -l, --files-with-matches will do this)
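A small demonstration of -l on a file with many matches (the file is generated here for the test):

```shell
tmpfile=$(mktemp)
# 100,000 matching lines; -l prints the filename once and stops
# reading after the first match rather than scanning the whole file.
yes 'pattern here' | head -n 100000 > "$tmpfile"
grep -l 'pattern' "$tmpfile"   # prints the filename exactly once
```

That early exit is why the find/xargs pipelines above use grep -l rather than plain grep.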