sed or grep : delete lines containing matching text
Hi,
I have been struggling with this for a really long time.
I want to match text patterns from one file and delete every line in a second file that contains a matching pattern from the first file.
I have two files, "emails" and "delemails". "emails" contains a list of 2000 email addresses, and "delemails" contains a list of 200 addresses that need to be deleted from "emails".
I know I can do it using sed or grep's -v option, but I can't get the syntax right.
grep simply finds lines containing a particular pattern; grep -v looks for lines that do NOT contain the pattern. Not sure how that relates to your problem....
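For what the OP describes, grep can also read the patterns themselves from a file with -f, so no per-line loop is needed. A minimal sketch, with made-up sample addresses standing in for the real lists:

```shell
# Hypothetical sample data, using the file names from the post.
printf 'a@x.com\nb@x.com\nc@x.com\n' > emails
printf 'b@x.com\n' > delemails

# -f reads one pattern per line from delemails; -F treats each pattern
# as a fixed string (dots are regex metacharacters otherwise); -x
# matches whole lines only; -v keeps lines matching none of the patterns.
grep -Fvxf delemails emails > file3
```

After this, file3 holds every address from "emails" that is not listed in "delemails".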
sed is a relatively complex tool. At its core it is used to find and change patterns of any size, but it has a whole bunch of options. Here is a very good tutorial: http://www.grymoire.com/Unix/Sed.html#uh-8
You may also want to look at awk.
When dealing with two different files, it may be easier to have a script that opens one file, finds the pattern of interest and then feeds that to the function that is going to operate on the other file.
If you post some of your actual code, we may be able to be more helpful on the approach you are trying.
Are you saying that your files are mbox files containing entire messages? Then simple pattern matching on separate lines won't be very helpful. To me it seems more reasonable to write a Perl script using modules from http://www.cpan.org/modules that are designed to handle email properly (have a look at Email::Folder and Email::LocalDelivery).
#!/bin/bash
# makedict.sh [make dictionary]
# Modification of the /usr/sbin/mkdict script.
#
# Original script copyright 1993, by Alec Muffett.
#*************************************************************************************#
# This modified script is included in this document in a manner consistent with the
# "LICENSE" document of the "Crack" package that the original script is a part of.
# This script processes text files to produce a sorted list of words found in the files.
# This may be useful for compiling dictionaries and for lexicographic research.
#*************************************************************************************#
# Usage: /root/Desktop/makedict.sh files-to-process
#
# or in this case:
# cat /root/Desktop/delemails /root/Desktop/emails > /root/Desktop/cattedlist
#
# NOTICE: look at /root/Desktop/cattedlist -- there is one line that could have
# 2 email addresses combined, like this: Someone@crap.comSomeone@crap2.com
# Then:
# /root/Desktop/makedict.sh /root/Desktop/cattedlist > /root/Desktop/cleanlist.txt

E_BADARGS=65

if [ ! -r "$1" ]        # Need at least one valid file argument.
then
    echo "Usage: $0 files-to-process"
    exit $E_BADARGS
fi

# The processing stage was missing from the post; the pipeline below is a
# minimal reconstruction from the description later in the thread: split the
# input into one entry per line, sort, and keep only lines that occur exactly
# once ("uniq -u"), which drops every address present in both input lists.
cat "$@" |
tr -s ' \t' '\n' |
sort |
uniq -u
In the above example, I need a sed command that would take input from file2, one line at a time, delete email1@email.com and email2@email2.com from file1, and output the result to file3.
Actually, grep -Ev 'crap0|crap1|crap2|crap3' is very close to what I want, but I want to read the regular expressions from a file. Output will be fairly simple, like >> file3.
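If it has to be sed, one route is to generate a sed script from the deletion list first: turn each address into an anchored /.../d command, then apply the generated script to the main file. A sketch with made-up addresses (del.sed is a hypothetical scratch file):

```shell
# Hypothetical sample data.
printf 'keep@a.com\nbad@b.com\n' > emails
printf 'bad@b.com\n' > delemails

# Escape the dots so they match literally, then wrap each address in an
# anchored delete command:  bad@b.com  ->  /^bad@b\.com$/d
sed -e 's/[.]/\\./g' -e 's|.*|/^&$/d|' delemails > del.sed

# Run the generated script against the main list.
sed -f del.sed emails > file3
```

This keeps sed in charge of the deleting while the pattern list still lives in a file, which seems to be what was asked for.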
If this is what you want to do, you can do it with that script.
What the script does is cat both files into one big file [emails+delemails], then it looks for duplicates and outputs only the unique email addresses [uniq -u], deleting the duplicates. Since both emails and delemails were catted together, you end up with an email list that no longer contains the addresses you wanted to delete.
And that "cleanlist.txt" will contain only the emails you wanted. Just save the script as "makedict.sh", modify the paths, and it will work; I tested it ;-)
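Stripped of the dictionary machinery, the core of that approach is a single pipeline. A sketch with made-up addresses:

```shell
# Hypothetical sample data.
printf 'a@x.com\nb@x.com\nc@x.com\n' > emails
printf 'b@x.com\n' > delemails

# Every address listed in delemails now appears twice in the combined
# stream, so "uniq -u" (print only lines that occur once) drops it.
sort emails delemails | uniq -u > cleanlist.txt
```

Note that this relies on each address appearing exactly once in each input file.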
Hmmm... a very promising solution. It's very close to what I wanted, except for one thing.
The file "delemails" may contain some emails that do not exist in the file "emails", and I want those to be ignored. In the solution you have mentioned, addresses that are in "delemails" but not in "emails" will not be duplicates. So the "cleanlist" output will contain emails that existed in "delemails" but did not exist in "emails".
Is there no sed command that takes its input from a file to delete emails?
I could use Excel to append a "-d" (or whatever) to the beginning of each line in delemails so it looks like:
As a solution in that line of thought, you could first pipe both files to "(sort | uniq -d)", which returns only the lines that appear twice. That result may then be subtracted from the first file as before.
But you'll have to make sure that neither of the two input files contains duplicate lines itself (possibly by running "uniq -u" on each file separately).
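Another way to do that subtraction on sorted input is comm, which can print only the lines unique to the first file; addresses that appear only in delemails are then simply ignored rather than leaking into the output. A sketch with made-up addresses:

```shell
# Hypothetical sample data; ghost@x.com is NOT in "emails".
printf 'a@x.com\nb@x.com\n' > emails
printf 'b@x.com\nghost@x.com\n' > delemails

# comm needs sorted input.
sort emails > emails.sorted
sort delemails > delemails.sorted

# -2 suppresses lines found only in delemails, -3 suppresses lines
# common to both, leaving only the lines unique to "emails".
comm -23 emails.sorted delemails.sorted > cleanlist.txt
```

Here ghost@x.com never reaches cleanlist.txt, which addresses the objection above.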
grep -lir 'string to search in files' * | xargs rm -rf
Options:
-l  list matching file names only
-i  ignore case while searching
-r  search recursively within subdirectories as well
xargs passes the list of files from the grep command as arguments to "rm -rf"
Hi, I have the same problem here. I want to delete every line of code that has H3qqea3ur6p in it, which is a virus that's infecting people browsing a site I'm hosting. Anyhow, grep -Ev doesn't seem to work for me; any suggestions?
I have removed 1400 lines of code so far, lol, but 700 are still not done.
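For deleting every line that contains a fixed string, in place, sed's d command is enough. A sketch assuming GNU sed (BSD/macOS sed needs -i '' instead) and a made-up file name:

```shell
# Hypothetical infected file; the marker string is from the post.
printf 'good line\nH3qqea3ur6p injected\nanother good line\n' > page.html

# GNU sed: delete, in place, every line containing the string.
sed -i '/H3qqea3ur6p/d' page.html
```

To cover a whole document tree, the same command can be fed a list of files, e.g. via grep -l to find only the infected ones first.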