[SOLVED] Comparing two files and looking for the same line based on field - awk
Read all lines from all files, accumulating counts of the lines in an associative array. When all the files have been read, the unique lines will have a count of 1; test for that before printing the result:
Code:
awk '{lineCounts[$0]++} END{ for( line in lineCounts ){ if( lineCounts[line] == 1 ){ print line;} } }' newfile databasefile
Hi, theNbomr's command works for this situation because each line is unique. But in the real case we sometimes find files with different filenames that have the same md5sum, and then it will not work...
I want to find the unique lines based on field $1, not $0.
Any other suggestions? Thank you...
Last edited by sopier; 12-18-2011 at 04:53 AM.
Reason: code need improvement
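For what it's worth, keying the counting array on $1 instead of $0 counts duplicates by the md5sum field alone. A minimal sketch of that variant, with made-up sample data (assuming the hash is the first whitespace-separated field):

```shell
# Made-up sample input: hash-like first field, filename second
printf '%s\n' 'aaa file1' 'bbb file2' 'aaa file3' |
awk '{count[$1]++; line[$1] = $0}
     END { for (k in count) if (count[k] == 1) print line[k] }'
# prints only: bbb file2
```

For real files you would put the two filenames on the awk command line, as in the original one-liner, instead of piping in sample data.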
Also look at the comm command.
Code:
comm -12 <(sort file1) <(sort file2)
will give you a list of common lines.
I must admit that I left out the field on which the match should be made in the join command, as it’s the first by default. AFAICS comm will match complete lines, which might not work for all of the records in the context of the OP.
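A quick sketch of what join does here, on fabricated data (join pairs records whose first field matches by default, and both inputs must be sorted on that field; filenames and contents below are made up for illustration):

```shell
# Hypothetical sample files: hash then filename
printf '%s\n' 'aaa file1' 'ccc file3' > /tmp/lq_left.txt
printf '%s\n' 'aaa other1' 'bbb other2' > /tmp/lq_right.txt
# join matches on the first field by default; both inputs sorted first
join <(sort /tmp/lq_left.txt) <(sort /tmp/lq_right.txt)
# prints: aaa file1 other1
```

Unlike comm, which compares whole lines, this matches on the hash field alone, which is closer to what the OP asked for.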
The input files look like lists of hash;filename, probably created by a script or command. I don't think the input will vary between identical runs.
I will create lists of the md5 output to locate duplicates based on the md5sum column:
Code:
sort | uniq -w32 -D
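A sketch of that pipeline on fabricated md5sum-style output (GNU uniq's -w32 compares only the first 32 characters, the length of an md5 hash, and -D prints every line of each duplicated group):

```shell
# Fabricated md5sum-style lines: 32-hex-char hash, two spaces, filename
printf '%s\n' \
  'd41d8cd98f00b204e9800998ecf8427e  empty1' \
  'd41d8cd98f00b204e9800998ecf8427e  empty2' \
  '900150983cd24fb0d6963f7d28e17f72  abc.txt' |
sort | uniq -w32 -D
# prints the two empty* lines, which share a hash
```

Note that -w and -D are GNU extensions, so this needs GNU coreutils uniq.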