how to find duplicate strings in a vertical column of strings
So I need to remove one of them. If this had happened once or twice it would be easy to do by hand...but it happened 800 times, so I need something clever to do it for me. So far I can get the number (e.g. 1397 in the above example) of each file via:
So now I need to find any duplicated strings and then remove one of the files in each case. The second part I can do, but I am not sure how to build the list of duplicated number strings. In theory all I need to do is read the string on line one, compare it to all the other strings in the column, and if it matches dump the string to another file. I don't know how to do this comparison in bash though...does anyone else?
This tells me the list of numbers after removing any duplicates. What I want, though, is the list of numbers that are duplicated...I looked in the sort manual and don't see that it can give me that list. Can it?
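sort on its own only removes or groups the duplicates; piping its output through uniq -d is what prints the repeated entries. A minimal sketch, assuming the number strings have already been extracted one per line into a file (the name numbers.txt is just for illustration):

# sort groups identical numbers together; uniq -d then prints only the
# values that occur more than once
sort numbers.txt | uniq -d > duplicated_numbers.txt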
1. Boil your raw "ls" down to the list you actually want to check for duplicates:
atlasdata1 T2_McAtNLO_top500]$ ll -t | gawk '{print $NF}' | sed -e 's/._/._ /g' | sed -e 's/.AOD/ .AOD/g' ... > list.txt
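A possible follow-on step (not spelled out in the thread, so treat it as a sketch): once list.txt exists with the run number split out into its own field, a two-pass gawk can print every line whose key appears more than once. Here the key is assumed to land in field 2; adjust the field index to match how the sed splits actually fall.

# first pass counts each key, second pass prints the lines whose key
# occurs more than once (i.e. the files sharing a run number)
gawk 'NR==FNR { count[$2]++; next } count[$2] > 1' list.txt list.txt > dups.txt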
The problem is that markhod isn't trying to remove identical lines, merely ones that are identical up to a point. I'd use perl and save each line to compare it with the next (assuming the shortest line is the one to keep) and skip the ones that match. When you find one that doesn't match, output it and save that as the one to use for matching. I'll leave the implementation as an exercise for the reader. Ask again if you get really stuck and I'll engage my brain for you.
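If perl isn't to hand, the same idea can be sketched with sort and gawk, sticking with the tools already used above: sort the split-up list so lines sharing a key end up adjacent, keep the first line of each run, and write the rest out as candidates for deletion. The field index and file names are assumptions, and this keeps whichever line sorts first rather than the shortest, so it is only a rough outline:

# sort on the assumed key field so duplicates are adjacent; keep the
# first line of each run and log the later ones for removal
sort -k2,2 list.txt | gawk '$2 == prev { print > "to_remove.txt"; next } { prev = $2; print }' > keep.txt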