Old 10-27-2005, 07:26 AM   #1
markhod
Member
 
Registered: Sep 2003
Posts: 103

Rep: Reputation: 15
how to find duplicate strings in vertical column of strings


Hello,

So I have a directory full of files. Sometimes I have the same file with slightly different names:

rome.004101.recov10.T2_McAtNLO_top500._01397.AOD.pool.root
rome.004101.recov10.T2_McAtNLO_top500._01397.AOD.pool.root.6

So I need to remove one of them. If this had happened only once or twice it would be easy to do by hand, but it has happened about 800 times, so I need something clever to do it for me. So far I can get the number (e.g. 01397 in the example above) of each file via:


[hodgkinson@atlasdata1 T2_McAtNLO_top500]$ ll -t | gawk '{print $NF}' | sed -e 's/._/._ /g' | sed -e 's/.AOD/ .AOD/g' | gawk '{print $(NF-1)}'

03998
03999
04000
03990
03991
01397
01397
etc
etc

So now I need to find any duplicated strings and then remove one of the two files in each case. The second part I can do, but I am not sure how to build the list of duplicated number strings. In theory all I need to do is read the string on line one, compare it to all the other strings in the column, and dump it to another file if it matches. I don't know how to do that comparison in bash, though. Does anyone?
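In pseudo-bash, what I have in mind is something like this (untested, and list.txt / dupes.txt are just placeholder names for the number list and the output):

Code:
# Rough sketch only: compare every value in list.txt against the whole
# list and append anything that occurs more than once to dupes.txt.
while read value; do
    count=$(grep -c -x -F "$value" list.txt)
    if [ "$count" -gt 1 ]; then
        echo "$value" >> dupes.txt
    fi
done < list.txt
# (each duplicated value is written once per occurrence;
#  sort -u dupes.txt collapses that)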

Thanks,

Mark
 
Old 10-27-2005, 10:02 AM   #2
naf
Member
 
Registered: Oct 2005
Location: Chicago, USA
Distribution: Slackware & Fedora
Posts: 66

Rep: Reputation: 15
Use the 'sort' command with the -u flag for unique:
Code:
ll -t | gawk '{print $NF}' | sed -e 's/._/._ /g' | sed -e 's/.AOD/ .AOD/g' | gawk '{print $(NF-1)}' | sort -u
 
Old 10-27-2005, 10:07 AM   #3
markhod
Member
 
Registered: Sep 2003
Posts: 103

Original Poster
Rep: Reputation: 15
Quote:
Originally posted by naf
Use the 'sort' command with the -u flag for unique:
Code:
ll -t | gawk '{print $NF}' | sed -e 's/._/._ /g' | sed -e 's/.AOD/ .AOD/g' | gawk '{print $(NF-1)}' | sort -u
This gives me the list of numbers after removing any duplicates. What I want is a list of which numbers are duplicated, though. I looked in the sort manual and don't see that it can give me this list. Can it?

Thanks,

Mark
 
Old 10-27-2005, 10:48 AM   #4
naf
Member
 
Registered: Oct 2005
Location: Chicago, USA
Distribution: Slackware & Fedora
Posts: 66

Rep: Reputation: 15
Nothing built-in comes to mind. Perhaps you can write an awk script, or extend your shell pipeline.
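For example, the usual awk idiom for this (just a sketch, tacked onto your existing pipeline) keeps an array indexed by the line itself and prints a line only once it has been seen before:

Code:
ll -t | gawk '{print $NF}' | sed -e 's/._/._ /g' | sed -e 's/.AOD/ .AOD/g' | gawk '{print $(NF-1)}' | gawk 'seen[$0]++'
That prints each number from its second occurrence onwards, and it does not need the input to be sorted.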

Alternatively, you can create a mini program duplicates.c:

Code:
#include <stdio.h>
#include <string.h>   /* for strcmp() */

#define MAX_LINE 256   /* longest input line we expect */

int main( void )
{
    char buffers[2][MAX_LINE];
    char *active, *previous, *pointer;

    buffers[0][0] = buffers[1][0] = '\0';

    /* Read stdin line by line; the input must already be sorted so
       that duplicate values sit on adjacent lines. */
    active   = buffers[0];
    previous = buffers[1];
    while( fgets( active, MAX_LINE, stdin ) )
    {
        if( ! strcmp( active, previous ) )
            fprintf( stdout, "%s", active );   /* same as previous line: duplicate */
        else
        {
            /* new value: swap buffers so 'previous' holds it next time */
            pointer  = active;
            active   = previous;
            previous = pointer;
        }
    }

    return 0;
}
Then compile it:
Code:
gcc duplicates.c -o duplicates
Then use it as (the sort matters, because duplicates only compares adjacent lines):
Code:
ll -t | gawk '{print $NF}' | sed -e 's/._/._ /g' | sed -e 's/.AOD/ .AOD/g' | gawk '{print $(NF-1)}' | sort | duplicates
 
Old 10-27-2005, 01:56 PM   #5
paulsm4
LQ Guru
 
Registered: Mar 2004
Distribution: SusE 8.2
Posts: 5,863
Blog Entries: 1

Rep: Reputation: Disabled
Try this:

1. Boil your raw "ls" down to the list you actually want to check for duplicates:
atlasdata1 T2_McAtNLO_top500]$ ll -t | gawk '{print $NF}' | sed -e 's/._/._ /g' | sed -e 's/.AOD/ .AOD/g' ... > list.txt

LIST.TXT == "03998
03999
04000
03990
03991
01397
01397
etc
etc"

2. Now just compare the "sorted" list with the "sorted|uniq" list:

a) sort list.txt > 1
b) sort list.txt|uniq > 2
c) diff 1 2
2d1
< 01397
<= VOILA! "01397" IS A DUPLICATE!
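(If your uniq supports the -d flag, you can also get the duplicated values directly in one line:)

Code:
sort list.txt | uniq -d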

'Hope that helps .. PSM
 
Old 10-29-2005, 03:49 PM   #6
eddiebaby1023
Member
 
Registered: May 2005
Posts: 378

Rep: Reputation: 33
Duplicate post, sorry.

Last edited by eddiebaby1023; 10-29-2005 at 03:51 PM.
 
Old 10-29-2005, 03:49 PM   #7
eddiebaby1023
Member
 
Registered: May 2005
Posts: 378

Rep: Reputation: 33
The problem is that markhod isn't trying to remove identical lines, merely filenames that are identical up to a point. I'd use perl: save each line to compare with the next (assuming the shortest name is the one to keep), and skip the ones that match. When you find one that doesn't match, output it and save it as the new line to compare against. I'll leave the implementation as an exercise for the reader. Ask again if you get really stuck and I'll engage my brain for you.
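For reference, a rough sketch of that idea in plain shell rather than perl (untested, and assuming every filename follows the rome.*._NNNNN.AOD.pool.root pattern shown earlier):

Code:
#!/bin/sh
# Sort the names so files sharing the same _NNNNN number end up on
# adjacent lines (the shorter name sorts first), keep the first of each
# group and print an rm command for the rest.  Drop the "echo" once you
# trust the output.
prev=""
ls | sort | while read f; do
    num=$(echo "$f" | sed 's/.*\._\([0-9]*\)\.AOD.*/\1/')
    if [ "$num" = "$prev" ]; then
        echo rm -- "$f"
    else
        prev="$num"
    fi
done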
 
Old 11-02-2005, 04:04 AM   #8
markhod
Member
 
Registered: Sep 2003
Posts: 103

Original Poster
Rep: Reputation: 15
Quote:
Originally posted by paulsm4
2. Now just compare the "sorted" list with the "sorted|uniq" list:

a) sort list.txt > 1
b) sort list.txt|uniq > 2
c) diff 1 2

'Hope that helps .. PSM
Thanks - that does the trick!
 
  

