Perform a uniq and count on 2nd column
Hello,
I need to get the unique values and a count of the duplicates at the same time, based on the 2nd column of a file. The delimiter is a space, and the file looks like:

Code:
column1 column2
abc xyz.com
123 xyz.com
456 xyz.com
peter abc.com
John 123.com
Kyle amazon.com
George amazon.com

The output should look something like:

Code:
3 xyz.com
1 abc.com
1 123.com
2 amazon.com

I need the uniqueness on column 2, and I also want the count of occurrences on column 2. I tried "sort -u -k2 <filename>", but it does not give me the occurrence count. Any thoughts? Help with this is much appreciated.

Thanks, |
How about this order of operations with suitable flags...
Code:
cut {field spec} | grep {drop heading} | sort | uniq {with count} | sort {count ordering}

Code:
1 123.com
|
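A concrete version of that pipeline might look like the sketch below (the file name data.txt is hypothetical, and the flags assume standard POSIX cut, uniq, and sort):

```shell
# Recreate the sample file from the question (data.txt is a made-up name)
printf '%s\n' 'column1 column2' 'abc xyz.com' '123 xyz.com' '456 xyz.com' \
  'peter abc.com' 'John 123.com' 'Kyle amazon.com' 'George amazon.com' > data.txt

# field spec: 2nd space-delimited field; drop the heading row;
# uniq -c prepends the duplicate count; final sort orders by count, descending
cut -d' ' -f2 data.txt | grep -v '^column2$' | sort | uniq -c | sort -rn
```

Note that uniq only collapses adjacent duplicates, which is why the first sort is required before it.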
Associative arrays are designed for this - can be done (simply) with one pass of the data in for example awk.
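A minimal sketch of that one-pass awk approach (again assuming the hypothetical file name data.txt from above):

```shell
# Recreate the sample file from the question (data.txt is a made-up name)
printf '%s\n' 'column1 column2' 'abc xyz.com' '123 xyz.com' '456 xyz.com' \
  'peter abc.com' 'John 123.com' 'Kyle amazon.com' 'George amazon.com' > data.txt

# count[$2]++ builds an associative array keyed on column 2;
# NR > 1 skips the heading row; END prints each key with its count
awk 'NR > 1 { count[$2]++ } END { for (d in count) print count[d], d }' data.txt
```

The iteration order of "for (d in count)" is unspecified, so pipe the result through sort if the ordering matters.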
|
You got columns ... you use awk :)
|