[SOLVED] uniq command not able to remove duplicate entries
Ok! Thanks for the quick response.
So you mean ">" simply redirects the output.
But I still don't understand why the duplicate entries of baby.us and zym.com weren't removed.
The man page for uniq tells you that it won't work unless lines are adjacent; it also hints that you can use sort -u.
so try
Code:
sort -u
assuming that still doesn't work, we go back to my cryptic question.
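To make the adjacency point concrete, here is a quick sketch using the two hosts from your list (the printf input just stands in for your file):
Code:
# uniq only collapses *adjacent* duplicate lines,
# so in an unsorted file the non-adjacent repeat survives
$ printf 'zym.com\nbaby.us\nzym.com\n' | uniq
zym.com
baby.us
zym.com

# sorting first makes the duplicates adjacent; sort -u does both steps in one
$ printf 'zym.com\nbaby.us\nzym.com\n' | sort | uniq
baby.us
zym.com
$ printf 'zym.com\nbaby.us\nzym.com\n' | sort -u
baby.us
zym.com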
Since you mention blacklists, I'm assuming you want to weed out duplicates from several lists, to make one big list.
So it is possible that some of those lists have 'Dos' EOL ( end of line ) while others are 'Unix' EOL
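If you want to check for that, cat -A (GNU coreutils) marks the line endings, something like:
Code:
# '$' marks a Unix line end; '^M$' marks a DOS (CRLF) one
$ printf 'baby.us\r\nzym.com\n' | cat -A
baby.us^M$
zym.com$
To uniq (and to sort -u), "baby.us" and "baby.us^M" are different lines, so the duplicate survives even after sorting.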
Can you please suggest how to remove these duplicates (IP addresses or integer values)?
Thanks in advance.
You must have some extra space/tab or possibly some other hidden characters. I tried your list and it works fine, but when I added an extra space to one of the lines, that line appeared twice. So remove the extra space and check.
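One way to see (and then strip) such hidden trailing whitespace, sketched with a made-up two-line list:
Code:
# the trailing space makes the second line a different string
$ printf 'zym.com\nzym.com \n' | sort -u
zym.com
zym.com 

# strip trailing whitespace before deduplicating
$ printf 'zym.com\nzym.com \n' | sed 's/[[:space:]]*$//' | sort -u
zym.com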
As firerat says, there may be different line endings if the lists come from different sources.
Also, now that I think about it, it is not entirely clear whether your file is sorted. Are you running sort on some sources and then redirecting the output to the file you give uniq? Or are you running sort on the file and expecting it to stay sorted?
Can you verify that the file used by uniq is actually sorted... sanity check.
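A quick way to do that sanity check is sort -c, which exits non-zero and reports the first out-of-order line (blacklist.txt is just a placeholder name here):
Code:
$ sort -c blacklist.txt && echo "sorted" || echo "NOT sorted"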
I offer the following script to strip M$ (DOS) line endings from a file.
Copy and paste it into a file (I name it undos), make it executable, then run ./undos filename.txt --really.
NOTE: USE AT YOUR OWN RISK!! It refuses to touch the file unless you pass the --really option to confirm!
But it should work well enough for this...
Code:
#!/bin/bash
# Quick utility to strip \r (DOS carriage returns) from text files
if [[ $# -eq 0 ]]
then
    echo "Usage: $0 filename --really"
    exit 1
fi
# look for the --really confirmation flag anywhere on the command line
for what in "$@"
do
    if [[ $what == '--really' ]]
    then
        ok=1
    fi
done
if [[ $ok == 1 ]]
then
    # edit the file in place, deleting every carriage return
    sed -i 's/\r//g' "$1"
else
    echo "You are about to strip characters from a file, --really to continue!"
fi
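For the blacklist case itself, assuming the only problems are DOS line endings and plain duplicates, the whole cleanup can also be sketched as one pipeline (the file names are just examples):
Code:
# strip carriage returns, then sort and deduplicate in one pass
$ sed 's/\r$//' blacklist.txt | sort -u > blacklist.clean.txt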