[SOLVED] uniq command not able to remove duplicate entries
The man page tells you that it won't work unless the duplicate lines are adjacent; it also hints that you can use sort -u.
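For example, with a hypothetical blacklist.txt, piping through sort first makes the duplicates adjacent so uniq can catch them:

# uniq alone misses duplicates that are not on adjacent lines
uniq blacklist.txt
# sorting first makes the duplicates adjacent
sort blacklist.txt | uniq
# or let sort deduplicate by itself
sort -u blacklist.txt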
Assuming that still doesn't work, we go back to my cryptic question.
Since you mention blacklists, I'm assuming you want to weed out duplicates from several lists to make one big list.
So it is possible that some of those lists have DOS EOL (end-of-line) characters while others have Unix EOL.
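You can spot stray carriage returns with cat -A, which marks every line ending (the filename here is just an example):

# a Unix line ends with $, a DOS line ends with ^M$
cat -A blacklist.txt
# file(1) also reports DOS files as having "CRLF line terminators"
file blacklist.txt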
Can you please suggest how to remove these duplicates (IP addresses or integer values)?
Thanks in advance.
You must have an extra space/tab or some other hidden characters. I tried your list and it works fine, but when I added an extra space to one of the lines, it appeared twice. So remove that extra space and check.
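If you want to find or strip that invisible trailing whitespace (again assuming a file called blacklist.txt), something like this should do:

# list lines that end in a space or tab
grep -n '[[:space:]]$' blacklist.txt
# remove trailing whitespace in place (GNU sed)
sed -i 's/[[:space:]]*$//' blacklist.txt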
OK! Thanks for the quick response.
Well, you mean to say that ">" is simply redirecting the input?
But I am still not able to understand why the duplicate entries of baby.us and zym.com were removed.
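For reference, ">" redirects a command's standard output into a file; it doesn't deduplicate anything by itself. A minimal sketch (filenames hypothetical):

# write the sorted output to a new file; uniq then sees adjacent duplicates
sort blacklist.txt > sorted.txt
uniq sorted.txt > deduped.txt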
As firerat says, there may be different line endings if they come from different sources.
Also, now that I think about it, it is not entirely clear whether your file is sorted. Are you running sort on some sources and then redirecting the output to the file you give uniq? Or are you running sort on the file itself and expecting the file to end up sorted? (sort alone prints to standard output and does not modify the file.)
Can you verify that the file used by uniq is actually sorted? Just a sanity check.
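sort -c does exactly that sanity check: it stays quiet on sorted input and reports the first out-of-order line otherwise (filename hypothetical):

sort -c blacklist.txt && echo "sorted" || echo "not sorted"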
I offer the following script to strip M$ line endings from a file.
Copy and paste it into a file (I named mine undos), make it executable, then run ./undos filename.txt.
NOTE: USE AT YOUR OWN RISK!! It edits the file in place, and it will prompt you for the --really option to confirm.
But it should work well enough for this...
#!/bin/bash
# Quick util to strip \r (DOS line endings) from text files, in place
if [[ $# == 0 ]]; then
    echo "Usage: $0 filename --really"
    exit 1
fi
ok=0
for what in "$@"; do
    [[ $what == '--really' ]] && ok=1
done
if [[ $ok == 1 ]]; then
    sed -i 's/\r//g' "$1"
else
    echo "You are about to strip characters from a file, --really to continue!"
fi