Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
I have a large log file which contains a list of IP addresses (only a list of IP addresses, nothing else), like this:
more <logfile.txt>
10.199.1.1
10.199.1.2
10.199.1.3
10.199.1.1
10.199.1.5
10.199.1.3
10.199.1.4
10.199.1.4
And so on...
But I want to extract only the unique values, i.e. IP addresses, from this list. I have tried the sort -u and uniq commands as filters, but every time I am out of luck.
I am surprised that even after using sort -u or uniq or uniq -u, the values are repeating!! So is there any way to sort it out? Anything with awk? Thanks a lot!
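For reference, a minimal sketch of the usual deduplication idioms (assuming one IP address per line; the file name and the awk array name "seen" are just illustrative):

```shell
# Create a sample log file (one IP address per line, with duplicates).
cat > logfile.txt <<'EOF'
10.199.1.1
10.199.1.2
10.199.1.3
10.199.1.1
10.199.1.5
10.199.1.3
10.199.1.4
10.199.1.4
EOF

# sort -u: sorts the lines and keeps one copy of each.
sort -u logfile.txt

# uniq only removes *adjacent* duplicates, so the input must be sorted first.
sort logfile.txt | uniq

# awk: keeps the first occurrence of each line, preserving original order.
awk '!seen[$0]++' logfile.txt
```

Note that `uniq -u` does something different: it prints only the lines that are never repeated at all, which would make duplicated addresses disappear entirely rather than be collapsed to one copy.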
"sort -u" works for me - what do you get? And what system are you using? "uniq" is a bit unique ...
In order to get unique values I have to use sort -u twice, i.e. cat filename | sort -u | sort -u
I think it's because IP addresses are four-part dotted numbers, and thus the sort command is getting a little confused about which digit it should sort on. That is why it's leaving duplicate values.
But I want something simpler, so that I need not run sort twice.
Quote: I think it's because IP addresses are four-part dotted numbers, and thus the sort command is getting a little confused about which digit it should sort on. That is why it's leaving duplicate values.
The sort command does string sorting by default; this will look wrong if you want the IPs in numeric order, but it won't matter for removing duplicates.
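To illustrate the difference (a sketch; the `-t. -k` form is portable, while GNU sort also offers `sort -V` for version/IP-style ordering):

```shell
# String sort: "10.199.1.10" sorts before "10.199.1.2", because the
# comparison is character by character, not octet by octet.
printf '10.199.1.2\n10.199.1.10\n10.199.1.1\n' | sort -u

# Numeric sort on each dot-separated field, still removing duplicates.
printf '10.199.1.2\n10.199.1.10\n10.199.1.1\n' | sort -u -t. -k1,1n -k2,2n -k3,3n -k4,4n
```

Either way, identical lines compare equal, so -u drops the duplicates; only the ordering of the survivors changes.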
It's giving an error: "_[: Event not found". Did you check it on your side?
Could you test it again and rectify?
Instead of us checking, why don't you provide the exact error, what system you are running it on and what version and type of awk (mawk, nawk, gawk, awk ...) you are using?
Quote: Instead of us checking, why don't you provide the exact error, what system you are running it on and what version and type of awk (mawk, nawk, gawk, awk ...) you are using?
I have tried it on both Linux as well as Solaris.
RHEL 5 and awk version is 3.1.5
Solaris 10 and awk version I can't find.
==========
more /home/jack/logfile.txt | awk '!_[$1]++'
_[: Event not found.
================
It's perhaps treating "_[" after "!" as a previously run command, which it can't find, thus throwing the error... Am I right? I can use '\!_[$1]++' instead, but that's also not working on Solaris.
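The "Event not found" message does point at csh/tcsh history expansion: "!" is special there even inside single quotes. A couple of sketches that sidestep it entirely (the array name "seen" and the file name "dedupe.awk" are just illustrative):

```shell
# Option 1: rewrite the awk test so no "!" appears at all.
# "seen[$0]++ == 0" is true only the first time a line is seen.
printf 'a\na\nb\n' | awk 'seen[$0]++ == 0'

# Option 2: put the program in a file and load it with awk -f,
# so the interactive shell never has to parse the program text.
echo 'seen[$0]++ == 0' > dedupe.awk
printf 'a\na\nb\n' | awk -f dedupe.awk

# Option 3 (no code needed): run the pipeline from sh or bash,
# where "!" inside single quotes is literal, so awk '!seen[$0]++'
# works exactly as written.
```

Option 1 is usually the least fuss, since the same one-liner then works unchanged under csh, sh, and bash.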