Culling data from a CSV file for output in Excel
I am trying to cull out all rows of a CSV file that match a specific group of IPs, e.g. 100.25.40.*, and output them to a file displayable in Excel.
I assume this requires sed and awk commands, or Perl. Any ideas on how to do this? I need to get all the data from the file, keeping only the rows matching the IP $string. Hoping to use Korn shell for this, but open to other ideas.
Where I am right now:
grep 10.25.40. cvs.file > cvs.out
This should give me a list of all the lines that contain 10.25.40.xxx.
uniq -d cvs.out > cvs.out.uniq
This should result in a list of all lines that are repeated. awk is the usual tool, I think, to pass this info on, but I need to bone up on it.
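A minimal sketch of that pipeline, with two fixes: the dots are escaped so grep treats them literally (otherwise an address like 10.254.40.9 would also match), and the output is sorted before uniq -d, since uniq only detects adjacent duplicates. The sample data here is hypothetical, standing in for the real cvs.file:

```shell
# Hypothetical sample data standing in for the real cvs.file
cat > cvs.file <<'EOF'
10.25.40.12,serverA,up
10.25.40.12,serverA,up
10.254.40.9,serverB,up
10.25.40.7,serverC,down
EOF

# Escape the dots so grep matches literal periods, not "any character"
grep '10\.25\.40\.' cvs.file > cvs.out

# uniq -d only reports *adjacent* duplicates, so sort first
sort cvs.out | uniq -d > cvs.out.uniq

cat cvs.out.uniq
```

With the sample above this prints only the duplicated row, 10.25.40.12,serverA,up.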
I guess you are trying to do:
awk '/^10\.25\.40\./' cvs.file | sort | uniq -d > cvs.out.uniq
Escape the dots so they match literal periods, anchor the pattern at the start of the line, and sort before uniq -d, since uniq only spots adjacent duplicates. The output is still comma-separated, so Excel will open it directly.
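As a quick self-contained check of that one-liner (the file name and layout are assumptions, not from the original data):

```shell
# Hypothetical sample rows; 110.25.40.3 must NOT match
cat > cvs.file <<'EOF'
10.25.40.12,hostA
192.168.1.5,hostB
10.25.40.12,hostA
110.25.40.3,hostC
EOF

# Anchored, dot-escaped pattern; sort so uniq -d sees duplicates side by side
awk '/^10\.25\.40\./' cvs.file | sort | uniq -d > cvs.out.uniq

cat cvs.out.uniq
```

The anchored ^10\.25\.40\. pattern keeps out addresses like 110.25.40.3 that merely contain the string, so only the repeated 10.25.40.12,hostA line survives.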