Linux - General (LinuxQuestions.org): this forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
This looks like it might be a homework problem. Could you explain how you will use this, or show what you have tried so far? Then we will be able to provide hints. If this is something you can use, you could use tools such as cut, sort and tail to achieve the same thing.
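One way along those lines, sketched with GNU tac and sort rather than tail (the file name and sample rows here are made up for illustration, since the real export isn't shown):

```shell
# Sample rows standing in for the exported CSV (column 1 = name):
printf '%s\n' 'Smith,old address' 'Jones,addr1' 'Smith,new address' > deliveries.csv

# tac reverses the file so each name's LAST line comes first.
# GNU sort -u then keeps only the first line it sees for each
# column-1 key (with -u, equal keys stay in input order), which
# is the original file's last line for that name.
tac deliveries.csv | sort -u -t, -k1,1 > labels.csv
cat labels.csv
```

Note this relies on GNU sort's behaviour that -u disables the last-resort whole-line comparison, so lines with equal keys are kept in the order they arrive.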
The file is a CSV file exported from a website. It contains delivery information, and the last line for each name is the one needed to print delivery labels.
The final file is imported into an application which prints the labels.
I have been able to print the required columns from the original CSV file into a new file and delete repeated lines using the uniq command. That won't work here, though, because the lines are all different.
I want to use the awk match function to match lines based on a pattern. Once the lines have been matched, I should be able to print the last one into a new file.
The patterns will always be different, so I will have to match lines based on the contents of column 1 being the same.
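If column 1 being equal is the grouping rule, you don't actually need match(); an awk array keyed on $1 can do it, since overwriting the same key on every row leaves only each name's last line. A minimal sketch, again with a made-up file name and sample data:

```shell
# Sample rows standing in for the exported CSV (column 1 = name):
printf '%s\n' 'Smith,old address' 'Jones,addr1' 'Smith,new address' > deliveries.csv

# Overwrite last[$1] on every row; after the whole file has been
# read, each name's entry holds its final line.  Note that
# "for (k in last)" visits keys in no particular order.
awk -F, '{ last[$1] = $0 } END { for (k in last) print last[k] }' deliveries.csv > labels.csv
cat labels.csv
```

If the label application cares about row order, you would need to track the order names first appear and print from that list in the END block instead.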