Linux - Newbie: This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
Some people just use it for one-liners, e.g. to extract one or more fields separated by arbitrary amounts of whitespace from a record.
Others write entire programs in it.
This may help http://www.grymoire.com/Unix/Awk.html
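A minimal sketch of the kind of one-liner described above (the sample data and field numbers are made up for illustration): awk's default field separator treats any run of spaces or tabs as a single delimiter, so ragged alignment costs nothing.

```shell
# Print the 1st and 3rd whitespace-separated fields of each record;
# awk collapses runs of spaces/tabs into one delimiter by default.
printf 'alice   42 engineering\nbob 17  sales\n' | awk '{ print $1, $3 }'
```

The same task in sed or cut would need care with the variable-width spacing; in awk it falls out of the default behaviour.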
I'll use awk if the problem needs less than about 50 lines to implement. If it looks like it will take more, then I will use Perl, as it is faster than awk and MUCH more flexible, with better diagnostics.
I use it for numerous one-time data processing projects. For example, when I get a CSV file I must process in one way or another to include in a report.
For projects with a longer lifetime I often use it to transform CSV data or database output into something else. That something else is often SQL statements to get it into a different database. Examples include converting radio program schedules from one format to another, database output to Google KML format, and database output to LaTeX.
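A rough sketch of that CSV-to-SQL idea (the table name, column names, and sample rows are invented, and it assumes a simple CSV with no quoted or embedded commas):

```shell
# Turn simple CSV rows into INSERT statements; q carries a single quote
# so the awk program itself can stay inside shell single quotes.
printf 'Ann,1970\nBen,1985\n' | awk -F, -v q="'" \
  '{ printf "INSERT INTO people (name, born) VALUES (%s%s%s, %s);\n", q, $1, q, $2 }'
```

For real-world CSV with quoting you would want a proper parser, but for the clean exports described above this pattern goes a long way.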
I don't know if Perl would have been better for those purposes. I never learned it because of its incomprehensible syntax.
I learned perl first, then some years later awk. Nowadays I use awk every day, (new) perl maybe once a month if something big comes along, and bash a distant third because I keep having to look things up due to irregular usage.
I decided to learn awk before sed because it can do everything sed can do. Some things can be done more elegantly by sed but awk can do them. And, having already learned C, awk was easier.
Since the I have written some 1000+ line awk scripts -- for example for transforming CSV and LDIF files.
Now I'm comfortable with sed too and use whichever is best suited to the task.
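To illustrate the overlap the posts above describe, here is an arbitrary substitution written both ways; awk's sub() replaces the first match on a line, just like an unflagged sed s command:

```shell
# sed version: replace the first "foo" on each line
echo 'foo and foo' | sed 's/foo/bar/'
# awk equivalent: sub() edits the first match in $0, then print the line
echo 'foo and foo' | awk '{ sub(/foo/, "bar"); print }'
```

Both print "bar and foo". For pure substitutions sed is terser; awk wins once fields or arithmetic enter the picture.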
Perl may be great, but I cannot generate any enthusiasm for learning it. Reportedly it's easy to learn if you know C, regular expressions and shell script, but my first attempts were much more frustrating than my first awk attempts. And it doesn't look nice.
I learnt awk because I raised a question about data that could be defined by a delimiter and was told this is the tool.
As with others, the right tool for the job is best, but I would predominantly use awk for quick and short program manipulation and then switch to Ruby (catkin, you might give this a try) for larger tasks that require more finesse.