I'm just looking for a better way to count the lines in a file that grows to several hundred MB over the course of a day. It needs to take seconds or less, can't require opening the file in an editor, and has to be command line.
The sed solution offered by druuna just has to be the choice. sed always wins these simple problems - easy to do, easy to remember.
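druuna's exact command isn't quoted in this thread, but the usual sed idiom for printing a file's line count is the "=" command applied to the last line (FILE is a placeholder path):

sed -n '$=' FILE    # suppress normal output; print the line number of the last line, i.e. the line count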
I suspect all the tests above will be invalid - unless specific steps were taken between every run to purge the disk cache.
Edit: still can't understand why the obvious (wc -l) isn't acceptable.
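Purging the disk cache between runs, as mentioned above, is usually done like this on Linux (requires root; this is an assumption about the test setup, not something shown in the thread):

sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop the page cache, dentries and inodes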
If this is a log file that grows over time, you could use dd with a block size of 1 and an offset equal to the size of the file the last time you ran your check. This will output the new contents of the file, which you could pipe through wc -l.
Record the size of the file each time you run the check.
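As a rough sketch of that idea (the paths and state file are hypothetical, and GNU stat/dd syntax is assumed):

LOG=/var/log/huge.log                        # hypothetical log file
STATE=/tmp/huge.log.offset                   # where the previous size is remembered

prev=$(cat "$STATE" 2>/dev/null || echo 0)   # size at the last check, 0 on the first run
cur=$(stat -c %s "$LOG")                     # current size in bytes (GNU stat)

# Read only the bytes appended since the last check and count their newlines.
# bs=1 matches the suggestion above but is slow; with GNU dd a larger block
# size plus iflag=skip_bytes does the same job much faster.
new=$(dd if="$LOG" bs=1 skip="$prev" 2>/dev/null | wc -l)

echo "$new new lines since last check"
echo "$cur" > "$STATE"                       # record the size for next time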
For a file this large, maybe the size of the file, rather than the number of lines, is the information you should be using.
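Getting the size is effectively instant, since it comes from the inode rather than a scan of the whole file (GNU stat syntax assumed; FILE is a placeholder):

stat -c %s FILE    # size in bytes
du -h FILE         # disk usage in human-readable form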
If you just need an estimate of the count and not the exact count, there is always something called "Mathematics".
1. Do a head -1 FILE >NEW_FILE
2. Get the size of NEW_FILE
3. Get size of FILE
4. Divide result of 3 by result of 2 to get an estimated row count.
To get a better estimate, use head -n with a larger line count and adjust the division accordingly.
I have an almost 46 GB file that is still growing in size; doing a wc -l or any other method would take quite a while. I do the above to estimate when my process will finish (an Oracle spool of 300 million records). Just a thought....
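Put together as a small script (FILE and the sample size are placeholders; GNU stat is assumed), the estimate looks like this:

FILE=big.log
N=1000                                  # sample more lines for a better average

sample=$(head -n "$N" "$FILE" | wc -c)  # bytes in the first N lines
total=$(stat -c %s "$FILE")             # total file size in bytes
avg=$(( sample / N ))                   # average bytes per line in the sample

echo "Estimated line count: $(( total / avg ))"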
wc -l counts the number of newlines and as such has to scan to the end of each line. To speed it up, you can first use cut to reduce each line down to a couple of characters and pipe the result into wc -l. This can cut the time to less than half of what it takes to run wc on its own.
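The command that suggestion describes would be something like the following (FILE is a placeholder; whether it actually beats a plain wc -l will depend on the system):

cut -c 1-2 FILE | wc -l    # trim each line to its first two characters, then count the lines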