Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
Each line of text in your file contains 3 bytes. When you read blocks of 16 bytes and skip the first two blocks, that works out to 10 complete lines (30 bytes) and the two digits of the next line. The next byte is the newline character at the end of that line, and that will be the first byte in the output. You seemed to understand what was happening when you skipped 16 bytes (1 block), so why would the result of skipping 32 bytes be such a mystery?
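The arithmetic above can be checked with a throwaway file (a hypothetical reconstruction of the poster's data: 20 two-digit lines, so each line is exactly 3 bytes):

```shell
# Hypothetical stand-in for the poster's file: 20 lines of two
# digits, so each line is exactly 3 bytes (2 digits + newline).
seq -w 0 19 > data.txt

# Skipping 2 blocks of 16 bytes skips 32 bytes: the first ten
# lines (30 bytes) plus the digits "10" (bytes 30-31).  Byte 32
# is the newline ending that line, so the output starts with it.
dd if=data.txt bs=16 skip=2 2>/dev/null | od -c | head -n 2
```

The `od -c` dump makes the leading newline visible as the first character of dd's output.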
This won't be exact. [Now I really feel that I should have posted the question with a normal test file, not lines of data in a file.]
My requirement is to read data from an offset, with a count. So I want something that skips the initial addresses and reads the data starting at the given offset.
head/tail will read all 20 lines and output only the 10 that are required, but I don't want to read the extra data; just the count after an offset.
I was not sure whether the newline was being handled correctly.
Thanks, your answer confirms it, and I may have a solution to my problem now. But I need to confirm whether dd merely suppresses the output (reads the whole file but outputs only the requested range, as asked) or actually skips the initial range without reading it. If it just skips it, that would be it.
Is your offset a byte count, or is it a line count? You won't get anywhere until you get that straight. If you are dealing with a known byte count, then dd can do the job just fine. If you need to count lines which can be variable length, then there is no choice except to read the file from the beginning and count the newline characters. Text processing tools like head, tail, sed, etc. can do that for you, but dd cannot.
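As a sketch of that difference (file names and offsets here are made up for illustration):

```shell
# Byte offset: dd can jump straight to the position.
# Read 200 bytes starting at byte offset 1000 of a hypothetical logfile.
dd if=logfile bs=1 skip=1000 count=200 2>/dev/null

# GNU dd can keep a large block size while counting skip/count in bytes.
dd if=logfile bs=64k skip=1000 count=200 iflag=skip_bytes,count_bytes 2>/dev/null

# Line offset: variable-length lines force a scan from the start.
# Print 10 lines starting at line 101.
sed -n '101,110p' logfile
tail -n +101 logfile | head -n 10
```

The `iflag=skip_bytes,count_bytes` form is a GNU coreutils extension; with plain POSIX dd you are limited to `bs=1` (slow) or block-sized arithmetic.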
Whenever possible, dd will do a skip by using an lseek() call to skip over the initial data without reading it. If dd is reading from a non-seekable stream (such as a pipe), then it has no choice but to read and discard that data. Unless you are skipping over quite a bit of data, there is really little difference either way. The common file system block size is 4 kilobytes, so if the seek is shorter than that, the OS needs to read the whole block anyway.
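You can see both cases with a small hypothetical test file; the output is identical, only the system calls differ:

```shell
# Hypothetical test file: 4 blocks of 4 KiB.
dd if=/dev/zero of=bigfile bs=4k count=4 2>/dev/null

# Seekable input: dd lseek()s past the first block without reading it.
dd if=bigfile bs=4k skip=1 count=1 of=/dev/null

# Non-seekable input (a pipe): dd must read and discard that block.
cat bigfile | dd bs=4k skip=1 count=1 of=/dev/null

# To watch the difference, trace the system calls (lseek vs. extra reads):
#   strace -e trace=lseek,read dd if=bigfile bs=4k skip=1 count=1 of=/dev/null
```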
Yeah, I had an idea that a full block is read when the request is smaller than that width. Thank you again for the valuable info.
I have a 64k filesystem block size, and dd looks like it is fetching the required data when the requests are block aligned.
I am also trying it over NFS, but in that case the rsize I have is only 32k, which splits a block read into a couple of requests. But I think I am getting the desired output now. Thanks.
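For reference, a block-aligned request against a 64k filesystem block size would look something like this (file name and block numbers are hypothetical):

```shell
# Read 2 blocks starting at the 3rd 64 KiB boundary; because skip,
# count, and bs all line up with the filesystem block size, every
# request dd issues is 64 KiB-aligned.
dd if=somefile bs=64k skip=3 count=2 2>/dev/null > extract.bin
```

Over NFS with rsize=32k, each 64 KiB read would simply be split into two 32 KiB wire requests, which matches what the poster observed.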