Print the line at the nth position within a large file... will the pipe optimize this?
OK, so I am new to Linux. What I know I have learned by reading and asking, so if anything I say or do here is incorrect, please feel free to correct me.
I have to read a single line from a file. The file may be of any size, but I know which line within the file to read. I have two commands that do the same thing:
Code:
cat filename | head -6520 | tail -1
sed -n '6520{p;q;}' filename
I like the command using cat and the pipes, but is this the best solution? It is my understanding that with this command cat will stream to head, which will in turn stream to tail... will this not "eat" a lot of memory? I also like this command because I can change the head and tail values to receive more output.

So will the pipe somehow optimize this, or will the different commands fully execute in memory and only return what is needed? I hope I made this question clear enough, since English is not my first language :) Thank you
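As a quick sketch of the two approaches from the question (line 6520 is the example target; seq just builds a throwaway test file standing in for the real one):

```shell
# Build a sample file where line N contains the number N.
seq 10000 > /tmp/sample.txt

# Pipeline version: head passes the first 6520 lines, tail keeps the last one.
head -n 6520 /tmp/sample.txt | tail -n 1

# sed version: print line 6520, then quit so the rest of the file is never read.
sed -n '6520{p;q;}' /tmp/sample.txt

# sed can also print a range directly, e.g. lines 6510-6520,
# quitting once the last wanted line has been printed:
sed -n '6510,6520p;6520q' /tmp/sample.txt
```

Both single-line variants print the same line; the range form covers the "change head and tail values to receive more output" use case in one command.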
No, it won't optimize it. Unless it really is a big file, it shouldn't use a lot of memory. I'd personally use the sed route though - it's more elegant and "correct".
Thank you for the reply, devnull10. I'll look at sed then :D
Consider using time to find out the CPU/real time it takes to run your program.
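A minimal sketch of that, timing both variants against a generated file (/tmp/bigfile here is just test data built with seq, standing in for the real file):

```shell
# Build a million-line test file.
seq 1000000 > /tmp/bigfile

# time reports real/user/sys for each variant.
time head -n 6520 /tmp/bigfile | tail -n 1
time sed -n '6520{p;q;}' /tmp/bigfile
```

Note that time is a shell keyword in bash, so it times the whole pipeline, not just the first command.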
Quote:
process reading the pipe whether or not things "happen" to the data stream.
Code:
$ time cat biographies.list.new > /dev/null
Getting the large file into memory took 5 seconds. Cheers, Tink
You're just processing one line with the head command there though, so it can stop once it's got that line. The OP was reading 6520 lines into it before passing that output into the tail command. I'd imagine this takes slightly more memory, albeit probably hardly noticeable at current machine speeds.

When I said it wouldn't optimize, I was referring to the fact that I didn't believe it would be smart enough to see that that sequence of commands was equal to just reading a specific line from a file. [edit] Just done a little test - not much in it, to be honest! :) Code:
~ $ for i in $(seq 1 100); do cat /usr/share/dict/words >> bigfile; done
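One way to see the early-exit behaviour directly (a sketch; seq is just a convenient endless-ish producer): head quits as soon as it has printed its lines, the upstream writer then gets SIGPIPE, and the rest of the input is never read or buffered.

```shell
# seq would emit 100 million lines, but head stops the pipeline after 6520,
# so this finishes almost instantly instead of generating all the output.
seq 100000000 | head -n 6520 | tail -n 1
```

This is why the head/tail pipeline's memory use stays bounded: each stage holds only a small buffer, regardless of file size.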
Thank you for the help, guys. I even learned about "time" :). I'll run some random tests on this end with the files I have and see what works best.

Thanks, F1DG3T