LinuxQuestions.org > Forums > Linux Forums > Linux - Newbie
Old 03-18-2010, 08:07 AM   #1
f1dg3t
LQ Newbie
 
Registered: Mar 2010
Posts: 3

Rep: Reputation: 0
Print a line at the nth position within a large file... will PIPE optimize this?


OK, so I am new to Linux. What I know I have learned by reading and asking, so if anything I say or do here is incorrect, please feel free to correct me.

I have to read a single line from a file. The file may be of any size, but I know which line within the file I need.

I have two commands that do the same thing:

cat filename | head -6520 | tail -1

sed -n '6520{p;q;}' filename

I like the version with cat and the pipes, but is it the best solution? My understanding is that cat streams to head, which in turn streams to tail... won't that eat a lot of memory? I like this command because I can change the head and tail values to get more lines of output.
So will the pipes somehow optimize this, or will each command fully execute in memory and only return what is needed?

I hope I have made the question clear enough, since English is not my first language.

Thank you
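The two commands can be sanity-checked side by side on a small throwaway file (the path /tmp/lines.txt is just an example name):

```shell
# Generate a 10,000-line file where line N contains the number N,
# then pull out line 6520 with both approaches.
seq 1 10000 > /tmp/lines.txt

a=$(cat /tmp/lines.txt | head -6520 | tail -1)   # pipe version
b=$(sed -n '6520{p;q;}' /tmp/lines.txt)          # sed version

echo "$a"   # prints 6520
echo "$b"   # prints 6520
```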
 
Old 03-18-2010, 08:12 AM   #2
devnull10
Member
 
Registered: Jan 2010
Location: Lancashire
Distribution: Slackware Stable
Posts: 553

Rep: Reputation: 116
No, it won't optimize it. Unless it really is a big file, it shouldn't use a lot of memory. I'd personally use the sed route though - it's more elegant and "correct".
 
Old 03-18-2010, 09:03 AM   #3
f1dg3t
LQ Newbie
 
Registered: Mar 2010
Posts: 3

Original Poster
Rep: Reputation: 0
Thank you for the reply, devnull10. I'll look at sed then.
 
Old 03-18-2010, 01:30 PM   #4
schneidz
LQ Guru
 
Registered: May 2005
Location: boston, usa
Distribution: fc-15/ fc-20-live-usb/ aix
Posts: 5,167

Rep: Reputation: 889
Consider using time to find out the CPU / real time it takes to run your program.
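For instance, a minimal timing comparison might look like this (the file name /tmp/timedemo.txt is just a placeholder; bash's time keyword reports real/user/sys seconds on stderr):

```shell
# Build a million-line test file, then time both approaches.
seq 1 1000000 > /tmp/timedemo.txt

time sed -n '6520{p;q;}' /tmp/timedemo.txt        # quits at line 6520
time head -n 6520 /tmp/timedemo.txt | tail -n 1   # pipe variant
```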
 
Old 03-18-2010, 01:42 PM   #5
Tinkster
Moderator
 
Registered: Apr 2002
Location: in a fallen world
Distribution: slackware by choice, others too :} ... android.
Posts: 23,067
Blog Entries: 11

Rep: Reputation: 910
Quote:
Originally Posted by devnull10 View Post
No, it won't optimize it. Unless it really is a big file, it shouldn't use a lot of memory. I'd personally use the sed route though - it's more elegant and "correct".
I don't think this is "quite" right. Whether or not things "happen" to the data stream does depend on the process reading the pipe.
Code:
$ time cat biographies.list.new > /dev/null

real    0m0.143s
user    0m0.003s
sys     0m0.137s
$ time cat biographies.list.new|head -n 1 > /dev/null

real    0m0.003s
user    0m0.000s
sys     0m0.000s
And yes, I've taken into account caching - the initial run to get the large file into memory took 5 seconds.



Cheers,
Tink

Last edited by Tinkster; 03-18-2010 at 01:44 PM.
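The early exit can be demonstrated without any file at all: once head closes its end of the pipe, the writer receives SIGPIPE on its next write and stops, which is why the piped cat above finishes in milliseconds. A minimal sketch:

```shell
# head exits after printing one line; seq then receives SIGPIPE the next
# time it writes into the closed pipe, so it never generates the full range.
seq 1 100000000 | head -n 1   # returns almost instantly, prints 1
```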
 
Old 03-18-2010, 02:39 PM   #6
devnull10
Member
 
Registered: Jan 2010
Location: Lancashire
Distribution: Slackware Stable
Posts: 553

Rep: Reputation: 116
You're only processing one line with the head command there, though, so it can stop once it's got that line. The OP was reading 6520 lines into it before passing that output to the tail command. I'd imagine this takes slightly more memory, although it's probably hardly noticeable at current machine speeds.

When I said it wouldn't optimize, I meant that I didn't believe the shell would be smart enough to see that that sequence of commands is equal to just reading a specific line from a file.

[edit]

Just done a little test - not much in it to be honest!

Code:
 ~ $ for i in $(seq 1 100); do cat /usr/share/dict/words >> bigfile; done
 ~ $ time cat bigfile | head -n 6520 | tail -n 1
chapters

real    0m0.003s
user    0m0.000s
sys     0m0.003s
 ~ $ time sed -n '6520{p;q;}' bigfile
chapters

real    0m0.002s
user    0m0.002s
sys     0m0.000s
 ~ $ wc -l bigfile 
3861900 bigfile
 ~ $

Last edited by devnull10; 03-18-2010 at 02:50 PM.
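A third option with the same early-exit behaviour as the sed version is awk (a sketch on a throwaway file; /tmp/awkdemo.txt is a placeholder name):

```shell
# NR is awk's running line counter; exit stops reading the file as soon
# as the target line has been printed, just like sed's q command.
seq 1 10000 > /tmp/awkdemo.txt
awk 'NR == 6520 { print; exit }' /tmp/awkdemo.txt   # prints 6520
```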
 
Old 03-19-2010, 03:41 AM   #7
f1dg3t
LQ Newbie
 
Registered: Mar 2010
Posts: 3

Original Poster
Rep: Reputation: 0
Thank you for the help, guys. I even learned about "time". I'll run some tests on my end with the files I have and see what works best.

Thanks
F1DG3T
 
  

