[SOLVED] Reading the content of line number $ (Shell Script)
Hello Geeks and Fellows,
I'm writing a shell script, and at one point I need it to read the content of a line given that line's number.
I found a work-around to achieve that, but I'm looking for something more efficient to improve the script's performance.
Here is the work-around I came up with:
Code:
#!/bin/bash
LineNumber=5
FileLocation='/some/locations/file'
# Read the content of line number $LineNumber from $FileLocation
Content=$(head -n "$LineNumber" "$FileLocation" | tail -n 1)
# End
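For comparison, here is a sketch of the same lookup done with a single awk call instead of a pipeline (assuming awk's `-v` option to pass the shell variable in; `exit` stops awk from reading the rest of the file). The demo file below is hypothetical, created on the fly so the snippet is self-contained:

```shell
#!/bin/bash
# Hypothetical demo file: five numbered lines.
FileLocation=$(mktemp)
printf 'line %d\n' 1 2 3 4 5 > "$FileLocation"

LineNumber=3
# awk prints only the target line, then exits instead of scanning the rest.
Content=$(awk -v n="$LineNumber" 'NR == n { print; exit }' "$FileLocation")
echo "$Content"   # -> line 3

rm -f "$FileLocation"
```

This avoids starting two processes (head and tail) for every lookup.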
One advantage is that, if he is reading a very big file, a dedicated file-reading tool such as awk is the way to go; bash's loop+read is simply too slow.
But if the files are small and he is doing lots of reads on multiple files, the bash solution will be much faster than calling any external program. I suspect that sed would be faster than awk, which would be faster than perl, because of the latency of forking an external program.
Here's another example using bash:
Code:
#!/bin/bash
LineNumber=5
FileLocation='/some/locations/file'
# or: LineNumber=$1 FileLocation=$2
COUNT=1
while IFS= read -r LINE ; do
# break once the line is found to avoid looping through the whole file
[[ $COUNT -eq $LineNumber ]] && echo "$LINE" && break
let COUNT=COUNT+1
done < "$FileLocation"
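If the file fits in memory, another pure-bash route is the mapfile builtin (an assumption here: bash 4 or later is available). It loads the whole file into an array, which pays off when many different lines of the same small file are needed. The demo file is hypothetical, created on the fly:

```shell
#!/bin/bash
# Requires bash >= 4 for mapfile. Hypothetical demo file: five numbered lines.
FileLocation=$(mktemp)
printf 'line %d\n' 1 2 3 4 5 > "$FileLocation"

LineNumber=5
# Read every line into the array; array indices are zero-based, so subtract 1.
mapfile -t lines < "$FileLocation"
echo "${lines[LineNumber-1]}"   # -> line 5

rm -f "$FileLocation"
```

After the one-time mapfile call, each subsequent lookup is just an array index, with no loop and no external process.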
Quote:
But if the files are small and he is doing lots of reads on multiple files, the bash solution will be much faster than calling any external program.
How about a small test on a file with 20000 lines (a small file), fetching the 1000th line:
Code:
$ head -5 file
1 this is a line
2 this is a line
3 this is a line
4 this is a line
5 this is a line
$ tail -5 file
19996 this is a line
19997 this is a line
19998 this is a line
19999 this is a line
20000 this is a line
$ more test.sh
#!/bin/bash
LineNumber=$1
FileLocation="file"
COUNT=1
while read LINE ; do
[[ $COUNT -eq $LineNumber ]] && echo $LINE && break
let COUNT=COUNT+1
done < $FileLocation
$ time ./test.sh 1000
1000 this is a line
real 0m0.090s
user 0m0.072s
sys 0m0.012s
$ time ./test.sh 1000
1000 this is a line
real 0m0.096s
user 0m0.073s
sys 0m0.012s
$ time awk 'NR==1000{print;exit}' file
1000 this is a line
real 0m0.003s
user 0m0.001s
sys 0m0.001s
$ time awk 'NR==1000{print;exit}' file
1000 this is a line
real 0m0.004s
user 0m0.000s
sys 0m0.003s
$ time sed -n '1000{p;q}' file
1000 this is a line
real 0m0.003s
user 0m0.000s
sys 0m0.003s
$ time sed -n '1000{p;q}' file
1000 this is a line
real 0m0.003s
user 0m0.001s
sys 0m0.002s
Calling sed or awk is much faster, and there is far less bash code to cook up as well.
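To use the sed winner from the timings above with a variable line number rather than a hardcoded 1000, the address can be interpolated into the script (a sketch; the trailing semicolon before `}` keeps the expression portable across sed implementations, and `q` stops sed from scanning the rest of the file). The demo file is hypothetical, created on the fly:

```shell
#!/bin/bash
# Hypothetical demo file: five numbered lines.
FileLocation=$(mktemp)
printf 'line %d\n' 1 2 3 4 5 > "$FileLocation"

LineNumber=2
# p prints the addressed line; q quits immediately afterwards.
Content=$(sed -n "${LineNumber}{p;q;}" "$FileLocation")
echo "$Content"   # -> line 2

rm -f "$FileLocation"
```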