Old 12-09-2011, 01:42 PM   #1
christinek
LQ Newbie
 
Registered: Dec 2011
Posts: 1

Rep: Reputation: Disabled
display one line from multiple files


I have 500 files that I have to look through, but I'm only interested in the 20th line of each. Is there any way for me to view the 20th line of all 500 files at the same time?

I've been using
head -n 20 filename | tail -n 1
but then I have to do this 500 times...

Thank you for the help!
 
Old 12-09-2011, 01:45 PM   #2
kbscores
Member
 
Registered: Oct 2011
Location: USA
Distribution: Red Hat
Posts: 259
Blog Entries: 9

Rep: Reputation: 32
If they are all in the same directory you could do:
Code:
# print the 20th line of every file in /directory
for i in /directory/*
do
    head -n 20 "$i" | tail -n 1
done
 
Old 12-09-2011, 01:49 PM   #3
druuna
LQ Veteran
 
Registered: Sep 2003
Posts: 10,532
Blog Entries: 7

Rep: Reputation: 2371
Hi,

Are all the files in the same directory?

If so:
Code:
for THISFILE in *; do sed -n '21q;20p' "$THISFILE"; done
If you also want/need the file to be printed:
Code:
for THISFILE in *; do echo -n "$THISFILE : " ; sed -n '21q;20p' "$THISFILE"; done
Hope this helps.

Last edited by druuna; 12-09-2011 at 03:25 PM. Reason: Added 21q; for speed
 
Old 12-09-2011, 02:36 PM   #4
colucix
Moderator
 
Registered: Sep 2003
Location: Bologna
Distribution: CentOS 6.5 OpenSuSE 12.3
Posts: 10,452

Rep: Reputation: 1941
Using awk:
Code:
awk 'FNR==20' *
To print the filename, use an explicit print statement. Moreover, if the files are huge, you may want to speed up the whole process by adding nextfile:
Code:
awk 'FNR==20{print FILENAME ":", $0; nextfile}' *
 
1 member found this post helpful.
Old 12-09-2011, 02:42 PM   #5
Telengard
Member
 
Registered: Apr 2007
Location: USA
Distribution: Kubuntu 8.04
Posts: 579
Blog Entries: 8

Rep: Reputation: 147
I started with nine files.

Code:
test$ ls
file_1  file_2  file_3  file_4  file_5  file_6  file_7  file_8  file_9
test$
Each file is formatted as shown.

Code:
test$ cat file_1
file_1 line_1
file_1 line_2
file_1 line_3
file_1 line_4
file_1 line_5
file_1 line_6
file_1 line_7
file_1 line_8
file_1 line_9
file_1 line_10
file_1 line_11
file_1 line_12
file_1 line_13
file_1 line_14
file_1 line_15
file_1 line_16
file_1 line_17
file_1 line_18
file_1 line_19
file_1 line_20
(... continues ...)
file_1 line_50
Then I tested my script to print line 20 from each file.

Code:
test$ awk 'FNR == 20' file_*
file_1 line_20
file_2 line_20
file_3 line_20
file_4 line_20
file_5 line_20
file_6 line_20
file_7 line_20
file_8 line_20
file_9 line_20
test$
You say you're working with 500 or more files, so my pattern file_* would expand into quite a long list. I can imagine such a long command line having negative consequences. With that in mind, it may be wise to execute the script on one file at a time. This is easily automated with the find command.

Code:
test$ find -maxdepth 1 -name 'file_*' -exec awk 'NR == 20 {print FILENAME ":" $0}' {} \;
./file_9:file_9 line_20
./file_5:file_5 line_20
./file_4:file_4 line_20
./file_1:file_1 line_20
./file_2:file_2 line_20
./file_6:file_6 line_20
./file_7:file_7 line_20
./file_8:file_8 line_20
./file_3:file_3 line_20
test$
If you would prefer the list be sorted by filename, then simply pipe the output of find into sort.

Code:
test$ find -maxdepth 1 -name 'file_*' -exec awk 'NR == 20 {print FILENAME ":" $0}' {} \; | sort
./file_1:file_1 line_20
./file_2:file_2 line_20
./file_3:file_3 line_20
./file_4:file_4 line_20
./file_5:file_5 line_20
./file_6:file_6 line_20
./file_7:file_7 line_20
./file_8:file_8 line_20
./file_9:file_9 line_20
test$
If you also want to print line 20 of files in subdirectories, specify how many directories deep to search in the -maxdepth clause (a sketch of that variation follows the next example). If you want to recurse infinitely deep into all subdirectories, the -maxdepth clause may be omitted.

Code:
test$ find -name 'file_*' -exec awk 'NR == 20 {print FILENAME ":" $0}' {} \; | sort
./file_1:file_1 line_20
./file_2:file_2 line_20
./file_3:file_3 line_20
./file_4:file_4 line_20
./file_5:file_5 line_20
./file_6:file_6 line_20
./file_7:file_7 line_20
./file_8:file_8 line_20
./file_9:file_9 line_20
./subdir/file_10:file_10 line_20
test$
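For example, to stop at two directory levels instead of recursing all the way down, the same command takes an explicit -maxdepth (a quick sketch of that variation; I haven't run this one, and the depth value is only an illustration).

Code:
find -maxdepth 2 -name 'file_*' -exec awk 'NR == 20 {print FILENAME ":" $0}' {} \; | sort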
HTH

EDIT
colucix made a good point about accelerating awk by using nextfile to ignore all lines after line 20. If I incorporate that into my last example I get this.

Code:
test$ find -name 'file_*' -exec awk 'NR == 20 {print FILENAME ":" $0; nextfile}' {} \; | sort
./file_1:file_1 line_20
./file_2:file_2 line_20
./file_3:file_3 line_20
./file_4:file_4 line_20
./file_5:file_5 line_20
./file_6:file_6 line_20
./file_7:file_7 line_20
./file_8:file_8 line_20
./file_9:file_9 line_20
./subdir/file_10:file_10 line_20
test$

Last edited by Telengard; 12-09-2011 at 02:48 PM. Reason: addendum
 
Old 12-09-2011, 03:07 PM   #6
colucix
Moderator
 
Registered: Sep 2003
Location: Bologna
Distribution: CentOS 6.5 OpenSuSE 12.3
Posts: 10,452

Rep: Reputation: 1941
Quote:
Originally Posted by Telengard View Post
You say you're working with 500 or more files, so my pattern file_* would expand into quite a long list. I can imagine such a long command line having negative consequences.
Good point! It may result in the notorious and feared "Argument list too long" error.
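One way around that (a rough sketch along the lines of my earlier awk command, borrowing Telengard's file_* pattern; not tested against 500 files) is to let find pass the names to awk through xargs, so the shell never has to expand them all on a single command line:

Code:
find . -maxdepth 1 -type f -name 'file_*' -print0 | xargs -0 awk 'FNR==20{print FILENAME ":", $0; nextfile}'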
 
Old 12-09-2011, 03:23 PM   #7
Juako
Member
 
Registered: Mar 2010
Posts: 202

Rep: Reputation: 84
The proverbial sed-based one:

Code:
find <base_dir> -type f -exec sed -n '20{p;q}' {} \;
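If the filename is wanted in the output as well, one possible tweak (just a sketch, untested) is to wrap the pair in a small shell command so each match is labelled:

Code:
find <base_dir> -type f -exec sh -c 'printf "%s: " "$1"; sed -n "20{p;q}" "$1"' sh {} \;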

Last edited by Juako; 12-09-2011 at 03:25 PM.
 
1 member found this post helpful.
Tags: awk, find, sed