Efficient way to read only 1st line from number of files
Hi,
I am stuck on a problem. I am looking for an efficient way to read a number of files, but only the first line from each of them. There could be hundreds of them in the directory/sub-directories, though none are large. Can anyone suggest a way to do this without compromising too much on memory or performance? Thanks in advance. /Anu |
"head n=1" ???.
Stick it in a (bash) loop. Shouldn't be too onerous. |
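A minimal sketch of that suggestion, assuming plain filenames in a single directory (no recursion, no newlines in names):

```shell
# Loop over files in the current directory and print each file's
# first line. "read < file" grabs just the first line without
# spawning a head process per file.
for f in ./*; do
    [ -f "$f" ] || continue          # skip directories and other non-files
    IFS= read -r line < "$f"         # read only the first line
    printf '%s: %s\n' "$f" "$line"
done
```

For recursion into sub-directories you would combine this with find, as the next post shows.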
Quote:
Code:
find . -type f -exec head -1 {} \;
Code:
find . -type f | while read -r FN; do
    head -n 1 "$FN"
done |
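Both of the above spawn one head process per file, which adds up over hundreds of files. A sketch of a lighter-weight alternative, assuming a reasonably modern awk (the nextfile statement is GNU awk and most current awks, not strict historic POSIX):

```shell
# Print the first line of every regular file under the current
# directory. "-exec ... {} +" batches many files into one awk
# invocation; FNR==1 matches the first line of each input file,
# and nextfile skips the rest of that file without reading it.
find . -type f -exec awk 'FNR==1 { print FILENAME ": " $0; nextfile }' {} +
```

This keeps memory use flat (only one line is held at a time) and runs far fewer processes than a per-file head.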