bash - print to a file on the same first line in a loop
I need to print a $WORD to this file $FILE_OUTPUT on the same line in a while loop. If I use
Code:
while read line;
I get
Code:
WORD1
but what I want on that line is
Code:
WORD1 WORD2 WORD3
Thanks |
Hello,
it should work just fine if $WORD has no trailing newline; how do you construct it? To be sure of it, try
Code:
echo -n "$WORD" | tr -d "\n" >> $FILE_OUTPUT |
your $WORD variable is not the $line variable, so you must be processing the content of $line somehow to obtain $WORD. I was asking what that process was.
Anyway, since removing the end of line with tr doesn't work, and given the ".txt" extension, I suspect your input file is actually in DOS format, where ends of lines are not just a line feed "\n" but a carriage return followed by a line feed "\r\n". Either convert your input to the Unix format with
Code:
dos2unix input_file.txt
or handle the carriage returns inside your loop:
Code:
while read line; |
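For illustration, here is a minimal self-contained sketch of that idea (the file name and sample words are invented for the demonstration): build a small DOS-format file, then read it while dropping the trailing carriage return from each line.

```shell
# Sketch only: create a DOS-format (CRLF) file with made-up contents.
printf 'alpha\r\nbeta\r\ngamma\r\n' > dict_dos.txt

out=""
while IFS= read -r line; do
    word=${line%$'\r'}      # strip a trailing CR if present
    out="$out$word "        # collect the words on one line
done < dict_dos.txt

echo "$out"
```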
Quote:
thanks but it doesn't work either. I tried both commands you gave me. this is all my loop: Code:
dos2unix $FILE_INPUT |
Okay, the file that actually needs the conversion is the file $WORD comes from. You have to convert your $dictfile in that case. Or again, if you don't want to modify your file, you could trim the CR at the definition of $WORD, like
Code:
WORD=$(awk "NR==$line{print;exit}" $dictfile | tr -d "\r")
echo -n "$WORD " >> $FILE_OUTPUT |
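Put together, the whole fixed loop might look like this sketch (the sample data and the file names input.txt, dict.txt, and output.txt are invented here in place of $FILE_INPUT, $dictfile, and $FILE_OUTPUT):

```shell
# Invented sample data: input.txt holds line numbers to look up,
# dict.txt is a DOS-format (CRLF) dictionary.
printf '2\n1\n3\n' > input.txt
printf 'one\r\ntwo\r\nthree\r\n' > dict.txt
: > output.txt

while read -r line; do
    # Pick line $line from the dictionary and strip the carriage return.
    WORD=$(awk "NR==$line{print;exit}" dict.txt | tr -d "\r")
    echo -n "$WORD " >> output.txt    # all words end up on one line
done < input.txt

cat output.txt    # two one three
```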
Great, it works perfectly, thanks!
|
The simple way would be to move the redirection from the "echo -n $word" to the "done" where you redirect stdin. The "done" would then look like "done < $FILE_INPUT >$FILE_OUTPUT"
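As a sketch (file names invented), moving the redirection to the "done" opens the output file once instead of reopening it for every word:

```shell
printf 'red\ngreen\nblue\n' > input.txt

# Redirect stdin and stdout once, on the "done" line.
while read -r word; do
    echo -n "$word "
done < input.txt > output.txt

cat output.txt    # red green blue
```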
|
something as simple as
|
I think it's more efficient if you just use sed:
Code:
WORD=$(exec sed -n "${line}{ s@\\r@@; p; q; }" "$dictfile") |
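A standalone sketch of that sed call (sample data invented; note that the \r escape in the s command relies on GNU sed):

```shell
# Invented DOS-format dictionary and line number.
printf 'one\r\ntwo\r\nthree\r\n' > dict.txt
line=2

# Print line $line only, after deleting the carriage return (GNU sed's \r).
WORD=$(sed -n "${line}{ s@\r@@; p; q; }" dict.txt)
echo "$WORD"    # two
```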
Code:
mapfile -t lines <infile.txt
The "*" expansion of the array prints the entire contents of it as a single unit, with the first character in your IFS variable separating the individual entries. By default this is the space character. You can also use mapfile with internal commands, BTW, by using a process substitution, or a here document/here string in combination with a command substitution.
Code:
mapfile -t lines < <( mycommand ) |
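A quick sketch of that "[*]" behaviour (input file and contents invented; mapfile needs bash 4 or later):

```shell
printf 'one\ntwo\nthree\n' > infile.txt
mapfile -t lines < infile.txt    # bash 4+ builtin

# "${lines[*]}" joins every element with the first character of IFS,
# which is a space by default.
joined="${lines[*]}"
echo "$joined"                   # one two three

# A subshell keeps an IFS change local to the join:
csv=$(IFS=','; echo "${lines[*]}")
echo "$csv"                      # one,two,three
```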
Just remember that there is a limit on the number of lines you can use in that echo. It is fairly large (10,000, I believe), but anything larger will fail.
|
@jpollard. Yes, but I can't imagine that he's going to be searching for that many words at once. And actually, modern bash seems to be kind of smart when it comes to large array use. I just ran a test where I loaded up an array with more than 50k filenames, and it had no trouble printing them all out, either with echo or a for loop.
Anyway, after looking at the above code carefully, since I see that you want to do some on-screen printing as well, I think something like this might be a bit more appropriate. Code:
dos2unix "$file_input"
The negative index number in "${words[-1]}" prints only the final element of the array, and is also new to bash 4.2. For earlier versions "${words[@]:(-1)}" will do the same job. In the awk command, I imported the $line number into an awk variable first instead of inserting it directly, and I set the RS value to one that can handle both dos and unix-style line endings. Although it would be better in the long run if you could just run dos2unix on the $dictfile instead. Notice also my demonstrations of cleaned-up formatting, double brackets, and quoted variables (very important!). ;) |
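The two last-element forms mentioned above can be compared side by side (array contents invented for the demonstration):

```shell
words=(alpha beta gamma)

# bash 4.2 and later: a negative index counts back from the end.
echo "${words[-1]}"        # gamma

# Older bash: a negative-offset slice yields the same element.
echo "${words[@]:(-1)}"    # gamma
```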
It isn't the size of the data array, it is the number of parameters allowed for a command. echo, being a builtin, just might be able to bypass that, but in the general case you are limited. That is why the loops using echo -n work on any data, not just small data.
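The distinction can be seen directly: the kernel's ARG_MAX limit applies when a command is exec'd, while a builtin such as echo never leaves the shell. A small sketch (array size chosen arbitrarily):

```shell
# ARG_MAX is the kernel's per-exec limit on argument + environment size.
getconf ARG_MAX

# A builtin like echo is not exec'd, so a large argument list is fine:
args=()
for ((i = 0; i < 5000; i++)); do args+=("x"); done
echo "${args[@]}" | wc -c    # 10000: 5000 x's + 4999 spaces + newline
```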
|