LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   bash - print in a file on the same first line in a loop (https://www.linuxquestions.org/questions/linux-newbie-8/bash-print-in-a-file-on-the-same-first-line-in-a-loop-4175446799/)

Hoxygen232 01-22-2013 04:06 PM

bash - print in a file on the same first line in a loop
 
I need to print a $WORD in this file $FILE_OUTPUT on the same line in a while loop,

if I use
Code:

while read line; 
do
  ......
  echo -n "$WORD" >> $FILE_OUTPUT
done < $FILE_INPUT

it doesn't work well because the output file is in this way:

Code:

WORD1
WORD2
WORD3

instead I would like to see this:

Code:

WORD1 WORD2 WORD3
words separated by spaces


Thanks

antegallya 01-22-2013 04:21 PM

Hello,
it should work just fine if $WORD contains no trailing newline. How do you construct it?
To make sure, try
Code:

echo -n "$WORD" | tr -d "\n" >> $FILE_OUTPUT

Hoxygen232 01-22-2013 04:34 PM

Quote:

Originally Posted by antegallya (Post 4875707)
Hello,
it should work just fine if $WORD contains no trailing newline. How do you construct it?
To make sure, try
Code:

echo -n "$WORD" | tr -d "\n" >> $FILE_OUTPUT

it still doesn't work; those words come from a .txt file (a simple word dictionary)

antegallya 01-22-2013 04:59 PM

Your $WORD variable is not the $line variable, so you must be processing the content of $line somehow to obtain $WORD. I was asking what that process was.
Anyway, since removing the end of line with tr doesn't work, and given the ".txt" extension, I suspect your input file is actually in DOS format, where line endings are not just a line feed "\n" but a carriage return followed by a line feed "\r\n".
Either convert your input to the unix format with
Code:

dos2unix input_file.txt
or trim the carriage return from your input line with something like:
Code:

while read line; 
do
  line=$(echo "$line" | tr -d "\r")
  ......
  echo -n "$WORD" >> $FILE_OUTPUT
done < $FILE_INPUT


Hoxygen232 01-23-2013 03:56 AM

Quote:

Originally Posted by antegallya (Post 4875727)
Your $WORD variable is not the $line variable, so you must be processing the content of $line somehow to obtain $WORD. I was asking what that process was.
Anyway, since removing the end of line with tr doesn't work, and given the ".txt" extension, I suspect your input file is actually in DOS format, where line endings are not just a line feed "\n" but a carriage return followed by a line feed "\r\n".
Either convert your input to the unix format with
Code:

dos2unix input_file.txt
or trim the carriage return from your input line with something like:
Code:

while read line; 
do
  line=$(echo "$line" | tr -d "\r")
  ......
  echo -n "$WORD" >> $FILE_OUTPUT
done < $FILE_INPUT




Thanks, but it doesn't work either; I tried both commands you gave me.

This is my whole loop:

Code:

dos2unix $FILE_INPUT
 
  while read line; 
  do 
    if [ -z "$line" ] 
    then
      echo "......."
      echo
    else             
      echo "... $line"
      WORD=$(awk "NR==$line{print;exit}" $dictfile)  # looks the word up in a dictionary
      echo -e "\t$WORD"
      echo
      echo -n "$WORD" >> $FILE_OUTPUT
    fi
  done < $FILE_INPUT

it still prints the words one per line instead of on the same line.

antegallya 01-23-2013 05:08 AM

Okay, the file that actually needs the conversion is the one $WORD comes from, so in that case you have to convert your $dictfile. Or again, if you don't want to modify the file, you can trim the CR when you define $WORD:
Code:

WORD=$(awk "NR==$line{print;exit}" $dictfile | tr -d "\r")
Don't forget to add a space between the words when you echo them to your output file:
Code:

echo -n "$WORD " >> $FILE_OUTPUT

Hoxygen232 01-23-2013 07:28 AM

great, it works perfectly thanks

jpollard 01-23-2013 07:38 AM

The simple way would be to move the output redirection from the "echo -n $WORD" line to the "done", where you already redirect stdin. The "done" would then look like "done < $FILE_INPUT > $FILE_OUTPUT".
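As a sketch of that suggestion (file names and contents are made-up placeholders), the whole loop writes through a single redirection instead of reopening the output file on every echo:

```shell
#!/bin/bash
# Hypothetical sketch: redirect the whole loop's stdout once at "done",
# rather than appending with ">>" inside the loop on every echo.
printf 'one\ntwo\nthree\n' > in.txt     # stand-in input file

while read line; do
    WORD=$line                          # stand-in for the real dictionary lookup
    echo -n "$WORD "                    # no per-echo ">> file" needed
done < in.txt > out.txt

cat out.txt                             # -> "one two three " on a single line
```

This also avoids the double-append problem mentioned later in the thread: rerunning the script truncates out.txt instead of growing it.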

cbtshare 01-23-2013 11:06 PM

something as simple as
Quote:

while read line;
do
echo -n "$line " >>out.txt
done < in.txt
works fine

konsolebox 01-24-2013 01:13 AM

I think it's more efficient if you just use sed:
Code:

WORD=$(exec sed -n "${line}{ s@\\r@@; p; q; }" "$dictfile")
By the way, next time please give all the details; it wasn't apparent that you were reading from another file.

jpollard 01-24-2013 06:15 AM

Quote:

Originally Posted by cbtshare (Post 4876578)
something as simple as


works fine

Just don't run it twice...

David the H. 01-27-2013 07:43 AM

Code:

mapfile -t lines <infile.txt
echo "${lines[*]}" >outfile.txt

The mapfile command (bash v4+) loads the contents of the input file into an array, one line per entry.

The "*" expansion of the array prints its entire contents as a single unit, with the first character of your IFS variable separating the individual entries. By default this is the space character.
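A quick illustration of that joining behavior (file name and contents invented for the demo), including what happens when the first character of IFS is changed:

```shell
#!/bin/bash
# Sketch: "${array[*]}" joins all elements with the first character of IFS.
printf 'WORD1\nWORD2\nWORD3\n' > infile.txt

mapfile -t lines < infile.txt           # bash 4+: one line per array element

echo "${lines[*]}"                      # -> WORD1 WORD2 WORD3 (IFS starts with a space)

IFS=','                                 # make the first character of IFS a comma
echo "${lines[*]}"                      # -> WORD1,WORD2,WORD3
```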

You can also use mapfile with internal commands, BTW, by using a process substitution, or a here document/here string in combination with a command substitution.

Code:

mapfile -t lines < <( mycommand )

mapfile -t lines <<<$( mycommand )


jpollard 01-27-2013 08:24 AM

Just remember that there is a limit on the number of lines you can use in that echo. It is fairly large (10,000, I believe), but anything larger will fail.

David the H. 01-27-2013 08:47 AM

@jpollard. Yes, but I can't imagine that he's going to be searching for that many words at once. And actually, modern bash seems to be kind of smart when it comes to large array use. I just ran a test where I loaded up an array with more than 50k filenames, and it had no trouble printing them all out, either with echo or a for loop.


Anyway, after looking at the above code carefully, since I see that you want to do some on-screen printing as well, I think something like this might be a bit more appropriate.

Code:

dos2unix "$file_input"

while read line; do 

    if [[ -z "$line" ]]; then

        echo "......."
        echo

    else             

        words+=( "$( awk -v ln="$line" -v RS='\r?\n' 'NR==ln{print;exit}' "$dictfile" )" )
        echo "... $line"
        echo
        echo "${words[-1]}"

    fi

done < "$file_input"

echo "${words[*]}" >> "$file_output"

We're still doing the same thing, but now we're just adding each entry to the array inside the loop, and waiting until the end to echo them all out to file together.

The negative index number in "${words[-1]}" prints only the final element of the array, and is also new to bash 4.2. For earlier versions "${words[@]:(-1)}" will do the same job.
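For example (array contents made up for the demo), both forms pick out the final element:

```shell
#!/bin/bash
# Sketch: two equivalent ways to read the last element of a bash array.
words=(alpha beta gamma)

echo "${words[-1]}"        # -> gamma (negative indices, bash 4.2+)
echo "${words[@]:(-1)}"    # -> gamma (substring expansion, works on older bash too)
```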

In the awk command, I imported the $line number into an awk variable first instead of inserting it directly, and I set the RS value to one that can handle both dos and unix-style line endings. Although it would be better in the long run if you could just run dos2unix on the $dictfile instead.

Notice also my demonstrations of cleaned up formatting, double brackets, and quoted variables (very important!). ;)

jpollard 01-27-2013 08:50 AM

It isn't the size of the data array; it is the number of parameters allowed for a command. echo, being a builtin, just might be able to bypass that, but in the general case you are limited. That is why the loops using echo -n work on any data, not just small data.
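To make that concrete (exact numbers are system-dependent): the cap being discussed is the kernel's ARG_MAX limit on the argument list handed to an *external* command via execve. Builtins such as echo never go through execve, so they can sidestep it:

```shell
#!/bin/bash
# Sketch: ARG_MAX bounds arguments to external commands; builtins bypass it.
getconf ARG_MAX                      # kernel limit in bytes (system-dependent)

big=$(printf 'x%.0s' {1..100000})    # build a 100000-character string
echo "$big" | wc -c                  # builtin echo handles it without E2BIG
```

Passing the same string as an argument to an external command would fail once it exceeds the ARG_MAX value reported above.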

