[SOLVED] bash - print in a file on the same first line in a loop
Your $WORD variable is not the $line variable, so you must be processing the content of $line somehow to obtain $WORD. I was asking what that process was.
Anyway, since removing the end of line with tr didn't work, and given the ".txt" extension, I suspect your input file is actually in DOS format, where each line ends not just with a line feed "\n" but with a carriage return followed by a line feed "\r\n".
Either convert your input to Unix format with
Code:
dos2unix input_file.txt
or trim the carriage return from each input line with something like:
Code:
while read -r line
do
    line=$(echo "$line" | tr -d '\r')
    ......
    echo -n "$WORD" >> "$FILE_OUTPUT"
done < "$FILE_INPUT"
Thanks, but it doesn't work either. I tried both commands you gave me.
This is my whole loop:
Code:
dos2unix "$FILE_INPUT"
while read -r line
do
    if [ -z "$line" ]
    then
        echo "......."
        echo
    else
        echo "... $line"
        WORD=$(awk "NR==$line{print;exit}" "$dictfile") # look the word up in a dictionary
        echo -e "\t$WORD"
        echo
        echo -n "$WORD" >> "$FILE_OUTPUT"
    fi
done < "$FILE_INPUT"
It still prints the words one per line instead of all on the same line.
Okay, then the file that actually needs the conversion is the one $WORD comes from. You have to convert your $dictfile in that case. Or again, if you don't want to modify that file, you could trim the CR right at the definition of $WORD.
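A minimal sketch of that approach, reusing the thread's variable names ($line, $dictfile, $WORD); the sample dictionary file here is made up for illustration:

```shell
# Hypothetical sample dictionary with DOS line endings ("\r\n")
dictfile=dict_sample.txt
printf 'alpha\r\nbravo\r\ncharlie\r\n' > "$dictfile"

line=2
# Pipe awk's output through tr so the trailing CR never reaches $WORD
WORD=$(awk "NR==$line{print;exit}" "$dictfile" | tr -d '\r')
echo "$WORD"    # prints: bravo
```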
The simple way would be to move the redirection from the "echo -n $WORD" line to the "done", where you already redirect stdin. The "done" would then look like "done < $FILE_INPUT > $FILE_OUTPUT".
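In context, that rearrangement might look like the sketch below; the input, output, and dictionary files are hypothetical stand-ins for the thread's variables:

```shell
# Hypothetical sample files standing in for the thread's variables
FILE_INPUT=lines_sample.txt
FILE_OUTPUT=words_sample.txt
dictfile=dict_sample.txt
printf '1\n2\n' > "$FILE_INPUT"
printf 'alpha\nbravo\ncharlie\n' > "$dictfile"

while read -r line; do
    WORD=$(awk "NR==$line{print;exit}" "$dictfile")
    echo -n "$WORD"        # stdout of the whole loop now goes to the file
done < "$FILE_INPUT" > "$FILE_OUTPUT"

cat "$FILE_OUTPUT"    # prints: alphabravo
```

Because the redirection is attached to the loop as a whole, the output file is opened once instead of once per word, and every echo inside the loop lands on the same line of it.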
The mapfile command (bash v4+) loads the contents of the input file into an array, one line per entry.
The "*" expansion of the array prints the entire contents of it as a single unit, with the first character in your IFS variable separating the individual entries. By default this is the space character.
You can also use mapfile with internal commands, BTW, by using a process substitution, or a here document/here string in combination with a command substitution.
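Putting those two points together, a small bash 4+ sketch (the sample file and the seq command are just illustrations, not from the thread):

```shell
#!/usr/bin/env bash
# Hypothetical sample file, one word per line
printf 'alpha\nbravo\ncharlie\n' > words_sample.txt

mapfile -t words < words_sample.txt   # -t strips each trailing newline
echo "${words[*]}"                    # joined by the first char of IFS: alpha bravo charlie

# mapfile also accepts a process substitution instead of a file:
mapfile -t nums < <(seq 3)
echo "${nums[*]}"                     # prints: 1 2 3
```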
Just remember that there is a limit on the number of arguments you can pass to that echo. It is fairly large (10,000, I believe), but anything larger will fail.
@jpollard. Yes, but I can't imagine that he's going to be searching for that many words at once. And actually, modern bash seems to be kind of smart when it comes to large array use. I just ran a test where I loaded up an array with more than 50k filenames, and it had no trouble printing them all out, either with echo or a for loop.
Anyway, after looking at the above code carefully, since I see that you want to do some on-screen printing as well, I think something like this might be a bit more appropriate.
Code:
dos2unix "$file_input"
while read -r line; do
    if [[ -z "$line" ]]; then
        echo "......."
        echo
    else
        words+=( "$( awk -v ln="$line" -v RS='\r?\n' 'NR==ln{print;exit}' "$dictfile" )" )
        echo "... $line"
        echo
        echo "${words[-1]}"
    fi
done < "$file_input"
echo "${words[*]}" >> "$file_output"
We're still doing the same thing, but now we're just adding each entry to the array inside the loop, and waiting until the end to echo them all out to file together.
The negative index number in "${words[-1]}" prints only the final element of the array, and is also fairly new to bash. For earlier versions, "${words[@]: -1}" will do the same job.
In the awk command, I imported the $line number into an awk variable first instead of inserting it directly, and I set the RS value to one that can handle both dos and unix-style line endings. Although it would be better in the long run if you could just run dos2unix on the $dictfile instead.
It isn't the size of the data array, it is the number of parameters allowed for a command. echo, being a builtin, just might be able to bypass that, but in the general case you are limited. That is why the loops using echo -n work on any data, not just small data.
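If you want to see the actual limit on your system, the POSIX getconf utility reports it (a quick check, not part of the thread's script); the limit only bites external commands, since a builtin echo never goes through execve():

```shell
# ARG_MAX is the kernel limit on the combined size of argv plus the
# environment handed to execve(); shell builtins never exec, so they
# are not subject to it.
getconf ARG_MAX
```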