head adds chars to end of each line (Red Hat Enterprise Linux)
Hello!
I wrote a bash script that splits each of many .sql files into two parts on a certain condition, using the head utility.
After that I execute all the scripts in sqlplus, and in one or two of them I get an error:
SP2-0042: unknown command ")" - rest of line ignored.
If I open the file with vi, I can see a "^M" at the end of each line, which is treated as a single character. If I delete this character before the closing parenthesis, the script executes without any errors.
In the initial script, opened with vi, there are no such characters.
Is this a problem with the head utility, or with something else? What can I do about it?
Of course, I can't simply grep for these special characters.
I'm not sure if head is the culprit. Can you post the relevant code of your bash script?
One way to remove those characters is by using the dos2unix command. Most of the time one encounters those characters in a file that originates from Windows.
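If dos2unix happens not to be installed, the same cleanup can be done with stock tools; a minimal sketch (the file name is made up for the demo, and the `sed -i` form assumes GNU sed):

```shell
# Sample file with DOS (CRLF) line endings
printf 'SELECT 1;\r\nexit;\r\n' > in.sql

# Option 1: delete every carriage-return byte with tr
tr -d '\r' < in.sql > out.sql

# Option 2: strip a CR only at end-of-line, in place (GNU sed syntax;
# on other seds you would type a literal ^M instead of \r)
cp in.sql inplace.sql
sed -i 's/\r$//' inplace.sql
```

Both variants leave a plain LF-terminated file behind.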
This is because the file you are looking at has CRLF line endings instead of LF. Have a look at the 'fileformat' option in vi:
'fileformat' 'ff' string (MS-DOS, MS-Windows, OS/2 default: "dos",
Unix default: "unix",
Macintosh default: "mac")
local to buffer
{not in Vi}
This gives the <EOL> of the current buffer, which is used for
reading/writing the buffer from/to a file:
dos <CR> <NL>
unix <NL>
mac <CR>
When "dos" is used, CTRL-Z at the end of a file is ignored.
See |file-formats| and |file-read|.
For the character encoding of the file see 'fileencoding'.
When 'binary' is set, the value of 'fileformat' is ignored, file I/O
works like it was set to "unix".
This option is set automatically when starting to edit a file and
'fileformats' is not empty and 'binary' is off.
When this option is set, after starting to edit a file, the 'modified'
option is set, because the file would be different when written.
This option can not be changed when 'modifiable' is off.
For backwards compatibility: When this option is set to "dos",
'textmode' is set, otherwise 'textmode' is reset.
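A quick way to confirm from the shell which line endings a file actually has (a sketch with made-up file names, assuming GNU coreutils and file(1) are available):

```shell
# Sample files, one with DOS (CRLF) and one with Unix (LF) endings
printf 'SELECT 1;\r\nexit;\r\n' > dos.sql
printf 'SELECT 1;\nexit;\n'     > unix.sql

# file(1) reports CRLF terminators explicitly
file dos.sql

# cat -A shows CR as ^M and end-of-line as $
cat -A dos.sql

# bash's $'\r' quoting passes a literal CR byte to grep;
# -l lists only the files that contain one
grep -l $'\r' dos.sql unix.sql
```

Here file(1) reports something like "ASCII text, with CRLF line terminators", cat -A prints the lines as `SELECT 1;^M$`, and grep lists only dos.sql.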
I very much doubt that head would introduce those. What you see are DOS carriage returns. What platform are you on, and where are the files from?
These files were checked out on Red Hat and then copied to another Red Hat server using the WinSCP plugin for FAR. They were not edited with any Windows editors.
Quote:
Originally Posted by EricTRA
I'm not sure if head is the culprit. Can you post the relevant code of your bash script?
Code:
dos2unix <filename>
Thank you, this solution worked perfectly, but I'm also interested in finding the causes of this problem.
Relevant code of bash script:
Code:
while read fname
do
    newf=`basename $fname`
    lnum=`grep -inm 1 "ALTER" $fname | cut -d: -f1`
    if [ 0$lnum != 0 ]
    then
        head -$(echo $lnum"-1" | bc) $fname >$tab_dir/$newf
        tail -$(echo $(cat $fname | wc -l)"-"$lnum"+1" | bc) $fname >$const_dir/$newf
    else
        cp $fname $tab_dir/$newf
    fi
done
This bash script reads filenames from a pipeline and splits each SQL script into two parts: everything before the first ALTER statement, and everything from it onward.
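For what it's worth, the same split can be written with the shell's built-in arithmetic and `tail -n +K` (print from line K to the end) instead of piping to bc and counting lines with wc; a rough equivalent as a self-contained sketch (the demo file and the tab_dir/const_dir values are made up here — the real script gets them from elsewhere):

```shell
#!/bin/bash
# Hypothetical setup so the sketch runs on its own
tab_dir=tables const_dir=constraints
mkdir -p "$tab_dir" "$const_dir"
printf 'CREATE TABLE t (id NUMBER);\nALTER TABLE t ADD PRIMARY KEY (id);\n' > demo.sql

echo demo.sql | while read -r fname
do
    newf=$(basename "$fname")
    # line number of the first ALTER (case-insensitive), empty if none
    lnum=$(grep -inm 1 "ALTER" "$fname" | cut -d: -f1)
    if [ -n "$lnum" ]
    then
        # lines 1 .. lnum-1 go to the "tables" part
        head -n $((lnum - 1)) "$fname" > "$tab_dir/$newf"
        # lines lnum .. end go to the "constraints" part
        tail -n +"$lnum" "$fname" > "$const_dir/$newf"
    else
        cp "$fname" "$tab_dir/$newf"
    fi
done
```

Since head and tail only copy the bytes of the lines they select, neither form can introduce CR characters that weren't already in the input.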
Hello,
I'm almost sure that WinSCP is the culprit in this case. You copied from server1 to a Windows workstation and then from the Windows pc to server2 with the same WinSCP, right? You can verify by copying the file directly from server1 to server2 using scp if you can connect from one to another.
Quote:
Originally Posted by EricTRA
You copied from server1 to a Windows workstation and then from the Windows pc to server2 with the same WinSCP, right?
That's right. But in my first post I said that the initial scripts had no DOS-style line endings. The rub is that those "initial" scripts had themselves already been copied using WinSCP.
Quote:
Originally Posted by EricTRA
You can verify by copying the file directly from server1 to server2 using scp if you can connect from one to another.
I've tried to, but didn't manage. I'm not sure what the problem with my SSH key is. I created it using PuTTY (on MS Windows) and placed it in ~/.ssh/; it is loaded when I use scp and I am asked for my passphrase, but the passphrase that works fine when creating a session in WinSCP is rejected as incorrect.
Is this happening consistently, i.e. on all line endings, or only on a few?
If it's only happening on a few, what is the common denominator connecting those?
Quote:
Is this happening consistently, i.e. on all line endings, or only on a few?
If it's only happening on a few, what is the common denominator connecting those?
I'm confused. Previously it was a stable error: it appeared on all line endings (in all the files I checked), and then, after the next run of my scripts, the problem disappeared...
Just an idea: the first time you copied over your files they were copied using WinSCP which might have resulted in the error. Next when you ran the script, if it writes to the files then I'd assume it's doing it correctly in the 'unix/linux' way so the error will not reproduce.
I've found that the cause of this problem is another script - that's why the problem was intermittent: I didn't run that script every time.
But I'm still confused; I can't understand why the problem appears at all.
Here's the script:
Code:
while read dir
do
    pwd1=`pwd`
    if [ x$dir != x ]
    then
        cd $dir
        for file in `ls | grep -i '\.sql'`
        do
            echo $'\nexit;\n' >>$file
        done
    fi
    cd $pwd1
done
It just visits each .sql file in a directory and appends "newline exit; newline" to the end of the file, so I can't understand why it would add Windows-style line endings to all the other lines.
The script is executed from a PuTTY terminal via "find * -type d | append_exit"; the "append_exit" file is placed in ~/bin.
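For what it's worth, the `echo $'\nexit;\n'` append by itself only ever writes LF bytes (assuming the script runs under bash, where $'...' ANSI-C quoting is interpreted), which can be checked directly; a small sketch with a made-up file:

```shell
# Start from a clean Unix-format file
printf 'SELECT 1;\n' > check.sql

# The same append the script performs
echo $'\nexit;\n' >> check.sql

# od -c dumps the raw bytes: only \n should appear, never \r
od -c check.sql
```

So if ^M shows up on every line of the .sql files, it was already there (or was introduced by the transfer) before this append ran.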