Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-tos, this is the place!
I'm using a variable that points to a file:
Code:
FILES=/tmp/files
Into which a list of filenames is written... every time the script is run, it appends the new filenames to the list within that file.
I don't want it to do that; I'd like that file only to be used while the script is running.
So the script runs, puts the filenames into the file (a variable to be called later in the script), and then the file should be wiped (but remain there) for use next time.
I'm always conscious to make sure I've searched and tried various things before I post a question, which on this occasion I did... however, I think I may have found the answer just after posting.
The initial problem is that if you append the new file to the end before you finish processing everything, then you have an infinite loop (well, until you run out of disk space).
If you have a limited number of files (limited being under around 10,000), you can first load the file into an array (closing the file after loading). Then you can process the file from the array and append whatever you want to the file.
OR you can create a new empty file, and copy whichever names you need from the first file, and add new names. When finished, just "mv" the new file to the old file name.
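The first approach could be sketched like this in bash (untested sketch; the file name and sample contents are placeholders, not from the script above). `mapfile` loads the whole file into an array, after which the file can be safely truncated and appended to:

```shell
#!/bin/bash
# Sketch of the "load into an array first" approach.
FILES=/tmp/files
printf 'a.csv\nb.csv\n' > "$FILES"   # example contents, for illustration

# Read every line of $FILES into the array LIST, then truncate
# the file so this run starts with an empty list.
mapfile -t LIST < "$FILES"
: > "$FILES"

# Process the saved names; appending to $FILES now cannot loop forever,
# because we iterate over the array, not the live file.
for name in "${LIST[@]}"; do
    echo "processing $name"
    echo "$name" >> "$FILES"   # safe: the loop reads LIST, not the file
done
```

`: > "$FILES"` empties the file while leaving it in place, which is exactly the "wiped but remain there" behaviour asked about above.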
If it helps here is the section of script that's using it:
Code:
FILES=/tmp/upload
FTP=`awk '{print "put "$0;}' $FILES`
find /sitsimp -type f -name "*$(date +%d%m%y)*.csv.gpg" | awk -F/ '{print $NF}' >> $FILES
cd /sitsimp
ftp -inv server <<!
user user pass
binary
$FTP
bye
!
> $FILES
So the variable is used to store the results from the find command, but this information is only valid until it has been uploaded by FTP, at which point the file needs to be emptied.
What would be the best way of achieving this? At the moment, with > $FILES, it does not work at all.
I would do something like this though... UNTESTED, but should be close.
Code:
#create the tmp file
TMPFILE=$(mktemp /tmp/myfile.XXXXX)
#populate tmpfile
find /sitsimp -type f -name "*$(date +%d%m%y)*.csv.gpg" > $TMPFILE
cd /sitsimp
#read each line of tmp file, upload that file, move to the next
while read LINE
do
ftp -inv server << EOF
user user pass
binary
put $LINE
bye
EOF
done < $TMPFILE
#remove tmp file
rm -f $TMPFILE
Hmm... it seems if I remove a > from the find command and insert the data into the file with a single >, it overwrites what's in there and does not append.
This may be okay... but still the script has to be run twice: once to update the file and then again to upload the data.
Hi,
Thanks for your message.
I've been playing around with the while loop all day and pulling my hair out... I couldn't get it to work.
I'll have a play around with what you've suggested and see if I have more luck.
Thanks for your help with the FTP loop thing, that works nicely! It seems when I was trying to get it working I didn't put the:
Code:
done < $TMPFILE
Does this tell it that it's only done when it has read through all lines within that file?
The following is my now (nearly) completed script... Is it possible for me to consolidate some of the loops? I've tried renaming the file extension in the same loop as encrypting them, but it doesn't work and only renames the unencrypted file.
Code:
#!/bin/bash
TMPFILE=$(mktemp /tmp/tempfile1.XXXXX)
HOST=X.X.X.X
USER=ftpxfer
PASS=xxxx
FTPLOG=/tmp/ftplogfile
cd /sitsimp
find . -type f -name "*$(date +%d%m%y)*.csv" |
while read i; do
gpg -r server@bathspa.ac.uk -e "$i";
done
find . -type f -name "*$(date +%d%m%y)*.csv.gpg" |
while read i; do
mv $i `basename $i .csv.gpg`.csv.pgp;
done
find . -type f -name "*$(date +%d%m%y)*.csv.pgp" |
awk -F/ '{print $NF}' > $TMPFILE
while read LINE
do
ftp -inv $HOST << EOF > $FTPLOG
user $USER xxxx
binary
put $LINE
bye
EOF
if fgrep "226 Transfer complete" $FTPLOG ; then
echo -e "FTP TRANSFER SUCCESS.\n\n $LINE has uploaded to Server\n"
else
echo -e "FTP TRANSFER ERROR.\n\n $LINE failed to upload\n"
fi
done < $TMPFILE
rm -f $TMPFILE
I had thought about a switch to do it inline, but didn't think it would be as simple as $i.pgp!!
I like it
Do you think the rest of the script is okay? I am still learning bash; I broke the script up into individual jobs, worked out how to do each particular bit, and then added it all together. I'm pleased with what I've learned, but always want to learn more.
Ooh, and, was I correct about the:
done < $TMPFILE
Does that mean that, the loop is only done when the end of $TMPFILE has been reached?
Quote:
Does that mean that, the loop is only done when the end of $TMPFILE has been reached?
Sort of. It means all the commands inside the while loop have their standard input redirected from $TMPFILE. The while read LINE part means to read from input ($TMPFILE because of the redirection) and stop when end of file is reached.
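A quick way to see this for yourself, with a throwaway file (the contents here are just an illustration):

```shell
#!/bin/bash
# Tiny demo: the redirection on "done" feeds the file to the whole
# loop; read fails at end of file, which is what ends the loop.
DEMO=$(mktemp)
printf 'one\ntwo\nthree\n' > "$DEMO"

COUNT=0
while read LINE; do
    COUNT=$((COUNT + 1))
    echo "line $COUNT: $LINE"
done < "$DEMO"    # loop ends after "three": end of file reached

rm -f "$DEMO"
```

Running it prints three numbered lines and then stops, because the fourth read hits end of file and returns failure.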
All the input files are expected to be in the same directory, right? That is, you don't expect to find files in subdirectories (I think the basename thing wouldn't work otherwise)? If that's so you can replace the find | while with globbing in a for loop:
Code:
for i in *"$(date +%d%m%y)"*.csv; do
gpg -r server@bathspa.ac.uk -e "$i" -o "$i.pgp";
done
And you could combine the ftp loop contents into this one and avoid $TMPFILE altogether:
Code:
for i in *"$(date +%d%m%y)"*.csv; do
gpg -r server@bathspa.ac.uk -e "$i" -o "$i.pgp";
ftp -inv $HOST << EOF > $FTPLOG
user $USER xxxx
binary
put $i.pgp
bye
EOF
if fgrep "226 Transfer complete" $FTPLOG ; then
echo -e "FTP TRANSFER SUCCESS.\n\n $i.pgp has uploaded to Server\n"
else
echo -e "FTP TRANSFER ERROR.\n\n $i.pgp failed to upload\n"
fi
done
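One caveat worth adding to the glob version (a note of mine, not from the posts above): if no file matches the pattern on a given day, the unexpanded glob is passed to the loop literally, so the body runs once with a bogus name. bash's nullglob option makes an unmatched glob expand to nothing instead:

```shell
#!/bin/bash
# With nullglob set, an unmatched glob expands to zero words,
# so the for loop body is simply skipped on days with no files.
shopt -s nullglob
for i in *"$(date +%d%m%y)"*.csv; do
    echo "would encrypt and upload: $i"
done
shopt -u nullglob
```

Without this, the script would try to gpg-encrypt and upload a file literally named `*<date>*.csv`.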
Last edited by ntubski; 06-16-2014 at 03:03 PM.
Reason: forgot to replace $LINE with $i.pgp