In a bash script, how do I move a line to a new file?
I've got a bash script I'm using to download a text file list of links via axel. What I'd like to do is automate the movement of completed links in the for loop when axel has successfully completed the download.
This is what I've got. I figure I can just echo-append the line to a new file, but what is the easiest way to delete the line containing the link I just downloaded? Code:
Hi,
Assuming that all lines are unique: Code:
#remove $i line from $1 file...?
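The code from this post is not preserved above. One safe way to do the removal, assuming `$i` holds the link and `$1` the list file (the thread's own names), is a fixed-string whole-line grep rather than a regex; this sketch is my own, not the poster's script:

```shell
#!/bin/bash
# Hedged sketch -- the post's original code is not preserved.
# remove_link drops the exact line matching the downloaded URL.
# grep -vxF treats the URL as a fixed string and matches whole lines,
# so slashes, dots, and '?' in the URL need no escaping (unlike a sed regex).
remove_link() {
    local link=$1 listfile=$2
    grep -vxF "$link" "$listfile" > "$listfile.tmp"
    mv "$listfile.tmp" "$listfile"
}

# Demo with a throwaway list:
printf '%s\n' 'http://example.com/a.iso' 'http://example.com/b.iso' > links.txt
remove_link 'http://example.com/a.iso' links.txt
cat links.txt   # -> http://example.com/b.iso
```

Note this assumes all lines are unique, exactly as the post says.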
Note that your export line is not required; in fact, you could just use the return value on its own:
Code:
if (( $? ))
You don't even need to look at the return value at all; the if statement already does that:
Code:
if axel --alternate --num-connections=6 "$i"; then
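The pattern on that line can be demonstrated end to end. In this sketch axel is swapped for a stand-in function so it runs anywhere, and downloaded.txt is an assumed name for the completed-links log, not something from the thread:

```shell
#!/bin/bash
# Demo of branching on the command's exit status directly (no $? check).
# fake_download stands in for: axel --alternate --num-connections=6 "$url"
fake_download() { [[ $1 == *good* ]]; }   # succeeds only for URLs containing "good"

printf '%s\n' 'http://example.com/good.iso' 'http://example.com/bad.iso' > links.txt
: > downloaded.txt
while read -r url; do
    if fake_download "$url"; then
        echo "$url" >> downloaded.txt   # assumed name for the completed-links log
    fi
done < links.txt
cat downloaded.txt   # -> http://example.com/good.iso
```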
Just an example; I would do things this way.
Code:
#!/bin/bash
I just had a chance to change up the script and try it out. I now see the duplication of the return value, and indeed I see the improved simplicity of konsolebox's example script. Thanks, guys!
One problem that I didn't anticipate: although the example script does work, it doesn't run the sed change until after all the links are downloaded, so if I break the script during a download (or during the while loop) and restart it, it starts over at the top of the list. This is partly because I reorder the links as priorities change; normally I just break the script and restart it to effect that change. Is there a way to create a loop that 1. removes the line once it has finished downloading a link, 2. re-reads the file on every pass and downloads the top line, and 3. stops at the end of the file? I started on the idea below, but I am not well versed enough in sed, loops, and bash to make it work. Specifically, I'm struggling with the sed line, as the links are fully qualified URLs, so I can't use / as a delimiter. Code:
#!/bin/bash
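On the delimiter question specifically: sed lets you pick almost any delimiter. For an address (the deletion case here), put a backslash before the opening delimiter, as in \|pattern|d, and slashes in the URL then need no escaping. A runnable sketch of just that piece:

```shell
#!/bin/bash
# sed accepts an arbitrary address delimiter if the opening one is escaped,
# so a URL full of slashes can be used as-is:
url='http://example.com/path/file.iso'
printf '%s\n' "$url" 'http://example.com/other.iso' > links.txt

sed -i "\|^$url\$|d" links.txt   # delete the line matching $url
cat links.txt   # -> http://example.com/other.iso
```

The URL is still treated as a regex, so other metacharacters (e.g. `+`) would need escaping; for exact-string removal, `grep -vxF` is the safer tool.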
I can't test this but I hope it works for you.
Code:
#!/bin/bash
I am not sure I see how the for loop construct would be better than a simple while loop reading from the file (not to say it doesn't work, of course).
@OP - I am not sure how breaking the script at the point of using axel makes a difference if after each successful download you are removing the lines from the file? Obviously if you cancel the script at any point and a download is incomplete then the corresponding entry will not have been removed so the download will start afresh once the script is started again. Maybe I am missing something but is it not enough to say: Code:
#!/bin/bash
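grail's script itself is not preserved above. A minimal sketch of the idea he describes (a plain while loop that removes each link immediately on success, so a restart resumes where it stopped) might look like the following; the download is stubbed out so the sketch runs even where axel is not installed:

```shell
#!/bin/bash
# Hedged sketch of the while-loop idea, not grail's actual script.
download() { true; }   # stand-in for: axel --alternate --num-connections=6 "$1"

list=links.txt
printf '%s\n' 'http://example.com/a.iso' 'http://example.com/b.iso' > "$list"

while read -r url; do
    if download "$url"; then
        grep -vxF "$url" "$list" > "$list.tmp"   # drop the finished link at once
        mv "$list.tmp" "$list"
    fi
done < "$list"
cat "$list"   # empty: both links were "downloaded" and removed
```

One caveat: `done < "$list"` fixes the file descriptor on the original file, so edits made to the list while the loop runs are not seen, which is exactly the concern raised later in the thread.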
Thanks, all. 'Tis much appreciated.
To answer your question, grail: ideally I'd just loop over the file, but I edit the URL file on the fly in the background, so the number of lines changes up and down. If I use a loop that reads the file once, it retains the original order and count without picking up new lines, deleted lines, or order changes, which is why I was trying to make the loop depend on reaching the end of the file. This also makes deleting by line number precarious, since the numbering may have changed, so I was trying to delete the actual entry rather than a line number. I think this answers your second question too, but yes, axel and the script do just leave an incomplete download listed. The reason I move the link out is that if axel is given a download and finds a duplicate file without state information (i.e. no .st file), it restarts the download from scratch with a .0 extension (and increments from there). So once a file has finished downloading (and there is no longer an .st file), the script would keep re-downloading the first link on the list, adding new extensions, even though it completed successfully. It's not as smart as wget with no-clobber, but the multi-connection support is more useful in this particular case. Any ideas? Was my loop just missing the sed delimiter, or was the whole loop malformed? Many thanks, guys! Rich
Code:
sed -i '$remove' $1 to get around this. Now that you have explained further, I do understand what you intend, but I would caution you that it is indeed fraught with danger. Even though you only access the file using head to get an entry, the big problems will occur when you are changing the file just as head tries to grab the next line. At the same time, if the download is quick, you may also execute a sed on the file whilst you are still editing it. All of this chills me to the bone, as data will actively be added and deleted at the same time; it sounds like a recipe for disaster. The only thing springing to mind would be to create a lock on the file, so that when one party (you or the script) is making changes, the other has to wait until the lock is freed. I'll be interested to see how this is solved!
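The lock grail mentions can be built with flock(1) from util-linux: every writer of the list (the script and any manual editing session) takes an exclusive lock on the same lock file around each read-modify-write, so neither touches links.txt while the other holds it. A sketch with a lock-file name of my own choosing:

```shell
#!/bin/bash
# Sketch of cooperative locking with flock(1); this only protects the file
# if EVERY writer of links.txt uses the same lock file.
lock=links.lock
list=links.txt
printf '%s\n' 'http://example.com/a.iso' 'http://example.com/b.iso' > "$list"

(
    flock -x 9                   # block until we hold the exclusive lock
    url=$(head -n 1 "$list")     # read-modify-write while locked
    grep -vxF "$url" "$list" > "$list.tmp"
    mv "$list.tmp" "$list"
) 9> "$lock"

cat "$list"   # -> http://example.com/b.iso
```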
You can try this one. It requires bash version 4.0 or newer.
Code:
#!/bin/bash
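The posted script is not preserved above. A hedged sketch of a bash 4.0 approach consistent with the follow-up discussion (mapfile snapshots the list, failed links collect in temp.txt, which is catted back over the FILE) might be:

```shell
#!/bin/bash
# Hedged reconstruction, not the original script. Requires bash >= 4.0 for
# mapfile. FILE and temp.txt are the names mentioned in the follow-up post.
FILE=links.txt
printf '%s\n' 'http://example.com/a.iso' 'http://example.com/b.iso' > "$FILE"

mapfile -t urls < "$FILE"   # snapshot the list into an array
: > temp.txt
for url in "${urls[@]}"; do
    if true; then           # stand-in for: axel --alternate --num-connections=6 "$url"
        echo "done: $url"
    else
        echo "$url" >> temp.txt   # keep links that failed
    fi
done
cat temp.txt > "$FILE"      # rewrite the list minus the successes
rm -f temp.txt
```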
@konsolebox - whilst an interesting solution, how does this address the issues of the FILE being edited whilst, for example, you are catting the temp.txt back over the same FILE?
I am not saying this won't work; I am just curious how this may/will stop that from occurring?
Sorry, I had to revise it. This is the best solution I know so far, since I can't consider file locking yet:
Code:
#!/bin/bash
Hmmm ... so you have effectively replaced the sed work with a while loop. My thought here is that this would probably take longer to process than sed, and it leads back to the issue of not locking the file (maybe). Ignoring that constant issue, I am curious about this portion of code: Code:
while read; do
By virtue of the last while loop in the code, all non-null SUCCESSes will be removed, yes? And the FAILED items, do we not wish to retry those? Maybe I am following it wrong ... sorry :(
Btw, it's ok :)
How about using the trap mechanism:
Code:
#!/bin/bash
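The posted script is not preserved above; a hedged sketch of the trap idea is below. Completed links are logged to a temp file, and cleanup() rewrites the list without them on any exit, so an interrupted run keeps the progress made so far. The download is stubbed out so the sketch runs without axel:

```shell
#!/bin/bash
# Hedged sketch of the trap mechanism, not the poster's actual script.
list=links.txt
printf '%s\n' 'http://example.com/a.iso' 'http://example.com/b.iso' > "$list"

done_tmp=$(mktemp)
cleanup() {
    grep -vxFf "$done_tmp" "$list" > "$list.new"   # keep only unfinished links
    mv "$list.new" "$list"
    rm -f "$done_tmp"
}
trap cleanup EXIT INT TERM    # rewrite the list even on ^C

while read -r url; do
    if true; then             # stand-in for: axel --alternate --num-connections=6 "$url"
        echo "$url" >> "$done_tmp"
    fi
done < "$list"

trap - EXIT INT TERM          # normal end: run cleanup once, explicitly
cleanup
cat "$list"   # empty: both links completed
```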
@ntubski - I understand how you are waiting until exit to make the changes, but I believe the bigger issue is that the FILE you are reading from will be actively changed, according to the OP, whilst your loop is running. So your script will not be aware of changes until it is started again.
Oops, I was confused, second try:
Code:
#!/bin/bash
Yes that looks like a better alternative :) This appears to address my concerns at least ... let us see what the OP thinks?