LinuxQuestions.org


vinayakm 03-11-2009 03:23 AM

Simplify this shell script
 
Hi Friends

I have written the shell script below to download files from the URLs listed in the file ur.txt and, once each file has finished downloading, move it to a specified directory. Is there a shorter way to do this than the roundabout method I have followed in the script? One more problem I am facing: when I run this script in an SSH session using PuTTY and then close the session, the file currently being downloaded completes, but after that no further files are downloaded. Can you please help me out?


Code:

i=0
b=0
c=""
# Loop while ur.txt still has contents.
while [ `find ur.txt -size +0` ]
do
        url=`head -n1 ur.txt`
        wget -c --http-user=xxx --http-password=abc $url
        # Scan the URL character by character to find the last '/',
        # so everything after it is the filename.
        for ((i=1; i<${#url}; i++))
        do
                c=${url:$i:1}
                if [ $c == "/" ]; then
                        b=$i
                fi
        done
        b=$(($b+1))
        c=${url:$b}
        # Move the completed download into the directory 't'.
        mv -v $c t
        # Drop the URL that was just handled from the list.
        sed -si 1d ur.txt
done

Thanks
Vinayakm

eco 03-11-2009 04:19 AM

To avoid killing the session, use screen. It will allow you to disconnect from and reconnect to a session at will, as well as a few other nifty things.
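
For example, a minimal session might look like this (the session name and the script filename getfiles.sh are just placeholders):
Code:

screen -S downloads        # start a named screen session
./getfiles.sh              # run the download script inside it
                           # detach with Ctrl-a d; downloads keep running
screen -r downloads        # later, from a new SSH login, reattach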

As for the script, you could start by using meaningful variable names ;)

weibullguy 03-11-2009 04:29 AM

Well, you could add the following to the wget command to eliminate the mv -v statement:
Code:

--directory-prefix=<DIRECTORY_TO_STORE_TARBALLS>

Instead of parsing the URL from the file, you could add the following to the wget command:
Code:

-i ../ur.txt

If you do that, your script could be one line...the wget command.
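
Something like this, for example (the target directory is just a placeholder):
Code:

wget -c --http-user=xxx --http-password=abc -i ur.txt --directory-prefix=/downloads/test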

And I agree with eco; your variable names are meaningless.

vinayakm 03-12-2009 02:57 AM

Hi Friends

Thanks for the replies. @eco: suggestion taken regarding the variables.

The reason I am parsing the URL from a file is that once a file has been downloaded completely, its URL is removed from the list. So at any point in time I always have an up-to-date list of what remains to be downloaded. The reason I am not using --directory-prefix is that I sometimes terminate a download midway, so the download is incomplete and the partially downloaded file will not work. What I wanted was that once a file is completely downloaded, it should be moved to another directory, from where I can copy it to another machine.

What I want to shorten in this script is the filename extraction: instead of using a for loop, is there an easier way to do it?

Thanks and Regards
Vinayakm

weibullguy 03-12-2009 05:46 PM

You could use the --timestamping (-N) option with wget to only download files in your list if they are newer on the server. The --continue option will continue the download of files that were interrupted. Using those options, you could do something like this:
Code:

#!/bin/sh

# What directory is the final storage location
# for the downloaded files?
permdir="/downloads/test"

# Get 'em, baby.
wget -i ~/downloads/url.txt --continue --timestamping --directory-prefix=$permdir

exit 0

If you want to use looping, you could do something like this.
Code:

#!/bin/sh

# What directory is the final storage location
# for the downloaded files?
permdir="/downloads/test"

# How many URLs are listed in the download file?
n=`awk '//{n++}; END {print n+0}' url.txt`

# Initialize the count.
i=0

# Do it, baby.
while [ $i -lt $n ]
do
       
        # Get the first line (i.e., first file to download),
        # then download it using wget.
        url=`head -n1 url.txt`
        wget -c --http-user=xxx --http-password=abc $url

        # Find the name of the file that was just downloaded.
        file=`basename $url`

        # Move the recently downloaded file to another directory.
        mv -v $file $permdir

        # Remove the line for the file just downloaded.
        sed -si 1d url.txt

        # Increment the count.
        i=$((i+1))
       
done
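
As a side note, to answer the filename question without a loop or an external command at all: the shell's own parameter expansion can strip everything up to and including the last '/'.
Code:

file=${url##*/}    # same result as basename, no extra process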


vinayakm 03-12-2009 11:24 PM

Hi Friends

Thanks a lot for the help

Thanks and Regards
Vinayakm

