LinuxQuestions.org


vanvoj 11-03-2009 04:01 AM

Need Linux script to copy a large, deep directory structure to another filesystem
 
Hi,

I'm a newbie at scripting, and I have the following scenario:

a large NFS filesystem with a deep directory structure (~100,000 directories). I need to copy the contents to other storage (another filesystem), but I will also need logging in case something doesn't get copied over to the new NFS filesystem. I want to copy in a controlled (batched) way, with logging.

Any info on how to do this (with the cp -R command) is appreciated.

Rgds, Tom

Guttorm 11-03-2009 07:25 AM

Hi

You could use rsync. If you give it the -v option, you get a list of all the files transferred. Also, if you add the -t option, file modification times are preserved, so if the rsync fails for some reason it will be fast the next time you run it: files that were already copied are skipped. And if you run it without the -v option and there is no output, you know that all files were copied OK.
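
For example, a minimal sketch of that approach (the source and destination paths here are placeholders):

Code:

rsync -rtv /mnt/oldnfs/src/ /mnt/newnfs/dst/ >copied-files.log 2>copy-errors.log

Here -r recurses into directories, -t preserves modification times (so a rerun skips what's already there), and -v prints each file transferred; redirecting stdout and stderr gives you a transfer log and an error log.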

Sergei Steshenko 11-03-2009 07:31 AM

Quote:

Originally Posted by vanvoj (Post 3742365)
Hi,

I'm a newbie at scripting, and I have the following scenario:

a large NFS filesystem with a deep directory structure (~100,000 directories). I need to copy the contents to other storage (another filesystem), but I will also need logging in case something doesn't get copied over to the new NFS filesystem. I want to copy in a controlled (batched) way, with logging.

Any info on how to do this (with the cp -R command) is appreciated.

Rgds, Tom

Code:

cp -p -r src_dir dst_dir 2>copy.log

(The above is in sh/bash syntax.)

It assumes your 100,000 directories reside under one common src_dir directory; after copying they will be under dst_dir. The 2>copy.log part redirects error messages to copy.log, so anything that fails to copy is recorded there.

vanvoj 11-04-2009 02:01 AM

Thanks guys,

That will surely help, but I need to let the script pause after, let's say, 1000 dirs. The reason for this is that the data is archived, and there is an archiving mechanism in place which retrieves the data from another tier of storage.

So we have an NFS-exported share with archived data stubs. When we do the copy to the other NFS share, it will retrieve the data back, but the machine where the NFS data resides isn't really fast and can only hold a certain amount of data; every minute the data gets stubbed again to free up space on the local NFS machine.

Any thoughts on this?

Guttorm 11-04-2009 03:19 AM

Maybe the --bwlimit option of rsync? It will not pause, but it slows down the transfers so the target is able to keep up with writing the data.

For example, if you use --bwlimit=300 it will only transfer 300 KB a second.
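
A sketch of that variant (again, the paths are placeholders):

Code:

rsync -rtv --bwlimit=300 /mnt/oldnfs/src/ /mnt/newnfs/dst/ >copied-files.log 2>copy-errors.log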

If that doesn't work, you could use the -v option so you get output from the rsync command. Then pipe the output of rsync into a script that reads 1000 lines and pauses for some time before it reads more.

Here's a slow reader script:
Code:

#!/bin/sh
# Echo each line read from stdin, pausing 60 seconds after every
# 1000 lines so the archive tier gets time to free up space again.
count=0
while read -r line
do
        echo "$line"
        count=$(($count+1))
        if [ $count -eq 1000 ]
        then
                sleep 60
                count=0
        fi
done
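
Assuming that script is saved as slowread.sh and made executable (the name and paths are placeholders), it could be combined with rsync like this:

Code:

rsync -rtv /mnt/oldnfs/src/ /mnt/newnfs/dst/ | ./slowread.sh >copied-files.log

Note that the pause only stops the script from reading; rsync keeps transferring until the pipe buffer fills up, so the pacing is approximate rather than exact.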


