Need Linux script to copy large, deep directory structure to another filesystem
Hi,
I'm a newbie at scripting. I have the following scenario: a large NFS filesystem with a deep directory structure of ~100,000 directories. I need to copy the contents to other storage (another filesystem), but I will also need logging in case something doesn't get copied over to the new NFS filesystem. I want to copy in a controlled (batch) way, with logging. Any info on how to do this is appreciated (with the cp -R command). Rgds, Tom |
Hi
You could use rsync. If you give it the -v option you get a list of all the files transferred. Also, if you add the -t option (preserve timestamps), and the rsync fails for some reason, it will be fast the next time you run it, because files that already match are skipped. If you then run it again without the -v option and there is no output, you will know that all files were copied OK. |
Quote:
(the above is in sh/bash syntax). The above is assuming your 100000 directories reside under one common src_dir directory, and after copying they will be under dst_dir. |
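The quoted command itself is not shown above, but a minimal sketch along those lines might look like the following. The paths (`SRC`, `DST`) and log file name are placeholders, not from the original post:

```shell
#!/bin/sh
# Sketch (assumed paths): copy everything under src_dir to dst_dir,
# logging each file transferred so missed files can be spotted later.
SRC=${SRC:-/mnt/old_nfs/src_dir}   # hypothetical source path
DST=${DST:-/mnt/new_nfs/dst_dir}   # hypothetical destination path
LOG=${LOG:-copy.log}

# -a  archive mode: recurse and preserve permissions/times/ownership
# -v  list every file transferred, so the log shows what was copied
# Errors go to stderr; capture both streams in the same log file.
rsync -av "$SRC/" "$DST/" >"$LOG" 2>&1
```

The trailing slash on `$SRC/` tells rsync to copy the *contents* of src_dir into dst_dir rather than creating an extra src_dir level inside it.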
Thks guys,
That will surely help, but I need to let the script pause after, let's say, 1000 dirs. The reason for this is that the data is archived, and there is an archiving mechanism in place which retrieves the data from another tier of storage. So we have an NFS-exported share with archived data stubs; when we do the copy to the other NFS share, it will retrieve the data back, but the machine where the NFS data resides isn't really fast and can only hold a certain amount of data. Every minute the data gets stubbed again to free up space on the local NFS machine. Any thoughts on this? |
Maybe the --bwlimit option of rsync? It will not pause, but it slows down the transfers, so the target is able to keep up with writing the data.
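A sketch of that throttled variant, again with placeholder paths that are assumptions rather than anything from the thread:

```shell
#!/bin/sh
# Sketch: same copy as before, but throttled so the slow archive tier
# has time to re-stub data between transfers. Paths are hypothetical.
SRC=${SRC:-/mnt/old_nfs/src_dir}
DST=${DST:-/mnt/new_nfs/dst_dir}

# --bwlimit takes a rate in KB per second, so 300 means roughly 300 KB/s.
rsync -av --bwlimit=300 "$SRC/" "$DST/" >>copy.log 2>&1
```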
For example, if you use --bwlimit 300 it will only transfer 300 KB a second. If that doesn't work, you could use the -v option so you get output from the rsync command, then pass that output to a script that reads 1000 lines and pauses for some time before it reads more. Here's a slow reader script: Code:
#!/bin/sh |
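The body of that script is not shown above; a minimal sketch of such a slow reader in plain sh might look like this. The batch size and pause length are assumed values to tune for your setup:

```shell
#!/bin/sh
# Slow reader (sketch): copy stdin to stdout, but pause after every
# BATCH lines so the archive tier has time to re-stub its data.
BATCH=${BATCH:-1000}   # lines to pass through before pausing (assumed)
PAUSE=${PAUSE:-60}     # seconds to sleep between batches (assumed)

count=0
while IFS= read -r line; do
    printf '%s\n' "$line"
    count=$((count + 1))
    if [ "$count" -ge "$BATCH" ]; then
        sleep "$PAUSE"
        count=0
    fi
done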