[SOLVED] shell script to copy newly changed files with rsync
I've got quite a decent rsync script set up; however, I'd like to invoke it whenever there's a change to a file. My initial idea was to use find, but this has two major flaws: the first being that my particular unix variant can't understand -print0, which means this doesn't work; the second is that I'm not 100% sure how to put variables into quotation marks so ls can understand the target:
Code:
for i in `find /shares/ -mtime -1 -print`; do ls -ltr $i;done
If anyone has a better idea on how to do this, please tell me. This was just an idea, and one which is itself full of flaws - for example, it means files will only be in sync once an hour (I might as well just run an hourly cron job).
This is between two NAS boxes running some 'weird and wonderful' trimmed-down unix variant. Thanks!
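For reference, if the hourly cron route wins out, the crontab entry would look something like this (the wrapper script path here is hypothetical - substitute wherever your rsync script actually lives):

```
# run the rsync wrapper at the top of every hour
0 * * * * /path/to/rsync_shares.sh >> /var/log/rsync_shares.log 2>&1
```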
OK. Any other ideas? This isn't going to compile as, amazingly, there's no C compiler! If not, I think I'll just call it a day and use cron to keep my files up to date. Nice idea though.
The ls doesn't work: for any files/folders with spaces in the name, ls tries to treat each word as an individual file. E.g., for:
a test file.mp3
the code would print:
ls: a: No such file or directory
ls: test: No such file or directory
ls: file.mp3: No such file or directory
Which wouldn't be hard to get around if -print0 worked (it doesn't). Can I stick it in speech marks somehow? I'm not entirely sure how to encase variables in "". That way the ls would do ls -ltr "a test file.mp3".
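A sketch of the quoting fix, assuming file names never contain a newline (the only case where -print0 is truly needed). Note that quoting $i inside the original for-loop can't help: the shell has already split the backtick output on spaces before the loop body runs. Reading one line at a time, then double-quoting the variable, keeps each path in one piece:

```shell
#!/bin/sh
# Demo directory stands in for /shares; create a name with spaces.
demo=$(mktemp -d)
touch "$demo/a test file.mp3"

# Read find's output one line at a time; "$file" is then passed
# to ls as a single argument, spaces and all.
find "$demo" -type f -mtime -1 -print | while IFS= read -r file; do
    ls -ltr "$file"
done
```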
Last edited by genderbender; 04-20-2010 at 04:42 PM.
Same response with that code, interestingly - I tried this:
Code:
ls -ltr "`find /shares/ -mtime -1`"
and it gave a response not unlike this:
ls: /shares/a specific file
/shares/a different file
/shares/some other file: No such file or directory
I think it's treating the output of find as one long file name rather than a list of separate files, which is why I initially tried looping over it. I'm thankful for your help, but since this is difficult for you to test as I have an obscure version of find, I think it would be difficult to advise (feel free to stop helping me).
Last edited by genderbender; 04-20-2010 at 09:40 PM.
That's right: "it's treating the contents of find as one long string rather than a consecutive list of files". You can get around it by using the shell's read command to read a line at a time. The only fully robust way to do this is using find's -print0 facility, but that's only necessary when you have pathological (!) file names containing characters such as a newline.
This technique relies on your shell supporting "process substitution". Beware there is a space between the "<" characters after the done.
Code:
while read -r file ; do
    <whatever you want using "$file">
done < <(find ...)
If this does not work there are variations not using process substitution.
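One such variation, sketched here for plain POSIX sh (no process substitution needed): pipe find straight into the read loop. The trade-off is that in most shells the loop body then runs in a subshell, so variables assigned inside it are lost after "done" - harmless if each iteration just runs rsync. The demo directory is a stand-in for the real /shares:

```shell
#!/bin/sh
# Variation without process substitution: pipe find into the loop.
# Caveat: the loop body runs in a subshell in most shells, so any
# variables set inside it don't survive past "done".
demo=$(mktemp -d)
touch "$demo/new song.mp3"

find "$demo" -type f -mtime -1 -print | while IFS= read -r file; do
    printf 'would sync: %s\n' "$file"
done
```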
I think part of our issue is using the ls, may I ask what is it you really want to do with the files you have identified?
rsync newly changed files based on their time signature, ideally by the hour (the original idea was rsyncing each file the moment it changes, but this has since proved impossible). The files and folders are all samba shares and it's entirely up to the userbase what they contain.
Last edited by genderbender; 04-21-2010 at 04:32 AM.
Yeah, but my question is: as your find already denotes that they are less than a day old, what are you going to do with the files?
But then I was going to suggest catkin's option above as it will negate the word splitting.
Well, I'll cross that bridge when I come to it; a day is rather a significant amount of time for new files. On other forums and posts, users have created a new file with the current timestamp (minus an hour) and then done a find with the new file as a comparison. I'm not quite at that point yet, but I think once I've got this first step working the rest should be easy. I'm still in two minds as to whether this is a decent option; it may be easier to run rsync multiple times an hour.
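That marker-file technique can be sketched like this, assuming the NAS's find supports the POSIX -newer test (worth checking on such a trimmed-down variant). A marker file records when the last run happened; each run picks up anything modified since, then refreshes the marker. The directory here is a stand-in for the real /shares:

```shell
#!/bin/sh
# Marker-file sketch: find everything modified since the last run.
share=$(mktemp -d)
marker="$share/.last_sync"

touch -t 197001010000 "$marker"    # first run: epoch, so everything matches
touch "$share/new report.doc"

# Files changed since the marker (excluding the marker itself):
find "$share" -type f ! -name .last_sync -newer "$marker" -print

# ...rsync those files, then record that this run happened:
touch "$marker"
```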
Try catkin's option as this will give you access to the files you are looking for, but it will be one at a time,
hence the ls option is not of much use as there will only be the one file.
I'm no rsync-spert; what are the pros and cons of running rsync once for many files and once for each file?
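For what it's worth (hedged, since I can only speak for a stock rsync, not the NAS build): running rsync once per file pays the process startup and connection/handshake cost for every single file, whereas one batched run shares a single session across all of them. rsync's --files-from option takes a list of paths, read relative to the source argument, so the find output can drive one rsync. A sketch, with a hypothetical destination:

```shell
#!/bin/sh
# Batch sketch: build the changed-file list once, then feed it to a
# single rsync via --files-from. The source dir stands in for /shares.
src=$(mktemp -d)
touch "$src/a test file.mp3" "$src/other.mp3"

list=$(mktemp)
( cd "$src" && find . -type f -mtime -1 -print ) > "$list"
cat "$list"                        # these paths go to one rsync run

# One rsync for the whole batch (run on the real boxes):
#   rsync -av --files-from="$list" "$src"/ othernas:/backup/shares/
```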