I'm using tar's --listed-incremental option to perform incremental backups. I have made this work using test directories on my Linux machine. The real backup jobs, however, archive three mounted Windows CIFS shares and place the .tar files on network-attached storage. A full backup using --listed-incremental to create a snapshot file works fine. To get an incremental backup I simply run the tar command again with the same snapshot file, and tar archives only the modified files.

As I said, this works fine on test directories on my Linux machine, but when I run it against the SMB shares it wants to do a full backup every time. The only explanation I can come up with is that my snapshot file has been modified, but I never changed the contents of the snapshot file, only its name. I also remounted the shares at a different directory, but again, I did not change the contents of any file. My users may have changed some of the files' contents (which is fine), but I know for sure they have not changed all of them, so tar should not think it needs to back up every single file as in a full backup. I am hesitant to run a full backup again to create a fresh snapshot file (I think I can do this without actually creating the archive) because it would take a long time.
So here is my question: does renaming or moving the snapshot file, or remounting the shares at a different directory, cause tar to think the files have changed, making a previously created snapshot file useless?
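For reference, the basic --listed-incremental workflow being described can be sketched as below (paths and filenames are made up for the example). One relevant detail: the snapshot file records device and inode numbers as well as timestamps, and its own filename does not matter to tar. A network share that comes back with a different device number after a remount can therefore make GNU tar treat every file as new; newer GNU tar releases have a --no-check-device option aimed at exactly this situation.

```shell
#!/bin/sh
# Sketch of a level-0 (full) and level-1 (incremental) backup with
# GNU tar's --listed-incremental. All paths here are hypothetical.
set -e
workdir=$(mktemp -d)
cd "$workdir"

mkdir -p data
echo "one" > data/a.txt
echo "two" > data/b.txt

# Level 0: tar records device/inode/mtime metadata in the snapshot file.
tar --create --listed-incremental=snap.0 --file=full.tar data

# Keep the level-0 snapshot pristine; give each incremental its own copy,
# because tar updates the snapshot file it is handed.
cp snap.0 snap.1

# Change the tree, then take a level-1 incremental against the snapshot.
echo "three" > data/c.txt
tar --create --listed-incremental=snap.1 --file=incr1.tar data

# The incremental should contain only the directory entry and the new file.
tar --list --file=incr1.tar
```

Listing incr1.tar should show data/ and data/c.txt but not the unchanged a.txt or b.txt.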
Distribution: ubuntu-desktop 8.10 and ubuntu-server 8.10
Posts: 21
Rep:
I do not know your exact use of tar; however, I have a suggestion for backups: you might try another approach to snapshot-style backups using something like rdup.
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197
Rep:
I presume you are using gnutar. It ought to be able to do it. That's actually the way amanda does backups of smb shares, including incrementals. You might want to check out how they do it. I haven't dug into that section of the program, so I'm not sure of the details, but it is open source.
You could also just adopt amanda and let it manage your gnutar backups of the smb shares.
I've looked at Amanda. I'm a little lazy when it comes to learning new software; I like to write my own (although I have to admit I'm not very good at it). I am looking at rdup right now (thank you merijnv). I like the idea, and it's very low-level. Stay tuned for updates, and if you have any other suggestions, please feel free to share them.
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197
Rep:
I can understand that. Software is supposed to save you time and effort, or do something for you that you couldn't do otherwise.
I used my own scripts for backups for years; and, if I may say so myself, they were fairly sophisticated. Eventually, I decided I needed to find something that wasn't so labor intensive. I settled on Amanda and a tape library.
There is always an initial overhead of effort to get into a new application, and often it is proportional to the complexity of what it does, though it's also related to how well the software is designed. The initial overhead for Amanda wasn't bad; and, for me, the payoff has been huge. It just runs, and I hardly ever have to do anything. Several months ago, I had a tape drive breakdown. Amanda just kept running. It saw the drive gone, dropped back to incrementals, and saved them to my holding disk. Then when the drive was back up a couple of days later, it flushed everything out to tape. I never had to intervene at all. It just worked.
Anyway, that's the solution I chose. Someone else would need to evaluate their alternatives and choose the appropriate solution for their own situation and needs.
Have you used rdup before? I'm having trouble figuring out how to use this program; the man page is a little vague. Can you provide example commands and explain what they do? Thanks.
The GNU tar manual has a warning about using --after-date and --newer-mtime for incremental backup, but it doesn't explain why. Could anyone enlighten me?
"Please Note: --after-date and --newer-mtime should not be used for incremental backups. See Incremental Dumps, for proper way of creating incremental backups."
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197
Rep:
I guess there are two issues here. One is, "follow the link." It explains how to do proper incrementals using gnutar.
But it sounds like you are just wondering why there is a problem with using what would seem like a reasonable method (--after-date and --newer-mtime). For answers to this, you just have to watch the discussion lists of backup software packages that try to write their own backup procedures (as opposed to using native tools like Amanda does). They end up getting into extended discussions of the situations that fail for incremental backups, possible workarounds, and the interactions of the various time stamps.
For a couple of simple examples, try deleting a few files from a directory and then doing an incremental followed by a full recovery in which you recover the full and then recover the incrementals. Use the wrong procedures and the deleted files will still be there. For another example, try moving some files from one directory to another. Then see how incremental backup and recovery works. Use the wrong methods and the recovered system will not properly reflect the move.
Using mtime won't catch changes to permissions; that would be ctime. Dump bases its incrementals on ctime. However, common lore (Linus) says that dump is broken on Linux (for other reasons). ufsdump on Solaris uses additional information; I've tried experiments with it, and it correctly catches deleted files and files moved from one directory to another.
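The permissions case can be checked directly with stat (GNU coreutils syntax assumed): chmod bumps ctime but leaves mtime alone, which is exactly why an mtime-based incremental skips the change.

```shell
#!/bin/sh
# Show that chmod updates ctime but not mtime.
set -e
f=$(mktemp)
mtime_before=$(stat -c %Y "$f")   # mtime, seconds since epoch
ctime_before=$(stat -c %Z "$f")   # ctime, seconds since epoch

sleep 1
chmod 644 "$f"                    # a pure metadata change

mtime_after=$(stat -c %Y "$f")
ctime_after=$(stat -c %Z "$f")
echo "mtime: $mtime_before -> $mtime_after"
echo "ctime: $ctime_before -> $ctime_after"
rm "$f"
```

The mtime values should match before and after, while the ctime values differ.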
If you google "mtime atime ctime", you'll find an interesting link or two.
Distribution: ubuntu-desktop 8.10 and ubuntu-server 8.10
Posts: 21
Rep:
The author of rdup has a website with examples. I must admit that I haven't used rdup myself; I used this author's previous tool, named hdup.
More information about rdup can be found at: http://www.miek.nl/projects/rdup/index.html
Good luck backing up (I should raise the priority on implementing a backup strategy myself).
Here's another hint: make sure the system clocks are synchronized between the machines you are backing up to and from. This is essential if your backups rely on checking atime, ctime, or mtime. I don't have much else to say. Thanks for everyone's suggestions. I think I'm going to stick with tar, run from a bash script, and eventually upgrade it to a Perl script with a web interface for easier administration :-)