
bluforce 12-18-2014 03:22 PM

backup script assistance
 
Guys,
Trying to create a backup script on 2 machines to back up to an array, which is backed up nightly. The 2 workstation machines are running 32-bit RHEL, and the version of NetBackup we use here doesn't support 32-bit clients.

So, we had the idea of creating a tarball of the directories my guys need backed up, then setting up a cron job to scp those to a directory on a 64-bit RHEL server, which is backed up.

What would be the best way to set this up?
I reckon we would need to delete the tarball after it has been scp'ed to the server (for disk space reasons).

So far, my script looks like this:

Code:

#!/bin/bash
TIME=$(date +%b-%d-%y)
FILENAME="backup-$TIME.tar.gz"
SRCDIR="/build /var/spool/cron /etc/cron*"  # left unquoted below so the glob expands
DESDIR=/build/backups
# Note: $DESDIR lives under /build, so list it in exclude_list.txt to keep
# old backups out of new ones, and give --exclude-from an absolute path
# when this runs from cron.
tar -cpvz --exclude-from='exclude_list.txt' -f "$DESDIR/$FILENAME" $SRCDIR

Can you guys offer assistance on how to approach this after the tarball is created? Thanks!
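
For the step after the tarball is created, I'm picturing something like this (hostname and remote path are just placeholders):

Code:

# Copy the tarball to the 64-bit server, then delete the local copy
# only if the transfer succeeded.
scp "$DESDIR/$FILENAME" backupuser@server64:/backups/ws1/ \
    && rm -f "$DESDIR/$FILENAME"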

suicidaleggroll 12-18-2014 03:27 PM

scp with passwordless ssh keys set up?
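
Key setup is only a couple of commands, e.g. (hostname is a placeholder):

Code:

ssh-keygen -t rsa                # accept the defaults, empty passphrase
ssh-copy-id backupuser@server64  # installs the public key on the server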

Or you could skip the tarball entirely and just rsync the directories over.

Or you could mount the backup location on the server via NFS and just dump the tarball straight to its final location without having to create an intermediate copy on the local machine.
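
Roughly like this, untested, with the export path as a placeholder:

Code:

# Mount the server's backup export, then write the tarball straight to it.
mount -t nfs server64:/export/backups /mnt/backups
tar -cpzf /mnt/backups/backup-$(date +%b-%d-%y).tar.gz /build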


The problem with tarballs is that you have to redo the ENTIRE backup process EVERY time. With something like rsync you can do incremental backups, so the first one takes forever but all future ones run very quickly. You can even use the --link-dest flag in rsync to make hard links to unchanged files so you can have full, separate backups that only use the space of the files that changed. Plus with rsync your backups are live and navigable, which makes them much more convenient to use.
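
For example, something along these lines (untested, host and paths are placeholders):

Code:

# Each day gets its own directory on the server; unchanged files are
# hard-linked against yesterday's copy, so they take no extra space.
TODAY=$(date +%Y-%m-%d)
rsync -a --link-dest=../latest /build backupuser@server64:/backups/ws1/$TODAY/
ssh backupuser@server64 "ln -sfn $TODAY /backups/ws1/latest"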

bluforce 12-18-2014 03:32 PM

Haven't set up keys yet.
Do you think rsync would work better in my situation, since disk space will likely be an issue down the road? The tarball would be ~275 GB on a 1 TB drive, and 275 GB is already in use.

suicidaleggroll 12-18-2014 03:35 PM

If you're rsyncing the files straight to the server, then no additional space is used on the local machine. You only need to worry about free space on the local machine if you make a giant local tarball (a 275 GB tarball is insanely unmanageable, BTW) and then copy it over, which is a terribly inefficient way of doing things.
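
i.e. just something like (host/path placeholders again):

Code:

rsync -av /build /var/spool/cron backupuser@server64:/backups/ws1/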

jpollard 12-19-2014 08:43 PM

Quote:

Originally Posted by suicidaleggroll (Post 5287110)
scp with passwordless ssh keys set up?

Or you could skip the tarball entirely and just rsync the directories over.

Or you could mount the backup location on the server via NFS and just dump the tarball straight to its final location without having to create an intermediate copy on the local machine.


The problem with tarballs is that you have to redo the ENTIRE backup process EVERY time.

No - tar supports incremental backups as well. See the -g option in the tar man page.
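
e.g. with GNU tar's -g (--listed-incremental), paths being placeholders:

Code:

# First run with a missing snapshot file does a full (level-0) backup:
tar -cpzf /backups/full.tar.gz -g /backups/snapshot.snar /build
# Later runs with the same snapshot file only archive what changed:
tar -cpzf /backups/incr-$(date +%F).tar.gz -g /backups/snapshot.snar /build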

suicidaleggroll 12-19-2014 08:46 PM

Quote:

Originally Posted by jpollard (Post 5287849)
No - tar supports incremental backups as well. See the -g option in the tar man page.

Good to know, thanks.

jpollard 12-19-2014 08:51 PM

I'll add a little more about tar...

It is POSSIBLE to use tar to create a multi-volume archive. You use the -L option to specify how large each volume should be (in units of 1024 bytes), and the -F option to run a script at the end of each volume. That script can rename the just-finished output so tar resumes writing a new file under the original name.
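
A rough, untested sketch; GNU tar exports TAR_ARCHIVE and TAR_VOLUME to the -F script:

Code:

#!/bin/sh
# next-vol.sh -- run by tar at the end of each volume; rename the
# finished volume so tar starts a fresh file under the original name.
mv "$TAR_ARCHIVE" "$TAR_ARCHIVE.$((TAR_VOLUME - 1))"

and the invocation:

Code:

# -L is in units of 1024 bytes (so ~1 GB volumes here); -F implies -M.
tar -c -M -L 1048576 -F ./next-vol.sh -f backup.tar /build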

I'll admit to not having tried this... But if you fear large tarballs, this is one way to break them up. Another is to pipe through the split utility (like "tar cf - . | split ..."), which will also work.
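
The split route, for example (the chunk size is arbitrary):

Code:

# Write 1 GB pieces; reassemble with cat before extracting.
tar -cpzf - /build | split -b 1G - backup.tar.gz.part-
cat backup.tar.gz.part-* | tar -xpzf -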

