Need a fresh pair of eyes to review my backup+restore scripts
I've created a Backup script and a Restore script to back up my websites on a shared host (dreamhost) that I have SSH access to. The scripts should back up/restore my main website [domain.com], my subdomain [store.domain.com] (which is a Magento store), and lastly my Magento DB.
#!/bin/sh
###
# 5/30/09
# restore.domain.com.sh
# RESTORE /domain.com & /store.domain.com & mysql.domain.com
# You must enter the DATE you want to restore from
###
echo -n "Please Enter the RESTORE DATE you would like to Restore (YYYYMMDD): "
read -e RESTOREDATE
DOTCOMSOURCE="/home/user/BACKUPS/$RESTOREDATE/domain.com-BACKUP-$RESTOREDATE.tgz"
STORESOURCE="/home/user/BACKUPS/$RESTOREDATE/store.domain.com-BACKUP-$RESTOREDATE.tgz"
MYSQLDBDUMP="/home/user/BACKUPS/$RESTOREDATE/mysql.domain.com-BACKUP-$RESTOREDATE.sql"
LOG="/home/user/BACKUPS/$RESTOREDATE/domain.com-RESTORE-$RESTOREDATE.log"
echo "Restore .COM Begin: $(date)" >> $LOG
tar xvzf "$DOTCOMSOURCE" >> $LOG
echo "Restore .COM End: $(date)" >> $LOG
echo "#######################" >> $LOG
echo "Restore STORE Begin: $(date)" >> $LOG
tar xvzf "$STORESOURCE" >> $LOG
echo "Restore STORE End: $(date)" >> $LOG
echo "#######################" >> $LOG
echo "Restore MySQL DB Begin: $(date)" >> $LOG
tar xvzf $MYSQLDBDUMP.tgz >> $LOG
mysql --user=****** --password=****** --host=mysql.domain.com magento_**** < $MYSQLDBDUMP
rm $MYSQLDBDUMP
echo "Restore MySQL DB End: $(date)" >> $LOG
I am new to scripting so I just need a fresh pair of eyes to review this to see if I've missed anything. If it looks good to you please don't hold back from posting "looks fine to me", and of course any/all critiques are greatly appreciated. I just need some outside feedback to make sure I am doing this right before I go and royally mess things up. Oh and did I mention I am new to scripting :-P
Looks good to me. The only things I'd do are consistent and proper quoting of variables ("${STORESOURCE}" vs $STORESOURCE), making sure you're rooted in the right place (as in 'BACKUPDIR="/home/user/BACKUPS/${TODAYSDATE}"; mkdir "${BACKUPDIR}"; cd "${BACKUPDIR}" || exit 1'), appending both stdout and stderr to the log (as in '>> "${LOG}" 2>&1' -- the redirection to the file must come first), and piping mysqldump output through gzip to get a compressed backup. Optionally you could use MD5 or SHA1 sums to be able to verify file integrity, and use "getopts" to combine making, verifying and restoring backups in one script.
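Not the poster's final script -- just a runnable sketch of those points applied to one restore step. mktemp stands in for /home/user/BACKUPS (and a tiny tarball is created on the fly) so the sketch can run anywhere:

```shell
#!/bin/sh
# Sketch: quoting, cd || exit 1, and logging stderr as well as stdout.
RESTOREDATE="20090530"
BASE="$(mktemp -d)"                           # stand-in for /home/user/BACKUPS
BACKUPDIR="${BASE}/${RESTOREDATE}"
mkdir -p "${BACKUPDIR}" || exit 1
cd "${BACKUPDIR}" || exit 1                   # never untar into the wrong place
LOG="${BACKUPDIR}/domain.com-RESTORE-${RESTOREDATE}.log"

# Make a tiny tarball so the restore step below has something to extract.
mkdir -p "${BASE}/site" && echo hello > "${BASE}/site/index.html"
tar czf "${BACKUPDIR}/domain.com-BACKUP-${RESTOREDATE}.tgz" -C "${BASE}" site

DOTCOMSOURCE="${BACKUPDIR}/domain.com-BACKUP-${RESTOREDATE}.tgz"
echo "Restore .COM Begin: $(date)" >> "${LOG}"
# Quoting "${VAR}" keeps paths with spaces intact; '>> "${LOG}" 2>&1'
# appends stderr to the log as well as stdout.
tar xvzf "${DOTCOMSOURCE}" >> "${LOG}" 2>&1
echo "Restore .COM End: $(date)" >> "${LOG}"
cat "${BACKUPDIR}/site/index.html"
```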
Looks good to me. Only things I'd do is consistent and proper quoting of variables ("${STORESOURCE}" vs $STORESOURCE)
For my own education, what's the difference?
And do I have to put quotes around it ( is it "${STORESOURCE}" or ${STORESOURCE} ) ?
Quote:
Originally Posted by unSpawn
, making sure you're rooted in the right place (as in 'BACKUPDIR="/home/user/BACKUPS/${TODAYSDATE}"; mkdir "${BACKUPDIR}"; cd "${BACKUPDIR}" || exit 1')
Doesn't the script already handle this by having the path's defined in the variables (DESTINATION for backup script, and SOURCE for restore script)?
Quote:
Originally Posted by unSpawn
, appending both stdout and stderr to the log (as in '>> "${LOG}" 2>&1' -- the redirection to the file must come first)
How would that look on my script?
I am confused about how to send stdout and stderr to the log -- and what are "stdout" and "stderr"?
Quote:
Originally Posted by unSpawn
and pipe mysqldump output through gzip to get a compressed backup.
Again, not sure how to do this?
Sorry, I am new at this :-P
Quote:
Originally Posted by unSpawn
Optionally you could use MD5 or SHA1 sums to be able to verify file integrity
Interesting, I'll have to research this one, any chance I could get a nudge in the right direction?
Would this be able to verify the integrity of the MySQL DB also?
Quote:
Originally Posted by unSpawn
and use "getopts" to combine making, verifying and restoring backups from one script.
Time for some googling :-P
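For reference, a rough sketch of what the getopts idea could look like -- the option letters and the parse_mode helper are made up for illustration, not from any of the posted scripts:

```shell
#!/bin/sh
# One script, three modes: -b backup, -v verify, -r YYYYMMDD restore.
parse_mode() {
    MODE=""; RESTOREDATE=""
    OPTIND=1                      # reset so the function can be called repeatedly
    while getopts "bvr:" opt "$@"; do
        case "${opt}" in
            b) MODE="backup" ;;
            v) MODE="verify" ;;
            r) MODE="restore"; RESTOREDATE="${OPTARG}" ;;
            *) MODE="usage" ;;
        esac
    done
}

parse_mode -r 20090530
echo "mode=${MODE} date=${RESTOREDATE}"    # prints: mode=restore date=20090530
```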
...
Quote:
Originally Posted by harry edwards
I would add some sanity checks, like:
Do files and directories exists before I create them. i.e.
Code:
if [ ! -d "$TODAYSDATE" ]; then
    mkdir "$TODAYSDATE"
else
    # do something here, like emailing yourself
    echo "Back-up executed twice: directory $TODAYSDATE already exists?" >&2
fi
Do I have write permissions to the target output directories and folders i.e.
Code:
if [ ! -w "$LOG" ]; then
    # do something here, like emailing yourself
    echo "Permission denied to $LOG" >&2
fi
Did the result of tar return an error i.e.
Code:
tar cvpzfP "$STOREDESTINATION" "$STORESOURCE" >> "$LOG"
if [ $? -ne 0 ]; then
    # do something here, like emailing yourself
    echo "tar failed!" >&2
fi
Thanks for the ideas, I will look into adding these to the scripts...
The difference is that if $var is embedded in another string, e.g. string$varstring2, the interpreter can't figure out where the variable name ends, so use string${var}string2 and the interpreter knows the name starts at '${' and stops at '}' (ignoring quote marks).
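A concrete example of that point (DATE is a made-up variable, not from the poster's script):

```shell
#!/bin/sh
DATE="20090530"
echo "backup-$DATEfull.tgz"     # shell looks for a variable 'DATEfull' -> empty; prints backup-.tgz
echo "backup-${DATE}full.tgz"   # braces mark where the name ends; prints backup-20090530full.tgz
echo "backup-$DATE-full.tgz"    # '-' can't be part of a name, so this happens to work
```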
Bookmark my hyperlink above and read it.
stdin = standard input channel (aka '0' ie zero)
stdout = standard output channel (aka '1')
stderr = standard error channel (aka '2')
By default, these three are automatically assigned to each process at creation. To capture stdout and stderr in the same file:
/path/to/program 1> program.log 2>&1
The first mention of '1' can be assumed (stdout is the default for '>'), so we get
/path/to/program > program.log 2>&1
ie send channel 1 to the log file, then send channel 2 to wherever channel 1 now points => the log file as well. Note the order matters: '2>&1' must come after the redirection to the file, otherwise stderr still goes to the terminal.
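A minimal runnable demonstration of the above (the file name is just an example):

```shell
#!/bin/sh
# A command group that writes to both channels, both captured in one file.
LOGFILE="$(mktemp)"
{ echo "normal output"; echo "an error message" >&2; } > "${LOGFILE}" 2>&1
cat "${LOGFILE}"    # both lines are in the file
```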
...in addition, wrt "${var}" vs $var it's not only embedding but also safeguarding against default IFS probs, names with spaces.
Compressing a backup here is as simple as running 'mysqldump $youroptions | gzip > "${MYSQLDBDUMP}.gz"'.
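Since a sketch has no database handy, here is that same pipe demonstrated with printf standing in for mysqldump (the pipe itself is identical; file names follow the poster's naming scheme but are examples), plus the matching restore direction:

```shell
#!/bin/sh
# printf stands in for 'mysqldump $youroptions'.
DUMP="$(mktemp -d)/mysql.domain.com-BACKUP.sql"
printf 'CREATE TABLE t (id INT);\n' | gzip > "${DUMP}.gz"   # compressed backup
gunzip -c "${DUMP}.gz" > "${DUMP}"      # restore step; then: mysql ... < "$DUMP"
cat "${DUMP}"
```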
Hashing files can be done by looping over a list of files with md5sum or sha1sum, as in 'find /path/to/files -type f -print0 | xargs -0 md5sum > /path/of/files.md5', or by recursively hashing them with md5deep or sha1deep: 'sha1deep -krs /path/to/files > /path/of/files.sha1'.

Hashing a database dump doesn't make much sense unless you want to verify the integrity of the file when moving it between systems. I don't think MySQL supports internal hashing like Oracle does; maybe Maatkit's 'mk-table-checksum' can help. Checking integrity by running 'mysqlcheck -s -u root -p --all-databases' from cron could be a way, except this locks the db while it is checking. Unfortunately I'm not deep enough into database forensics to suggest better/more efficient ways to hash or validate stuff.
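The verification half (md5sum -c) isn't shown above, so here is both directions in a runnable sketch; a temp dir and a dummy file stand in for the real backup directory:

```shell
#!/bin/sh
# Create checksums once, verify them later (e.g. before a restore).
DIR="$(mktemp -d)"
echo "pretend tarball" > "${DIR}/domain.com-BACKUP.tgz"
( cd "${DIR}" && md5sum domain.com-BACKUP.tgz > files.md5 )   # create
( cd "${DIR}" && md5sum -c files.md5 )                        # verify: prints "...: OK"
```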