I assume you need the variable WEEK1 in order to access the previous backup. That's fine as long as the backups run every week without fail, but what if something goes wrong and the backup misses a week? That could happen if, for example, the system was down, or the rsync destination was offline or full. You must design your script to work even if the last backup was not 7 days ago.
A more robust approach would be to identify the previous backup regardless of its date, and use it if its date is within an acceptable range. If your backup sets are in directories named according to $(date -I), which yields YYYY-MM-DD, it is simple to identify the most recent set:
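A sketch of such a command (assuming the date-named backup directories sit in the current directory; because YYYY-MM-DD names sort lexicographically in chronological order, the last one listed is the newest):

```shell
#!/bin/bash
# List the date-named backup directories; for YYYY-MM-DD names,
# lexicographic sort order equals chronological order, so "tail -1"
# picks the most recent set.
last_backup=$(ls -d [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] 2>/dev/null | tail -1)
echo "most recent backup set: $last_backup"
```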
You can determine if its date is acceptable by bash string comparison, for example,
Code:
if [[ $last_backup > $(date -I -d "3 week ago") ]]
then
    echo "last backup set is less than 3 weeks old"
else
    echo "last backup set is 3 weeks or more old, or non-existent"
fi
Beryllos, you are correct: it will not run every week. It may run for weeks and then be down for a while. Thanks for the input. I'll see about adjusting my script, and I'll post the full script when I'm at home.
Since the script creates a new destination directory every time, the --delete option does nothing and you could omit it. (If you were syncing an existing destination directory, --delete would delete files on the destination which had been deleted on the source.)
The -v option may result in a lengthy output, which I think cron will e-mail to you. If you prefer, you could redirect the output to a log file.
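For example, a crontab entry along these lines would append everything to a log file instead of letting cron mail it (the script and log paths here are just placeholders):

```
# Run the backup at 3:00 every Sunday, logging stdout and stderr:
0 3 * * 0 /home/user/bin/backup.sh >> /home/user/backup.log 2>&1
```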
If you want to identify the most recent backup even if it is not from 1 week ago, you could cd to the destination parent directory and then use a command like the one I posted previously:
Code:
cd "/mnt/usb/Funtoo-Bkup/sappy-user"
LNK="$(pwd)"/"$(ls -d [0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] | tail -1)"
In this example the quotation marks could be omitted, but they are necessary when there are spaces in the directory names. Edit: ... but then $OPT would also have to be constructed differently to preserve the quoting around $LNK.
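One way to build $OPT so that a $LNK containing spaces survives intact is a bash array rather than a plain string (a sketch; the path below is made up to show the problem case):

```shell
#!/bin/bash
# Building rsync options in a bash array keeps an argument containing
# spaces (such as a --link-dest path) as a single word.
# $LNK here is a hypothetical path with a space in it:
LNK="/mnt/usb/My Backups/2015-09-23"
OPT=(-a --delete --link-dest="$LNK")

# "${OPT[@]}" expands each element as exactly one word, so rsync would
# see the whole path as a single --link-dest argument:
printf '%s\n' "${OPT[@]}"
```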
Last edited by Beryllos; 09-30-2015 at 12:15 PM.
Reason: Blue text about quoting quotation marks when they are needed
Could make it more universal maybe? This is what I'm using for incrementals. I put it together after googling "time machine on linux". This version is for a remote destination, but it is easily modified for local use.
Code:
#!/bin/bash
# mybackups
# tadaen sylvermane | jason gibson
# notes - $1 is the name of the folder being backed up | $2 is the /path/to/folder being backed up
##### variables #####
DATABACKUP=/spinner/users/"$USER"/"$HOSTNAME" # path to back up to on server
DATADAYS=10 # days to keep backups
NOW=$(date +%Y.%m.%d.%H.%M) # time when script is run (note the % signs in the format string)
RSYNCOPT=auz # rsync options
SERVER=10.0.1.250 # server ip
##### begin script #####
if ping -c 1 "$SERVER" > /dev/null 2>&1 ; then
    if [[ -e "$2" ]] ; then
        if ssh "$USER"@"$SERVER" "[[ ! -e ${DATABACKUP}/${1} ]]" ; then
            ssh "$USER"@"$SERVER" "mkdir -p ${DATABACKUP}/${1}"
        fi
        if ssh "$USER"@"$SERVER" "[[ -e ${DATABACKUP}/${1}/current ]]" ; then
            # incremental backup, this is the one normally run
            rsync -"$RSYNCOPT" --link-dest="$DATABACKUP"/"$1"/current "$2" "$USER"@"$SERVER":"$DATABACKUP"/"$1"/"$1"."$NOW"
            ssh "$USER"@"$SERVER" "rm -f ${DATABACKUP}/${1}/current"
        else
            # initial backup, typically runs once
            rsync -"$RSYNCOPT" "$2" "$USER"@"$SERVER":"$DATABACKUP"/"$1"/"$1"."$NOW"
        fi
        ssh "$USER"@"$SERVER" "ln -s ${DATABACKUP}/${1}/${1}.${NOW} ${DATABACKUP}/${1}/current"
        # prune only the dated backup directories themselves, by modification
        # time (-mtime); -atime depends on access times, which many
        # filesystems no longer update
        ssh "$USER"@"$SERVER" "find ${DATABACKUP}/${1}/ -mindepth 1 -maxdepth 1 -name '${1}.*' -mtime +${DATADAYS} -exec rm -rf {} \;"
    fi
fi
##### end script #####
I think you want some tests to make sure your backup destination is mounted; otherwise the script will just back up into the unmounted directory and eat up space on the root filesystem without you noticing, depending on how often your backup drive is unplugged or removed. I also see no reason to run this as root. Unless you start backing up things outside of /home/$USER, you should probably just run it from your personal crontab.
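A minimal sketch of such a guard, using the mountpoint command from util-linux (the destination path would be whatever your setup uses, e.g. the /mnt/usb path mentioned earlier):

```shell
#!/bin/bash
# Guard helper: succeed only if the given directory is an active mount point.
# mountpoint(1) is part of util-linux.
require_mounted() {
    mountpoint -q "$1"
}

# In a real backup script you would do something like:
#   require_mounted /mnt/usb/Funtoo-Bkup || { echo "not mounted" >&2; exit 1; }
# Demonstration: / is always a mount point.
if require_mounted /; then
    echo "mounted, safe to proceed"
else
    echo "not mounted, would abort here" >&2
fi
```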
Last edited by jmgibson1981; 09-30-2015 at 07:14 PM.
Unix dates can be unreliable for any of several reasons: locale problems (or, in fact, bugs in hacks to locale rules), the clock chip being set wrong at boot time, incorrect time updates, power issues, a dead clock battery, and more. Another issue is that, without care, the timestamps on files can be incorrect. And finally: timezone rule changes (local rules) versus what your current locality is now using, which is set by edict, not by any scientific system.
Never rely on "time of day" to decide whether a backup "has been done" or whether "files can be removed". And if you do compare times, be very cautious, because an old file may appear newer if you were ever running with the wrong system time.
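If you must compare ages, bash's -nt / -ot file tests compare modification times directly, which avoids parsing dates out of file names (though they are still only as trustworthy as the timestamps themselves). A sketch with throwaway marker files whose mtimes are set explicitly:

```shell
#!/bin/bash
# Compare file modification times with -nt ("newer than") instead of
# parsing date strings. The two temp files and their dates are just for
# demonstration; touch -d is the GNU coreutils way to set an mtime.
old=$(mktemp)
new=$(mktemp)
touch -d "2015-09-01" "$old"
touch -d "2015-09-30" "$new"

if [[ $new -nt $old ]]; then
    echo "new is newer than old"
fi
```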
TIP: be careful with time.
(Microsoft is infamous for clobbering timestamps, but mistakes happen and Linux will do it too. Lost timestamps are a sure sign that either someone unknowledgeable had access, or that the admin did not care about the times on those files. If you see mirrors with an incorrect time on source_code.tar.gz, that is a BIG problem; the people running the mirror should be dismissed, because it makes life 100 times harder for those needing access to the library.)