
legacyprog 04-26-2009 06:45 PM

root cron script using cp commands doesn't copy all files
I suspect I'm missing something obvious but I'm stumped. The following script, called SystemBackup

cp -purL /home /archive5/LinuxBackup/Tree
cp -purL /etc /archive5/LinuxBackup/Tree
cp -purL /var /archive5/LinuxBackup/Tree
cp -purL /root /archive5/LinuxBackup/Tree
rm -d -r /archive5/LinuxBackup/Tree/var/cache/beagle/indexes

when run under Ubuntu 8.04 as

sudo ./SystemBackup

seems to copy all the files in the source directories to my backup area on an external hard drive (directory "Tree" having been emptied beforehand)

but when run as a root crontab entry (again having emptied "Tree" first)

50 16 * * * /home/user/SystemBackup

it copied some of the sub-folders under home, etc, var, and root, but not all (maybe only a third). I don't see any consistency vis-a-vis permissions or ownership between what it does and does not copy.

I also modified the cp -purL /var line to cp -vpurL /var and piped the output to my /home/user folder. It just seems to be copying only some of the files. I don't see any error messages.

I've googled but similar situations seem to be based on not running as root, but I checked that by substituting /usr/bin/id for ./SystemBackup in the crontab entry and piping the output to my /home/user folder. It showed that I was running as root.
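One thing that might help narrow it down: cron normally mails or discards a job's stdout/stderr, so any errors from cp can vanish silently. A minimal sketch, capturing everything the script prints to a log file (the log path here is hypothetical; the script path matches the crontab entry above):

```
50 16 * * * /home/user/SystemBackup >/home/user/SystemBackup.log 2>&1
```

After the next scheduled run, the log should show any cp error messages that the command-line run wouldn't have hidden.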

I know I'm going to kick myself, but I just don't have a handle on this (and at best I'm intermediate Linux skills, more likely seasoned newbie). Any ideas? Need more details?

aus9 04-26-2009 07:14 PM


1) I am more concerned that the script is called SystemBackup. Have you tested it assuming your system files are lost, or attempted a restore onto a new hard drive? Methinks you may not have enough to get back to where you want.

2) So can you advise what normal backup software you have tried and why it does not meet your requirements? And while I am being nosy.....have you copied stuff to a tape drive, external drive, or DVD-RWs?

3) I am not perfect btw....I use partimage and the image is backed up to DVD-RW. It's not an incremental backup, so you need to take some risks and back up key files separately from the partition....but I am more confident I can rebuild onto a new hard drive; I have done so in the past.

4) Others use dd if they do not use dedicated backup software.

5) If you like scripts instead of software.....take a peek at this; it could suit?


legacyprog 04-26-2009 07:34 PM

Hi aus9, thanks for the reply.

Well, when I was on Suse I used its backup which created a tarball. Back in Jan 08 when I switched to Kubuntu I posted here at LQ asking for "best practice" since it didn't seem to have an equivalent backup. See the thread:

From advice received in that thread I created the script I posted which *seems* to have worked fine for over a year. Note that my data files are on separate partitions and are handled differently. This "SystemBackup" is only meant to provide a way to get back the state of Linux itself and/or to provide a reference point if I mess up some config file. Granted, I haven't had to recover, nor have I had a fire drill to simulate same.

After posting this question I realized I could have stepped outside the box and changed to use rsync. The cp command looked like it should be fine for my simple needs.

I'm not sure if it was my recent switch to Ubuntu 8.04, or changing the destination to a USB external drive instead of a Samba share, or whether my script has never *really* been working (in the sense that the "u" switch, and my not deleting the destination as a matter of course, could mean it stopped working months ago and I didn't realize it until "testing" my new script against a new, empty destination).
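That worry about the "u" switch is easy to check directly: cp -u updates newer files but never removes files that have disappeared from the source, so stale files quietly accumulate in the destination. A minimal sketch using scratch directories (hypothetical file names; same flags as the backup script, minus L):

```shell
#!/bin/sh
# Show that cp -u leaves stale files behind in the destination.
set -e
src=$(mktemp -d); dst=$(mktemp -d)

echo "current" > "$src/keep.txt"
echo "old"     > "$dst/stale.txt"   # pretend this was deleted from src long ago

cp -pur "$src/." "$dst/"            # update/copy, but never delete

ls "$dst"                           # shows keep.txt AND stale.txt:
                                    # the stale file is never cleaned up
```

So a backup refreshed only with cp -u can drift away from the real system over time, which rsync's --delete option avoids.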

jhwilliams 04-26-2009 07:34 PM

-- woops nothing to see here --

GlennsPref 04-26-2009 07:41 PM

Hi, I would look into rsync for backups rather than cp.

rsync won't care if the file is protected or not.

My original reference is gone, here is a plain text copy, check the man page for rsync for more...

ref. (gone)
Fun With Rsync - Part I

April 17, 2008 | 15 Comments

If you’ve used rsync in the past, you know that it makes quite a few things much easier to manage.
For example, you can use rsync to backup files from one server to another, you can use it to
automate patch deployments, you can mirror entire websites and also have it function like a
semi-NFS client/server which can synchronize data between several servers by running it like a
watchdog so that as soon as a file gets updated on primary server, it instantly syncs up all
other servers.

What is Rsync?
rsync is a file transfer program for Unix systems. rsync uses the “rsync algorithm” which provides a
very fast method for bringing remote files into sync. It does this by sending just the differences in
the files across the link, without requiring that both sets of files are present at one of the ends of
the link beforehand. Some features of rsync include

  1. Can update whole directory trees and filesystems
  2. Optionally preserves symbolic links, hard links, file ownership, permissions, devices and times
  3. Requires no special privileges to install
  4. Internal pipelining reduces latency for multiple files
  5. Can use rsh, ssh or direct sockets as the transport
  6. Supports anonymous rsync which is ideal for mirroring

The Basics

If you don’t already have it installed, get it done this way on Debian systems:

    apt-get install rsync

On other Linux distributions you would use yum (Fedora/CentOS) or yast (SuSE) to install rsync.

The base requirement of the rsync system is that you install the rsync program on the source and
destination hosts. The easiest way to transfer files is to use a remote shell account.
To copy a group of files to your home directory on host, you can run this command:

    rsync file1 file2 … host:

Rsync defaults to rsh as the remote shell, which is not secure. Instead you can have rsync use
ssh via the --rsh=ssh or -e ssh option:

    rsync -e ssh file1 file2 … host:destination_dir

Copying one folder to another:

    rsync -r /home/lxpages/junk /home/backups/junk/

That will copy the folder in a simple fashion, without much care for preserving
permissions/owners/etc. To make exact
duplicates, we can add the -a switch.

    rsync -av /home/lxpages/junk /home/backups/junk

The above retains all ownership/permissions and runs rsync in verbose mode.

    rsync -av --delete /var/www/junk /home/lxpages/junk

The --delete option keeps the destination folder identical to the source by removing any files that don't exist in the source.

    rsync -av --delete -e ssh host:/home/lxpages/junk /home/lxpages/

Rsync everything from the remote server's /home/lxpages/junk folder, over ssh, into the local /home/lxpages folder.

There are many more examples, which I will include in later parts. For now just remember that the man page is
your friend. Simply type 'man rsync' and read through all the available options you can play around with.


    rsync -av /media/cdrom/win_c /mnt/win_c

cheers Glenn

legacyprog 04-26-2009 08:59 PM

Thanks Glenn. I was away from home, just saw your advice. I will work on converting my simple script from cp to rsync and then retest. Thanks for the "Fun with rsync" doc, plus (as you say) I'll look at the man page also.

*** EDIT ***
I have now changed the original script where it said "cp -purL" to have "rsync -a --delete" and rerun both the sudo version from the command line, and the same script as a root cron entry, deleting the output folder first so they both started clean. Both give the correct results (i.e., number of files and total size of the four source folders and their backed-up versions match). Thanks for the advice and for helping me to learn!

GlennsPref 04-27-2009 02:38 AM

No problems,

The next time you run it, it should only update files that have changed,

like a backup program would.

The nice part is that it will write to NTFS too, and keeping permissions is handy.

Someone told me about rsync; I'm just passing it on. That's community.

Cheers Glenn
