Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
As you can see, the files are much smaller than they were when I ran the command manually. I have tried listing the contents of these archives with tar -tvf:
For the system archive, tar lists several files, but then suddenly exits with:
Code:
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
And for the /home archive, there is no output at all.
So, does anybody know why this is happening? I thought that maybe there was a time limit on cron jobs which my script was exceeding, and that it was getting killed at that point, but I failed to find any info confirming this, and it would not explain why the second tar archive is created at all.
I am at a total loss. Would appreciate any help or suggestions.
Maybe tar gets problems when reading "/dev"? (Just a guess...)
I guess this is possible, though I do not understand why it only has this problem when it is being run as a cron job (if I run it manually, it works fine).
It also does not explain why it is failing to back up my /home directory (which is created by a different tar command than the one which archived /dev).
You just made me realize that it may be of interest to know what the last item listed in the tar archive is. When I do
Code:
tar -tvf 2007.02.22_sys.tgz
[...]
-rw-r--r-- root/root 0 2007-02-10 14:13:02 /var/lib/apt/lists/it.archive.ubuntu.com_ubuntu_dists_dapper-backports_restricted_binary-i386_Packages
-rw-r--r-- root/root 195085 2007-02-10 14:13:02 /var/lib/apt/lists/it.archive.ubuntu.com_ubuntu_dists_dapper-backports_universe_binary-i386_Packages
-rw-r--r-- root/root 25437 2007-02-10 14:13:02 /var/lib/apt/lists/it.archive.ubuntu.com_ubuntu_dists_dapper-backports_multiverse_binary-i386_Packages
-rw-r--r-- root/root 13018 2007-02-10 14:17:31 /var/lib/apt/lists/it.archive.ubuntu.com_ubuntu_dists_dapper-backports_main_source_Sources
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
You're right. Running it from the command line should not work then, either.
Another idea is to check the error messages. Normally they would be emailed to root. You can also add a line like "MAILTO=username" at the top of the cron file you get when you type "sudo crontab -e".
There is nothing wrong in your script, apparently. Nor does crontab impose a time limit on scheduled jobs. First of all, you can check for any standard error and/or output from your cron job. To do this, log in as the user to whom the crontab belongs (root?) and issue the mail command. On most Linux systems the output and error from cron jobs are sent to the system mail, unless they are redirected to a file or device.
If you don't have the mail command, there is no way to retrieve standard error and output from there (very strange indeed; the mail command should always be available). As an alternative you can redirect both standard error and output from your cron job by appending the following to the command in the crontab:
Code:
> $HOME/cron.log 2>&1
where $HOME/cron.log is a totally arbitrary filename.
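Putting this together, a crontab entry with redirected output might look something like the following. The script path and the Thursday 04:01 schedule are only placeholders for illustration; substitute your own script and log file.

```
# m h dom mon dow  command
# Run the backup script every Thursday at 04:01 and capture
# both stdout and stderr in a log file for later inspection.
1 4 * * 4 /usr/local/bin/backup.sh > /root/cron.log 2>&1
```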
Well, the default is "MAILTO=root" for cron running as root, so there's no need to specify this. Since cron doesn't set the environment like when you're using a shell, above your line in the cron file, you can add stuff like:
SHELL=
PATH=
MAILTO=
HOME=
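For example, the top of a root crontab could look like this. The values shown are common defaults, not something taken from this thread; adjust them to your system.

```
# Environment for all jobs in this crontab; cron does not
# inherit these from a login shell, so set them explicitly.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO=root
HOME=/root
# job lines follow below these variable assignments
```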
But if you're having trouble with email, another way would be to simply log to a file:
tar -cvpzP --exclude=/home --exclude=/proc --exclude=/lost+found --exclude=/mnt --exclude=/sys -f /home/shares/allusers/Server-Files/backups/$DATE\_sys.tgz / &>/tmp/tar.log
Thanks for the suggestions guys. I will redirect the output in this way.
Had a couple of questions though: what does "2>&1" do in the example command colucix gave?
Also, now that I have installed the mail command, should I get mail in the future? (In case you are interested, Ubuntu stopped including mail by default after Hoary: http://www.ubuntuforums.org/showthread.php?t=102941)
Quote:
Had a couple of questions though: what does "2>&1" do in the example command colucix gave?
In simple words, this tells the shell to redirect standard error to the same place where standard output is redirected. More technically: 2 is the file descriptor for standard error, 1 is the file descriptor for standard output, and >& is the operator that performs such a redirection.
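A quick way to see this in action; the file name /tmp/redir-demo.log is arbitrary:

```shell
# ls prints its "No such file or directory" complaint on stderr (fd 2).
# "2>&1" sends fd 2 to wherever fd 1 (stdout) currently points,
# which here is the log file.
ls /nonexistent-path > /tmp/redir-demo.log 2>&1
# The error message is now in the file instead of on the terminal:
cat /tmp/redir-demo.log
```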
Quote:
Also, now that I have installed the mail command, should I get mail in the future?
Yes, if the mail service is correctly configured. I guess it is, if you have installed it with apt-get or synaptic!
I redirected the output of my script and changed the cron schedule from Thursday morning at 04:01 to Monday 15:20 (the current time here), and it ran fine: no errors, and the archives are complete.
This is very frustrating, as the problem I described above did not occur only once; it happened every time my script ran as a cron job, until this very moment.
I am going to change the schedule back to sometime at night - maybe 03:00 this time.
Could it be that there is some other system-maintenance task scheduled for 04:00, which was accessing (blocking?) some of the files that my script was trying to archive?
Nope. The crontab jobs should be independent from each other. I suggest leaving the scheduled crontab as is (4:01) and checking the cron.log which will be created after that, just for debugging purposes. In any case, you have a full backup right now!
The important thing is that you check your backup and see if the files you want backed up really are there. Making a backup script and then not checking it is a common mistake.
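A minimal sketch of such a check, assuming a gzipped tar archive; the ARCHIVE path is a placeholder for your real backup file:

```shell
#!/bin/sh
# Read the whole archive: tar must walk every entry to list it,
# so a truncated or corrupt file will make it fail, just like the
# "Unexpected EOF" seen in this thread.
ARCHIVE=/tmp/backup-test.tgz
if tar -tzf "$ARCHIVE" > /dev/null 2>&1; then
    echo "archive OK: $ARCHIVE"
else
    echo "archive unreadable or truncated: $ARCHIVE"
fi
```

This only proves the archive is structurally intact; spot-checking that specific files are present (tar -tzf "$ARCHIVE" | grep some/path) is still worthwhile.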
And having root's email go to a real email account that you read is a good thing. It will email you different messages every once in a while; it's better to have cron output emailed than put in a log file. You can also get emails about other problems.
"2>&1" means send all error messages to standard output. I just used a little shortcut. "2>&1 >somefile.log" is the same as "&>somefile.log".
Another tip: add a line like this at the beginning of your script:
dpkg --get-selections >/etc/deb.packages.txt
This makes a text file called deb.packages.txt in /etc. If your computer totally dies, it would not be so easy to get a new computer running with only the files you have in the tar archives. If you have that "deb.packages.txt" file available, you can just do a minimal (server) install on whatever computer, then do a
dpkg --set-selections </etc/deb.packages.txt
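As a sketch, the full restore sequence on the new machine would look roughly like this, assuming a Debian/Ubuntu system. The apt-get follow-up is the usual companion to --set-selections, not something from this thread:

```
# Mark every package from the saved list for installation...
dpkg --set-selections < /etc/deb.packages.txt
# ...then let apt actually download and install them:
apt-get dselect-upgrade
```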
Also, a good test, if you have the time: Imagine the computer you take backup of is not available anymore (stolen or whatever). Find a spare computer and get it up. You only have those tar files from the server (and perhaps an Ubuntu CD). A practical test can often reveal problems you didn't even think about.