I've been racking my brain over this issue. I gave up on running software RAID1 in favor of simply doing incremental backups of important data to one of the three 1TB drives installed in my system.
The problem I'm running into now is that when I try to copy a lot of data from one SATA drive to another, the machine locks up and needs a hard reset.
Originally I was trying to copy roughly 800GB of data from one drive to another and it locked up, but last night it locked up again while copying only about 20GB.
I've tried the Thunar file manager as well as simply opening a terminal and running "cp -rv /mnt/1TB/* /mnt/backup", both with the same result.
It is an MSI P45 NEO-F motherboard with the SATA mode set to AHCI (not IDE); more specs in my signature. The DVD+RW DL is on SATA 1, and the 1TB drives are on SATA 2, SATA 3, and SATA 4. The OS is on a 160GB drive on the IDE master, and there is another 750GB data drive on the IDE slave (only one IDE port, no secondary).
The OS is Mythbuntu 8.10 with kde-core installed for access via FreeNX. I've tried with both the console user (mythtv, which loads XFCE I believe) and my other user through FreeNX, which loads KDE.
I'm not exactly sure what is causing the problem. I'm thinking of going so far as mapping both volumes as network drives from a Windows box via Samba and copying that way; maybe that will take some load off the box and keep it from locking up (although the copy would take a very long time). I'm also wondering whether this is related to the issues I was having getting RAID to work properly.
How many arguments does /mnt/1TB/* expand to? The shell expands the wildcard before the command even runs, so a huge argument list could use up nearly all of the memory available to the shell, or exceed the kernel's argument-length limit. I don't know whether the -r option also expands internally into the full list of files in each subdirectory.
For copying a very large number of files, I would use tar.
Code:
tar -C /mnt/1TB -cf - . | tar -C /mnt/backup -xvf - > logfile
Also look at the -g option. You can save a snapshot file and copy only the files that are new since the last backup: the first time, all of the files will be copied; after that, only changed files.
tar will copy permissions and file attributes. In my version, it doesn't copy ACLs. To do that, look at the star command (secure tar).
--
Your copy command didn't use the -a option. This is the `archive' option, which preserves timestamps, ownership, and permissions, and implies recursive copying.
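For example, using the paths from your post, something like this should work (the `/.' suffix also sidesteps the wildcard expansion entirely):
Code:
cp -av /mnt/1TB/. /mnt/backup/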
--
Make sure you don't have a circular symbolic link. Also, if you are backing up files from a partition used for Linux, don't do so in a single broad stroke. Learn which directories to back up and which to skip, e.g. don't back up /sys, /proc, /mnt, /media, /dev, or /tmp.
The tar or cp command can be given multiple directories to back up, i.e. just the system directories that you aren't excluding, as in the sketch below.
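A sketch, with illustrative directory names:
Code:
tar -C / -cf - etc home root usr var | tar -C /mnt/backup -xf -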
Thank you for your help, I will try what you suggest.
When you ask how many arguments /mnt/1TB/* expands to, do you mean how many files there are on the volume? Or maybe how deep the directory structure is? I'm not sure how to answer that.
I'll report back with my results of your suggestions (probably later this week).
The asterisk is expanded by the shell into the list of files and directories in that directory. This happens before the command itself (cp in your case) even runs. To demonstrate, run `set *' and then `echo $@' in your home directory.
The "set" command will show you what the arguments to a command will look like when a command runs.
$0 is the command. $1 is the first argument. $2 is the second argument, etc. This was just for a demonstration.
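For example:
Code:
cd ~
set *
echo $#     # the number of arguments the wildcard expanded to
echo "$@"   # the arguments themselves
Note that `$#' holds the argument count, which directly answers the "how many arguments" question.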
With just 4 directories and no files in /mnt/1TB/, you won't run out of memory due to the expansion of the wildcard by the bash shell.
For incremental backups, I would use tar with the -g argument. See the info manual; there is a section called "Incremental Dumps" that you want to read. Here is the short description of this argument:
Code:
`--listed-incremental=SNAPSHOT-FILE'
`-g SNAPSHOT-FILE'
     During a `--create' operation, specifies that the archive that
     `tar' creates is a new GNU-format incremental backup, using
     SNAPSHOT-FILE to determine which files to backup.  With other
     operations, informs `tar' that the archive is in incremental
     format.  *Note Incremental Dumps::.
Only new files would be copied.
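A sketch of how that looks (the snapshot and archive paths are just examples):
Code:
# first run: full backup, and the snapshot file is created
tar -C /mnt/1TB -g /root/1TB.snar -cf /mnt/backup/full.tar .
# later runs with the same snapshot file archive only new/changed files
tar -C /mnt/1TB -g /root/1TB.snar -cf /mnt/backup/incr1.tar .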
Maybe I should explain the line I gave in more detail.
`-C /mnt/1TB'
This will change the working directory to /mnt/1TB which is the base of the filesystem you want to back up.
`-cf - .'
The -c option tells tar that you are creating an archive. The -f <filename> argument tells tar the filename of the archive. The dash `-' tells tar to stream the archive to stdout instead of writing it to a file. The dot refers to the current directory; normally you would list the directories or files to be backed up, and the `.' says to back up the files and directories in the current directory.
The vertical bar `|' is the pipe character. The output of the command to the left of the `|' becomes the input of the command to the right.
`tar -C /mnt/backup'
The tar command on the right-hand side runs at the same time as the tar command on the left-hand side. Its current directory is changed to the destination.
`-xvf - > logfile'
The -x tells tar that you want to extract. The -v tells tar to list the files as they are extracted. The `-f -' tells tar to read the archive from stdin instead of from a file. The `> logfile' redirects that -v listing into a log file.
On the right-hand side, you could even use ssh to run the extraction on another computer. That lets you back up from one machine to another, even securely across the Internet. (If you use pubkey authentication, the username/password prompt won't interfere with the pipe.)
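For example (the user and hostname are placeholders, and this assumes tar exists on the remote machine):
Code:
tar -C /mnt/1TB -cf - . | ssh user@backuphost 'tar -C /mnt/backup -xf -'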
---
I'm wondering if there might be a problem with the /mnt/1TB filesystem causing the lockup. Another possibility is that the destination filesystem is FAT and you are trying to save a file over its maximum file size (2GB on FAT16, 4GB on FAT32).
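You can check what type the destination filesystem is with:
Code:
df -T /mnt/backup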
---
If one of these mounts is a network mount, you also have to consider whether the network protocol has filesystem-size or file-size limits; this can be a problem with Samba, for example. There may be limits on the remote host as well.
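To see whether either path is a network mount (and what its mount options are):
Code:
mount | grep -E '/mnt/1TB|/mnt/backup'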
---
Another possibility is filesystem corruption or bad sectors on either drive. Running a filesystem check on both would be a good idea.
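For example (the device name here is a placeholder; unmount the filesystem before checking it):
Code:
umount /mnt/backup
fsck -n /dev/sdb1   # read-only check; for an XFS volume use xfs_repair -n instead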
---
For such large backups, I would work from the terminal instead of graphically. A graphical file manager may be doing extra work, such as sorting the filenames in every subdirectory.
---
Also check your /var/log/messages log; if there is a filesystem problem, the kernel might have logged it.
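For example:
Code:
tail -n 50 /var/log/messages
dmesg | grep -i -E 'error|ata'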
Thanks for taking the time to explain in such detail. I need to digest what you said a bit.
The source hard drive is an NTFS partition originally used in Windows. I have it mounted in Linux using ntfs-3g, and I'm trying to copy to one of the two drives that were freshly formatted with XFS within the last week or so.
I'll report back with any findings I have. Thanks again.
Are you receiving any DMA errors? What HD controller are you using? This sounds like a hardware problem to me. Not that your equipment is bad, but maybe a memory or IRQ conflict. Although PCI has come a long way, it still doesn't always share well. One suggestion is to put your controller in a slot that doesn't have shared resources. I find that complete system freezes are normally hardware related rather than software related, although this could also be caused by a firmware malfunction.
I'm not sure how to check for DMA errors. The controller is built into my MSI P45 NEO-F motherboard; I think it's an ICH10 or something similar.
It is definitely a hardware problem: I recently reinstalled the OS with Mythbuntu 9.04 and it has the same issues. It seems to be a problem only when copying lots of files quickly.
I started copying from one volume to another using Samba shares from another machine over the network, and it ran all night without locking up (although it's copying very slowly).
Now I'm running a virtual XP machine on the box and using the virtual machine to copy via Windows shares, and it's working without locking up.
I haven't gone the route of removing hardware to troubleshoot, and I'm not sure I ever will; it can take hours to lock up, so that troubleshooting process would take quite a while. I am considering installing a cheap PCI-E SATA card and copying through that to see if it helps.
Under normal circumstances, the machine works great. This first full backup locks it up, but I suspect incremental backups won't, so I will probably just leave it alone.
I will report back if I ever figure out what it is, or if the PCI-E SATA card helps.
Well, this is late and I'm not sure it would have helped, but when operations are lagging the system you can try nicing and ionicing them (if your kernel is recent enough for ionice).
Code:
nice ionice -c3 your-command-here
nice gives your process lower CPU priority, and ionice -c3 (the idle class) gives it lower disk-access priority. This will of course make the copy slower, but it will affect the rest of your system less.
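Applied to the copy from earlier in the thread, that would look something like:
Code:
nice ionice -c3 cp -a /mnt/1TB/. /mnt/backup/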