LinuxQuestions.org
Linux - General This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.

Old 04-21-2009, 12:06 PM   #1
AboveTheLogic
LQ Newbie
 
Registered: Apr 2009
Location: Vegas
Distribution: Mythbuntu w/ KDE
Posts: 14

Rep: Reputation: 0
Machine locks up when copying lots of files


Hi All-

I've been racking my brain over this issue. I gave up on running software RAID 1 in favor of simply doing incremental backups of important data to one of the three 1TB drives installed in my system.

The problem I'm running into now is that when I try to copy a lot of data from one SATA drive to another, the machine locks up and needs a hard reset.

Originally I was trying to copy all ~800GB of data from one drive to another and it locked up, but last night it locked up again when copying only about 20GB.

I've tried with the Thunar file manager as well as simply opening a terminal and running "cp -rv /mnt/1TB/* /mnt/backup", both with the same result.

It is an MSI P45 NEO-F motherboard with the SATA mode set to AHCI (not IDE); more specs in my signature. The DVD+RW DL drive is on SATA 1, and the 1TB drives are on SATA 2, SATA 3, and SATA 4. The OS is on a 160GB drive on the IDE master, and there is another 750GB data drive on the IDE slave (only one IDE port, no secondary).

The OS is Mythbuntu 8.10 with kde-core installed for access via FreeNX. I've tried with both the console user (mythtv, which loads XFCE I believe) and my other user through FreeNX, which loads KDE.

I'm not exactly sure what is causing the problem. I'm thinking of going so far as mapping both volumes as network drives from a Windows box via Samba and copying that way; maybe that will take some load off the box and keep it from locking up (although the copy will take a very long time). I'm also wondering if this contributed to the issues I was having getting RAID to work properly.

Any input is appreciated. Thanks!
 
Old 04-22-2009, 04:55 AM   #2
jschiwal
Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 654Reputation: 654Reputation: 654Reputation: 654Reputation: 654Reputation: 654
How many arguments does /mnt/1TB/* expand to? You could be using up nearly all of the memory available to the shell, so that it runs out partway through the copy. I don't know whether the -r option will also expand internally to a list of the files in each subdirectory.

For copying a very large number of files, I would use tar.

Code:
tar -C /mnt/1TB -cf - . | tar -C /mnt/backup -xvf - > logfile
Also look at the -g option. You can save a snapshot file and copy only files that are new since the last backup. The first time, all of the files will be copied; the second time, only changed files.

tar will copy permissions and file attributes. In my version, it doesn't copy ACLs. To do that, look at the star command (secure tar).

--

Your copy command didn't use the -a option. This is the `archive' option, which preserves timestamps, ownership and permissions, and implies recursive copying.

--

Make sure you don't have a circular symbolic link. Also, if you are backing up files from a partition used for Linux, don't do it in a single broad stroke. Learn which directories to back up and which not to, e.g. don't back up /proc, /sys, /dev, /tmp, /mnt, or /media.

The tar or cp command can list multiple directories to be backed up, i.e. the system directories that you aren't excluding.

Last edited by jschiwal; 04-22-2009 at 05:02 AM.
 
Old 04-22-2009, 10:30 AM   #3
AboveTheLogic
LQ Newbie
 
Registered: Apr 2009
Location: Vegas
Distribution: Mythbuntu w/ KDE
Posts: 14

Original Poster
Rep: Reputation: 0
Thank you for your help, I will try what you suggest.

When you ask how many arguments /mnt/1TB/* expands to, do you mean how many files there are on the volume? Or maybe how deep the directory structure is? I'm not sure how to answer that.

I'll report back with my results of your suggestions (probably later this week).

Thanks again
 
Old 04-22-2009, 05:07 PM   #4
jschiwal
Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 654Reputation: 654Reputation: 654Reputation: 654Reputation: 654Reputation: 654
The asterisk is expanded by the shell into the names of all the files and directories in that directory. This happens even before the command runs. To demonstrate this, in your home directory run `set *' and then `echo $@'.
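A harmless way to see this for yourself is in a throwaway directory under /tmp (the path and filenames here are just for the demonstration):

```shell
# Demonstrate how the shell expands * before any command even runs.
mkdir -p /tmp/globdemo
cd /tmp/globdemo
touch a b c

# 'set -- *' replaces the positional parameters with the expansion of *.
# (The -- guards against filenames that begin with a dash.)
set -- *

echo "$#"    # the number of arguments the wildcard expanded to: 3
echo "$@"    # the arguments themselves: a b c
```

With hundreds of thousands of names in a directory, that same expansion is what can exhaust the argument space.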
 
Old 04-22-2009, 06:00 PM   #5
AboveTheLogic
LQ Newbie
 
Registered: Apr 2009
Location: Vegas
Distribution: Mythbuntu w/ KDE
Posts: 14

Original Poster
Rep: Reputation: 0
Is echo $@ supposed to do the same thing as ls (but list horizontally instead of vertically)?

`echo $@' does the same thing as `set *': it takes me to some sort of sub command prompt that I don't recognize (similar to FTP and others).

I have 4 directories in the /mnt/1TB location, each with a whole bunch of subdirectories
 
Old 04-22-2009, 07:11 PM   #6
jschiwal
Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 654Reputation: 654Reputation: 654Reputation: 654Reputation: 654Reputation: 654
The "set" command shows you what the arguments to a command will look like when the command runs.
$0 is the command, $1 is the first argument, $2 is the second argument, etc. This was just a demonstration.

With just 4 directories and no loose files in /mnt/1TB/, you won't run out of memory from the bash shell's expansion of the wildcard.

For incremental backups, I would use tar with the -g argument. See the info manual; there is a section called "Incremental Dumps" that you will want to read. Here is the short description of this argument:
Code:
`--listed-incremental=SNAPSHOT-FILE'
`-g SNAPSHOT-FILE'
     During a `--create' operation, specifies that the archive that
     `tar' creates is a new GNU-format incremental backup, using
     SNAPSHOT-FILE to determine which files to backup.  With other
     operations, informs `tar' that the archive is in incremental
     format.  *Note Incremental Dumps::.
Only new files would be copied.
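As a small sketch of how the snapshot file behaves (the /tmp paths here are just for the demonstration, standing in for your real source and backup locations):

```shell
# First run: the snapshot file doesn't exist yet, so everything is archived.
mkdir -p /tmp/inc-src /tmp/inc-arch
echo one > /tmp/inc-src/a.txt
tar -C /tmp/inc-src -g /tmp/inc-arch/snapshot -cf /tmp/inc-arch/level0.tar .

# Second run with the same snapshot file: only files that are new or
# changed since the first run are archived.
echo two > /tmp/inc-src/b.txt
tar -C /tmp/inc-src -g /tmp/inc-arch/snapshot -cf /tmp/inc-arch/level1.tar .

# The second archive contains b.txt but not the unchanged a.txt.
tar -tf /tmp/inc-arch/level1.tar
```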

Maybe I should explain the line I gave in more detail.

`-C /mnt/1TB'
This will change the working directory to /mnt/1TB which is the base of the filesystem you want to back up.

`-cf - .'
The -c option tells tar that you are creating an archive. The -f <filename> argument tells tar the filename of the archive; the dash `-' tells tar to stream the archive to stdout instead of creating a file. The dot refers to the current directory: normally you would list the directories or files to be backed up, and the `.' says to back up the files and directories in the current directory.

The vertical bar `|' is the pipe character. The output of the command to the left of the `|' becomes the input of the command to the right.

`tar -C /mnt/backup'

The tar command on the right-hand side runs at the same time as the tar command on the left-hand side. Its current directory is changed to the destination.
`-xvf - > logfile'
The -x tells tar that you want to extract. The -v tells tar to list the files being extracted. The `-f -' tells tar to read the archive from stdin instead of from a file.
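Putting the pieces together, here is a harmless dry run in /tmp (the paths stand in for /mnt/1TB and /mnt/backup):

```shell
# Toy source and destination trees standing in for /mnt/1TB and /mnt/backup.
mkdir -p /tmp/pipe-src/sub /tmp/pipe-dst
echo hello > /tmp/pipe-src/sub/file.txt

# The left tar streams an archive of '.' to stdout; the right tar extracts
# it in the destination directory. No intermediate archive file is written.
tar -C /tmp/pipe-src -cf - . | tar -C /tmp/pipe-dst -xf -

cat /tmp/pipe-dst/sub/file.txt   # hello
```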

On the right hand side, you could even use ssh to run the extraction on another computer. This allows you to back up from one computer to another, even across the Internet securely. (If you use pubkey authentication, you won't get the username/password authentication messing up the pipe.)

---

I'm wondering if there might be a problem with the /mnt/1TB filesystem causing the lockup. Another possibility is that the destination filesystem is FAT and you are trying to save a file over its file-size limit (2GB on FAT16, 4GB on FAT32).

---

If one of these mounts is a network mount, you also have to consider whether the network protocol imposes limits on filesystem size or file size. This can be a problem with Samba, for example. There may also be limitations on the remote host.

---

Another possibility is filesystem corruption or a failing drive on either end. Running a filesystem check would be a good idea.

---

For such large backups, I would work from the terminal instead of graphically. The graphical client may be doing extra work, such as sorting the filenames in each subdirectory.

---

Also check your /var/log/messages log; if there is a filesystem problem, the kernel might have logged it.

Good Luck!

Last edited by jschiwal; 04-22-2009 at 07:21 PM.
 
Old 04-22-2009, 08:39 PM   #7
AboveTheLogic
LQ Newbie
 
Registered: Apr 2009
Location: Vegas
Distribution: Mythbuntu w/ KDE
Posts: 14

Original Poster
Rep: Reputation: 0
Thanks for taking the time to explain in such detail; I need to digest what you said a bit.

The source hard drive is an NTFS partition originally used in Windows. I have it mounted in Linux using ntfs-3g, and I am trying to copy to one of the two drives I formatted with XFS within the last week or so.

I'll report back with any findings I have. Thanks again.
 
Old 04-22-2009, 08:55 PM   #8
Absent Minded
Member
 
Registered: Nov 2007
Location: Washington State U.S.A.
Distribution: Debian testing
Posts: 74

Rep: Reputation: 21
Are you receiving any DMA errors? What HD controller are you using? This sounds like a hardware problem to me; not that your equipment is bad, but maybe a memory or IRQ conflict. Although PCI has come a long way, it still doesn't always share well. One suggestion is to put your controller in a slot that doesn't have shared resources. I find that complete system freezes are normally hardware related rather than software related, although this could also be caused by a firmware malfunction.
 
Old 04-28-2009, 03:10 PM   #9
AboveTheLogic
LQ Newbie
 
Registered: Apr 2009
Location: Vegas
Distribution: Mythbuntu w/ KDE
Posts: 14

Original Poster
Rep: Reputation: 0
I'm not sure how to check for DMA errors. The controller is built into my MSI P45 NEO-F motherboard; I think it's an ICH10 or something similar.

It is definitely a hardware problem. I recently reinstalled the OS with Mythbuntu 9.04 and it has the same issues. It only seems to be a problem when copying lots of files quickly.

I started copying from one volume to another using Samba shares from another machine over the network, and it ran all night without locking up (although it's copying very slowly).

Now I'm running a virtual XP machine on the box and using the virtual machine to copy via Windows shares, and it's working without locking up.

I haven't gone the route of removing hardware to troubleshoot, and I'm not sure I ever will; it can take hours to lock up, so that troubleshooting process would take quite a while. I am considering installing a cheap PCI-E SATA card and trying to copy with that to see if it helps.

Under normal circumstances the machine works great. This first backup locks it up, and I think incremental backups may not, so I will probably just leave it alone.

I will report back if I ever figure out what it is, or if the PCI-E SATA card helps.
 
Old 05-03-2009, 10:48 AM   #10
AboveTheLogic
LQ Newbie
 
Registered: Apr 2009
Location: Vegas
Distribution: Mythbuntu w/ KDE
Posts: 14

Original Poster
Rep: Reputation: 0
Update:

After an upgrade to 9.04, the problem seemed to have gone away. I could copy files back and forth without the OS locking up.

This was pretty odd since I was convinced that it was a hardware error.

Last night I started some copies overnight, and this morning the drive in SATA port 3 (the destination drive) was no longer recognized by the OS.

So, I moved some things around: I unplugged the DVD drive from SATA1 and put the hard drives on SATA1, SATA2, and SATA4, and now all the drives are recognized.

I guess the problem is the SATA3 port.
 
Old 05-03-2009, 11:23 AM   #11
AlucardZero
Senior Member
 
Registered: May 2006
Location: USA
Distribution: Debian
Posts: 4,647

Rep: Reputation: 524Reputation: 524Reputation: 524Reputation: 524Reputation: 524Reputation: 524
Well, this is late and I'm not sure it would have helped, but when something is bogging down the system you can try nice and ionice (if you have a recent enough kernel for ionice).
Code:
nice ionice -c3 normal-command here
nice makes your process a lower priority for the CPU, and ionice gives it a lower priority for disk access. This will of course make the copy go slower, but it will affect the rest of your system less.
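For example, the copy from earlier in the thread could be run like this (assuming the same /mnt paths as above):

```shell
# Run the copy at reduced CPU priority (nice) and idle I/O priority
# (ionice -c3), so interactive use of the machine suffers less.
nice ionice -c3 cp -av /mnt/1TB/. /mnt/backup/
```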
 
Old 05-03-2009, 11:26 AM   #12
AboveTheLogic
LQ Newbie
 
Registered: Apr 2009
Location: Vegas
Distribution: Mythbuntu w/ KDE
Posts: 14

Original Poster
Rep: Reputation: 0
Ahhhhhhh, yeah, that definitely would have helped me earlier.
 
  

