RAID5 and RAID1 causing high system load on Suse 10.1 with no activity
I recently installed Suse 10.1 and when I create a RAID5 array my system load goes up by 2. If I create 2 RAID5 arrays, I'm at a load average of 4.
With RAID1 my load goes to 10! For one array with no activity??
The arrays are empty and there is no activity on the machine at all. The HD LED is constantly on when I create a RAID5 or RAID1 array. It stays on for more than 24 hours with high load.
My computer has a Highpoint 372 on-board ATA RAID controller, but I get a message from Suse that it is not supported in kernel 2.6 and onwards -- could there be a conflict?
I'm using reiserfs on the arrays. The RAID5 array consists of one partition from each of 4 different disks on 2 different IDE channels. The RAID1 can be pretty much any combination of 2 partitions on any controller -- still high load.
Has anybody had similar experiences, or any hints as to what could be wrong? Any hints on how I can figure out what is causing the high system load?
I see the high load just by creating the arrays on empty disks. The high load persists for more than 24 hours on empty disks with no other activity on the computer. I'm not trying to recover from a lost disk; I'm installing Linux on an empty computer.
Any hints on how I can see what is causing the load/disk activity?
When you create a brand-new software RAID1 using mdadm, the system immediately begins rebuilding the mirror drive from scratch.
When you create a brand-new software RAID5 using mdadm, the system immediately begins building parity onto the last drive from scratch.
Depending on the size of the array and your system's computing power, the initialization process can take anywhere from seconds to minutes to hours.
If you reboot before the rebuilding process is complete, the initialization process will start over from the beginning. Most true hardware RAID cards are set up to resume initialization where they left off after a reboot, but software RAID isn't so lucky.
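For anyone following along, this is roughly the operation being described; a minimal sketch, with the device names only assumed from the drives mentioned in this thread:

  # Creating a mirror: the second member is immediately resynced from the first
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde1 /dev/hdg1

  # Creating a RAID5: the array starts degraded and the last member is rebuilt to hold parity
  mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/hde2 /dev/hdg2 /dev/hdi2 /dev/hdk2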
As farslayer pointed out, the easiest way to follow the rebuilding process is to keep an eye on /proc/mdstat:
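For example (any refresh interval will do; watch just reruns the command):

  cat /proc/mdstat
  # or refresh it automatically every couple of seconds until the resync finishes
  watch -n 2 cat /proc/mdstat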
Thanks for the answers. It seems to be the slow building of the arrays that confuses me. My array is 4 x 190GB and the projected build time was 3300 minutes, which after many hours of running seems to be pretty accurate. The machine is an Athlon XP2200 with 1GB RAM.
I'm still puzzled as to why it would take this computer 3300 minutes to calculate the parity of an empty disk -- what data is there to run parity on? Couldn't mdadm just assume the entire disk is 0s? Or is there still something wrong in my HW/setup?
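For scale: 190GB in 3300 minutes works out to roughly 1 MB/sec per member, far below what even an old IDE drive can sustain, so the slowness points at something beyond normal resync behaviour. Two knobs worth knowing about, sketched here with device names assumed from the earlier posts: the md resync speed limits, and mdadm's --assume-clean option, which skips the initial build entirely (generally discouraged for RAID5, since parity is only guaranteed consistent if the disks really are all zeros):

  # Resync throttling, in KB/sec per device; raising the floor can speed up a rebuild
  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  echo 50000 > /proc/sys/dev/raid/speed_limit_min

  # Skip the initial sync entirely (use with care, see the note above)
  mdadm --create /dev/md1 --level=5 --raid-devices=4 --assume-clean /dev/hde2 /dev/hdg2 /dev/hdi2 /dev/hdk2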
Test System: 800MHz P3 (1 cpu) with 4x320GB Western Digital drives (WD3200JB) running from two Promise Ultra100 TX2 IDE controller cards on a PCI/33 bus. The recovery example above contains a 36GB volume group and is on the bottom 5% of the drives, so it doesn’t get any slower than this.
I've tried arrays on both the Highpoint controller and the Maxtor PCI controllers, both are slow.
I do have one funny problem. None of my disks are labeled /dev/hda; my first disk is /dev/hde and it's on the Maxtor PCI controller. My "/dev/hda", i.e. on-board ctrl1-ch1-master, is called /dev/hdm -- is this a hint?
Could it be some IRQ/address conflict?
Any other advice than start pulling out HW piece by piece?
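Checking for shared interrupts between the controllers is cheap before pulling hardware; a quick look, assuming nothing beyond the standard tools:

  # Which controllers got which IRQ
  lspci -v | grep -iE 'ide|raid|irq'

  # Interrupt counts per line; several busy controllers sharing one IRQ is worth noting
  cat /proc/interrupts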
IDE and SCSI assignments start with the motherboard controllers and then move to cards on the PCI/PCI-X/PCIe buses. For example, if you have the typical 2 IDE channels on the motherboard, they will be hd[a-d]. The first 2-channel card on the PCI bus will be assigned hd[e-h], the second card hd[i-l], and so on.
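If in doubt about which physical controller a given hdX belongs to, the kernel boot messages and the legacy IDE /proc interface spell out the mapping; a quick check, assuming the old hdX driver stack these releases still use:

  # Which IDE interfaces were registered, in what order, and which drives sit on them
  dmesg | grep -Ei 'ide[0-9]|hd[a-z]:'

  # The legacy IDE layer also exposes the mapping and drive models under /proc/ide
  ls -l /proc/ide/
  cat /proc/ide/hde/model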
The speed problem may involve the driver for the Highpoint card, but I don't have a good recommendation there. Also, mixing two types of IDE controller cards might be an issue as related to the motherboard BIOS, especially if the system is a Dell.
As a test, you could try removing the Highpoint card and distributing the drives across the Maxtor cards, which I assume are identical to the Promise Ultra133 TX2 cards.
The other thing that strikes me is the use of two drives per IDE channel (master/slave). Notice in my example above that only the master positions are used on each IDE card channel (hd[egik]). With some master/slave drive combinations, you can run into conflicts that slow things down. That isn't an issue when you only use 1 drive per IDE channel.
The other potential problem is PCI bus congestion, especially if you have a PCI/33 bus.
DMA settings were "udma5" and "udma6"; drive settings looked correct to me.
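For reference, a typical way to verify those settings and get a raw per-drive number (device name assumed; repeat for each member of the array):

  # Is DMA actually on, and which UDMA mode did the kernel select?
  hdparm -d /dev/hde
  hdparm -i /dev/hde | grep -i udma

  # Rough sequential read speed of one member; a healthy drive here should manage tens of MB/sec
  hdparm -tT /dev/hde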
I pulled all the HW from the computer so I only had 1 disk and 1 cdrom. No other controllers. First disk was called /dev/hde -- not /dev/hda on Suse 10.1.
I decided to try Debian instead of Suse: same problem. Debian got the disks confused too; /dev/hda was not on the motherboard's VIA KT400 chipset controller but on the on-board Highpoint 372 controller. Also, Debian hangs in ide-detect on the Highpoint 372 controller module.
Gentoo boots and works, but I had to select numerous options to get basic stuff enabled, e.g. MySQL was not selected by default, and it seems like you have to recompile everything during a Gentoo install?
Finally I went back to Fedora. FC5 boots, finds the disks in the correct order, and builds the arrays at the expected speed. I still use reiserfs, same HW, in the same locations. RAID1 builds at 20-30 MB/sec in parallel with a RAID5 building at around 20 MB/sec.
Something is wrong in Suse 10.1 and Debian regarding the Highpoint 372 controller. BTW, FC5 calls the disks on the Promise 133TX2 controller /dev/mapper/<something> and associates these with an hpt37x module?? Maybe this is where Suse and Debian get confused, if they use the hpt37x module for both the Promise TX2 and the on-board HPT372 controllers?
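Those /dev/mapper names sound like dmraid picking up vendor BIOS-RAID signatures (Highpoint/Promise) left on the disks, which could plausibly confuse an installer. If that is what is happening, a sketch of how to check and, if desired, clear the stale metadata (dmraid must be installed; the device name is just an example):

  # List any vendor RAID metadata (hpt37x, pdc, etc.) that dmraid finds on the raw disks
  dmraid -r

  # Erase stale metadata from a disk you want to use as a plain drive
  # (double-check the device first; this rewrites the metadata area on that disk)
  dmraid -rE /dev/hde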
Anyway, thanks for the help, sorry I didn't have the stamina to find the problem in Suse and Debian.