LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware

Old 08-10-2012, 06:36 AM   #1
kikinovak
MLED Founder
 
Registered: Jun 2011
Location: Montpezat (South France)
Distribution: CentOS, OpenSUSE
Posts: 3,453

Rep: Reputation: 2154
RAID 5 with 4 hard disks... array started with only 3 out of 4 disks


Hi,

I'm currently installing Slackware 13.37 on a small HP ProLiant ML36 server with four 250 GB hard disks. Each disk has three partitions of type FD (Linux RAID autodetect):

- one for /boot (4 disks, RAID 1)
- one for swap (4 disks, RAID 1)
- one for / (4 disks, RAID 5)

I created the RAID arrays with mdadm --create, and everything went OK... except one thing puzzles me: the RAID array for the / partition started with only 3 out of 4 disks. Here's how it looks:

Code:
# cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md3 : active raid5 sda3[0] sdd3[4] sdc3[2] sdb3[1]
      729364992 blocks level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  3.5% (8652032/243121664) finish=166.5min speed=23463K/sec
      
md2 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      979840 blocks [4/4] [UUUU]
      
md1 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      96256 blocks [4/4] [UUUU]
      
unused devices: <none>

# mdadm --detail /dev/md3
/dev/md3:
        Version : 0.90
  Creation Time : Thu Aug  9 12:43:55 2012
     Raid Level : raid5
     Array Size : 729364992 (695.58 GiB 746.87 GB)
  Used Dev Size : 243121664 (231.86 GiB 248.96 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Fri Aug 10 13:39:27 2012
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 7% complete

           UUID : 2f5b58e4:d1cc9b55:208cdb8d:9e23b04b
         Events : 0.3345

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       4       8       51        3      spare rebuilding   /dev/sdd3
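(For reference, create commands along these lines produce this layout; this is a sketch, not necessarily the exact invocations used:)

Code:
mdadm --create /dev/md1 --level=1 --raid-devices=4 /dev/sd[abcd]1              # /boot, RAID 1
mdadm --create /dev/md2 --level=1 --raid-devices=4 /dev/sd[abcd]2              # swap, RAID 1
mdadm --create /dev/md3 --level=5 --chunk=512 --raid-devices=4 /dev/sd[abcd]3  # /, RAID 5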
Until now, I've only ever used RAID 5 with a maximum of 3 disks; this is the first time I've used it with 4.

Is this behaviour normal? Will the fourth disk eventually be included in the RAID 5 array once the recovery finishes, in what looks like about two and a half hours?

I prefer to ask, since I intend to load quite a lot of important data onto that server.

Cheers,

Niki

Last edited by kikinovak; 08-10-2012 at 06:40 AM.
 
Old 08-10-2012, 06:41 AM   #2
lithos
Senior Member
 
Registered: Jan 2010
Location: SI : 45.9531, 15.4894
Distribution: CentOS, OpenNA/Trustix, testing desktop openSuse 12.1 /Cinnamon/KDE4.8
Posts: 1,144

Rep: Reputation: 217
Quote:
Originally Posted by kikinovak
The RAID array for the / partition started with only 3 out of 4 disks. [...] Is this behaviour normal? Will the fourth disk eventually be included in the RAID 5 array?
What do you mean, "started with 3 out of 4"? It IS using all 4, but RAID 5 uses one disk's worth of space for parity, so your total usable space is 3 × the size of one drive.
You can see that your md arrays use:
Code:
sda, sdb, sdc and sdd drives (4)
active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
Code:
 Update Time : Fri Aug 10 13:39:27 2012
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 7% complete


    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       4       8       51        3      spare rebuilding   /dev/sdd3
This is normal for RAID 5:
the output shows that the 4th drive has been added as a spare and the array is now rebuilding onto it to bring it into the array.
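To put numbers on it, using the sizes from the mdadm output above (a quick sanity check with shell arithmetic, not part of the original output):

Code:
# RAID 5 usable space = (members - 1) * member size; "Used Dev Size" is 243121664 KiB per member
echo $(( (4 - 1) * 243121664 ))               # 729364992 -> the "blocks" figure in /proc/mdstat
echo "scale=4; 729364992 / 1024 / 1024" | bc  # ~695.58   -> the "695.58 GiB" in mdadm --detail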

Last edited by lithos; 08-10-2012 at 06:44 AM.
 
1 member found this post helpful.
Old 08-10-2012, 06:42 AM   #3
Celyr
Member
 
Registered: Mar 2012
Location: Italy
Distribution: Slackware+Debian
Posts: 321

Rep: Reputation: 81
Yes, it's normal: the Linux kernel is now building the RAID 5 array.
You can use it right away, but if a drive fails before the build finishes you will lose everything, because the distributed parity isn't built yet.
 
1 member found this post helpful.
Old 08-10-2012, 06:58 AM   #4
kikinovak
MLED Founder
 
Registered: Jun 2011
Location: Montpezat (South France)
Distribution: CentOS, OpenSUSE
Posts: 3,453

Original Poster
Rep: Reputation: 2154
Quote:
Originally Posted by Celyr
Yes, it's normal: the Linux kernel is now building the RAID 5 array.
You can use it right away, but if a drive fails before the build finishes you will lose everything, because the distributed parity isn't built yet.
OK thanks very much!
 
Old 08-10-2012, 08:05 AM   #5
Celyr
Member
 
Registered: Mar 2012
Location: Italy
Distribution: Slackware+Debian
Posts: 321

Rep: Reputation: 81
If time is an issue, you may be able to make the rebuild go faster:
Code:
# sysctl dev.raid.speed_limit_min
# sysctl dev.raid.speed_limit_max
The speed is often limited by those parameters; you can change them using
Code:
sysctl -w dev.raid.speed_limit_max=value
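
For example, something like this (the values are only illustrative, in KB/s per device):

Code:
# check the current limits
sysctl dev.raid.speed_limit_min dev.raid.speed_limit_max

# raise them for the duration of the rebuild
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000

# then watch the effect on the recovery speed
cat /proc/mdstat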
 
Old 08-10-2012, 08:05 AM   #6
foobarz
Member
 
Registered: Aug 2010
Distribution: slackware64-current
Posts: 48

Rep: Reputation: 10
Yes, this is solved already, but as I understand it this is normal right after creating a RAID 5/6 array. For RAID 5, the last member is initially added as a spare and immediately begins to rebuild; the same goes for RAID 6, except two members are initially added as spares and they rebuild one at a time.

When running these mdadm RAIDs, be aware of the udev rules in /lib/udev/rules.d/64-md-raid.rules that apply to block devices of type "linux_raid_member": udev runs "mdadm -I $tempnode" on each one to incrementally assemble the RAID array and start it once all members have been added. Then, in /boot/initrd-tree/init, Slackware does the following after udev has settled and finished its incremental assembly attempt:

Code:
mdadm -E -s > /etc/mdadm.conf   # overwrites whatever /etc/mdadm.conf you might have copied into your initrd-tree
mdadm -S -s                     # stop all detected arrays
mdadm -A -s                     # start all detected arrays if they can be started, even if they can only start degraded (be aware of that)


I have been looking into this because I may end up using mdadm (or ZFS; not decided yet). The udev incremental assembly can assemble everything correctly if you copy a good /etc/mdadm.conf into your initrd-tree/etc and comment out the mdadm calls in the initrd-tree/init script. Unlike the "mdadm -A -s" in the init script, incremental assembly will not start your arrays degraded at boot, which can matter for protecting your data. Starting an array degraded because a drive is merely missing by some accident means that drive will need a full rebuild when it comes back, and you want that to happen only when a drive has really failed while the system was up and running, not because a drive went missing at boot for some odd reason: a pulled power cable, a changed port assignment so the drive isn't found, or a drive that never spun up (e.g. a staggered spin-up jumper on the drive).
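A rough sketch of that approach, in case it helps (the paths assume the stock Slackware mkinitrd layout; check your own initrd-tree/init before editing anything):

Code:
# 1. put a known-good config into the initrd tree
mdadm -E -s > /boot/initrd-tree/etc/mdadm.conf

# 2. edit /boot/initrd-tree/init and comment out the three mdadm lines shown above

# 3. repack the existing tree into a new image (defaults to /boot/initrd.gz)
mkinitrd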
 
Old 08-10-2012, 08:09 AM   #7
Slax-Dude
Member
 
Registered: Mar 2006
Location: Valadares, V.N.Gaia, Portugal
Distribution: Slackware
Posts: 528

Rep: Reputation: 272
Your create command should include the "-f" flag.

as per mdadm's manual:
Quote:
-f, --force
Insist that mdadm accept the geometry and layout specified without question. Normally mdadm will not allow creation of an array with only one device, and will try to create a RAID5 array with one missing drive (as this makes the initial resync work faster). With --force, mdadm will not try to be so clever.

Last edited by Slax-Dude; 08-10-2012 at 08:11 AM. Reason: grammar :(
 
Old 08-10-2012, 05:44 PM   #8
kikinovak
MLED Founder
 
Registered: Jun 2011
Location: Montpezat (South France)
Distribution: CentOS, OpenSUSE
Posts: 3,453

Original Poster
Rep: Reputation: 2154
raid

Several hours later.

Code:
[root@nestor:~] # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] 
md3 : active raid5 sda3[0] sdd3[3] sdc3[2] sdb3[1]
      729364992 blocks level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      
md2 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      979840 blocks [4/4] [UUUU]
      
md1 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      96256 blocks [4/4] [UUUU]
      
unused devices: <none>
[root@nestor:~] # df -h
Filesystem       Size  Used Avail Use% Mounted on
/dev/md3           685G   39G  612G   6% /
/dev/md1            92M   29M   59M  33% /boot
tmpfs              436M     0  436M   0% /dev/shm
No need to use any extra options here. Apparently, all I had to do was wait until the last disk had synchronised; it took a couple of hours.
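For completeness, a quick way to double-check that the last disk is now a full member rather than a spare (the [4/4] [UUUU] above already says as much):

Code:
mdadm --detail /dev/md3 | grep -E 'State|Devices|sync'
# expect: State : clean, 4 active devices, 0 spares, and /dev/sdd3 listed as "active sync"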
 
Old 08-10-2012, 06:39 PM   #9
wildwizard
Member
 
Registered: Apr 2009
Location: Oz
Distribution: slackware64-14.0
Posts: 875

Rep: Reputation: 282
Quote:
Originally Posted by Slax-Dude
Your create command should include the "-f" flag.

as per mdadm's manual:
You're reading the sentence backwards.

If you use --force, it takes longer.
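
To illustrate the difference the manual is describing (the command forms below are assumptions, not taken from this thread):

Code:
# default: the array is created with one member held out, then rebuilt onto it
# (the "3 active + 1 spare rebuilding" state seen above; this is the faster way)
mdadm --create /dev/md3 --level=5 --raid-devices=4 /dev/sd[abcd]3

# with --force: all four members start active, and the slower full initial resync runs instead
mdadm --create --force /dev/md3 --level=5 --raid-devices=4 /dev/sd[abcd]3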
 
Old 08-11-2012, 06:33 AM   #10
Slax-Dude
Member
 
Registered: Mar 2006
Location: Valadares, V.N.Gaia, Portugal
Distribution: Slackware
Posts: 528

Rep: Reputation: 272
So, does mdadm --detail /dev/md3 now show the array with all 4 disks, as opposed to 3 disks + 1 spare?
I didn't know that, sorry.
 
  



Tags
mdadm, raid

