[SOLVED] RAID 5 with 4 hard disks... array started with only 3 out of 4 disks
Hi,
I'm currently installing Slackware 13.37 on a small HP Proliant ML36 server with four hard disks (4 x 250 GB). Each disk has three partitions of type FD (Linux RAID autodetect):
- one for /boot (4 disks, RAID 1)
- one for swap (4 disks, RAID 1)
- one for / (4 disks, RAID 5)
I created the RAID arrays with mdadm --create, and everything went OK... except one thing puzzles me: the RAID array for the / partition started with only 3 out of 4 disks. Here's how it looks:
Code:
# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md3 : active raid5 sda3[0] sdd3[4] sdc3[2] sdb3[1]
729364992 blocks level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
[>....................] recovery = 3.5% (8652032/243121664) finish=166.5min speed=23463K/sec
md2 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
979840 blocks [4/4] [UUUU]
md1 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
96256 blocks [4/4] [UUUU]
unused devices: <none>
# mdadm --detail /dev/md3
/dev/md3:
Version : 0.90
Creation Time : Thu Aug 9 12:43:55 2012
Raid Level : raid5
Array Size : 729364992 (695.58 GiB 746.87 GB)
Used Dev Size : 243121664 (231.86 GiB 248.96 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 3
Persistence : Superblock is persistent
Update Time : Fri Aug 10 13:39:27 2012
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 7% complete
UUID : 2f5b58e4:d1cc9b55:208cdb8d:9e23b04b
Events : 0.3345
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
4 8 51 3 spare rebuilding /dev/sdd3
Until now, I've only ever used RAID 5 with a maximum of 3 disks; this is the first time I've used RAID 5 with 4 disks.
Is this behaviour normal? Will the fourth disk eventually be included in the RAID 5 array in what looks like two and a half hours?
I'd rather ask, since I intend to load some fairly important data onto that server.
Cheers,
Niki
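The finish= field in /proc/mdstat already estimates the remaining time; here is a minimal sketch of pulling the percentage and ETA out of a recovery line with sed, using the line quoted above as sample input (on the live system you'd read /proc/mdstat itself):

```shell
# Sample recovery line, copied from the /proc/mdstat output above:
line='[>....................]  recovery =  3.5% (8652032/243121664) finish=166.5min speed=23463K/sec'

# Extract percent complete and estimated minutes remaining:
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
eta=$(printf '%s\n' "$line" | sed -n 's/.*finish=\([0-9.]*\)min.*/\1/p')
echo "$pct% done, about $eta minutes left"
# prints: 3.5% done, about 166.5 minutes left

# On the live system, either poll:   watch -n 10 cat /proc/mdstat
# or block until the rebuild ends:   mdadm --wait /dev/md3
```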
What do you mean, started with 3 out of 4? It is using all 4, but RAID 5 uses one disk's worth of capacity for parity, so your total space is 3 x the size of one drive.
You see your md uses:
Code:
sda, sdb, sdc and sdd drives (4)
active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
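The capacity arithmetic checks out against the --detail output quoted above; a quick sketch, with the numbers taken from that output (in 1K blocks):

```shell
# RAID 5 keeps one disk's worth of capacity for parity,
# so usable space is (N - 1) * per-disk size.
per_disk=243121664   # "Used Dev Size" from mdadm --detail, in 1K blocks
n=4                  # "Raid Devices"
usable=$(( (n - 1) * per_disk ))
echo "$usable"       # prints 729364992, the reported "Array Size"
```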
Code:
Update Time : Fri Aug 10 13:39:27 2012
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Rebuild Status : 7% complete
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
4 8 51 3 spare rebuilding /dev/sdd3
This is normal for RAID 5: the output shows the fourth drive was added as a spare, and the array is now rebuilding onto it to bring it fully into the array.
Yes, it's normal: the Linux kernel is now building the RAID 5 array.
You can use it right now, but if a drive fails before the initial sync finishes you will lose everything, because the distributed parity isn't built yet.
Yes, this is solved already, but as I understand it, this is normal right after creating a RAID 5/6 array. For RAID 5, one disk is initially added as a spare and immediately begins rebuilding, which computes the parity. Same for RAID 6, except two disks are initially added as spares and they rebuild one at a time.
When running these mdadm RAIDs, be aware of the udev rules in /lib/udev/rules.d/64-md-raid.rules, which udev applies to block devices of type "linux_raid_member": it runs "mdadm -I $tempnode" on each one to incrementally assemble the RAID array, and starts the array once all members have been added incrementally. Then, in /boot/initrd-tree/init, Slackware does the following after udev has settled and finished its incremental assembly attempt:
Code:
mdadm -E -s > /etc/mdadm # (this will overwrite whatever /etc/mdadm.conf you might have copied into your initrd-tree)
mdadm -S -s # stop all detected arrays
mdadm -A -s # start all detected arrays if they can be started, even if they start degraded (it will start them degraded at boot, be aware of that)
I have been looking into this because I may use mdadm (or ZFS; not decided yet). Udev's incremental assembly can assemble the arrays correctly if you copy a good /etc/mdadm.conf into your initrd-tree/etc and comment out the mdadm calls in the initrd-tree/init script. The incremental method will also not start your arrays degraded at boot, which can be important for protecting your data: starting an array degraded because a drive went missing by accident forces a rebuild of that drive, and you want to avoid that unless a drive has really failed while the system was up and running, not because a drive went missing at boot for some odd reason (you pulled the drive's power cable, the port assignment changed and the drive wasn't found, or it failed to spin up because of a staggered spin-up jumper on the drive).
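A sketch of what that copied-in config might look like: the md3 UUID is the one from the --detail output earlier in the thread, while the md1/md2 lines are placeholders for the UUIDs you'd get from running `mdadm -E -s` on your own system:

```
# /boot/initrd-tree/etc/mdadm.conf
# Arrays are matched by UUID, so incremental assembly binds the right
# members regardless of sda/sdb/... ordering at boot.
ARRAY /dev/md1 UUID=<uuid-from-mdadm-E-s>
ARRAY /dev/md2 UUID=<uuid-from-mdadm-E-s>
ARRAY /dev/md3 UUID=2f5b58e4:d1cc9b55:208cdb8d:9e23b04b
```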
Your create command should include the "-f" flag.
as per mdadm's manual:
Quote:
-f, --force
Insist that mdadm accept the geometry and layout specified without question. Normally mdadm will not allow creation of an array with only one device, and will try to create a RAID5 array with one missing drive (as this makes the initial resync work faster). With --force, mdadm will not try to be so clever.
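The thread doesn't show the original create command, so the one below is an assumption reconstructed from the --detail output (level 5, 4 devices, the sd?3 partitions) with --force added as suggested; it only builds the command string rather than running it, since mdadm --create will happily destroy data on the named partitions:

```shell
# Illustrative only -- do not run against disks you care about.
# With --force, mdadm initializes all four members up front instead of
# creating the array with one "missing" member and rebuilding onto a spare.
cmd="mdadm --create /dev/md3 --force --level=5 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3"
echo "$cmd"
```

Note the trade-off quoted from the manual above: the spare-rebuild trick exists because it makes the initial resync faster, so --force trades a slower initial build for having all four members active from the start.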
Last edited by Slax-Dude; 08-10-2012 at 08:11 AM.
Reason: grammar :(