RAID 0 becomes inactive in RAID 1+0 when one drive is removed
Hi guys, I need help with RAID 0 becoming inactive in a RAID 1+0 setup when one drive is removed.
We are running CentOS 6.5 booting from USB, and the RAID 1+0 array is mounted as /data, which holds VirtualBox images and files.
We formatted the drives with GParted and created the arrays with mdadm.
This is software RAID, so the setup is:
/dev/sda and /dev/sdb = /dev/md1 (raid 1)
/dev/sdc and /dev/sdd = /dev/md2 (raid 1)
then we set them up as raid 0
/dev/md1 and /dev/md2 = /dev/md0 (raid 0)
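Roughly how we created the arrays (from memory, so the exact flags may have differed; the device names match the layout above):

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb   # first mirror
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd   # second mirror
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2   # stripe across the two mirrors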
I remember testing this a few years back: when /dev/sda was removed, the RAID 0 stayed active and simply reported that a drive had failed.
The same held for sdb, sdc, and sdd.
But when we tested it again recently, we found that the RAID 0 becomes inactive when any one of the four drives is removed.
So this is a change in behavior. I tested with both the old kernel version we first tested the RAID 1+0 on and the latest version via yum update, and the result is the same.
Hi, that is what we are simulating: a failed drive. We power off the server, pull one drive out, then boot the system.
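For comparison, a drive failure can also be simulated in software without powering down. A minimal sketch, assuming the array and device names shown in the mdstat output below:

mdadm /dev/md127 --fail /dev/sda      # mark the member as faulty
mdadm /dev/md127 --remove /dev/sda    # remove it from the mirror
cat /proc/mdstat                      # md127 should show [2/1] [_U]; md125 should stay active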
With all drives present, cat /proc/mdstat produces something like:
Personalities : [raid1] [raid0]
md125 : active raid0 md127[0] md126[1]
      1953262592 blocks super 1.2 512k chunks

md126 : active raid1 sdc[2] sdd[1]
      976631360 blocks super 1.2 [2/2] [UU]

md127 : active raid1 sdb[1] sda[2]
      976631360 blocks super 1.2 [2/2] [UU]
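One thing worth trying: an array that comes up inactive after a drive is pulled can often be force-started in degraded mode, and recording the arrays in mdadm.conf can help boot-time assembly find them. A sketch, assuming the array names from the output above:

mdadm --run /dev/md127                       # force-start the degraded mirror
mdadm --detail /dev/md127                    # should report "clean, degraded"
mdadm --detail --scan >> /etc/mdadm.conf     # record the arrays for assembly at boot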
If you feel you have sufficient documentation of your efforts, raise a bug against CentOS. I'm not a CentOS user, but you might not get a sympathetic response given the release you are running.