
saagar 07-30-2009 04:10 PM

Software RAID issue in RHEL5
 
Hi friends,
I have created a RAID 1 array containing 3 partitions on the same hard disk.
Code:

#mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/hda{11,12,13}
#mdadm --detail /dev/md0
  Number  Major  Minor  RaidDevice State
      0      3      11        0      active sync  /dev/hda11
      1      3      12        1      active sync  /dev/hda12
      2      0       0        2      active sync  /dev/hda13
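
As a quick cross-check (a minimal sketch, run as root, assuming the same device names as above), the kernel's own view of the array in /proc/mdstat can be compared with mdadm's summary:
Code:

# kernel's view of every md array: member devices, personality, and [UUU] health flags
cat /proc/mdstat

# mdadm's summary of the same array, for comparison
mdadm --detail /dev/md0 | grep -E 'State :|Devices'

For a healthy three-way RAID 1 the md0 line in /proc/mdstat should end in [3/3] [UUU].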

Now, I am setting /dev/hda13 as faulty:
Code:

#mdadm --set-faulty /dev/md0 /dev/hda13
#mdadm --detail /dev/md0

  Number  Major  Minor  RaidDevice State
      0      3      11        0      active sync  /dev/hda11
      1      3      12        1      active sync  /dev/hda12
      2      0        0        2      removed

      3      3      13        -      faulty spare  /dev/hda13

Now, I rebooted the system, and when I type
#mdadm --detail /dev/md0 it shows the following:
Code:

Number  Major  Minor  RaidDevice State
      0      3      11        0      active sync  /dev/hda11
      1      3      12        1      active sync  /dev/hda12
      2      0        0        2      removed

The /dev/hda13 partition has been automatically removed.
Why is this so?

Even if I add a new hot spare and reboot the system, only /dev/hda11 and /dev/hda12 can be seen; the new hot spare does not stay permanently in the RAID array.
Please help!
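
For reference, a failed member is not re-added on its own; below is a minimal sketch (run as root, same device names as above) of adding it back by hand and recording the array in /etc/mdadm.conf, assuming RHEL5's init scripts reassemble arrays from that file at boot:
Code:

# if /dev/hda13 still shows up as a faulty member, drop it first
mdadm /dev/md0 --remove /dev/hda13
# add it back so it resyncs into the mirror
mdadm /dev/md0 --add /dev/hda13

# record the array definition so the same layout is reassembled after a reboot
# (overwrites any existing /etc/mdadm.conf; adjust if one is already in place)
echo 'DEVICE partitions' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf

The ARRAY line written by --detail --scan lets assembly at boot look for members by the array's UUID.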

vishesh 07-31-2009 02:39 AM

Dear saagar,

Are you creating RAID 1 with three disks?

saagar 07-31-2009 02:40 PM

What is wrong with creating RAID 1 with 3 disks?

By the way, the same error occurs with RAID 5 as well.

vishesh 08-01-2009 12:59 AM

Try this

#mdadm --create /dev/md0 --level=1 -x 1 --raid-devices=3 /dev/hda{11,12,13}
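
One note on that line (an aside, not tested here): with --raid-devices=3 plus -x 1, mdadm expects four devices in total, so on three partitions a two-way mirror with one hot spare would instead be created with --raid-devices=2 -x 1. A minimal sketch of that variant, reusing the device names from this thread:
Code:

# two active mirror halves plus one hot spare across the three partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 -x 1 /dev/hda{11,12,13}

# fail one active member; the spare should be pulled in and resynced automatically
mdadm --fail /dev/md0 /dev/hda11
watch cat /proc/mdstat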

Thanks

saagar 08-01-2009 10:44 AM

vishesh,
Thanks for your response. I will try that and get back to you. Thanks a lot.

saagar 08-02-2009 12:47 AM

Hi vishesh,
I tried it with the -x option. The command is accepted and works, but once I fail one of the array's disks, the spare becomes active and the failed one becomes a faulty spare. Then, once I reboot the machine and run mdadm --detail /dev/md0, both the faulty spare and the newly active device (which we had configured as a spare before) are removed.
The same thing happens for RAID 5.
Can you help with this?
Thanks.
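
One thing that might narrow this down (a sketch, assuming the md superblocks are still present on the partitions): compare what each partition's superblock records with what the assembled array reports after the reboot:
Code:

# print the md superblock on each partition: array UUID, device role, event count
mdadm --examine /dev/hda11 /dev/hda12 /dev/hda13

# compare with what the running array believes about its members
mdadm --detail /dev/md0

A member whose event count has fallen behind the others is normally left out when the array is assembled, which would match what you are seeing after the reboot.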

vishesh 08-02-2009 03:53 AM

Are there any relevant messages in the log files?

Thanks
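
For example (a generic sketch, with paths as on a stock RHEL5 install), the md driver logs assembly, member failures and resyncs to the kernel log and to syslog:
Code:

# kernel ring buffer: md/raid messages from the current boot
dmesg | grep -iE 'md0|raid'

# syslog: assembly and failure messages, including earlier boots
grep -iE 'md0|raid' /var/log/messages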

saagar 08-03-2009 12:24 PM

vishesh,
I will check it out and let you know.

