LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Hardware (http://www.linuxquestions.org/questions/linux-hardware-18/)
-   -   dirty degraded md raid array (http://www.linuxquestions.org/questions/linux-hardware-18/dirty-degraded-md-raid-array-695296/)

edgjerp 01-06-2009 06:06 AM

dirty degraded md raid array
 
I find myself with a degraded RAID array that reads as dirty, so I cannot start it. I have a "spare" partition (until yesterday this was the 5th segment, but for some reason my system now keeps it as a spare in a degraded array). I know the drive itself is good, as I have two other md RAID arrays that each have a partition on it.

mdadm -E /dev/sdX shows me that the different members of the array do not agree about how many devices are present. There are 5 devices in the array: 2 of them report 5 working devices (4 active, 1 spare), while the other 3 report only 4 working devices, all active.

How can I make the devices agree on this?
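One common way out of this situation is a forced assembly: `mdadm --assemble --force` ignores the event-count mismatch between the superblocks and rewrites the stale ones so all members agree again, at the risk of losing the last few writes. A hedged sketch, assuming the array is /dev/md2 and using the member device names that appear later in this thread (substitute your own):

```shell
# Stop the half-assembled array first.
mdadm --stop /dev/md2

# Force-assemble from the four up-to-date members; --force updates the
# out-of-date superblocks so the members agree again.
mdadm --assemble --force /dev/md2 /dev/sdc1 /dev/sdb3 /dev/sdg1 /dev/sdf1

# Once md2 is running (degraded), re-add the spare so it rebuilds into
# the empty slot.
mdadm /dev/md2 --add /dev/sdd3
```

These commands need root and the actual block devices, so test on your own hardware; the rebuild after `--add` can take a while and its progress shows up in /proc/mdstat.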

edgjerp 01-07-2009 02:51 PM

Is it not possible to remove any reference to segment 4, the one that is marked as "removed"?


Code:

mdadm -D /dev/md2

/dev/md2:
        Version : 00.90
  Creation Time : Thu May  1 22:41:34 2008
     Raid Level : raid5
  Used Dev Size : 14651136 (13.97 GiB 15.00 GB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Mon Jan  5 15:39:03 2009
          State : active, degraded, Not Started
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 8c00dfaf:a414eba5:fa99d161:76122a73
         Events : 0.853386

    Number  Major  Minor  RaidDevice State
      0      8      33        0      active sync  /dev/sdc1
      1      8      19        1      active sync  /dev/sdb3
      2      8      97        2      active sync  /dev/sdg1
      3      8      81        3      active sync  /dev/sdf1
      4      0        0        4      removed

      5      8      51        -      spare  /dev/sdd3
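Before forcing anything, it can help to see exactly which superblocks disagree. A sketch of a loop over the members (device names assumed from the -D output above) that prints the fields that matter; the members whose "Events" counter lags behind are the out-of-date ones:

```shell
# Hedged sketch: dump the relevant superblock fields from each member.
# The "removed" entry in -D is just an empty slot; it has no superblock
# of its own, so only the real member devices are examined here.
for dev in /dev/sdc1 /dev/sdb3 /dev/sdg1 /dev/sdf1 /dev/sdd3; do
    echo "== $dev =="
    mdadm -E "$dev" | grep -E 'Events|State|Raid Devices|Working Devices'
done
```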

dmesg gives me:

Code:

[ 3802.752955] raid5: device sdg1 operational as raid disk 2
[ 3802.752960] raid5: device sdf1 operational as raid disk 3
[ 3802.752963] raid5: device sdb3 operational as raid disk 1
[ 3802.752965] raid5: device sdc1 operational as raid disk 0
[ 3802.752967] raid5: cannot start dirty degraded array for md2
[ 3802.752970] RAID5 conf printout:
[ 3802.752972]  --- rd:5 wd:4
[ 3802.752973]  disk 0, o:1, dev:sdc1
[ 3802.752975]  disk 1, o:1, dev:sdb3
[ 3802.752976]  disk 2, o:1, dev:sdg1
[ 3802.752978]  disk 3, o:1, dev:sdf1
[ 3802.752980] raid5: failed to run raid set md2
[ 3802.752981] md: pers->run() failed ...
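The "cannot start dirty degraded array" refusal is deliberate: on a RAID5 that is both degraded and dirty, parity may be inconsistent, so the kernel will not auto-start it. If the array must come up at boot regardless (for example, if it holds the root filesystem), the md driver has a documented override; a hedged sketch of the boot-time configuration (not commands to run):

```shell
# On the kernel command line (md compiled into the kernel):
#   md_mod.start_dirty_degraded=1
#
# Or, with md loaded as a module, in the modprobe configuration:
#   options md_mod start_dirty_degraded=1
#
# Caveat: data that was mid-write when the array went down may be
# corrupt, so a filesystem check afterwards is advisable.
```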


