
javaholic 12-14-2008 07:36 PM

mdadm forced resyncing to activate spare drive
OK, so following on from my other threads about my RAID5: I have managed to get the following output from cat /proc/mdstat:


Personalities : [raid6] [raid5] [raid4]
md0 : active(auto-read-only) raid5 sda1[0] sdd1[4](S) sdc1[2] sdb1[1]
      2930279808 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

unused devices: <none>

How would you go about making sure that sdd1 (partition type FD, Linux raid autodetect) is resynced and added correctly?
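One thing that stands out in that mdstat output is the active(auto-read-only) state: md will not start rebuilding onto a spare while the array is read-only. A hedged sketch of kicking the resync off (assuming /dev/md0 and root privileges; adjust device names to your setup):

```shell
# The array is marked active(auto-read-only), which prevents md from
# starting any rebuild. Switching it read-write should let the spare
# (sdd1) begin resyncing.
mdadm --readwrite /dev/md0

# Watch the recovery progress; a "recovery = X%" line should appear.
cat /proc/mdstat

# Confirm sdd1 has moved from spare to an active, syncing member.
mdadm --detail /dev/md0
```

The array goes auto-read-only when nothing has written to it yet; the first write, or the explicit --readwrite above, takes it out of that state.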

A similar SATA card can be found here (a similar model without the link-status LEDs, but using the same SiI3114 chip):


I think it is likely that my software fakeraid card doesn't support configurations larger than 2TB: when I try to set it up in its setup utility, I get a proposed size of 748GB instead of the 2748GB it should be. What do you think? Is this the likely cause of my problems, after many months of reading everything I can find on mdadm?
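One way to narrow this down: compare the capacity the Linux kernel sees for each member disk against what the card's setup utility reports. A hedged sketch (assuming the four disks are sda through sdd, and root privileges):

```shell
# If the kernel sees each disk's full size here, the 2TB limit only
# affects the card's own fakeraid mode, and plain mdadm built on the
# raw partitions should be unaffected by it.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    size_bytes=$(blockdev --getsize64 "$disk")
    echo "$disk: $((size_bytes / 1000000000)) GB"
done
```

Since mdadm works directly on the kernel's block devices, a size limit in the card's BIOS RAID mode would not by itself explain the spare never activating.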

fmua 12-15-2008 07:01 AM


I did it the following way:

mdadm /dev/md0 -f /dev/sdc1    # mark the disk as faulty
mdadm /dev/md0 -r /dev/sdc1    # hot-remove /dev/sdc1
mdadm /dev/md0 -a /dev/sdc1    # re-add the disk to the array, or add a newly detected disk

mdadm -D /dev/md0
shows the details of the array.
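After the -a (add) step, the rebuild onto the re-added disk can take hours on terabyte drives. A hedged sketch of monitoring it (assuming /dev/md0):

```shell
# Rebuild progress shows up as a "recovery = X%" line in /proc/mdstat.
cat /proc/mdstat

# Or poll it every few seconds until the rebuild completes.
watch -n 5 cat /proc/mdstat

# The array state should go from "clean, degraded, recovering"
# back to plain "clean" when the rebuild finishes.
mdadm --detail /dev/md0 | grep -i state
```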

javaholic 12-15-2008 07:12 AM

So, I haven't given up on getting all four drives working as they should, but I figured I should try a less complex setup using only three of the drives, so I did:


mdadm -Cv /dev/md0 -n3 -l5 /dev/sda1 /dev/sdb1 /dev/sdc1
I then re-jigged the fakeraid setup via a reboot to reflect the changes. After that I edited mdadm.conf (though that isn't strictly necessary) and then did an assemble using:


mdadm --assemble --verbose /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
But I still get the following from cat /proc/mdstat:


Personalities : [raid6] [raid5] [raid4]
md0 : active(auto-read-only) raid5 sdc1[3](S) sdb1[1] sda1[0]
      1953519872 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

unused devices: <none>

With a spare listed again, which still doesn't help my setup.

javaholic 12-15-2008 07:24 AM

I wrote my previous post at around the same time as your reply.

OK, so I did that, and checking cat /proc/mdstat after a few minutes gave me:

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[3](S) sdb1[4](F) sda1[0]
      1953519872 blocks level 5, 64k chunk, algorithm 2 [3/1] [U__]

unused devices: <none>

One failed drive and one spare.

I have been fighting with mdadm for months now.

The detailed description (mdadm --detail /dev/md0) shows this:


        Version : 00.90
  Creation Time : Mon Dec 15 10:50:54 2008
    Raid Level : raid5
    Array Size : 1953519872 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
  Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Dec 15 12:13:21 2008
          State : clean, degraded
 Active Devices : 1
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 1

        Layout : left-symmetric
    Chunk Size : 64K

          UUID : 5224c5ea:00a41204:7b403d38:22f8ac8c (local to host bible)
        Events : 0.8

    Number  Major  Minor  RaidDevice State
      0      8        1        0      active sync  /dev/sda1
      1      0        0        1      removed
      2      0        0        2      removed

      3      8      33        -      spare  /dev/sdc1
      4      8      17        -      faulty spare  /dev/sdb1

The Debian install this is running under is almost completely clean; the only change made was removing the default Apache settings (the apache2-default placeholder home page).
