Old 12-24-2006, 06:55 AM   #5
bnuytten
raid5 + LVM

I experienced a similar problem: RAID5 on 4 drives with LVM+ext3 on top. After manually overriding the array's state
Code:
echo "clean" > /sys/block/md0/md/array_state
I checked the event counts on all the disks and on the array itself:
Code:
[root@juno ~]# mdadm --examine /dev/hd[bdfh]1 | grep Event
         Events : 0.87645
         Events : 0.87645
         Events : 0.87644
         Events : 0.87462
[root@juno ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Sat Nov  4 02:38:57 2006
     Raid Level : raid5
    Device Size : 156288256 (149.05 GiB 160.04 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Dec 24 07:31:10 2006
          State : active, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 916d12f4:0df2cd68:594a1080:6da31000
         Events : 0.87645

    Number   Major   Minor   RaidDevice State
       0       3       65        0      active sync   /dev/hdb1
       1      22       65        1      active sync   /dev/hdd1
       2      33       65        2      active sync   /dev/hdf1
       0       0        0        0      removed
As you all know, you need n-1 good drives in a RAID5 array to recover the data; in this case that means three. But according to the event counts, only two drives were fully up to date. So I took the three best, i.e. the three whose event counters were closest to that of the md array itself. Using the same technique described above, I was able to recover all my data. Phew!
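For anyone who lands on this post without the earlier replies: the general shape of that recovery with mdadm is sketched below. This is not a copy of the exact commands from above; the device names are from my box, the volume group / logical volume path is made up, and you should image the disks first if you possibly can.
Code:
# stop the half-assembled array first
mdadm --stop /dev/md0

# force-assemble from the three members with the freshest event counts
# (hdh1 is left out because its counter was far behind the others)
mdadm --assemble --force /dev/md0 /dev/hdb1 /dev/hdd1 /dev/hdf1

# bring the LVM volume group back online and do a read-only filesystem check
# (VolGroup00/LogVol00 is only an example path -- use your own VG/LV names)
vgchange -ay
fsck.ext3 -n /dev/VolGroup00/LogVol00

# once the data is confirmed good, re-add the stale disk so the array rebuilds
mdadm /dev/md0 --add /dev/hdh1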