Hello,
Since I moved my machine, I have been unable to mount my RAID array. At first, I thought sdb was in trouble, but since the next reboot, it has been sdc. Everything happened in a short time frame and there were no writes to my valuable data, so I think I can recover everything, provided I am careful with what I do.
'mdadm --detail /dev/md0' gives:
Code:
/dev/md0:
        Version : 01.02
  Creation Time : Wed Apr 15 00:00:14 2009
     Raid Level : raid5
  Used Dev Size : 976762432 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Apr 22 23:40:27 2010
          State : active, degraded, Not Started
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : server:d0  (local to host server)
           UUID : e84b8f97:fd7fd496:1f9adc88:b8915c4d
         Events : 299841

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      spare rebuilding   /dev/sdb
       2       0        0        2      removed
I notice a different state for sdb and sdc.
But although sdb is marked as "rebuilding", it cannot actually be rebuilding, since only one good drive is left. Next, 'mdadm --examine /dev/sd[abc] | grep Events' returns:
Code:
Events : 299841
Events : 299841
Events : 299838
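Since the grep output doesn't say which counter belongs to which drive, here is a small sketch I use to line them up (the events_of helper is my own, and it assumes the "Events : N" line format shown above):

```shell
# Hypothetical helper: read one `mdadm --examine` report on stdin
# and print just its Events counter.
events_of() {
    awk '/Events/ {print $3; exit}'
}

# Demo with the counter reported for sdc above:
printf 'Events : 299838\n' | events_of   # prints 299838
```

As root, the real check would then be: for d in /dev/sd[abc]; do printf '%s -> ' "$d"; mdadm --examine "$d" | events_of; done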
sdb seems up to date. So, I was about to recreate the array with 'mdadm --create /dev/md0 --assume-clean --level=5 --verbose --raid-devices=3 /dev/sda /dev/sdb missing' as explained here:
http://kevin.deldycke.com/tag/mdadm/ and then add the third drive for reconstruction.
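For completeness, here is the full sequence I have in mind. I have not run anything yet, and the --metadata flag and the final --add step are my own additions, not taken from that page:

```shell
# DANGEROUS, not yet run: recreate the degraded array without syncing,
# then pull in the third drive for reconstruction.
# --metadata=1.2 is my guess, to match the existing superblock
# version ("Version : 01.02" in the --detail output above).
mdadm --stop /dev/md0
mdadm --create /dev/md0 --assume-clean --metadata=1.2 --level=5 \
      --raid-devices=3 /dev/sda /dev/sdb missing
mdadm /dev/md0 --add /dev/sdc   # triggers rebuild onto sdc
```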
But I also noticed the following line in the report from 'mdadm --examine /dev/sdb':
Code:
Recovery Offset : 6400 sectors
Do you think it is safe to assume this drive is clean (its Events count matches sda's) and run the above command?
Thank you very much,
Yann