Repair ReiserFS on LVM after removal of one or more PVs
My setup is as follows:
I have a ReiserFS filesystem built on top of an LVM volume group spanning 12 RAID5 arrays.
I suddenly lost 4 of the arrays due to a hardware error, so now I can't activate the LVM volume group.
My question is: is there any way I can remove the broken PVs from the LVM and somehow repair the filesystem?
I'm thinking of something along the lines of vgreduce and reiserfsck --scan-whole-partition --rebuild-tree.
Basically, I was considering doing something like:
vgreduce vg1 /dev/mapper/raid6
vgreduce vg1 /dev/mapper/raid7
vgreduce vg1 /dev/mapper/raid8
vgreduce vg1 /dev/mapper/raid9
(these are the broken arrays)
and then
reiserfsck --scan-whole-partition --rebuild-tree /dev/vg1/disk1
I know I'm going to lose the data on the broken arrays, but at this point I'm just trying to recover the data on the healthy arrays.
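A variant of the plan above, using LVM2's documented option for dropping unavailable PVs in one step instead of naming each one. The command names and flags are real LVM2/reiserfsprogs options, but whether reiserfsck can rebuild anything useful from a volume missing a third of its extents is an open question, not something I have verified:

```shell
# Drop every PV that LVM can no longer find from vg1.
# This is the documented way to clean up a VG with lost PVs.
vgreduce --removemissing vg1

# Alternatively, activate the VG despite the missing PVs; the extents
# that lived on the lost arrays are mapped to an error target.
vgchange -ay --partial vg1

# Then let reiserfsck scan the whole device and rebuild the tree from
# whatever leaf nodes it can still find.  This is destructive and slow;
# if at all possible, image the surviving arrays first.
reiserfsck --scan-whole-partition --rebuild-tree /dev/vg1/disk1
```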
Thanks in advance.
Some system information:
:~# uname -a
Linux FileServer 2.6.18-6-686 #1 SMP Thu May 8 07:34:27 UTC 2008 i686 GNU/Linux
:~# mdadm -D /dev/md2 (md3/md4/md5 have the same layout, on partitions 2/3/4 of the same disks)
/dev/md2:
Version : 00.90.03
Creation Time : Fri Sep 26 06:02:43 2008
Raid Level : raid5
Array Size : 549422592 (523.97 GiB 562.61 GB)
Device Size : 183140864 (174.66 GiB 187.54 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Apr 1 11:15:15 2009
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 52530d51:8623ef3b:2f9c02ea:59be1f97
Events : 0.33626
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 49 1 active sync /dev/sdd1
2 8 33 2 active sync /dev/sdc1
3 8 1 3 active sync /dev/sda1
:~# mdadm -D /dev/md6 (md7/md8/md9 have the same layout, on partitions 2/3/4 of the same disks)
/dev/md6:
Version : 00.90.03
Creation Time : Wed Apr 1 10:11:40 2009
Raid Level : raid5
Array Size : 244187776 (232.88 GiB 250.05 GB)
Device Size : 122093888 (116.44 GiB 125.02 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 6
Persistence : Superblock is persistent
Update Time : Wed Apr 1 10:16:01 2009
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : fe3084c9:bb9f484e:4e126113:4708354e (local to host FileServer)
Events : 0.2
Number Major Minor RaidDevice State
0 0 0 0 removed
1 0 0 1 removed
2 8 145 2 active sync /dev/sdj1
3 8 81 - faulty spare /dev/sdf1
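Since md6 above shows only one active member out of three, one thing worth noting (an assumption on my part, not something tried on this box) is that mdadm can sometimes force-assemble an array whose members were kicked out but are still physically readable:

```shell
# Stop the broken array, then try to force-assemble it from its old members.
# --force tells mdadm to ignore stale event counts on kicked-out devices;
# this only helps if the underlying disks are actually still readable.
# The device names below come from the mdadm output above; the member
# shown as "removed" cannot be identified from that listing.
mdadm --stop /dev/md6
mdadm --assemble --force /dev/md6 /dev/sdj1 /dev/sdf1
```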
:~# mdadm -D /dev/md10 (md11/md12/md13 have the same layout, on partitions 2/3/4 of the same disks)
/dev/md10:
Version : 00.90.03
Creation Time : Tue Mar 31 21:55:00 2009
Raid Level : raid5
Array Size : 488375808 (465.75 GiB 500.10 GB)
Device Size : 244187904 (232.88 GiB 250.05 GB)
Raid Devices : 3
Total Devices : 2
Preferred Minor : 10
Persistence : Superblock is persistent
Update Time : Tue Mar 31 21:55:00 2009
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : bc8c15a7:d4d609b3:4e126113:4708354e (local to host FileServer)
Events : 0.1
Number Major Minor RaidDevice State
0 8 113 0 active sync /dev/sdh1
1 0 0 1 removed
2 8 97 2 active sync /dev/sdg1
All the arrays are encrypted with dm-crypt and then added to the LVM:
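For reference, the stacking described above (md array, then dm-crypt, then LVM PV) is typically built something like the following. The mapping name raid2 and the use of LUKS are assumptions, since the post does not show how the devices were originally created:

```shell
# Open the encrypted container on top of the RAID array
# (plain dm-crypt or LUKS -- LUKS shown here as an assumption).
cryptsetup luksOpen /dev/md2 raid2

# The resulting /dev/mapper/raid2 device is what becomes the LVM PV.
pvcreate /dev/mapper/raid2
vgextend vg1 /dev/mapper/raid2
```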
:~# pvs
/dev/md6: read failed after 0 of 4096 at 0: Input/output error
/dev/md7: read failed after 0 of 4096 at 0: Input/output error
/dev/md8: read failed after 0 of 4096 at 0: Input/output error
/dev/md9: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf1: read failed after 0 of 2048 at 0: Input/output error
/dev/sdf2: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf3: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf4: read failed after 0 of 4096 at 0: Input/output error
PV VG Fmt Attr PSize PFree
/dev/dm-1 vg1 lvm2 a- 523.97G 0
/dev/dm-10 vg1 lvm2 a- 465.75G 0
/dev/dm-11 vg1 lvm2 a- 465.75G 0
/dev/dm-12 vg1 lvm2 a- 232.87G 0
/dev/dm-2 vg1 lvm2 a- 523.97G 0
/dev/dm-3 vg1 lvm2 a- 523.97G 0
/dev/dm-4 vg1 lvm2 a- 523.99G 0
/dev/dm-5 vg1 lvm2 a- 232.87G 0
/dev/dm-6 vg1 lvm2 a- 232.87G 0
/dev/dm-7 vg1 lvm2 a- 232.87G 0
/dev/dm-8 vg1 lvm2 a- 465.75G 0
/dev/dm-9 vg1 lvm2 a- 465.75G 0
:~# vgs
/dev/md6: read failed after 0 of 4096 at 0: Input/output error
/dev/md7: read failed after 0 of 4096 at 0: Input/output error
/dev/md8: read failed after 0 of 4096 at 0: Input/output error
/dev/md9: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf1: read failed after 0 of 2048 at 0: Input/output error
/dev/sdf2: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf3: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf4: read failed after 0 of 4096 at 0: Input/output error
/dev/sdl: read failed after 0 of 4096 at 0: Input/output error
VG #PV #LV #SN Attr VSize VFree
vg1 12 1 0 wz--n- 4.78T 0
:~# lvs
/dev/md6: read failed after 0 of 4096 at 0: Input/output error
/dev/md7: read failed after 0 of 4096 at 0: Input/output error
/dev/md8: read failed after 0 of 4096 at 0: Input/output error
/dev/md9: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf1: read failed after 0 of 2048 at 0: Input/output error
/dev/sdf2: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf3: read failed after 0 of 4096 at 0: Input/output error
/dev/sdf4: read failed after 0 of 4096 at 0: Input/output error
/dev/sdl: read failed after 0 of 4096 at 0: Input/output error
LV VG Attr LSize Origin Snap% Move Log Copy%
disk1 vg1 -wi-a- 4.78T