Replacing a dead HDD in software RAID and LVM
I'm using Debian Etch.
I just installed Debian Etch on four HDDs (1 TB each). At first I used LVM alone, spanning all of the disks, without knowing the consequences: when one HDD died, all my data was lost. While looking for a way to make my server more resilient, I found this guide: Software RAID5 and LVM with the Etch Installer (http://dev.jerryweb.org/raid/).
Now my problem is that one HDD has died again. I thought I could just replace it and everything would work, but I was wrong. I've already attached a new HDD of the same brand and size, and partitioned it with fdisk to match the other disks. How do I make it work with mdadm? When I run a detail on md0 and md1, the dead disk is marked as "removed". How do I replace it, i.e., how do I put my new partitions into that "removed" slot? Does this mean I've lost my files again?
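Not an official answer from the thread, just a sketch of the usual mdadm steps for this situation, assuming the replacement disk is /dev/sdb, a healthy member is /dev/sda, and the arrays are /dev/md0 and /dev/md1 as described above:

```shell
# Copy the partition table from a healthy member to the new disk
# (assumption: sda is a surviving, correctly partitioned member).
sfdisk -d /dev/sda | sfdisk /dev/sdb

# Add the new partitions into the degraded arrays; mdadm will put them
# into the slots currently marked "removed" and start rebuilding.
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb2

# Watch the resync progress.
cat /proc/mdstat
mdadm --detail /dev/md0
```

A RAID5 array with only one failed member is degraded, not destroyed, so the data should still be there once the array is assembled and the rebuild completes.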
The disk that died provides sdb1 and sdb2 (my 2nd HDD). What I don't understand is that my 4th HDD (sdd1 and sdd2) was also automatically removed from both md0 and md1, and at startup it says "failure to assemble arrays in raid5". I think I just need to add sdd1 back into md0 and sdd2 into md1, but how do I get the arrays started again, and how do I then add sdb1 to md0 and sdb2 to md1? md0 is my boot, and md1 holds my /var, /tmp, swap and /home. The device mapper also says my volume group was not found, so I think my LVM is affected by those removed devices.
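With two members marked removed, a RAID5 array cannot start on its own, which would explain both the assembly failure and the missing volume group. A hedged sketch of one common recovery path, assuming sdd's data is actually intact (e.g. it was kicked out by a transient error) and using the device names from the post:

```shell
# Stop the half-assembled arrays, then force assembly from the members
# that still hold valid data (sda, sdc, and the dropped-but-intact sdd).
# --force tells mdadm to use members it would otherwise reject.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdc1 /dev/sdd1
mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sda2 /dev/sdc2 /dev/sdd2

# Once the arrays are running (degraded), rescan for LVM metadata and
# activate the volume group so /var, /home, etc. become available again.
vgscan
vgchange -ay
```

After the volume group is active again, the replacement partitions (sdb1/sdb2) can be added with `mdadm --add` as usual. Forcing assembly on a disk that really has stale or bad data can cause corruption, so it is worth checking sdd with `mdadm --examine /dev/sdd1` first.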
Please post your thread in only one forum. Posting a single thread in the most relevant forum will make it easier for members to help you and will keep the discussion in one place. This thread is being closed because it is a duplicate.