LinuxQuestions.org


snowmobile74 06-01-2009 08:26 AM

Software RAID woes.
 
Greetings all, I've been chewing on this problem for a day and am finally taking a break. I'm moving my software RAID over from one computer to a new one. The old box had 4 x 1TB drives that were in a RAID 5 configuration. Currently I have a mix of 12 1TB drives that will be combined into a single RAID 5. The drives I currently have:
5 x 1TB in a RAID 5 (new)
1 x 1TB NTFS drive
2 x 1TB ReiserFS drives
4 x 1TB old RAID 5 *broken and only sees 2 drives* <-- this is what I'm trying to fix

Where I'm at:
Two of the drives that were part of the original RAID 5 had their partition tables wiped, seemingly at random. Now when I scan all my drives for the mdadm UUIDs it only finds two drives (oh crud!).
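
For reference, the scan I was doing was along these lines (the single-partition example device here is arbitrary):

mdadm --examine --scan
mdadm -E /dev/sdd1

--examine --scan prints an ARRAY line for every RAID superblock it can find on the devices it probes (depending on your mdadm.conf, typically everything in /proc/partitions), and -E on a single partition dumps that partition's superblock, with the array UUID, level and slot, if one is present.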

What I've done.
-So far the only step I've taken is to re-create a partition table on the two suspect drives in an attempt to reclaim my array, with no success (it's a single 0x83 partition grown to the entire disk; roughly the steps sketched below).
-I ran TestDisk; it didn't really help. I should have run it before I re-created the partitions.
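
For the record, re-creating the partition table went something like this (interactive fdisk; /dev/sdd stands in for whichever suspect drive, so double-check the device before writing anything):

fdisk /dev/sdd
   n  - new partition: primary, number 1, accept the default first/last cylinder so it spans the whole disk
   t  - change the type; I used 83 (plain Linux), though a RAID member would normally carry fd (Linux raid autodetect)
   w  - write the table and quit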


Can anyone out there give some sage advice to help me fix this debacle?


I'm going to try running http://www.cgsecurity.org/wiki/TestDisk to see what it reports.

Thanks in advance!


PS: I'm running Slackware Linux 11.
I consider myself knowledgeable about Linux but nowhere near a guru.

chrism01 06-01-2009 09:01 PM

Can we get some clarification:

1. you've got a total of 12 drives, all 1TB each, in 4 different format states: raw, ext3, ntfs, reiserfs ?
2. you want to put all these in one raid array?
3. you want raid 5?


4. Separate question from the previous ones: you want to try and recover the previously existing 4 x 1TB array's data first?

snowmobile74 06-01-2009 10:32 PM

Heh, sorry I was up really late trying to figure this out.

"4. separate qn from the prev qns: you want to try and recover the previously existing 4 x 1 TB array's data first? "

Yeah, that's all I really need help with. I did a little more looking and I think I found something that's similar to what happened to my array.

http://kev.coolcavemen.com/2007/03/h...d-superblocks/

Is re-making the array really the answer? Two of the drives are not recognized at present.

chrism01 06-01-2009 11:18 PM

That would probably work. Creating an array does just that, it doesn't remove any data (probably!).
I'd run fdisk -l first, just to see if the partition type is still 'fd' = Linux software RAID.
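
For example, something along these lines (the device name is only an example):

fdisk -l /dev/sdd

A partition that's still flagged as a RAID member shows Id 'fd' with the description 'Linux raid autodetect' in the listing; '83' would be just a plain Linux partition.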

snowmobile74 06-02-2009 10:14 PM

well shux
 
Okay, I tried that and it appears there's nothing on the new volume. I've tried running reiserfsck --rebuild-tree -S on /dev/md0 with no success. The program just exits without doing anything.

snowmobile74 07-02-2009 07:21 PM

Alright, so it's been a while since I posted, sorry about that. I've learned a TON about mdadm and Linux along the way with my troubles, and somehow I'm grateful. I'm posting this so hopefully some poor sap can google it and have an epiphany. One issue I ran into after I remade my entire array was that I had older drives mixed in with newer ones. Well, one of the old drives crapped out (funeral services were held) and I moved on to buying a new 2TB drive (to eventually replace ALL drives some day). Because I had one drive go offline while another decided to have problems, I used ddrescue to copy the old drive that was (mostly) working but would fail to recover with mdadm.

ddrescue, see here: http://www.gnu.org/software/ddrescue/ddrescue.html
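
Roughly what my copy looked like (I don't have the exact command in my notes, so the device names and mapfile path are just examples; be very careful about which is the source and which is the destination):

ddrescue -f -n /dev/sdd /dev/sdm /root/sdd-rescue.log
ddrescue -f /dev/sdd /dev/sdm /root/sdd-rescue.log

The first pass with -n grabs everything readable without retrying bad areas, and the second pass goes back and retries what was skipped. -f is needed because the destination is a block device rather than a regular file, and the log (mapfile) lets ddrescue resume instead of starting from scratch.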

Anyway, it did a GREAT job copying an entire hard drive's worth of data to the new drive and preserved the partition table. I had to reboot to remake the array. Afterwards mdadm gave me an error stating that drive /dev/sdd1 and drive /dev/sdm1 had the same superblock. Alright, fine, don't panic: I (carefully) zeroed the superblock of the BAD drive to keep it from being detected (simply changing the partition type didn't work).

mdadm --zero-superblock /dev/sdd1

Again I tried rebuilding the array, and for some odd reason /dev/hdd1 claimed it didn't have a superblock. Sure enough, mdadm -E (for examine) /dev/hdd1 revealed it had nothing. Checking to see whether the drive had gone offline, I did cfdisk /dev/hdd and it did show all the partition information. So for kicks I tried rebooting and giving it another shot:

mdadm -E /dev/hdd1

Strangely enough, this time it appeared with all its wonderful information about my array and its position in it. So after that I started a resync of the array:

mdadm --assemble --force --update=resync /dev/md126 /dev/hdd1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1

And it came up with all my data, rebuilding with 1 spare drive. I had to force a resync because the bad drive I had kept going offline during my rebuilds, knocking 2 drives out of my RAID 5 rather than just the one hiccuping drive.
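
If you want to keep an eye on a rebuild, something like

watch cat /proc/mdstat

refreshes the progress bar, speed and ETA every couple of seconds.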

Previously, my other drive /dev/hdd1 could have been saved (if the superblock really was destroyed) by doing:

mdadm --create /dev/md0 --level=5 --raid-devices=12 --chunk=128 --assume-clean <the 12 member partitions, in their original order>

using all the same settings the array had before; for example, I used 128k chunks for mine. BUT BEWARE: if you do --create again, the drives have to be listed in the EXACT same order you originally created the array with. You can find this out by simply doing

cat /proc/mdstat

and it should print out all of the information you need, including drive order. Best of luck to all who take on the Linux software RAID endeavour. I really like it myself.
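
While the array is assembled you can also grab the order (and chunk size) with something like

mdadm --detail /dev/md126

which lists each member partition against its RaidDevice number, along with the chunk size and superblock version. Saving that output somewhere off the array is cheap insurance in case you ever need the --create --assume-clean rescue above.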

