
LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   RAID 5 Recovery (https://www.linuxquestions.org/questions/linux-newbie-8/raid-5-recovery-932417/)

cyberblitz 03-02-2012 07:33 PM

RAID 5 Recovery
 
Hello all...


I'm a newish Linux user...

I have a WD ShareSpace NAS that is showing all four 1TB drives as failed following a power failure.



My situation: my wife plugged in a kettle and powered it on. For some reason it tripped the whole house, switching everything off, including the NAS drive. When we discovered the kettle was the problem, we switched everything back on and I ran worriedly to look at the NAS drive, which showed 4 failed drives. I did the normal things, switched it off and turned it back on... same again... I accessed the GUI interface, which displayed the drives as failed and gave me the option to remove or format them. At first I thought the drives were buggered, but after reading around I realised it was the RAID controller, not the drives, that had failed.


I accessed the NAS drive using a Gentoo LiveCD, connecting over SSH.

ssh root@"your NAS IP address" --- (without the quotes, substituting your NAS's IP address)

password: "welc0me" --- (without the quotes)

You will then be at the NAS's shell prompt over SSH.



I successfully logged into the system, typed 'fdisk -l', and to my delight could see all the drives pop up with their relevant partitions. That supported the idea that the drives were fine. For some reason, though, WD formats each drive with 4 partitions, with the last one holding all the data; it is across these fourth partitions that the chosen RAID level (5 in my case) is applied. After asking around, it seems WD uses the hard drives themselves to hold a very basic UNIX/Linux base system (hence the other 3 partitions), which suggests the RAID configuration is software based. This makes things a little complicated.
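For anyone following along, this is the sort of check involved while SSHed into the NAS (exact device names are whatever your system assigns, so confirm rather than assume):

fdisk -l                  # list every disk the kernel can see, with its partitions
cat /proc/partitions      # quick cross-check of the same information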



Next, I examined the drives using "cat /proc/mdstat" and "mdadm --examine --scan", which suggested the drives were good too (I don't have the outputs of these, sorry). I don't remember every detail, but the outputs showed the array members were clean, not degraded. It surprises me that I can access the system this way yet the system maintains the drives have failed!? They obviously have not.
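A rough sketch of those checks, for reference (the /dev/sda4 member name is an assumption about how the data partitions are numbered on the unit):

cat /proc/mdstat                  # which md arrays the running kernel has assembled, and their state
mdadm --examine --scan            # array UUIDs recorded in the member superblocks
mdadm --examine /dev/sda4         # per-member detail: RAID level, chunk size, event count, state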



Anyway, to be safe (and because I thought I didn't have the relevant hardware), I decided to take it to an IT store. They didn't do data recovery per se, but they'd had some success with RAID recovery, and they were cheap. I also made sure they cloned the drives before any recovery attempt, which they said they do as standard anyway. After a week of messing around they told me they could neither get into the GUI interface nor see the drives' partitions. That got me worried they'd mucked it all up. But, alas, they had not: when I got it back home it was still in the same state as when I sent it. I could still access and see the drives and their partitions, and get into the GUI interface. Why could they not???? I reset the GUI interface so it would be back at defaults for easy access... I got further in a few hours than they did in their week... Not taking things there again... Useless...





Now I have it home, I'm reattempting the recovery. I bought an additional 2TB drive to go with the 2TB drive I already had, and have them both in individual USB caddies. I did think about using dd to clone the relevant partitions to the other drives, but I've been persuaded to use ddrescue instead (see the comparison below): dd will stop as soon as it hits a bad block, whereas ddrescue carries on past bad areas and logs them for later retries.
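To illustrate the difference (device names here are just my source and destination partitions, and rescue.log is a name I picked for ddrescue's map file):

# dd gives up at the first read error unless told to carry on and pad with zeros:
dd if=/dev/sdb4 of=/dev/sdc1 bs=64K conv=noerror,sync

# ddrescue skips unreadable areas, records them in a map file, and can retry them later:
ddrescue -v -f /dev/sdb4 /dev/sdc1 rescue.log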



This is how I have set things up currently:

I have one of the 1TB drives from the NAS housed in one of the USB caddies and a 2TB drive in the other; the 2TB drive has 2 partitions on it. I've initiated ddrescue to clone the 4th partition of the NAS drive to the 1st partition of the 2TB drive.
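In case it helps anyone, splitting the destination drive into two halves of at least 1TB each can be done roughly like this (assuming the 2TB drive shows up as /dev/sdc; check with 'fdisk -l' first, because this wipes the drive):

parted --script /dev/sdc mklabel gpt
parted --script /dev/sdc mkpart primary 0% 50%
parted --script /dev/sdc mkpart primary 50% 100%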

ddrescue -v -f /dev/sdb4 /dev/sdc1 rescue.log

(-v = verbose, i.e. it displays what's going on during the process

-f = forces ddrescue to overwrite the destination, needed here because the destination is a device rather than a regular file

/dev/sdb4 = the 4th partition of the NAS drive (the one holding all the data you need). BE AWARE: THIS CAN CHANGE DEPENDING ON THE ORDER YOUR SYSTEM REGISTERS THE HARD DRIVES. In my case I have an internal hard drive (/dev/sda) and two USB drives, one holding the NAS disk (/dev/sdb), the other being the destination drive (/dev/sdc).

/dev/sdc1 = 1st partition of the destination hard drive.

rescue.log = the map file (any filename will do) in which ddrescue records its progress, so an interrupted copy can be resumed. Note that ddrescue does not take a dd-style "bs=64k" option; the 64k figure I got from 'mdadm --examine /dev/sdb4' is the RAID chunk size, which matters later when reassembling the array rather than for the copy itself.)

This is where I'm up to. I'm currently cloning the 4th partitions of the NAS drives onto 4 partitions spread across the two 2TB drives (making sure to preserve the order the drives had in the NAS housing). This ensures the original drives can't be damaged by any mistakes I may make.

I will then attempt to reassemble the RAID using mdadm.
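Roughly what I have in mind for that step (the clone partition names /dev/sdc1, /dev/sdc2, /dev/sdd1 and /dev/sdd2 are assumptions about how the two 2TB drives will show up, and the assembly is done read-only so nothing gets written while I check it):

# confirm the md superblocks survived the cloning
mdadm --examine /dev/sdc1 /dev/sdc2 /dev/sdd1 /dev/sdd2

# try to assemble the array read-only from the four cloned partitions
mdadm --assemble --readonly /dev/md0 /dev/sdc1 /dev/sdc2 /dev/sdd1 /dev/sdd2

# if it assembles, mount read-only and inspect the data (assuming a filesystem sits directly on the array)
mkdir -p /mnt/raid
mount -o ro /dev/md0 /mnt/raid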

My only concern with this method is that I'm not sure whether it will work, i.e. whether I can reassemble the RAID from two 2TB drives with 2 of the cloned partitions on each. I've searched the internet and asked around, but I haven't found an answer. Is this possible? I'm assuming at this point that it is...

Does anyone have any suggestions???

cyberblitz 03-03-2012 09:18 AM

RAID 5 continued...
 
OK, it turned out my 2TB drives weren't divided into 2 equal partitions... When I tried to copy the data across using ddrescue it threw a "too many files" error at me. I couldn't find anywhere what this meant, but I'm assuming I didn't have enough space on the destination partition...

So now I'm using dd to copy the partitions instead. I know my drives are clean and therefore have nothing to rescue per se, so I have no real need for ddrescue.
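For the record, this is the sort of dd invocation I'm using now (same assumed device names as before):

# straight copy of the NAS data partition onto the first partition of the 2TB drive
dd if=/dev/sdb4 of=/dev/sdc1 bs=64K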


So, I'm reattempting the copy before rebuilding the array...

I also wish I had an eSATA cable... my transfer rates are averaging 25MB/s going USB to USB. Painfully slow...

