Linux - Hardware

mdadm RAID 0 Recovery?

romeo_tango 06-08-2010 10:51 PM

mdadm RAID 0 Recovery?

I have a problem with a system (it's not actually mine; I'm not the one who installed it, but I'm the one who has to recover it). It has two SATA disks that were configured as RAID 0 using software RAID.

The person who installed it said he set up the RAID using Webmin, so I assume it's mdadm. Then yesterday one of the drives suddenly 'died': it is no longer detected in the BIOS, and I can't boot into the system (CentOS 5.3).

Is it possible to recover the disk / array? As far as I know, RAID 0 doesn't do any 'mirroring', but I am sure the second disk is in good shape.

I tried to mount the second drive using an Ubuntu live CD, but it reported a wrong fs type (I tried ext3). Perhaps it is a software RAID filesystem. If so, how can I read the files inside it?


Jerre Cope 06-09-2010 12:44 AM

You'll want to mount /dev/md0, not /dev/sda1 (md0 and sda1 being generic examples)

It sounds like GRUB was not installed on both drives, so that when one drive failed, the other had no boot record.

You can probably boot off a live CD, mount the drive and fix the grub.
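From the live CD, the array usually has to be assembled before it can be mounted. A minimal sketch, assuming the array can still be assembled (device and mount-point names here are generic examples, not taken from this system):

```shell
# Load the software RAID modules, then scan member superblocks
# and create the md device(s)
modprobe raid0 raid1
mdadm --assemble --scan

# Mount the md device itself, not the member partition
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid
```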

The following was taken from

and shows how to set up GRUB on both physical drives.


# grub

GNU GRUB version 0.95 (640K lower / 3072K upper memory)

[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename.]

grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
Running "install /boot/grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/boot/grub/stage2
/boot/grub/grub.conf"... succeeded

grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... yes
Checking if "/boot/grub/stage2" exists... yes
Checking if "/boot/grub/e2fs_stage1_5" exists... yes
Running "embed /boot/grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
Running "install /boot/grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/boot/grub/stage2
/boot/grub/grub.conf"... succeeded

grub> quit
Use mdadm to examine the array and add the new drive to the array.
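Examining and repairing the array might look like this (again, /dev/md0 and /dev/sdb1 are just example names):

```shell
# Inspect the RAID superblock on a member partition
mdadm --examine /dev/sda1

# Show the state of an assembled array (level, members, failed slots)
mdadm --detail /dev/md0

# After replacing the failed disk, add the new partition to the array
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch rebuild progress (RAID1/5/6 only; RAID0 cannot rebuild)
cat /proc/mdstat
```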

DaneM 06-09-2010 01:09 AM


RAID0 stores literally half of every piece of data on each drive, and accesses both drives simultaneously in order to put the data back together in memory and make it usable. This both increases the practical speed of the volume and makes it twice as large. The problem is that since each piece of data is split up and dispersed over two drives, if one of the drives fails, the rest of the data is unrecoverable (unless you manage to somehow get the other drive working again).

If the drive doesn't show up in BIOS, it's probably the circuit board on the hard drive that is bad. It could also be a mechanical problem. Either way, it requires one to replace the damaged/unusable parts. So far as I know, a clean room (dust-free workshop) is required to fix anything that would expose the platters to the outside air (such as taking screws out of the hard drive and opening it up). There are professional companies that can do this for you, but it's VERY expensive (over a thousand dollars, usually). I've also seen instructions online for making a homemade, inexpensive clean room-like environment for doing this type of work. I'm not sure how effective or expensive the latter is.

Sorry to be the bearer of bad news! I hope what I've written helps at least a little.


DaneM 06-09-2010 01:13 AM

Edit: @Jerre Cope: the original poster indicated that this is a RAID0 (stripe) configuration, whereas the instructions you gave are for RAID1 (mirror). The information is good and would work on the right setup, but unfortunately it won't help the OP. Sorry!

romeo_tango 06-09-2010 03:15 AM

@Jerre Cope
I've tried using the live CD, but no md devices show up at all. The Ubuntu live CD only displayed /dev/sda1 and /dev/sda2, and I cannot mount them because of the wrong fs type.

Yes, let's say we wouldn't want to restore the data on the first hard drive because of the price.

What I want to know is whether the data on disk 2 can be saved or not.
In other words, can the RAID 0 data on disk 2 be read in any way?

Thanks for the input Sirs :)

DaneM 06-09-2010 03:31 AM


Because the working drive only contains half of every block written to the array, the data on that drive is completely useless and unrecoverable. Please note that what you have is NOT "the first half" or "the second half" of the data, but rather it is a disk of nothing but half-pieces of the data. (It's like reading an entire book, but having only half of each word in the book - unintelligible.)
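A toy shell sketch of the same point: stripe some data in fixed-size chunks across two pretend "disks" and look at what either disk holds on its own (the chunk size and data here are made up purely for illustration):

```shell
# 24 bytes of data, striped in 4-byte chunks: odd chunks land on
# "disk1", even chunks on "disk2", alternating like real RAID0
data="AAAABBBBCCCCDDDDEEEEFFFF"
disk1=$(printf '%s' "$data" | fold -w4 | awk 'NR % 2 == 1' | tr -d '\n')
disk2=$(printf '%s' "$data" | fold -w4 | awk 'NR % 2 == 0' | tr -d '\n')
echo "disk1: $disk1"   # disk1: AAAACCCCEEEE
echo "disk2: $disk2"   # disk2: BBBBDDDDFFFF
```

Either drive by itself holds only alternating fragments, never a contiguous run of the original data.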

The gist of it is that unless you can get the failed drive to work again (including keeping the filesystem and data intact), the data on the remaining drive is effectively unreadable.



romeo_tango 06-09-2010 04:26 AM

Ah okay then.
Luckily we have a backup, although we still lost 3 days of data. :(

Thanks a lot Sir.

strick1226 06-09-2010 09:05 AM

I can't really add much to DaneM's explanations here but, if you're already going to be rebuilding the server from scratch and then restoring the data, I definitely recommend using a RAID 1 mirror if at all possible... just in case you have another drive failure in the future.

Good luck!

DaneM 06-09-2010 09:14 AM

Good call, strick1226!

If you need more space than a single drive can allow, you should also look into RAID0+1 (a mirrored stripe), or RAID5 (a stripe with parity that allows you to rebuild the array if only 1 drive fails). RAID6 is like a beefier version of RAID5 that still works if 2 drives fail, and allows you to have even bigger arrays. Cost-wise, RAID5 is probably your best option. (You get to use the space on all but one drive; RAID6 requires 2 extra drives, and RAID0+1 requires twice the number of drives that you want to actually use.)
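For reference, creating those arrays with mdadm looks roughly like this (device names are placeholders; adjust to your hardware):

```shell
# RAID1 mirror over two partitions (usable space: 1 disk)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# RAID5 over three partitions (usable space: n-1 disks)
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Record the layout so the arrays assemble at boot
mdadm --detail --scan >> /etc/mdadm.conf
```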

romeo_tango 06-09-2010 11:43 PM

Well, this might already be off topic, but I wondered: I always set up my servers using RAID1, but that was always on branded servers such as IBM and Dell that support hot-swap. I remember once a disk broke and we simply swapped it.

My question is: if you were using software RAID such as mdadm in RAID 1 and the disks are not hot-swappable, do you restore the disk the same way? I mean, simply turn the machine off, replace the broken drive, and then turn the power back on? Does it work that way?


DaneM 06-10-2010 01:23 PM

Yes, to my knowledge, it does work that way.
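A sketch of the usual non-hot-swap replacement procedure (device names are examples; sda is the surviving mirror, sdb the replacement):

```shell
# Mark the dead member failed and remove it from the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# Power off, physically replace the drive, power back on, then:
sfdisk -d /dev/sda | sfdisk /dev/sdb   # copy partition table from the survivor
mdadm --manage /dev/md0 --add /dev/sdb1

cat /proc/mdstat                       # watch the mirror resync
```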

romeo_tango 06-10-2010 08:19 PM

alright then. Thanks again :)
