Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
I have a problem with a system (it's not actually mine; I didn't install it, but I'm the one who has to recover it). It has two SATA disks configured as RAID 0 using software RAID.
The person who installed it said he set up the RAID using Webmin, so I assume it was mdadm. Yesterday one of the drives suddenly died: it is no longer detected in the BIOS, and I can't boot the system (CentOS 5.3).
Is it possible to recover the disk or the array? As far as I know, RAID 0 doesn't do any mirroring, but I am sure the second disk is in good shape.
I tried to mount the second drive using an Ubuntu live CD, but it reported a wrong fs type. I tried ext3. Perhaps it is a software RAID filesystem? If so, how could I read the files inside it?
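One reason the mount fails is that the start of the second member holds mid-stripe data rather than an ext3 superblock, so there is no recognizable filesystem there. You can confirm the partition is an mdadm member with `mdadm --examine /dev/sdX` from the live CD. As a rough illustration, here is a hedged Python sketch that looks for the old v0.90 md superblock magic near the end of a disk image (CentOS 5-era mdadm typically wrote v0.90 metadata, which sits in the last 64 KiB-aligned block). The image path is a placeholder, and on a real block device you would need root and would have to get the size via `lseek` rather than `os.path.getsize`:

```python
import os
import struct

MD_MAGIC = 0xA92B4EFC      # magic number of the md v0.90 superblock
MD_RESERVED = 64 * 1024    # the superblock lives in the last 64 KiB-aligned block

def md090_superblock_offset(size):
    """Byte offset of the v0.90 superblock for a device of `size` bytes."""
    return (size & ~(MD_RESERVED - 1)) - MD_RESERVED

def has_md090_superblock(path):
    """Return True if the file/image at `path` ends with a v0.90 md superblock."""
    size = os.path.getsize(path)   # for a real block device, use lseek() instead
    off = md090_superblock_offset(size)
    if off < 0:
        return False
    with open(path, "rb") as f:
        f.seek(off)
        data = f.read(4)
    if len(data) < 4:
        return False
    (magic,) = struct.unpack("<I", data)
    return magic == MD_MAGIC

# has_md090_superblock("/path/to/disk.img")  # substitute your own image here
```

In practice `mdadm --examine` tells you the same thing (and much more), so treat this only as a way to understand where the metadata lives.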
RAID 0 stripes data across both drives in alternating chunks, so each drive holds roughly half of every file, and both drives are read simultaneously to reassemble the data in memory. This roughly doubles both the practical speed and the capacity of the volume. The problem is that, since every piece of data is split up and dispersed over the two drives, if one drive fails the rest of the data is unrecoverable (unless you manage to somehow get the failed drive working again).
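To make the striping concrete, here is a toy Python model of a two-disk stripe. The 4-byte chunk size is made up for the demo; real mdadm chunks are typically 64 KiB or larger:

```python
def stripe(data, chunk=4):
    """Split data into chunks and deal them alternately to two 'disks'."""
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    return chunks[0::2], chunks[1::2]

def reassemble(disk0, disk1):
    """Interleave the chunks back into the original byte stream."""
    out = []
    for pair in zip(disk0, disk1):
        out.extend(pair)
    out.extend(disk0[len(disk1):] or disk1[len(disk0):])  # trailing odd chunk
    return b"".join(out)

d0, d1 = stripe(b"The quick brown fox jumps over the lazy dog.")
assert reassemble(d0, d1) == b"The quick brown fox jumps over the lazy dog."
# If disk0 dies, disk1 alone holds only every second chunk:
print(b"".join(d1))   # -> b'quicown jumper tazy '
```

With one member gone, what survives is exactly that kind of alternating-chunk fragment, which is why the remaining drive can't simply be mounted.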
If the drive doesn't show up in the BIOS, it's probably the circuit board on the hard drive that has failed. It could also be a mechanical problem. Either way, fixing it requires replacing the damaged parts. As far as I know, a clean room (dust-free workshop) is required for anything that would expose the platters to the outside air (such as taking the screws out of the drive and opening it up). There are professional companies that can do this for you, but it's VERY expensive (usually over a thousand dollars). I've also seen instructions online for building a homemade, inexpensive clean-room-like environment for this type of work; I'm not sure how effective or expensive that is.
Sorry to be the bearer of bad news! I hope what I've written helps at least a little.
Edit: @Jerre Cope: the original poster indicated that this is a RAID0 (stripe) configuration, whereas the instructions you gave are for RAID1 (mirror). The information is good and would work on the right setup, but unfortunately it won't help the OP. Sorry!
Because the working drive only contains half of every block written to the array, the data on that drive is completely useless and unrecoverable. Please note that what you have is NOT "the first half" or "the second half" of the data, but rather it is a disk of nothing but half-pieces of the data. (It's like reading an entire book, but having only half of each word in the book - unintelligible.)
The gist of it is that unless you can get the failed drive working again (with its filesystem and data intact), the data on the remaining drive is effectively unreadable.
I can't really add much to DaneM's explanations here but, if you're already going to be rebuilding the server from scratch and then restoring the data, I definitely recommend using a RAID 1 mirror if at all possible... just in case you have another drive failure in the future.
If you need more space than a single drive can allow, you should also look into RAID0+1 (a mirrored stripe), or RAID5 (a stripe with parity that allows you to rebuild the array if only 1 drive fails). RAID6 is like a beefier version of RAID5 that still works if 2 drives fail, and allows you to have even bigger arrays. Cost-wise, RAID5 is probably your best option. (You get to use the space on all but one drive; RAID6 requires 2 extra drives, and RAID0+1 requires twice the number of drives that you want to actually use.)
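For comparison, the usable capacity of each level mentioned above works out as follows. This is just a quick sketch using the standard formulas for identical drives; the drive count and size in the example are arbitrary:

```python
def usable(level, n, size):
    """Usable capacity of an array of n identical drives of `size` each."""
    if level == "raid0":
        return n * size           # stripe: all the space, no redundancy
    if level == "raid1":
        return size               # mirror: one drive's worth, however many copies
    if level == "raid5":
        return (n - 1) * size     # one drive's worth of space goes to parity
    if level == "raid6":
        return (n - 2) * size     # two drives' worth of space goes to parity
    if level == "raid0+1":
        return n // 2 * size      # mirrored stripe: half the drives are copies
    raise ValueError(level)

for lvl in ("raid0", "raid1", "raid5", "raid6", "raid0+1"):
    print(lvl, usable(lvl, 4, 500), "GB usable from 4 x 500 GB")
```

With four 500 GB drives, RAID5 yields 1500 GB usable versus 1000 GB for RAID6 or RAID0+1, which is the cost argument above in numbers.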
Well, this might already be off topic, but I've wondered: I always set up my servers with RAID 1, but that was always on branded servers such as IBM and Dell that support hot-swap. I remember once a disk broke and we simply swapped it out.
My question is: if you were using software RAID such as mdadm in RAID 1 and the disks are not hot-swappable, do you restore a failed disk the same way? That is, simply power the machine off, replace the broken drive, and power it back on? Does it work that way?