I have a RAID 5 XFS setup with two disks gone bad at the same time. I can assemble the array successfully:
mdadm -Afv /dev/md0 /dev/hd[ab]1
but I get XFS I/O read errors rather quickly once I start poking around the mounted array.
Running xfs_repair on the XFS RAID array proved unsuccessful because xfs_repair aborts on an I/O error:
xfs_repair -v /dev/md0
So I installed a blank hard drive and made a disk image with dd:
dd bs=256b conv=noerror if=/dev/hdb1 of=/dev/hdd1
(The original array was composed of /dev/hd[abc]1; /dev/hdd1 is the new blank disk.)
dd encountered one read error but completed successfully (or so it seems). However, for some reason the md superblock info did not survive the copy; my guess is that conv=noerror without sync drops the unreadable block and shifts everything after it, which would also displace the superblock near the end of the device.
After the dd, I ran:
mdadm -Av /dev/md0 /dev/hd[ad]1
which fails with a "no RAID superblock found" message.
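Presumably mdadm --examine is the right way to check what, if any, superblock metadata landed on the copy:
mdadm --examine /dev/hdd1    # inspect the dd'd copy
mdadm --examine /dev/hdb1    # compare against the failing original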
So then I assembled the array with one of the bad disks:
mdadm -Afv /dev/md0 /dev/hd[ab]1
and then hot-added the new dd'd disk:
mdadm -a /dev/md0 /dev/hdd1
This marked /dev/hdd1 as a spare, but the data can't be rebuilt because hdb1 throws an I/O error and the array marks hdb1 as faulty rather quickly.
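For reference, the array state and the failed rebuild can be watched with:
cat /proc/mdstat
mdadm --detail /dev/md0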
What I need is for /dev/hdd1 to be an *exact* image of /dev/hdb1. Then, theoretically, I can assemble the array from hda1 and hdd1, run xfs_repair on it, and hopefully recover most of the data.
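In other words, something like:
mdadm -Afv /dev/md0 /dev/hd[ad]1
xfs_repair -v /dev/md0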
So how can I modify the superblock info manually to make mdadm think /dev/hdd1 is a disk in the array?
Am I going down the right path or is there another method I should be trying?
I read somewhere that someone used strace to figure out the location of the superblock and then used dd's "skip" parameter to get the superblock info onto the new disk:
http://www.nabble.com/Re:-need-help-with-raid6-recovery-p1754352.html
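If I understand that trick, and assuming this array uses the old v0.90 metadata (and that hdb1 and hdd1 are exactly the same size), the superblock should sit in the last 64 KiB-aligned 64 KiB of the partition, so something like this might copy just that region (untested):
SECTORS=$(blockdev --getsz /dev/hdb1)      # partition size in 512-byte sectors
OFFSET=$(( SECTORS / 128 * 128 - 128 ))    # 64 KiB-aligned superblock offset, in sectors
dd if=/dev/hdb1 of=/dev/hdd1 bs=512 skip=$OFFSET seek=$OFFSET count=128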
Even though I'm a fairly proficient Java developer, I'm hoping I don't have to resort to writing Java to rebuild the info like this person did!
http://www.freesoftwaremagazine.com/articles/recovery_raid?page=0%2C1
Any advice would be gratefully received at this point! Thank you in advance.
Using IDE hard drives in RAID-5 is tricky. First, each IDE drive in the array has to be on its own channel, or else two drives will appear to fail instead of one. There is likely no way to get a RAID-5 array back once two drives have failed; the data is gone unless you have backups or don't mind spending close to $1000 on a data-recovery service. Second, the drives' built-in write cache has to be disabled so data is not lost on a reboot or power-down. Third, you need to specify hot spares in case a drive fails during operation.
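For example, a hypothetical three-disk RAID-5 with one hot spare, each drive the master on its own IDE channel (device names are illustrative):
mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
    /dev/hda1 /dev/hdc1 /dev/hde1 /dev/hdg1    # the fourth device becomes the spare
hdparm -W0 /dev/hda /dev/hdc /dev/hde /dev/hdg  # turn off on-drive write caching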
I have not used xfs_repair in a long time (two years), but I suggest running xfs_repair -n first to find out the best way to fix the drives before doing any real fixing. You probably need to zero the log. I have used xfs_repair with 100% success.
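That sequence would look something like this; note that -L discards whatever metadata updates are still sitting in the log, so use it only if xfs_repair refuses to run otherwise:
xfs_repair -n /dev/md0   # no-modify mode: only report what would be fixed
xfs_repair -L /dev/md0   # zero the corrupt log, then repair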
Your dd command will not do a true sector-by-sector copy. Hard drives use 512-byte sectors, so use bs=512. Also include sync (conv=noerror,sync) so unreadable blocks are padded rather than dropped and everything after them keeps its original offset. The hard drive you are writing the image to has to be the same model and brand.
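In other words, something like (same source and target as before):
dd if=/dev/hdb1 of=/dev/hdd1 bs=512 conv=noerror,sync
If it's available to you, GNU ddrescue is also worth a look for imaging failing disks; it retries bad areas and keeps a log of what it couldn't read.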