Help with mounting a dd-rescue image of a partition from a RAID drive
I have searched extensively, but can't seem to find the answer to my question. I still consider myself a new user, so I apologize for any wrong terminology I may use.
I have a NAS box that had two drives in a JBOD configuration. I had to reset the box, and now it won't boot with the disks installed. So I am hoping to pull the original data off of them, reinitialize the box with two new drives, and then copy the data back over. The original drives are fairly new and seem to be intact, so I don't expect any read errors when copying the data. Also, mdadm --examine did verify the RAID type is linear, so the data shouldn't be striped across the two drives.
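The check was roughly this (the device name is just an example; use whichever partition is the RAID member):
sudo mdadm --examine /dev/sdg3
# the "Raid Level :" line in the output reads "linear" for this kind of set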
On a different computer, I used dd-rescue to copy the single data partition off one of the drives, and now I am trying to mount it. From everything I have read, I should just be able to mount it normally. But I am getting errors.
I tried:
user@SERVER:sudo mount -o loop sdg3.img /media/img
mount: unknown filesystem type 'linux_raid_member'
I tried specifying the file type:
user@SERVER:/storage/data$ sudo mount -t ext4 -o loop sdg3.img /media/img
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
I've also tried ext3 and ext2 with the same results.
From what I have read, since I just have the individual partition, I should be able to just mount it. However, I can't seem to.
I think my final option would be to use dd-rescue on the second HDD, and try and rebuild the raid locally, but I would prefer to get the data off the individual disks if I can.
A JBOD set must have volume metadata stored somewhere, and if it's stored at the beginning of the partition, mount won't be able to find the file system.
The error message from mount (unknown filesystem type 'linux_raid_member') seems to indicate that this is indeed the case with your drive image. You'll either have to assemble the RAID/JBOD device and mount the md device, or figure out how far into the partition the actual file system begins and use the "offset=" parameter when mounting it.
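As a rough sketch of that second approach, with names and numbers purely illustrative (this assumes 1.2-style metadata, where the data area starts some way into the member; mdadm can usually be pointed straight at the image file):
sudo mdadm --examine /storage/data/sdg3.img
# look for a line like "Data Offset : 262144 sectors" in the output,
# then mount with the offset converted to bytes (sectors * 512):
sudo mount -o loop,ro,offset=$((262144 * 512)) -t ext4 /storage/data/sdg3.img /media/img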
Is that the case even if I only copied the one particular partition that the data is on? The original disk had multiple partitions and I only copied partition number 2. Can I rebuild if I copied partition #2 from both drives, or will I need to make copies of all the partitions?
Here is the output from parted for the drive in question. I copied partition #2.
Quote:
From what I have read, since I just have the individual partition, I should be able to just mount it. However, I can't seem to.
That might be true if you had the partition the filesystem was created on - but mdadm interposes another block device layer (/dev/md0, say) that mkfs is run on.
So I imagine you would need to associate a loop device with that image and proceed from there ("man losetup"). Then you would need to try to assemble the array degraded and then mount the filesystem. That will probably fail, as the filesystem will be incomplete.
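As a sketch only (device names will differ, and whether an incomplete linear set will actually start is another matter):
sudo losetup --find --show --read-only /storage/data/sdg3.img
# prints the loop device it picked, e.g. /dev/loop0
sudo mdadm --assemble --run /dev/md0 /dev/loop0
# --run asks mdadm to start the array even though a member is missing
sudo mount -o ro /dev/md0 /media/img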
Testdisk may get the files off at this point, but it can be time consuming recognising and renaming all the files - and you will need as much space again to write the recovered files.
Quote:
I think my final option would be to use dd-rescue on the second HDD, and try and rebuild the raid locally
Much better idea IMHO.
Edit: slow typing while the above discussion was going on.
Thanks. I assume I will need to copy all 5 partitions in order to properly rebuild the RAID?
Or, I just thought of another way I could get this done. I think I should be able to remove the zpool in my second computer and pull the drives, replace them with the two RAID drives, assemble the RAID in that computer, and copy off the data. Does that sound more feasible?
Normally I would say stick the physical drives in and see if the array(s) will rebuild. But if the drives themselves are suspect (something must have caused the failure), I'd try and get images of (all) the partitions and try recovery from them.
I've no idea why there are so many partitions - if they "pair up" on both drives I'd start with number 1 and see if it has array info on it. Just a matter of legwork from here.
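If it does come to imaging everything, each partition would be something along these lines with GNU ddrescue (device, destination and mapfile names are just examples):
sudo ddrescue /dev/sdg1 /storage/data/sdg1.img /storage/data/sdg1.map
# repeat for the remaining partitions on this drive, and likewise for the second drive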
Quote:
Normally I would say stick the physical drives in and see if the array(s) will rebuild. But if the drives themselves are suspect (something must have caused the failure), I'd try and get images of (all) the partitions and try recovery from them.
I've no idea why there are so many partitions - if they "pair up" on both drives I'd start with number 1 and see if it has array info on it. Just a matter of legwork from here.
The drives are relatively new, and were working fine earlier. I think the NAS box firmware got corrupted, which is what started this whole mess. I ended up having to reflash the firmware.
I just removed the 4-disk pool from my other server and stuck these two RAID disks in. I was able to mount them with no problems. I think I was too scared to try this earlier.
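For the record, with both members present, assembling and mounting should boil down to something like this (the md device name and mount point are only examples, not necessarily what my system used):
sudo mdadm --assemble --scan
cat /proc/mdstat        # confirm the array came up
sudo mount -o ro /dev/md0 /mnt/nasdata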
Thanks everyone for the suggestions! I will be copying this data off. This is what I get for procrastinating on getting a backup done!