Trying to understand if I got these Linux Terminal commands right to backup RAW RAID drives before attempting recovery
Wow, it's been a while, but I finally managed to try this after having a lot of problems finding parts and assembling the new system I was going to do the recovery on.
I was able to image the drives OK, but I can't seem to assemble them into a raid afterwards with either dmraid or mdadm.
I used the following to image the drives (TempBackup was the 3TB HDD and sdc was the USB bay I used for each RAID0 disk):
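The exact command wasn't quoted in the post, but imaging a raid member with dd typically looks like the commented line below (device and mount paths are assumptions, not the poster's actual command). The runnable part operates on a scratch file standing in for the real disk:

```shell
# Sketch of the imaging step (names assumed, run as root against a real disk):
#   sudo dd if=/dev/sdc of=/media/caztest/TempBackup/drive1 bs=1M conv=noerror,sync status=progress
# conv=noerror,sync keeps going past read errors, padding bad blocks with zeros.
# Runnable demonstration on a scratch file instead of real hardware:
dd if=/dev/zero of=fake_disk bs=1M count=2 2>/dev/null       # stand-in for /dev/sdc
dd if=fake_disk of=drive1 bs=1M conv=noerror,sync 2>/dev/null # image it, bit for bit
cmp -s fake_disk drive1 && echo "image matches source"        # → image matches source
```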
I then used "sudo losetup -f -r /media/caztest/TempBackup/drive1" (and the same for drive2) to attach each of the resulting images as read-only loop devices; they came up as loop0 and loop1.
However, after that, I can't seem to get the two loopback devices to assemble into a raid with either dmraid or mdadm. "sudo dmraid -ay" just claims there are no raid devices, and "sudo mdadm --assemble /dev/md0 /media/caztest/TempBackup/drive1 /media/caztest/TempBackup/drive2" complains there is no superblock data on loop0. However, running mdadm --examine on either the image file or the loopback device created from it shows that the first drive/image does have a superblock, though the second one does not, which seems correct to me:
Have you enabled the Intel RAID support (RST or IMSM) in the firmware of the new machine? My understanding is that mdadm will honour metadata exposed by the Intel module, but has no knowledge of the internal on-disk structure.
No, I wasn't aware I would need to enable the motherboard's raid to do this in software like this. I thought the software would handle the raid using its own methods?
EDIT: It doesn't appear to have made any difference unfortunately.
Last edited by Cyber Akuma; 04-04-2021 at 12:40 AM.
I would expect you would have to insert the original 2 SSDs into the new motherboard. With a bit of luck they should re-assemble automatically. One of the problems of using proprietary solutions.
At least you now have backups if you need to recreate the disks.
I was trying to avoid using the disks directly just in case something goes wrong, or in case I misunderstand whatever new UI the motherboard has and create a new array instead of importing my old one. So I wanted to try this method first, using images, just to be safe.
I managed to get it to work with the images themselves, figured might as well explain how if it could help anyone else someday.
I had someone suggest to me that since mdadm is intended to create its own software raids, it would not be able to read the superblock of the Intel hardware raid, and in order to mount it I would have to use the sort of legacy/manual --build command instead of --assemble, and manually set the raid type, disks, stripe size, etc.
I tried looking up information on the raid, and if I had posted on any forums about it when I was first making it in 2012. I saw that Intel tends to default to 128K stripe sizes for raids, but that I had apparently experimented with 32 and 64.
So I tried 64 first with: "sudo mdadm --build /dev/md0 --chunk=64 --level=0 --raid-devices=2 /dev/loop0 /dev/loop1"
It mounted the raid, and I saw all my partitions displayed properly in lsblk, but it could not read any of them.
So I tried stopping it and re-building it with 128 instead of 64... and that appears to have worked. It mounted the "System" partition properly and I can read its data, all using read-only clone images of the raid disks, without accessing the disks themselves or even having them plugged in.
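The stop/re-build sequence is the pair of mdadm commands in the comments below (taken from the thread; run as root). The runnable part is a toy demonstration of why the chunk size has to match: RAID0 interleaves fixed-size chunks across the members, so reassembling with the wrong chunk size scrambles everything past the first chunk. Here 4-byte "chunks" stand in for the real 128K ones:

```shell
# The real commands (run as root; loop device names as in the thread):
#   sudo mdadm --stop /dev/md0
#   sudo mdadm --build /dev/md0 --chunk=128 --level=0 --raid-devices=2 /dev/loop0 /dev/loop1
# Toy demonstration of RAID0 striping with 4-byte chunks across two members:
printf 'AAAACCCC' > member0   # holds chunks 0 and 2 of the array
printf 'BBBBDDDD' > member1   # holds chunks 1 and 3
# Reassemble by interleaving chunks -- what --build --chunk does at scale:
for i in 0 1; do
  dd if=member0 of=array bs=4 skip=$i seek=$((2*i))   count=1 conv=notrunc 2>/dev/null
  dd if=member1 of=array bs=4 skip=$i seek=$((2*i+1)) count=1 conv=notrunc 2>/dev/null
done
cat array   # → AAAABBBBCCCCDDDD -- only the original chunk size yields readable data
```

Interleaving with bs=8 instead would produce AAAACCCCBBBBDDDD, which is exactly the "partitions show up but nothing is readable" symptom from the 64K attempt.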
Now I just have to clone this virtualized RAID drive to a physical drive. Not just clone that partition mind you, but the whole drive, as it was a boot/OS drive with a GPT partition table and multiple partitions. Going to have to Google how to do that; I assume one would use dd? I've never used it manually to clone a drive, especially along with its boot information instead of just a partition.
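A whole-drive clone of the assembled array is a single dd from the md device, as in the commented line below (the target device name /dev/sdX is an assumption; triple-check of= before running, since dd will happily overwrite the wrong disk). This copies the GPT, every partition, and the unpartitioned space in one pass. The runnable part clones a scratch file and verifies the copy:

```shell
# Sketch of cloning the reconstructed array to a physical drive (run as root):
#   sudo dd if=/dev/md0 of=/dev/sdX bs=1M status=progress
# Runnable demonstration on scratch files:
dd if=/dev/urandom of=md0.img bs=64K count=4 2>/dev/null   # stand-in for /dev/md0
dd if=md0.img of=target.img bs=64K 2>/dev/null             # whole-"drive" copy
sha256sum md0.img target.img   # identical hashes => byte-for-byte clone
```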
Last edited by Cyber Akuma; 04-06-2021 at 12:56 PM.
I think at this point, instead of cloning it directly with dd, you might use clonezilla and do a backup / clone of the raid array contents.
Then restore that image to the new drive after you have created the partitions, etc.
Since this is the OS, you still will need to recreate the initramfs and tweak grub for the new hardware, but that would be a great start.
Wouldn't I need to boot into the Clonezilla boot disk? This array is only going to exist while this Linux session is active; it will go away if I reboot.
And it's a Windows install as seen in the screenshot that I am trying to clone off the reconstructed raid0.
Hmm, this is not good, got an error after dd took about 6 hours:
dd: error writing '/dev/sdc': No space left on device
1907730+0 records in
1907729+0 records out
2000398934016 bytes (2.0 TB, 1.8 TiB) copied, 19491.3 s, 103 MB/s
I remember back when I imaged this raid long ago to a HDD because I needed a temporary clone, I had no trouble doing it, but when I tried to clone it back, Clonezilla (IIRC that's just a frontend for dd, isn't it?) complained that the SSDs were too small by a few megabytes.
I assumed that this was due to provisioning or something (though since I had not resized the partitions, I found this odd)... but now that I am trying to copy the SSDs back to a HDD I am again getting this space error.
Does the 1907730 records in being one higher than the 1907729 records out mean that it just barely didn't fit? Or did dd just stop at 1907730 because the drive filled up at 1907729? If it did just barely not fit, then would trying to clone it to a 3TB drive work?
Also, the last 400MB or so of the RAID I am trying to clone is unpartitioned space, so I have no idea if that part was the end of the drive it tried to and failed to copy and it did get all the data, or if there could possibly be anything important or a boot record or anything at the very end of the drive's unpartitioned data, does anything do that?
dd copies the entire device bit for bit, including all the empty/unused blocks. The destination MUST be at least as large as the source.
Those numbers tell you that dd was able to read that last block but unable to write it. They say nothing about the actual size of the source or how much remains to be copied.
Use df to get an idea of the size of the source (with it mounted), then decide how much space you need to hold it all. Alternatively, you did not say the size of the devices that contained the raid at the beginning, but the destination should be at least as large as both those devices combined.
You also could look at the size of those 2 image files that make up the raid and add the numbers together to see how large the destination must be.
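Checking the sizes up front is two commands, sketched in the comments below (device and file names are assumptions). The runnable part demonstrates the arithmetic on scratch files standing in for the two member images:

```shell
# Real checks (target device name assumed; blockdev needs root):
#   sudo blockdev --getsize64 /dev/sdc    # capacity of the destination disk in bytes
#   stat -c %s drive1 drive2              # size of each raid member image in bytes
# The destination must be >= the sum of the member sizes (RAID0 has no redundancy).
# Runnable demonstration with scratch files standing in for the images:
dd if=/dev/zero of=drive1.img bs=1M count=3 2>/dev/null
dd if=/dev/zero of=drive2.img bs=1M count=3 2>/dev/null
total=$(( $(stat -c %s drive1.img) + $(stat -c %s drive2.img) ))
echo "array needs $total bytes"   # → array needs 6291456 bytes
```

If the real total comes out even slightly above 2,000,398,934,016 bytes (the usual capacity of a "2TB" drive), that would explain both this dd error and Clonezilla's earlier "too small by a few megabytes" complaint, and a 3TB destination would fit it comfortably.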
Last edited by computersavvy; 04-07-2021 at 10:13 PM.