I have removed a small RAID controller from my Debian fileserver (I was getting I/O errors which took the whole array offline; I'll put a separate post up on the state of the disks).
It had a pair of 2TB disks in a RAID 1 configuration.
I'd like to mount these disks independently.
This is the response I get when attempting to mount one of the disks:
Code:
$ sudo mount /dev/sda /media/disk1
mount: unknown filesystem type 'ddf_raid_member'
sda and sdb are the drives in question; they're on the 6 Gbps ports now.
How can I mount these disks as individual non-raid devices?
I do have a backup, but I'd prefer not to resort to it, as the whole point of RAID 1 should be data redundancy.
Long term I'm thinking RAID 1 is a bit of a hassle, especially in hardware, as it's another link in the chain. So I'll probably revert to a single drive plus backups.
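Before forcing anything, it can help to see exactly which on-disk signatures are making mount refuse the disk (the 'ddf_raid_member' type comes from the signature probe). A minimal, read-only sketch; /dev/sda is the device named above, so adjust it for your system, and the script just bails out where it can't actually look:

```shell
#!/bin/sh
# Sketch: list the on-disk signatures that make mount refuse the member disk.
# /dev/sda is the device named in the thread; adjust for your system.
dev=/dev/sda

# bail out gracefully where we can't actually look (not root, no device, no mdadm)
[ "$(id -u)" -eq 0 ] || { echo "re-run as root to inspect $dev"; exit 0; }
[ -b "$dev" ]        || { echo "no block device $dev on this machine"; exit 0; }
command -v mdadm >/dev/null 2>&1 || { echo "mdadm not installed"; exit 0; }

blkid "$dev"  || true  # shows TYPE="ddf_raid_member", the string mount choked on
wipefs -n "$dev" || true  # -n = no-act: lists every signature and its offset, erases nothing
mdadm --examine "$dev" || true  # dumps the DDF superblock itself, if one is present
exit 0
```

None of these commands write to the disk, so they are safe to run on a live system before deciding how to proceed.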
DDF is "fake RAID": the controller's firmware stores the array metadata, but the actual RAID work is done in software, unlike a true hardware RAID controller. You should be able to assemble and mount the array using mdadm if the operating system hasn't already recognized it. As root, run the following command and post the output.
mdadm --query --detail /dev/md*
The command as written above fails because no /dev/md* nodes exist:
Code:
mdadm: cannot open /dev/md*: No such file or directory
but adapting it to my devices gives the following:
Code:
$ sudo mdadm --query --detail /dev/sd*
mdadm: /dev/sda does not appear to be an md device
mdadm: /dev/sdb does not appear to be an md device
mdadm: /dev/sdc does not appear to be an md device
mdadm: /dev/sdc1 does not appear to be an md device
mdadm: /dev/sdc2 does not appear to be an md device
mdadm: /dev/sdc5 does not appear to be an md device
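Those errors are expected: --query --detail interrogates an assembled array node (/dev/md0 and friends), while raw member disks such as /dev/sda need --examine, which reads the on-disk superblock. A sketch of the usual inspection sequence, using the two device names from the thread (adjust as needed); it exits quietly where it can't run:

```shell
#!/bin/sh
# Sketch: --detail is for assembled md devices; member disks want --examine.
[ "$(id -u)" -eq 0 ] || { echo "re-run as root"; exit 0; }
command -v mdadm >/dev/null 2>&1 || { echo "mdadm not installed"; exit 0; }

# what, if anything, has the kernel assembled already?
[ -r /proc/mdstat ] && cat /proc/mdstat

for dev in /dev/sda /dev/sdb; do     # the two mirror halves from the thread
    [ -b "$dev" ] || continue
    mdadm --examine "$dev" || true   # prints the DDF metadata if one is found
done

# ask mdadm to assemble whatever it recognized from those superblocks
mdadm --assemble --scan || echo "nothing assembled (older mdadm may lack DDF support)"
exit 0
```

If --assemble --scan succeeds, the array typically appears as a container plus a subarray (e.g. /dev/md126 and /dev/md127), and the subarray is what you would mount.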
A recent version of mdadm should be able to pick up the fake RAID, but I do not know whether Marvell's format is supported. Depending on what version was previously installed, it might have been using dmraid instead of mdadm. What happens when you run this command?
mdadm was installed fresh yesterday, so it's as recent as they get.
Previous attempts to examine the disks with mdadm reported no superblocks, or something similar; I didn't make a note at the time.
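For the original goal of mounting one mirror half on its own: DDF keeps its metadata at the end of the disk, so on a RAID 1 member the filesystem usually starts at the normal offset, and only the signature stops mount. A heavily hedged, destructive sketch follows; the device and mountpoint are the ones from the thread, the CONFIRM latch is invented here purely as a safety catch, and you should only run the real steps with a verified backup:

```shell
#!/bin/sh
# DESTRUCTIVE sketch: mount one RAID 1 mirror half alone by erasing the DDF
# signature. DDF metadata lives at the END of the disk, so the filesystem on a
# mirror member normally starts at the usual offset; the signature alone blocks
# mount. CONFIRM is a safety latch invented for this sketch: without it, the
# script only prints what it would do.
dev=/dev/sda
mnt=/media/disk1

if [ "${CONFIRM:-no}" != "yes" ] || [ ! -b "$dev" ]; then
    echo "Dry run. The steps would be:"
    echo "  wipefs -n $dev                              # list signatures, change nothing"
    echo "  wipefs --backup -a -t ddf_raid_member $dev  # erase only the DDF signature"
    echo "  mount -o ro $dev $mnt                       # try it read-only first"
    echo "Re-run as root with CONFIRM=yes to execute."
    exit 0
fi

wipefs -n "$dev"                             # inspect before touching anything
wipefs --backup -a -t ddf_raid_member "$dev" # -t limits erasure to the DDF signature
mount -o ro "$dev" "$mnt"                    # stay read-only until you trust the result
```

The --backup flag makes wipefs save the erased signature to a file in $HOME, so the step is reversible if the disk turns out not to mount cleanly. Doing this to one disk obviously breaks it out of the mirror for good.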
Out of frustration I've just reinstalled the RAID card (with different cables, in case they were the cause of the unreliability). I've rebooted and I'm now seeing the array, albeit as two devices:
The OP posted in another thread that the card is a 2-Port-PCI-Express-SATA-6-Gbps-Controller-Card~PEXSAT32. It offers Linux support, but note the limits: significant kernel API changes began around 4.11 (I noticed them stymie Realtek network drivers), and changes requiring manual driver patching keep occurring at almost every release. I wonder if this is related.
Also, RAID 1 is not just one disk being copied to another: as the error message hints, there is also metadata that defines how and where the data is located, compression or encryption if any, that sort of thing. For a proprietary card, that format may not be public; hence the general Linux exhortation to use software RAID for resiliency.
Hi all, I appreciate your thoughts on this thread. However, due to time constraints and the need to access the data promptly, I re-installed the disk array in its RAID 1 configuration to return things to operational capacity. All the data appears intact and available. I'm coming to the realization that RAID 1 might not be the best solution for accessibility after failure, and I'm actively looking into better offline backups such as fast removable storage (it's a tossup between a caddy for a hot-swappable disk, or a USB drive and enclosure).
This is a learning experience for me, I'll get there in the end. I'm also currently engaged in a dialogue with the Technical team at Star Tech, they seem enthusiastic to help, or at least identify the cause of my problems. This feels positive.
With hardware RAID, you basically have to use a hardware card, usually from the same manufacturer, and hope they keep supporting it. The deficiency is not in the RAID 1 specification itself but in the choice of hardware RAID, which does offer potential performance benefits, so it is not all bad as a choice. However, especially in a Linux environment, it may suffer from recoverability issues because of the proprietary entanglements.