mdadm and parity checking and advice for RAID expansion
Getting ready to redo my Plex server, and I will be taking my existing setup (eight 4TB drives in RAID 6 on a PERC H700) to either:

eight 4TB drives hooked to an HBA (IBM M1015 in 'IT' mode)
four 4TB drives hooked to my motherboard
using mdadm RAID 6 (eliminating the PERC) to combine them all
Linux Mint 18.2

or

twelve 4TB drives hooked to an Intel SAS expander, fed through an SFF-8087 male-to-male cable into an LSI 9361-8i, with the drives combined into a hardware RAID 6
Linux Mint 18.2
A couple of questions. My CPU will be an i7 7700K. How would rebuild times with mdadm compare to a dedicated RAID card? Or will the bottleneck be my hard drives' write speed?
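For reference, a minimal sketch of what the 12-drive mdadm build and the rebuild-speed tuning would look like; the device names /dev/sd[b-m] are placeholders for whatever the HBA and motherboard ports enumerate as:

  # Create the 12-drive RAID 6 array (device names are placeholders):
  sudo mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]

  # Watch resync/rebuild progress:
  cat /proc/mdstat

  # md throttles resync in favor of normal I/O; these sysctls (KB/s)
  # raise the floor and ceiling if the drives can keep up:
  sudo sysctl -w dev.raid.speed_limit_min=50000
  sudo sysctl -w dev.raid.speed_limit_max=500000

With a fast quad-core like the 7700K, the parity math is rarely the limit; the members' sequential write speed usually is.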
A few things bother me about mdadm, though. I know that Neil Brown stepped down from maintaining it and somebody else took over, but what if the new maintainer gets tired of it and nobody else wants to step up to the plate? With file systems like ZFS and Btrfs around, is mdadm a dead man walking?
Secondly, from reading around, it seems that when mdadm does a scrub (or is it a resync?), it always assumes the data blocks are correct: if a check fails, it doesn't 'vote' on which side is right, the data on the drives or the parity blocks (two of them with RAID 6); it just rewrites the parity. (Hope I'm explaining that right.) I know mdadm is used in a lot of boxes (Synology, QNAP, etc.), and if they don't seem to be losing any sleep over it, neither should I, right?
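For what it's worth, here is the interface I'm talking about, assuming an array at /dev/md0; 'check' only counts mismatches, while 'repair' rewrites parity from the data:

  # Start a read-only parity scrub:
  echo check | sudo tee /sys/block/md0/md/sync_action

  # After it finishes, a non-zero count means data and parity disagreed
  # somewhere; md has no way to tell which side is wrong:
  cat /sys/block/md0/md/mismatch_cnt

  # "repair" recalculates parity from the data blocks and rewrites it:
  echo repair | sudo tee /sys/block/md0/md/sync_action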
Have you thought about moving the H700 to the new server? Or, if it is built into the motherboard, how about getting an Avago RAID controller? Avago bought LSI, and LSI made the PERC OEM controllers. You might even be able to read the config off your existing drives on an Avago controller to recover your RAID in the new system.
Despite the 'PowerEdge' in its name, a PERC works in non-Dell servers because it is just an OEM card from LSI (Avago). Similarly, some other server brands have OEMed cards that can be moved between brands; long ago I was running a Compaq RAID card in an old Dell PE.
The PERC is just a PCI card and can be moved. I'm trying to remove failure points, and if mdadm is the same speed as a dedicated RAID 6 card, what am I buying myself by sticking with a card when I can just do it in software?
I find myself entertaining the idea of ZFS again: with dedup turned off and no compression or snapshots, maybe it won't be such a RAM pig? Is it worth using for the bit-rot protection, or is bit-rot just a boogeyman?
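If I do go that way, my understanding is that dedup is off by default anyway and the ARC cache can be capped; a rough sketch, with the pool and disk names as placeholders:

  # RAID-Z2 is ZFS's double-parity layout, comparable to RAID 6:
  sudo zpool create -O compression=off tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi

  # Dedup is off by default, but make it explicit:
  sudo zfs set dedup=off tank

  # Cap the ARC at 4 GiB so ZFS doesn't claim most of the RAM
  # (ZFS-on-Linux module option; takes effect on module load):
  echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf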
I've not used ZFS (even when I was on Solaris), and since Oracle owns it now, I'd be wary of relying on it; they might do to it what they did to OpenOffice.
In fact, there are hints of that in this article, which also points out that it isn't GPL, so you might run into other issues if Oracle asserts its rights: https://www.theregister.co.uk/2017/1...fs_into_linux/
The headline, by the way, is misleading: if you read the article, what it actually says is that one developer wishes they would consider it, not that they are.
As far as PERC/LSI goes, I've not seen many failures over the 13 years I've been using them. On the rare occasion one does fail, replacing it with another and reading the config in from the drives restores the RAID set without data loss.
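If memory serves, on the LSI/Avago MegaRAID tooling that recovery is a 'foreign configuration' import; roughly:

  # Scan for RAID metadata the old controller wrote on the drives:
  sudo MegaCli64 -CfgForeign -Scan -aALL

  # Preview what would be imported, then import it so the array
  # comes back without a rebuild:
  sudo MegaCli64 -CfgForeign -Preview -aALL
  sudo MegaCli64 -CfgForeign -Import -aALL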
If one were to be extremely concerned about the future of mdadm, LVM would be the obvious other contender (in software).
I find it much more flexible, and you might consider it less likely that Red Hat will vanish into thin air. BTW, I use both at present and see no reason to change. I also use btrfs specifically for RAID5/6 and have, successfully, for years.
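To make that concrete, a minimal sketch of RAID6 as an LVM logical volume, assuming the disks are already PVs in a volume group called vg0 (names and sizes are placeholders):

  # Six data stripes plus two parity, so eight PVs are needed:
  sudo lvcreate --type raid6 --stripes 6 -L 20T -n media vg0

  # Scrub it, analogous to mdadm's "check":
  sudo lvchange --syncaction check vg0/media
  sudo lvs -o +raid_sync_action,raid_mismatch_count vg0/media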
Lots of options, lots of horror stories out there in the wild. Believe what you will.
I don't have any hardware RAID, so I'll sidestep that discussion.
If one were to be extremely concerned about the future of mdadm, LVM would be the obvious other contender (in software). I find it much more flexible
I definitely agree one should use LVM for the flexibility it brings. We use it both for our hardware RAID (PERC/AVAGO) internal LUNs and for LUNs presented from our SAN disk array. We've also used it for mirroring to migrate from one disk array to another to avoid downtime.
I haven't used any other level of LVM RAID, since I typically have at least hardware RAID for internal disks and, on larger systems, SAN LUNs created from RAID within the disk array.
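The migration trick, for anyone curious, is just pvmove (which builds a temporary mirror under the hood); the VG and LUN names here are placeholders:

  # Add the new array's LUN to the volume group:
  sudo vgextend vg_data /dev/mapper/new_lun

  # Move all extents off the old LUN while the LVs stay online:
  sudo pvmove /dev/mapper/old_lun /dev/mapper/new_lun

  # Retire the old LUN from the volume group:
  sudo vgreduce vg_data /dev/mapper/old_lun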