Linux - Newbie
This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's this is the place!
I guess somewhere in the world there are people who put broccoli on ice cream. md on partitions, never partitions on md. It just wouldn't feel right. All of my scripts would break.
I answered "No" because I feel like the question was directed toward software raid (given the md0 example). For software RAID, I partition the disks, and then assemble the partitions into the RAID array. For hardware RAID, however, since the card interfaces with the raw disks, I build the RAID first and then partition it as necessary (99% of the time it's just one big partition though).
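The "partitions first, then assemble" approach for software RAID can be sketched as follows. This is a non-authoritative example with hypothetical device names (/dev/sda, /dev/sdb, /dev/md0); it is destructive, so adapt it to your own hardware and never run it against disks holding data you care about.

```shell
# Partition each disk first, marking the partitions for RAID use.
parted -s /dev/sda mklabel gpt mkpart primary 1MiB 100%
parted -s /dev/sda set 1 raid on
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
parted -s /dev/sdb set 1 raid on

# Assemble the partitions (not the raw disks) into the array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# The filesystem then goes on the md device itself.
mkfs.ext4 /dev/md0
```

Using partitions rather than whole disks also leaves room for a tiny slack at the end of each disk, which helps when a replacement drive turns out to be a few sectors smaller.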
1. Depends on which RAID, 0, 1, ...
2. Reason for RAID is for stability of data, and access. Order of those 2 varies by need.
3. Assume RAID 1 or 5. Then you could (I do this) after RAID 5 assembly, partition the new "disk" to your needs.
4. Discussions will abound about how software RAID compares to hardware RAID. Again, it depends. For a hobbyist, software is "good enough." For business or speed-critical use, hardware is "good enough."
... Mark
What are your thoughts on the hardware RAIDs included with consumer motherboards? Are they sufficiently better than software RAIDs?
Those aren't hardware raids, they're just a convenient front end for software raid. Go ahead and configure it using the low level utility after the BIOS, but you'll find that once you boot into Linux it's running under mdadm.
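You can verify this for yourself with a few read-only checks once Linux is booted. These are standard inspection commands; the array name /dev/md0 is hypothetical and yours may differ (fakeraid arrays sometimes appear under /dev/md126 or via dmraid instead).

```shell
# Read-only checks: is the "motherboard RAID" really md software RAID?
cat /proc/mdstat            # lists active md arrays and their member devices
mdadm --detail /dev/md0     # detailed state of one array (hypothetical name)
lsblk -o NAME,TYPE,FSTYPE   # shows how the kernel stacked the block devices
```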
That's what I suspected. I didn't plan to try it, since it seems to limit your options in the event of a mobo failure: it seems (in my view) you have to go buy the same mobo just to reassemble the RAID.
Quote:
It seems (in my view) you have to go buy the same mobo just to reassemble the raid.
Good luck with that. It seems that m'board models come and go almost as quickly as new cellphones. You could always buy a spare to keep on the shelf in case the one in use decides to develop a bad capacitor. Just MHO, but I'd just go with the mdadm tools. And document the heck out of your configuration. (It limits the level of excitement when a RAIDset goes belly up from being "Argh! The world is ending!" to something like "Well, that's annoying".)
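A minimal sketch of that documentation step, assuming an mdadm software RAID setup (the config file path varies by distro, e.g. /etc/mdadm/mdadm.conf on Debian-family systems or /etc/mdadm.conf elsewhere, and /dev/sda1 is a hypothetical member partition):

```shell
# Capture the array layout so a dead motherboard doesn't take your notes with it.
mdadm --detail --scan                    # one summary line per array
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # persist it for auto-assembly
mdadm --examine /dev/sda1                # per-member superblock metadata

# Print the output or copy it somewhere that is NOT on the RAID itself.
```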
We don't partition md devices, but we commonly use them as LVM physical volumes. They become part of a volume group in which we create logical volumes, which are arguably equivalent to partitions.
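That LVM-on-md layering can be sketched like this. The volume group and logical volume names (vg0, home) and the sizes are hypothetical, and the commands are destructive to the target device:

```shell
# md array as an LVM physical volume instead of partitioning it directly.
pvcreate /dev/md0               # the md device becomes a physical volume
vgcreate vg0 /dev/md0           # a volume group built on top of it
lvcreate -L 50G -n home vg0     # a logical volume, the "partition" equivalent
mkfs.ext4 /dev/vg0/home
```

The advantage over partitioning the md device is that logical volumes can be resized or added later without repartitioning the array.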
Quote:
What are your thoughts on the hardware RAIDs included with consumer motherboards? Are they sufficiently better than software RAIDs?
I have an "old" ASUS mobo where the RAID on the mobo also needs (gak) Windows drivers to work. Given I'm a *nix geek, that's not going to happen. So I went with software RAID.
Since the RAID configurations are supposed to be standard, you should "technically" be able to replace the borken mobo with an un-borken one and after reconfiguring the BIOS settings, have the drives come up correctly. Yes, broken is intentionally misspelled.
You did ...
1. Mark all the cables (each end).
2. Mark the drive that goes with each cable.
3. Document your RAID configuration from the BIOS.
4. Document it all, somewhere NOT on the hard drives.
Something else to consider: a RAID PCI card or two (or however many backup cards you think you need). They are less expensive to replace than a mobo.