[SOLVED] Should I use LVM on a motherboard controller RAID 5...
I have a RAID controller on the motherboard. I want to do a RAID 5 (3 x 4TB drives) for my media server. I don't do that many writes, and it's great to have the redundancy in case one drive fails. I am going to mount this baby on /home. But I'm trying to decide if I should go LVM or not. The OS will be on the root partition on another set of drives.
For now I will have Slackware 14.2, but say down the road I need to install Slackware 15.2 from scratch on the same exact hardware, how easy would it be to move the LVM to the new OS? This is my main concern. Is it simply that when I install the new OS fresh, the OS will recognize that the volumes are LVM (I am going to set the Linux LVM partition type in fdisk) and I just mount the /dev/sd(x)? I have never done it. It feels like it would be rather difficult if this is not the case. I'd almost have to install the OS on a separate computer and rsync the data over, which I don't want to do.
LVM would be nice in case I want to expand it, or for LVM snapshots, but I don't think I will need that. 8TB is more than enough for me, and I don't have many free SATA slots left on my motherboard. Plus, the redundancy from mirroring will take the place of LVM snapshots for me.
LVM has metadata that identifies it. It's been a while since I looked at Slackware, but the initrd should find it for you; otherwise a simple "vgchange -ay" should do.
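For the original question (re-attaching existing LVM volumes after a fresh install), the sequence looks roughly like this. The VG and LV names ("myvg", "media") are hypothetical examples, and these commands need root and real LVM devices:

```shell
# After a fresh install on the same hardware, existing LVM volumes
# can be re-attached without touching the data.
pvscan                        # list physical volumes the kernel can see
vgscan                        # discover volume groups from on-disk metadata
vgchange -ay                  # activate every discovered volume group
lvs                           # confirm the logical volumes are present
mount /dev/myvg/media /home   # mount the existing LV; data is intact
# then make it permanent in /etc/fstab, e.g.:
# /dev/myvg/media  /home  ext4  defaults  1  2
```

The LVM metadata lives on the disks themselves, which is why a new OS on the same hardware can discover the volumes without any migration step.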
I'd be more worried about the motherboard failing: will you be able to reconstruct the RAID devices properly? I didn't see any mention of backups.
Last edited by syg00; 12-25-2016 at 05:06 PM.
Reason: typo
Yeah, if the motherboard fails, would I have to get the same exact motherboard to reconstruct the RAID 5, or do I simply write down the parity and stripe sizes, in which case any motherboard that supports RAID 5 would do?
-Tristan
First: RAID is not a backup, and does not secure your data. You need a backup system and schedule. Setting up a BURP server might be indicated.
Second: Odds are against being able to replicate the motherboard setup well enough to recover from a MB failure, even on an identical MB. Different vendors implement the low levels differently, at times even between minor model numbers.
Plan for catastrophic failure by keeping regular full backups. Disaster Recovery will consist of bringing up a replacement server on different hardware with adequate storage to allow a full restore from backup.
LVM is always a good option. It recovers from some issues better than a raw file system, and it allows you to make certain kinds of on-the-fly storage changes without downtime. It is not required for a situation like yours, but I would not advise against using it.
Now, for the sake of understanding: what if I simply had 3 disks, each 4TB, and put them all in one volume group? That would leave 8TB of space in the volume group to be used for LVM snapshots, with only 4TB used for the logical volume itself. Then if the disk backing that LV fails, can I not simply mount the LVM snapshot to take its place?
If that disk fails, are the errors informative enough so that I can tell which drive in the VG failed?
This would thereby avoid RAID altogether. Turns out my motherboard does not have a RAID controller, and I don't want to use those PCI ones.
Or avoid RAID and LVM altogether and simply use one of the 3 disks for the data and the other two as backups, then rsync the diffs over nightly. Assume I do not have an option for off-site backups.
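A minimal sketch of the nightly rsync-to-second-disk idea. The paths here are temporary stand-ins; in real use the destination would be a filesystem mounted from the second disk (e.g. /mnt/backup):

```shell
# Stand-ins for the data disk and the backup disk.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "movie data" > "$SRC/film.mkv"

# -a preserves permissions/ownership/times; --delete mirrors deletions
# too, so the backup stays an exact copy of the source.
rsync -a --delete "$SRC/" "$DST/"

cat "$DST/film.mkv"
```

Note the trailing slash on "$SRC/": it copies the directory's contents rather than the directory itself. Run from cron, this gives a nightly mirror, though as noted below, a copy inside the same machine is only a weak backup.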
Essentially, the power of LVM is that it separates the physical picture from the logical one. You can allocate, and re-allocate, physical storage to support the logical view, without changing that view. This can handle running out of space, or volumes that are beginning to fail, and many other situations that happen in real life.
I flatly recommend using it all the time.
LVM is not a silver bullet; it still requires sane sysadmin practices, especially where multi-device LVs are involved.
And snapshots are definitely not backups. They are great sources for backups, but they are also susceptible to breakage when hardware dies, so if a disk in the VG breaks, the original data and the snapshots are likely to disappear at the same time. They also burn up disk space if left unmanaged.
Having backups in the same machine (even on separate disks) is not an acceptable backup, IMHO. Better than nothing, but only just. I might be inclined to simply use RAID1 and keep the third disk as a hot spare; current LVM allows a failure policy where you can tell LVM to automatically replace and sync a failed disk.
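The failure policy mentioned above is configured in lvm.conf. A sketch of the relevant excerpt (the default and exact wording may vary between LVM versions, so check your distribution's lvm.conf):

```
# /etc/lvm/lvm.conf (excerpt)
# "allocate" tells LVM to grab a spare PV from the VG and resync
# automatically when a RAID leg fails; "warn" only logs the failure.
activation {
    raid_fault_policy = "allocate"
}
```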
You still have to watch the logs continually.
All of the above does not weaken the requirement for an external backup.
Alright, I'll do a RAID 1. I'm going to need a PCI RAID controller. Can you guys recommend any that are compatible with kernel 4.4.14? Preferably one that works with the megaraid software suite.
You don't need a hardware card, although they are a good (best ?) option.
Software raid (at the Linux level) is extremely robust - see the wiki here.
These days LVM can manage (and setup) RAID without the need for the user to manually set up RAID volumes beforehand. Works a treat.
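A sketch of what LVM-managed RAID looks like for the 3 x 4TB setup discussed above. Device and VG names are hypothetical, and these commands need root and real disks:

```shell
# LVM creates the RAID volume directly; no separate mdadm step.
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate media_vg /dev/sdb /dev/sdc /dev/sdd

# RAID5 across 3 PVs: -i 2 means two data stripes plus parity,
# giving roughly 8TB usable out of 3 x 4TB.
lvcreate --type raid5 -i 2 -l 100%FREE -n media media_vg

mkfs.ext4 /dev/media_vg/media
```
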
If that is still accurate, LVM RAID is less mature in terms of failure recovery and tooling. LVM over MD may still be the better option: MD RAID and LVM are both quite mature, while the RAID features of LVM may be less so.
Although it seems entirely counter-intuitive, I have found software raid on a fast controller and with adequate CPU power FASTER than hardware raid. That result may be representative only for the particular hardware I used for testing, but is something to keep in mind. It appears that the communication between MD and LVM is quite advanced and well developed to optimize performance.
I would experiment with RAID1 or RAID5 in software on your current controller, using LVM and ext4. If that serves your needs, it may be all you need. Advantage: the configuration is well tested and documented. There must be dozens of good (and a few bad) how-to documents on the web for this configuration.
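The classic MD-underneath-LVM stack described here looks roughly like this. Device names are hypothetical, and the commands need root and real disks:

```shell
# Layer 1: mdadm software RAID5 across the three drives.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdb /dev/sdc /dev/sdd

# Layer 2: LVM on top of the array.
pvcreate /dev/md0
vgcreate media_vg /dev/md0
lvcreate -l 100%FREE -n media media_vg

# Layer 3: ext4 on the logical volume.
mkfs.ext4 /dev/media_vg/media

# Persist the array definition so it assembles at boot.
mdadm --detail --scan >> /etc/mdadm.conf
```

Each layer can be managed and recovered with its own well-documented tools (mdadm for the array, the lv*/vg* commands for the volumes), which is much of the appeal of this configuration.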
Another thought: BTRFS does RAID1 nicely and easily. The RAID5 and RAID6 recovery code has known behavior issues and I would not recommend them, but RAID1 appears solid. It is much easier to set up BTRFS RAID than any other kind. The performance is slightly lower than ext4, and the recovery tools are not as mature as the MD RAID tools. I consider BTRFS adequate for a laptop or workstation, but not for a critical server. BTRFS may not apply today, but it is something to keep our eyes on for future use.
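For comparison, the BTRFS RAID1 setup really is a one-liner. Device names are hypothetical, and the commands need root and real disks:

```shell
# RAID1 for both data (-d) and metadata (-m) across two disks,
# handled entirely by the filesystem; no md or LVM layer.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mounting either member device mounts the whole filesystem.
mount /dev/sdb /home

# Shows the data/metadata allocation profiles (should report RAID1).
btrfs filesystem df /home
```
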