LVM on top of RAID: Disadvantages?
Hi All,
I have used RAID on my Slackware box for a long time (about 10 years), but I have never used LVM because I didn't see any advantage in it. I tried it several times without finding a real use for it. At the moment I have an array of six 6TB Red drives in RAID6, formatted with JFS, and it has worked fine for many years (the array is almost 5 years old). But now I want to make snapshots, so I can be sure that all the data is consistent when I begin a backup. This is the plan:
When I do a backup I will create a snapshot using the free space. Questions:
Best regards, Razziatore |
I used LVM on top of RAID some time ago.
There are probably many disadvantages, but I know of one: some performance loss, since there is one more abstraction level. But there are advantages in the freedom you get with partition sizes, physical discs used, etc. Once I was forced to change an 8-disk RAID 10 into a 6-disk RAID 6 online, and thanks to LVM I was able to do it:
- degrade the 8-disk RAID 10 to 4 disks;
- build a degraded RAID 6 from the 4 freed disks;
- create a PV on the RAID 6 and add it to the VG;
- move the VG's data from the RAID 10 PV to the RAID 6 PV;
- remove the RAID 10 PV from the VG and delete it;
- use 2 of the 4 disks left over from the RAID 10 to complete the RAID 6.
This wasn't particularly safe, but it was doable (a rough sketch of the command sequence is at the end of this post). And some of my thoughts about this:
1. Use only one partition per drive as the RAID member that backs the PV, and use LVs as your partitions instead. Creating multiple RAIDs on the same drive is pointless and adds unnecessary complexity.
2. Don't use the whole VG space for LVs. Resize volumes as needed. This gives you more freedom with partition sizes, data movement between PVs, etc.
3. You can use RAID 1 for /boot when the metadata is stored at the end of the drive (formats 0.9, 1.0).
4. Create some test loop devices, create LVM on them, and play with them to get familiar with the commands and the capabilities you have (extend, shrink, move, add, remove). This may give you clues for a better design of your system.
5. My design for you: one PV on the RAID 6, one VG on the PV, two LVs (JFS data, snapshot space) on the VG with the sizes required for today. You can extend them later. You could, however, create more LVs as separate partitions for '/', '/home' or '/var' - all of them with flexible sizes that can be changed in the future.
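Roughly, the command sequence looked something like this - not the exact commands I ran, just a sketch, and the names (/dev/md0 for the old RAID 10, /dev/md1 for the new degraded RAID 6, a VG called vg0) are only examples. pvmove can be restarted if it is interrupted, but while the RAID 6 stays degraded you have no redundancy, so that is the risky part. Code:
# put a PV on the new (still degraded) RAID 6 and add it to the existing VG
pvcreate /dev/md1
vgextend vg0 /dev/md1

# move all allocated extents off the old RAID 10 PV (works while the LVs stay mounted)
pvmove /dev/md0

# drop the old PV from the VG and wipe its LVM label
vgreduce vg0 /dev/md0
pvremove /dev/md0

# stop the old array and give two of its disks to the RAID 6 so it can rebuild
mdadm --stop /dev/md0
mdadm /dev/md1 --add /dev/sdX1 /dev/sdY1
|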
Hi Labinnah,
Thanks for your reply, but I don't agree with you on some points and I would like to analyze them together. Quote:
Quote:
I did this when I migrated from my 6x2 TB array to my "new" 6x6 TB array. I degraded the old array, created the new one degraded, and then copied the files. At the end I disconnected the old array and plugged in the new discs to rebuild the new RAID. (Unfortunately I didn't have enough ports on the HBA/PSU to do this without degrading arrays.) But now we come to your suggestions: Quote:
Maybe I could have planned for less than 1%, but I want to be safe. At the beginning I had planned to lose 1% as a safety margin for the RAID and 1% for snapshot data, but then I thought of making them share and reuse the space that would otherwise be thrown away. Quote:
Quote:
Quote:
|
Quote:
Quote:
Second, RAID 10 can survive losing half its drives, but only if they are the right ones, not any of them. So 2 disk failures may kill a RAID 10, while RAID 6 can lose any 2 drives safely. And let's say the drives I had to use were not reliable at all... Quote:
Quote:
Quote:
Quote:
Maybe you should look at BTRFS? It has some kind of volume management implemented at the filesystem level.
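Just as an illustration (paths made up, and /srv/data would have to be a subvolume), a consistent backup there is a filesystem operation instead of a block-level one: Code:
# take a read-only snapshot of a subvolume, back it up, then drop it
btrfs subvolume snapshot -r /srv/data /srv/data-snap
rsync -a /srv/data-snap/ /mnt/backup/
btrfs subvolume delete /srv/data-snap
|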
I've used LVM on top of RAID for over a decade, I believe.
(As it so happens, LVM can also provide RAID functionality, but I have not explored that at all.) In my upgrades, I've created new RAID arrays, created new PVs out of them, added the new PV to the volume group, and then told LVM to move everything off the old PV. All of that happens while the system is running; no downtime at all, and no worries about missing information from the old PV because people were modifying the logical volumes while all of this was going on. Once the move is complete, remove the old PV from the volume group and do whatever you want to do with the old array.

Not to mention that if you are making backups, you can create a snapshot LV and take your backup from *that*. Just get rid of the snapshot LV when you are done, because it takes some resources to maintain it.

Since it is very easy to extend a logical volume, there's no need to allocate all of your space in a volume group in the beginning. You may want a new mount point in the future; you may want a given logical volume to max out at a given size to simplify your backup strategy. In my case... Code:
0 ✓ cranium ~ # pvs
Code:
root@gateway:~# pvs |
Quote:
I used just a single small SSD for the snapshot LV. And my /home is on an LV made from a PV which is a RAID 1.
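Something like this, with made-up names (vg_home on the RAID 1, the SSD as /dev/sdc1) - the SSD becomes a second PV in the VG and the snapshot's copy-on-write space is allocated only from it: Code:
# add the small SSD as an extra PV in the /home VG
pvcreate /dev/sdc1
vgextend vg_home /dev/sdc1

# create the snapshot and restrict its allocation to the SSD PV
lvcreate -s -L 20G -n home_snap /dev/vg_home/home /dev/sdc1
|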
Quote:
Quote:
Quote:
As for BTRFS... it is not production ready. And RHEL removed it from version 8 after initially supporting it in version 6. This isn't good. |
Quote:
Quote:
Quote:
Quote:
As I said, I prefer to have one big partition (at the moment 24TB) to store my files. I don't want the extra work of saying "oh, I filled my space here... extend it" and a bit later "oh, I filled my space there... extend it". I prefer to use directories instead of partitions. What is the advantage of having so many LVs? |
Quote:
Now I have 2 main RAIDs for data: md1 for /home and md2 for /srv. With a third RAID (say md3) I can add it to the VG of md1, do the backup of /home, remove it, then add it to the VG of md2 and do the backup of /srv. I don't see any downside.
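Roughly what I have in mind, with made-up VG names (vg_home on md1, vg_srv on md2) and md3 used only as scratch space for the snapshot: Code:
# lend md3 to the /home VG, snapshot, back up, then take md3 back
pvcreate /dev/md3
vgextend vg_home /dev/md3
lvcreate -s -L 200G -n home_snap /dev/vg_home/home /dev/md3
mount -o ro /dev/vg_home/home_snap /mnt/snap   # the snapshot looks like an unclean fs, may need a log replay first
rsync -a /mnt/snap/ /backup/home/
umount /mnt/snap
lvremove -f /dev/vg_home/home_snap
vgreduce vg_home /dev/md3
pvremove /dev/md3

# then the same steps again with vg_srv for /srv
|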
Quote:
BTW, I'll point out that you need enough free space in your PVs to handle the expected updates when you create a snapshot volume. (More precisely, the old version of each block that gets changed needs to stick around for the snapshot to access. That's the other reason to remove the snapshot volume when you are done; that behind-the-scenes block duplication continues for as long as the snapshot is active.)
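You can watch that happening, by the way. With made-up names (vg_data/data, a 50G snapshot), lvs shows how full the copy-on-write area is, so you can tell whether the snapshot is about to overflow during a long backup - if it ever hits 100% the snapshot is invalidated: Code:
# create the snapshot, run the backup from it, keep an eye on the CoW usage, then drop it
lvcreate -s -L 50G -n data_snap /dev/vg_data/data
lvs -o lv_name,origin,snap_percent vg_data    # snap_percent climbs as blocks on the origin change
# ... back up from a read-only mount of /dev/vg_data/data_snap ...
lvremove -f /dev/vg_data/data_snap
|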
Quote:
It seems to me like too much work and too much effort for a small advantage. For example, I saw an article that said to create one LV for each VM and give the VMs direct control of them. Okay, I can see the benefits, but it seems to me an excessive optimization... My fault. Quote:
Quote:
Interesting. |