
koenigj3 01-22-2010 04:47 PM

When to use multiple LVM Volume Groups
 
I'm new to LVM. I use Red Hat and CentOS 5. I'm setting up a database server and I want to set up the local drives for performance. My plan is to have three storage locations: the first for Linux, the second for the application, and the third for the data files. Each location will be appropriately redundant, and the OS and application drives will be local. Since my goal is to dedicate one spindle to the OS and another to the application, is there a best practice on whether I should create two LVM volume groups, each with one logical volume on one of the physical partitions, or one volume group with two logical volumes, each on one of the physical partitions?
I've read that a physical disk can only belong to one volume group. So with one volume group, if I later want to add 70GB to both logical volumes, I could add a single 140GB drive to the volume group and give half to each logical volume; with two volume groups, I would need to add two additional disks. Are there other suggestions? I may be missing an obvious consideration or a basic LVM concept, so any comments are appreciated.
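
If I understand the tools correctly, that single-VG growth would look something like this (a sketch with made-up names, assuming a VG 'datavg' holding LVs 'applv' and 'datalv' with ext3 on them):

Code:

pvcreate /dev/sdc                    # initialize the new 140GB disk as a PV
vgextend datavg /dev/sdc             # add it to the existing volume group
lvextend -L +70G /dev/datavg/applv   # give 70GB to each logical volume
lvextend -L +70G /dev/datavg/datalv
resize2fs /dev/datavg/applv          # grow the ext3 filesystems to match
resize2fs /dev/datavg/datalv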

acid_kewpie 01-22-2010 05:03 PM

You can involve a single disk in multiple volume groups, no problem. Or rather, you can carve the disk into multiple physical volumes (to the outside world, each just looks like a partition) with each one belonging to a different volume group. However, all this seems to be what you want to avoid.

I would possibly question your motives for using LVM in the first place... why do you want to? It's got lots of good features and provides a lot of flexibility, but servers tend not to need to be flexible; they need to be solid and reliable and never change at all, because they've been planned well. That doesn't mean you can't use it, of course. If you want one disk (or a pair, if you're doing a RAID or LVM2 mirror - another nice LVM feature) for the OS, then sure, use a dedicated VG across it, split into as many LVs as the OS requires (plus a nice simple primary /boot partition), and then a second disk for the app, which is much more likely to be a single partition spanning an entire disk, making LVM less relevant, I'd have thought.
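
As a rough sketch of that OS-disk layout (sizes and names made up, not a recommendation):

Code:

pvcreate /dev/sda2                 # rest of the OS disk after a primary /boot
vgcreate osvg /dev/sda2
lvcreate -n rootlv -L 8G osvg      # split into whatever LVs the OS needs
lvcreate -n varlv  -L 4G osvg
lvcreate -n swaplv -L 2G osvg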

koenigj3 01-22-2010 05:15 PM

That's a good point. My intent in using LVM is simply for flexibility that I hope never to need. My storage locations are already fast and at least double the recommended size; I expect to replace the server before needing to increase local storage. I just want to use the tool if it might be helpful and doesn't complicate things (simple is best). Thanks!

tommylovell 01-22-2010 08:30 PM

koenigj3, there are arguably a few reasons to have more than one volume group:

-   to provide a "sense of separation" between the operating system and your "user disk";
-   to physically isolate different types of data for performance reasons;
-   to reduce the total PV count of a VG. Since LVM writes its metadata to every PV by default, with many, many PVs - like you might have with a large database on SAN - you can experience long delays performing certain LVM functions, like the 'vgchange -ay' that is done as the system initializes, or merely doing a 'pvs' command.

You might want to put your database indexes in one VG, logs in another, and the actual data in still another.

Quote:

I've read that a physical disk can only belong to one volume group.
That's a bit vague. A physical disk can obviously have multiple partitions, and a partition can only belong to a single VG, but other partitions on that same physical disk can belong to other VGs. For example:

Code:

pvcreate /dev/sda2                  # initialize two partitions as PVs
pvcreate /dev/sdb1
vgcreate osvg /dev/sda2 /dev/sdb1   # first VG

pvcreate /dev/sda3                  # two more partitions on the same disks
pvcreate /dev/sdb2
vgcreate uservg /dev/sda3 /dev/sdb2 # second VG

This would work fine. But once a PV is added to a VG, it can't be added to another.

Code:

vgextend uservg /dev/sda2
would not work; /dev/sda2 already belongs to 'osvg'.
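
If you're ever unsure, you can ask LVM which VG a PV already belongs to:

Code:

pvs /dev/sda2          # the VG column shows the owning volume group
pvdisplay /dev/sda2    # same information, more verbose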

Also, there are "full volume" PVs. You can use a disk "unpartitioned", like:

Code:

pvcreate /dev/sdc
pvcreate /dev/sdd
pvcreate /dev/sde
pvcreate /dev/sdf
vgcreate datavg /dev/sdc /dev/sdd /dev/sde /dev/sdf

You'd possibly use this for SAN LUNs. The philosophy behind this is "why partition the disk if you are going to use the whole volume anyway?" The argument against this is that it can look like an uninitialized volume to some utilities or to another OS if your system is multiboot.

In this particular case "a physical disk can only belong to one volume group" is true as stated.

YMMV

Smartpatrol 01-22-2010 09:32 PM

...

acid_kewpie 01-22-2010 11:46 PM

It's not "soooo wrong". Don't be melodramatic, and I've built plenty, thanks. On an enterprise server, database storage, transaction logs, etc. would tend to be on a NAS/SAN, which leaves local disk for static OS data and logfiles. With a disk as big as you would tend to get on an average server, there's no excuse for designing partition sizes poorly.

Sometimes I use LVM, sometimes I don't; it depends on what's being done. But building clustered servers from a consistent build image, many of our systems are much easier to rebuild from scratch than to deviate from the standard with messily changing LVM volumes.

chrism01 01-24-2010 06:29 PM

Re
Quote:

"full volume" PVs
There's a recommendation somewhere in the docs, IIRC, that you should still partition the disk, even if it ends up as a single whole-disk partition. It also avoids the
Quote:

look like an uninitialized volume to some utilities or to another OS if your system is multiboot.
issue.
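
Something like this (a sketch; device name made up):

Code:

parted -s /dev/sdc mklabel msdos          # write a partition table
parted -s /dev/sdc mkpart primary 0% 100% # one partition spanning the disk
parted -s /dev/sdc set 1 lvm on           # flag it as an LVM partition
pvcreate /dev/sdc1                        # then create the PV on the partition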

koenigj3 01-25-2010 11:52 AM

Smartpatrol, isolating OS and application I/O to separate spindles is exactly my goal. I have four local disks configured as two RAID1 arrays via a hardware RAID controller, so LVM sees two disks. The third storage location will be on a SAN, 16 disks presented as one (not presented yet, and I know I will need to configure multipath).
In my experiments, I have created two LVs, each associated with one PV. On one test server I have both LVs in the same VG, and on a second server I have two VGs with one LV each (still one LV to one PV). Just “clicking around” I don’t see a difference, but I understand tommylovell’s comment (I really appreciate the examples) that there can be overhead with a lot of PVs in one VG (I’m guessing I won’t notice much with such a small implementation).
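
One thing that did help me see past the clicking: asking LVM directly which PV backs each LV (as I read the lvs/pvs man pages, these output fields should work):

Code:

lvs -o lv_name,vg_name,devices          # which PV(s) each LV's extents live on
pvs -o pv_name,vg_name,pv_size,pv_free  # how much of each PV is used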

Smartpatrol 01-25-2010 06:23 PM

...

tommylovell 01-25-2010 09:52 PM

Just to be clear, there is no discernible overhead to using LVM.

(LVM uses Device Mapper to do its magic, just like DM-Multipath does. Do a 'dmsetup ls' once you have LVM and multipath set up to see all of the virtual block devices you end up with. Device Mapper is a good, bulletproof implementation of a well-thought-out concept.)

When I stated
Quote:

...if you write LVM metadata to every PV and you have many, many PV's, like you might have with a large database on SAN, you can experience long delays performing certain LVM functions...
what I meant to say, but may not have been clear, is that when you have a lot of PVs in a VG, doing certain administrative things may take a long time.

A real life example:
I have a VG for an Oracle database that has 110 physical volumes of 25.9GB each in it. By default, LVM writes identical metadata to all of them. There is no discernible overhead doing I/O to the volume group; however, if you issue a 'pvs' command it might take 45 minutes to complete. The 'vgchange -ay' that is done at system initialization takes about 20 minutes on average.

(Sometimes we use the ready, fire, aim approach to our systems.)

So, LVM - good; using LVM in a stupid manner - bad. 16 SAN LUNs in your VG will not be a problem.
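
If you do ever end up with a huge PV count, one knob worth knowing about is pvcreate's metadata-copies option, which lets only a subset of PVs carry the VG metadata (a sketch; device names made up, and at least one PV in the VG must keep a copy):

Code:

pvcreate --metadatacopies 1 /dev/mapper/lun00   # this PV holds the metadata
pvcreate --metadatacopies 0 /dev/mapper/lun01   # these carry no metadata area
pvcreate --metadatacopies 0 /dev/mapper/lun02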

koenigj3 02-04-2010 01:44 PM

Thanks for everyone’s help. After walking around and thinking about this, I believe I’ve discovered my confusion. I’m posting this in case anyone else runs into this same learning curve. My mistake was not understanding the relationship between LV, VG, and PV.

I usually work with Windows dynamic disks. Dynamic disks allow you to create logical volumes (simple, spanned, striped, mirrored, etc…) on physical disks. In Windows there are only two components... the logical volume and the physical disk.

With LVM I needed to understand the role of the VG. PVs are added to VGs, not to LVs. LVs are created in VGs, not on PVs directly.
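
In command form, the chain looks like this (made-up names):

Code:

pvcreate /dev/sdb1                # a disk or partition becomes a PV
vgcreate appvg /dev/sdb1          # PVs are pooled into a VG
lvcreate -n applv -L 50G appvg    # LVs are carved out of the VG, not out of a PV
mkfs.ext3 /dev/appvg/applv        # the filesystem goes on the LV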

On one of my test installs, I created one VG with one PV and used all the space to create a single LV. Then I added a new PV to the VG and created a new LV using all the new space. My thinking was that I had created an LV on a dedicated PV, giving me a dedicated spindle. While that might be true in this test, it's really just coincidence (LVM had no other option). By creating a new VG and adding a PV to that VG, I guarantee that any LV in that VG will be allocated on the intended PV, even if I don’t use all the space.
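
I've since read that you can apparently also pin an LV to a specific PV inside a shared VG by naming the PV at the end of the lvcreate command (a sketch, names made up):

Code:

lvcreate -n applv -L 50G datavg /dev/sdb1   # extents allocated only from /dev/sdb1

But separate VGs make the guarantee obvious at a glance.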

Thanks again everyone!

chrism01 02-04-2010 06:20 PM

This is a really good HOWTO/explanation: http://tldp.org/HOWTO/LVM-HOWTO/
Deeper: http://sunoano.name/ws/public_xhtml/lvm.html
