When to use multiple LVM Volume Groups

01-22-2010, 04:47 PM   #1
LQ Newbie
Registered: Jun 2009
Posts: 5
I'm new to LVM. I use Red Hat and CentOS 5. I'm setting up a database server and I want to set up the local drives for performance. My plan is to have three storage locations: the first for Linux, the second for the application, and the third for the data files. Each location will be appropriately redundant, and the OS and application drives will be local. Since my goal is to dedicate one spindle to the OS and another to the application, is there a best practice that says I should create two LVM volume groups, each with one logical volume on one of the physical partitions, or one volume group with two logical volumes, each on one of the physical partitions?
I've read that a physical disk can only belong to one volume group. So if I want to add 70GB to both logical volumes, with a single volume group I could add one 140GB drive and give half to each logical volume, whereas with two volume groups I would need to add two separate disks. Are there other suggestions? I may be missing an obvious consideration or a basic concept of LVM, so any comments are appreciated.
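For illustration, a rough sketch of the single-VG version of that growth scenario (device, VG, and LV names are made up; the filesystems on each LV would still need to be grown separately):
Code:
pvcreate /dev/sdc1                    # the new 140GB drive, prepared as a PV
vgextend datavg /dev/sdc1             # add it to the one existing volume group
lvextend -L +70G /dev/datavg/applv    # give 70GB to the application LV
lvextend -L +70G /dev/datavg/datalv   # and 70GB to the data LV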
01-22-2010, 05:03 PM   #2
Moderator
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, CentOS
Posts: 43,417
You can effectively put a single disk into multiple volume groups, no problem. Or rather, you can have multiple physical volumes on that disk (each of which just looks like a formatted partition to the outside world), with each one belonging to a different volume group. However, all of this seems to be what you want to avoid.
I would possibly question your motives for using LVM in the first place... why do you want to? It has lots of good features and provides a lot of flexibility, but servers tend not to need to be flexible; they need to be solid, reliable, and unchanging because they've been planned well. That doesn't mean you can't use it, of course. If you want one disk (or a pair, if you're doing a RAID or LVM2 mirror - another nice LVM feature) for the OS, then sure, use a dedicated VG across it, split into as many LVs as the OS requires (plus a nice simple primary /boot partition), and then a second disk for the app, which is much more likely to be a single partition spanning the entire disk, making LVM less relevant, I'd have thought.
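As a rough sketch of that layout (device names, sizes, and the ext3 choice are illustrative assumptions, not a prescription):
Code:
# OS disk: /dev/sda1 stays a plain primary /boot partition, the rest is one PV
pvcreate /dev/sda2
vgcreate osvg /dev/sda2
lvcreate -n root -L 20G osvg
lvcreate -n var -L 10G osvg
lvcreate -n swap -L 4G osvg
mkfs.ext3 /dev/osvg/root
mkfs.ext3 /dev/osvg/var
mkswap /dev/osvg/swap

# second disk for the app: a single partition on the whole disk, no LVM
mkfs.ext3 /dev/sdb1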
01-22-2010, 05:15 PM   #3
LQ Newbie
Registered: Jun 2009
Posts: 5
Original Poster
That's a good point. My intent in using LVM is simply the flexibility, which I hope never to need. My storage locations are already fast and at least double the recommended size, and I would expect to replace the server before needing to increase local storage. I just want to use the tool if it might be helpful and if it doesn't complicate the situation (simple is best). Thanks!
01-22-2010, 08:30 PM   #4
Member
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 384
koenigj3, there are arguably a few reasons to have more than one volume group. You can use multiple volume groups to provide a "sense of separation" between the operating system and your "user disk". You can create multiple VGs to physically isolate different types of data for performance reasons, or to reduce the total PV count of a single VG. An example: if LVM metadata is written to every PV and you have many, many PVs, as you might with a large database on SAN, you can experience long delays performing certain LVM functions, such as the 'vgchange -ay' done as the system initializes, or merely running a 'pvs' command. You might want to put your database indexes in one VG, logs in another, and the actual data in still another.
Quote:
I've read that a physical disk can only belong to one volume group.
That is somewhat vague. A physical disk can obviously have multiple partitions, and a partition can only belong to a single VG, but other partitions on that same physical disk can belong to other VGs.
Code:
pvcreate /dev/sda2
pvcreate /dev/sdb1
vgcreate osvg /dev/sda2 /dev/sdb1      # OS volume group from one partition on each disk
pvcreate /dev/sda3
pvcreate /dev/sdb2
vgcreate uservg /dev/sda3 /dev/sdb2    # user volume group from other partitions on the same disks
This would work fine. But once a PV is added to a VG, it can't be added to another.
Code:
vgextend uservg /dev/sda2
would not work; /dev/sda2 already belongs to 'osvg'.
Also, there are "full volume" PVs. You can use a disk "unpartitioned", like:
Code:
pvcreate /dev/sdc    # whole, unpartitioned disks used directly as PVs
pvcreate /dev/sdd
pvcreate /dev/sde
pvcreate /dev/sdf
vgcreate datavg /dev/sdc /dev/sdd /dev/sde /dev/sdf
You'd possibly use this for SAN LUNs. The philosophy behind this is "why partition the disk if you are going to use the whole volume anyway?" The argument against this is that it can look like an uninitialized volume to some utilities or to another OS if your system is multiboot.
In this particular case "a physical disk can only belong to one volume group" is true as stated.
YMMV
Last edited by tommylovell; 01-22-2010 at 08:33 PM.
1 member found this post helpful.

01-22-2010, 09:32 PM   #5
Member
Registered: Sep 2009
Posts: 196
...
Last edited by Smartpatrol; 03-11-2010 at 10:03 PM.
01-22-2010, 11:46 PM   #6
Moderator
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, CentOS
Posts: 43,417
It's not "soooo wrong". don't be melodramatic, and I've built plenty thanks. On an enterprise server, database storage, transaction logs etc, would tend to be on a NAS / SAN which leaves local disk for static OS data and logfiles. With a disk as big as you would tend to get on an average server, there's no excuse for designing partition sizes poorly.
Sometimes I use LVM, sometimes I don't; it depends on what's being done. But when building clustered servers with a consistent build image, many of our systems are much easier to rebuild from scratch than to deviate from the standard by messily changing LVM volumes.
Last edited by acid_kewpie; 01-22-2010 at 11:49 PM.
01-24-2010, 06:29 PM   #7
LQ Guru
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,397
There's a recommendation somewhere in the docs, IIRC, that you should still partition the disk even when the PV will take up the whole disk, i.e. create a single partition spanning it. It also avoids the
Quote:
look like an uninitialized volume to some utilities or to another OS if your system is multiboot.
issue.
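For example, a sketch of that approach using parted (the device name is illustrative; fdisk works just as well):
Code:
parted /dev/sdc mklabel msdos            # or gpt
parted /dev/sdc mkpart primary 0% 100%   # one partition covering the whole disk
parted /dev/sdc set 1 lvm on             # flag it as an LVM partition
pvcreate /dev/sdc1                       # use the partition, not the raw disk, as the PV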
01-25-2010, 11:52 AM   #8
LQ Newbie
Registered: Jun 2009
Posts: 5
Original Poster
Smartpatrol, isolating OS and application I/O to separate spindles is exactly my goal. I have four local disks configured as two RAID1 arrays via a hardware RAID controller, so LVM sees two disks. The third storage location will be on a SAN, 16 disks presented as one (not presented yet, and I know I will need to configure multipath).
In my experiments, I have created two LVs, each associated with one PV. On one test server I have both LVs in the same VG, and on a second server I have two VGs with one LV each (still one LV per PV). Just "clicking around" I don't see a difference, but I understand tommylovell's comment (I really appreciate the examples) that there can be overhead with a lot of LVs in one VG (I'm guessing I won't notice much with such a small implementation).
01-25-2010, 06:23 PM   #9
Member
Registered: Sep 2009
Posts: 196
...
Last edited by Smartpatrol; 03-11-2010 at 10:03 PM.
01-25-2010, 09:52 PM   #10
Member
Registered: Nov 2005
Distribution: Raspbian, Debian, Ubuntu
Posts: 384
Just to be clear, there is no discernible overhead to using LVM.
(LVM uses Device Mapper to do its magic, just like DM-Multipath does. Run 'dmsetup ls' once you have LVM and multipath set up to see all of the virtual block devices you end up with. Device Mapper is a good, bulletproof implementation of a well-thought-out concept.)
When I stated
Quote:
...if you write LVM metadata to every PV and you have many, many PV's, like you might have with a large database on SAN, you can experience long delays performing certain LVM functions...
what I meant to say, but may not have been clear, is that when you have a lot of PVs in a VG, doing certain administrative things may take a long time.
A real-life example:
I have a VG for an Oracle database that has 110 physical volumes of 25.9GB each. By default, LVM writes identical metadata to all of them. There is no discernible overhead in doing I/O to the volume group; however, if you issue a 'pvs' command it might take 45 minutes to complete, and the 'vgchange -ay' that is done at system initialization takes about 20 minutes on average.
(Sometimes we use the ready, fire, aim approach to our systems.)
So, LVM - good; using LVM in a stupid manner - bad. 16 SAN LUNs in your VG will not be a problem.
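As an aside, one way to keep that metadata overhead down in a very large VG is to store the metadata on only some of the PVs. A sketch, assuming your LVM2 pvcreate supports the '--metadatacopies' option (check the man page for your version):
Code:
pvcreate --metadatacopies 0 /dev/sdg   # this PV carries no metadata copy
pvcreate --metadatacopies 1 /dev/sdh   # this PV carries one copy (the default)
vgcreate bigvg /dev/sdg /dev/sdh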
02-04-2010, 01:44 PM   #11
LQ Newbie
Registered: Jun 2009
Posts: 5
Original Poster
Thanks for everyone's help. After walking around and thinking about this, I believe I've figured out what was confusing me. I'm posting it in case anyone else runs into the same learning curve. My mistake was not understanding the relationship between LV, VG, and PV.
I usually work with Windows dynamic disks. Dynamic disks allow you to create logical volumes (simple, spanned, striped, mirrored, etc…) on physical disks. In Windows there are only two components... the logical volume and the physical disk.
With LVM I needed to understand the role of the VG. PVs are added to VGs, not to LVs; and LVs are created in VGs, not on PVs.
On one of my test installs, I created one VG with one PV and used all the space to create a single LV. Then I added a new PV to the VG and created a new LV using all the new space. My thinking was that I had created an LV on a dedicated PV, giving me a dedicated spindle. While that might be true in this test, it's really just coincidence (LVM had no other option). By creating a separate VG and adding a PV to that VG, I guarantee that any LV in that VG will be placed on the intended PV, even if I don't use all the space.
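A minimal sketch of that one-VG-per-spindle approach (device, VG, and LV names are made up for illustration):
Code:
# first spindle: OS volume group
pvcreate /dev/sda2
vgcreate osvg /dev/sda2
lvcreate -n rootlv -L 20G osvg

# second spindle: application volume group; its LVs can only land on /dev/sdb1
pvcreate /dev/sdb1
vgcreate appvg /dev/sdb1
lvcreate -n applv -L 50G appvg
For what it's worth, within a single VG you can also restrict where an LV is allocated by listing PVs at the end of the lvcreate command, e.g. 'lvcreate -n applv -L 50G onevg /dev/sdb1'.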
Thanks again everyone!
02-04-2010, 06:20 PM   #12
LQ Guru
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,397