LinuxQuestions.org
01-22-2010, 04:47 PM   #1
koenigj3 (LQ Newbie)
When to use multiple LVM Volume Groups


I'm new to LVM. I use Red Hat and CentOS 5. I'm setting up a database server and I want to set up the local drives for performance. My plan is to have three storage locations: the first for Linux, the second for the application, and the third for the data files. Each location will be appropriately redundant, and the OS and application drives will be local. Since my goal is to dedicate one spindle to the OS and another to the application, is there a best practice for choosing between two LVM volume groups, each with one logical volume associated with one of the physical partitions, and a single LVM volume group with two logical volumes, each associated with one of the physical partitions?
I've read that a physical disk can only belong to one volume group. So if I want to add 70GB to both logical volumes, I could add a single 140GB drive to a single volume group and then add half to each logical volume. With two volume groups, I would need to add two additional disks. Are there other suggestions? I may be missing an obvious consideration or a basic concept of LVM, so any comments are appreciated.
 
01-22-2010, 05:03 PM   #2
acid_kewpie (Moderator)
You can put a single disk into multiple volume groups, no problem. Or rather, you can have multiple physical volumes (which just look like a filesystem on a partition to the outside world), with each one belonging to a different volume group. However, all this seems to be what you want to avoid.

I would possibly question your motives for using LVM in the first place... why do you want to? It has lots of good features and provides a lot of flexibility, but servers tend not to need to be flexible; they need to be solid and reliable and never change at all, because they've been planned well. That doesn't mean you can't do it, of course. If you want one disk (or a pair, if you're doing a RAID or LVM2 mirror, another nice LVM feature) for the OS, then sure, use a dedicated VG across that, split into as many LVs as the OS requires (plus a nice simple primary /boot partition), and then a second disk for the app, which is much more likely to be a single partition on an entire disk, making LVM less relevant, I'd have thought.
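A rough sketch of that layout (device names, partition layout, filesystem, and LV sizes below are my assumptions, not from the post; all commands require root):

```shell
# /dev/sda: OS disk. sda1 is a plain primary /boot partition; sda2 holds the OS VG.
pvcreate /dev/sda2
vgcreate osvg /dev/sda2
lvcreate -n rootlv -L 10G osvg   # /
lvcreate -n varlv  -L 8G  osvg   # /var
lvcreate -n swaplv -L 4G  osvg   # swap

# /dev/sdb: application disk, one partition spanning the whole disk, no LVM.
mkfs.ext3 /dev/sdb1
```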
 
01-22-2010, 05:15 PM   #3
koenigj3 (LQ Newbie, Original Poster)
That's a good point. My intent in using LVM is simply the flexibility, which I would hope never to use. My storage locations are already fast and at least double the recommended size, and I would expect to replace the server before needing to increase local storage. I just want to use the tool if it might be helpful and if it doesn't complicate the situation (simple is best). Thanks!
 
01-22-2010, 08:30 PM   #4
tommylovell (Member)
koenigj3, there are arguably a few reasons to have more than one volume group. You can use multiple volume groups to provide a "sense of separation" between the operating system and your "user disk". You can create multiple VGs to physically isolate different types of data for performance reasons: you might want to put your database indexes in one VG, logs in another, and the actual data in still another. Or you can use them to reduce the total PV count of a VG. An example: LVM writes its metadata to every PV by default, and if you have many, many PVs, as you might with a large database on SAN, you can experience long delays performing certain LVM functions, like the 'vgchange -ay' that is done as the system initializes, or merely a 'pvs' command.

Quote:
I've read that a physical disk can only belong to one volume group.
That's somewhat vague. A physical disk can obviously have multiple partitions, and a partition can only belong to a single VG, but other partitions on that same physical disk can belong to other VGs.

Code:
# two partitions on different disks, both in the OS volume group
pvcreate /dev/sda2
pvcreate /dev/sdb1
vgcreate osvg /dev/sda2 /dev/sdb1

# other partitions on those same disks can go into a different volume group
pvcreate /dev/sda3
pvcreate /dev/sdb2
vgcreate uservg /dev/sda3 /dev/sdb2
This would work fine. But once a PV is added to a VG, it can't be added to another.

Code:
vgextend uservg /dev/sda2
would not work; /dev/sda2 already belongs to 'osvg'.

Also, there are "full volume" PVs. You can use a disk "unpartitioned", like:

Code:
pvcreate /dev/sdc
pvcreate /dev/sdd
pvcreate /dev/sde
pvcreate /dev/sdf
vgcreate datavg /dev/sdc /dev/sdd /dev/sde /dev/sdf
You'd possibly use this for SAN LUNs. The philosophy behind it is "why partition the disk if you are going to use the whole volume anyway?" The argument against it is that an unpartitioned PV can look like an uninitialized volume to some utilities, or to another OS if your system is multiboot.

In this particular case "a physical disk can only belong to one volume group" is true as stated.

YMMV

Last edited by tommylovell; 01-22-2010 at 08:33 PM.
 
01-22-2010, 09:32 PM   #5
Smartpatrol (Member)
...

Last edited by Smartpatrol; 03-11-2010 at 10:03 PM.
 
01-22-2010, 11:46 PM   #6
acid_kewpie (Moderator)
It's not "soooo wrong", don't be melodramatic, and I've built plenty, thanks. On an enterprise server, database storage, transaction logs, etc. would tend to be on a NAS/SAN, which leaves local disk for static OS data and logfiles. With a disk as big as you would tend to get on an average server, there's no excuse for designing partition sizes poorly.

Sometimes I use LVM, sometimes I don't; it depends on what's being done. But when building clustered servers with a consistent build image, many of our systems are much easier to rebuild from scratch than to deviate from a standard with messily changed LVM volumes.

Last edited by acid_kewpie; 01-22-2010 at 11:49 PM.
 
01-24-2010, 06:29 PM   #7
chrism01 (LQ Guru)
Re:
Quote:
"full volume" PVs
There's a recommendation somewhere in the docs, IIRC, that you should still partition the disk, even when it's a single whole-disk partition. It also avoids the
Quote:
look like an uninitialized volume to some utilities or to another OS if your system is multiboot.
issue.
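Following that recommendation might look like this (a sketch; the device name /dev/sdc is an assumption, and the commands require root):

```shell
# Create one whole-disk partition flagged for LVM, then initialize it as a PV.
parted -s /dev/sdc mklabel msdos
parted -s /dev/sdc mkpart primary 0% 100%
parted -s /dev/sdc set 1 lvm on
pvcreate /dev/sdc1
```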
 
01-25-2010, 11:52 AM   #8
koenigj3 (LQ Newbie, Original Poster)
Smartpatrol, isolating OS and application I/O to dedicated spindles is exactly my goal. I have four local disks configured as two RAID1 arrays via a hardware RAID controller, so LVM sees two disks. The third storage location will be on a SAN, 16 disks presented as one (not presented yet, and I know I will need to configure multipath).
In my experiments, I have created two LVs, each associated with one PV. On one test server I have both LVs in the same VG, and on a second server I have two VGs with one LV each (still one LV to one PV). Just "clicking around" I don't see a difference, but I understand tommylovell's comment (I really appreciate the examples) that there can be overhead with a lot of PVs in one VG. I'm guessing I won't notice much with such a small implementation.
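For reference, the two test layouts could be reproduced like this (a sketch; the device, VG, and LV names are my assumptions, and the commands require root):

```shell
# Server 1: one VG holding both PVs; LVM decides where each LV's extents land.
vgcreate sharedvg /dev/sda3 /dev/sdb1
lvcreate -n lv1 -L 70G sharedvg
lvcreate -n lv2 -L 70G sharedvg

# Server 2: two VGs, one PV and one LV each; placement is implied by the VG.
vgcreate vg1 /dev/sda3
vgcreate vg2 /dev/sdb1
lvcreate -n lv1 -L 70G vg1
lvcreate -n lv2 -L 70G vg2
```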
 
01-25-2010, 06:23 PM   #9
Smartpatrol (Member)
...

Last edited by Smartpatrol; 03-11-2010 at 10:03 PM.
 
01-25-2010, 09:52 PM   #10
tommylovell (Member)
Just to be clear, there is no discernible overhead to using LVM.

(LVM uses Device Mapper to do its magic, just like DM-Multipath does. Do a 'dmsetup ls' once you have LVM and multipath set up to see all of the virtual block devices you end up with. Device Mapper is a good, bulletproof implementation of a well-thought-out concept.)

When I stated
Quote:
...if you write LVM metadata to every PV and you have many, many PV's, like you might have with a large database on SAN, you can experience long delays performing certain LVM functions...
what I meant to say, but may not have been clear, is that when you have a lot of PVs in a VG, doing certain administrative things may take a long time.

A real life example:
I have a VG for an Oracle database that has 110 physical volumes of 25.9GB each in it. By default, LVM writes identical metadata to all of them. There is no discernible overhead to doing I/O to the volume group, but if you issue a 'pvs' command, it might take 45 minutes to complete. The 'vgchange -ay' that is done at system initialization takes about 20 minutes on average.

(Sometimes we use the ready, fire, aim approach to our systems.)

So, LVM - good; using LVM in a stupid manner - bad. 16 SAN LUNs in your VG will not be a problem.
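One mitigation for the metadata delays described above: pvcreate has a --metadatacopies option that controls how many copies of the VG metadata a PV carries, so only a few PVs in a big VG need to hold it. A sketch (device names are assumptions; requires root):

```shell
# These PVs carry no metadata copy, so LVM scans touch far fewer copies.
pvcreate --metadatacopies 0 /dev/sdd /dev/sde /dev/sdf
# At least one PV in the VG must still hold the metadata.
pvcreate --metadatacopies 1 /dev/sdc
vgcreate datavg /dev/sdc /dev/sdd /dev/sde /dev/sdf
```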
 
02-04-2010, 01:44 PM   #11
koenigj3 (LQ Newbie, Original Poster)
Thanks for everyone’s help. After walking around and thinking about this, I believe I’ve discovered my confusion. I’m posting this in case anyone else runs into this same learning curve. My mistake was not understanding the relationship between LV, VG, and PV.

I usually work with Windows dynamic disks. Dynamic disks allow you to create logical volumes (simple, spanned, striped, mirrored, etc…) on physical disks. In Windows there are only two components... the logical volume and the physical disk.

With LVM I needed to understand the role of the VG: PVs are added to VGs, not to LVs, and LVs are created in VGs, not on PVs.

On one of my test installs, I created one VG with one PV and used all the space to create a single LV. Then I added a new PV to the VG and created a new LV using all the new space. My thinking was that I had created an LV on a dedicated PV, giving me a dedicated spindle. While this was true in this test, it was really just coincidence (LVM had no other option). By creating a new VG and adding a PV to that VG, I guarantee that any LV in that VG will be placed on the intended PV, even if I don't use all the space.
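One more note for anyone finding this thread later: a separate VG isn't the only way to guarantee placement. lvcreate accepts an optional list of PVs after the VG name, restricting that LV's allocation to those PVs. A sketch (device and volume names are my assumptions; requires root):

```shell
vgcreate datavg /dev/sdc /dev/sdd
# This LV's extents come only from /dev/sdc, even though the VG has two PVs.
lvcreate -n datalv -L 70G datavg /dev/sdc
```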

Thanks again everyone!
 
02-04-2010, 06:20 PM   #12
chrism01 (LQ Guru)
This is a really good HOWTO/explanation: http://tldp.org/HOWTO/LVM-HOWTO/
Deeper: http://sunoano.name/ws/public_xhtml/lvm.html
 
  

