I'm new to LVM. I use Red Hat and CentOS 5. I'm setting up a database server and I want to set up the local drives for performance. My plan is to have three storage locations: the first for Linux, the second for the application, and the third for the data files. Each location will be appropriately redundant, and the OS and application drives will be local. Since my goal is to dedicate one spindle to the OS and another to the application, is there a best practice that says I should create two LVM volume groups, each with one logical volume on one of the physical partitions, or one volume group with two logical volumes, each on one of the physical partitions?
I've read that a physical disk can only belong to one volume group. So if I want to add 70GB to both logical volumes, I could add a single 140GB drive to a single volume group and then give half to each logical volume. With two volume groups, I would need to add two disks. Are there other suggestions? I may be missing an obvious consideration or a basic LVM concept, so any comments are appreciated.
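To make that trade-off concrete, here is a rough sketch of the grow-both-LVs scenario with a single shared VG (the device name /dev/sdd1 and the VG/LV names are hypothetical; these commands need root and real block devices):

```shell
# One new 140GB disk can extend both LVs when they share a VG.
pvcreate /dev/sdd1                       # initialize the new disk as a PV
vgextend vg_local /dev/sdd1              # add it to the existing VG
lvextend -L +70G /dev/vg_local/lv_app    # give 70GB to each LV
lvextend -L +70G /dev/vg_local/lv_data
resize2fs /dev/vg_local/lv_app           # grow each filesystem too (ext3 here)
resize2fs /dev/vg_local/lv_data
# With two separate VGs you would need one new PV per VG instead.
```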
You can put a single disk into multiple volume groups, no problem. Or rather, you can have multiple physical volumes on one disk (each just looks like an ordinary partition to the outside world), with each PV belonging to a different volume group. However, that seems to be exactly what you want to avoid.
I would possibly question your motives for using LVM in the first place... why do you want it? It has lots of good features and provides a lot of flexibility, but servers tend not to need to be flexible; they need to be solid, reliable, and never change at all, because they've been planned well. That doesn't mean you can't do it, of course. If you want one disk (or a pair, if you're doing RAID or an LVM2 mirror, another nice LVM feature) for the OS, then sure, use a dedicated VG across it, split into as many LVs as the OS requires (plus a nice simple primary /boot partition). The second disk, for the app, is much more likely to be a single partition spanning the entire disk, which makes LVM less relevant, I'd have thought.
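As a sketch of that dedicated-OS-VG layout (the disk, VG, and LV names are hypothetical and the sizes are placeholders; run as root on a disk you can wipe):

```shell
# /dev/sda: a small primary /boot partition (sda1) plus one big
# "Linux LVM" partition (sda2) holding a dedicated OS volume group.
pvcreate /dev/sda2
vgcreate vg_os /dev/sda2
lvcreate -n lv_root -L 8G  vg_os
lvcreate -n lv_var  -L 10G vg_os
lvcreate -n lv_swap -L 4G  vg_os
mkfs.ext3 /dev/vg_os/lv_root
mkfs.ext3 /dev/vg_os/lv_var
mkswap    /dev/vg_os/lv_swap
```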
That's a good point. I intend to use LVM simply for flexibility that I hope never to need. My storage locations are already fast and at least double the recommended size, and I expect to replace the server before needing to increase local storage. I just want to use the tool if it might be helpful and if it doesn't complicate the situation (simple is best). Thanks!
koenigj3, there are arguably a few reasons to have more than one volume group. You can use multiple volume groups to provide a "sense of separation" between the operating system and your "user disk". You can create multiple VGs to physically isolate different types of data for performance reasons, or to reduce the total PV count of a VG. An example: since LVM writes metadata to every PV, if you have many, many PVs, as you might with a large database on a SAN, you can experience long delays performing certain LVM functions, like the 'vgchange -ay' that is done as the system initializes, or merely running a 'pvs' command. You might want to put your database indexes in one VG, logs in another, and the actual data in still another.
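A hypothetical sketch of that kind of separation (the multipath device names and VG names are made up):

```shell
# Physically isolate indexes, logs, and data in their own VGs,
# which also keeps the PV count of each individual VG small.
vgcreate vg_idx  /dev/mapper/mpath0 /dev/mapper/mpath1
vgcreate vg_logs /dev/mapper/mpath2
vgcreate vg_data /dev/mapper/mpath3 /dev/mapper/mpath4
vgchange -ay vg_data    # activating one VG only touches that VG's PVs
```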
I've read that a physical disk can only belong to one volume group.
That is somewhat vague. A physical disk can obviously have multiple partitions, and a partition can only belong to a single VG; but other partitions on that same physical disk can belong to other VGs.
You can also use a whole, unpartitioned disk as a PV; you'd possibly do this for SAN LUNs. The philosophy behind it is "why partition the disk if you are going to use the whole volume anyway?" The argument against it is that an unpartitioned PV can look like an uninitialized volume to some utilities, or to another OS if your system is multiboot.
In this particular case "a physical disk can only belong to one volume group" is true as stated.
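For reference, the two styles look like this (device names are hypothetical; pvcreate is destructive, so only run it on disks you can wipe):

```shell
# PV on a whole, unpartitioned SAN LUN -- no partition table at all:
pvcreate /dev/sdd

# PV on a partition, which fdisk and other operating systems can see:
fdisk /dev/sde        # create one partition, type 8e (Linux LVM)
pvcreate /dev/sde1
```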
Last edited by tommylovell; 01-22-2010 at 08:33 PM.
It's not "soooo wrong"; don't be melodramatic, and I've built plenty, thanks. On an enterprise server, database storage, transaction logs, etc. would tend to be on NAS/SAN, which leaves local disk for static OS data and log files. With a disk as big as you'd tend to get on an average server, there's no excuse for designing partition sizes poorly.
Sometimes I use LVM, sometimes I don't; it depends on what's being done. But when building clustered servers from a consistent build image, many of our systems are much easier to rebuild from scratch than to let them deviate from a standard with messily changing LVM volumes.
Last edited by acid_kewpie; 01-22-2010 at 11:49 PM.
Smartpatrol, isolating OS and application I/O to separate spindles is exactly my goal. I have four local disks configured as two RAID1 arrays via a hardware RAID controller, so LVM sees two disks. The third storage location will be on a SAN, 16 disks presented as one (not presented yet, and I know I will need to configure multipath).
In my experiments, I have created two LVs, each associated with one PV. On one test server both LVs are in the same VG; on a second server there are two VGs with one LV each (still one LV per PV). Just "clicking around" I don't see a difference, but I understand tommylovell's comment (I really appreciate the examples) that there can be overhead with a lot of PVs in one VG (I'm guessing I won't notice much in such a small implementation).
Just to be clear, there is no discernible overhead to using LVM.
(LVM uses Device Mapper to do its magic, just like DM-Multipath does. Do a 'dmsetup ls' once you have LVM and multipath set up to see all of the virtual block devices you end up with. Device Mapper is a good, bulletproof implementation of a well-thought-out concept.)
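For example, once LVM and multipath are both set up, commands like these show the Device Mapper stack (the exact output varies per system, so none is shown here):

```shell
dmsetup ls              # every DM virtual block device: LVs, multipath maps, ...
dmsetup ls --tree       # how the LVs stack on top of the multipath devices
lvs -o lv_name,vg_name,devices   # which PVs actually back each LV
```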
When I stated
...if you write LVM metadata to every PV and you have many, many PV's, like you might have with a large database on SAN, you can experience long delays performing certain LVM functions...
what I meant to say, but may not have been clear, is that when you have a lot of PVs in a VG, doing certain administrative things may take a long time.
A real life example:
I have a VG for an Oracle database that contains 110 physical volumes of 25.9GB each. By default, LVM writes identical metadata to every one of them. There is no discernible overhead in doing I/O to the volume group; however, if you issue a 'pvs' command it might take 45 minutes to complete, and the 'vgchange -ay' that is done at system initialization takes about 20 minutes on average.
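One mitigation worth mentioning for anyone who hits this: narrowing the device filter in lvm.conf so scans only probe the paths that can actually be PVs. The filter pattern below is a hypothetical example for multipath-backed PVs, not taken from the system described above:

```shell
# /etc/lvm/lvm.conf -- accept multipath devices, reject everything else,
# so 'pvs' and 'vgchange -ay' don't probe every block device path:
#
# devices {
#     filter = [ "a|^/dev/mapper/mpath.*|", "r|.*|" ]
# }

time pvs >/dev/null     # compare timings before and after changing the filter
```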
(Sometimes we use the ready, fire, aim approach to our systems.)
So, LVM - good; using LVM in a stupid manner - bad. 16 SAN LUNs in your VG will not be a problem.
Thanks for everyone’s help. After walking around and thinking about this, I believe I’ve discovered my confusion. I’m posting this in case anyone else runs into this same learning curve. My mistake was not understanding the relationship between LV, VG, and PV.
I usually work with Windows dynamic disks. Dynamic disks allow you to create logical volumes (simple, spanned, striped, mirrored, etc.) on physical disks, so in Windows there are only two components: the logical volume and the physical disk.
With LVM I needed to understand the role of the VG. PVs are added to VGs, not to LVs; LVs are created in VGs, not on PVs.
On one of my test installs, I created one VG with one PV and used all the space to create a single LV. Then I added a new PV to the VG and created a new LV using all the new space. My thinking was that I had created an LV on a dedicated PV, giving me a dedicated spindle. While that happened to be true in this test, it was really just coincidence (LVM had no other option). By creating a new VG and adding a PV to that VG, I guarantee that any LV in that VG will be allocated on the intended PV, even if I don't use all the space.