Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
I have a few Linux servers with LVM configured for storage.
A couple of times when I've needed to grow a file system, I've managed to break things.
The servers exist in a virtual environment.
I'm now thinking it'd be easier and less "breakable" if I just gave each volume its own disk. Then in my virtual environment I could grow the disk or add a new one and then move data/remount etc as a safer way of working.
Why should I NOT do this?
Is it just the flexibility of growing the file system without downtime I'd be giving up or more?
LVM, being another management layer between you and your HDDs, adds another bit of complexity to your setup, but it pays that back with flexibility and possibilities.
Plain disks assigned directly to your mount points don't allow for resizing (of course, you can shut down the machine, take out the disk, copy its contents to a bigger disk, connect the bigger disk to your system and restart; with a few-TB HDD that takes about a day...).
Without LVM you are bound to your HDDs' capacity: you can't have a volume larger than your largest HDD.
SNAPSHOTS! Without LVM you can't use snapshots: for backups, saving capacity, rollback options, test systems, system or data clones...
And: if you don't want to use online resizing because you feel it's too risky (that's definitely a legitimate concern), you can still do it offline the same way you would without LVM (ok, you have to remember the extra layer when formatting the new HDDs), while still making use of all the other features.
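As a sketch of that growth path, the usual LVM sequence looks something like the following. All device, VG and LV names here are placeholders, and it assumes an ext4 filesystem sitting on the LV:

```shell
# Grow an LVM-backed ext4 filesystem using a newly added disk.
# /dev/sdb, vg_data, lv_data and /srv/data are example names only.

umount /srv/data                        # take the filesystem offline (skip for online resize)
pvcreate /dev/sdb                       # initialise the new disk as a physical volume
vgextend vg_data /dev/sdb               # add the PV to the existing volume group
lvextend -L +50G /dev/vg_data/lv_data   # grow the logical volume by 50 GiB
e2fsck -f /dev/vg_data/lv_data          # a clean fsck is required before an offline resize
resize2fs /dev/vg_data/lv_data          # grow ext4 to fill the enlarged LV
mount /srv/data                         # bring it back online
```

For an online resize you would leave the filesystem mounted, skip the fsck, and run resize2fs against the mounted volume; the offline route above trades downtime for the safety dt64 describes.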
Don't really want to stop using LVM.
Copying data across disks and remounting would take ages as you say.
But twice I've had it cause problems.
Most recently with a striped lvm volume.
Left feeling there must be a better way of doing this.
The process for growing disks seems simpler and less problematic on that other platform (can it be mentioned?!)
Quote:
Originally Posted by dt64
Without LVM you are bound to your HDDs' capacity, you can't have greater volumes than your largest HDD.
SNAPSHOTS! Without LVM you can't use snapshots: for backup, saving capacity, roll back options, test systems, system or data clones...
Sort of. I'm pretty sure that you can do any of these things with BTRFS or ZFS.
OTOH, these filesystems are a bit like LVM and a filesystem rolled into one, so, if the objective is to get rid of LVM, then getting something else that is like LVM-with-something-else may not be the win that you'd want.
On the other, other, hand, the command line interface to ZFS is particularly clean, and that might make its use less error prone.
And you could argue that both BTRFS and ZFS have disadvantages that you'd rather not have. BTRFS is a bit immature, and that may make it unsuitable for your application. ZFS has a licence that is incompatible with incorporation into the mainline kernel, although there is a userspace version and a less mature ZFS-on-Linux port. And ZFS performance can be lower (...or not, as usual, depending on exactly what you test...).
It also somewhat breaks the 'lots of small programs that each do exactly one thing well' part of the Unix philosophy (though I know of no one who has used it seriously and still holds this objection); it's a big program that lumps LVM-and-filesystem together and rewrites how they work in the interests of a clean interface. If you're happy with things working more like Solaris, that might not be an issue for you.
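To illustrate the "particularly clean" ZFS command line mentioned above, here is a short taste. Pool and dataset names are invented for the example:

```shell
# Create a mirrored pool from two disks, carve out a dataset,
# then snapshot and roll back in single commands.
zpool create tank mirror /dev/sdb /dev/sdc   # pool 'tank' mirrored across two disks
zfs create tank/data                         # dataset, auto-mounted at /tank/data
zfs snapshot tank/data@before-upgrade        # instant, atomic snapshot
zfs rollback tank/data@before-upgrade        # revert the dataset to the snapshot
zfs list -t snapshot                         # show existing snapshots
```

Compare that with the pvcreate/vgextend/lvextend/resize2fs chain LVM needs for an equivalent change; fewer moving parts is exactly what makes it less error prone.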
Quote:
Originally Posted by dt64
SNAPSHOTS! Without LVM you can't use snapshots: for backup, saving capacity, roll back options, test systems, system or data clones...
Be cautious with the 'for backups' part. Snapshots protect against the 'user deleted it, and has now realised how wrong that was' problem, to the extent any backup can: the frequency needs to be high enough (and it is easy to run snapshots at high frequency), though if the user genuinely is the better idiot of legend, this can always be overcome. But you also need to think about the 'one disk goes bad' problem; snapshots on the same disks don't protect against that, so you still need real backups as well as the 'nearly backups' offered by LVM (etc.).
On the backup topic I believe you took me slightly wrong, or maybe I didn't word it right.
I didn't mean that snapshots are The Backup Solution as such, but that they can be used for taking backups; e.g. you could dd a whole live partition even while the system is running and writing to it. And of course, it all depends on the exact use case.
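A minimal sketch of that snapshot-then-dd approach (VG/LV names and the backup path are placeholders; the snapshot size only needs to hold changes written during the backup):

```shell
# Take a copy-on-write snapshot, image it while the system runs, then drop it.
lvcreate -s -L 5G -n lv_data_snap /dev/vg_data/lv_data   # 5 GiB reserved for CoW changes
dd if=/dev/vg_data/lv_data_snap bs=4M status=progress | gzip > /backup/lv_data.img.gz
lvremove -f /dev/vg_data/lv_data_snap                    # remove snapshot once imaged
```

The snapshot freezes a point-in-time view, so the dd sees a consistent image even though the origin LV keeps taking writes; if the snapshot fills up before the backup completes, it is invalidated, so size it generously.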
If you have a reasonable disk setup (e.g. with hotswap), you can make whole drives into PVs, move the extents off a drive (into free space on other disks in the VG) and replace it... just one example of where LVM is a plus. So, yes: certainly for growing out of a VG pool and dynamically expanding areas, but it can also allow for live disk replacement in certain scenarios.
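That live-replacement dance is just three commands (device and VG names are examples):

```shell
# Evacuate a disk while the system stays up, assuming the VG has
# enough free extents on its other PVs to absorb the data.
pvmove /dev/sdb1            # migrate all extents off sdb1 to free space in the VG
vgreduce vg_data /dev/sdb1  # remove the now-empty PV from the volume group
pvremove /dev/sdb1          # wipe the LVM label; the disk can now be pulled
```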
Thanks for all the responses. Interesting stuff for me.
In my virtual environment (vmware), some of the servers have striped volume groups in LVM.
Not sure why they were set up like this. These servers are running databases, but I can't think of any performance advantage to striping across virtual disks. When it comes to expanding these volumes, it looks like historically someone has just added more virtual disks to the volume group. I'm starting to think it's this that is causing the problems.
Some of these virtual disks are on different datastores. I've not seen this configuration much; usually it's a single virtual disk which is grown as needed. I'm going to put up a test server without the multiple disks spanning datastores. Am I barking up the wrong tree?
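For what it's worth, the single-virtual-disk approach keeps the growth path short. After enlarging the virtual disk in vSphere, something like the following picks up the new size from inside the guest (sdb and the VG/LV names are hypothetical):

```shell
# Make the guest kernel notice the grown virtual disk, then grow PV, LV
# and filesystem in one pass. Names below are examples only.
echo 1 > /sys/class/block/sdb/device/rescan    # rescan the SCSI device's size
pvresize /dev/sdb                              # grow the PV to match the disk
lvextend -r -l +100%FREE /dev/vg_data/lv_data  # grow LV; -r also resizes the filesystem
```

With one PV per VG there is no striping and no dependency on multiple datastores, which may be exactly the simplification you're after.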
I tend to agree with the OP that LVM adds a layer of complication to a VM that can be more trouble than it is worth. I have also had problems with LVMs where had I been using physical partitions or virtual disks I would not have had the same issues. In my case it was easier to rebuild the system without LVM. In most VM environments I have worked with the hypervisor can manage the disks.