LVM: more than 405 LVs and they start disappearing
(I'm trying to assist a coworker at a different location, so my description may not be 100% accurate.) This is on a SuSE 9.3 system with LVM2.
We need to create a system with about 450 logical volumes (the application requires LVs). Because of an earlier problem with LVM and >64K metadata, we broke the config into two 'partitions' (actually RAID logical drives, but don't worry about that) and created a Volume Group on each partition/Physical Volume. We have scripts that create the LV, one VG at a time.

The first VG goes fine, with about 240 volumes created. The 2nd VG seems to go OK also, creating its 215 volumes. But upon rebooting (or running vgchange -ay), many of the LVs in the first VG have disappeared. They are not in /dev/vg1 and the entries are not in /dev/mapper. Other testing has indicated that the VGs are actually scanned in 'reverse' order and it's the last VG scanned that gets 'truncated'. Looking at the mapper minor numbers, it looks like "something" happens at about LV 405. Any LVs after that do not get activated. We can manually activate them (lvchange) but then some other LVs disappear.
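To see exactly which LVs went missing, it helps to diff the names LVM knows about against what actually got activated. Here is a minimal sketch of that comparison logic (not from the thread): on the real system the two lists would come from something like `lvs --noheadings -o lv_name vg1` and `ls /dev/mapper`, but the function takes them as plain arguments so the logic stands alone.

```shell
# Which LV names are defined but not active?  Inputs are newline-separated
# lists; comm -23 prints lines present only in the first (sorted) list.
missing_lvs() {
  comm -23 <(printf '%s\n' "$1" | sort) <(printf '%s\n' "$2" | sort)
}

# With made-up names: lv002 is defined but never activated.
missing_lvs $'lv001\nlv002\nlv003' $'lv001\nlv003'   # prints lv002
```

Each name this prints is a candidate for a manual `lvchange -ay vg1/<name>`, which is what the poster reports doing by hand.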
Anyone seen anything like this?? Any suggestions?
Thanks....
It sounds like you're pretty familiar with LVM, so I hate to state something that is probably obvious, but I will anyway!
You *did* name each volume group uniquely, didn't you? They weren't BOTH named "vg1" by mistake?
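One quick way to rule that out is to check the name lists for duplicates. This is a generic sketch, not a command from the thread: in practice you would feed it the output of `vgs --noheadings -o vg_name` (or the LV names from both VGs); empty output means every name is unique.

```shell
# Print any name that appears more than once in the argument list.
dupes() { printf '%s\n' "$@" | sort | uniq -d; }

dupes vg1 vg2        # no output: names are unique
dupes vg1 vg1 vg2    # prints vg1
```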
I'm not too sure what you mean by this comment, so I'll ask:
Quote:
Originally Posted by mhammock
We have scripts that create the LV, one VG at a time.
This almost implies to me that LVs contain VGs ... which is backwards. Maybe you just meant "create the LVs (plural), one VG at a time"? I'm not trying to imply lack of knowledge on your part, just looking for clarification. If you have something set up with 405 LVs you're WAY more advanced than I am with LVM. I basically use LVM with one LV per mountpoint, which generally equates to one LV per filesystem, but I do have a few exceptions. I have nowhere near 405 filesystems!
I asked my coworker to verify that all LV names are unique and he said "yes, they are". I may have him send me a copy of the scripts to make sure. From his other info, I can verify the two VGs are definitely uniquely named.
And speaking of the scripts... yes, I should have made it clearer: the scripts (one per VG) create lots of LVs (lvcreate) within each VG.
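For reference, a per-VG creation script of the kind described would be a simple lvcreate loop. This is a hedged sketch only: the LV naming scheme and 64M size are assumptions, not the poster's actual values, and the commands are printed here rather than executed (pipe the output to `sh` to run them for real, as root).

```shell
# Generate the lvcreate commands for one VG; names lv001, lv002, ... are
# made up for illustration, as is the -L 64M size.
make_lv_cmds() {
  local vg=$1 count=$2
  for i in $(seq 1 "$count"); do
    printf 'lvcreate -L 64M -n lv%03d %s\n' "$i" "$vg"
  done
}

make_lv_cmds vg1 3
# prints:
#   lvcreate -L 64M -n lv001 vg1
#   lvcreate -L 64M -n lv002 vg1
#   lvcreate -L 64M -n lv003 vg1
```

Echoing the commands first also makes it easy to count them and confirm each script really targets its own uniquely named VG before anything is created.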