Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
If I don't change anything, the system installs with no problems. But, if I attempt to change the swap space to match my RAM of 8GB, then I get the following message:
Not enough space
The current request size (5709984.00MB) is larger than the maximum logical volume size (2097152.00MB). To increase this limit you can create more Physical Volumes from unpartitioned disk space and add
them to this Volume Group.
What??? If I leave everything alone, it creates a logical volume size of 5709984.00 MB. Last time I checked 5709984 was larger than 2097152. By Physical Volumes, does it want me to break up /dev/sdb into /dev/sdb1, /dev/sdb2, /dev/sdb3, etc.?
Does anyone have an idea what is going on here? Why does it work if I accept the defaults, but not when I try to change something?
Looks to me like it created swap on its own logical volume. How are you trying to partition? Is CentOS already installed? Did you already allocate all the available space to partitions? What do you get with the vgdisplay command?
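For reference, here is a quick sketch of how you can inspect the volume group from a root shell once LVM is set up (the volume group and LV names below, VolGroup00/LogVol01, are just the CentOS installer defaults; yours may differ, check with vgs/lvs):

```shell
# Show all volume groups: PE size, total extents, and free extents.
vgdisplay
# Key fields in the output:
#   PE Size          - size of each physical extent
#   Total PE         - number of extents in the volume group
#   Free  PE / Size  - unallocated space left for new LVs
# Show details for one logical volume (size, segment layout):
lvdisplay /dev/VolGroup00/LogVol01
```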
I could not create a boot partition on a 5TB partition, so I created one small 1GB drive and let the system break it up into a 100MB /boot, leaving the other 900MB empty. The 5TB second drive seems to have a problem with an LVM created using sdb1 at 5720959MB. So I think what the message means is that I need to break sdb1 up into several parts (sdb1, sdb2, sdb3), each with a max of 2097152.00MB. I am just guessing on this and will have to try it; I'll let you know the results. I still don't understand why it works if I leave everything alone, unless under the covers it is breaking it up and just not showing me.
As it turns out, it really doesn't matter how I break up /dev/sdb; what matters is what happens when I create the LVM. If I create an LVM mount point of 5719840MB, the installer complains and says I can't do that; it says I can only create a mount point of 2097152MB. However, if I just accept the defaults, the installer itself creates a mount point of 5719840MB. Why? What is the difference between the installer creating a 5.45TB mount point and me creating one? Is this just an edit check left over from smaller drives?
I found a solution. trickykid, you are correct for the Linux 2.4.x kernel: 2.1TB. However, for a 32-bit 2.6.x kernel the logical volume size limit is 16TB, and for a 64-bit 2.6.x kernel it is 8EB (EB = Extremely Big, lol). I am installing CentOS 5.2 x64. I solved my problem by adjusting the PE size from 32MB to 128MB, and I was then able to create a 5.4TB logical volume. It seems that when you attempt to edit the default LVM on an initial install, it does not display the correct PE value: it showed a PE size of 32MB, which cannot be correct; it should have shown 128MB, since it had created a 5.4TB LV. To limit the Linux kernel's memory usage, there is a limit of 65,536 physical extents (PE) per logical volume (LV). Hence, the LVM PE size directly determines the maximum size of a logical volume! I found my answer by reading the following link ... http://www.walkernews.net/2007/07/02...-volume-in-lvm
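The arithmetic behind this checks out: with a 65,536-extent-per-LV limit, the maximum LV size is simply PE size x 65,536. A quick shell sketch using the 32MB and 128MB PE sizes from the post above:

```shell
#!/bin/sh
# Max LV size (in MB) = PE size (MB) * max extents per LV
MAX_EXTENTS=65536

# 32MB PE: exactly the installer's 2097152.00MB (2TB) ceiling
echo $((32 * MAX_EXTENTS))     # 2097152

# 128MB PE: raises the ceiling to 8388608MB (8TB), plenty for a 5.4TB LV
echo $((128 * MAX_EXTENTS))    # 8388608
```

This is why accepting the defaults worked: the installer had silently picked the larger PE size for the big volume group, even though the edit screen still displayed 32MB.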