LVM - what if disk partition change causes the PV device file to change?
Hello
Suppose the first logical partition (so /dev/sda5) is the only LVM PV and that there is some space in the extended partition before the first logical partition.
What happens if a new logical partition is created in the space? Presumably the new logical partition gets /dev/sda5 and the logical partition containing the LVM PV becomes /dev/sda6. If that happens would it break LVM?
The question arises because I'm planning partitioning for a 320 GB disk and would like to maximise flexibility. LVM will work fine at the top of the disk; I can't find the Windows XP disk layout restrictions but recall some problems with big disk addresses during boot -- or was that only with NT4 and W2K?
Is there somewhere I can learn how LVM initialises at boot so I could answer my own question?
Most puzzling is how the booting Linux kernel knows to start LVM and how LVM knows where to find the metadata. According to RHEL documentation the "lvm.conf configuration file is loaded from the directory specified by the environment variable LVM_SYSTEM_DIR, which is set to /etc/lvm". How could that work when root is an LVM volume?
I'm not sure about Red Hat, as I'm not familiar with its init scripts, but LVM itself won't care which block device it finds the PVs on: it scans all of them looking for PVs when it initialises (subject to the filters you set in lvm.conf).
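As a concrete illustration of those filters, the devices section of lvm.conf takes a list of accept/reject regex patterns; the device paths below are examples only, not taken from any real config:

```
# /etc/lvm/lvm.conf -- illustrative fragment
devices {
    # reject the CD drive, accept every other block device
    filter = [ "r|/dev/cdrom|", "a|.*|" ]
}
```

Because the accept pattern matches everything else, a PV that moves from /dev/sda5 to /dev/sda6 is still found on the next scan.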
I've made some alterations to my Slackware init scripts to let them cope with more than one encrypted physical volume (they don't out of the box), and when the device names changed as a result, it didn't bat an eye.
Unless Red Hat does something 'clever' during its init that I'm not aware of, adding a new logical partition and your existing one changing from hda5 to hda6 shouldn't be a problem.
As for the reading: I picked up what I know from dipping into the LVM manual pages, plus prior knowledge of IBM's AIX LVM, which is almost identical.
If / is on an LVM LV then your ramdisk (initrd/initramfs) will need to run the vgscan --mknodes and vgchange -ay commands to make the LVM devices available before trying to mount the real root filesystem.
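A minimal sketch of that part of an initramfs init script (the VG/LV names and mount point are invented for illustration, and a real script needs the LVM tools present in the ramdisk):

```
# Fragment of an initramfs 'init' script -- illustrative sketch only
vgscan --mknodes                 # scan block devices for PVs, create device nodes
vgchange -ay                     # activate every volume group found
mount /dev/rootvg/rootlv /mnt    # 'rootvg'/'rootlv' are made-up names
```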
So the booting kernel, having scanned the hardware and created device files in the RAM file system, then scans the block devices looking for LVM metadata. That's a nicely robust system, and presumably one of the reasons why /dev ends up being the mount point for the udev file system.
So that's why the vgscan man page says "In LVM2, vgscans take place automatically; but you might still need to run one explicitly after changing hardware" and "--mknodes Also checks the LVM special files in /dev that are needed for active logical volumes and creates any missing ones and removes unused ones."
Thanks for the education. Now I'll go right ahead and try to break the system (joking!). It's Ubuntu, BTW. The RHEL LVM documentation turned up in a net search that ignored Linux flavour.
Best
Charles
So the booting kernel, having scanned the hardware and created device files in the RAM file system then scans the block devices looking for LVM metadata.
Pretty much, yes.
The bootloader/kernel will load the ramdisk and use that as the temporary root filesystem. It then executes a shell script in the ramdisk named 'init'. The 'init' script then does the minimum needed to mount the real root filesystem. This includes loading any needed kernel modules, creating the block devices for the disks/partitions, and a 'vgscan --mknodes' to set up the LVM devices.
Once the real root filesystem has been mounted over the ramdisk, the ramdisk's /dev directory contents will no longer be available, so the system init script (rc.S in Slackware) called by the init process then has to go through the whole process of populating /dev again. This is where udev gets involved, and there'll be a second run of vgscan to set up the devices for the LVM objects again.
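A sketch of that second pass as it might appear in an rc.S-style script (illustrative only; real scripts add error handling and distro-specific details):

```
# Fragment of a system init script (rc.S-style) -- illustrative sketch only
mount -t proc proc /proc
mount -t sysfs sysfs /sys
/sbin/udevd --daemon      # start udev to repopulate /dev on the real root
vgscan --mknodes          # recreate the LVM device nodes under /dev
vgchange -ay              # make sure all volume groups are active
```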
It's worth pointing out that, if you've only got a partial LVM setup and your root filesystem is not on an lvm logical volume, the lvm processing in the ramdisk isn't needed at all and creating the lvm devices will be left until the real system init scripts run.
On my systems I prefer to have only /boot on a real partition and everything else in LVM LVs, including the root fs. Some people advocate leaving the root fs outside LVM to make recovery easier should you encounter problems with your LVM. Doing that also removes the need for a ramdisk (assuming you don't need one for reasons other than LVM).
Anyways, getting away from the point now, and I've written far more than you probably care to read. Good luck with your Ubuntu tinkering!
You certainly haven't written more than I care to read; it is a rare opportunity to learn how LVM actually works. This is invaluable information when diagnosing obscure problems, when recovering from disasters without doing a full system restore, and when designing layouts and procedures (knowing the limits of what is possible).
The LVM documentation is good for HOWTOs and the background to them but short on explanations of the initialisation process.
If you are inclined to answer some further questions to test my understanding ...
Where does the initial init script come from before the real / is mounted?
Presumably changing from the RAM / to the LVM root volume requires a chroot (I'd often wondered where that came in useful apart from for security, such as that used by FTP servers).
Is it not possible (it would be more efficient) to copy the RAM /dev rather than re-scanning? That's a polite way of asking if you are sure this is not what actually happens!
FYI, on Debian systems (including Ubuntu) the kernel runs init (/sbin/init, an ELF executable; since that isn't possible until the LVM / is mounted, perhaps the ramdisk has its own copy, or it waits until the LVM / is mounted), which runs /etc/init.d/rc (a shell script), which in turn runs the scripts pointed to by symlinks in the appropriate /etc/rc<n>.d directory.
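That rc mechanism is easy to demonstrate: it just runs the S* scripts for the runlevel in glob (hence numeric) order. Here is a toy sketch with invented script names in a scratch directory, standing in for /etc/rc<n>.d:

```shell
#!/bin/sh
# Toy demonstration of rc-style ordering: fake 'S<nn>name' scripts are
# created in a scratch directory and run in glob order, much as
# /etc/init.d/rc does for /etc/rc<n>.d. All names here are invented.
RCDIR=$(mktemp -d)
for n in S10fake-net S20fake-lvm S30fake-sshd; do
    printf '#!/bin/sh\necho "%s $1"\n' "$n" > "$RCDIR/$n"
    chmod +x "$RCDIR/$n"
done
out=$(for script in "$RCDIR"/S*; do "$script" start; done)
printf '%s\n' "$out"
# prints the three names with "start", in S10, S20, S30 order
rm -rf "$RCDIR"
```

The two-digit prefix is what gives the admin control over ordering; renaming S20fake-lvm to S05fake-lvm would move it to the front.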
There are no /etc/rc*.d/*lvm* symlinks, so presumably all the LVM setup has already been done and the Slackware procedure you describe (the system init script (rc.S in Slackware) called by the init process then has to go through the whole process of populating /dev again, with udev getting involved and a second run of vgscan to set up the devices for the LVM objects) is not used?
If you are inclined to answer some further questions to test my understanding ...
I'll do my best. Bear in mind I'm just a user like you. This is based on my understanding of how it works, which may differ from actuality and shouldn't be taken as absolute gospel.
Quote:
Originally Posted by catkin
Where does the initial init script come from before the real / is mounted?
The ramdisk is populated from an archive file, and the bootloader config identifies the file. With lilo, which is the bootloader I use, the entry names that archive in lilo.conf.
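For reference, a lilo.conf stanza naming an initrd generally looks something like this (the label, paths, and VG/LV names are illustrative, not taken from a real config):

```
# /etc/lilo.conf -- illustrative entry
image = /boot/vmlinuz
  initrd = /boot/initrd.gz
  root = /dev/rootvg/rootlv    # invented VG/LV names
  label = linux
  read-only
```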
Quite how the bootloader and kernel interact internally to load the ramdisk I'm not sure, but that's how you tell it which file to use. Grub has something similar, with slightly different syntax.
Quote:
Originally Posted by catkin
Presumably changing from RAM / to the LVM root volume requires a chroot (I'd often wondered where there came in useful apart from for security such as used by FTP server)
In the Slackware scripts it calls a binary called switch_root, which I'm guessing is a specialised equivalent of chroot that also executes /sbin/init:
Code:
exec switch_root /mnt /sbin/init $RUNLEVEL
Quote:
Originally Posted by catkin
Is it not possible (it would be more efficient) to copy the RAM /dev rather than re-scanning? That's a polite way of asking if you are sure this is not what actually happens!
It's an interesting point, but I'm guessing there's not much overhead in rescanning sysfs for the devices, so they don't bother copying them. All I can say for certain is that I saw no evidence of any copying in the scripts I looked at.
Quote:
Originally Posted by catkin
There are no /etc/rc*.d/*lvm* symlinks so presumably all the LVM setup has already been done and the Slackware procedure you describe (the system init script (rc.S in Slackware) called by the init process will then have to go through the whole process of populating /dev again. This is where udev gets involved, and there'll be a second run of vgscan to setup the devices for the lvm objects again.) is not used. ???
Debian systems take a different approach to init scripts than Slackware, so I can't say for certain how they do it; it's been a good 10 years or so since I last ran Debian. Having said that, I'd still expect Debian to run vgscan etc. before it gets to the runlevel-specific rc<n>.d stuff, so although there's no rc.lvm anywhere, it's probably included in whatever script is Debian's equivalent of rc.S. If you really want to dig into this, your best bet is to grep the scripts mentioned in /etc/inittab for vgscan; you'll probably find it in one of them.
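That hunt is just a grep. Here's a self-contained toy version with invented file names and contents; on a real system you would point grep at the scripts inittab mentions (e.g. under /etc/init.d) instead of a scratch directory:

```shell
#!/bin/sh
# Toy version of "grep the init scripts for vgscan", using invented files.
D=$(mktemp -d)
printf '#!/bin/sh\n/sbin/vgscan --mknodes\n' > "$D/rc.sysinit"
printf '#!/bin/sh\necho networking up\n'     > "$D/rc.network"
hits=$(grep -l vgscan "$D"/*)    # -l lists only the files that match
printf '%s\n' "$hits"            # here: only the rc.sysinit fake script
rm -rf "$D"
```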
Thank you for all your help; it has been enlightening. Ubuntu 8.04 (Debian-based) does not have an /etc/inittab, although some of the scripts are coded to use it if it exists. I think the replacement is /etc/event.d, but that has no scripts that call vgscan. My working hypothesis (don't you just love reverse engineering?!) is that all the LVM work is done before the init scripts are run.
Regardless of the details the essential concept of the booting system scanning the block devices for LVM metadata holds good; what a nice design decision, creating robustness and flexibility.
As a minor aside, I've often wondered what "rc" means in the UNIX lexicon but have never seen an explanation. Presumably it stems from the earliest, terse days when user IO was slow (think teletype!). Perhaps it's a contraction of a contraction, along the lines of cfg and conf meaning configuration. Maybe rc = "run cfg" = "run configuration".
A quick google for "unix rc files" will give you your answer. As it happens, you weren't that far off.