Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-to's, this is the place!
My data resides in a partition, sda2, in a logical volume, lv_root.
Unless I'm wrong, lv_root contains the information on how to load the partition,
so superficially it seems the partition must be loaded before we get the info on how to load it.
Obviously I'm confused.
Maybe someone could point me to documentation that explains this? Or educate me a bit?
As you might surmise, you are wrong. I'm not clear on what you mean by lv_root containing information on how to load; that doesn't mean much to me. On boot, the partition is recognised as an LVM partition, the metadata inside it is read accordingly, and the physical volumes are assembled into volume groups. Once those are established, the logical volumes within them are recognised and from that point on treated as normal filesystems ready for mounting. There is no info required on how to load it. As long as the partition type in the partition table says it's LVM (or actually, even if not, as it can still be scanned to see if it's a known partition type), the LVM logic is followed and everything (should) fall into place.
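The assembly described above can be watched on a running system. A quick sketch (needs root and an actual LVM setup; the vgz00/lv_root names are just examples, substitute your own):

```shell
# Inspect how LVM stitched things together (run as root on an LVM system).
pvs            # physical volumes: which partitions carry LVM metadata
vgs            # volume groups assembled from those PVs
lvs            # logical volumes carved out of each VG
pvscan         # the same scan the boot-time code performs to find PVs
lvdisplay /dev/vgz00/lv_root   # details of one LV (names are examples)
```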
Dick, what distro are you using? Aha! You're on Fedora. I think there are some subtle differences between distros.
acid_kewpie is correct, but he didn't back up all the way to the beginning. I'm not going to either. But fast-forward from power-on: your boot loader loads your kernel; the initrd is loaded into RAM and mounted (initial RAM disk - it's an abbreviated filesystem in RAM); an initialization file (possibly an ash shell script, possibly called "init") is passed control; initialization is done; eventually your real disk root filesystem is mounted and chroot'd to; and then /etc/fstab can finally be used to mount everything else. (Even here it differs a little based on whether you have a System V or a BSD style of initialization - Fedora is System V.)
Not terribly technically accurate, but you get the picture. You can actually "blow up" your initrd and look at what is inside of it. It differs from distro to distro and has even evolved over time within some distros (Fedora/Red Hat/CentOS) as to how it was actually done. So it's easy to be general and vague.
Last edited by tommylovell; 08-27-2010 at 03:09 PM.
I was going to say "here are some of the gory details", but actually this is pretty elegant - and you have full control over what happens, so if you were developing an embedded system you could target this to your hardware and not have to be so generalized...
'init' gets passed control after the RAMdisk is loaded and mounted. Here are its contents. It is a nash script.
Code:
[root@athlonz initrdcontents]# cat init
#!/bin/nash
mount -t proc /proc /proc
setquiet
echo Mounting proc filesystem
echo Mounting sysfs filesystem
mount -t sysfs /sys /sys
echo Creating /dev
mount -o mode=0755 -t tmpfs /dev /dev
mkdir /dev/pts
mount -t devpts -o gid=5,mode=620 /dev/pts /dev/pts
mkdir /dev/shm
mkdir /dev/mapper
echo Creating initial device nodes
mknod /dev/null c 1 3
mknod /dev/zero c 1 5
mknod /dev/systty c 4 0
mknod /dev/tty c 5 0
mknod /dev/console c 5 1
mknod /dev/ptmx c 5 2
mknod /dev/fb c 29 0
mknod /dev/tty0 c 4 0
mknod /dev/tty1 c 4 1
mknod /dev/tty2 c 4 2
mknod /dev/tty3 c 4 3
mknod /dev/tty4 c 4 4
mknod /dev/tty5 c 4 5
mknod /dev/tty6 c 4 6
mknod /dev/tty7 c 4 7
mknod /dev/tty8 c 4 8
mknod /dev/tty9 c 4 9
mknod /dev/tty10 c 4 10
mknod /dev/tty11 c 4 11
mknod /dev/tty12 c 4 12
mknod /dev/ttyS0 c 4 64
mknod /dev/ttyS1 c 4 65
mknod /dev/ttyS2 c 4 66
mknod /dev/ttyS3 c 4 67
/lib/udev/console_init tty0
daemonize --ignore-missing /bin/plymouthd
plymouth --show-splash
echo Setting up hotplug.
hotplug
echo Creating block device nodes.
mkblkdevs
echo Creating character device nodes.
mkchardevs
echo "Loading raid1 module"
modprobe -q raid1
echo "Loading raid456 module"
modprobe -q raid456
echo "Loading sata_nv module"
modprobe -q sata_nv
echo "Loading pata_acpi module"
modprobe -q pata_acpi
echo "Loading ata_generic module"
modprobe -q ata_generic
echo Making device-mapper control node
mkdmnod
modprobe scsi_wait_scan
rmmod scsi_wait_scan
mkblkdevs
mdadm -As --auto=yes --run /dev/md0
echo Scanning logical volumes
lvm vgscan --ignorelockingfailure
echo Activating logical volumes
lvm vgchange -ay --ignorelockingfailure vgz00
resume UUID=b827e20f-db5b-47b2-be38-17489895a0ad
echo Creating root device.
mkrootdev -t ext3 -o defaults,ro UUID=54716048-c91b-4f48-b37e-fb09ce21e412
echo Mounting root filesystem.
mount /sysroot
cond -ne 0 plymouth --hide-splash
echo Setting up other filesystems.
setuproot
loadpolicy
plymouth --newroot=/sysroot
echo Switching to new root and running init.
switchroot
echo Booting has failed.
sleep -1
[root@athlonz initrdcontents]#
'switchroot' chroot's you over to the disk-based root filesystem.
According to the 'nash' manpage:
Quote:
switchroot newrootpath
Makes the filesystem mounted at newrootpath the new root filesystem by moving the mountpoint. This will only work in 2.6 or later kernels.
My assumption is that once we are chroot'd (switchroot'd?), /etc/rc.sysinit is given control, and it continues the system initialization.
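For the curious, what switchroot does per that manpage description can be approximated in plain shell. This is a sketch of the idea only, not something to run on a live box (outside an initramfs it will break things):

```shell
# Roughly the moral equivalent of nash's switchroot on a 2.6 kernel.
cd /sysroot
mount --move . /        # move the mountpoint, as the nash manpage says
chroot . /sbin/init     # hand control to the real init on the disk root
```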
It should be noted that the initrd is basically only there to provide enough support to mount the real (final) root. That could mean esoteric hardware drivers, or esoteric software drivers - LVM, for example.
So the initial premise was correct - in-kernel LVM support is insufficient to mount the root. The smarts have to be loaded beforehand - in the initrd. Then the real init is run.
(I'm not a great fan of LVM)
syg00
I haven't wanted to be too open about it, but it seems to me that LVM is an idea whose time has not come - at least for casual desktop users.
For someone running a multi-drive system with changing needs - maybe - but I'm not even sure of that.
One thing is clear - it ups the ante on the arcane knowledge needed to manage one's machine.
My impression is that the free Fedora thing is to get a group of beta test users to shake out stuff that is aimed at the big system users. In this context it may make sense.
If it weren't for LVM, in no time and with not much knowledge - using fdisk and tar - I believe I could have cloned my system onto a smaller disk (boot from CD, partition my target disk, tar-pipe-tar the two partitions. Done.) As it is, it's still an elusive goal.
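For a non-LVM system, the fdisk-and-tar clone described above could look roughly like this (a sketch: the /dev/sda*//dev/sdb* device names and the two-partition layout are examples; run it from a boot CD with the source filesystems mounted read-only):

```shell
# Sketch of cloning a two-partition system to a smaller disk.
fdisk /dev/sdb                          # partition the target interactively
mkfs -t ext3 /dev/sdb1                  # recreate the filesystems
mkfs -t ext3 /dev/sdb2
mkdir -p /mnt/src /mnt/dst
for p in 1 2; do
    mount -o ro /dev/sda$p /mnt/src     # source read-only
    mount /dev/sdb$p /mnt/dst
    ( cd /mnt/src && tar cf - . ) | ( cd /mnt/dst && tar xpf - )
    umount /mnt/src /mnt/dst
done
# then reinstall the boot loader on /dev/sdb (e.g. grub-install /dev/sdb)
```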
There is no requirement to use LVM - even on Enterprise systems. Just be aware of what the installer is going to use if you allow it to default.
Even on big systems I prefer to have all my "system" data (anything I need to recover the system) on non-LVM. The user data can go there if they want.
Kenny, that's the init executable in your real /sbin on your real disk root filesystem.
The init I showed was the one that is embedded in the initial RAMdisk (initrd). They just happen to have the same name. I'm not certain why the Fedora team chose to name it that; it is clearly confusing. It would have made more sense to call it 'init.sh' or something like that to differentiate it from the 'init' executable.
Code:
[root@athlonz ~]# file /sbin/init
/sbin/init: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped
[root@athlonz ~]# cd initrdcontents/
[root@athlonz initrdcontents]# file init
init: a /bin/nash script text executable
[root@athlonz initrdcontents]#
btw, I think the init nash script in initrd had a different name once upon a time. Maybe back in the 2.4 kernel days.
syg00 and rmknox, LVM2 is improving. It still has bugs. LVM1 was hopeless. Just hopelessly bug ridden.
But even with its warts LVM2 is quite useful at times.
Case 1: one of my home machines - Fedora - multiple drives that I have in RAID1 pairs.
To replace a pair of RAID1 drives: physically add the new drives; partition the new ones; set them up as RAID1; pvcreate the new RAID md device; vgextend the root VG; pvmove the old md device to the new one; install grub on the new drive; vgreduce the root VG to remove the old md device; pvremove the old md device; physically remove the old drives; ta-da, you didn't have to reinstall. A non-boot disk is even easier.
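Case 1 above, step by step, might look like the following. This is a sketch only: the device names, md numbers, and the VG name vgz00 are illustrative, and pvmove on a root VG should only be attempted with good backups:

```shell
# Replace an old RAID1 pair (/dev/md0) with a new one, no reinstall.
# (Assumes the new drives /dev/sdc and /dev/sdd are already partitioned.)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
pvcreate /dev/md1                 # make the new array an LVM physical volume
vgextend vgz00 /dev/md1           # add it to the root volume group
pvmove /dev/md0 /dev/md1          # migrate every extent, online
grub-install /dev/sdc             # make the new drives bootable
vgreduce vgz00 /dev/md0           # drop the old PV from the VG
pvremove /dev/md0                 # wipe its LVM label
mdadm --stop /dev/md0             # then the old drives can come out
```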
Case 2: At work I have a system with 3.1TB of disk on 127 26GB SAN LUNs.
Upside: you can create some really large filesystems.
Downside: the LVM metadata is written on every LUN. When you make changes or even just do an 'lvs' command it can take a considerable amount of time (I think it has to validate the metadata on every physical volume - all 127 of them.)
Like I said, it's got warts.
(Yes, I know that you don't HAVE to put metadata on every drive, but in this case I think you do. What if two LUNs were to come up in a different order? ...)
Last edited by tommylovell; 08-28-2010 at 12:00 AM.
The dm (lack of) integration with LVM was what annoyed me. And the apparent design presumption that the user would only ever be expanding allocations.
Things are better, but I just think it's bad design to lump another block device layer on top of what's already there. I wonder if recent changes are in response to things like zfs and btrfs - the latter has a way to go still, but has to be a better option.
IMHO ...
I want to replace my larger-than-needed single hard drive with a moderately oversized hard drive.
While I haven't worked through all the steps yet, it sounds like Case 1 above is a model for what I want to do.
Am I right?
Dick
tommylovell
Your two examples seem to confirm my impression - that this tool fills a need for sophisticated installations (RAID / terabyte) and that that need may not exist for simple "sit at the command line and edit - compile - test programs" systems.