Slackware: This forum is for the discussion of Slackware Linux.
Hi!
I intend to install Slackware-13.0 using an LVM partitioning scheme. My root partition is going to be on LVM too. I have 2 preliminary questions.
1. I have a single hard drive, sda. It is usually recommended to create a single partition encompassing the whole drive, i.e. sda1, and then use it as the physical volume for the LVM volume group. However, the LVM HOWTO says there should be no problem using the whole hard drive directly as a physical volume, i.e. instead of running
Code:
pvcreate /dev/sda1
running:
Code:
pvcreate /dev/sda
My question is: will this work, bearing in mind that the root filesystem will also be on a logical volume?
2. Everywhere it is said that to have /boot (or /) on LVM, one has to use an initrd. If I build a custom kernel with LVM and device mapper support built in, will I still need to create and use an initrd?
I could find the answers to these questions through trial and error, but I would be glad if someone could shed some light on these issues and save me some unpleasant experiences.
If I remember correctly, this is not possible...
I use LVM as well on my machine, and the LVM partitions can only be mounted later on in the boot process (in /etc/rc.d/rc.S).
So you will need to have / mounted before you can mount the LVM volumes.
But, maybe there is a way... won't be simple though.
AIUI, /boot cannot be on an LVM volume because grub doesn't have LVM capability but needs to read from /boot.
/ can be an LVM volume (my ubuntu 8.04 system has it) but I see little advantage in it.
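The usual workaround for that limitation is a small non-LVM partition for /boot and the rest of the disk as the physical volume. A sketch, with device names and sizes as assumptions rather than anything from this thread:

```shell
# /dev/sda1: small (~100 MB) ext2/3 partition, mounted as /boot,
#            readable by lilo/grub without any LVM support
# /dev/sda2: the rest of the disk, handed to LVM
pvcreate /dev/sda2
vgcreate HDvg /dev/sda2
lvcreate -n root -L 10G HDvg   # root can then live on LVM
```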
Quote:
Originally Posted by gegechris99
For creating LVM volumes, I just followed instructions from the README_LVM.TXT written by Alien Bob (a.k.a. Eric Hameleers) and had no problem.
Not that I don't believe you, but does it really work with the kernel and initrd.gz inside the LVM itself? I wonder if LILO has LVM features built in; otherwise it couldn't work, I guess.
EDIT: I'm using GRUB myself, just curious ...
I'm not an expert, but after creating the initrd.gz image and editing /etc/lilo.conf, you need to run lilo. At that point, lilo writes the necessary boot code to the MBR (Master Boot Record) of your machine to enable booting. The LVM stuff is stored in the initrd.gz image.
Guys, thank you for the discussion.
Nobody, however, ventured to answer either of the two questions in my first post.
Question 1: No, it's better to create the PV on the partition /dev/sda1.
Question 2: Yes, you still need to create the initrd.gz image, which will contain the code to enable LVM at boot time (that's the "-L" option of the mkinitrd command). The kernel options you mentioned let the running kernel manage LVM after boot.
I hope this clears things up.
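For reference, a minimal invocation along the lines of README_LVM.TXT might look like this. This is a sketch: the kernel version is assumed (not given in the thread), and jfs is taken from the filesystem discussed later on; adjust both to your system.

```shell
# Sketch: build an initrd with LVM support for root on /dev/HDvg/root.
# -c     start from a clean initrd tree
# -k     kernel version (an assumption here; use your own)
# -m/-f  module and type of the root filesystem (jfs, per this thread)
# -L     add the LVM tools so the initrd can activate the volume group
mkinitrd -c -k 2.6.29.6 -m jfs -f jfs -r /dev/HDvg/root -L
# then reference /boot/initrd.gz from /etc/lilo.conf and rerun lilo
```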
1. Yes, it is possible to run your root filesystem on a logical volume, as long as you do not boot from the same device that holds the root logical volume. The reason is that lilo and grub cannot read logical volumes (which ties into the answer to your question #2). Given hardware support, you could boot from another hard disk, a USB HD/CD/flash drive, or even PXE.
2. Yes, (AFAIK) you still need an initrd to have the root filesystem on a logical volume, because a userspace component (in this case, /sbin/lvm.static) is needed to make the logical volumes available to the kernel (vgscan --mknodes && vgchange -a y). Also, you can pass either the symlinked or real name of the root device to the kernel at boot.
I thought of initiating a new thread, but in view of the rather general title of this one, I think I can post my new questions here.
1. With LVM, what happens if a bad block occurs on the hard drive? With the usual fdisk partitioning, if a bad block occurs on the drive, one can fsck and repair the filesystem on the affected partition. Does the same apply to an LVM volume? To be specific, my whole disk /dev/sda will be made into a single partition /dev/sda1. It will be the only physical volume in my volume group HDvg, in which I will create several logical volumes: root, usr, home, swap and so on. If a bad block occurs, say in /dev/HDvg/usr, will I be able to repair its filesystem and restore the contents from my backup?
2. I use the jfs filesystem, as a discussion in this forum led me to the conviction that it is the best filesystem for my needs. If at some point I decide to shrink the /dev/HDvg/usr logical volume by, say, 5 GB to transfer them to /dev/HDvg/home, will I be able to do that, keeping in mind that a jfs filesystem cannot be shrunk? This is the only clue I found on the internet as to what might be done, but I do need some guru to shed more light: http://unix.derkeiler.com/Newsgroups...5-06/0761.html
If that should somehow be possible, what procedure would you recommend? Maybe I should delete /dev/HDvg/usr, expand /dev/HDvg/home by 5 GB, then create a 5 GB smaller /dev/HDvg/usr, format it with jfs and restore its contents from my backup? Or maybe I should switch to jfs2 and use chfs to shrink it?
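The delete-and-recreate route described above can be sketched roughly as follows. This is a hypothetical sequence, to be attempted only with a verified backup; the volume names and sizes are from the example, and the final restore step is a placeholder, not a real command.

```shell
# DANGER: this destroys /dev/HDvg/usr. Back up /usr first and verify it.
umount /usr
lvremove /dev/HDvg/usr             # return usr's space to the VG
lvextend -L +5G /dev/HDvg/home     # grow home by 5 GB
mount -o remount,resize /home      # jfs can be grown online this way
lvcreate -n usr -L 15G HDvg        # recreate usr 5 GB smaller (size assumed)
mkfs.jfs -q /dev/HDvg/usr
mount /dev/HDvg/usr /usr           # then restore /usr from the backup
```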
What follows is a demonstration of what happens when you create a logical volume from a block device with bad blocks:
Code:
02:30:10 root@grail:/home/tom/temp# dd if=/dev/zero bs=512 count=102400 of=./disk && \
  for iter in {1..3} ; do \
    badblock=$(($RANDOM * 4 % 102400)) ; \
    echo badblock\#$iter\:$badblock ; \
    tr \\0 1 </dev/zero | dd seek=$badblock bs=512 count=1 conv=notrunc of=./disk ; \
  done && \
  losetup -f ./disk && \
  losetup -j ./disk && \
  badblocks -t0 /dev/loop4
102400+0 records in
102400+0 records out
52428800 bytes (52 MB) copied, 0.834813 s, 62.8 MB/s
badblock#1:27660
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00339028 s, 151 kB/s
badblock#2:13424
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00491653 s, 104 kB/s
badblock#3:39548
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.00220096 s, 233 kB/s
/dev/loop4: [0805]:386076694 (/home/tom/temp/disk)
Checking for bad blocks in read-only mode
6712
13830
19774
02:30:37 root@grail:/home/tom/temp# pvcreate /dev/loop4 && vgcreate testvg /dev/loop4 && lvcreate -n testlv -l 100%VG testvg && vgchange -a y testvg
Physical volume "/dev/loop4" successfully created
Volume group "testvg" successfully created
/dev/cdr: open failed: Read-only file system
Logical volume "testlv" created
1 logical volume(s) in volume group "testvg" now active
02:31:02 root@grail:/home/tom/temp# dmsetup table /dev/mapper/testvg-testlv
0 98304 linear 7:4 384
02:33:32 root@grail:/home/tom/temp# badblocks -t0 /dev/mapper/testvg-testlv
Checking for bad blocks in read-only mode
6520
13638
19582
02:34:59 root@grail:/home/tom/temp# badblocks -t0 /dev/mapper/testvg-testlv | while read block ; do echo $(($block + 192)) ; done
Checking for bad blocks in read-only mode
6712
13830
19774
Sorry for the mess of commands. The first command creates a file of zeroes and uses a for loop to pick 3 pseudo-random blocks and overwrite them with ones, so they fail badblocks' test pattern of zeroes. It then attaches the file to the first unused loopback device, shows which device was assigned, and checks for the bad blocks that were just created.
The second command makes a physical volume, a volume group, and a logical volume for the newly-mapped loopback device.
I ran the third command mostly to show the offset of the test device. I don't know exactly what's going on here, but I've tested it a bit before this, and badblocks' output on the mapped device is shifted by half of the device's offset (384/2 = 192 here).
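A plausible explanation for that factor of two (my inference, not stated in the thread): badblocks reports 1024-byte blocks by default, while the offset in the dmsetup table is in 512-byte sectors, so a 384-sector data offset shifts badblocks' numbers by 384/2 = 192. The arithmetic checks out against the transcript above:

```shell
# dmsetup reported "0 98304 linear 7:4 384": the LV starts 384 sectors
# (512 bytes each) into the loop device. badblocks counts 1024-byte
# blocks, so the apparent shift is 384 / 2 = 192 blocks.
offset_blocks=$(( 384 / 2 ))
for mapped in 6520 13638 19582 ; do
    echo $(( mapped + offset_blocks ))   # block number on the raw device
done
# prints 6712, 13830 and 19774 -- the raw-device badblocks output above
```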
Gegechris99 was right in saying that creating a partition is a good idea for your LVM data. Boot loaders, live CDs, and other OSes are all more likely to mistake whole-disk LVM data for unallocated space and lose it for you.
So, to answer your first question: the bad blocks are also present in the logical volumes you create out of the underlying device(s). I don't think JFS, XFS, or reiserfs do anything but fail when bad blocks are present, at least not without kludges. The ext[234] filesystems are capable of not using bad blocks on the underlying device. You would need a filesystem that avoids bad blocks to do an fsck and then restore from backup without losing data.
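For the ext case, the mechanism is the bad-block list that the e2fsprogs tools accept. A sketch, reusing the volume name from the question as a placeholder:

```shell
# Scan for bad blocks, record them, and build the filesystem around them:
badblocks -o /tmp/badblocks.txt /dev/HDvg/usr
mke2fs -l /tmp/badblocks.txt /dev/HDvg/usr
# On an existing ext2/3 filesystem, e2fsck can run the scan itself
# and add the results to the bad-block inode:
e2fsck -c /dev/HDvg/usr
```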
I know of no way of using lvm2 to avoid bad blocks on a device. From what I read (especially appendix A), the mapped blocks need to be contiguous when using the linear mapping target, and the other targets are irrelevant here. If contiguous blocks weren't required, and extents could be chained in sequence to define a volume, someone could do things such as distributing encrypted data within a block device, like an encrypted device-within-a-device with device mapper.
The advice I've read about drives with bad blocks is that you should not trust such a drive: it is likely to lose your data for you. Despite that, what I would do if I found bad blocks (back up your data!) and wanted to use lvm2 would be to move the affected data elsewhere and partition around the bad blocks, leaving them unallocated. LVM has no trouble making logical volumes span multiple devices.
As for your second question, I haven't decided for myself what is best. What you suggested -- juggling data across devices -- would work fine if you watch disk usage closely. But it sure would be a nice feature for a filesystem to support online shrinking. I'm an XFS user, mostly because it's what I know best, not necessarily because it's the best filesystem for all uses. XFS does not do online shrinking either, but I have not yet had the need to scavenge space from mostly static data devices to allocate to growing data.
Problem installing lilo in my root partition inside the LVM
To install Slackware 13 on LVM, I followed the instructions in the README_LVM.TXT written by Eric Hameleers and tried to install lilo in my root partition (inside the LVM):
Code:
boot = /dev/vg/root
I thought it had worked, but at reboot I got the infamous message:
Code:
No sig. in partition
Does that mean that I should have installed lilo either in the MBR (which I do not want to do) or at least outside the LVM?
Sorry to hijack this thread, but this is also an LVM question, isn't it?
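Going by the earlier answers in this thread (lilo cannot read logical volumes; the LVM tools live in the initrd), a lilo.conf along these lines should work. The device and volume names are assumptions for illustration:

```shell
# Sketch of /etc/lilo.conf for root-on-LVM:
boot = /dev/sda              # MBR, or a non-LVM partition such as
                             # /dev/sda1 -- just not inside the LVM
image = /boot/vmlinuz
  initrd = /boot/initrd.gz   # the initrd activates the volume group
  root = /dev/HDvg/root      # the root filesystem itself can be on LVM
  label = linux
  read-only
# remember to rerun lilo after editing
```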