[SOLVED] lvm and luks on raid1 in slackware64-14.1
I'm stuck in a hotel room for a few days while my truck is repaired, so I figured it was a good time to play with slackware, and that I would try to put slackware64-14.1 on lvm+luks running on md1, leaving md0 unencrypted just for /boot...
Do you still have to patch /boot/initrd-tree/init in order to get the raid1 devices properly assembled in time during boot, when using luks and lvm on top of raid1?
And if so, would this patch, from slackware 12.1 still work for 14.1?
I'm asking, because after my first attempt on slackware64-14.1, I have the same problem gargamel had in 12.1, that he got fixed with the Alien Bob's patch.
I haven't figured out how to copy and paste from one machine to another (I'm short on thumbdrives and ethernet cables in my hotel room,) so I don't want to type the patch file if it won't work. I started to type it, and got to the "2008-04-03 22:22:23" and realized it was obviously a date, and that maybe there would be updated dates in a patch file for 14.1. Figured I'd stop and ask before typing the rest and failing.
Hopefully I can get this booting without having to start over, since dd takes so long writing random data over both disks in the raid1.
Right now, I have booted off of a slackware64-14.1 usb installer, assembled md0 and md1, done the luksOpen, activated all the logical volumes, mounted them to /mnt, mounted md0 to /mnt/boot, and chrooted into the new installation. I shouldn't think the patch would be needed, because:
cat /boot/initrd-tree/init | grep mdadm
if [ -x /sbin/mdadm ]; then
# If /etc/mdadm.conf is present, udev should DTRT on its own;
if [ ! -r /etc/mdadm.conf ]; then
/sbin/mdadm -E -s >/etc/mdadm.conf
/sbin/mdadm -S -s
/sbin/mdadm -A -s
# partitions or mdadm arrays.
Seems like it should assemble the arrays with the /sbin/mdadm -A -s.
But shouldn't there be a space between the > and /etc/mdadm.conf ?
Last edited by slac-in-the-box; 03-13-2014 at 12:46 PM.
Reason: changed a would to a would not.
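The rescue sequence described above (boot the installer, assemble the arrays, unlock luks, activate lvm, mount, chroot) can be sketched as follows; the device and volume names are the ones used in this thread, and the exact component partitions are my assumption from the partition layout posted later:

```shell
# Hypothetical rescue sequence, run as root from the installer
mdadm -A /dev/md0 /dev/sda1 /dev/sdb1   # assemble the /boot array
mdadm -A /dev/md1 /dev/sda2 /dev/sdb2   # assemble the encrypted array
cryptsetup luksOpen /dev/md1 slackluks  # unlock luks, creating /dev/mapper/slackluks
vgchange -ay cryptvg                    # activate the logical volumes
mount /dev/cryptvg/root /mnt            # mount the root lv
mount /dev/md0 /mnt/boot                # mount the unencrypted /boot
chroot /mnt                             # work inside the installed system
```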
Another thing that confuses me with this setup is configuring lilo.
For raid, I thought lilo liked the "-x mbr-only" flag or a line with 'raid-extra-boot=mbr-only'. But if I do this, I get "Fatal: Not a RAID install." To get lilo to think it is a raid install, I have to use "root=/dev/md#" in the kernel image parameters. But for a CRYPT setup, I have to use "root=/dev/cryptvg/root." How can I get lilo to know it is a RAID install as well as a LVM install?
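If I understand lilo correctly, it decides whether something is a RAID install from the boot= line rather than from the kernel's root= parameter, so a fragment like the following (hypothetical and untested, names from this thread) should satisfy both constraints at once:

```
# /etc/lilo.conf fragment (hypothetical sketch)
boot = /dev/md0              # an md device here is what makes lilo treat this as a RAID install
raid-extra-boot = mbr-only   # write the boot record to the MBR of each component disk
image = /boot/vmlinuz
  initrd = /boot/initrd.gz   # the initrd assembles raid, unlocks luks, and scans lvm
  root = /dev/cryptvg/root   # root lives on the LVM inside the LUKS volume
  label = linux
  read-only
```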
Sorry. I probably should have posted this in the Installation forum, but when I searched that forum for lvm raid1 and crypt, there weren't any relevant threads, and when I searched the main forum, I found the 12.1 thread where gargamel already went through this. So I think the real question I'm having here is about /boot/initrd-tree/init for slackware-14.1, and whether it is assembling raid properly for anyone else.
I just learned that there doesn't need to be a space after the ">". I had just been putting a space there for so long that I never tried without one... but I just tried, and it still works.
I wonder if gargamel got his setup going with 14.1.
One of the relevant errors I get when trying to boot is:
LUKS device '/dev/md1' unavailable for unlocking!
This is what makes me think it is related to initrd and raid.
*sigh* I need a bookmark for this; I've whined about it enough.
The default /etc/mdadm.conf that is installed on a new system is readable. Unfortunately, it is merely filled with comments. So the bit of code...
/sbin/mdadm -E -s >/etc/mdadm.conf
/sbin/mdadm -S -s
/sbin/mdadm -A -s
...is never actually run on your system. Since there's nothing useful in /etc/mdadm.conf, when udevd assembles the arrays it has no idea what to name them. It chooses names starting with (I think) /dev/md127 and increments the number as it finds additional arrays.
So either remove /etc/mdadm.conf from your initrd or ensure that it has the information that allows you to correctly create your raid arrays.
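One hedged way to do that removal, assuming Slackware's standard layout with the unpacked tree under /boot/initrd-tree:

```shell
# Remove the comment-only mdadm.conf from the initrd tree,
# then rebuild the image and re-run lilo (paths assume a stock Slackware setup)
rm -f /boot/initrd-tree/etc/mdadm.conf
mkinitrd          # without -c, mkinitrd rebuilds initrd.gz from the existing tree
lilo              # reinstall the boot loader so the new initrd is used
```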
Thank you Richard. Removing /etc/mdadm.conf from the initrd makes sense to me. My only problem now is that in the interim before you responded, I decided that maybe my posts didn't provide enough info, so I started over from scratch, documenting every step along the way, and got as far as writing random data over the array with dd before running cryptsetup, and... I had to check out of my hotel before it finished, lol... I'll post back from the new system once it's done. Thanks again.
1. Booted off of slackware64-14.1 usb installation thumb drive.
2. cleared partition table:
root@slackware:/# dd if=/dev/zero of=/dev/sda bs=512 count=1024
1024+0 records in
1024+0 records out
524288 bytes (512.0KB) copied, 0.013450 seconds, 37.2MB/s
root@slackware:/# dd if=/dev/zero of=/dev/sdb bs=512 count=1024
1024+0 records in
1024+0 records out
524288 bytes (512.0KB) copied, 0.307764 seconds, 1.6 MB/s
3. Partitioned /dev/sda
root@slackware:/# fdisk /dev/sda
Device Boot Start End Blocks ID System
/dev/sda1 2048 526335 262114 fd Linux raid autodetect
/dev/sda2 526336 976773167 488123416 fd Linux raid autodetect
5. Rebooted off of the usb installation thumb drive, because of fdisk warnings that the old partition table was still in use and that I should reboot before creating a file system. Don't know if necessary, but I always do this if I get that warning.
6. made sure there was nothing left of prior raids, since I had same partition layout last time:
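A sketch of that cleanup, assuming the same devices as in step 3 (destructive; double-check device names before running anything like this):

```shell
# Stop any arrays the installer may have auto-assembled from leftover metadata
mdadm --stop /dev/md0 /dev/md1
# Wipe the old raid superblocks from each component partition
mdadm --zero-superblock /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2
```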
8. Fill the array I plan to encrypt with random data:
root@slackware:/# dd if=/dev/urandom of=/dev/md1
(time to take a break, walk the dog, and come back tomorrow)
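dd's default 512-byte block size makes this especially slow; a larger block size (my suggestion, not from the original post) speeds it up considerably, and SIGUSR1 lets you check on progress:

```shell
# Overwrite the array with random data using a larger block size
dd if=/dev/urandom of=/dev/md1 bs=1M
# From a second console, SIGUSR1 asks the running dd to print its progress so far
kill -USR1 $(pgrep -x dd)
```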
9. Setup encryption on raid1 array (/dev/md1):
root@slackware:/# cryptsetup -s 256 -y luksFormat /dev/md1
This will overwrite data on /dev/md1 irrevocably.
Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase: secretkeyichose
Verify passphrase: secretkeyichose
10. Decrypt the array and map it to an unencrypted block device (I named mine slackluks):
root@slackware:/# cryptsetup luksOpen /dev/md1 slackluks
Enter passphrase for /dev/md1: secretkeyichose
11. Create a physical volume out of the unencrypted block device:
root@slackware:/# pvcreate /dev/mapper/slackluks
Physical volume "/dev/mapper/slackluks" successfully created
12. Create a volume group on the physical volume:
root@slackware:/# vgcreate cryptvg /dev/mapper/slackluks
Volume group "cryptvg" successfully created
13. Create logical volumes suitable for my installation:
root@slackware:/# lvcreate -L 1G -n swap cryptvg
Logical volume "swap" created
root@slackware:/# lvcreate -L 128G -n root cryptvg
Logical volume "root" created
root@slackware:/# lvcreate -l 100%FREE -n home cryptvg
Logical volume "home" created
14. Create Device Nodes:
root@slackware:/# vgscan --mknodes
Reading all physical volumes. This may take a while...
Found volume group "cryptvg" using metadata type lvm2
15. Activate Volumes:
root@slackware:/# vgchange -ay
3 logical volume(s) in group "cryptvg" now active
16. Create some swapspace:
root@slackware:/# mkswap /dev/cryptvg/swap
mkswap: /dev/cryptvg/swap: warning: don't erase bootbits sectors on whole disk. Use -f to force.
Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=blahblahblah-blah-blah-blah-blah-blahblah
17. Run setup:
17A Choose ADDSWAP and then select /dev/cryptvg/swap
17B Select and format /dev/cryptvg/root to be root (/) Linux partition
17C Select and format other volumes and give them mount points.
17D Select and format /dev/md0 and make /boot its mount point.
17E Continue and install the packages
17F I skipped making a USB boot stick
Last edited by slac-in-the-box; 03-21-2014 at 01:36 AM.
Reason: wasn't really done yet... posted by accident... fixed vgname mismatches
Posting from my working installation of slackware64-14.1 on raid1, with luks and lvm!
I had several issues thwarting my attempts. One was physical: my son gave up on this machine because of erratic booting and returned it to me. He had changed drives many times and lost some screws, such that drive sdb was seated loosely, and vibrations could change whether or not it got detected.
On my first failed attempt, I had been using ext2 on md0 (/boot), and xfs on my logical volumes. Along with the errors about not being able to open luks device because it couldn't find md1, there were also lots of xfs errors, so I switched to ext4.
On my second failure, when creating the raid1 arrays, I received a warning that I shouldn't use the array as a boot disk, and that if I wanted to boot from it I should add --metadata=0.90... So I made md0 with 0.90 metadata and created md1 with its default 1.x metadata... However, when I changed my partition types to 'da', it panicked on reboot--and back in the installer, the partition tables were no longer visible in fdisk -- like they got wiped out...
I noticed that advice to use 'da' as partition type was recommended for using 1.x metadata.
So, now, on this third attempt--which is working--I toggled the boot flag on /dev/sd[ab]1, and I went back to partition type 'fd,' and created both arrays with the --metadata=0.90 tag. I used ext4. I removed mdadm.conf from /boot/initrd.gz, directions for which I found here. I added the initrd to lilo, and reran lilo. Everything works great. Thanks.
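For anyone following along, the initrd for this layout would be built with something like the following. The kernel version is the stock Slackware 14.1 kernel; treat the exact invocation as a sketch and check mkinitrd(8) for your system:

```shell
# Sketch of a Slackware mkinitrd invocation for luks-on-raid1 with lvm:
# -C names the luks device to unlock at boot, -L adds LVM support,
# -R adds RAID (mdadm) support, -r names the root device
mkinitrd -c -k 3.10.17 -m ext4 -f ext4 \
         -r /dev/cryptvg/root -C /dev/md1 -L -R
lilo    # re-run lilo after pointing it at the new /boot/initrd.gz
```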
I would like to learn more about raid metadata. I wonder if it would still work if I used partition type 'fd' and --metadata=0.90 for md0 (/dev/sd[ab]1), but partition type 'da' and the default 1.x metadata for md1 (/dev/sd[ab]2)... I also wonder whether the note that partition type 'fd' can interfere with array recovery from cdroms applies only to 1.x metadata, or to 0.90 as well...
But overall, I am happy that my data is kept private with encryption, and redundant with raid1. Thanks again Richard Cranium for your assistance. Cheers.
I don't have LUKS on my system, but I do use LVM on top of software RAID. The motherboard on the machine where I'm typing this is an ASUS M4N98TD EVO. I also use GRUB2 to boot, which may be the big difference.
dorkbutt@dorkbutt:~$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdd3 sde3
142716800 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sda3 sdc3
974999360 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdd2 sde2 sdc2(S)
523968 blocks super 1.2 [2/2] [UU]
unused devices: <none>
dorkbutt@dorkbutt:~$ cd /boot
dorkbutt@dorkbutt:/boot$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/md0 496M 43M 428M 10% /boot
The mdadm.conf issue is kinda weird. It pretty much boils down to that you should choose one of two setups:
Ensure that the "real" /etc/mdadm.conf has the correct information to assemble all of your RAID arrays prior to running mkinitrd (which copies /etc/mdadm.conf into /boot/initrd-tree/etc/mdadm.conf).
Ensure that you don't have an /etc/mdadm.conf file at all so the initrd code queries and assembles the arrays at boot.
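A minimal sketch of the first option, assuming the arrays are already assembled with the names you want:

```shell
# Capture the real array definitions, replacing the comment-only default
mdadm --examine --scan > /etc/mdadm.conf
# Rebuild the initrd so the file is copied into /boot/initrd-tree/etc/mdadm.conf
mkinitrd
lilo
```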
Slackware, unfortunately, provides an /etc/mdadm.conf that contains nothing but comments. That's sufficient for the initrd code to skip the part where it runs the "query and assemble all arrays for you" but not sufficient for the udevd code to give your arrays the correct names. That actually works fine for me since LVM can use the meta-data that it writes into the superblocks to assemble itself even with "bad" names for the underlying block devices.
LUKS, as one would hope from an encrypted block device, is a lot more picky. If udevd has no information about the RAID arrays that it assembles for you, then it picks names like /dev/md127 and higher. That probably doesn't match what you picked when you created the array and LUKS fails because it can't find the correct device.
So, I'm glad that you're up and running and sad that I gave you bad advice that led you down some wild goose chases. There's only so many minutes in all of our lives and I don't like wasting anyone else's with bad advice.
I don't think you gave me bad advice. I come to this site to learn, because it is more affordable and more efficient than college, and quicker than waiting on hold for phone tech support from any of the commercial operating systems... (back in my xserve days, I would wait hours for applecare, just to be told it was an enterprise issue, and that I had to have enterprise applecare at $500 per case, or $10000 a year--whoa, and they were just putting a gui wrapper around the same open source software running in slackware (like apache, sendmail, etc.)) So I have learned everything linux here at LQ, hardly having to make any posts, since so much has already been solved...
Your advice led me to learn about the differences between metadata versions... since I am only using a level-1 array with two components, version 0.90 is just fine, even though it has not been the default metadata since 2009. Its limitations are that it can only have 28 array components; the size must be less than 4TB; and disks cannot be moved between big- and little-endian machines... with the 1.x metadata, you can have 384+ components! whoa!
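A quick way to see which metadata version an existing array and its members carry (array and partition names are the ones from this thread):

```shell
# Ask the assembled array for its superblock version
mdadm --detail /dev/md0 | grep -i 'version'
# Or examine a component partition directly
mdadm --examine /dev/sda1 | grep -i 'version'
```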
I also learned that
The boot-loader LILO also can only boot from the version 0.90 superblock arrays. Alternative boot loaders, GRUB specifically, probably don't have this particular limitation.
(Maybe this will get Pat to include grub with the next slackware release--could provide an option to choose lilo or grub)