lvm and luks on raid1 in slackware64-14.1
I'm stuck in a hotel room for a few days while my truck is repaired, so I figured it was a good time to play with Slackware, and that I would try to put slackware64-14.1 on lvm+luks running on md1, leaving md0 unencrypted just for /boot...
Do you still have to patch /boot/initrd-tree/init in order to get the raid1 devices properly assembled in time during boot, when using luks and lvm on top of raid1? And if so, would the patch from slackware 12.1 still work for 14.1? I'm asking because, after my first attempt on slackware64-14.1, I have the same problem gargamel had in 12.1, which he got fixed with Alien Bob's patch. I haven't figured out how to copy and paste from one machine to another (I'm short on thumbdrives and ethernet cables in my hotel room), so I don't want to type the patch file in by hand if it won't work. I started to type it, got to the "2008-04-03 22:22:23", and realized it was obviously a date, and that a patch file for 14.1 would probably have updated dates. Figured I'd stop and ask before typing the rest and failing. Hopefully I can get this booting without having to start over, since dd takes so long writing random data over both disks in the raid1.
Right now, I have booted off of a slackware64-14.1 usb installer, assembled md0 and md1, done the luksOpen, activated all the logical volumes, mounted them to /mnt, mounted md0 to /mnt/boot, and chrooted into the new installation. I shouldn't think the patch would be needed, because:
Code:
cat /boot/initrd-tree/init | grep mdadm

But shouldn't there be a space between the > and /etc/mdadm.conf? Hmm.
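For anyone following along without the file in front of them, the mdadm-related fragment of /boot/initrd-tree/init looks roughly like this -- paraphrased, not a verbatim copy, so check your own initrd-tree: Code:
if [ -x /sbin/mdadm ]; then
  # Generate a config only if one isn't already readable; a comments-only
  # /etc/mdadm.conf shipped in the initrd therefore skips the scan:
  if [ ! -r /etc/mdadm.conf ]; then
    /sbin/mdadm -E -s >/etc/mdadm.conf
  fi
  # Assemble every array listed in the config:
  /sbin/mdadm -A -s
fi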
Another thing that confuses me with this setup is configuring lilo.
For raid, I thought lilo liked the "-x mbr-only" flag or a line with 'raid-extra-boot=mbr-only'. But if I do this, I get "Fatal: Not a RAID install." To get lilo to think it is a raid install, I have to use "root=/dev/md#" in the kernel image parameters. But for a crypt setup, I have to use "root=/dev/cryptvg/root". How can I get lilo to treat this as both a RAID install and an LVM install?
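For reference, here is a lilo.conf sketch for this kind of layout -- hypothetical, assembled from the device names in this thread rather than a tested config. The idea is that the "Fatal: Not a RAID install" message goes away when 'boot=' points at the md device, while 'root=' can name the LVM volume independently: Code:
# /etc/lilo.conf (sketch -- kernel/initrd paths are assumptions)
boot = /dev/md0              # boot from the RAID device itself...
raid-extra-boot = mbr-only   # ...writing the MBR of each member disk
lba32

image = /boot/vmlinuz
  initrd = /boot/initrd.gz   # assembles RAID, opens LUKS, activates LVM
  root = /dev/cryptvg/root   # the real root inside the encrypted PV
  label = slackware
  read-only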
Sorry, I probably should have posted this in the Installation forum, but when I searched that forum for lvm, raid1, and crypt, there weren't any relevant threads, and when I searched the main forum, I found the 12.1 thread where gargamel already went through this. So I think the real question I'm having here is about /boot/initrd-tree/init in slackware-14.1, and whether it is assembling raid properly for anyone else.
I just learned that there doesn't need to be a space after the ">". I had been putting a space there for so long that I never tried without one... but I just tried, and it still works. I wonder if gargamel got his setup going with 14.1. One of the relevant errors I get when trying to boot is: Code:
LUKS device '/dev/md1' unavailable for unlocking!
Delete /etc/mdadm.conf and rebuild your initrd.
*sigh* I need a bookmark for this; I've whined about it enough. The default /etc/mdadm.conf that is installed on a new system is readable. Unfortunately, it is merely filled with comments. So the bit of code... Code:
/sbin/mdadm -E -s >/etc/mdadm.conf

...never gets a chance to help you, because the init script only generates that config when /etc/mdadm.conf is missing. So either remove /etc/mdadm.conf from your initrd, or ensure that it has the information that allows you to correctly create your raid arrays.
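On a stock 14.1 layout, the removal and rebuild might look like the following -- a sketch assuming the initrd was built from the default /boot/initrd-tree (Slackware's mkinitrd, run without -c, repacks the existing tree): Code:
root@slackware:/# rm /boot/initrd-tree/etc/mdadm.conf
root@slackware:/# mkinitrd
root@slackware:/# lilo

Rerunning lilo afterwards matters, since lilo records the block locations of the initrd and must be told when it changes.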
Thank you Richard. Removing /etc/mdadm.conf from the initrd makes sense to me. Now my only problem is that, in the interim before you responded, I decided that maybe my posts didn't provide enough info, so I started over from scratch, documenting every step along the way. I got as far as writing random data with dd over the array before running cryptsetup, and I had to check out of my hotel before it finished, lol... I'll post back from the new system once it's done. Thanks again.
1. Booted off of slackware64-14.1 usb installation thumb drive.
2. Cleared the partition table: Code:
root@slackware:/# dd if=/dev/zero of=/dev/sda bs=512 count=1024
3. Partitioned /dev/sda (RAID partitions typed 'fd'): Code:
root@slackware:/# fdisk /dev/sda
4. Copied the partition table to /dev/sdb: Code:
root@slackware:/# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
6. Made sure there was nothing left of prior raids, since I had the same partition layout last time: Code:
root@slackware:/# mdadm --manage --stop /dev/md0
7. Created the raid1 arrays: Code:
root@slackware:/# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1 --metadata=0.90
8. Wrote random data over the array to be encrypted: Code:
root@slackware:/# dd if=/dev/urandom of=/dev/md1
9. Set up encryption on the raid1 array (/dev/md1): Code:
root@slackware:/# cryptsetup -s 256 -y luksFormat /dev/md1
10. Opened the encrypted device: Code:
root@slackware:/# cryptsetup luksOpen /dev/md1 slackluks
11. Created the LVM physical volume: Code:
root@slackware:/# pvcreate /dev/mapper/slackluks
12. Created the volume group: Code:
root@slackware:/# vgcreate cryptvg /dev/mapper/slackluks
13. Created the logical volumes: Code:
root@slackware:/# lvcreate -L 1G -n swap cryptvg
14. Made the device nodes: Code:
root@slackware:/# vgscan --mknodes
15. Activated the volume group: Code:
root@slackware:/# vgchange -ay
16. Made swap: Code:
root@slackware:/# mkswap /dev/cryptvg/swap
17A. Ran the setup program: Code:
root@slackware:/# setup
17B. Select and format /dev/cryptvg/root to be the root (/) Linux partition.
17C. Select and format other volumes and give them mount points.
17D. Select and format /dev/md0 and make /boot its mount point.
17E. Continue and install the packages.
17F. I skipped making a USB boot stick.
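For completeness, the initrd for this raid1+luks+lvm stack would be built with something like the line below. The flags are as I understand them from Slackware's mkinitrd documentation (-R for RAID, -C for the LUKS device, -L for LVM, -r for the real root); the 3.10.17 kernel version and the ext4 module are assumptions, so verify against your own kernel and mkinitrd's help output: Code:
root@slackware:/# mkinitrd -c -k 3.10.17 -m ext4 -f ext4 -R -C /dev/md1 -L -r /dev/cryptvg/root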
RE step #3: I think that you should tag software RAID partitions with "da" (Non-FS data) instead of "fd". See https://raid.wiki.kernel.org/index.php/Partition_Types
lvm and luks on raid1 is working!!!
Thanks Mr. Cranium.
Posting from my working installation of slackware64-14.1 on raid1, with luks and lvm! I had several issues thwarting my attempts. One was physical: my son gave up on this machine because of erratic booting and returned it to me. He had changed drives many times and lost some screws, such that drive sdb was seated loosely, and vibrations could change whether or not it got detected.

On my first failed attempt, I had been using ext2 on md0 (/boot) and xfs on my logical volumes. Along with the errors about not being able to open the luks device because it couldn't find md1, there were also lots of xfs errors, so I switched to ext4.

On my second failure, when creating the raid1 array, I received a warning that I shouldn't use it as a boot disk, and that if I wanted to boot from it I should add --metadata=0.90... So I made md0 with 0.90 metadata and created md1 with its default 1.x metadata. However, when I changed my partition types to 'da', it panicked on reboot, and when I got back into the installer, the partition tables were no longer visible in fdisk -- like they got wiped out. I noticed that the advice to use 'da' as the partition type was given in the context of 1.x metadata.

So, on this third attempt -- which is working -- I toggled the boot flag on /dev/sd[ab]1, went back to partition type 'fd', and created both arrays with the --metadata=0.90 flag. I used ext4. I removed mdadm.conf from /boot/initrd.gz, directions for which I found here. I added the initrd to lilo and reran lilo. Everything works great. Thanks.

I would like to learn more about raid metadata. I wonder: if I used partition type 'fd' on the partitions for /dev/sd[ab]1 with --metadata=0.90 on md0, but used partition type 'da' for /dev/sd[ab]2 and left the metadata at the default 1.x, would it still work? I also wonder whether the note that partition type 'fd' can interfere with array recovery from cdroms only applies to 1.x metadata, or whether it is valid for 0.90 as well...

But overall, I am happy that my data is kept private with encryption, and redundant with raid1. Thanks again Richard Cranium for your assistance. Cheers.
Hmm.
I don't have LUKS on my system, but I do use LVM on top of software RAID. The motherboard on the machine where I'm typing this is an ASUS M4N98TD EVO. I also use GRUB2 to boot, which may be the big difference. Code:
dorkbutt@dorkbutt:~$ cat /proc/mdstat
Slackware, unfortunately, provides an /etc/mdadm.conf that contains nothing but comments. That's sufficient for the initrd code to skip the part where it runs the "query and assemble all arrays for you" step, but not sufficient for the udevd code to give your arrays the correct names. That actually works fine for me, since LVM can use the metadata that it writes into the superblocks to assemble itself even with "bad" names for the underlying block devices. LUKS, as one would hope from an encrypted block device, is a lot more picky.

If udevd has no information about the RAID arrays that it assembles for you, then it picks names like /dev/md127 and higher. That probably doesn't match what you picked when you created the array, and LUKS fails because it can't find the correct device.

So, I'm glad that you're up and running, and sad that I gave you bad advice that led you down some wild goose chases. There are only so many minutes in all of our lives, and I don't like wasting anyone else's with bad advice.
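An alternative to deleting the file: give the initrd's /etc/mdadm.conf real ARRAY lines so the arrays are assembled under their proper names. The UUIDs below are placeholders for illustration only; use the ones that /sbin/mdadm -E -s prints on your own system: Code:
# /boot/initrd-tree/etc/mdadm.conf
# (UUIDs are made up -- substitute the output of: /sbin/mdadm -E -s)
ARRAY /dev/md0 UUID=01234567:89abcdef:01234567:89abcdef
ARRAY /dev/md1 UUID=fedcba98:76543210:fedcba98:76543210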
I don't think you gave me bad advice. I come to this site to learn, because it is more affordable and more efficient than college, and quicker than waiting on hold for phone tech support from any of the commercial operating systems... (Back in my xserve days, I would wait hours for applecare, just to be told it was an enterprise issue and that I had to have enterprise applecare at $500 per case or $10,000 a year -- whoa -- and they were just putting a gui wrapper around the same open source software running in slackware, like apache, sendmail, etc.) So, I have learned everything linux here at LQ, while hardly having to make any posts; so much has already been solved...
Your advice led me to learn about the differences between metadata versions... Since I am only using a level 1 array with two components, version 0.90 is just fine, even though it has not been the default metadata since 2009. Its limitations are that it can only have 28 array components; the size must be less than 4TB; and disks cannot be moved between big- and little-endian machines... With the 1.x metadata, you can have 384+ components! whoa! I also learned that Quote:
Found this info at https://raid.wiki.kernel.org/index.p...rblock_formats. So I think your advice was correct for your 1.2 metadata and grub... Thanks for your many contributions to LQ!
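For anyone curious which metadata version an existing array carries, mdadm reports it directly (an illustrative transcript; the Version line is the part to look for): Code:
root@slackware:/# mdadm --detail /dev/md0 | grep Version
        Version : 0.90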