LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   lvm and luks on raid1 in slackware64-14.1 (https://www.linuxquestions.org/questions/slackware-14/lvm-and-luks-on-raid1-in-slackware64-14-1-a-4175498072/)

slac-in-the-box 03-13-2014 12:24 PM

lvm and luks on raid1 in slackware64-14.1
 
I'm stuck in a hotel room for a few days while my truck is repaired, so I figured it was a good time to play with Slackware, and that I would try to put slackware64-14.1 on LVM+LUKS running on md1, leaving md0 unencrypted just for /boot...

Do you still have to patch /boot/initrd-tree/init in order to get the raid1 devices properly assembled in time during boot, when using luks and lvm on top of raid1?

And if so, would this patch, from slackware 12.1 still work for 14.1?

I'm asking because, after my first attempt on slackware64-14.1, I have the same problem gargamel had in 12.1, which he fixed with Alien Bob's patch.

I haven't figured out how to copy and paste from one machine to another (I'm short on thumbdrives and ethernet cables in my hotel room), so I don't want to type the patch file in by hand if it won't work. I started typing it, got to the "2008-04-03 22:22:23", and realized it was obviously a date, and that a patch file for 14.1 would probably have updated dates. Figured I'd stop and ask before typing the rest and failing.

Hopefully I can get this booting without having to start over, since dd takes so long writing random data over both disks in the raid1.

slac-in-the-box 03-13-2014 12:45 PM

Right now, I have booted off of a slackware64-14.1 USB installer, assembled md0 and md1, done the luksOpen, activated all the logical volumes, mounted them to /mnt, mounted md0 to /mnt/boot, and chrooted into the new installation. I wouldn't think the patch is needed, because:

Code:

cat /boot/initrd-tree/init | grep mdadm

if [ -x /sbin/mdadm ]; then
  # If /etc/mdadm.conf is present, udev should DTRT on its own;
  if [ ! -r /etc/mdadm.conf ]; then
    /sbin/mdadm -E -s >/etc/mdadm.conf
    /sbin/mdadm -S -s
    /sbin/mdadm -A -s
  # partitions or mdadm arrays.

Seems like it should assemble the arrays with the /sbin/mdadm -A -s.

But shouldn't there be a space between the > and /etc/mdadm.conf ?

Hmm.

slac-in-the-box 03-13-2014 01:13 PM

Another thing that confuses me with this setup is configuring lilo.

For raid, I thought lilo liked the "-x mbr-only" flag or a line with 'raid-extra-boot=mbr-only'. But if I do this, I get "Fatal: Not a RAID install." To get lilo to think it is a raid install, I have to use "root=/dev/md#" in the kernel image parameters. But for a CRYPT setup, I have to use "root=/dev/cryptvg/root". How can I get lilo to know it is a RAID install as well as an LVM install?
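
For reference, here is roughly the lilo.conf I have been experimenting with (device names and kernel version are from my setup, so treat it as a sketch of the problem rather than a known-good config):

Code:

# boot loader written to the raid1 /boot array, mirrored to each disk's MBR
boot = /dev/md0
raid-extra-boot = mbr-only

image = /boot/vmlinuz-generic-3.10.17
  initrd = /boot/initrd.gz
  # root has to be the LVM volume for the system to come up,
  # but with it set this way lilo says "Fatal: Not a RAID install"
  root = /dev/cryptvg/root
  label = slackware
  read-only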

slac-in-the-box 03-13-2014 05:29 PM

Sorry. I probably should have posted this in the Installation forum, but when I searched that forum for lvm raid1 and crypt, there weren't any relevant threads, and when I searched the main forum, I found the 12.1 thread where gargamel already went through this. So I think the real question I'm having here is about /boot/initrd-tree/init for slackware-14.1, and whether it is assembling raid properly for anyone else.

I just learned that there doesn't need to be a space after the ">". I had just been putting a space there for so long that I never tried without one... but I just tried it, and it still works.
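
In other words, these two are identical as far as the shell is concerned:

Code:

/sbin/mdadm -E -s >/etc/mdadm.conf
/sbin/mdadm -E -s > /etc/mdadm.conf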

I wonder if gargamel got his setup going with 14.1.

One of the relevant errors I get when trying to boot is:

Code:

LUKS device '/dev/md1' unavailable for unlocking!

This is what makes me think it is related to initrd and raid.

Richard Cranium 03-13-2014 08:20 PM

Delete /etc/mdadm.conf and rebuild your initrd.

*sigh* I need a bookmark for this; I've whined about it enough.

The default /etc/mdadm.conf that is installed on a new system is readable (so the init script's [ ! -r /etc/mdadm.conf ] test fails). Unfortunately, it is merely filled with comments. So this bit of code...
Code:

    /sbin/mdadm -E -s >/etc/mdadm.conf
    /sbin/mdadm -S -s
    /sbin/mdadm -A -s

...is never actually run on your system. Since there's nothing useful in /etc/mdadm.conf, when udevd assembles the arrays it has no idea what to name them. It chooses names starting with (I think) /dev/md127 and increments the number as it finds additional arrays.

So either remove /etc/mdadm.conf from your initrd, or ensure that it has the information that allows your raid arrays to be assembled correctly.
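
Something along these lines, run from the installed system or your chroot (the mkinitrd -F form assumes your usual options are already in /etc/mkinitrd.conf; otherwise re-run mkinitrd with the same flags you used to build the initrd in the first place):

Code:

# option 1: give /etc/mdadm.conf real contents before building the initrd
mdadm -E -s > /etc/mdadm.conf

# option 2: remove the comments-only file so the init script's own
# "query and assemble" code actually runs
rm /etc/mdadm.conf

# either way, rebuild the initrd and re-run lilo afterwards
mkinitrd -F -o /boot/initrd.gz
lilo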

slac-in-the-box 03-15-2014 01:16 AM

Thank you, Richard. Removing /etc/mdadm.conf from the initrd makes sense to me. My only problem now is that in the interim, before you responded, I decided my posts might not have provided enough info, so I started over from scratch, documenting every step along the way. I got as far as writing random data over the array with dd before running cryptsetup, and I had to check out of my hotel before it finished, lol... I'll post back from the new system once it's done. Thanks again.

slac-in-the-box 03-20-2014 11:13 PM

1. Booted off of slackware64-14.1 usb installation thumb drive.
2. Cleared the partition tables:
Code:

root@slackware:/#  dd if=/dev/zero of=/dev/sda bs=512 count=1024
1024+0 records in
1024+0 records out
524288 bytes (512.0KB) copied, 0.013450 seconds, 37.2MB/s
root@slackware:/#  dd if=/dev/zero of=/dev/sdb bs=512 count=1024
1024+0 records in
1024+0 records out
524288 bytes (512.0KB) copied, 0.307764 seconds, 1.6 MB/s

3. Partitioned /dev/sda
Code:

root@slackware:/#  fdisk /dev/sda
o
n
p
1
2048
+256M
t
1
fd
n
p
2
526336
976773167
t
2
fd
p
  Device Boot    Start          End    Blocks  ID    System
/dev/sda1          2048      526335    262144  fd    Linux raid autodetect
/dev/sda2        526336    976773167  488123416  fd    Linux raid autodetect
w

4. Cloned partition table to /dev/sdb:
Code:

root@slackware:/#  sfdisk -d /dev/sda | sfdisk --force /dev/sdb
5. Rebooted off of the USB installation thumb drive, because of fdisk warnings that the old partition table was still in use, and to reboot before installing a file system. I don't know if this is necessary, but I always do it when I get that warning.

6. Made sure there was nothing left of the prior raids, since I had the same partition layout last time:
Code:

root@slackware:/# mdadm --manage --stop /dev/md0
root@slackware:/# mdadm --manage --stop /dev/md1
root@slackware:/# mdadm --misc --zero-superblock /dev/sda1
root@slackware:/# mdadm --misc --zero-superblock /dev/sdb1
root@slackware:/# mdadm --misc --zero-superblock /dev/sda2
root@slackware:/# mdadm --misc --zero-superblock /dev/sdb2

7. Create new raid1 arrays:
Code:

root@slackware:/#  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sd[ab]1 --metadata=0.90
root@slackware:/#  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sd[ab]2 --metadata=0.90

8. Fill the array I plan to encrypt with random data:
Code:

root@slackware:/#  dd if=/dev/urandom of=/dev/md1
(time to take a break, walk the dog, and come back tomorrow)
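
Since this step takes forever, one way to check on it from another virtual console (assuming pidof is available in the installer environment, and a kernel with I/O accounting; otherwise find the PID with ps):

Code:

# wchar in /proc/<pid>/io is the number of bytes dd has written so far
grep ^wchar /proc/$(pidof dd)/io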

9. Setup encryption on raid1 array (/dev/md1):
Code:

root@slackware:/#  cryptsetup -s 256 -y luksFormat /dev/md1
WARNING!
This will overwrite data on /dev/md1 irrevocably.

Are you sure? (Type uppercase yes):  YES
Enter LUKS passphrase: secretkeyichose
Verify passphrase: secretkeyichose

10. Decrypt array and map to unencrypted block device (I named mine slackluks):
Code:

root@slackware:/#  cryptsetup luksOpen /dev/md1 slackluks
Enter passphrase for /dev/md1:  secretkeyichose

11. Create a physical volume out of the unencrypted block device:
Code:

root@slackware:/# pvcreate /dev/mapper/slackluks
  Physical volume "/dev/mapper/slackluks" successfully created

12. Create a volume group on the physical volume:
Code:

root@slackware:/# vgcreate cryptvg /dev/mapper/slackluks
  Volume group "cryptvg" successfully created

13. Create logical volumes suitable for my installation:
Code:

root@slackware:/# lvcreate -L 1G -n swap cryptvg
  Logical volume "swap" created
root@slackware:/# lvcreate -L 128G -n root cryptvg
  Logical volume "root" created
root@slackware:/# lvcreate -l 100%FREE -n home cryptvg
  Logical volume "home" created

14. Create Device Nodes:
Code:

root@slackware:/# vgscan --mknodes
  Reading all physical volumes. This may take a while...
  Found volume group "cryptvg" using metadata type lvm2

15. Activate Volumes:
Code:

root@slackware:/# vgchange -ay
  3 logical volume(s) in volume group "cryptvg" now active

16. Create some swapspace:
Code:

root@slackware:/# mkswap /dev/cryptvg/swap
mkswap: /dev/cryptvg/swap: warning: don't erase bootbits sectors
        on whole disk. Use -f to force.
Setting up swapspace version 1, size = 1048572 KiB
no label, UUID=blahblahblah-blah-blah-blah-blah-blahblah

17. Run setup:
Code:

root@slackware:/# setup
17A Choose ADDSWAP and then select /dev/cryptvg/swap
17B Select and format /dev/cryptvg/root to be root (/) Linux partition
17C Select and format other volumes and give them mount points.
17D Select and format /dev/md0 and make /boot its mount point.
17E Continue and install the packages
17F I skipped making a USB boot stick

Richard Cranium 03-21-2014 12:49 AM

RE step #3: I think that you should tag software RAID partitions with "da" (Non-FS data) instead of "fd". See https://raid.wiki.kernel.org/index.php/Partition_Types

slac-in-the-box 03-24-2014 01:17 AM

lvm and luks on raid1 is working!!!
 
Thanks Mr. Cranium.

Posting from my working installation of slackware64-14.1 on raid1, with luks and lvm!

I had several issues thwarting my attempts. One was physical: my son gave up on this machine because of erratic booting and returned it to me. He had changed drives many times and lost some screws, so drive sdb was seated loosely, and vibration could change whether or not it got detected.

On my first failed attempt, I had been using ext2 on md0 (/boot) and xfs on my logical volumes. Along with the errors about not being able to open the LUKS device because it couldn't find md1, there were also lots of xfs errors, so I switched to ext4.

On my second failure, when creating the raid1 arrays, I received a warning that I shouldn't use one as a boot disk, and that if I wanted to boot from it I should add --metadata=0.90... So I made md0 with 0.90 metadata and created md1 with its default 1.x metadata... However, when I changed my partition types to 'da', it panicked on reboot, and when I got back into the installer the partition tables were no longer visible in fdisk -- like they had been wiped out...

I noticed that the advice to use 'da' as the partition type was given in the context of 1.x metadata.

So now, on this third attempt--which is working--I toggled the boot flag on /dev/sd[ab]1, went back to partition type 'fd', and created both arrays with the --metadata=0.90 flag. I used ext4. I removed mdadm.conf from /boot/initrd.gz, directions for which I found here. I added the initrd to lilo, and reran lilo. Everything works great. Thanks.
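
For anyone who finds this later, that last part amounted to roughly the following from the chroot (the kernel version and module list are from my box, so adjust to yours, and double-check the flags against README_CRYPT.TXT and README_RAID.TXT):

Code:

root@slackware:/# rm /etc/mdadm.conf
root@slackware:/# mkinitrd -c -k 3.10.17 -m ext4 -f ext4 -r /dev/cryptvg/root \
                    -C /dev/md1 -L -R -u -o /boot/initrd.gz
root@slackware:/# lilo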

I would like to learn more about raid metadata. I wonder whether it would still work if I used partition type 'fd' and --metadata=0.90 for md0 (on /dev/sd[ab]1), but partition type 'da' and the default 1.x metadata for md1 (on /dev/sd[ab]2)... I also wonder whether the note that partition type 'fd' can interfere with array recovery from CD-ROMs applies only to 1.x metadata, or to 0.90 as well...
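
If I ever revisit that experiment, checking what an existing array actually uses is simple enough (output roughly from memory):

Code:

root@slackware:/# mdadm --detail /dev/md0 | grep Version
        Version : 0.90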

But overall, I am happy that my data is kept private with encryption, and redundant with raid1. Thanks again Richard Cranium for your assistance. Cheers.

Richard Cranium 03-24-2014 06:33 PM

Hmm.

I don't have LUKS on my system, but I do use LVM on top of software RAID. The motherboard on the machine where I'm typing this is an ASUS M4N98TD EVO. I also use GRUB2 to boot, which may be the big difference.

Code:

dorkbutt@dorkbutt:~$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md1 : active raid1 sdd3[0] sde3[1]
      142716800 blocks super 1.2 [2/2] [UU]
     
md2 : active raid1 sda3[2] sdc3[0]
      974999360 blocks super 1.2 [2/2] [UU]
     
md0 : active raid1 sdd2[0] sde2[1] sdc2[2](S)
      523968 blocks super 1.2 [2/2] [UU]
     
unused devices: <none>
dorkbutt@dorkbutt:~$ cd /boot
dorkbutt@dorkbutt:/boot$ df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        496M  43M  428M  10% /boot
dorkbutt@dorkbutt:/boot$

The mdadm.conf issue is kinda weird. It pretty much boils down to choosing one of two setups:
  1. Ensure that the "real" /etc/mdadm.conf has the correct information to assemble all of your RAID arrays prior to running mkinitrd (which copies /etc/mdadm.conf into /boot/initrd-tree/etc/mdadm.conf).
  2. Ensure that you don't have an /etc/mdadm.conf file at all so the initrd code queries and assembles the arrays at boot.

Slackware, unfortunately, provides an /etc/mdadm.conf that contains nothing but comments. That's sufficient for the initrd code to skip the part where it queries and assembles all the arrays for you, but not sufficient for udevd to give your arrays the correct names. That actually works fine for me, since LVM can use the metadata it writes into its own superblocks to assemble itself even with "bad" names for the underlying block devices.
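
For what it's worth, a "filled in" mdadm.conf only needs ARRAY lines mapping each device name to its array UUID, something like this (UUIDs below are made up; mdadm -E -s or mdadm --detail --scan will print your real ones):

Code:

ARRAY /dev/md0 metadata=0.90 UUID=01234567:89abcdef:01234567:89abcdef
ARRAY /dev/md1 metadata=0.90 UUID=89abcdef:01234567:89abcdef:01234567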

LUKS, as one would hope from an encrypted block device, is a lot more picky. If udevd has no information about the RAID arrays that it assembles for you, then it picks names like /dev/md127 and higher. That probably doesn't match what you picked when you created the array and LUKS fails because it can't find the correct device.

So, I'm glad that you're up and running, and sad that I gave you bad advice that led you down some wild goose chases. There are only so many minutes in all of our lives, and I don't like wasting anyone else's with bad advice.

slac-in-the-box 03-24-2014 10:38 PM

I don't think you gave me bad advice. I come to this site to learn, because it is more affordable and more efficient than college, and quicker than waiting on hold for phone tech support from any of the commercial operating systems... (Back in my xserve days, I would wait hours for AppleCare, just to be told it was an enterprise issue and that I'd need enterprise AppleCare at $500 per case, or $10,000 a year--whoa, and they were just putting a GUI wrapper around the same open source software that runs on Slackware, like apache, sendmail, etc.) So I have learned everything Linux here at LQ, while hardly having to make any posts, since so much has already been solved...

Your advice led me to learn about the differences between metadata versions... Since I am only using a level 1 array with two components, version 0.90 is just fine, even though it has not been the default metadata since 2009. Its limitations are that it can only have 28 array components, arrays must be less than 4TB, and disks cannot be moved between big- and little-endian machines... With the 1.x metadata, you can have 384+ components! whoa!

I also learned that
Quote:

The boot-loader LILO also can only boot from the version 0.90 superblock arrays. Alternative boot loaders, GRUB specifically, probably don't have this particular limitation.
(Maybe this will get Pat to include grub with the next Slackware release--he could provide an option to choose lilo or grub.)

Found this info at https://raid.wiki.kernel.org/index.p...rblock_formats.

So I think your advice was correct for your 1.2 metadata and grub...

Thanks for your many contributions to LQ!

