LinuxQuestions.org
Slackware: This Forum is for the discussion of Slackware Linux.
Old 10-05-2022, 07:34 PM   #1
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Rep: Reputation: 242
Slackware on SSD/NVMe with full encryption (LUKS + LVM): Caveats, tips?


Hi everyone,

Believe it or not, up to now all my systems have been running on 'classic' hard disks. However, I am planning a fresh install on an NVMe (M.2) disk in new hardware, plus a couple of upgrades from hard disk to SATA SSD. All the hard disks in my systems are encrypted (LUKS + LVM), so I thought I would do the same with my SSD installs.

So I started to research the topic on the web. There is a lot of information available, and I have read tons of it, but much of it seems to be outdated or contradictory. The more I read, the more confused I got about aspects like wear levelling, overprovisioning and TRIM. That's why I am asking for a little advice from users with practical experience installing Slackware on SSDs.

Here are some questions I am particularly interested in:

Overprovisioning
Does it make sense to leave some empty space in the partitioning scheme, in addition to what the manufacturer builds in anyway? If so: how much?
While I found sources that highly recommend this, others say that it is totally unnecessary with 'modern' SSDs. Which is correct?
BTW, the oldest SSD I have here is from 2020, so I guess they can all be considered reasonably 'modern'.

TRIM
How would I enable TRIM (discard) in /etc/mkinitrd.conf? Is LUKSTRIM=yes sufficient? I am asking because some of the sources I found say that TRIM must be enabled separately for LUKS and LVM in order to work properly. LUKSTRIM seems to refer to LUKS only. What about LVM?

Swap partition?
Would you recommend having a swap partition on an SSD, for a desktop PC or a laptop? If no: what about sleep mode and hibernation, where my laptop usually saves a RAM image to swap and reloads it into RAM on wake-up?

Official documentation
Apart from arbitrary documentation I found on the internet, there is official documentation for Slackware, too. Unfortunately, it's either not very fresh or was written with only classic hard disks in mind.
Old, referring to Slackware 14.1: Installing Slackware 14.1 on a SSD drive
Written with mainly hard disks in mind, no mention of SSD or NVMe: README_CRYPT.TXT
Are those instructions valid for installations on an SSD?

So, I'd appreciate it if someone could provide some information on the current status: what's really necessary or recommended, as well as possible caveats, when installing Slackware on an SSD.

Thanks a lot in advance!

gargamel
 
Old 10-06-2022, 05:23 AM   #2
ctrlaltca
Member
 
Registered: May 2019
Location: Italy
Distribution: Slackware
Posts: 323

Rep: Reputation: 361
I'm currently running LUKS + LVM on NVMe on my primary laptop.
The setup was done mostly following the "Combining LUKS and LVM" section of README_CRYPT.TXT.
One caveat surely concerns the boot loader: lilo/elilo are not really up to the task, so I ended up creating an unencrypted FAT32 partition for EFI boot (mounted at /boot/efi) and using rEFInd as the boot loader.

I use the folder /boot/efi/EFI/Slackware/ to save a copy of the kernel and initrd: vmlinuz-generic, vmlinuz-huge and initrd.gz
The bootloader itself is saved at /boot/efi/EFI/refind/refind_x64.efi, and refind.conf contains the following section for Slackware:
Code:
banner banners/slackware-banner.png

menuentry "Slackware" {
    icon \EFI\refind\icons\os_slackware.png
    loader \EFI\Slackware\vmlinuz-generic
    initrd \EFI\Slackware\initrd.gz
    options "root=/dev/cryptvg/root vga=normal ro resume=/dev/cryptvg/swap preempt=full acpi.ec_no_wakeup=1 mitigations=off"
}

menuentry "SlackwareH" {
    icon \EFI\refind\icons\os_slackware.png
    loader \EFI\Slackware\vmlinuz-huge
    options "root=/dev/cryptvg/root vga=normal ro resume=/dev/cryptvg/swap preempt=full"
}
As you can see, I added the "resume" kernel parameter needed for hibernation, plus a few others for my personal preference.
You may want to have a look at https://docs.slackware.com/howtos:sl...administration to see how you can set and edit EFI boot entries.
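For reference, registering rEFInd with the firmware from a running system can look like the following. This is only a sketch; the disk and partition numbers are example values and may not match your layout:

```shell
# Create an EFI boot entry pointing at rEFInd
# (disk and partition number are example values)
efibootmgr --create --disk /dev/nvme0n1 --part 1 \
    --label "rEFInd" --loader '\EFI\refind\refind_x64.efi'

# List the resulting entries and the boot order
efibootmgr -v
```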

About your questions:
* I didn't take care of overprovisioning, since the NVMe drive is already meant to handle it by itself;
* TRIM: I read some documentation about all the steps required to enable TRIM on LUKS+LVM, but I gave up on trying;
* Swap: it depends on available RAM and your usage. I created an encrypted swap partition to be used for hibernation, since my laptop only supports "modern standby" and it kinda sucks (the battery won't last 4 days in standby).
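On the TRIM point, each layer of the stack can be told to pass discards down. This is only a sketch of the usual steps, not what I actually run; the partition name and the mapping name "slackluks" are made-up examples, and the persistent flag assumes a LUKS2 header:

```shell
# Sketch: enabling discard at each layer of a LUKS + LVM stack.
# /dev/nvme0n1p2 and "slackluks" are example names.

# LUKS layer: allow discards when opening the container ...
cryptsetup open --allow-discards /dev/nvme0n1p2 slackluks
# ... or store the flag persistently in a LUKS2 header:
cryptsetup refresh --persistent --allow-discards slackluks

# LVM layer: have lvremove/lvreduce discard freed PV space.
# In /etc/lvm/lvm.conf:
#     issue_discards = 1

# Filesystem layer: run a periodic trim instead of using the
# 'discard' mount option:
fstrim -v /
```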
 
Old 10-06-2022, 12:03 PM   #3
drumz
Member
 
Registered: Apr 2005
Location: Oklahoma, USA
Distribution: Slackware
Posts: 905

Rep: Reputation: 694
Quote:
Originally Posted by ctrlaltca View Post
One caveat is surely about the boot loader: lilo/elilo are not really up to the task
How so? I'm using LUKS for my root partition and use elilo. Of course /boot/efi is on vfat.

Code:
# lsblk
NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda                 8:0    0   4.5T  0 disk  
└─sda1              8:1    0   4.5T  0 part  
  └─sda1_luks     253:2    0   4.5T  0 crypt /home/<user>/data
sdb                 8:16   0   4.5T  0 disk  
└─sdb1              8:17   0   4.5T  0 part  
  └─sdb1_luks     253:3    0   4.5T  0 crypt            (Part of /home/<user>/data - btrfs partition)
nvme0n1           259:0    0   1.8T  0 disk  
├─nvme0n1p1       259:1    0  1010M  0 part  /boot/efi
├─nvme0n1p2       259:2    0   3.5G  0 part  /recovery  (For Pop!_OS - Ubuntu derivative)
├─nvme0n1p3       259:3    0 279.4G  0 part             (For Pop!_OS - Ubuntu derivative)
├─nvme0n1p4       259:4    0     4G  0 part  
│ └─cryptswap     253:1    0     4G  0 crypt [SWAP]
└─nvme0n1p5       259:5    0   1.5T  0 part  
  └─luksnvme0n1p5 253:0    0   1.5T  0 crypt /
Code:
# cat /etc/fstab
#/dev/nvme0n1p4   swap             swap        defaults,pri=1   0   0
/dev/mapper/cryptswap  swap       swap        defaults,pri=1   0   0
UUID=3976aedc-9966-4b4d-b707-27d270367882   /                ext4        defaults,discard         1   1
#/dev/mapper/luksnvme0n1p5   /                ext4        defaults,discard         1   1
UUID=63F4-DC42   /boot/efi        vfat        defaults,discard         1   0
/dev/mapper/sda1_luks /home/<user>/data btrfs defaults,nosuid,nodev,compress=zstd,subvol=/data 0 0
UUID=63F5-1F74   /recovery        vfat        fmask=133,dmask=022,discard 1   0
(removed other entries)
Code:
# cat /etc/crypttab
cryptswap  /dev/disk/by-partuuid/08cb4a62-8545-45e2-82be-faef8117db1b  none  swap
luksnvme0n1p5  /dev/nvme0n1p5 none discard
Overprovisioning

I use all my SSD.

TRIM

Code:
# cat /etc/cron.weekly/fstrim 
#!/bin/sh

fstrim -v -a > /var/log/fstrim.log
Code:
# grep TRIM /etc/mkinitrd.conf
LUKSTRIM="/dev/nvme0n1p5" # verify support with 'hdparm -I $dev | grep TRIM'
See discard options in /etc/fstab above.

Unlike you I'm only using LUKS. I'm not using LVM. But when I was experimenting with it I had:

Code:
# grep discards /etc/lvm/lvm.conf 
        # Configuration option devices/issue_discards.
        # Issue discards to PVs that are no longer used by an LV.
        # used. Storage that supports discards advertise the protocol-specific
        # way discards should be issued by the kernel (TRIM, UNMAP, or
        # benefit from discards, but SSDs and thinly provisioned LUNs
        # generally do. If enabled, discards will only be issued if both the
        #issue_discards = 0
        issue_discards = 1  <-- This is important
        # Configuration option allocation/thin_pool_discards.
        # The discards behaviour of thin pool volumes.
        # thin_pool_discards = "passdown"
        # causing problems. Features include: block_size, discards,
        # discards_non_power_2, external_origin, metadata_resize,
        # thin_disabled_features = [ "discards", "block_size" ]
Swap

My swap is on SSD. But due to having gobs of memory swap never gets used.
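If, like here, RAM is plentiful and you'd rather the kernel barely swapped at all, vm.swappiness can be lowered. A sketch, not something from this thread; the value 10 is an arbitrary example:

```shell
# Show the current swappiness (the kernel default is usually 60)
cat /proc/sys/vm/swappiness

# As root, lower it for the running system:
#   sysctl vm.swappiness=10
# and persist it across reboots, e.g. from /etc/rc.d/rc.local:
#   echo 10 > /proc/sys/vm/swappiness
```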

Official documentation

I used README_CRYPT.TXT and README_LVM.TXT as guides.
 
2 members found this post helpful.
Old 10-07-2022, 02:24 AM   #4
ctrlaltca
Member
 
Registered: May 2019
Location: Italy
Distribution: Slackware
Posts: 323

Rep: Reputation: 361
Quote:
Originally Posted by drumz View Post
How so? I'm using LUKS for my root partition and use elilo. Of course /boot/efi is on vfat.
I'm sure it can be made to work, but every few months a new issue pops up on new systems (e.g. see last August's kernel boot failure with elilo).
I love lilo and use it on legacy systems because of its simplicity, but I kind of agree that it's a dead horse to ride.

Code:
# grep TRIM /etc/mkinitrd.conf
LUKSTRIM="/dev/nvme0n1p5" # verify support with 'hdparm -I $dev | grep TRIM'
I'm not sure you can actually use hdparm on NVMe. It could work for old SATA SSDs.
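For NVMe, the discard capability can be checked without hdparm. A sketch; the device name is an example:

```shell
# The kernel's view of discard support for all block devices:
lsblk --discard
# Non-zero DISC-GRAN / DISC-MAX columns mean discards get through.

# Or ask the controller directly (nvme-cli package):
nvme id-ctrl /dev/nvme0 | grep -i oncs
# If bit 2 of ONCS is set, Dataset Management (deallocate,
# i.e. TRIM) is supported.
```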
 
Old 10-07-2022, 07:56 AM   #5
drumz
Member
 
Registered: Apr 2005
Location: Oklahoma, USA
Distribution: Slackware
Posts: 905

Rep: Reputation: 694
Quote:
Originally Posted by ctrlaltca View Post
I'm sure it can be made to work, but every few months a new issue will pop up on new systems (eg. see last august's recent kernel boot failure with elilo).
It Works For Me™. No issues.

Quote:
Originally Posted by ctrlaltca View Post
Code:
# grep TRIM /etc/mkinitrd.conf
LUKSTRIM="/dev/nvme0n1p5" # verify support with 'hdparm -I $dev | grep TRIM'
I'm not sure you can actually use hdparm on NVMe. It could work for old SATA SSDs.
You are correct. The comment is there in the original file, which I didn't delete.

Code:
# hdparm -I /dev/nvme0n1p5

/dev/nvme0n1p5:
# hdparm -I /dev/nvme0n1

/dev/nvme0n1:
# hdparm  /dev/nvme0n1

/dev/nvme0n1:
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 1907729/64/32, sectors = 3907029168, start = 0
# hdparm  /dev/nvme0n1p5

/dev/nvme0n1p5:
 readonly      =  0 (off)
 readahead     = 256 (on)
 geometry      = 1612932/64/32, sectors = 3303284736, start = 595351552
 
Old 10-11-2022, 08:33 PM   #6
gargamel
Senior Member
 
Registered: May 2003
Distribution: Slackware, OpenSuSE
Posts: 1,839

Original Poster
Rep: Reputation: 242
@ALL: Thanks a lot for sharing your experience!
To summarise, the general recommendation for Slackware on NVMe drives (and, I guess, newer SATA SSDs as well) seems to be:
  • Activate TRIM (allow discard)
  • Don't worry about wear levelling, and don't leave space unused for the sake of overprovisioning; assume the manufacturer has already taken care of this appropriately
  • Provided there is enough RAM that swap is rarely used (except when hibernating a laptop), there is no need to worry about premature wear of the SSD/NVMe from swapping

"Newer" meaning produced after 2017 or so, right?

Let me know if there is something to add or correct in the above, admittedly very concise, summary. Thanks in advance, again!

Last edited by gargamel; 10-11-2022 at 08:44 PM.
 
1 member found this post helpful.