Slackware: This Forum is for the discussion of Slackware Linux.
Hi, trying to install Slackware64-current on an ASUS VivoBook 15 which uses NVMe, but the installer can't see it. It shows up in the output of lspci, but nowhere else that I can find.
Interestingly, I succeeded in installing Slackware64-current on an ASUS M570DD, which as far as I can tell is basically the same thing as the VivoBook 15. NVMe was no trouble at all.
Even more interestingly, CloneZilla Live 20201102 is able to access NVMe on the ASUS VivoBook 15.
What's the difference between Slackware64-current and CloneZilla Live? Why can the latter see NVMe but not the former on this laptop? Slackware64-current definitely has NVMe support:
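(For anyone who wants to verify that claim: grep the kernel config, either /proc/config.gz on a running system built with CONFIG_IKCONFIG_PROC, or the config file shipped alongside the installer kernels. The mirror path below is from my setup; adjust to wherever your tree lives.)

Code:
```shell
# Either of these should show CONFIG_BLK_DEV_NVME and friends:
zcat /proc/config.gz | grep -i nvme
grep -i nvme /home/ftp/slackware64/kernels/huge.s/config
```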
Hmm, just took a look around, and I see there's a newer version of CloneZilla Live available (20210127). I'll give it a try to see whether it can also access my NVMe. If not, then maybe my issue is due to some recent kernel bug or incompatibility.
My ChangeLog.txt's date is Tue Feb 2 20:16:43 UTC 2021.
I have noticed that on SoC platforms it is mandatory to turn on GPIO options in the kernel, or many things won't work.
When you boot with Clonezilla, do you see GPIO modules loaded?
In Clonezilla, there are no GPIO modules loaded, no mention of GPIO in dmesg. The Clonezilla initrd has plenty of GPIO stuff, including specific to Asus computers.
The newer Clonezilla works just as well as the old one, so there's not been a kernel regression. The problem is isolated to Slackware itself. Apparently there's something it's not doing at boot time that's needed in order to enable NVMe.
Clonezilla does load these modules with Asus in the name:
asus_nb_wmi
asus_wmi
pegasus (haha)
None of these modules are present in the Slackware installer.
NVMe is definitely enabled. I'm able to boot Clonezilla and see the NVMe. I'm also able to boot Windows which came preinstalled. Clonezilla sees NVMe just fine, and it shows up in both /dev and lsblk. Yet, Slackware does not, so naturally I can't partition it either.
Experimenting to figure out how best to add a module to the installer...
First, I got the latest slackware64-current to be the baseline for my local customizations. This was a minor update to the mirror I already maintain for my home network. I have scripts for all this, but in this post I'll just spell out the underlying commands to keep things simple.
Next, I updated my host system to the latest, giving me easy access to the right version of the kernel modules. I already have file://home/ftp/slackware64/ in /etc/slackpkg/mirrors. This particular update didn't involve installing or removing any packages, though I checked anyway.
I could have rebooted at this point to use the new kernel, but it wasn't really necessary since I can just tell depmod what kernel version will be used.
I proceeded to unpack the initrd.img file, to which I'll be adding modules.
Code:
mkdir /tmp/initrd-tree
cd /tmp/initrd-tree
# initrd.img is an xz-compressed newc-format cpio archive
xzcat /home/ftp/slackware64/EFI/BOOT/initrd.img | cpio -i
To make things simpler for myself, I then decompressed all the *.ko.xz files. I have no clue why they're individually compressed in the first place, since the entire initrd will be compressed anyway.
Code:
find . -name '*.ko.xz' | xargs unxz
Then, to add the Asus modules (not Pegasus) and their dependencies:
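Something like the following, run from /tmp/initrd-tree. This is a sketch rather than verbatim what I ran: the dependency list should really come from modules.dep on the host (on mine, asus-nb-wmi pulled in asus-wmi, wmi, and sparse-keymap), and the paths assume the installer kernel is 5.10.17.

Code:
```shell
# Sketch: copy the Asus WMI modules plus likely dependencies into the
# unpacked initrd tree, then rebuild the module dependency files.
# Consult modules.dep on the host for the authoritative dependency list.
KVER=5.10.17
for m in drivers/platform/x86/asus-nb-wmi.ko \
         drivers/platform/x86/asus-wmi.ko \
         drivers/platform/x86/wmi.ko \
         drivers/input/sparse-keymap.ko; do
    mkdir -p lib/modules/$KVER/kernel/$(dirname $m)
    cp /lib/modules/$KVER/kernel/$m lib/modules/$KVER/kernel/$(dirname $m)/
done
zcat /home/ftp/slackware64/kernels/huge.s/System.map.gz | depmod -F /dev/stdin -b . $KVER
```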
This complains about there being no modules.order or modules.builtin, but I guess they're not needed just for running the installer.
With that, I recreated initrd.img with the additional modules:
Maybe they won't automatically get loaded at boot, but who cares? I'll just modprobe them. Anyway, with all of the above done, the next step is to create the disc image. I used the procedure in isolinux/README.TXT, slightly modified. Because my disc burner seems to be having an issue finalizing DVDs, I instead wrote this image to an SD card.
I then booted the installer and logged in as root. It automatically loaded asus-wmi and asus-nb-wmi. Alas, this was not enough for NVMe to work.
I'm considering comparing the contents of dmesg between Clonezilla and Slackware to see what's different. This could get annoying to do with no network and no common storage. I will have to hunt up a USB memory stick for each one to write to.
I finally got a chance to compare the CloneZilla and Slackware dmesg outputs. The only USB memory stick I could find turned out to be totally dead, so I scrounged up one of those weird combination USB/SD memory cards which did the job.
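In case anyone wants to reproduce the comparison: dmesg timestamps never line up between boots, so strip them before diffing. The filenames here are made up; I saved one dmesg from each boot onto the shared card.

Code:
```shell
# Remove the "[   12.345678]" timestamp prefix, then diff:
sed 's/^\[[^]]*\] //' dmesg-clonezilla.txt > clonezilla.clean
sed 's/^\[[^]]*\] //' dmesg-slackware.txt > slackware.clean
diff clonezilla.clean slackware.clean | less
```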
Here's CloneZilla's dmesg, in part:
Code:
i2c i2c-0: 2/8 memory slots populated (from DMI)
i2c i2c-0: Systems with more than 4 memory slots not supported yet, not instantiating SPD
intel-lpss 0000:00:15.0: enabling device (0004 -> 0006)
idma64 idma64.0: Found Intel integrated DMA 64-bit
nvme nvme0: pci function 10000:e1:00.0
pcieport 10000:e0:1d.0: can't derive routing for PCI INT A
nvme 10000:e1:00.0: PCI INT A: not connected
nvme nvme0: 8/0/0 default/read/poll queues
nvme0n1: p1 p2 p3 p4
Here's Slackware:
Code:
i2c i2c-0: 2/8 memory slots populated (from DMI)
i2c i2c-0: Systems with more than 4 memory slots not supported yet, not instantiating SPD
drm_kms_helper: Unknown symbol fb_sys_write (err -2)
drm_kms_helper: Unknown symbol sys_imageblit (err -2)
drm_kms_helper: Unknown symbol sys_fillrect (err -2)
drm_kms_helper: Unknown symbol sys_copyarea (err -2)
drm_kms_helper: Unknown symbol fb_sys_read (err -2)
The drm_kms_helper errors are entirely unrelated. I only put them there to show the gaping void after initializing i2c. The Slackware kernel messages don't mention intel-lpss, idma64, or nvme. I know Slackware includes nvme support, but that doesn't help when the storage device isn't accessible in the first place.
Adding the intel-lpss and idma64 modules wasn't enough to get my nvme to show up, though I did confirm that lpss and idma64 are now being initialized the same in Slackware as in Clonezilla. I'll write a follow-up post as I continue to investigate.
This round, I tried to use a big hammer and see if I got results. I went ahead and put every *.ko file I have into the initrd.
Code:
# Import all modules from the host into the initrd tree
(cd /lib/modules/5.10.17; find . -name '*.ko') | tar -C /lib/modules/5.10.17 -T - -c |
tar -C lib/modules/5.10.17 -x -v
# Rebuild module dependency info against the huge kernel's System.map
zcat /home/ftp/slackware64/kernels/huge.s/System.map.gz | depmod -F /dev/stdin -b . 5.10.17
# Build the new initrd
# [...]
# "Burn" to SD card
# [...]
Wonder of wonders, that did it! Now I have /dev/nvme*.
Next, I will try to track down which modules are actually required to make this work, since we should probably include them in the installer by default.
You can try looking at the output of lsmod and comparing it with the output of lsmod from when something was missing. That should help identify which modules were additionally loaded when it does work.
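A minimal sketch of that comparison (filenames made up; lsmod's first column after the header line is the module name):

Code:
```shell
# On the boot where NVMe works:
lsmod | awk 'NR>1 {print $1}' | sort > working.txt
# On the stock installer boot:
lsmod | awk 'NR>1 {print $1}' | sort > stock.txt
# Modules loaded only on the working boot:
comm -23 working.txt stock.txt
```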
I compared the stock Slackware dmesg with my Christmas Tree-edition Slackware (i.e. includes every module), and the main thing that jumped out at me was vmd.
modules.dep reveals that vmd.ko doesn't depend on anything, so the command to put it into the initrd is much simpler than any of the above.
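For the record, here's roughly what that looks like, again run from /tmp/initrd-tree. On 5.10, vmd.ko lives under kernel/drivers/pci/controller; verify with find if your tree differs.

Code:
```shell
KVER=5.10.17
mkdir -p lib/modules/$KVER/kernel/drivers/pci/controller
cp /lib/modules/$KVER/kernel/drivers/pci/controller/vmd.ko \
   lib/modules/$KVER/kernel/drivers/pci/controller/
zcat /home/ftp/slackware64/kernels/huge.s/System.map.gz | depmod -F /dev/stdin -b . $KVER
```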
Tested it, and yup! vmd.ko is the only thing I needed to add in order to get /dev/nvme*.
I'm writing a much longer procedure to show exactly how I set up the laptop. I'll post it later (when everything possible is working).
Last edited by andygoth; 02-22-2021 at 09:55 AM.
Reason: specificity
Thanks for this. The latest -current installer had no problems seeing my PCIe SSD, but rebooting with the huge kernel failed with kernel panics; this was solved by recompiling huge with NVMe target and VMD support built in.
Quote:
Originally Posted by dr.s
Thanks for this. The latest -current installer had no problems seeing my PCIe SSD, but rebooting with the huge kernel failed with kernel panics; this was solved by recompiling huge with NVMe target and VMD support built in.
Very interesting. I installed Slackware-current with the (then) latest ISO on NVMe without any issues on this laptop, as seen above from the kernel config.
Yeah, I got some inconsistent results here as well. The unmodified huge kernel had no problems booting an older laptop with an NVMe/PCIe SSD, but would panic on the new laptop. I've tested with 5.13.x and 5.14-rc; I haven't tried 5.12.x though.