[SOLVED] Which is the best way to create a Slackware package which installs differently on different machines?
In earlier versions of Slackware I created my own custom kernel packages which were pushed to different machines.
The package's install/doinst.sh would look at which kernel configuration was currently installed and copy the matching kernel to a directory structure below /boot. Next to install/doinst.sh I had a directory called new_kernels containing the different kernels with their corresponding files. Finally, doinst.sh would look at which boot loader was used (lilo for legacy BIOS or extlinux for UEFI) and configure that boot loader.
Now, when trying to install such a package on Slackware 15 it fails; the explanation is given in a comment in installpkg:
Code:
# Tue Apr 17 17:26:44 UTC 2018
# Quit with the funny business in /install. Note however that /install still
# isn't a safe directory to use in a package for anything other than package
# metadata. Other files placed there are going to be left on the system in
# /installpkg-$(mcookie). That could be worked around, but we'll wait until
# someone reports there is a need. The main reason to do this is that /install
# was a collision point if more than one copy of installpkg was running at
# once. With this change, the pkgtools are (more or less) thread-safe.
I understand the point of making installpkg thread-safe. However, it would have been nice to have some sanctioned place to put files like this. With the current solution of moving the directory to /installpkg-$(mcookie), the doinst.sh script cannot find that directory: mcookie returns a different string every time it is called, and no INSTDIR environment variable (or any other variable hinting at what mcookie returned) is exported to doinst.sh.
What would be a good place to put directory structures like this? Somewhere below /tmp or /var/tmp? Should that place also take thread safety into consideration?
Of course my doinst.sh finished by cleaning up with "rm -r install/new_kernels".
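For what it's worth, one thread-safe staging pattern (a sketch only, not an established pkgtools convention) would be to let the script create a unique per-invocation directory with mktemp instead of relying on a fixed path:

```shell
#!/bin/sh
# Sketch: stage files in a unique per-invocation directory created with
# mktemp, so two concurrent installpkg runs cannot collide on a fixed path.
STAGE=$(mktemp -d "${TMPDIR:-/var/tmp}/new_kernels.XXXXXX") || exit 1
echo "staging in $STAGE"
# ... unpack the new_kernels payload into "$STAGE" and install from there ...
rm -rf "$STAGE"    # clean up, mirroring the old "rm -r install/new_kernels"
```

The mktemp template guarantees a fresh directory name per run, so the thread-safety question answers itself regardless of whether the parent is /tmp or /var/tmp.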
Interesting thought experiment. I think the suggestion of using some sort of holding directory will be your best bet.
The approach I took when using custom kernels was to simply not bother packaging them. You know what the files are and where they go. When upgrading without a package, you can leave the old kernel in situ to give you a fallback plan.
Thanks for both your replies! I didn't even consider that upgradepkg will complain if I put the temporary kernel directories outside /install. Maybe I will have to resort to making doinst.sh a self-extracting archive...
My current doinst.sh also keeps a copy of the previous kernel.
Code:
#!/bin/bash
# Determine which kernel directory below /boot the vmlinuz symlink points
# into (e.g. huge.s); "file" prints "symbolic link to <target>" and colrm
# strips the fixed-width prefix.
kernel=`file boot/vmlinuz | colrm 1 31 | xargs -0 dirname`
if [ "$kernel" == "." ]; then
  # Fallback for old machines not having the kernel in a subdir of /boot
  kernel=huge.s
  ln -sf ${kernel}/bzImage boot/vmlinuz
  ln -sf ${kernel}/config boot/config
  ln -sf ${kernel}/System.map boot/System.map
fi
if [ -d install/new_kernels/$kernel ]; then
  # Keep a dated copy of the previous kernel, then install the new one
  mkdir -p boot/old_kernels
  mv boot/$kernel boot/old_kernels/${kernel}.`date +%y%m%d` || true
  cp -rp install/new_kernels/$kernel boot
  gunzip boot/${kernel}/System.map.gz
  chown -R root:root boot
  if [ -d boot/efi/EFI/Boot ]; then
    # This machine boots with UEFI
    cp -p boot/${kernel}/bzImage boot/efi/EFI/Boot/vmlinuz
  else
    # Do not run lilo if extlinux is used to boot
    if [ ! -r boot/extlinux.conf ]; then
      # Running lilo more than once might help against "volid read error"
      # from removable discs
      lilo -r .
      sleep 1
      lilo -r .
      sleep 1
      lilo -r .
    fi
  fi
else
  echo "New kernel for $kernel is missing!"
fi
# cleanup
rm -r install/new_kernels
Maybe I will have to resort to making doinst.sh a self-extracting archive...
That would indeed preserve your old behavior and avoid the small nuisances of my approach.
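A minimal sketch of that self-extracting idea (file names and payload here are made up for illustration): the payload is appended to the script as a base64-encoded tarball after a marker line, and the script extracts it from itself at run time.

```shell
#!/bin/sh
# Demonstration sketch of a self-extracting doinst.sh: the payload is a
# base64-encoded tarball appended after an __ARCHIVE__ marker, and the
# script extracts it from itself ($0) at run time.
WORK=$(mktemp -d) && cd "$WORK" || exit 1

# Build a tiny payload tarball standing in for install/new_kernels.
mkdir -p new_kernels/huge.s
echo fake-bzImage > new_kernels/huge.s/bzImage
tar -czf payload.tar.gz new_kernels

# Assemble the self-extracting script.
cat > doinst.sh <<'EOF'
#!/bin/sh
STAGE=$(mktemp -d) || exit 1
sed '1,/^__ARCHIVE__$/d' "$0" | base64 -d | tar -xzf - -C "$STAGE"
echo "extracted into $STAGE:"
ls "$STAGE/new_kernels"
rm -rf "$STAGE"
exit 0
__ARCHIVE__
EOF
base64 payload.tar.gz >> doinst.sh

sh doinst.sh
```

Because doinst.sh itself is just a regular file inside the package, upgradepkg has nothing extra to complain about, and the extraction directory is unique per run.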
Since you're already creating backups of your old kernel in doinst.sh, might I also suggest that your UEFI machines copy vmlinuz to vmlinuz-old in boot/efi/EFI/Boot/? (And of course modify elilo.conf to support vmlinuz-old.) That way, in the unfortunate event that the new kernel doesn't boot, you have a fallback. I do something similar, except that I manually copy my kernel and initrd with a versioned filename to my EFI/Slackware directory, so I also have to manually edit elilo.conf every time. (I don't create packages out of my custom-compiled kernels, I just install the files.)
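That backup step might look something like this in doinst.sh (a sketch only; the demo below uses fake files in a temporary tree instead of the real boot/efi/EFI/Boot, and the boot loader config would still need an entry for vmlinuz-old):

```shell
#!/bin/sh
# Sketch of the suggested fallback: before installing the new UEFI
# kernel, keep the previous one as vmlinuz-old. Demo setup uses fake
# files in a temporary tree instead of the real boot/efi/EFI/Boot.
ROOT=$(mktemp -d)
EFIDIR="$ROOT/boot/efi/EFI/Boot"
mkdir -p "$EFIDIR" "$ROOT/boot/huge.s"
echo old-kernel > "$EFIDIR/vmlinuz"
echo new-kernel > "$ROOT/boot/huge.s/bzImage"

# The actual backup-and-replace step:
if [ -r "$EFIDIR/vmlinuz" ]; then
    cp -p "$EFIDIR/vmlinuz" "$EFIDIR/vmlinuz-old"
fi
cp -p "$ROOT/boot/huge.s/bzImage" "$EFIDIR/vmlinuz"
```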
Might I also suggest that your UEFI machines copy vmlinuz to vmlinuz-old in boot/efi/EFI/Boot/? (And of course modify elilo.conf to support vmlinuz-old.) That way, in the unfortunate event that the new kernel doesn't boot, you have a fallback. I do something similar, except that I manually copy my kernel and initrd with a versioned filename to my EFI/Slackware directory, so I also have to manually edit elilo.conf every time. (I don't create packages out of my custom-compiled kernels, I just install the files.)
Thanks for your suggestion! For those UEFI machines I don't use elilo but a newer version of syslinux/extlinux. Both my lilo configuration and those syslinux/extlinux configurations have only a single boot choice, pointing to the latest kernel. If something went wrong I would have to boot from rescue media like Knoppix or the Slackware installation ISO to fix things up, but everything needed would be there. Before pushing this package to many machines I try it on one or more test machines (possibly virtual). It was on such a virtual test machine in qemu that I found that installpkg failed to install the new kernel, and my attempts left /installpkg-xxx directories on the file system. No harm done, as this virtual machine was run in qemu snapshot mode: at the next restart of qemu the file system is restored to its reference state.
My main reason for creating my own kernel Slackware packages is that it is the easiest way to install on multiple machines. Having different kernel configurations and different boot loaders on different machines is the easy part. The hard part is those machines with nVidia cards requiring different (possibly legacy) versions of the binary nVidia driver.
The hard part is those machines with nVidia cards requiring different (possibly legacy) versions of the binary nVidia driver.
That one's simple: store the required NV*.run in /root/ of these machines and:
Code:
sh /root/NV*.run --ui=none -s --uninstall && sh /root/NV*.run --ui=none -a -s
You probably need -a for some legacy drivers, or it will pause the script. See NV*.run --advanced-options for details.
It's more of a workaround than a solution, but works fine, used it a lot before nouveau became stable.
The hard part is those machines with nVidia cards requiring different (possibly legacy) versions of the binary nVidia driver.
Newer versions of the nVidia driver support dkms, which streamlines the process somewhat.
My kernel installation script includes:
Code:
dkms autoinstall -k ${KVER}${LOCALVER}
which you could put in your doinst.sh.
Except that recently the nVidia *.run installer has been "failing" during installation (it installs all the files correctly but then errors out somewhere). I've read online that it errors out during the dkms setup, but I haven't tested NOT enabling dkms in the installer yet. Even with the errors, the files for dkms are installed, so I can still use dkms with kernels other than the one that was running during the original driver install.
On the other hand, if not all your nVidia drivers support dkms, it would probably be better to be consistent and not use dkms for any of them.
Newer versions of the nVidia driver support dkms, which streamlines the process somewhat.
Yes, that might help for those newer drivers, and one day, when nVidia has open-sourced its module, it might be included in the kernel. When the "proprietary" nvidia module is included in the kernel sources and all nvidia cards are supported by it, all these troubles will be gone.
But today I have some machines without nvidia cards, some machines needing old legacy nvidia drivers, and some machines capable of using the latest driver. Life would be easier if I could use the nouveau driver included in the kernel, but then things like CUDA would no longer be usable.
I saw that post and it is interesting, but my problem of which driver package to choose for which machine remains.
For Slackware 15.0 I have been using different versions of the nVidia driver and kernel packages from slackbuilds.org.
For earlier versions of Slackware I built my own packages using checkinstall to call the .run file; those packages have then needed some manual cleanup and repackaging.
But the trouble is with multiple machines, having different versions of the nVidia driver or no binary nVidia driver at all. Those machines are configured to use a cron job that looks for new Slackware packages in an NFS directory and upgrades or installs those packages. A kernel upgrade becomes tricky as it might also require a new nvidia driver. To make things worse, all those machines are always in use and can't be rebooted by the cron job. They will simply be rebooted when it suits their users, which in extreme cases might be months or even years after the updated kernel package has been installed. Because of this, I have previously avoided upgrading the kernel and instead manually patched the old kernel against all listed CVEs. Then I have pushed kernel and module packages with the patched old kernel, which still works fine with any installed binary nvidia driver or any other installed binary module.
Now I can count my Slackware 15 installations on a few fingers, and the kernel upgrade listed almost 50 CVEs. Usually most CVEs don't require any patching, as they are in parts of the kernel that I don't use, but every CVE has to be examined manually before such a conclusion can be drawn. So this time I am really trying to upgrade the kernel to a new version.
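The cron-driven upgrade loop described above could be sketched roughly as follows (REPO and the package name are made up, and the real upgradepkg call is left commented so the demo is harmless; --install-new makes upgradepkg also install packages not yet present):

```shell
#!/bin/sh
# Sketch of the cron job described above: scan a (here: fake) package
# directory and hand each package to upgradepkg. The real job would
# point REPO at the NFS share and run upgradepkg uncommented.
REPO=$(mktemp -d)
touch "$REPO/kernel-custom-5.15.19-x86_64-1.txz"
count=0
for pkg in "$REPO"/*.t?z; do
    [ -e "$pkg" ] || continue
    echo "would run: upgradepkg --install-new $pkg"
    # upgradepkg --install-new "$pkg"
    count=$((count + 1))
done
echo "processed $count package(s)"
```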
But will it do the right thing if called from doinst.sh while the old kernel is still running?
What? No. It's not a kernel package's job, or doinst.sh's job, to depend on external closed-source binaries.
The command belongs in /usr/local/bin, or in rc.local, where you could conveniently check whether /lib/modules already contains the driver before reinstalling it.
If you really must package legacy binary drivers, then use the SBo script.
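That rc.local guard could be sketched like this (a sketch only; "modprobe --dry-run" just resolves the module against modules.dep without loading anything, and the actual reinstall is left commented):

```shell
#!/bin/sh
# Sketch of the guard described above: probe whether the running kernel
# can resolve the nvidia module, and only rerun the stored installer
# when it cannot.
msg=$(modprobe --dry-run nvidia >/dev/null 2>&1 && echo present || echo missing)
echo "nvidia module $msg for kernel $(uname -r)"
if [ "$msg" = missing ]; then
    :    # sh /root/NV*.run --ui=none -a -s   # the actual reinstall
fi
```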
If you really must package legacy binary drivers, then use the SBo script.
Yes, I will probably end up doing so. And then create a special package whose doinst.sh checks whether any nVidia binary module package was installed before and, if so, installs the new nvidia module package for the new kernel, with the right legacy/latest version number for the nvidia card.
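That check could be sketched like this (package names and the fake log directory are illustrative; a real doinst.sh would look in /var/log/packages, relative to the package root):

```shell
#!/bin/sh
# Sketch: pick the nvidia module package variant by checking what the
# package log already records. Demo uses a fake log directory and
# made-up package names standing in for /var/log/packages entries.
PKGLOG=$(mktemp -d)
touch "$PKGLOG/nvidia-kernel-390.157-x86_64-1"   # pretend: legacy driver installed
prev=$(ls "$PKGLOG" | grep '^nvidia-kernel-' | head -n 1)
case "$prev" in
    nvidia-kernel-390*) choice=legacy-390 ;;
    nvidia-kernel-*)    choice=current ;;
    *)                  choice=none ;;
esac
echo "driver package choice: $choice"
```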