LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware
Old 12-02-2022, 01:41 PM   #1
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 963

Rep: Reputation: 650
Which is the best way to create a Slackware package which installs differently on different machines?


In earlier versions of Slackware I created my own custom kernel packages which were pushed to different machines.

The package's install/doinst.sh would look at which kernel configuration was currently installed and copy that kernel to a directory structure below /boot. Next to install/doinst.sh I had a directory called new_kernels containing the different kernels with their corresponding files. Finally, doinst.sh would look at which boot loader was used (lilo for legacy BIOS or extlinux for UEFI) and configure that boot loader.

A typical package could have contents like this:

Code:
install/
install/doinst.sh
install/new_kernels/
install/new_kernels/huge.s/
install/new_kernels/huge.s/System.map.gz
install/new_kernels/huge.s/bzImage
install/new_kernels/huge.s/config
install/new_kernels/mmc.s/
install/new_kernels/mmc.s/System.map.gz
install/new_kernels/mmc.s/bzImage
install/new_kernels/mmc.s/config
Now, when trying to install such a package on Slackware 15, it fails. The explanation is given in a comment in installpkg:

Code:
# Tue Apr 17 17:26:44 UTC 2018
# Quit with the funny business in /install. Note however that /install still
# isn't a safe directory to use in a package for anything other than package
# metadata. Other files placed there are going to be left on the system in
# /installpkg-$(mcookie). That could be worked around, but we'll wait until
# someone reports there is a need. The main reason to do this is that /install
# was a collision point if more than one copy of installpkg was running at
# once. With this change, the pkgtools are (more or less) thread-safe.
I understand the point of making installpkg thread-safe. However, it would have been nice to have some good place to put files like this. With the current solution of moving the directory to /installpkg-$(mcookie), the doinst.sh script is unable to find that directory, as mcookie returns a different string every time it is called, and no INSTDIR environment variable (or any other variable hinting at what mcookie returned) is exported to the doinst.sh script.

What would be a good place to put directory structures like this? Somewhere below /tmp or /var/tmp? Should that place also take thread safety into consideration?

Of course, my doinst.sh finished by cleaning up with "rm -r install/new_kernels".

regards Henrik
 
Old 12-02-2022, 01:45 PM   #2
drumz
Member
 
Registered: Apr 2005
Location: Oklahoma, USA
Distribution: Slackware
Posts: 905

Rep: Reputation: 695
I think I'd put that stuff in /usr/share/<yourpackage>/. In doinst.sh do your normal copy to /boot.

Then you have 2 choices:

1. Just leave the files in /usr/share/<yourpackage>/. Yes, it's a little bloat, but whatever.

2. Delete /usr/share/<yourpackage> in doinst.sh. Then be annoyed with warning messages when running upgradepkg or removepkg due to "missing" files.
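A minimal sketch of option 2 might look like the following. The package name (mykernels) and kernel name are made up for illustration, and a scratch directory stands in for the filesystem root that doinst.sh runs in:

```shell
#!/bin/sh
# Sketch of option 2: ship the kernels under usr/share/<yourpackage>/
# and let doinst.sh copy them into boot/, then delete the staging dir.
# A scratch dir stands in for "/" (doinst.sh runs with cwd = package root).
root=$(mktemp -d)
mkdir -p "$root/usr/share/mykernels/new_kernels/huge.s" "$root/boot"
touch "$root/usr/share/mykernels/new_kernels/huge.s/bzImage"

cd "$root"
# --- the part that would live in doinst.sh ---
payload=usr/share/mykernels
cp -rp "$payload/new_kernels/huge.s" boot/
rm -rf "$payload"   # removepkg/upgradepkg will now warn about these "missing" files
```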
 
1 member found this post helpful.
Old 12-02-2022, 03:12 PM   #3
rkelsen
Senior Member
 
Registered: Sep 2004
Distribution: slackware
Posts: 4,454
Blog Entries: 7

Rep: Reputation: 2558
Interesting thought experiment. I think the suggestion of using some sort of holding directory will be your best bet.

The approach I took when using custom kernels was to simply not bother packaging them. You know what the files are and where they go. When upgrading without a package, you can leave the old kernel in situ to give you a fallback plan.

Nowadays, I just run stock kernels everywhere.
 
1 member found this post helpful.
Old 12-02-2022, 03:39 PM   #4
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 963

Original Poster
Rep: Reputation: 650
Thanks for both your replies! I didn't even consider that upgradepkg will complain if I put the temporary kernel directories outside /install. Maybe I will have to resort to making doinst.sh a self-extracting archive...
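A self-extracting doinst.sh could follow the usual marker-plus-tail pattern: append a tarball after a marker line and extract everything past it at run time. A runnable sketch, with invented file names and scratch directories standing in for the package root and the target system:

```shell
#!/bin/sh
# Sketch of a self-extracting doinst.sh: the kernel payload is a tarball
# appended after a __PAYLOAD__ marker, so no loose files need to live in
# /install. Everything here runs in scratch dirs; names are illustrative.
work=$(mktemp -d)
mkdir -p "$work/new_kernels/huge.s"
echo kernel > "$work/new_kernels/huge.s/bzImage"

# Build the script part; the quoted heredoc keeps $0 etc. literal.
script="$work/doinst.sh"
cat > "$script" <<'EOF'
#!/bin/sh
# find the first line after the marker, then pipe the rest into tar
line=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
tail -n "+$line" "$0" | tar xf -
exit 0
__PAYLOAD__
EOF
# Append the binary payload right after the marker line.
( cd "$work" && tar cf - new_kernels ) >> "$script"

# Run it from another directory, as installpkg would from /.
dest=$(mktemp -d)
( cd "$dest" && sh "$script" )
```

The `exit 0` before the marker matters: it stops the shell from ever trying to parse the binary payload as script text.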

My current doinst.sh also keeps a copy of the previous kernel.

Code:
#!/bin/bash

kernel=`file boot/vmlinuz | colrm 1 31 | xargs -0 dirname`
if [ "$kernel" == "." ]; then
  # Fallback for old machines not having kernel in subdir of /boot
  kernel=huge.s
  ln -sf ${kernel}/bzImage boot/vmlinuz
  ln -sf ${kernel}/config boot/config
  ln -sf ${kernel}/System.map boot/System.map
fi

if [ -d install/new_kernels/$kernel ]; then
  mkdir -p boot/old_kernels
  mv boot/$kernel boot/old_kernels/${kernel}.`date +%y%m%d` || true
  cp -rp install/new_kernels/$kernel boot
  gunzip boot/${kernel}/System.map.gz
  chown -R root.root boot
  if [ -d boot/efi/EFI/Boot ]; then
    # This machine boots with UEFI
    cp -p boot/${kernel}/bzImage boot/efi/EFI/Boot/vmlinuz
  else
    # Do not run lilo if extlinux is used to boot
    if [ ! -r boot/extlinux.conf ]; then
      # Doing this more than once might help against "volid read error" from
      # removable discs
      lilo -r .
      sleep 1
      lilo -r .
      sleep 1
      lilo -r .
    fi
  fi
else
  echo New kernel for $kernel is missing!
fi
# cleanup
rm -r install/new_kernels
regards Henrik
 
Old 12-02-2022, 03:57 PM   #5
drumz
Member
 
Registered: Apr 2005
Location: Oklahoma, USA
Distribution: Slackware
Posts: 905

Rep: Reputation: 695
Quote:
Originally Posted by henca View Post
Maybe I will have to resort to making doinst.sh a self extracting archive...
That would indeed preserve your old behavior and avoid the small nuisances of my approach.

Since you're already creating backups of your old kernel in doinst.sh, might I suggest that on your UEFI machines you also copy vmlinuz to vmlinuz-old in boot/efi/EFI/Boot/? (And of course modify elilo.conf to support vmlinuz-old.) That way, in the unfortunate event that the new kernel doesn't boot, you have a fallback. I do something similar, except I manually copy my kernel and initrd with a versioned filename to my EFI/Slackware directory, so I also have to manually edit elilo.conf every time. (I don't create packages out of my custom-compiled kernels; I just install the files.)
 
Old 12-03-2022, 03:43 AM   #6
chrisretusn
Senior Member
 
Registered: Dec 2005
Location: Philippines
Distribution: Slackware64-current
Posts: 2,975

Rep: Reputation: 1551
I don't know how many custom kernels you are talking about. I would just make a package for each machine and name each package accordingly.
 
Old 12-03-2022, 03:51 AM   #7
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 963

Original Poster
Rep: Reputation: 650
Quote:
Originally Posted by drumz View Post
might I also suggest for your UEFI machines to also copy vmlinuz to vmlinuz-old in boot/efi/EFI/Boot/? (And of course modify elilo.conf to support vmlinuz-old.) That way in the unfortunate event that the new kernel doesn't boot you have a fallback. I do something similar except I manually copy my kernel and initrd with a versioned filename to my EFI/Slackware directory, so I also have to manually edit elilo.conf every time. (I don't create packages out of my custom-compiled kernels, I just install the files.)
Thanks for your suggestion! For those UEFI machines I don't use elilo but instead a newer version of syslinux/extlinux. Both my lilo configuration and those syslinux/extlinux configurations have only a single boot choice, pointing to the latest kernel. If something went wrong I would have to boot from some rescue media like Knoppix or the Slackware installation iso to fix things up, but everything needed would be there.

Before pushing this package to many machines I try it on some test machine(s) (maybe virtual). It was on such a virtual test machine in qemu that I found that installpkg failed to install the new kernel and, from my attempts, left /installpkg-xxx directories on the file system. No harm done, as this virtual machine runs in qemu snapshot mode; at the next restart of qemu the file system is restored to its reference state.

My main reason for creating my own kernel Slackware packages is that it is the easiest way to install on multiple machines. Having different kernel configurations and different boot loaders on different machines is the easy part. The hard part is those machines with nVidia cards requiring different (possibly legacy) versions of the binary nVidia driver.

regards Henrik
 
Old 12-03-2022, 04:28 AM   #8
elcore
Senior Member
 
Registered: Sep 2014
Distribution: Slackware
Posts: 1,753

Rep: Reputation: Disabled
Quote:
Originally Posted by henca View Post
The hard part are those machines with nVidia cards requiring different (possibly legacy-) versions of the binary nVidia driver.
That one's simple: store the required NV*.run in /root/ on these machines and run:
Code:
sh /root/NV*.run --ui=none -s --uninstall && sh /root/NV*.run --ui=none -a -s
You probably need -a for some legacy drivers, or it will pause the script. See NV*.run --advanced-options for details.
It's more of a workaround than a solution, but it works fine; I used it a lot before nouveau became stable.
 
1 member found this post helpful.
Old 12-03-2022, 07:10 AM   #9
drumz
Member
 
Registered: Apr 2005
Location: Oklahoma, USA
Distribution: Slackware
Posts: 905

Rep: Reputation: 695
Quote:
Originally Posted by henca View Post
The hard part are those machines with nVidia cards requiring different (possibly legacy-) versions of the binary nVidia driver.
Newer versions of the nVidia driver support dkms, which streamlines the process somewhat.

My kernel installation script includes:

Code:
dkms autoinstall -k ${KVER}${LOCALVER}
which you could put in your doinst.sh.

Except that recently the nVidia *.run installer has been "failing" during installation (it installs all the files correctly but then errors out somewhere). I've read online that it errors out during the dkms setup, though I haven't yet tested NOT enabling dkms in the installer. Even with the errors, the files for dkms are installed, so I can still use dkms on kernels other than the one I was running during the original driver install.

On the other hand, if not all your nVidia drivers support dkms, it would probably be better to be consistent and not use dkms for any of them.
 
1 member found this post helpful.
Old 12-04-2022, 05:03 AM   #10
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 963

Original Poster
Rep: Reputation: 650
Quote:
Originally Posted by drumz View Post
Newer version of the nVidia driver support dkms, which streamlines the process somewhat.
Yes, that might help for those newer drivers, and one day, when nVidia has open-sourced its module, it might be included in the kernel. When the "proprietary" nvidia module is included in the kernel sources and all nvidia cards are supported by it, all these troubles will be gone.

But today I have some machines without nvidia cards, some machines needing old legacy nvidia drivers, and some machines capable of using the latest driver. Life would be easier if I could use the nouveau driver included in the kernel, but then things like CUDA would no longer be usable.

regards Henrik
 
Old 12-04-2022, 05:08 AM   #11
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 963

Original Poster
Rep: Reputation: 650
Quote:
Originally Posted by elcore View Post
Code:
sh /root/NV*.run --ui=none -s --uninstall && sh /root/NV*.run --ui=none -a -s
That is one interesting concept! But will it do the right thing if called from doinst.sh while the old kernel is still running?

regards Henrik
 
Old 12-04-2022, 05:26 AM   #12
kjhambrick
Senior Member
 
Registered: Jul 2005
Location: Round Rock, TX
Distribution: Slackware64 15.0 + Multilib
Posts: 2,159

Rep: Reputation: 1512
drumz --

I tried the dkms feature fairly recently and it failed, so I've never tried it again.

I guessed that 'I did it wrong', and since I didn't really need dkms, I gave up on it.

So this is my standard method for each new kernel: boot to runlevel 3; log in as root; rerun the NVidia .run script; rerun vmware-modconfig; reboot.

It's kinda nice to know that (this time) 'maybe it wasn't just me' having dkms problems with the NVidia stuff.

Thanks.

-- kjh

p.s. 0XBF has 'just now' shared a nice SlackBuild for the NVidia.run Files here:

An alternative approach to packaging NVIDIA's *.run drivers

I like it, and I am going to test it soon; I'll probably start using it.
 
1 member found this post helpful.
Old 12-04-2022, 07:21 AM   #13
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 963

Original Poster
Rep: Reputation: 650
Quote:
Originally Posted by kjhambrick View Post
p.s. 0XBF has 'just now' shared a nice SlackBuild for the NVidia.run Files here:

An alternative approach to packaging NVIDIA's *.run drivers
I saw that post and it is interesting, but my problem of which driver package to choose for which machine remains.

For Slackware 15.0 I have been using different versions of the nVidia driver and kernel packages from slackbuilds.org.

For earlier versions of Slackware I built my own packages using checkinstall to call the .run file; those packages then needed some manual cleanup and repackaging.

But the trouble is with multiple machines having different versions of the nVidia driver, or no binary nVidia driver at all. Those machines are configured with a cron job that looks for new Slackware packages in an NFS directory and upgrades or installs those packages. A kernel upgrade becomes tricky, as it might also require a new nvidia driver. To make things worse, all those machines are always in use and can't be rebooted by the cron job. They will simply be rebooted when it suits them, which in extreme cases might be months or even years after the updated kernel package has been installed. Because of this trouble, I have previously avoided upgrading the kernel and instead manually patched the old kernel against all listed CVEs. Then I have pushed kernel and module packages with the patched old kernel, which still works fine with any installed binary nvidia driver or any other installed binary module.
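The cron-driven push described above could be sketched roughly like this. REPO and PKGDB are scratch stand-ins for the NFS directory and /var/log/packages, the package names are invented, and the upgradepkg call is only echoed:

```shell
#!/bin/sh
# Sketch of the cron job idea: compare packages in an NFS repo against
# the installed-package database and feed anything new or changed to
# "upgradepkg --install-new". Paths and names here are mock values.
REPO=$(mktemp -d); PKGDB=$(mktemp -d)
touch "$REPO/mykernel-5.15.80-x86_64-1.txz" "$REPO/mytool-1.0-x86_64-1.txz"
touch "$PKGDB/mytool-1.0-x86_64-1"      # this one is already installed

todo=""
for pkg in "$REPO"/*.txz; do
  base=$(basename "$pkg" .txz)
  # a matching entry in the package db means same name *and* version
  [ -e "$PKGDB/$base" ] || todo="$todo $pkg"
done
if [ -n "$todo" ]; then
  echo "would run: upgradepkg --install-new$todo"
fi
```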

Now I can count my Slackware 15 installations on a few fingers, and the kernel upgrade listed almost 50 CVEs. Usually most CVEs don't require any patching, as they are in parts of the kernel that I don't use, but every CVE has to be manually examined before any such conclusion can be drawn. So this time I am really trying to upgrade the kernel to a new version.

regards Henrik
 
1 member found this post helpful.
Old 12-04-2022, 11:01 AM   #14
elcore
Senior Member
 
Registered: Sep 2014
Distribution: Slackware
Posts: 1,753

Rep: Reputation: Disabled
Quote:
Originally Posted by henca View Post
But will it do the right thing if called from doinst.sh while the old kernel is still running?
What? No. It's not a kernel package's job, or doinst.sh's job, to depend on external closed-source binaries.
The command is for /usr/local/bin or rc.local, where you'd conveniently check whether /lib/modules already contains the driver before reinstalling it.
If you really must package legacy binary drivers, then use the SBo script.
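That rc.local guard could be sketched like this. MODDIR is a scratch stand-in for /lib/modules/$(uname -r), and the installer invocation is only echoed, not run:

```shell
#!/bin/sh
# Sketch of the rc.local guard: reinstall the binary driver only when the
# running kernel's module tree lacks it. MODDIR is a mock directory here;
# on a real box it would be /lib/modules/$(uname -r).
MODDIR=$(mktemp -d)

have_driver() {
  # is there any nvidia.ko (possibly compressed) under the module tree?
  find "$MODDIR" -name 'nvidia.ko*' | grep -q .
}

if ! have_driver; then
  echo "would run: sh /root/NV*.run --ui=none -a -s"
fi

# after a (pretend) driver install, the module exists and the check passes
mkdir -p "$MODDIR/kernel/drivers/video"
touch "$MODDIR/kernel/drivers/video/nvidia.ko"
```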
 
Old 12-04-2022, 11:05 PM   #15
henca
Member
 
Registered: Aug 2007
Location: Linköping, Sweden
Distribution: Slackware
Posts: 963

Original Poster
Rep: Reputation: 650
Quote:
Originally Posted by elcore View Post
If you really must package legacy binary drivers, then use SBo script.
Yes, I will probably end up doing so, and then create a special package where doinst.sh checks whether any nVidia binary module package was installed before and, if so, installs the new nvidia module package for the new kernel, with the right legacy/latest version number for the nvidia card.
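That doinst.sh check could be sketched by looking in the package database for a previously installed driver variant. PKGDB stands in for /var/log/packages, and the nvidia-*-driver-* naming scheme is invented for illustration:

```shell
#!/bin/sh
# Sketch: derive which nvidia driver variant (legacy/latest) a machine
# uses from its installed-package database, then pick the matching
# package for the new kernel. PKGDB and all names are mock values.
PKGDB=$(mktemp -d)
touch "$PKGDB/nvidia-legacy390-driver-390.157-x86_64-1"

variant=""
for p in "$PKGDB"/nvidia-*-driver-*; do
  [ -e "$p" ] || continue          # glob may not match anything
  b=$(basename "$p")
  variant=${b%%-driver-*}          # e.g. "nvidia-legacy390"
done

if [ -n "$variant" ]; then
  echo "would install: ${variant}-driver package built for the new kernel"
else
  echo "no binary nvidia driver installed; nothing to do"
fi
```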

regards Henrik

Last edited by henca; 12-04-2022 at 11:08 PM.
 
  

