But if you can't trust a packager, should you be trusting them to write a doinst.sh or the SlackBuild itself? What about overwriting files from other packages? If the packager doesn't have a particular program installed, they may not even realize that their package is overwriting another one, and it is very possible that this could cause breakage. Point blank: if you're using someone else's work, whether from SBo or from a package repo, you're inherently placing some trust in that person (although you can still validate that trust by checking the SlackBuild, doinst.sh, and douninst.sh, and by running installpkg --warn to make sure files won't be overwritten). Quote:
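The installpkg --warn check mentioned above previews which files already on disk a package would overwrite, without installing anything. A minimal sketch, assuming a hypothetical package path (the guard makes it a no-op on systems without Slackware's pkgtools):

```shell
#!/bin/sh
# Placeholder package path; not a real package from this thread.
PKG="/tmp/example-1.0-x86_64-1_SBo.tgz"

# --warn lists files that would be overwritten, without installing.
if command -v installpkg >/dev/null 2>&1 && [ -f "$PKG" ]; then
  installpkg --warn "$PKG"
else
  echo "nothing to check on this system"
fi
```

Anything the command lists is a file collision worth investigating before installing for real.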
Luckily, none of my scripts on SBo will need the douninst.sh functionality (that I can see), and I imagine very few will actually need it, but it's nice to have the option rather than potentially leaving stale files/folders around the system that aren't tracked by the package manager.
FWIW, I just ran into a situation where I think a douninst.sh is fitting. The dkms package (on SBo) helps to maintain modules built externally from the kernel after a kernel upgrade. It creates its own build area in /var/lib/dkms.
Installing a kernel module to be built via dkms involves placing the required source files in /usr/src, which can be done via a build script that then runs dkms from doinst.sh to build and install the kernel driver. After running removepkg, the build tree for the kernel module in /var/lib/dkms, some symlinks, and the dkms-installed kernel modules (in /lib/modules/(kernelversion)/extra) will be left behind. Dkms installs leave the original kernel modules intact.

Ideally, one would run 'dkms remove <module/version> --all' before deleting all the source files with removepkg, because these sources contain the conf file dkms needs to complete the 'remove' instruction. As this step is uncommon, it is prone to being skipped. The leftovers can then be cleaned up with a douninst.sh that removes the bits the 'dkms remove' command would have taken care of.

Maybe, instead of delving into the (for the likes of me) quite abstract pros and cons of a douninst.sh, it would be interesting to see examples of how people use it or bypass the use of it. At the risk of being branded an 'untrusted' maintainer, the discussed dkms example is here. All comments to improve the script(s) are more than welcome. It is early days, but a douninst.sh template on SBo would be very helpful in streamlining its intended usage.
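A minimal douninst.sh along these lines might look like the following sketch. The module name and version are placeholders of mine, not taken from any real SlackBuild, and the dkms status output format varies between dkms versions, so the grep match is an assumption:

```shell
#!/bin/sh
# Hypothetical douninst.sh sketch for a dkms-built module.
# MODNAME/MODVER are placeholders.
MODNAME=example-module
MODVER=1.0

# If dkms is still available and knows the module, let it clean up properly.
if command -v dkms >/dev/null 2>&1; then
  if dkms status 2>/dev/null | grep -q "^$MODNAME/$MODVER"; then
    dkms remove "$MODNAME/$MODVER" --all
  fi
fi

# Remove the build area dkms created, in case it was left behind.
if [ -d "/var/lib/dkms/$MODNAME" ]; then
  rm -rf "/var/lib/dkms/$MODNAME"
fi
```

Run at removepkg time, this is a no-op on a system where the admin already did the 'dkms remove' by hand.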
@brobr I posted an example of how you can use it in this thread. Nobody really commented on that, either for or against, so that's either because nobody understood it, or perhaps, since my code generates a SlackBuild instead of actually *being* one, people didn't trust it. To remove a layer of abstraction I've now posted the *generated* SlackBuild.
initrd is just a simple Python module that I registered on PyPI. In order to register it, certain metadata must be supplied, and this metadata is queryable using the PyPI API. So rather than duplicate this metadata on SBo, sbgen.py simply queries the metadata and generates a SlackBuild from it.

The generated SlackBuild has been uploaded here: https://github.com/bifferos/slackbui.../master/initrd
The readme gives the generated file structure: https://github.com/bifferos/slackbui.../initrd/README

You can see the only file that matters is opt/afterpkg-python/initrd-0.1.tar.gz. This is the package source. The doinst.sh installs it using pip; the douninst.sh removes it using pip. Of course, the Python package must appear twice, once in source form and once in installed form, but Python packages are generally not that large, so for me it doesn't matter.

This allows upwards of 1000 Python packages on SBo to be generated. Where dependency management finds a requirement for a specific Python package, instead of looking on SBo we can generate the SlackBuild wrapper package and pip install it. According to the Slackware package management tools it is then listed as installed, while not limiting us to the PyPI versions that SBo maintainers have got around to pulling in.

All of this is highly experimental at the moment and nowhere near ready for people to use in anger; it's just something I'm working on. But the new douninst.sh functionality allows me to experiment with such concepts without having to hack removepkg all the time.
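For reference, the doinst.sh/douninst.sh pair described here could be as simple as the following sketch. The tarball path and package name come from the post; the existence guards are my addition so that each script is a safe no-op when pip or the file is missing:

```shell
# Sketch of the two scripts; guards are my addition.

# doinst.sh -- runs from the package root at installpkg time:
if command -v pip >/dev/null 2>&1 && [ -f opt/afterpkg-python/initrd-0.1.tar.gz ]; then
  pip install opt/afterpkg-python/initrd-0.1.tar.gz
fi

# douninst.sh -- runs at removepkg time:
if command -v pip >/dev/null 2>&1; then
  pip uninstall -y initrd >/dev/null 2>&1 || true
fi
```

The `|| true` keeps removepkg from tripping over a module that was already uninstalled some other way.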
Quote:
Plus, it seems your douninst.sh assumes the running kernel will be the one where the modules were installed. That might not be the case if the admin first upgrades the kernel to check that it globally works and only then upgrades the modules; and when that is the case, how does the doinst.sh know the targeted new kernel (open question, I don't know dkms)? With my model, the script would get the names of the old/new kernels and could determine where to remove and add things.
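For what it's worth, dkms itself can target a kernel other than the running one via its -k option, so a script along the lines NonNonBa describes could look like this sketch (the module name and the TARGET_KERNEL environment variable are assumptions of mine, not part of any posted script):

```shell
#!/bin/sh
# Hypothetical: install for an explicitly named kernel instead of blindly
# assuming $(uname -r). example-module/1.0 is a placeholder.
TARGET_KERNEL="${TARGET_KERNEL:-$(uname -r)}"

if command -v dkms >/dev/null 2>&1; then
  dkms install example-module/1.0 -k "$TARGET_KERNEL"
fi
```

How the script would learn the intended kernel version (environment variable, argument, or pkgtools support) is exactly the open question raised above.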
For users who don't want to leave their system in the package maintainer's hands, they can simply create an alias like the following: Code:
alias removepkg='removepkg --skip-douninst'
To bypass the alias and run the full removal, call it by its full path: Code:
/sbin/removepkg
Thanks for the example. And an interesting one (because it deals with python packages):
Instead of 'shipping' built Python packages from one computer to another, with afterpkg the admin instructs/distributes pip installs of those (please correct me if I still get this wrong). This example makes it much clearer (at least for me) how afterpkg is intended to work. Such an approach makes management of complex packages like Leo much easier. Also, the Python bloat on SBo can be minimized to only those packages that sit on the interface between Python and the OS.

Leo is an interesting use case in this respect: very fast development and continually changing requirements (dependencies), but maybe (too quickly) linked to novel versions of non-Python software such as Qt5 (via PyQt). And it is at these cross-sections that one might get limited in which versions of a program like Leo can be used on Slackware (so there a SlackBuild as published on SBo still comes in useful), or the other way round: a Python module (like pysam) that is linked against a native library (htslib) breaks when the latter is upgraded and the Python module does not follow in its tracks. Still, such info would be trackable in the 'requires.txt' files any Python module/program comes with. So these problems are not immediately related to the way afterpkg uses doinst.sh/douninst.sh. (The relevant bit is whether, and how easily, one can control which version afterpkg takes as a starting point.) Ideally one would be able to run 'afterpkg python.app' and get the stuff installed by pkgtools.

Analogous to sbopkg, maybe consider putting the downloaded source files in /var/cache/afterpkg instead of /opt (/var gets space for itself on my box so that overflowing kernel messages do not obstruct other stuff). Also, after installation they could be removed, couldn't they (I won't have space on my laptop for 1000 packages ;-))?
Also note, as mentioned a while back on the SlackBuilds.users list, that any python package distributed via pypi can be downloaded using this (logical) format: Quote:
Yes, that's a fairly good description of where afterpkg is trying to go. However, it's better if you consider it a tool to assist me in experimenting with SBo rather than some kind of finished product. sbgen.py, which generates the Python packages, isn't fully integrated into afterpkg yet, and I don't know if it should be. For instance, do we need to actually create real Slackware packages, or should we just fool dependency management systems into thinking they're present with some kind of hint files? There are a lot of Python packages, but not so many dependency management tools, after all.
Don't be too worried about where I'm putting downloaded source files. This is so experimental, and needs so many edge cases ironed out, that it may be impossible to complete in any meaningful way. For instance, it's better to use Python wheels, but they don't always exist for our platform, or they may not have been created at all for some older packages. initrd (my example) works and builds; other packages do not.
@bifferos Well, more fantasizing at this end than being worried ;-).
One could imagine that the whole Python tree in SBo would just be a list of 'safe' package names for Python modules that can be processed by afterpkg. Possibly, a lot of other Python-only packages (like some in the 'academic' tree) could be treated as such as well. I imagine that such a combo (of afterpkg, sbopkg, pkgtools) would enormously reduce the workload of people maintaining Python packages, or of those who use them but need uncovered versions (absent or more up-to-date). Management of R packages, not of R itself, already goes completely through the R interface and not via SlackBuilds. I wonder whether it's future-proof to try to keep up with the ever-increasing complexity of the Python jungle through SlackBuilds as we know them. Afterpkg seems a nice gel that could keep either style of package management a valid option.
Quote:
Of course, I'll put a big fat warning in the README that "removepkg" will be doing extra work to fully remove system76-io-dkms from the system.
Hi drumz, thanks for the feedback. When cobbling mine together I had a look at your SlackBuild to see how others had used dkms; well, there was only one... possibly because of what NonNonBa put forward. Maybe at some point, after I have not experienced any glitches, I will try to get my script accepted alongside yours...
I did have a dkms build working from within a SlackBuild on an older version of the amdgpu-pro driver without needing to do it in doinst.sh.
https://github.com/bassmadrigal/slac...ver.SlackBuild Code:
if [ "${BUILDDKMS:-no}" == "yes" ]; then
  # ... dkms build steps elided in this excerpt; see the linked SlackBuild ...
fi
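The `${BUILDDKMS:-no}` expansion defaults the option to "no" unless the caller sets it, so the dkms path is strictly opt-in. A small illustration of the idiom:

```shell
# ${VAR:-default} substitutes "default" when VAR is unset or empty.
unset BUILDDKMS
echo "${BUILDDKMS:-no}"    # prints "no"

BUILDDKMS=yes
echo "${BUILDDKMS:-no}"    # prints "yes"
```

So invoking the script as something like `BUILDDKMS=yes ./amdgpu-pro.SlackBuild` (the script name is inferred from the truncated URL above) enables the dkms branch.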
I am going to test that, thanks !-) One thing I do not completely understand: Quote: