LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Slackware (https://www.linuxquestions.org/questions/slackware-14/)
-   -   How can removepkg be triggered to remove stuff placed by a doinst.sh? (https://www.linuxquestions.org/questions/slackware-14/how-can-removepkg-be-triggered-to-remove-stuff-placed-by-a-doinst-sh-4175672073/)

NonNonBa 04-06-2020 09:20 AM

Quote:

Originally Posted by bifferos (Post 6108396)
'required' and 'available' are two different concepts. Just because something is required doesn't mean it has to be available. I require it. I don't really understand your argument.

When a piece of software requires another component, it means it won't build without it. When a mechanism is required by a package, it means the system won't work well without it.

Quote:

Originally Posted by bifferos (Post 6108396)
What I don't understand is why you seem to be telling me I can't/shouldn't use any uninstall steps, and how it creates some kind of problem for either you or the wider Slackware community. It doesn't, this is simply my choice for my own systems.

I'm not telling you what you should or shouldn't do; I even suggested how you could do it in a way that wouldn't alter removepkg. I'm not against a mechanism that automates tasks after a removal, I just think it has little or nothing to do with the packager's side.

Quote:

Originally Posted by bifferos (Post 6108396)
Then why would you use his packages? When you download the package, grep the contents for douninst.sh and if you find it choose to refuse on the basis that the maintainer is in your view incompetent :-).

It has nothing to do with the competency of the packager. It's just that you can ask a packager to provide sane install defaults, not sane uninstall defaults.

Quote:

Originally Posted by bifferos (Post 6108396)
If my douninst.sh script contains a 'pip uninstall XXX' line, then that also has zero to do with removepkg. Removepkg has no knowledge of pip installed packages at all. I don't really get your point.

And how do you know no other package or local script needs XXX if you are not the admin of the machine? You can't. That is precisely the whole point.

bassmadrigal 04-06-2020 11:03 AM

Quote:

Originally Posted by NonNonBa (Post 6108373)
You can't trust the packager, that's my point, because a packager just can't know or anticipate what the other packagers do or will do, as there is no way to guess what a doinst generates (precisely the targets of the undoinst). More generally, except in some rare cases, a packager can't really know what an admin will want to keep or not, and is then always liable to fight them.

With Pat adding the ability to skip running the douninst.sh and your ability to alias removepkg to contain --skip-douninst, it seems to be the best of both worlds. Many package managers support some post uninstall script, so it is nice for Slackware to add that functionality and leave it optional.

But if you can't trust a packager, should you be trusting them to write a doinst.sh or the SlackBuild itself? What about overwriting files from other packages? If the packager doesn't have that particular program installed, they may not even realize that their package is overwriting another one. And it is very possible that it could cause breakage. Point blank, if you're using someone else's work, from SBo, from a package repo, you're inherently placing some trust in that person (although you can still validate that trust by checking the SlackBuild, doinst.sh, douninst.sh, and installpkg --warn to make sure files won't be overwritten).
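As an aside, that overwrite check is a single command; a sketch (the package filename below is a made-up example):

```shell
# List which existing files and directories this package would
# overwrite, WITHOUT actually installing anything.
# (The package path is just an illustrative example.)
installpkg --warn /tmp/somepkg-1.0-x86_64-1_SBo.tgz
```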

Quote:

Originally Posted by NonNonBa (Post 6108373)
Sure, but you see? You're then partially re-implementing the core functionality of removepkg in a file run from removepkg.

But if that file was placed there after the creation of the package itself (either by the doinst.sh or during execution of that program), then it would be impossible to use removepkg's own functionality to ensure that a removed file doesn't also reside in another package.

Luckily, none of my scripts on SBo will need the douninst.sh functionality (that I can see), and I imagine very few will actually need it, but it's nice to have the option rather than potentially leaving stale files/folders around the system that aren't tracked by the package manager.

brobr 04-07-2020 06:25 AM

FWIW, I just ran into a situation where, I think, a douninst.sh is fitting. The dkms package (on SBo) helps to maintain modules built externally from the kernel after a kernel upgrade. It creates its own build area in /var/lib/dkms.

Installing a kernel module to be built via dkms involves placing the required source files in /usr/src, which can be done via a build script that then runs dkms from doinst.sh to build and install the kernel driver. After running removepkg, the build tree for the kernel module in /var/lib/dkms, some symlinks, and the dkms-installed kernel modules will be left behind (in /lib/modules/(kernelversion)/extra). Dkms installs leave the original kernel modules intact.

Ideally, one would run 'dkms remove <module/version> --all' before deleting all the source files with removepkg, because these sources contain the conf file dkms needs to complete the 'remove' instruction. As this step is uncommon, it is easy to forget, so there is a risk that it will be skipped. This can be repaired with a douninst.sh that cleans up the bits that would have been removed by the 'dkms remove' command.

Maybe, instead of delving into the (for the likes of me) quite abstract pros and cons of a douninst.sh, it would be interesting to see examples of how people use it or bypass the use of it.

With the risk of being branded an 'untrusted' maintainer, the discussed dkms example is here. All comments to improve the script(s) are more than welcome.

It is early days, but a douninst.sh template on SBo would be very helpful in streamlining intended usage.

bifferos 04-07-2020 08:01 AM

@brobr I posted an example of how you can use it in this thread. Nobody really commented on that either for or against, so that's either because nobody understood it, or perhaps, because my code generates a slackbuild instead of actually *being* a slackbuild, people didn't trust it. To remove a layer of abstraction I've now posted the *generated* slackbuild.

initrd is just a simple python module that I registered on pypi. In order to register it certain metadata must be supplied, and this metadata is queryable using the pypi API. So rather than duplicate this metadata on SBo, sbgen.py simply queries the metadata and generates a slackbuild from it. The generated slackbuild has been uploaded here:
https://github.com/bifferos/slackbui.../master/initrd

The readme gives the generated file structure:
https://github.com/bifferos/slackbui.../initrd/README

You can see the only file that matters is:

opt/afterpkg-python/initrd-0.1.tar.gz

This is the package source. The doinst.sh installs it using pip. The douninst.sh removes it using pip. Of course, the Python package must appear twice, once in source form, and also in installed form, but Python packages are not generally that large. For me it doesn't matter.
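As a sketch of what that pair of scripts might look like (hedged: the path and package name are taken from the example above; the actual generated scripts may differ):

```shell
# doinst.sh (sketch): pkgtools runs this from / right after the
# package's files are copied in, so the relative path resolves to
# /opt/afterpkg-python/initrd-0.1.tar.gz
pip install opt/afterpkg-python/initrd-0.1.tar.gz

# douninst.sh (sketch): runs after removepkg has deleted the
# package's tracked files; only the pip-installed copy remains,
# so pip removes it.
pip uninstall -y initrd
```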

This allows upwards of 1000 python packages on SBo to be generated. Where dependency management finds a requirement for a specific Python package, instead of looking on SBo, we can generate the slackbuild wrapper package and pip install it; according to the Slackware package management tools it's then listed as being there, whilst not limiting us to the pypi versions that SBo maintainers have got around to pulling in.

All this is highly experimental at the moment and nowhere near ready for people to use in anger, it's just something I'm working on, but the new douninst.sh functionality allows me to experiment with such concepts without having to hack removepkg all the time.

NonNonBa 04-07-2020 08:18 AM

Quote:

Originally Posted by bassmadrigal (Post 6108422)
But if you can't trust a packager, should you be trusting them to write a doinst.sh or the SlackBuild itself?

Not the same thing. You can expect a packager to provide what is required/advised to run a piece of software according to upstream, but there's no such thing as a "sane removal". Breaking things while following the INSTALL instructions is not the same as breaking them because you're walking in the dark. installpkg already tracks everything that doesn't need to be tied to your particular setup; removepkg shouldn't cross this line, which is the admin's territory.

Quote:

Originally Posted by bassmadrigal (Post 6108422)
Luckily, none of my scripts on SBo will need the douninst.sh functionality (that I can see), and I imagine very few will actually need it.

You partially meet my point. I think on the packager's side this feature is either useless or dangerous, while on the admin's side it could be useful, safe, and far more powerful.

Quote:

Originally Posted by brobr (Post 6108696)
Maybe, instead of delving into the -for the likes like me- quite abstract pro-and cons of a douninst.sh, it would be interesting to see examples of how people use it or bypass the use of it.

You have to hack the pkgtools if you want to test my solution. I think your package does too many things for a source package: the doinst shouldn't try to build anything. With my suggestion, the admin could simply automate the removal/rebuild of the modules each time the kernel is upgraded.

Plus, it seems your douninst assumes the running kernel will be the one where the modules were installed. That may not be the case if the admin first upgrades the kernel to check that it globally works and only then upgrades the modules; and when that happens, how does the doinst know the targeted new kernel (open question, I don't know dkms)? With my model, the script would get the names of the old/new kernels and could determine where to remove and add things.

bassmadrigal 04-07-2020 10:30 AM

Quote:

Originally Posted by NonNonBa (Post 6108724)
Not the same thing. You can expect a packager to provide what is required/advised to run a piece of software according to upstream, but there's no such thing as a "sane removal". Breaking things while following the INSTALL instructions is not the same as breaking them because you're walking in the dark. installpkg already tracks everything that doesn't need to be tied to your particular setup; removepkg shouldn't cross this line, which is the admin's territory.

You partially meet my point. I think on the packager's side this feature is either useless or dangerous, while on the admin's side it could be useful, safe, and far more powerful.

I'm not going to continue this debate. It has been determined by a lot of different distros' package managers that a post-uninstall script is a good idea. Slackware has now implemented it, and it is up to the package maintainers to use it in a sane manner. If someone is able to convince Pat to remove the douninst support, I doubt it will bother me, but it doesn't bother me that it's included either. I imagine that as SlackBuilds are developed that start containing douninst scripts, the SBo admins will be making sure they are unlikely to cause harm to the system.

For users who don't want to leave their system in the package maintainer's hands, they can simply create an alias like the following:

Code:

alias removepkg='removepkg --skip-douninst'
Then if they need to override that sometime, they'd just call removepkg with the absolute path:

Code:

/sbin/removepkg

brobr 04-07-2020 11:55 AM

Thanks for the example. And an interesting one (because it deals with python packages):
Quote:

Originally Posted by bifferos (Post 6108714)
@brobr ..

The doinst.sh installs it using pip. The douninst.sh removes it using pip. ..

.. instead of looking on SBo, we can generate the slackbuild wrapper package and pip install it, and according to the Slackware package management tools it's listed as being there, whilst not limiting us to the pypi versions that SBo maintainers have got around to pulling in.

Thus, afterpkg generates "footprints" of pip-installed modules that can be picked up by Slackware's pkgtools, as if these modules had been installed using SlackBuilds. This makes a lot of sense, not only because of the hassle of dependency tracking (often one needs to update the SBo scripts to get a correct version or, at the moment, change the python version manually), but more so because (new) developers of python modules often do not create a setup.py script that caters for a 'python setup.py install' route working in the traditional way outside pip (as is needed to create a distributable Slackware package), mainly because they are not aware of it.

Instead of 'shipping' built python packages from one computer to another, with afterpkg the admin instructs/distributes pip installs of those (please correct me if I still get this wrong).

This example makes it much clearer (at least for me) to understand how afterpkg is intended to work.

Such an approach makes management of complex packages like Leo much easier. Also, the python bloat on SBo could be minimized to only those packages that sit at the interface between python and the OS. Leo is an interesting use case in this respect: very fast development, continually changing requirements (dependencies), but maybe (too quickly) linked to novel versions of non-python ware such as QT5 (via PyQt). It is at these cross-sections that one might be limited in which versions of a program like Leo can be used on Slackware (so there a SlackBuild as published on SBo still comes in useful), or the other way round: a python module (like pysam) that links to a native library (htslib) breaks when the latter is upgraded and the python module does not follow in its tracks. Still, such info would be trackable in the 'requires.txt' files any python module/program comes with. So these problems are not directly related to the way afterpkg uses doinst.sh/douninst.sh. (The relevant bit is whether/how easily one can control which version afterpkg takes as a starting point.)

Ideally one would be able to run 'afterpkg python.app' and get the stuff installed by pkgtools.

Analogous to sbopkg, maybe consider putting the downloaded source files in /var/cache/afterpkg instead of /opt (/var gets space for itself on my box, so that overflowing kernel messages don't obstruct other stuff). Also, after installation they could be removed, couldn't they? (I won't have space on my laptop for 1000 packages ;-))

Also note, as mentioned a while back on the SlackBuilds.users list, that any python package distributed via pypi can be downloaded using this (logical) format:
Quote:

https://pypi.python.org/packages/source/<package first letter>/<package name>/<package name-version.extension>
So, for python-defusedxml it would look like:
Code:

"https://pypi.python.org/packages/source/d/defusedxml/defusedxml-0.6.0.tar.gz"
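That URL scheme is mechanical enough to script; a small shell sketch (the variable names follow SlackBuild conventions, and the defusedxml values are from the example above):

```shell
#!/bin/sh
# Build the legacy PyPI source-download URL from a package name and
# version, following the <first letter>/<name>/<name>-<version>.<ext>
# pattern quoted above.
PRGNAM=defusedxml
VERSION=0.6.0
FIRST=$(printf '%.1s' "$PRGNAM")   # first letter of the package name
URL="https://pypi.python.org/packages/source/$FIRST/$PRGNAM/$PRGNAM-$VERSION.tar.gz"
echo "$URL"
```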


brobr 04-07-2020 12:04 PM

Quote:

Originally Posted by NonNonBa (Post 6108724)
I think your package does too many things for a source package: the doinst shouldn't try to build anything. With my suggestion, the admin could simply automate the removal/rebuild of the modules each time the kernel is upgraded.

Plus, it seems your douninst assumes the running kernel will be the one where the modules were installed. That may not be the case if the admin first upgrades the kernel to check that it globally works and only then upgrades the modules; and when that happens, how does the doinst know the targeted new kernel (open question, I don't know dkms)? With my model, the script would get the names of the old/new kernels and could determine where to remove and add things.

Hmm, yes, that makes sense. With dkms, you only change the module when there is an upgrade of the module itself. When you upgrade the kernel, that module gets automatically installed into the new one; that is how dkms is set up. So the updating of the module is decoupled from that of the kernel. If you miss uninstalling from an old kernel, how important is that? (Quite often, with something like VirtualBox, modules stay behind, so you have to remove old kernel modules manually.) For me, my SlackBuild calling dkms via doinst.sh is the kind of script you talk about...

bifferos 04-07-2020 12:28 PM

Yes, that's a fairly good description of where Afterpkg is trying to go. However, it's better if you consider it a tool to assist me in experimenting with SBo rather than some kind of finished product. sbgen.py, which generates the Python packages, isn't fully integrated into Afterpkg yet, and I don't know if it should be. For instance, do we need to actually create real Slackware packages, or should we just fool dependency management systems into thinking they're present with some kind of hint files? There are a lot of Python packages, and not so many dependency management tools, after all.

Don't be too worried about where I'm putting downloaded source files. This is so experimental, and needs so many edge cases ironed out, that it may be impossible to complete in any meaningful way. For instance, it's better to use python wheels, but they don't always exist for our platform, or they may not have been created at all for some older packages. initrd (my example) works and builds; other packages do not.

brobr 04-07-2020 01:41 PM

@bifferos Well, more fantasizing at this end than being worried ;-).

Quote:

do we need to actually create real Slackware packages, or should we just be fooling dependency management systems
Ideally, would this not be a kind of choice/option that could be built in: say, -p for a pip install only and -s for a package.tar.gz as the end product, i.e. an SBo-like archive for the generated SlackBuild? This option could then also mean that the resulting tar.gz is passed on to, say, sbopkg for generating (and installing) the _SBo.tgz. When bypassing the pip route, the generation of doinst.sh/douninst.sh could be omitted. But maybe this latter route will become error-prone in the future when pip installs take dominance, as mentioned above.

One could imagine that the whole python tree in SBo would just be a list of 'safe' package names for python modules that can be processed by afterpkg. Possibly, a lot of other python-only packages (like some in the 'academic' tree) could be treated as such as well.

I imagine that such a combo (of afterpkg, sbopkg, pkgtools) would enormously reduce the workload of people maintaining python packages, or of those who use them but need uncovered versions (absent or more up-to-date ones).

Management of R packages (not of R itself) already goes completely through the R interface and not via SlackBuilds. I wonder whether it's future-proof to try to keep up with the ever-increasing complexity of the python jungle through SlackBuilds as we know them. Afterpkg seems a nice glue that could keep either style of package management a valid option.

drumz 04-07-2020 02:34 PM

Quote:

Originally Posted by brobr (Post 6108696)
FWIW, I just ran into a situation where, I think, a douninst.sh is fitting. The dkms package (on SBo) helps to maintain modules built externally from the kernel after a kernel upgrade. It creates its own build area in /var/lib/dkms.

Installing a kernel module to be built via dkms involves placing the required source files in /usr/src, which can be done via a build script that then runs dkms from doinst.sh to build and install the kernel driver. After running removepkg, the build tree for the kernel module in /var/lib/dkms, some symlinks, and the dkms-installed kernel modules will be left behind (in /lib/modules/(kernelversion)/extra). Dkms installs leave the original kernel modules intact.

Ideally, one would run 'dkms remove <module/version> --all' before deleting all the source files with removepkg, because these sources contain the conf file dkms needs to complete the 'remove' instruction. As this step is uncommon, it is easy to forget, so there is a risk that it will be skipped. This can be repaired with a douninst.sh that cleans up the bits that would have been removed by the 'dkms remove' command.

Yes, I agree. When I uploaded system76-io-dkms (https://slackbuilds.org/repository/1...tem76-io-dkms/) to SlackBuilds I felt "dirty" for having the "dkms install system76-io/$VER" command in doinst.sh but having no way to remove the module when doing "removepkg". I will definitely be updating this SlackBuild for 15.0 to make use of douninst.sh.

Of course, I'll put a big fat warning in the README that "removepkg" will be doing extra work to fully remove system76-io-dkms from the system.
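For what it's worth, a minimal sketch of what such a douninst.sh might contain (hypothetical and untested; the module name comes from the post above, and the version would have to be baked in by the SlackBuild when it writes the script):

```shell
# douninst.sh (sketch): remove every built copy of the module from all
# kernels. dkms keeps its own records under /var/lib/dkms, though
# whether this still works after removepkg has deleted the package's
# source files is an open question.
if command -v dkms >/dev/null 2>&1; then
  dkms remove system76-io/1.0.1 --all
fi
```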

brobr 04-07-2020 05:13 PM

Hi drumz, thanks for the feedback. When cobbling mine together, I had a look at your SlackBuild to see how others had used dkms; well, there was only one... possibly because of what NonNonBa put forward. Maybe at some point, after I haven't experienced any glitches, I will try to get my script accepted alongside yours...

bassmadrigal 04-07-2020 08:07 PM

I did have a dkms build working from within a SlackBuild for an older version of the amdgpu-pro driver, without needing to do it in doinst.sh.

https://github.com/bassmadrigal/slac...ver.SlackBuild

Code:

if [ "${BUILDDKMS:-no}" == "yes" ]; then

  # Patch the crap outta the source so we can build the module
  for i in $CWD/patches/*.patch; do patch -p1 < $i; done

  # Prevent dkms from trying to rebuild an initrd
  sed -i 's/REMAKE_INITRD="yes"/REMAKE_INITRD="no"/' $PKG/usr/src/${SRCNAM}-${SRCVER}/dkms.conf

  # Add the kernel module to the correct location
  sed -i 's|/updates|/kernel/drivers/gpu/drm/amd/amdgpu|' $PKG/usr/src/${SRCNAM}-${SRCVER}/dkms.conf

  # Set up the source for dkms building
  mkdir -p $PKG/var/lib/dkms/${SRCNAM}/${SRCVER}/source/
  ln -s /var/lib/dkms/${SRCNAM}/${SRCVER}/source usr/src/${SRCNAM}-${SRCVER}

  # Check if dkms is installed
  if [ ! -x /usr/sbin/dkms ]; then
    echo "Please install dkms from SBo"
    exit 1
  fi

  # Let's build it
  mkdir -p $PKG/lib/modules/`uname -r`/kernel/drivers/gpu/drm/amd/amdgpu
  dkms install \
    -m ${SRCNAM} \
    -v ${SRCVER} \
    --sourcetree $PKG/usr/src \
    --dkmstree $PKG/var/lib/dkms/${SRCNAM}/${SRCVER} \
    --installtree lib/modules
fi

But if you add a new kernel, those modules would never be tracked by pkgtools and wouldn't be removed when the package was removed. douninst.sh would allow you to use dkms to remove all versions of that module.

NonNonBa 04-08-2020 04:03 AM

Quote:

Originally Posted by bassmadrigal (Post 6108760)
It has been determined by a lot of different distros' package managers that a post-uninstall script is a good idea.

And just as many think the same about tracking dependencies.

Quote:

Originally Posted by bassmadrigal (Post 6108760)
For users who don't want to leave their system in the package maintainer's hands, they can simply create an alias like the following:

Code:

alias removepkg='removepkg --skip-douninst'

This option should then be added to upgradepkg and to any other tool that calls the pkgtools.

Quote:

Originally Posted by bassmadrigal (Post 6108760)
I'm not going to continue this debate.

Yep, I think I've also far enough exposed my views. ;)

brobr 04-08-2020 05:47 AM

Quote:

Originally Posted by bassmadrigal (Post 6108943)
I did have a dkms build working from within a SlackBuild
...
Code:

if [ "${BUILDDKMS:-no}" == "yes" ]; then

 ...

  # Set up the source for dkms building
  mkdir -p $PKG/var/lib/dkms/${SRCNAM}/${SRCVER}/source/
  ln -s /var/lib/dkms/${SRCNAM}/${SRCVER}/source usr/src/${SRCNAM}-${SRCVER}

  # Check if dkms is installed
  if [ ! -x /usr/sbin/dkms ]; then
    echo "Please install dkms from SBo"
    exit 1
  fi

  # Let's build it
  mkdir -p $PKG/lib/modules/`uname -r`/kernel/drivers/gpu/drm/amd/amdgpu
  dkms install \
    -m ${SRCNAM} \
    -v ${SRCVER} \
    --sourcetree $PKG/usr/src \
    --dkmstree $PKG/var/lib/dkms/${SRCNAM}/${SRCVER} \
    --installtree lib/modules
fi

..

Ha, so you can call dkms during the package build so that it installs into the package before it is installed via pkgtools?
I am going to test that, thanks !-)

One thing I do not completely understand:
Quote:

douninst.sh would allow you to use dkms to remove all versions of that module.
How would this work when, as is the case for the digimend drivers, the dkms.conf is removed along with the source by pkgtools before the douninst.sh is run?


All times are GMT -5.