What If... Slack needs Systemd (Slackbuilds)
I am currently in the process of converting rc.d/rc.xxx files into service files without rebuilding the source.
For example, the service file for rc.acpid would look like this:
Code:
#
# This service file starts + stops the rc.acpid daemon
#
[Unit]
Description=ACPI Event Daemon
After=syslog.target
[Service]
Type=forking
ExecStart=/etc/rc.d/rc.acpid start
ExecStop=/etc/rc.d/rc.acpid stop
[Install]
WantedBy=multi-user.target
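To try one of these units out on a test box, the usual systemd commands apply (assuming the snippet above was saved as /etc/systemd/system/acpid.service, which is my own naming, not a finished SlackBuild):
Code:
# Make systemd pick up the new unit, enable it for multi-user.target,
# start it now via the rc script, and check that the daemon was adopted.
systemctl daemon-reload
systemctl enable acpid.service
systemctl start acpid.service
systemctl status acpid.service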
Apart from figuring out the right Type= and the right startup order,
I have been thinking about how to make this easy for users to set up. I see three options:
1. We enable all of these services; if the corresponding file in /etc/rc.d/ is not set +x, it would not start anything anyway.
2. We create a script that checks which rc.d scripts are set +x, and that script enables the corresponding services (see the sketch below).
3. Instead of starting the service by calling the script in /etc/rc.d/, the service file starts the program directly (this is only a nice solution for scripts that just need to run once, ldconfig for example).
All of these options have downsides:
1. You get some extra errors during startup/runtime saying that a service could not be started.
2. If, after the initial setup, you want to activate a service at startup, you will need to set the script in rc.d/ to +x and also enable the service through systemd.
3. If you do this for daemons, you will get unexpected behaviour.
- A work in progress would be to have SlackBuilds available for these programs, so that you can replace the script in rc.d.
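As a rough sketch of what the option 2 script could look like (untested, and it assumes every rc.something script has a matching something.service unit, which will not always hold):
Code:
#!/bin/sh
# Sketch: enable a systemd unit for every executable /etc/rc.d/rc.* script.
# Assumes a matching unit file (acpid.service for rc.acpid, etc.) already
# exists in /etc/systemd/system; scripts without one are just reported.
for script in /etc/rc.d/rc.*; do
  [ -f "$script" ] && [ -x "$script" ] || continue   # skip non-executable scripts
  name=${script##*/rc.}                              # rc.acpid -> acpid
  if [ -f "/etc/systemd/system/${name}.service" ]; then
    systemctl enable "${name}.service"
  else
    echo "no unit file for $script, skipping"
  fi
done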
Personally I opt for option 1, and to create the SlackBuilds for these services, so that if people want to build the program against their systemd setup, it would be a cleaner setup.
This will also make it easier to see which packages in the stock Slackware tree would need to be rebuilt against systemd.
It's a nice reference for PV (in case he needs it).
Wrapper scripts do work; however, as said in the post above, it would not be a clean setup.
There are a bunch of programs started in rc.M, for example
update-mime-database and fc-cache; we could just create services for these using Type=oneshot.
In the current setup these are not being run,
so the boot is extremely fast.
Some programs don't need to be rebuilt but can just live with a service file, like the ones described above (the run-once programs from rc.M or rc.4).
Actually, this might make the following approach better:
temporarily create Type=oneshot service files for those services (fontconfig, etc.), as sketched below,
and make SlackBuilds available for these programs later.
For all other scripts in /etc/rc.d/, make SlackBuilds available.
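A minimal sketch of what such a one-shot unit could look like for the font cache (the file name, the After= line and RemainAfterExit= are my guesses, untested):
Code:
# Run-once replacement for the fc-cache call that rc.M normally does.
cat > /etc/systemd/system/fc-cache.service << 'EOF'
[Unit]
Description=Rebuild the fontconfig cache
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/bin/fc-cache -f
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
systemctl enable fc-cache.service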
Yes, the database and font cache update scripts actually do slow down the boot considerably. I imported these into LFS and they more than doubled my boot time: without these scripts it was about 6 seconds, now it's about 15 seconds on a 2.5 GHz dual-core system.
However, these scripts, and others like them in rc.M, are required to set up profiling and to ensure that all the databases, caches, and pathways are set, configured, and properly assigned.
I have no idea how they'll behave when set up and launched in parallel, but they should be launched after the last service daemon is loaded and before rc.local is run.
They're not usually required on every reboot, and to be honest it's pretty much overkill to have them there.
Ideally they should only be called when something adds files related to those functions,
i.e. the doinst.sh script in a package should rebuild the appropriate indexes, though this does leave us with a hole during uninstall, as there is no uninstall function at this time.
Putting it in the doinst.sh script would be rather inefficient in the event of multiple packages all updating relevant stuff. I believe openSUSE does these things when the package manager exits. Perhaps the doinst.sh could remove a .fast_font-cache file that the rc.M script can check for, similar to .fastboot, with a similar approach for the mime and gtk stuff. It would still leave us with the hole between package activity and reboot, though.
I commented out the mime/pango stuff in rc.M and moved it into a script in /etc/cron.weekly. I have another script in cron.weekly to clean up /tmp and /var/tmp, because it doesn't make sense to do that on every reboot for desktops that have uptimes of 3+ months.
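Something along these lines, for reference (a sketch, not the exact lines from rc.M; the pango/gtk parts are left out here):
Code:
#!/bin/sh
# /etc/cron.weekly/update-caches (sketch): run the cache/database updates
# weekly instead of on every boot.
if [ -x /usr/bin/update-mime-database ]; then
  /usr/bin/update-mime-database /usr/share/mime 1> /dev/null 2> /dev/null
fi
if [ -x /usr/bin/fc-cache ]; then
  /usr/bin/fc-cache -f 1> /dev/null 2> /dev/null
fi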
The basic requirements for systemd have been updated; for the next release these are the requirements:
Code:
glibc >= 2.14 | (in Slackware L series )
libcap | (in Slackware L series )
libblkid >= 2.20 (from util-linux) (optional) | (in Slackware A series )
libkmod >= 15 (optional) | (in Slackware A series )
PAM >= 1.1.2 (optional)
libcryptsetup (optional) | (in Slackware A series )
libaudit (optional)
libacl (optional) | (in Slackware A series )
libattr (optional) | (in Slackware A series )
libselinux (optional)
liblzma (optional) | (in Slackware A series )
tcpwrappers (optional) | (in Slackware N series )
libgcrypt (optional) | (in Slackware N series )
libqrencode (optional)
libmicrohttpd (optional)
libpython (optional) | (in Slackware D series )
make, gcc, and similar tools | (in Slackware D series )
During runtime, you need the following additional
dependencies:
util-linux >= v2.19 (requires fsck -l, agetty -s) | (in Slackware A series )
sulogin (from util-linux >= 2.22 or sysvinit-tools, optional but recommended) | (in Slackware A series (shadow) )
dracut (optional)
PolicyKit (optional) | (in Slackware L series )
As seen above, Slackware ships the three hard dependencies as standard: glibc, util-linux and libcap.
It also has almost all of the optional dependencies, except for PAM, SELinux, libqrencode, libaudit, libmicrohttpd and dracut.
Consider that PAM, SELinux, libaudit and libqrencode are aimed at high-security setups,
libmicrohttpd is for remote system management,
and dracut is an initramfs tool.
With this information at hand, it means (IMO) that it is possible to compile systemd with almost all features on a stock Slackware system.
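For what it's worth, the build would then boil down to something like this; the --disable flags below are from memory of systemd's autotools build of that era, so check ./configure --help in the actual tarball before relying on them:
Code:
# Sketch: configure systemd without the optional deps stock Slackware lacks.
./configure \
  --prefix=/usr \
  --sysconfdir=/etc \
  --disable-pam \
  --disable-selinux \
  --disable-audit \
  --disable-qrencode \
  --disable-microhttpd
make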
That still doesn't make it work properly, meaning as reliable, stable, auditable, and repairable as what Slackware already has.
Even on Fedora things still don't work right: shutdowns still hang, databases don't start properly...
And if you manage to force things to barely work, you are fighting the very premise of systemd, which is parallel processing for startup and shutdown...
It works, but only when startup/shutdown happen in very simple environments: no databases, no web servers, few networks (most reliable when there is only one), no complex mounts of devices... and no complex services.
Interesting project you got going here dude. Honestly I have no interest in doing any of this myself right now, but I've been enjoying reading what you're doing.
That exact issue, fighting systemd's own parallel startup, has been the suicide pill of systemd for a long time; even on Arch it still rears its ugly head.
The problem is that when you create systemd loading scripts to launch the service daemons, you have to create a tree-based dependency script, which systemd only supports through sysvinit-like wrapper scripts, not natively. This actually hurts systemd's overall benefits, because nearly half the scripts used have to be rewritten to work with tree-based dependencies. The native method is to use a hybridized load-and-check system in which a service daemon is only allowed to start once a prerequisite service daemon has started, but this creates a problem, as it often does not work as intended, forcing a fallback to sysvinit-like tree-based scripts for dependency loading.
This is where it becomes an over-glorified mess to deal with compared to sysvinit/bsdinit: scripts for service daemons have to be checked again and again to make sure they work right, and if they don't, new sysvinit-like scripts for systemd have to be written, which defeats the purpose of even using systemd in the first place, only to end up disabling, or even forcing in, hybrid-script loading.
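For reference, the "native method" being described corresponds to Requires= plus After= in the dependent unit; a generic sketch (mydaemon and its PostgreSQL dependency are made-up names, not something from this thread):
Code:
# How a hard, ordered dependency is declared natively, without a wrapper script.
cat > /etc/systemd/system/mydaemon.service << 'EOF'
[Unit]
Description=Example daemon that needs its database first
# Requires= pulls the database in; After= delays startup until it is up.
Requires=postgresql.service
After=postgresql.service

[Service]
ExecStart=/usr/sbin/mydaemon --foreground

[Install]
WantedBy=multi-user.target
EOF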
It's Friday night, and I'm dog-tired, and probably about to make myself look stupid, but here goes...
Would it be possible to run systemd *under* the old init system?
So systemd would be just another subordinate daemon that performs only those functions it has completely assimilated (cgroups, logind, udev), but it wouldn't be doing the stuff we can still do the old way, like starting daemons and changing runlevels.
I'm guessing that this would make "upstream" go ballistic, and it's a use case that they haven't allowed for, and will not consider patches for, and I'm stupid, and I hate disabled people and kittens ... but if we ever eventually *had* to use it, I'd prefer it to have its own small and rigidly defined area of doubt and uncertainty.
ps. Please, please tell me that the libqrencode dependency is an April Fool's that they forgot to take out...