What If .........Slack needs Systemd (Slackbuilds)
Slackware: This forum is for the discussion of Slackware Linux.
I was able to get back to that machine earlier than I expected and could check it: /dev/shm is properly created when booting with systemd.
I know that a quick fix for booting with sysvinit is possible, but it shouldn't be necessary in the first place.
On a further note, I think the ArchWiki is a great help with configuring systemd; with its help I could easily solve the keyboard layout problem. Now I will check whether they have a .service file for my display manager.
On a side note: the Debian developers just voted in favour of systemd for their next release.
A great example of why voting is not the smartest way to make design decisions.
Quote:
I've been using Debian on servers and desktops for a few years, so it will be interesting to see the consequences of this decision on a traditionally rock-solid distribution.
"Rock-solid" due to ancient packages. In my view Debian was mainly popular as a base for distribution forks (like Ubuntu). Linux systems with a relevant end-user market share (Android and the rest of the embedded crowd) currently even avoid udev.
I think GNU failed its goal and is on the way out. At the end of the PC era it is unlikely that I will ever have to deal with Debian again in my life. So I'm fine with it.
Well, you are in control.
You decide which graphical target you want to start.
In other words, if you want KDE's graphical login manager (I'm not sure what it is called; I use GNOME's GDM),
you can create a standard .service file and place it inside the graphical.target.wants directory.
That is what I will do next, I use Slim as DM.
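For reference, a minimal sketch of what such a unit might look like. This is an assumption, not a tested unit: the /usr/bin/slim path and the -nodaemon option would need checking against the actual Slim installation.

```ini
# /etc/systemd/system/slim.service -- hypothetical sketch
[Unit]
Description=Simple Login Manager
After=systemd-user-sessions.service

[Service]
# -nodaemon keeps Slim in the foreground so systemd can track it
ExecStart=/usr/bin/slim -nodaemon

[Install]
WantedBy=graphical.target
```

Running `systemctl enable slim.service` would then create the symlink in graphical.target.wants automatically, rather than placing the file there by hand.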
Quote:
also UTF8 is needed (forgot to list that somewhere I think)
If you mean booting with vt.default_utf8=1 as a kernel parameter, that is the case here.
Quote:
The swap is indeed an issue on certain installations.
It seems that issue was caused by me: I just noticed that I had forgotten about the udev.new configuration file.
Distribution: slack 7.1 till latest and -current, LFS
Posts: 368
Original Poster
Rep:
The reason I said it was that I had encountered the issue myself before.
When you have created a slim.service file, could you share it with me?
I will post it on the site so that others can use it as well, if that is OK with you.
Getting the network up and running has been a long-running argument about server usage of systemd, and about parallel loading of daemons compared to the traditional linear loading used by sysvinit.
Here's a good question: could systemd be scripted to load network-related services without using parallel loading?
Nope.
The way it is designed, I don't think so.
Now that said, maybe - but you would first have to drop NetworkManager (as well as any other service providing a tree of services), and make changes to every service.
First, systemd would have to start the service; then it must wait for each service to tell systemd when that service has completed its initialization (loading any tables, initializing any interfaces, setting up any sockets it will be using). From what I see, this still isn't being done.
Second, once everything is READY for incoming requests (but before servicing any) it has to tell systemd that it is ready to process requests.
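That readiness handshake does exist in systemd as the sd_notify(3) protocol: a Type=notify service sends a datagram containing READY=1 to the socket named in the NOTIFY_SOCKET environment variable once initialization is complete. A minimal sketch of the daemon side in Python, speaking the protocol directly instead of linking against libsystemd:

```python
import os
import socket

def notify_ready():
    """Tell the service manager that initialization is complete by
    sending READY=1 over the sd_notify(3) datagram socket.  Returns
    False when not running under a Type=notify unit (no NOTIFY_SOCKET
    in the environment)."""
    path = os.environ.get("NOTIFY_SOCKET")
    if not path:
        return False
    if path.startswith("@"):            # Linux abstract socket namespace
        path = "\0" + path[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(b"READY=1", path)
    return True
```

A daemon would call notify_ready() after loading its tables and opening its sockets, but before servicing requests; only then does systemd consider the unit started and allow dependents to proceed.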
The SysVinit way: this is done automatically. A process gets started by the init process, which then waits until the service script exits. The script starts the service but doesn't continue; it waits for the service daemon to fork a new process, after which the parent daemon exits. This provides the automatic delay until all initialization is completed. The exit is the signal that the daemon is now active or has failed. But doing it this way means that the init process doesn't know the identity of the service.
But the startup is clean.
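The sequencing described above can be sketched in a few lines. This is an illustration, not init's actual code; init and run are hypothetical stand-ins for a real daemon's initialization and main loop:

```python
import os

def start_service(init, run):
    """SysVinit-style startup: fork an intermediate process that
    performs the daemon's initialization, forks the real daemon,
    and exits.  The caller's wait() returning is the implicit
    signal that initialization has completed."""
    pid = os.fork()
    if pid == 0:                      # the "service script" side
        init()                        # load tables, open sockets, ...
        if os.fork() == 0:
            run()                     # the long-lived daemon, now orphaned
            os._exit(0)
        os._exit(0)                   # intermediate parent exits -> ready
    os.waitpid(pid, 0)                # init blocks here until that exit
```

Note that after waitpid() returns, the caller knows initialization finished but has no handle on the daemon's PID, which is exactly the limitation mentioned above.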
Part of the problem is the fork-exec process pair. It is an advantage for SysVinit (simple code), but it is a penalty that systemd must work around (which requires a lot more code).
Systemd wants to work more like Windows, where the two are combined into one system call (a number of things there came from VMS, including the use of termination messages via mailbox to pass more data than just an exit code). Another part of the problem is that a process exit can only pass one byte of information to its parent (the exit status). The systemd approach uses dbus to get around that limit. Dbus may also be being used to try to get around the fork-exec issue.
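That one-byte limit is easy to demonstrate: only the low 8 bits of a child's exit code survive the wait().

```python
import os

def child_exit_status(code):
    """Fork a child that exits with `code` and report what the
    parent actually sees via waitpid() -- only the low 8 bits."""
    pid = os.fork()
    if pid == 0:
        os._exit(code)                # child: exit with the raw code
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)     # truncated to one byte

print(child_exit_status(300))         # prints 44, i.e. 300 & 0xFF
```

Anything richer than that single byte has to travel over a side channel, which is where dbus comes in for systemd.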
But that means that dbus and systemd are now co-dependent: if either one exits, the system is going down (I don't think even restarting dbus works, as it doesn't have all the information its dead predecessor had about routing the information around). I think that is part of the reason for the hard push that has been going on to force a version of dbus into the kernel.
Dbus is an interesting feature... a nice additional method of IPC. Adding it to the kernel does make dbus more efficient by eliminating two kernel transitions (from the kernel to the dbus process and back). But is it actually more efficient than just using a fork/exec to start a service?
Now USING dbus makes shutdown appear more efficient: it only calls for sending the shutdown to every process using dbus. It also appears to make service monitoring easy, as a failed service causes an event that can identify which service actually failed. But that "more efficient" assumes that EVERY service uses it. Things that don't use it get lost, and systemd still seems to have problems with that. I haven't been sure that my VMs actually shut down before systemd terminates (this may explain why the system seems to hang for no apparent reason, though it does usually complete; shutting down a bunch of VMs running unknown operating systems with unknown dependencies is always tricky, and it can't shut anything else down until they have shut down).
And systemd still has to send out a signal to kill all non-root processes... first to send a signal to save and exit, then a forced kill at some later time (same thing with sysVinit). As to whether this is done before/during/after the normal systemd service shutdown, I'm not sure. Personally, I think before is better. Too many times I've seen messages implying that there has been no delay at all between the first signal (save and exit) and the second (forced kill).
As to the implementation, it works fairly well for IPC. The techniques behind dbus go back at least to VMS (termination mailboxes, completion routines... What, you thought dbus was new? Granted, even I forgot about VMS, not having used it for almost 30 years, but you can still look up what the SYS$EXIT system call does and what "image rundown" means for an exit). But for reliability it depends on the rest of the system: asynchronous operation is not well controlled in the UNIX environment, not even by dbus. And trying to make the system do something it isn't designed to do just creates a source of problems, not solutions. Changing how processes are created/destroyed and adding event message handling at the same time might fix that, but you end up with a kernel that is just as bloated (and slow) as NT.
My main problem with systemd isn't the mechanics; it is the assumption that the dependency analysis can be successfully done... Yes, it can, but only for fixed and relatively small networks. Systemd is still having trouble mounting filesystems, especially those that use the network. People keep putting the mounts in rc.local, adding service startups, then adding sleeps to delay things long enough that it works... Conditional network structures are really bad. It works for small things... but I still wonder if anyone has tried it on a large system: several hundred disks, multiple LVM partitions, and a number of distributed services... It just doesn't seem like systemd is designed for large problems.
@jpollard thanks for this explanation.
And it is not a maybe; it is a yes.
I can edit all my service files, give each a number, for example, and tell them:
start 1; when 1 is finished, start 2; when 2 is finished, start 3; etc.
So it is possible. (The user/administrator is in control.)
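That numbered chain maps onto systemd's own ordering directives. A sketch with hypothetical unit names (net-1 providing the interface, net-2 depending on it); the paths are assumptions:

```ini
# /etc/systemd/system/net-2.service -- hypothetical example
[Unit]
Description=Second network service, strictly after the first
Requires=net-1.service
After=net-1.service

[Service]
ExecStart=/usr/local/bin/net-2-daemon

[Install]
WantedBy=multi-user.target
```

After= alone only orders the two units; combined with Requires= it gives the strict "start 1, then start 2" chain. Chaining each unit after its predecessor this way serializes the whole group even though systemd otherwise starts units in parallel.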
Quote:
Originally Posted by jpollard (full post above)
Let me give a little summary of your argument:
Under systemd, the administrator or system builder must be very careful about the order of services.
My opinion:
It's the old story of RPM's dependency hell. And I think the culprit of dependency hell is not RPM, but the system builder(s) who are careless in configuring the dependencies.
And as for stories about clustered RAID with hundreds of hard drives... I'd say go easy on that story.
First, because in that case there are administrators who work full-time at the enterprise level. And technically, at this level, companies have in-house distributions created by their own Linux teams.
Secondly, I do not think Slackware provides support at the enterprise level. It is just the pet project of P.V.
And yet, if a huge company wanted to build a second Watson on Slackware, sending a team to P.V.'s door with a nice suitcase (full of dollars) would be the least of its problems. For an amount with 7-8 zeros, I think P.V. would be very happy to take on the system builder and administrator role for them. Full time.
Last edited by Darth Vader; 02-10-2014 at 10:46 AM.
Not a full-blown professional project with support at the enterprise level, like the Titanic.
However, we do not know for sure whether Noah's Ark really existed or is just an old Hebrew legend, but we know for sure that there was a ship called Titanic.
BTW, the Titanic sank not because of an engineering or construction error but, let's say, due to its captain's pride...
Finally, I have a question for you:
Let's say that a big company contacts you to create an in-house Linux distribution, loosely based on Slackware, but pure i686 and driven by systemd, to work with a very, very large distributed RAID. On top, of course, a nice Beowulf cluster (how else to manage 500 HDDs?). And a full-time contract with a figure of seven zeros.
What would you say? Would you say that your religious beliefs do not allow the use of systemd, or would you accept?
My bet is that you'd instantly become a die-hard fan of systemd in Slackware.
Last edited by Darth Vader; 02-10-2014 at 12:26 PM.