Slackware: This Forum is for the discussion of Slackware Linux.
Ah, yes, the "let's do as much as possible in parallel when we start up" story.
I work in telephony. Did you ever notice that when you take your phone off the cradle that you almost always get a dial tone? In fact, you expect to get a dial tone? Do you remember that scene in one of the Jurassic Park movies where they see a phone in an obviously deserted building and they nonetheless take the receiver off hook because "Hey, it may work."?
My daytime job is to write sh*t that works, no matter what you idiots throw at it. (Yes, we now use SIP, which increases the amount and flavors of cr*p that's thrown at us.)
On one of the projects that I've worked on, we actually attempted to start up various subsystems in parallel while initializing. After all, why not? The arguments to do so are seductive and seemingly without honest rebuttal.
Turns out, there's at least one very good reason not to do it. Is there a chance that whatever it is that you're starting can take a relatively random amount of time to finish? There is? Welcome to the absolute hell of reproducing a problem that your customer has. His/Her setup takes just a little bit longer to finish initializing some init-controlled-thingy than your testing system does. Good luck getting that error just right so you can fix their problem. Good luck keeping them as a customer too.
Being able to consistently reproduce the steps taken by a system during startup is a godsend. Don't throw it away without cause.
Basically, in plain English, you're saying, Richard, that even with parallel loading of system resources, dependencies and child processes still need to load in a linear order to work correctly, otherwise nothing works at all, and tracking the loading of PIDs linearly is superior because it can tell you exactly when a system resource fails and why. Right?
And Jens... which program currently controls cgroups, not counting systemd? The way the document is worded, it seems the shell, like Bash, currently manages cgroups as a background process.
If that's the case, and the shell manages it as a background process, why, again, do we need systemd at all? Is Bash now broken?
If I understand Richard's point correctly, it is the unpredictability of the execution order that poses a testing and troubleshooting problem. Each service will be affected differently by small changes in the disk access timing, execution timing and even interrupt latency or ordering. On the same computer, the order will change from one boot to the next, and the order is almost guaranteed to be different between computers.
One way to mitigate that problem is to divide the system start-up into a finite number of phases and only start services in each phase that do not depend on each other. Each phase then waits for start-up (or failure) of all the services in that phase before the system proceeds to the next phase.
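A minimal sketch of that phase-based scheme (Python; the service names, the phase grouping, and the use of a thread pool are purely illustrative assumptions, not how any real init system is implemented):

```python
import concurrent.futures

# Hypothetical services grouped into phases: everything within a phase
# is mutually independent, so it may start in parallel; each phase
# waits for the previous one to finish completely before it begins.
PHASES = [
    ["mount_filesystems"],          # phase 0: no dependencies
    ["network", "syslog"],          # phase 1: needs filesystems only
    ["sshd", "httpd", "cron"],      # phase 2: needs network/logging
]

def start_service(name):
    # Stand-in for actually launching the service.
    return f"{name}: started"

def boot():
    results = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for phase in PHASES:
            # Start every service in this phase in parallel...
            futures = [pool.submit(start_service, s) for s in phase]
            # ...but block until all of them finish (or fail) before
            # moving on, keeping the phase boundaries deterministic.
            for f in concurrent.futures.as_completed(futures):
                results.append(f.result())
    return results

print(boot())
```

The point of the barrier between phases is exactly the repeatability Richard asked for: the order within a phase may vary, but a service can never observe a later phase's work half-done.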
Not knowing anything about systemd, I can't comment about how the design deals with the testability and repeatability issue.
I think that the best answer to systemd is to wait and see how well it works in other distros before deciding whether to adopt it in Slackware. By other distros I mean other distros that don't have paid-for support.
Throwing lots of money at something may get it to work on a case-by-case basis. In fact, that's a frequent money making tactic of software vendors. Sell it cheap and charge lots for the support to make it actually work. The worst that happens is some customers walk away in disgust. Hell, you don't even care if someone pirates it because they'll never get it to work on their own.
If you're lucky enough to have something that can be released as "open source" then you can take advantage of other people's solutions to the problems you create. There's nothing to lose, and you can charge for upgrades that are nothing more than re-engineered fixes someone else designed and tested.
Commercial Linux OS customers may want systemd, or at least the features it is supposed to provide. Whether it is practical or appropriate for the non-commercial distros remains to be seen. I'm always a little suspicious when commercial organizations try to promote ideas to non-commercial organizations.
It's no traditional init either.
If you do consider it as natural evolution (I don't), sure.
I don't think anyone has said we need to stay with traditional sysvinit and never evolve it. We didn't, however, need to completely reinvent init and alienate a large part of the community just to boot a little faster.
OpenRC solves the few small problems in the "traditional" init, yet it still remains compatible with BSD and isn't proprietary / tied to a single platform.
Just a few examples of my experience with systemd on my Arch Linux box, and why I so "like" it:
systemd refuses to run in a chrooted environment. After a kernel upgrade I had to rebuild the initrd and rerun lilo, so the simple procedure of booting from the install/rescue CD, mounting the Linux partition, chrooting into it and updating lilo was not possible.
There was an issue with console font settings in systemd: the setting in /etc/vconsole.conf was correct but ignored by systemd. The standard procedure in a BSD/SysV init system is to walk through/trace the shell init scripts in /etc/rc.d or /etc/init.d and quickly locate the source of the problem. With systemd I got stuck at the binary file /lib/systemd/systemd-vconsole-setup. Now the only way is to track down its C source file, study its contents, make changes, recompile and pray. That's an order of magnitude more difficult than shell scripts.
Stale links in /etc/systemd after package removal: systemd throws errors when trying to run a service that no longer exists on the system. Manual intervention is required to get rid of such links.
Reliably running dhcp for just one wireless network device, whether or not it successfully connected to an AP, is virtually impossible without a bit of hackery.
When it does work, one can enjoy faster boot times, but when something gets screwed up, you have a much harder time fixing things. Not worth it, if you ask me.
I can also remember when the systemd devs ignored services not tied to a concrete process, like iptables settings. Just ridiculous.
The unique and limited (compared to universal shell code) configuration syntax: if you have special needs the systemd devs ignored, you have to hack your way around them.
I hope the dependency on systemd always remains optional in upstream sources.
I'd say that's all some very real-world experience as to why systemd sucks, for sure. And why some former Arch users have left for non-systemd pastures.
Quote:
Basically, in plain English, you're saying, Richard, that even with parallel loading of system resources, dependencies and child processes still need to load in a linear order to work correctly, otherwise nothing works at all, and tracking the loading of PIDs linearly is superior because it can tell you exactly when a system resource fails and why. Right?
Well, not quite.
If you do a topological sort on a directed acyclic graph, you'll find in a lot of cases that there are several sorts that work and some that don't. Those that don't work are due to a missing dependency that wasn't shown in the original graph for whatever reason.
The key here is being able to reproduce the problem. If the problem is due to system X being able to complete a task 3 milliseconds faster than system Y, then I wish you luck in being able to reproduce and solve the underlying problem.
With sysvinit or the BSD init sequence, there are no such timing issues (assuming, of course, that the underlying init scripts do NOT return prior to actually completing their initialization process).
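To make the "several sorts that work" point concrete, here is a toy dependency graph (the service names are hypothetical) and a brute-force count of its valid start-up orders:

```python
from itertools import permutations

# Toy dependency graph: service -> set of services it depends on.
DEPS = {
    "net":   set(),
    "log":   set(),
    "sshd":  {"net"},
    "httpd": {"net", "log"},
}

def is_valid_order(order):
    # An order is valid if every service starts only after all of its
    # declared dependencies have started.
    started = set()
    for svc in order:
        if not DEPS[svc] <= started:
            return False
        started.add(svc)
    return True

valid = [o for o in permutations(DEPS) if is_valid_order(o)]
print(len(valid))  # → 5
```

Even this four-service graph admits five distinct valid orders. A parallel init may realize any of them on any given boot, depending on timing; a sequential script always executes exactly one, which is what makes customer problems reproducible. And as Richard notes, an edge missing from DEPS would make some of those "valid" orders fail in practice.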
I'm not even imputing incompetence to the people who described the initial dependencies. Good Lord, what engineer would choose to mess up? Humans screw up, for whatever reason. If we didn't, we'd be gods.
I lost confidence in ArchLinux after several crashes and re-installs due to updates from pacman not working right and too much buggy code from the beta package tree spilling into the mainline packages. It WAS a good distribution, but the maintainers get very sloppy at times.
The problem with forcing users to use a system they don't want to use, nor care to use, has killed many a Linux project in the past. Then again, some people just love to cut their own throats to see how long it takes for them to perish, but I digress.
Projects like mdev and eudev could prove to be useful in not only keeping systemd out of Linux distributions, but keeping most distributions adhering to the UNIX Philosophy.
I wonder if Patrick has tinkered with mdev or even eudev yet for Slackware as a replacement for udev/systemd? I know from reading that mdev doesn't support autoloading of modules yet (this could be added in time), but projects like mdev are a step in the right direction to keep Linux out of the hands of evil programmers who want to stage a hostile takeover of the core of Linux and force their will upon others.
Edit: BTW, nobody answered my question regarding what currently controls cgroups outside of systemd. Are cgroups currently controlled by the system shell?
I'll start off by conceding that telephony is a rather unique use case for parallelism. On a given telephone switch, there really are a large set of utterly disjoint state machines running that really couldn't care less about each other. The fact that I'm talking to several people somewhere doesn't have much of an impact on your talking to your best friend somewhere else.
On the other hand, that's the perfect environment for parallelism.
The big deal hits when all the threads need to look at something that they all share. Normally, that's some type of resource; in the telephony world, that would be a database or a trunk or something similar. So, how do you ensure that those threads have a sane view of whatever the *bleep* they are looking at?
One way is to lock the object(s) in question so that nitwit thread A can't change what nitwit thread B is looking at while thread B is reading the shared object. However, if you don't always access the locks in the same order by each thread, you can create a deadlock condition, where two different threads own a lock that the other one wants. Deadlocks are bad. Really, really bad.
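The standard fix for that is a lock hierarchy: give every lock a fixed rank and require every thread to acquire locks in rank order, never the reverse. A toy sketch (Python; the ranking scheme shown is just one common convention, not the only one):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

# Give every lock a global rank; all threads must acquire in rank order.
# If thread A took (a then b) while thread B took (b then a), each could
# end up holding the lock the other wants: a classic deadlock. Sorting
# by rank makes that interleaving impossible.
LOCK_RANK = {id(lock_a): 0, id(lock_b): 1}

def acquire_in_order(*locks):
    for lock in sorted(locks, key=lambda l: LOCK_RANK[id(l)]):
        lock.acquire()

def release(*locks):
    for lock in locks:
        lock.release()

counter = 0

def worker():
    global counter
    for _ in range(1000):
        # The caller names the locks in any order; the helper
        # normalizes the acquisition order, so no deadlock.
        acquire_in_order(lock_b, lock_a)
        counter += 1
        release(lock_a, lock_b)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # → 2000
```

The discipline only works if *every* thread honors it, which is exactly why it's hard to retrofit onto a large codebase.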
Another way is to make the shared object(s) accessible via some type of task and asynchronous messaging framework. That was used by DESQview, which provided stable multitasking on top of Microsoft DOS (of all things). In that case, you are really serializing access to the shared data via a dedicated thread that accepts incoming messages that tell it to perform work. That actually works fairly well.
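That DESQview-style alternative, a single owner thread that serializes all access to the shared data through a message queue, can be sketched like this (Python; the class and message names are made up for illustration):

```python
import queue
import threading

# Shared state touched only by one dedicated "owner" thread; other
# threads send it work over a queue instead of locking the data.
class SerializedCounter:
    def __init__(self):
        self._value = 0
        self._inbox = queue.Queue()
        self._owner = threading.Thread(target=self._run, daemon=True)
        self._owner.start()

    def _run(self):
        while True:
            msg, reply = self._inbox.get()
            if msg == "stop":
                break
            if msg == "incr":
                self._value += 1          # only this thread mutates state
            elif msg == "read":
                reply.put(self._value)    # reply via a one-shot queue

    def incr(self):
        self._inbox.put(("incr", None))

    def read(self):
        reply = queue.Queue()
        self._inbox.put(("read", reply))
        return reply.get()

    def stop(self):
        self._inbox.put(("stop", None))

c = SerializedCounter()
senders = [threading.Thread(target=lambda: [c.incr() for _ in range(500)])
           for _ in range(4)]
for t in senders: t.start()
for t in senders: t.join()
print(c.read())  # → 2000
c.stop()
```

No locks on the data itself, and therefore no lock-ordering problem: the FIFO queue is the only synchronization point, and the owner thread gives every sender a sane, serialized view of the state.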
Quote:
I lost confidence in ArchLinux after several crashes and re-installs due to updates from pacman not working right and too much buggy code from the beta package tree spilling into the mainline packages. It WAS a good distribution, but the maintainers get very sloppy at times.
The problem with forcing users to use a system they don't want to use, nor care to use, has killed many a Linux project in the past. Then again, some people just love to cut their own throats to see how long it takes for them to perish, but I digress.
I couldn't have said it better. You probably know the Greek myth of Sisyphus: the guy condemned to push a rock up a mountain, only to see it roll down the other side, for all eternity. Basically, Sisyphus is a mythical projection of compulsive obsessional behaviour.
In my humble opinion, one of the biggest issues with software development is what one might call the "Sisyphus syndrome", which could be described as follows: as soon as a piece of software approaches a state of perfect usability (KDE 3.5.10, GNOME 2.32, ALSA and SysV init come to mind), the developers let it roll right back down the hill, only to start from scratch all over again with something almost unusable: KDE 4.0, GNOME 3.0, PulseAudio, systemd, Unity, etc. Rock'n'roll, Lennart!
Quote:
I couldn't have said it better. You probably know the Greek myth of Sisyphus: the guy condemned to push a rock up a mountain, only to see it roll down the other side, for all eternity. Basically, Sisyphus is a mythical projection of compulsive obsessional behaviour.
In my humble opinion, one of the biggest issues with software development is what one might call the "Sisyphus syndrome", which could be described as follows: as soon as a piece of software approaches a state of perfect usability (KDE 3.5.10, GNOME 2.32, ALSA and SysV init come to mind), the developers let it roll right back down the hill, only to start from scratch all over again with something almost unusable: KDE 4.0, GNOME 3.0, PulseAudio, systemd, Unity, etc. Rock'n'roll, Lennart!
I totally agree with you... This is a very bad situation for Linux. Developers need to make sure things work and work well instead of changing things so dramatically and frequently.