Quote:
2) You are the one with deaf ears; stop commenting on what you don't even understand. Systemd uses socket activation to parallelize the boot process. As far as I know, OpenRC doesn't use this, so even if they try to accomplish the same thing, it's not the same thing at all. Therefore, you can't say systemd's parallelization capabilities are buggy because OpenRC's are. 3) How can you estimate how much time it will take for a piece of software to "mature" if you don't have a single idea about what programming is? |
Quote:
I also like the way the second article makes a big deal of starting CUPS on demand, which would previously have required one line of text in /etc/inetd.conf. But no, in order to cater for every possibility (heaven forbid we leave anything for the user to decide or configure), a hideously complex system of 4 different "activations" is designed to start CUPS. It continues to amaze me how anyone can say with a straight face that this is in any way an improvement. |
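For comparison, the one-line inetd approach mentioned above looked roughly like this. This is an illustrative sketch: the daemon path varies by distribution (cups-lpd is the CUPS component historically run from inetd), and the .socket unit is simplified from what distributions actually ship.

```
# Classic inetd: one line in /etc/inetd.conf starts the service
# on demand when a connection arrives (path is illustrative):
printer stream tcp nowait lp /usr/lib/cups/daemon/cups-lpd cups-lpd

# systemd's on-demand start is a separate .socket unit instead,
# e.g. a simplified cups.socket:
[Socket]
ListenStream=/run/cups/cups.sock

[Install]
WantedBy=sockets.target
```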
Quote:
My boot time with systemd on an SSD is like 3 seconds. On Slackware it is at least 10 seconds; I didn't know 7 seconds was the same thing as a microsecond. Anyway, the URL that you pointed out is for developers, not users. I don't believe an average developer can't make the effort to understand the basics behind socket activation. |
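For what it's worth, the mechanism behind socket activation is small: the manager binds the sockets itself, then hands them to the service as inherited file descriptors starting at fd 3, described by the LISTEN_FDS and LISTEN_PID environment variables. A minimal Python sketch of the check a socket-activated service performs (the function names here are my own, mimicking sd_listen_fds(3), not a real API):

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first inherited fd under the systemd protocol


def listen_fds():
    """Return the fds passed by the service manager, or [] if none.

    LISTEN_PID must match our own pid (so the variables weren't meant
    for a parent process), and LISTEN_FDS says how many fds start at 3.
    """
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    try:
        n = int(os.environ.get("LISTEN_FDS", "0"))
    except ValueError:
        return []
    return list(range(SD_LISTEN_FDS_START, SD_LISTEN_FDS_START + n))


def get_server_socket(port=0):
    """Adopt an inherited socket if activated, else bind our own."""
    fds = listen_fds()
    if fds:
        return socket.socket(fileno=fds[0])  # pre-bound by the manager
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", port))
    srv.listen()
    return srv
```

The point of the handshake is that the manager can accept connections on the service's behalf before the service has ever run, which is where the claimed boot-time parallelism comes from.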
Quote:
My main point about the layer of unnecessary complexity added by systemd still stands. Even if systemd didn't introduce compatibility issues and breakage (which it does in spades), there are plenty of reasons why it isn't a viable replacement for SysV, inetd, syslogd and who knows how many other components the project will attempt to absorb before it falls over. |
Quote:
Despite this, it's just GPLed vendor-proprietary infrastructure. There is no open standard at all. |
Quote:
The 7 second start is still there - after all, when you start to print it still takes time to get the service running. And adding up the time for all services still gets you the 7 seconds. The problem is that you don't know whether the printer is ready for use when the system is "ready for use". Normally, if there is a configuration problem with CUPS you know it as soon as the service attempts to start, and you can start getting ready to fix it before users get to the system. With systemd, you don't know... until users start complaining. And if you want about a 5 second boot on Slackware, try bypassing the initrd...

The other thing about SysVinit and parallel startup: SysVinit does permit it, if it is implemented. Services with the same start value (the Snn... and Knn...) are permitted to run in either order, or in parallel. It is just that none of the implementations actually did that. They all used the alphabetic sort order of the file name, so services with the same Snn/Knn value had their start order determined by the rest of the name. Most of the implementations I've seen were rather simple shell scripts. So yes, things COULD be improved. I just don't think systemd is the right one. Look up the information on PERT charts and their use: http://en.wikipedia.org/wiki/Program...view_Technique You will recognize a less complex form of scheduling that is equivalent to what systemd does, and you will see why it isn't used that much anymore.

One problem with socket activation is that it depends on TCP starting properly. I worked with a UNICOS system back in the mid-to-late 90s that had an "optimization" for socket services added to inetd. What they did was put the entire listen/accept loop in inetd. This was done because a fork/exec was quite slow, and if the connection was denied, it wasted a fork/exec and caused possible swapping activity (which on a Cray was quite expensive). It worked well in the lab, improved response rates by something around 5%, and reduced system overhead as well.

Unfortunately, when it was installed at our site, it hung the system (as far as external users could see). What was happening was that the initial connection from a user came in... and the acks/response were sent back. But a router/switch (fairly close to the user) had an error and forwarded the acks out the wrong interface, so the client system never saw them, and its connection hung. Since the client system didn't see the acks, it had no way to complete the TCP setup handshake. And since the handshake wasn't completed, the server (the Cray inetd service) never completed the accept. That meant the entire inetd service hung. All we had to do when it occurred was abort inetd and restart it... and inform Cray of the problem.

It took their engineers only about three days to identify the solution (undo the optimization). We had the fix by the end of the week, and functionality was restored. The result was that instead of hanging inetd, it would hang the specific service for a specific accept - and that would time out after 10-15 minutes (or when we aborted it). Users not on that particular client system had no problems. I vaguely remember it took the network people about another week to find out which router/switch had the error. (Having three different network groups involved didn't help; too many "no problem on my network"...) But for reliability that optimization was dropped. |
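The parallel-by-level startup that SysVinit's Snn values permit (but which the shell-script implementations never exploited) can be sketched in a few lines. This is a hypothetical illustration of the scheduling idea, not any real rc implementation:

```python
import re
from concurrent.futures import ThreadPoolExecutor


def parallel_sysv_start(scripts, run):
    """Start SysV-style rc scripts in parallel within each Snn level.

    `scripts` is a list of names like 'S10network'; `run` is called once
    per script. Levels run strictly in ascending numeric order, but all
    scripts sharing a level may run concurrently (hypothetical sketch).
    Returns the levels in the order they completed, for inspection.
    """
    levels = {}
    for name in scripts:
        m = re.match(r"S(\d\d)", name)
        if m:  # ignore anything that isn't an Snn start script
            levels.setdefault(int(m.group(1)), []).append(name)

    completed = []
    for level in sorted(levels):
        with ThreadPoolExecutor() as pool:
            list(pool.map(run, levels[level]))  # block until level done
        completed.append(sorted(levels[level]))
    return completed
```

Nothing here requires a new init system; it is exactly the "same Snn value, any order" latitude the post describes, made explicit.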
Quote:
Of course, no services will be started, but hey, it's all about speed isn't it? Seriously dude, nobody cares about boot times. |
Quote:
For others it just starts them and assumes that they have completed. This is why they have so much trouble with the network. Originally, it would start NetworkManager, and because NetworkManager had not exited, "the network was ready". Except that it isn't. The problem was that it takes time for NetworkManager to read its configuration and then initialize the network interfaces. In that gap, other services that depend on the network COULD start... and fail because the network really wasn't ready.

So they added another target (more complexity) called "NetworkManager-wait-online", which causes systemd to wait until NetworkManager sends a message back (I believe via dbus) to tell systemd it has started. Sounds good. Works... well, sometimes. The problem with NetworkManager is that it sometimes has to wait for dhclient. So it appears that NetworkManager sends the "ready" message when it has started dhclient... but dhclient can take up to a minute or so to initialize the network. So again, services that need the network can fail. Granted, static networks do seem to get initialized properly (as long as there is only one interface). DHCP seems to defeat it, as do complex networks. I still can't get networks to function properly when hosting VMs; they don't work until I completely disable NetworkManager and go with the legacy startup scripts. Then everything works OK.

This is the problem with systemd - it has no way to know if a service is actually running. In the SysVinit method, the script is delayed until the service becomes a daemon. At that point, the service has read its configuration file and initialized any network/socket requirements. If this fails, you know it immediately. Systemd has no way to know unless EVERY service gets modified to send a "working" message back. And what to do with compound services (like NetworkManager and dhclient)? Make dhclient send a message to NetworkManager so NetworkManager can then send a message back to systemd? What happens with multiple networks when one DOESN'T come up? Is the network ready? Or not? Does the boot hang? |
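The "working message" described above is essentially what systemd's Type=notify protocol is: the service writes READY=1 to a unix datagram socket named by the NOTIFY_SOCKET environment variable, and only then does the manager consider it up. A minimal Python sketch (notify_ready is my own name, standing in for the real sd_notify(3) call):

```python
import os
import socket


def notify_ready(message=b"READY=1"):
    """Send a readiness message to the service manager, sd_notify-style.

    A Type=notify service is expected to send READY=1 on the unix
    datagram socket named by NOTIFY_SOCKET only once it can genuinely
    serve requests - which is exactly the hard part for compound
    services like NetworkManager + dhclient.
    """
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False  # not running under a notify-aware manager
    if addr.startswith("@"):  # abstract-namespace socket (Linux)
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(message, addr)
    return True
```

Note that the protocol only moves the problem: the service still has to decide for itself when it is "really" ready before sending the message.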
Quote:
But that only really applies to servers. On a desktop system you may not even notice that CUPS is only bound to localhost. The same goes for the abomination that is firewalld. Even the documentation can make your hair stand on end; it makes it possible for other processes to reconfigure the firewall using D-Bus messages. Why on earth would you ever want that? That's a potential security hole right there. But of course, it may make sense on a desktop system where an application suddenly wants to listen for incoming connections. |
Quote:
We know systemd isn't ready because we've dealt with systems and distributions that have implemented it, and it's nothing but a burden. I use ArchLinux, and honestly it's very poorly implemented for a distribution that prides itself on being bleeding edge. There's a fine line between shipping bleeding-edge software and being reckless with unstable software. As I stated, I know from experience that systemd is not ready, regardless of what Red Hat keeps projecting or YOU keep spouting off. A well-seasoned administrator knows which software is reliable and which is not. Until you are a seasoned administrator of systems and networks... you don't have much room to talk.
|
Quote:
If so, maybe you should ask them for a job, since you seem to be so much more experienced and can help them to make a better product. |
There are good programmers, and there are bad programmers.
The good programmers eat their own dog food, and more importantly provide support to others who eat it as well. This has a tendency to keep solutions tied to simplicity and necessity. |