LinuxQuestions.org
Old 10-10-2016, 01:22 PM   #61
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,038
Blog Entries: 27

Rep: Reputation: 1484

Quote:
Originally Posted by pingu_penguin View Post
A few questions from my side ...

1. Was systemd honestly even needed? They first said it was just an alternative system for faster booting.
It is, for the most part, not needed. It eventually gained some capabilities which genuinely help GNOME developers with some security problems intrinsic to using Linux as a desktop, but otherwise it replaces known-good old components with dubious new ones.

The picture also gets muddled by claims that systemd facilitates things which sysvinit does not, when sysvinit really does -- like killing all subprocesses of a process. From what I've seen, the people making such claims genuinely believe what they are saying, but do not know sysvinit well enough to realize sysvinit can do (or facilitates scripts to do) what systemd does.
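
For what it's worth, here's a quick sketch of the process-group trick an init script can use to take down a daemon and everything it spawned. The daemon name and pidfile path are made up for illustration, and it assumes the children stay in the process group setsid creates (processes that deliberately escape their group are the case cgroup tracking handles better):

Code:
#!/bin/sh
# Sketch: start a daemon in its own process group, then stop it together
# with every subprocess it spawned.  Names/paths here are hypothetical.

PIDFILE=/var/run/mydaemon.pid

start() {
  # setsid gives the daemon its own session and process group,
  # so children it forks inherit the same process group ID.
  setsid /usr/sbin/mydaemon --foreground &
  echo $! > "$PIDFILE"
}

stop() {
  pid=$(cat "$PIDFILE" 2>/dev/null) || return 1
  # A negative PID tells kill(1) to signal the whole process group.
  kill -TERM -- "-$pid" 2>/dev/null
  rm -f "$PIDFILE"
}

case "$1" in
  start) start ;;
  stop)  stop ;;
esac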

Quote:
2. They said systemd is good for server farms - couldn't the problems with SysV or upstart have been handled in other ways instead of writing a completely new system from scratch?
Yes. The ways they claim systemd is "better" for servers are actually worse. For instance, not starting a service until the first request is made will mask problems that service might have (think erroneously edited httpd.conf), which you really want revealed as early as possible -- say, at boot time, when the administrators are watching the machine for problems anyway (or should be).
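
To make the "fail early" point concrete, here's a minimal sketch of boot-time validation; apachectl configtest is standard Apache, rc.httpd is Slackware's stock script, and the logger call is just one way to make the failure loud:

Code:
#!/bin/sh
# Sketch: validate the web server config at boot, before anything depends
# on it, instead of discovering a broken httpd.conf at first use.
if ! apachectl configtest; then
    echo "httpd.conf failed validation -- not starting httpd" >&2
    logger -p daemon.err "httpd.conf failed validation at boot"
    exit 1
fi
/etc/rc.d/rc.httpd start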

The sysvinit and inittab approach to monitoring/restarting services does have some genuine shortcomings, which IT professionals are still getting a handle on, but imo the Chef/Puppet approach is more effective and flexible than systemd's hard-coded behavior.
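
For reference, the inittab respawn mechanism I mean is just a one-line entry; the "md" id and daemon path below are made up, shown as a shell snippet for illustration:

Code:
#!/bin/sh
# Sketch: have init itself respawn a daemon whenever it exits.
# /etc/inittab entries use the format id:runlevels:action:process.
cat >> /etc/inittab <<'EOF'
md:345:respawn:/usr/sbin/mydaemon --foreground
EOF

# Tell init to re-read /etc/inittab without rebooting.
telinit q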

Quote:
If it is in fact so ridiculously complex, why aren't distros reverting to older methods?
Some simply don't care, some are GNOME-oriented and thus constrained by GNOME's dependency on systemd, and some are reverting from systemd. I keep some relevant notes here: http://ciar.org/ttk/public/systemd.html

Quote:
What is more amazing is that only Slackware was sensible enough to avoid it; all the other distros just pressed it into their latest release (I'm especially shocked at Debian, considering there are so many distros forked from it).
Not just Slackware, no. Three major distributions have refrained (Android, Slackware, Gentoo) and several minor ones as well. These are enumerated here http://forums.debian.net/viewtopic.php?f=3&t=118319 and here http://without-systemd.org/wiki/index.php/Main_Page

Quote:
What really bugs me is that the average Linux user is left with no choice but to use systemd, whether he likes it or not. If anything, GNU/Linux is about choice and freedom.
This adds insult to injury, yes. The systemd folks have been evangelical in their advocacy, and quite tactical in subsuming systems (like udev) which most Linux distributions depend upon. They have been stymied, to a degree, by their failure to get device driver developers moved over from the traditional netlink API to the systemd-specific API. Had these developers been more co-operative, it would be game over.

Quote:
Call me crazy (or thank me later), but my gut feeling says someone is trying to control/influence things (just my 2 cents on this one).
I've heard the conspiracy theories, but imo none of them hold water. I think we're just seeing people aggressively pushing their favorite project.

Last edited by ttk; 10-10-2016 at 01:23 PM.
 
Old 10-10-2016, 01:45 PM   #62
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,038
Blog Entries: 27

Rep: Reputation: 1484
Regarding the GNOME systemd dependency, this is relevant and could help facilitate migrating GNOME-oriented distributions off systemd:

http://ostatic.com/blog/hpe-donates-...e-sans-systemd

Quote:
Willy Sudiarto Raharjo today blogged that a new project contacted him about bringing GNOME 3.22 to Slackware. Slackware does not support or supply systemd and GNOME from Scratch is said to work without it.
 
1 member found this post helpful.
Old 10-10-2016, 09:53 PM   #63
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
Quote:
Originally Posted by pingu_penguin View Post
1. Was systemd honestly even needed? They first said it was just an alternative system for faster booting.
I wrote the below in another thread on systemd, but from my understanding, the reasoning had nothing to do with getting a faster boot; that was just the biggest "benefit" people talked about, even though it is just a side-effect of parallelization. There is a large list of things that systemd provides that aren't provided by Slackware's stock init system. I wrote this as a response to someone asking about the benefits of systemd, and I included my personal belief on whether each particular feature might be beneficial.

Overall, you're right, systemd is not needed... but it does provide several features that many distro maintainers have felt are important enough to make it worth adding systemd to their distro. In general, a lot of the time new software isn't needed, but that doesn't mean it doesn't contain stuff that's wanted. However, so far, I'm glad that Pat hasn't wanted systemd and that it hasn't been forced in by other dependencies yet.

Sorry for the wall of text below

Quote:
This is all gathered via a Google search (not all of these are unique to systemd, but none are currently part of Slackware's init system). I have not had any actual experience with systemd and I don't plan to unless it's included in Slackware in the future. This is to the best of *my* understanding. If I misunderstood something, please let me know.
  • Logging - Say what you will about binary logs: if they function properly, you get logging from the moment the init RAM disk starts all the way to the final shutdown of the system. (How many times have you tried to glean information from the boot process only to have it go by too quickly?) It can also store non-text data, like memory dumps, which can be used later for further debugging. The old syslog can be used in its place, without the extra benefits of binary logs and without the possibility of corruption. Do the benefits of binary logging outweigh the possibility of corrupted logs? To me, no.
  • Unit files - Easy-to-write config files, using a declarative language, that start up system daemons. These replace the various rc.* files. They are simple to write and seem to have little to no downside (other than ReaperX7 stating that they could lead to people not learning proper scripting techniques). Shell scripts are still supported if needed (although, I believe, that requires a basic unit file as a wrapper; see the sketch after this list). Benefits outweigh the costs? To me, yes.
  • Dependencies - Services can be started once a certain set of dependencies is met. This is different from rc.*, where startup order is based on position in the script. The dependencies can be more than just a program/service starting; they can be based on udev, dbus, sockets, etc. One example is mounting network hard drives after the network comes up, while mounting any physical disks before it. The possible benefits are there, but I have no idea how well it works in practice. It could cause dependency cycles where two things are each waiting for the other to start. I don't know the likelihood of this, so do the benefits outweigh the costs? To me, unknown.
  • Parallelization - Allows the startup sequence to occur in parallel, so two or more services can start at the same time. Because of unit files and the dependencies they specify, this should prevent programs from starting too early. This does usually equate to a boot-speed increase, and I am not aware of downsides to parallelization (short of the downsides listed under dependencies). Benefits outweigh the costs? To me, probably.
  • Cgroups - Cgroups group a process together with all of its child processes. This allows systemd to keep track of child processes even when the parent exits, and to shut them down if needed. This, in theory, is much better than using a PID, since we've all run into the issue where a process won't start because a stale PID file is left in /run. It seems you are also able to limit resources for certain groups, meaning you can cap how much RAM or CPU they use. I could see this adding complexity to the system, but the tradeoffs may be worth it. Benefits outweigh the costs? To me, undecided.
  • Additional Core Components - Replaces startup shell scripts, pm-utils, inetd, acpid, syslog, watchdog, cron, and atd with systemd components. This makes it easier for maintainers to provide the features those programs do, since they will all be included and won't need to be gathered separately and verified to work together. They also fall under a unified release schedule and come in the same tarball. Some replaced abandoned software, like logind replacing ConsoleKit. However, all of this also causes feature creep and creates a larger amount of code to debug. This is probably one of my biggest complaints about systemd: the combination of many programs that don't need to be combined. It creates the possibility for more problems and works to prevent you from using alternatives if desired. You can see what it has taken over when you look at its ancillary components: consoled, journald, logind, networkd, timedated, udevd, and libudev. Adding more services to systemd seems very reminiscent of Windows and its svchost.exe. Bug reports are sometimes ignored. Benefits outweigh the costs? To me, no.

SOURCE: http://www.linuxquestions.org/questi...ml#post5305685
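
To give a feel for the unit-file bullet above: I haven't written one myself, but from what I've read a minimal service looks roughly like the sketch below. Everything here (the daemon name, paths, and the install-by-heredoc wrapping) is a made-up example, not from any real package; only the directive names are standard systemd ones.

Code:
#!/bin/sh
# Sketch only: install and enable a minimal, hypothetical unit file.
cat > /etc/systemd/system/mydaemon.service <<'EOF'
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/sbin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Enable it at boot and start it now.
systemctl enable mydaemon.service
systemctl start mydaemon.service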

Last edited by bassmadrigal; 10-10-2016 at 09:56 PM.
 
2 members found this post helpful.
Old 10-10-2016, 10:56 PM   #64
pingu_penguin
Member
 
Registered: Aug 2004
Location: pune
Distribution: Slackware
Posts: 350

Rep: Reputation: 60
Even an upstart boot starts services in parallel with its event-based model; it's not half bad.

Quote:
Some simply don't care, some are GNOME-oriented and thus constrained by GNOME's dependency on systemd,
I use the GNOME Classic desktop on Ubuntu 14.04, and there is NO systemd here; everything has been fine for a long time.
It works without systemd pretty decently. I find it highly irresponsible for distros not to care about users.

Also, if systemd is good for servers, they shouldn't be including it in the desktop versions of distros (Ubuntu, for example).
 
Old 10-11-2016, 06:38 AM   #65
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by bassmadrigal View Post
I wrote the below in another thread on systemd, but from my understanding, the reasoning had nothing to do with getting a faster boot; that was just the biggest "benefit" people talked about, even though it is just a side-effect of parallelization. There is a large list of things that systemd provides that aren't provided by Slackware's stock init system. I wrote this as a response to someone asking about the benefits of systemd, and I included my personal belief on whether each particular feature might be beneficial.

Overall, you're right, systemd is not needed... but it does provide several features that many distro maintainers have felt are important enough to make it worth adding systemd to their distro. In general, a lot of the time new software isn't needed, but that doesn't mean it doesn't contain stuff that's wanted. However, so far, I'm glad that Pat hasn't wanted systemd and that it hasn't been forced in by other dependencies yet.

Sorry for the wall of text below
logging - it still loses data. What is NOT mentioned is that all they did was make the log buffer bigger. The only way to ensure that you capture the very early logs is to not use an initrd.

unit files - no gain. They are not generic. And if you have to have something generic, you STILL end up writing shell scripts.

dependencies - no gain. Having a dependency graph makes it next to impossible to add a service without completely destroying the network. This was learned back in the 1970s with PERT graphs (which are dependency graphs). The other problem with them is that there isn't just one dependency graph, but several.

parallelization - same topic as dependencies. Having it is nice - but being able to CORRECTLY set it up is mandatory. And that is not possible to do except in the simple cases.

cgroups were handled before systemd. Shutting processes down arbitrarily is wrong. PIDs are still used - just not necessarily by PID 1 (see the prctl system call).

Additional core elements - these make the entire system fall victim to a small bug in any one of them. This is overly complex and ties too many independent functions into one failure-prone result. It also eliminates the possibility of substituting other elements with other capabilities.

Last edited by jpollard; 10-11-2016 at 06:41 AM.
 
7 members found this post helpful.
Old 10-11-2016, 07:48 AM   #66
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
Quote:
Originally Posted by jpollard View Post
logging - it still loses data. What is NOT mentioned is that all they did was make the log buffer bigger. The only way to ensure that you capture the very early logs is to not use an initrd.
I doubt any log is foolproof in retaining all data. However, where did you find that you can't capture early logs via initrd? That was mentioned in a lot of sources when I was looking up that information previously. Personally, I don't like the idea of binary logs. They are far too easy to corrupt and I don't like requiring a special viewer to read them.

Quote:
Originally Posted by jpollard View Post
unit files - no gain. They are not generic. And if you have to have something generic, you STILL end up writing shell scripts.
Sure, there'll probably be difficult unit files. GazL posted a link to a story about getting the unit file for nfs to cover more obscure cases, which seemed quite complicated... but the work they did ended up covering pretty much any use case, where rc. scripts would need to be custom-tailored for certain situations. Overall, it's not necessarily bad to edit an rc. script to handle a specific case, but having it work without editing is even better.

However, for a basic service, it seems like writing a unit file is much easier than writing an rc. file. I've written several rc. files, and the declarative statements of unit files seem quite simple and straightforward (however, I haven't used/created them in practice, so take that with a grain of salt). Personally, I feel I have more control over an rc. script, but that is probably because I'm not familiar with unit files. After reading through that article, it seems systemd's unit files have the potential to cover extremely complicated situations.

Quote:
Originally Posted by jpollard View Post
dependencies - no gain. Having a dependency graph makes it next to impossible to add a service without completely destroying the network. This was learned back in the 1970s with PERT graphs (which are dependency graphs). The other problem with them is that there isn't just one dependency graph, but several.
How can you say there is no gain? Yes, it can end up being extremely complicated, but that doesn't negate the benefits it provides when everything syncs. Knowing you have a service which relies on NFS being up and running and being able to tell the system to wait until it is can be quite beneficial. It is much better than adding some sleep commands to your init scripts and hoping NFS finishes coming up before the call to start your service, or adding a delay with an until statement waiting for NFS to become active enough to start your service.
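
(The kind of hack I mean looks like this sketch -- the mount point and rc script name are made up:)

Code:
#!/bin/sh
# Sketch: poll until the NFS share is actually mounted, then start the
# service that needs it.  No real dependency tracking, just a busy wait.
until mountpoint -q /mnt/nfs/share; do
    sleep 2
done
/etc/rc.d/rc.myservice start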

Quote:
Originally Posted by jpollard View Post
parallelization - same topic as dependencies. Having it is nice - but being able to CORRECTLY set it up is mandatory. And that is not possible to do except in the simple cases.
It isn't possible to do, yet many init systems offer parallelization? You can get some degree of parallelization with Slackware's init by backgrounding processes, but without having dependency information, you could run into failures during the boot process. I'm not saying it isn't complicated to pull off properly, but many distros are obviously doing it, so it can't be "not possible". However, slightly faster boot times (theoretically... some have documented the same boot times or even longer than whatever previous init they were using) are an insignificant benefit in the overall scheme of things (especially when you don't need to start your system frequently), but it is still a "nice to have".
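
By "backgrounding" I mean something like this rough sketch: independent rc scripts get pushed into the background and a "wait" holds things up until their start scripts return (which, notably, is not the same as the services being ready -- exactly the gap dependency-aware inits try to close). The grouping here is hand-picked, not derived from any dependency information:

Code:
#!/bin/sh
# Sketch: naive parallel startup in a BSD-style rc script.
/etc/rc.d/rc.syslog start &
/etc/rc.d/rc.inet1 start &
wait   # blocks until both start scripts have exited

# Things that need logging and the network follow, serially.
/etc/rc.d/rc.sshd start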

Quote:
Originally Posted by jpollard View Post
cgroups were handled before systemd. Shutting processes down arbitrarily is wrong. PIDs are still used - just not necessarily by PID 1 (see the prctl system call).
cgroups aren't just for shutting down processes; they also allow you to track what created what and let the system monitor child processes even after the parent process has ended. They also allow you to limit resources for certain cgroups (so you could theoretically create one for Chrome and prevent its ever-expanding memory from taking 90% of your RAM). And I know cgroups were handled before systemd, but as I said, Slackware's init does not manage cgroups (however, if you have it set up, it can at least start them), which is why I added it to the list.
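
(As a side note, the resource-limiting part doesn't require systemd at all; you can poke the cgroup filesystem directly. A rough sketch, assuming the cgroup-v1 memory controller is mounted at /sys/fs/cgroup/memory and using a made-up group name:)

Code:
#!/bin/sh
# Sketch: cap a browser's memory with a cgroup, no systemd involved.
CG=/sys/fs/cgroup/memory/browser

mkdir -p "$CG"
# Limit the whole group to 2 GiB of RAM.
echo $((2 * 1024 * 1024 * 1024)) > "$CG/memory.limit_in_bytes"

# Put this shell into the group before exec'ing the browser, so every
# child it spawns is accounted to (and limited by) the same group.
echo $$ > "$CG/cgroup.procs"
exec chromium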

Quote:
Originally Posted by jpollard View Post
Additional core elements - these make the entire system fall victim to a small bug in any one of them. This is overly complex and ties too many independent functions into one failure-prone result. It also eliminates the possibility of substituting other elements with other capabilities.
I agree with this; however, some tout this as a feature of systemd, so it should still be included in an unbiased list. Personally, I think it is systemd's biggest drawback, and the original post of this thread is a great example of the issues it can cause.

Overall, I tried to remain as unbiased as I could with providing the information I did (except for my personal opinion at the end on whether the benefits were worth the costs). I feel far too many articles/wikis/publications either rave about systemd or attack it. It is not very easy to find unbiased information out there. And while I tried to stay unbiased with my list, I still would prefer to not use systemd and I'm glad Pat has been able to stay away from it thus far.
 
1 member found this post helpful.
Old 10-11-2016, 10:23 AM   #67
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by bassmadrigal View Post
I doubt any log is foolproof in retaining all data. However, where did you find that you can't capture early logs via initrd? That was mentioned in a lot of sources when I was looking up that information previously. Personally, I don't like the idea of binary logs. They are far too easy to corrupt and I don't like requiring a special viewer to read them.
There is no place to write the logs... The initrd, though writable, is a memory-resident filesystem that is wiped out on any reboot (and a reboot is nearly mandatory if for some reason it can't find the root filesystem) - and nothing can be recorded persistently until AFTER the root filesystem is mounted read/write - thus anything that MIGHT have been recorded is lost. The only way to be sure to have at least some records is to not use an initrd, going directly to a root filesystem with persistence.
Quote:

Sure, there'll probably be difficult unit files. GazL posted a link to a story about getting the unit file for nfs to cover more obscure cases, which seemed quite complicated... but the work they did ended up covering pretty much any use case, where rc. scripts would need to be custom-tailored for certain situations. Overall, it's not necessarily bad to edit an rc. script to handle a specific case, but having it work without editing is even better.
Being able to debug it when it doesn't work is even better.
Quote:

However, for a basic service, it seems like writing a unit file is much easier than writing an rc. file. I've written several rc. files, and the declarative statements of unit files seem quite simple and straightforward (however, I haven't used/created them in practice, so take that with a grain of salt). Personally, I feel I have more control over an rc. script, but that is probably because I'm not familiar with unit files. After reading through that article, it seems systemd's unit files have the potential to cover extremely complicated situations.
The problem is not the unit files... it is the complexity of the dependency graph that creates the most failures.
Quote:

How can you say there is no gain? Yes, it can end up being extremely complicated, but that doesn't negate the benefits it provides when everything syncs.
The problem is that when it DOESN'T sync you can be at a total loss to figure out WHY. The other problem is reliability - sometimes things work... and sometimes they don't. When the dependency network is incorrect, it is next to impossible to fix while it just happens to work. You don't see the problem until adding another service causes the system load ordering to change... and then fail.
Quote:
Knowing you have a service which relies on NFS being up and running and being able to tell the system to wait until it is can be quite beneficial. It is much better than adding some sleep commands to your init scripts and hoping NFS finishes coming up before the call to start your service, or adding a delay with an until statement waiting for NFS to become active enough to start your service.
And yet, adding sleeps is EXACTLY what people are resorting to use to work around the failures in systemd.

Quote:

It isn't possible to do, yet many init systems offer parallelization? You can get some degree of parallelization with Slackware's init by backgrounding processes, but without having dependency information, you could run into failures during the boot process. I'm not saying it isn't complicated to pull off properly, but many distros are obviously doing it, so it can't be "not possible".
The key is "complicated to pull off". A dependency graph is NOT easy to work with. Even with Fedora, systemd took several years before the dependency graph was barely workable. It was supposed to be available in Fedora 14. Nope, wasn't usable. It was presented in Fedora 15... with boot hangs, shutown hangs, nonstarting services, unworking NFS... It was in Fedora 16 - which sort of worked... if you didn't have anything complicated (multiple network interfaces and VM networking didn't work at all, setting default routes on the wrong network, not starting networks at the right time, not starting network services...) and still had boot hangs and shutdown hangs. Didn't bother with 17, 18, 19 - there were still too many reports of hangs, lost logs, nonstarting services, startup/shutdown failures... 20 mostly worked. It FINALLY did the network properly. But still people were having trouble with services - even resorting to putting long sleeps in rc.local along with systemctl service restarts to get things working (which some are still doing).

Even in Fedora 23, problems still exist when trying to add a new service... sometimes the service starts... sometimes it doesn't. Getting it in the right place in the graph is very difficult.

I had worked with dependency graphs before (back in the old days of the 1970s). It took weeks to get a relatively simple, one level graph workable (less than 100 nodes, one direction only); just adding one node to the list could completely tangle up the network. What systemd uses is a two level graph at best... and dynamic, not static. It also ignores the possibility of external dependencies...
Quote:
However, slightly faster boot times (theoretically... some have documented the same boot times or even longer than whatever previous init they were using) are an insignificant benefit in the overall scheme of things (especially when you don't need to start your system frequently), but it is still a "nice to have".
"nice to have" doesn't make it reliable. Reliability is "nice to have". The normal way is "First make it work. Then make it faster". The problem is that systemd doesn't work reliably.

Quote:
cgroups aren't just for shutting down processes; they also allow you to track what created what and let the system monitor child processes even after the parent process has ended. They also allow you to limit resources for certain cgroups (so you could theoretically create one for Chrome and prevent its ever-expanding memory from taking 90% of your RAM). And I know cgroups were handled before systemd, but as I said, Slackware's init does not manage cgroups (however, if you have it set up, it can at least start them), which is why I added it to the list.
cgroups wasn't created for that purpose. It was originally designed to provide better resource allocation control for batch jobs. That it also can be used for interactive jobs has been known for ages. One interactive UNIX based system did the same function by making the interactive connections part of the batch scheduling system. It worked... mostly. The problem then (and now) is the loss of control over the processes. The "nice" utility is now useless - where formerly it allowed a user to determine what priority their processes should be.
Quote:

I agree with this; however, some tout this as a feature of systemd, so it should still be included in an unbiased list. Personally, I think it is systemd's biggest drawback, and the original post of this thread is a great example of the issues it can cause.
no problem there. IMO it is one of the drawbacks of systemd, though I personally am not sure about it being the biggest drawback. I find the lack of flexibility, reliability and debugging the biggest drawback, and this is just a symptom of poor design.
Quote:

Overall, I tried to remain as unbiased as I could with providing the information I did (except for my personal opinion at the end on whether the benefits were worth the costs). I feel far too many articles/wikis/publications either rave about systemd or attack it. It is not very easy to find unbiased information out there. And while I tried to stay unbiased with my list, I still would prefer to not use systemd and I'm glad Pat has been able to stay away from it thus far.

Last edited by jpollard; 10-11-2016 at 10:25 AM.
 
2 members found this post helpful.
Old 10-11-2016, 11:30 AM   #68
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Member response

Hi,

I am very happy with Slackware and its init system. I really do not have issues with my logs; I can read them directly and rotate them with no issues.

I am comfortable with the way PV & team have kept Slackware away from the systemd mess. KISS is working for Slackware users, so why even think of changing - and to what advantage? - to an unproven system developed by a committee that desires change for the sake of change, without thinking things through. Sure, improvements happen every day, but usually the changes are done in stages, not all at once in one shot.

I do still believe in the UNIX way, and Slackware meets that for me.

Not to start an argument or debate, but Slackware is stable and steadily meeting the needs of the community without systemd.

Have fun & enjoy!
 
4 members found this post helpful.
Old 10-11-2016, 11:43 AM   #69
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,038
Blog Entries: 27

Rep: Reputation: 1484
A couple of things:

Where I work, we've started phasing in CentOS 7, and discovered failure modes where RAID controller problems weren't getting logged during boot-up. I wasn't part of the effort to troubleshoot that, but the people who did blamed systemd's wacky logging.

Regarding binary logs, you can get the same advantages by using JSON-encoded structured logs, and they're still text, so you can still use standard unixy tools to filter/manipulate them, and eyeball them directly as needed (though they're more readable with a filter such as json2json).

Example of an unformatted JSON log entry (newline-delimited from other entries):

Quote:
[1400734753.93283, "Wed May 21 21:59:13 2016", 1138, "DEBUG", 3, ["90D4391A"], [["lib/TCC/Voice/ESRV/Native/Address/Validation.pm", 234, "TCC::Voice::ESRV::Native::Address::Validation::_rate_candidate"], ["lib/TCC/Voice/ESRV.pm", 234, "TCC::Voice::ESRV::Native::Address::Validation::validate_address_against_msag"]], "initializing champion", "c_id", "296274", "sim_score", 0.608695652173913, "similarity_threshold", 0.801, "best_hr", {"no_tf_count": 0, "smash_count": 0}]
That same entry filtered through "json2json -l":

Code:
[ 1400734753.93283, "Wed May 21 21:59:13 2016", 1138, "DEBUG", 3, ["90D4391A"],
  [ ["lib/TCC/Voice/ESRV/Native/Address/Validation.pm", 234, "TCC::Voice::ESRV::Native::Address::Validation::_rate_candidate"],
    ["lib/TCC/Voice/ESRV.pm", 234, "TCC::Voice::ESRV::Native::Address::Validation::validate_address_against_msag"]
  ],
  "initializing champion", "c_id", "296274", "sim_score", 0.608695652173913, "similarity_threshold", 0.801, "best_hr", {"no_tf_count": 0, "smash_count": 0}
]
For very complex records, there is also "json2json -P" for maximum formatting:

Code:
[
   1400734753.93283,
   "Wed May 21 21:59:13 2016",
   1138,
   "DEBUG",
   3,
   [
      "90D4391A"
   ],
   [
      [
         "lib/TCC/Voice/ESRV/Native/Address/Validation.pm",
         234,
         "TCC::Voice::ESRV::Native::Address::Validation::_rate_candidate"
      ],
      [
         "lib/TCC/Voice/ESRV.pm",
         234,
         "TCC::Voice::ESRV::Native::Address::Validation::validate_address_against_msag"
      ]
   ],
   "initializing champion",
   "c_id",
   "296274",
   "sim_score",
   0.608695652173913,
   "similarity_threshold",
   0.801,
   "best_hr",
   {
      "no_tf_count": 0,
      "smash_count": 0
   }
]
JSON is well supported by almost every programming language in use, and there are many, many existing tools for manipulating it, on top of the standard unixy tools like grep. It imposes almost no processing overhead to encode or decode, and only a slight increase in space (the extra quotes, brackets, braces, colons, and commas make it about 20% more "fluffy" on average than a pure binary format).

All of my projects use JSON-encoded structured logs. If one needs more than an unformatted plain text log, it's the way to go.
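
If anyone wants to play with the approach, here's a minimal sketch using jq (the log path and field names are arbitrary examples, not what my projects actually use):

Code:
#!/bin/sh
# Sketch: append one newline-delimited JSON log entry, then query the log.
LOG=/var/log/myapp.json

jq -c -n --arg level DEBUG --arg msg "initializing champion" \
   '{ts: now, level: $level, msg: $msg}' >> "$LOG"

# Still plain text, so the usual unixy tools work:
grep '"DEBUG"' "$LOG"

# ...and structured queries are one-liners:
jq 'select(.level == "DEBUG") | .msg' "$LOG"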
 
Old 10-11-2016, 12:46 PM   #70
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
Quote:
Originally Posted by jpollard View Post
There is no place to write the logs... The initrd, though writable, is a memory-resident filesystem that is wiped out on any reboot (and a reboot is nearly mandatory if for some reason it can't find the root filesystem) - and nothing can be recorded persistently until AFTER the root filesystem is mounted read/write - thus anything that MIGHT have been recorded is lost. The only way to be sure to have at least some records is to not use an initrd, going directly to a root filesystem with persistence.
Actually, this post led me to this page that states that /run/ should be passed pre-mounted to the main system when running systemd. Per this page, "By default, the journal stores log data in /run/log/journal/." Then, once the system comes up, journald should automatically copy the contents to /var/log/journal/ (assuming you have it set up that way).
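
From what I read, enabling the persistent half boils down to something like this on a systemd machine (I haven't tried it myself, so treat it as a sketch; journald's default Storage=auto is what makes creating the directory sufficient):

Code:
#!/bin/sh
# Sketch: switch journald from the RAM-only journal in /run/log/journal
# to a persistent one in /var/log/journal.
mkdir -p /var/log/journal
systemd-tmpfiles --create --prefix /var/log/journal

# Ask journald to move the early-boot entries from /run to /var now.
journalctl --flush

# With persistence in place, the previous boot's log can be read back:
journalctl -b -1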

Quote:
Originally Posted by jpollard View Post
Being able to debug it when it doesn't work is even better.

The problem is not the unit files... it is the complexity of the dependency graph that creates the most failures.

The problem is that when it DOESN'T sync you can be at a total loss to figure out WHY. The other problem is reliability - sometimes things work... and sometimes they don't. When the dependency network is incorrect, it is next to impossible to fix while it just happens to work. You don't see the problem until adding another service causes the system load ordering to change... and then fail.

And yet, adding sleeps is EXACTLY what people are resorting to use to work around the failures in systemd.

The key is "complicated to pull off". A dependency graph is NOT easy to work with. Even with Fedora, systemd took several years before the dependency graph was barely workable. It was supposed to be available in Fedora 14. Nope, wasn't usable. It was presented in Fedora 15... with boot hangs, shutown hangs, nonstarting services, unworking NFS... It was in Fedora 16 - which sort of worked... if you didn't have anything complicated (multiple network interfaces and VM networking didn't work at all, setting default routes on the wrong network, not starting networks at the right time, not starting network services...) and still had boot hangs and shutdown hangs. Didn't bother with 17, 18, 19 - there were still too many reports of hangs, lost logs, nonstarting services, startup/shutdown failures... 20 mostly worked. It FINALLY did the network properly. But still people were having trouble with services - even resorting to putting long sleeps in rc.local along with systemctl service restarts to get things working (which some are still doing).

Even in Fedora 23, problems still exist when trying to add a new service... sometimes the service starts... sometimes it doesn't. Getting it in the right place in the graph is very difficult.

I had worked with dependency graphs before (back in the old days of the 1970s). It took weeks to get a relatively simple, one level graph workable (less than 100 nodes, one direction only); just adding one node to the list could completely tangle up the network. What systemd uses is a two level graph at best... and dynamic, not static. It also ignores the possibility of external dependencies...
Maybe I'm not understanding things properly (after all... this was just based on research into the "features" of systemd, not any specific implementations, as I have no desire to try any other OSes anymore), but how are Debian, Arch, Fedora, etc, able to use systemd if there's such a complete failure of the dependency graph? Based on what you're saying, the OS should be unable to boot in a lot of different scenarios. But, these distros have released "stable" versions based on systemd and there are plenty of people out there who are using it without issue (that's not to say there aren't people who are having issues, but if it's as bad as you're implying, then it sounds like there should be a lot more people complaining).

However, it should also be kept in mind that any new init system, whether that was the initial BSD, SysV, OpenRC, runit, etc, would likely have run into similar issues during development... working out the order that things need to start. The order of Slackware's scripts wasn't just written off the top of somebody's head... it took a lot of trial and error, and people are still finding issues where the scripts don't match their needs (I know Richard Cranium runs into some issues with the default scripts since he has /usr on a separate partition from his root partition).

Quote:
Originally Posted by jpollard View Post
"nice to have" doesn't make it reliable. Reliability is "nice to have". The normal way is "First make it work. Then make it faster". The problem is that systemd doesn't work reliably.
But right now, there isn't even necessarily reliability with Slackware's init system. Because if a service fails to start, then it fails to start and things move along. Other services that depend on that service may try to start and fail too. There's no accountability for that in Slackware's init system.

Quote:
Originally Posted by jpollard View Post
cgroups wasn't created for that purpose. It was originally designed to provide better resource allocation control for batch jobs. That it also can be used for interactive jobs has been known for ages. One interactive UNIX based system did the same function by making the interactive connections part of the batch scheduling system. It worked... mostly. The problem then (and now) is the loss of control over the processes. The "nice" utility is now useless - where formerly it allowed a user to determine what priority their processes should be.
Wasn't created for what purpose? I covered several in that response. Wikipedia states that cgroups is used for resource limiting, prioritization, accounting, and control.

Quote:
Originally Posted by jpollard View Post
no problem there. IMO it is one of the drawbacks of systemd, though I personally am not sure about it being the biggest drawback. I find the lack of flexibility, reliability and debugging the biggest drawback, and this is just a symptom of poor design.
Don't get me wrong... I am NOT a fan of systemd. I hope it never needs to get implemented in Slackware, and I've been grateful how Pat and team have avoided it thus far. However, the feature list it provides (whether or not they are currently implemented well) does have some nice things in there. In the other thread (that my quoted response is from), I was not well-versed on what systemd was trying to bring to the table. I saw all the hate behind it, and I was constantly seeing people spouting off completely incorrect information (like systemd was created to just speed up the boot process... which is totally not true -- as I said above, this was just a result of parallelization). I was frustrated, because pretty much everyone was either for or against it, but it'd always turn into a flame war without any valid substance from either side. I tried to find resources to better explain what systemd offers that are missing in other inits... because there had to be something since many distros were making the switch. There were very few knowledgeable people who shared unbiased opinions on it. That is what led to my research and my post. The things I listed are what systemd claims to offer that Slackware's init system does not. It doesn't mean that Slackware needs them (personally, I have no issues with Slackware's init system... it works great for what I need it to do), but it is still something that a competing init system offers that Slackware doesn't. For some, that is a big deal. Unfortunately, as you showed above, claims don't always equal results (all those fat loss commercials can be a testament to that).

My post was not meant to be "reasons why Slackware should switch to systemd", because I don't think it needs to (assuming upstream doesn't eventually force it due to systemd dependencies that don't have alternatives available). It was more of a "what's different between Slackware's init and systemd". There's a lot of differences, not all of them good.
 
Old 10-11-2016, 04:13 PM   #71
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by bassmadrigal View Post
Actually, this post led me to this page that states that /run/ should be passed pre-mounted to the main system when running systemd. Per this page, "By default, the journal stores log data in /run/log/journal/." Then, once the system comes up, journald should automatically copy the contents to /var/log/journal/ (assuming you have it set up that way).
It is still in a memory resident filesystem. If for some reason you can't mount the destination and copy it, it is still lost. Thus no gain - just more complexity.
Quote:

Maybe I'm not understanding things properly (after all... this was just based on research into the "features" of systemd, not any specific implementations, as I have no desire to try any other OSes anymore), but how are Debian, Arch, Fedora, etc, able to use systemd if there's such a complete failure of the dependency graph? Based on what you're saying, the OS should be unable to boot in a lot of different scenarios. But, these distros have released "stable" versions based on systemd and there are plenty of people out there who are using it without issue (that's not to say there aren't people who are having issues, but if it's as bad as you're implying, then it sounds like there should be a lot more people complaining).
It does provide some benefit for laptops - anywhere that already has a relatively simple hardware configuration. It seems to handle wireless a bit better.
Quote:

However, it should also be kept in mind that any new init system, whether that was the initial BSD, SysV, OpenRC, runit, etc, would likely have run into similar issues during development... working out the order that things need to start. The order of Slackware's scripts wasn't just written off the top of somebody's head... it took a lot of trial and error, and people are still finding issues where the scripts don't match their needs (I know Richard Cranium runs into some issues with the default scripts since he has /usr on a separate partition from his root partition).
Yes - they did. But it only takes a few seconds to reorder the sequence. It is also trivial to get them working. Most of the added complexity was done by third-party vendors. IRIX (SGI) was the first place I saw the "chkconfig" utility. Nice - but it also required a more complex init script that contained the metadata that chkconfig used. AT&T didn't use it at the time - all that was needed was to add a symbolic link or put the actual script in the rc directory. Order was determined by an alphabetic sort. Even then, startups that were in the same Snn specification could be done in parallel (it never was, though - but that simplified the startup).

Slackware's scripts migrated - it never really did use rc scripts (being a bit more BSD/SunOS derived); these were provided for compatibility with SysV init scripts, and worked exactly the same way.

Quote:

But right now, there isn't even necessarily reliability with Slackware's init system. Because if a service fails to start, then it fails to start and things move along. Other services that depend on that service may try to start and fail too. There's no accountability for that in Slackware's init system.
Except that you KNOW they failed, when they failed, and can do something about it.
Quote:

Wasn't created for what purpose? I covered several in that response. Wikipedia states that cgroups is used for resource limiting, prioritization, accounting, and control.
Nope. It is VERY good for handling batch jobs. This was how the "fair-share" scheduler worked in supercomputers in the early 1990s. That fair-share scheduling evolved - and when Linux made significant inroads into supercomputing, there was again a need for good resource management.

The major difference in the way cgroups are actually used doesn't include the long-term accounting. Only the resource control. But that accounting can still be done.

Quote:

Don't get me wrong... I am NOT a fan of systemd. I hope it never needs to get implemented in Slackware, and I've been grateful how Pat and team have avoided it thus far. However, the feature list it provides (whether or not they are currently implemented well) does have some nice things in there.
As long as you realize they are lying about them. They SOUND nice - the problem is they cannot deliver.
Quote:
In the other thread (that my quoted response is from), I was not well-versed on what systemd was trying to bring to the table. I saw all the hate behind it, and I was constantly seeing people spouting off completely incorrect information (like systemd was created to just speed up the boot process... which is totally not true -- as I said above, this was just a result of parallelization).
That was one of their statements - now gradually being dropped.
Quote:
I was frustrated, because pretty much everyone was either for or against it, but it'd always turn into a flame war without any valid substance from either side. I tried to find resources to better explain what systemd offers that are missing in other inits... because there had to be something since many distros were making the switch. There were very few knowledgeable people who shared unbiased opinions on it. That is what led to my research and my post. The things I listed are what systemd claims to offer that Slackware's init system does not.
That is part of the problem. The "claims to offer" are not necessarily true. The systemd developers gloss over the part where it fails to deliver.

There have been cockups by the systemd developers breaking things, then trying to coerce the kernel developers to change something to handle the systemd failures. At least one of them has been banned from making kernel patches. Using kernel defined options for systemd use... then trying to put the blame on the kernel developers for another failure of systemd...
Quote:
It doesn't mean that Slackware needs them (personally, I have no issues with Slackware's init system... it works great for what I need it to do), but it is still something that a competing init system offers that Slackware doesn't. For some, that is a big deal. Unfortunately, as you showed above, claims don't always equal results (all those fat loss commercials can be a testament to that).

My post was not meant to be "reasons why Slackware should switch to systemd", because I don't think it needs to (assuming upstream doesn't eventually force it due to systemd dependencies that don't have alternatives available). It was more of a "what's different between Slackware's init and systemd". There's a lot of differences, not all of them good.
First, the alternative has to ACTUALLY work. So far, systemd is still limping along - but imposing itself on a lot of formerly independent projects to compensate for its own failures.
 
3 members found this post helpful.
Old 10-11-2016, 04:53 PM   #72
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
I understand a lot of what you're saying, but if things are so bad, then why have so many distros embraced it? It just can't be as bad as you're painting it with only a few major holdouts not jumping on the systemd bandwagon. And it isn't even just distro maintainers... you're also seeing a large number of upstream projects adding systemd support, while dropping support (or not providing any additional support) for non-systemd alternatives (not the direct spinoffs of systemd services like Consolekit2 and eudev, but the ones previous to those). Things can't be as bad as the systemd naysayers are saying if a HUGE chunk of the Linux market is moving towards it.

systemd can't impose itself on anything but Fedora (and Red Hat-driven programs). Everyone else, both distro maintainers and upstream developers, had a choice, and the majority chose to embrace systemd. Why would everyone do that if systemd is as bad as you make it out to be?
 
Old 10-11-2016, 05:22 PM   #73
ttk
Senior Member
 
Registered: May 2012
Location: Sebastopol, CA
Distribution: Slackware64
Posts: 1,038
Blog Entries: 27

Rep: Reputation: 1484
I think most distro maintainers don't care, and adopted it when it became a dependency for udev. Nearly every distribution uses udev to load device drivers.

Some distributions (including Slackware) have adopted eudev, which avoids the dependency. Such distributions are listed here: https://forums.gentoo.org/viewtopic-p-7648392.html

That's 47 Linux distributions in all. That's a lot of distributions rejecting systemd; they're just mostly minor ones.
 
1 member found this post helpful.
Old 10-11-2016, 05:40 PM   #74
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by bassmadrigal View Post
I understand a lot of what you're saying, but if things are so bad, then why have so many distros embraced it? It just can't be as bad as you're painting it with only a few major holdouts not jumping on the systemd bandwagon. And it isn't even just distro maintainers... you're also seeing a large number of upstream projects adding systemd support, while dropping support (or not providing any additional support) for non-systemd alternatives (not the direct spinoffs of systemd services like Consolekit2 and eudev, but the ones previous to those). Things can't be as bad as the systemd naysayers are saying if a HUGE chunk of the Linux market is moving towards it.
Upstream projects don't really support it - most of them do accept updates from systemd developers, with the Linux kernel being one of the biggest holdouts, having rejected kdbus as a way to fix user-space performance issues.
Quote:

systemd can't impose itself on anything but Fedora (and Red Hat-driven programs). Everyone else, both distro maintainers and upstream developers, had a choice, and the majority chose to embrace systemd. Why would everyone do that if systemd is as bad as you make it out to be?
I keep saying "it works in relatively simple setups". I haven't seen it really work in complex configurations... Not that it can't work, but it will require some rather large resources to fix the problems that crop up.

The more complex the dependency network, the less reliable it gets.

and a number of distributions accepted it when udev became dependent on it.

Now that there is eudev as an alternative, those rejecting systemd are using it instead of udev.

Last edited by jpollard; 10-11-2016 at 05:44 PM.
 
1 member found this post helpful.
Old 10-11-2016, 09:43 PM   #75
bassmadrigal
LQ Guru
 
Registered: Nov 2003
Location: West Jordan, UT, USA
Distribution: Slackware
Posts: 8,792

Rep: Reputation: 6656
Quote:
Originally Posted by jpollard View Post
Upstream projects don't really support it - most of them do accept updates from systemd developers, with the Linux kernel being one of the biggest holdouts, having rejected kdbus as a way to fix user-space performance issues.
Isn't KDE working towards logind support and not really tinkering with the other options? I figured based on Eric's comments about researching it that the other alternatives are losing support from KDE. Granted, I haven't dug into this a ton.

Quote:
Originally Posted by jpollard View Post
I keep saying "it works in relatively simple setups". I haven't seen it really work in complex configurations... Not that it can't work, but it will require some rather large resources to fix the problems that crop up.

The more complex the dependency network, the less reliable it gets.
So, RHEL7 and SLES12 are working with simple setups? Both have had systemd for around 2 years. I have a hard time imagining the enterprise is willingly accepting as bad a system as you're portraying it to be.

Quote:
Originally Posted by jpollard View Post
and a number of distributions accepted it when udev became dependent on it.

Now that there is eudev as an alternative, those rejecting systemd are using it instead of udev.
I didn't ever put the two together. That does make a lot of sense. But it seems like many distros have gotten beyond the point of just simply switching to eudev (and/or logind). It's not impossible, but it still seems not likely.
 
  

