LinuxQuestions.org
Slackware forum: This Forum is for the discussion of Slackware Linux.
Old 02-10-2014, 06:55 AM   #226
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Hanover, Germany
Distribution: Main: Gentoo Others: What fits the task
Posts: 15,653
Blog Entries: 2

Rep: Reputation: 4095

I was able to get back to that machine earlier than I expected and could check it: /dev/shm is properly created when starting with systemd.
I know that a quick fix for booting with sysvinit is possible, but it shouldn't be necessary in the first place.
On a further note, I think the Arch Wiki is a great help with configuring systemd; with its help I could easily solve the keyboard layout problem. Now I will look to see whether they have a .service file for my display manager.
 
Old 02-10-2014, 06:59 AM   #227
jtsn
Member
 
Registered: Sep 2011
Location: Europe
Distribution: Slackware
Posts: 908

Rep: Reputation: 446
Quote:
Originally Posted by kikinovak View Post
On a side note: the Debian developers just voted in favour of systemd for their next release.
A great example of why voting is not the smartest way to make design decisions.
Quote:
I've been using Debian on servers and desktops for a few years, so it will be interesting to see the consequences of this decision on a traditionally rock-solid distribution.
"Rock-solid" due to ancient packages. In my view Debian was mainly popular for creating distribution forks based on it (like Ubuntu). Linux systems with a relevant end-user market share (Android and the rest of the embedded crowd) currently even avoid udev.

I think GNU failed its goal and is on the way out. At the end of the PC era it is unlikely that I will ever have to deal with Debian again in my life. So I'm fine with it.
 
Old 02-10-2014, 07:00 AM   #228
bartgymnast
Member
 
Registered: Feb 2003
Location: Lelystad, Netherlands
Distribution: slack 7.1 till latest and -current, LFS
Posts: 272

Original Poster
Rep: Reputation: 93
Tobi, did you change your lilo config to add "init=/lib/systemd/systemd"?
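For anyone following along, that change amounts to an append line in /etc/lilo.conf, followed by re-running lilo. A minimal sketch; the image path, root device, and label below are illustrative:

```
# /etc/lilo.conf (fragment) - paths and labels are illustrative
image = /boot/vmlinuz
  root = /dev/sda1
  label = slack-systemd
  append = "init=/lib/systemd/systemd"
# run /sbin/lilo afterwards so the new entry is written to the boot sector
```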
 
Old 02-10-2014, 07:02 AM   #229
bartgymnast
Member
 
Registered: Feb 2003
Location: Lelystad, Netherlands
Distribution: slack 7.1 till latest and -current, LFS
Posts: 272

Original Poster
Rep: Reputation: 93
@tobi
Indeed, the Arch Wiki is a great resource for configuration.
 
Old 02-10-2014, 07:05 AM   #230
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Hanover, Germany
Distribution: Main: Gentoo Others: What fits the task
Posts: 15,653
Blog Entries: 2

Rep: Reputation: 4095
Quote:
Originally Posted by bartgymnast View Post
Well, you are in control.
You decide which graphical target you want to start.
In other words, if you want the graphical login manager of KDE (not sure what it is called; I use GNOME's GDM), you can create a standard .service file and place it inside the graphical.target.wants directory.
That is what I will do next; I use SLiM as my DM.
Quote:
Also, UTF-8 is needed (I think I forgot to list that somewhere).
If you mean starting with "vt.default_utf8=1" as a kernel parameter, that is the case here.

Quote:
The swap is indeed an issue on certain installations.
It seems that issue was caused by me; I just noticed that I forgot about the udev.new configuration file.
 
Old 02-10-2014, 07:08 AM   #231
bartgymnast
Member
 
Registered: Feb 2003
Location: Lelystad, Netherlands
Distribution: slack 7.1 till latest and -current, LFS
Posts: 272

Original Poster
Rep: Reputation: 93
The reason I said it was that I had encountered the issue myself before.

When you have created a slim.service file, could you share it with me?
I will post it on the site so that others can use it as well, if that is OK with you.
 
Old 02-10-2014, 07:17 AM   #232
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Hanover, Germany
Distribution: Main: Gentoo Others: What fits the task
Posts: 15,653
Blog Entries: 2

Rep: Reputation: 4095
The SLiM project itself already ships a service file:
Code:
[Unit]
Description=SLiM Simple Login Manager
After=systemd-user-sessions.service

[Service]
ExecStart=/usr/bin/slim -nodaemon

[Install]
Alias=display-manager.service
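To use it, you would typically drop that file into the systemd unit directory and enable it. Roughly speaking, enabling acts on the [Install] section: the Alias= line becomes a symlink named display-manager.service pointing at the unit. A small Python sketch of that mechanism (the temporary directory stands in for /etc/systemd/system, which would need root):

```python
import os
import tempfile

# Stand-in for /etc/systemd/system; real enabling needs root on a systemd box.
unitdir = tempfile.mkdtemp()
unit = os.path.join(unitdir, "slim.service")
with open(unit, "w") as f:
    f.write("[Unit]\nDescription=SLiM Simple Login Manager\n")

# What "systemctl enable slim.service" does with "Alias=display-manager.service":
# create a symlink so the unit is also reachable under the alias name.
alias = os.path.join(unitdir, "display-manager.service")
os.symlink(unit, alias)

print(os.path.islink(alias))  # the DM is now reachable under the alias
```

This is why only one display manager can be enabled at a time: they all compete for the same display-manager.service alias.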
 
Old 02-10-2014, 07:41 AM   #233
bartgymnast
Member
 
Registered: Feb 2003
Location: Lelystad, Netherlands
Distribution: slack 7.1 till latest and -current, LFS
Posts: 272

Original Poster
Rep: Reputation: 93
@TobiSGD, I posted the service file: http://slackware.omgwtfroflol.com/sl...service-files/

Rebuilding SLiM with systemd support enabled would also install it automatically, of course.
 
Old 02-10-2014, 08:04 AM   #234
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 2,336

Rep: Reputation: 594
Quote:
Originally Posted by ReaperX7 View Post
Getting the network up and running has been a long-running argument for server usage of systemd and its parallel loading of daemons, compared to the traditional linear loading methods used by sysvinit.

Here's a good question: could systemd be scripted to load network-related services without using parallel loading?
Nope.

The way it is designed, I don't think so.

Now, that said, maybe - but you would first have to drop NetworkManager (as well as any other service providing a tree of services) and make changes to every service.

First, systemd would have to start the service, then wait for each service to tell systemd when it has completed its initialization - loading any tables, initializing any interfaces, setting up any sockets it will be using (from what I see, this still isn't being done).

Second, once everything is READY for incoming requests (but before servicing any), the service has to tell systemd that it is ready to process requests.

The SysVinit way, this is done automatically. A process gets started by the init process, which then waits until the service script exits. The script starts the service but doesn't continue - it waits while the service daemon forks a new process and the parent daemon exits. This provides the automatic delay until all initialization is completed; the script's exit is the signal that the daemon is now active or has failed. But doing it this way means that the init process doesn't know the identity of the service.

But the startup is clean.
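In fairness, systemd does define a readiness protocol along these lines: a Type=notify service calls sd_notify(3) with READY=1 once its initialization is done, over a datagram socket whose path the manager passes in NOTIFY_SOCKET. A toy sketch of that handshake (everything below is illustrative, not the real sd_notify implementation):

```python
import os
import socket
import tempfile

# Manager side: listen on a datagram socket and advertise it to children.
path = os.path.join(tempfile.mkdtemp(), "notify")
manager = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
manager.bind(path)
os.environ["NOTIFY_SOCKET"] = path

def sd_notify(state):
    """Roughly what sd_notify(3) does: send a state string to the manager."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        s.sendto(state.encode(), os.environ["NOTIFY_SOCKET"])
    finally:
        s.close()

# Daemon side: finish initialization first, then announce readiness.
sd_notify("READY=1")

msg = manager.recv(64).decode()
print(msg)  # READY=1
```

The catch jpollard raises still stands: the daemon itself has to be patched to send that message, otherwise systemd has no idea when it is actually ready.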

Part of the problem is the fork-exec process pair. It is an advantage for SysVinit (simple code), but it is a penalty for systemd that must be worked around (which requires a lot more code).

Systemd wants to work more like Windows, where the two are combined into one system call (a number of things there came from VMS, including the use of termination messages via mailbox to pass more data than just an exit code). Another part of the problem is that a process exit can only pass one byte of information to its parent (the exit status). The systemd approach uses dbus to get around that limit. Dbus may also be being used to try to get around the fork-exec issue.

But that means that dbus and systemd are now co-dependent: if either one exits, the system is going down (I don't think even restarting dbus works, as the new instance doesn't have all the information its dead predecessor had about routing the information around). I think that is part of the reason for the hard push to force a version of dbus into the kernel.

Dbus is an interesting feature... a nice additional method of IPC. Adding it to the kernel does make dbus more efficient by eliminating two kernel transitions (from the kernel to the dbus process and back). But is it actually more efficient than just using a fork/exec to start a service?

Now, USING dbus makes shutdown appear more efficient: it only requires sending the shutdown message to every process using dbus. It also appears to make service monitoring easy, as a failed service causes an event that can identify which service actually failed. But that "more efficient" assumes that EVERY service uses it. Things that don't use it get lost - and systemd still seems to have problems with that. I haven't been sure that my VMs actually shut down before systemd terminates (that may explain why the system seems to hang for no apparent reason, though it does usually complete; shutting down a bunch of VMs running unknown operating systems with unknown dependencies is always tricky, and nothing else can be shut down until they have finished).

And systemd still has to send out a signal to kill all non-root processes... first a signal to save and exit, then a forced kill some time later (same as with SysVinit). As to whether this is done before/during/after the normal systemd service shutdown, I'm not sure. Personally, I think before is better. Too many times I've seen messages implying that there was no delay at all between the first signal (save and exit) and the second (forced kill).

As to the implementation, it works fairly well for IPC. The techniques behind dbus go back at least to VMS (termination mailboxes, completion routines... What? You thought dbus was new? Granted, even I had forgotten about VMS, not having used it for almost 30 years, but you can still look up what the SYS$EXIT system call does and what "image rundown" means on exit). But for reliability it depends on the rest of the system... asynchronous operation is not well controlled in the UNIX environment, not even by dbus. And trying to make the system do something it isn't designed to do just creates a source of problems, not solutions. Changing how processes are created/destroyed and adding event-message handling at the same time might fix that - but you would end up with a kernel that is just as bloated (and slow) as NT.

My main problem with systemd isn't the mechanics - it is the assumption that the dependency analysis can be done successfully... Yes, it can - but only for fixed and relatively small networks. Systemd is still having trouble mounting filesystems - especially those that use the network. People keep putting the mounts in rc.local, adding service startups, then adding sleeps to delay long enough that things work... conditional network structures are really bad. It works for small setups... but I still wonder whether anyone has tried it on a large system - several hundred disks, multiple LVM partitions, and a number of distributed services... It just doesn't seem like systemd is designed for large problems.
 
5 members found this post helpful.
Old 02-10-2014, 08:43 AM   #235
bartgymnast
Member
 
Registered: Feb 2003
Location: Lelystad, Netherlands
Distribution: slack 7.1 till latest and -current, LFS
Posts: 272

Original Poster
Rep: Reputation: 93
@jpollard thanks for this explanation.

And it is not a maybe; it is a yes.
I can edit all my service files, give each a number, for example, and tell them:
start 1; when 1 is finished, start 2; when 2 is finished, start 3; etc.

So it is possible. (The user/administrator is in control.)
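In unit-file terms, that strict chaining is expressed with Requires= plus After= (the unit names below are hypothetical). One caveat: After= only waits for the earlier unit to be "finished" when that unit signals completion, e.g. Type=oneshot or Type=notify; for a plain Type=simple unit, systemd considers it started as soon as the process is forked.

```
# two.service - will not start until one.service has finished starting
[Unit]
Description=Step two (hypothetical)
Requires=one.service
After=one.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/step-two
```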
 
2 members found this post helpful.
Old 02-10-2014, 11:38 AM   #236
Darth Vader
Member
 
Registered: May 2008
Location: Romania
Distribution: DARKSTAR Linux 2008.1
Posts: 659

Rep: Reputation: 138
Quote:
Originally Posted by jpollard View Post
Nope.

[...]

It just doesn't seem like systemd is designed for large problems.
Let me give a little summary of your speech:

Under systemd, the administrator or system builder must be very careful about the order of services.

My opinion:

It's the old story of RPM's dependency hell. And I think the culprit of dependency hell is not RPM, but the system builder(s) who are careless in configuring the packages.

And about those stories of clustered RAIDs with hundreds of hard drives... I tell you to lay low with that story.

First, because in such cases it is administrators working full-time at the Enterprise level. And technically, at this level, we have in-house distributions, created by the company's Linux teams.
Secondly, I do not think Slackware provides support at the Enterprise level. It is just the pet project of P.V.

And yet, if a huge company wants to build a second Watson on Slackware, I think a team presenting itself at P.V.'s door, together with a nice suitcase (full of dollars), is the least of the problems. Because, for an amount with 7-8 zeros, I think P.V. would be very happy to play the system builder and administrator role for them. Full time.

Last edited by Darth Vader; 02-10-2014 at 11:46 AM.
 
2 members found this post helpful.
Old 02-10-2014, 11:47 AM   #237
AlleyTrotter
Member
 
Registered: Jun 2002
Location: Coal Township PA
Distribution: Slackware64-14.1 (3.18.0) UEFI enabled
Posts: 360

Rep: Reputation: 77
Interesting

http://ewontfix.com/14/
 
1 members found this post helpful.
Old 02-10-2014, 12:12 PM   #238
Darth Vader
Member
 
Registered: May 2008
Location: Romania
Distribution: DARKSTAR Linux 2008.1
Posts: 659

Rep: Reputation: 138
Quote:
Originally Posted by AlleyTrotter View Post
Still, SystemD is good enough for RHEL7. You know, it is the enterprise distribution from those guys who earned $1 billion from Linux...

I trust that they know what they're doing.

And I DO NOT trust the street-corner prophets who believe that the end of the world is near.

Last edited by Darth Vader; 02-10-2014 at 12:18 PM.
 
Old 02-10-2014, 12:39 PM   #239
kikinovak
Senior Member
 
Registered: Jun 2011
Location: Montpezat (South France)
Distribution: Slackware, Slackware64
Posts: 1,951

Rep: Reputation: 972
Quote:
Originally Posted by Darth Vader View Post
Secondly, I do not think Slackware provides support at the Enterprise level. It is just the pet project of P.V.
Noah's Ark was basically a one-man pet-project.

Not a full-blown professional project with support at Enterprise level like the Titanic.

Last edited by kikinovak; 02-10-2014 at 12:41 PM.
 
3 members found this post helpful.
Old 02-10-2014, 12:51 PM   #240
Darth Vader
Member
 
Registered: May 2008
Location: Romania
Distribution: DARKSTAR Linux 2008.1
Posts: 659

Rep: Reputation: 138
Quote:
Originally Posted by kikinovak View Post
Noah's Ark was basically a one-man pet-project.

Not a full-blown professional project with support at Enterprise level like the Titanic.
However, we do not know for sure whether Noah's Ark really existed or it is just an old Hebrew legend, but we know for sure that there was a ship called Titanic.

BTW, the Titanic also sank not because of an engineering or construction error but, let's say, due to its captain's human pride...

Finally, I have a question for you:

Let's say that a big company contacts you to create an in-house Linux distribution, loosely based on Slackware, but pure i686 and driven by SystemD, to work with a very, very large distributed RAID. On top, of course, a nice Beowulf cluster (how else to manage 500 HDDs?). And a contract with a figure of seven zeros, full-time.

What will you say? Will you say that your religious beliefs do not allow the use of SystemD, or will you accept?

My bet is that you'll instantly become a die-hard fan of SystemD in Slackware.

Last edited by Darth Vader; 02-10-2014 at 01:26 PM.
 
  

