LinuxQuestions.org
Slackware This Forum is for the discussion of Slackware Linux.
Old 05-16-2014, 04:31 PM   #46
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513

Quote:
Originally Posted by Arkerless View Post
Correct me if I am wrong, but I got interested in the subject and did some reading up on systemd, and if I am not mistaken the fundamental driving force in their architecture is improved VPS performance. They want a linux optimized to run as a virtual machine, and to start and stop very quickly on demand in that environment, right?

The design makes a lot more sense in that context. You can be pretty certain that the DHCP response is coming within a certain number of seconds and just assume success and that will normally work. And saving a couple seconds on boot time would actually be worth some effort, if you are talking about booting thousands of virtual servers every hour it actually starts to add up.
Not really. If you are booting thousands of VMs, most of them would be booting in parallel - so with a per-VM boot time of 20-30 seconds, the total would still be in the neighborhood of 25-35 seconds, because the times overlap when most of them are started at once.

If they are serially started (not a good thing anyway) even if the boot time is 10 seconds you still have thousands of 10 seconds...

And if you have inter-system dependencies then you are still talking about really long boot times.
Quote:
If I understand correctly this is why they are building this entire layer of stuff around systemd instead of using something compatible like OpenRC - squeezing that last second out of your boot time is an incredible waste of time for a traditional user, but again makes sense if you are talking about booting VMs thousands of times an hour.
If you are rebooting thousands of times an hour you have a bigger problem than the boot time.
 
Old 05-16-2014, 05:04 PM   #47
Arkerless
Member
 
Registered: Mar 2006
Distribution: Give me Slack or give me death.
Posts: 81

Rep: Reputation: 60
Quote:
Originally Posted by jpollard View Post
If they are serially started (not a good thing anyway) even if the boot time is 10 seconds you still have thousands of 10 seconds...
Sure, but 1000*10 seconds vs 1000*11 seconds is a difference of over 16.5 minutes. None of us are even likely to notice a 1 second improvement in boot time but a VPS provider very well might notice the aggregate. For a big enough shop they might even measure the improvement in terms of needing fewer machines to provide service.
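The arithmetic here is quick to check; the figures (1000 boots, 1 second saved per boot) are the hypothetical ones from this discussion, not measurements:

```shell
# Hypothetical figures from the discussion: 1000 VM boots, 1 second saved each.
boots=1000
saved=1   # seconds saved per boot
echo "$boots $saved" | awk '{printf "aggregate saving: %.1f minutes\n", $1 * $2 / 60}'
# 1000 * 1 / 60 = 16.67 minutes - the "over 16.5 minutes" figure above
```

The per-boot saving is invisible to one user but adds up linearly across the fleet, which is the whole point being made.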

Quote:
And if you have inter-system dependencies then you are still talking about really long boot times.
If you want to be certain that it all comes up in order and that errors are flagged as they occur and handled appropriately, yeah, that can make a very long boot.

If you know (or think you know) ahead of time, however, exactly what the boot situation will be, and you are trying to boot VMs on demand and serve the maximum number of customers with the minimum of hardware, I can see how it becomes tempting to jury-rig things for better performance. If you know this is a VM and DHCP will be provided in <1 second by a daemon running on the same physical machine, then it would seem safe to assume success and go right on without verifying, right? (Not saying that this is ultimately a good decision, certainly not in every case, but I was having difficulty seeing why anyone would even think about going down this road at all until I came across a discussion in the VPS context where I started seeing logic that at least makes some sort of sense for some use cases.)

Quote:
If you are rebooting thousands of times an hour you have a bigger problem than the boot time.
Well that's what I started off thinking, how incredibly short sighted to spend so much time and effort breaking stuff that works well and re-architecting so much of the system to shave a few seconds (at most) off a procedure that occurs so infrequently to begin with, it just makes no sense at all, it's completely irrational.

In my experience, while people are often irrational to some degree, very few are ever completely irrational - if I look closely I can usually find some twist of circumstance where what appeared completely irrational at first glance actually makes some sort of sense once you understand the true motivation.

In this case, if it is not to facilitate massive parallel VPS bootups I am at a loss as to what other goal is being served.
 
Old 05-16-2014, 06:48 PM   #48
briselec
Member
 
Registered: Jun 2013
Location: Ipswich, Australia
Distribution: Slackware
Posts: 74

Rep: Reputation: Disabled
Fast boot times are a common requirement with embedded systems.
In the future when I get in my self-driving car I don't want to have to wait 20 secs or more for the system to boot.
 
Old 05-16-2014, 07:13 PM   #49
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: McKinney, Texas
Distribution: Slackware64 15.0
Posts: 3,858

Rep: Reputation: 2225
Quote:
Originally Posted by dunric View Post
However this is not an issue in all GNU/Linux distributions known to me or Open/Net/FreeBSDs. Holds true for Slackware, of course
Well, the system requirements for ruby are at least 65 times greater than the current init. I'll admit that I didn't explicitly mention that, but if you are going to use an interpreted language more complicated than Forth (hello, OpenBoot!) then you can expect your runtime requirements to go up a bit.
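One rough way to sanity-check a footprint claim like this is to total the size of a binary plus the shared libraries it maps. This is only a sketch, not the measurement Richard used: the paths (/sbin/init, ruby on $PATH) are assumptions, and it measures on-disk size rather than resident memory.

```shell
#!/bin/sh
# Rough footprint: size of a binary plus every shared library ldd reports.
# Sketch only - /sbin/init and ruby's location are assumptions about the system.
footprint() {
    libs=$(ldd "$1" 2>/dev/null | awk '/=> \//{print $3}')
    du -Lcb "$1" $libs 2>/dev/null | awk 'END { print $1 }'   # grand total, bytes
}
echo "init: $(footprint /sbin/init) bytes"
if command -v ruby >/dev/null 2>&1; then
    echo "ruby: $(footprint "$(command -v ruby)") bytes"
else
    echo "ruby: not installed here"
fi
```

On a typical system most of ruby's footprint is in libruby and its dependency chain, which is exactly the interpreter-runtime cost being described.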
 
Old 05-16-2014, 08:59 PM   #50
Arkerless
Member
 
Registered: Mar 2006
Distribution: Give me Slack or give me death.
Posts: 81

Rep: Reputation: 60
Quote:
Originally Posted by briselec View Post
Fast boot times are a common requirement with embedded systems.
In the future when I get in my self-driving car I don't want to have to wait 20 secs or more for the system to boot.
By the time they are self-driving I expect the electricity and battery problems to be solved well enough that there will be no objection to leaving the system up 24/7 under normal conditions, using a power-save mode when not in use or possibly even hibernating to some sort of fast storage. The system would only need a reboot after being deliberately turned off to facilitate maintenance of some kind, whether unplugging wiring or installing a new kernel.

Embedded systems generally should have optimal boot times using traditional techniques - just make sure you are not loading anything unnecessarily, and that the things that do have to load do so correctly. Nondeterministic load orders and socket activation offer no improvement here, and they *might* cause the boot to fail.

The only way I can see that systemd could even theoretically be useful in embedded is if you assume that hiring someone competent to optimize the system is entirely out of the question. Since that optimization would be done once for a particular model of device that then ships in thousands, perhaps millions, of units, skipping it sounds like poor management. Admittedly I don't work in embedded and could be missing something important but...
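The traditional trimming described above is mechanical on Slackware: rc.M only runs an /etc/rc.d script if its executable bit is set. Sketched here on a scratch copy so nothing in /etc is touched; rc.bluetooth and rc.sshd are just example script names.

```shell
# On a real system these files live in /etc/rc.d; a scratch copy keeps this harmless.
mkdir -p /tmp/rc.d
touch /tmp/rc.d/rc.bluetooth /tmp/rc.d/rc.sshd
chmod +x /tmp/rc.d/rc.sshd        # executable: rc.M will start it at boot
chmod -x /tmp/rc.d/rc.bluetooth   # not executable: silently skipped at boot
[ -x /tmp/rc.d/rc.bluetooth ] || echo "rc.bluetooth would be skipped"
```

Disabling a service is just clearing a bit - no dependency graph to reason about, which is the deterministic simplicity being argued for.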
 
Old 05-16-2014, 09:53 PM   #51
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by Arkerless View Post
Sure, but 1000*10 seconds vs 1000*11 seconds is a difference of over 16.5 minutes. None of us are even likely to notice a 1 second improvement in boot time but a VPS provider very well might notice the aggregate. For a big enough shop they might even measure the improvement in terms of needing fewer machines to provide service.
Insignificant.

And if boot times determine the number of machines needed, there is a real failure. They are using the wrong machines and should be running on a Power 7 or a mainframe.
Quote:


If you want to be certain that it all comes up in order and that errors are flagged as they occur and handled appropriately, yeah, that can make a very long boot.

If you know (or think you know) ahead of time, however, exactly what the boot situation will be, and you are trying to boot VMs on demand and serve the maximum number of customers with the minimum of hardware, I can see how it becomes tempting to jury-rig things for better performance. If you know this is a VM and DHCP will be provided in <1 second by a daemon running on the same physical machine, then it would seem safe to assume success and go right on without verifying, right? (Not saying that this is ultimately a good decision, certainly not in every case, but I was having difficulty seeing why anyone would even think about going down this road at all until I came across a discussion in the VPS context where I started seeing logic that at least makes some sort of sense for some use cases.)
I know of no Intel machine that can reliably support "thousands of VMs". And even if the VMs are all supported on one machine, DHCP on that one machine will be swamped addressing "thousands of VMs booting" - and the delay will be more than a few seconds. (I have seen delays of up to 15 seconds with just 40 VMs; total boot time was still under 15 minutes, and that was 10 years ago.)
Quote:

Well that's what I started off thinking, how incredibly short sighted to spend so much time and effort breaking stuff that works well and re-architecting so much of the system to shave a few seconds (at most) off a procedure that occurs so infrequently to begin with, it just makes no sense at all, it's completely irrational.

In my experience, while people are often irrational to some degree, very few are ever completely irrational - if I look closely I can usually find some twist of circumstance where what appeared completely irrational at first glance actually makes some sort of sense once you understand the true motivation.

In this case, if it is not to facilitate massive parallel VPS bootups I am at a loss as to what other goal is being served.
The only one I can think of where very short boot times would be nice is on a laptop. And laptops are very simple configurations, with almost no dependencies (only the disk, network, and GUI).

Other than that, systemd only seems to be designed to make the developers egos bigger.
 
Old 05-16-2014, 10:00 PM   #52
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by briselec View Post
Fast boot times are a common requirement with embedded systems.
In the future when I get in my self-driving car I don't want to have to wait 20 secs or more for the system to boot.
Quite. Which is why they don't use systemd OR sysVinit. Both are supposed to be generic, and customizable.

And an embedded init is already customized to start only what the hardware requires - and the hardware is known ahead of time and doesn't change. They also don't have multiple roots (an initrd is a root system); they go directly to the real root with no initrd (saves 5 seconds or more that way - all drivers required are compiled in). The only use for the root filesystem is to run a designated application that serves as the init process.

Now that embedded system might have additional tools - diagnostics, an update procedure... but these are not started at boot time unless some maintenance switch directs it. So normal boots wouldn't be bothered.
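The "designated application as init" setup described above can be sketched concretely; the device path and application name here are hypothetical, and the wrapper is written to a scratch file only so its syntax can be checked - nothing is installed.

```shell
# Bootloader kernel line pointing PID 1 straight at the application
# (root device and app name are hypothetical):
#   append = "root=/dev/mmcblk0p2 ro init=/sbin/app-init"
#
# A minimal wrapper the kernel would exec as PID 1:
cat > /tmp/app-init <<'EOF'
#!/bin/sh
mount -t proc proc /proc     # kernel interfaces the app may need
mount -t sysfs sysfs /sys
exec /usr/bin/my-appliance   # PID 1 becomes the designated application
EOF
sh -n /tmp/app-init && echo "wrapper syntax OK"
```

With all drivers compiled in, there is no initrd stage at all: the kernel mounts the real root and execs this one script, which is the entire "init system".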
 
Old 05-16-2014, 11:48 PM   #53
Arkerless
Member
 
Registered: Mar 2006
Distribution: Give me Slack or give me death.
Posts: 81

Rep: Reputation: 60
Quote:
Originally Posted by jpollard View Post
Insignificant.

And if boot times determine the number of machines needed, there is a real failure. They are using the wrong machines and should be running on a Power 7 or a mainframe.
Eh, devil's advocate here: I agree it would be a better world if people just picked the right hardware from the ground up, but I bet there are a lot of shops running VPS on PC hardware nonetheless, and it should work. The I/O for boot-up could in theory be kept almost entirely in cache if they are all running exactly the same binaries and only have to check disk for config files on boot.

Quote:
I know of no Intel machine that can reliably support "thousands of VMs".
A lot of commercial services emphasize price over reliability. The more subscribers they can service on a fixed number of machines, whatever that number is, the better for them obviously. And if they can detect an attempt to use a virtual server and then boot it near instantaneously in response, they would only need to actually run a fraction of them at any given time.

At least, that's what I suspect the business side would be thinking here.

Quote:
And even if the VMs are all supported on one machine, DHCP on that one machine will be swamped addressing "thousands of VMs booting" - and the delay will be more than a few seconds. (I have seen delays of up to 15 seconds with just 40 VMs; total boot time was still under 15 minutes, and that was 10 years ago.)
Thousands at once, yes, but 1000 per hour seems roughly in the realm of possibility, no? That would be 16 or 17 VMs booting up every minute, spread out over the hour. Assume you only need to touch disk once for each instance, to grab the config file. All the binaries are exactly the same for each instance and can be kept in RAM. Everything can be run in parallel; DHCP will of course take a measurable amount of time, but we can assume a priori that it WILL succeed (if it does not succeed, that's simply beyond the scope of competence of the VM to address - we have bigger problems, they have to be addressed at another level, and a failed boot is an acceptable response).

Not my use case, not the use case of anyone else I know, but maybe where the companies that pay people to work on this think they see an expanding market?

Quote:
The only one I can think of where very short boot times would be nice is on a laptop. And laptops are very simple configurations, with almost no dependencies (only the disk, network, and GUI).

Other than that, systemd only seems to be designed to make the developers egos bigger.
I guess at some level most people do develop for ego but the suits putting money into it have to believe there is something here that will benefit them.

On a laptop, personally, I find suspend to disk nearly eliminates the need to reboot.

Last edited by Arkerless; 05-16-2014 at 11:51 PM.
 
Old 05-17-2014, 02:57 AM   #54
a4z
Senior Member
 
Registered: Feb 2009
Posts: 1,727

Rep: Reputation: 742
Quote:
Originally Posted by Arkerless View Post
Embedded systems generally should have optimal boot times using traditional techniques - just make sure you are not loading anything unnecessarily, and that the things that do have to load do so correctly. Nondeterministic load orders and socket activation offer no improvement here, and they *might* cause the boot to fail.
Quote:
Originally Posted by jpollard View Post
And an embedded init is already customized to start only what the hardware requires - and the hardware is known ahead of time and doesn't change. They also don't have multiple roots (an initrd is a root system); they go directly to the real root with no initrd (saves 5 seconds or more that way - all drivers required are compiled in). The only use for the root filesystem is to run a designated application that serves as the init process.
Both true for a lot of places, but in reality I see more and more 'out of the box' embedded installations.
systemd already has a layer in OpenEmbedded, and I expect to see it on real embedded devices soon.
When Ubuntu switches, Linaro builds will come with systemd, and those builds ship with a lot of BSPs.

And embedded nowadays is also an Intel Atom board, x86; just recently I replaced an Arch Linux install with Slackware.
Guess what: boot times are an issue.
 
Old 05-17-2014, 03:46 AM   #55
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,558
Blog Entries: 15

Rep: Reputation: 2097
One thing I can honestly say about systemd, after reading Bruce Dubbs' notes on it from B/LFS, is that it is mostly geared toward UI-driven systems and not so much command-line ones, which was one of several reasons Bruce removed it from the book.

Slackware is a command-line-driven distribution, so I don't see how systemd's toolkit would be of any real, extended benefit over the standard SysV utilities.

If you only need to shave off boot time, why bother with systemd at all? Three other init systems exist that tree-load daemons and dependencies in parallel - namely OpenRC, Runit, and s6 - and if you need daemon management, three toolkits likewise offer it: Perp, s6, and Runit.

Boot times are an issue but they're a fickle argument to make. The real argument should be trying to find a suitable, less taxing, less intrusive, and more open standard successor to SysVinit that replaces only what it needs to replace, which is only SysVinit. Skarnet's s6 and Gerrit Pape's Runit might not be perfect successors, but as we've had talks over in the LFS section about Runit's implementation, maybe it doesn't have to be entirely a perfect replacement, maybe it needs to be only a simple step in the right direction.

Our work with Runit is fully public over in the LFS section, and could easily be translated to work with Slackware, or any other distribution for that matter. Our goal isn't perfection, but something that works as a semi-perfect replacement for SysVinit and could be easily replicated - maybe even on Slackware.
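For the curious, a runit service is just a directory containing an executable run script, which is most of the interface. This sketch is written under /tmp so nothing is installed, and sshd is only an example service:

```shell
# A runit service is a directory with an executable "run" script.
# On a real system it would live in /etc/sv and be symlinked into the
# directory runsvdir watches (often /service or /var/service).
mkdir -p /tmp/sv/sshd
cat > /tmp/sv/sshd/run <<'EOF'
#!/bin/sh
exec /usr/sbin/sshd -D    # -D keeps sshd in the foreground so runsv can supervise it
EOF
chmod +x /tmp/sv/sshd/run
# Activation would then be:  ln -s /tmp/sv/sshd /service/
```

The daemon must stay in the foreground so the supervisor can restart it on exit; that one convention replaces most of what a pidfile-based SysV script does.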
 
Old 05-17-2014, 03:53 AM   #56
jtsn
Member
 
Registered: Sep 2011
Posts: 922

Rep: Reputation: 480
Quote:
Originally Posted by briselec View Post
Fast boot times is a common requirement with embedded systems.
Most smartphones take about a minute to boot, the main reasons being slow flash memory and a fairly slow ARM CPU. It's not an issue though, because these devices boot up once and then stay on all the time.
 
Old 05-17-2014, 04:30 AM   #57
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,558
Blog Entries: 15

Rep: Reputation: 2097
Quote:
Originally Posted by jtsn View Post
Most smartphones take about a minute to boot, the main reasons being slow flash memory and a fairly slow ARM CPU. It's not an issue though, because these devices boot up once and then stay on all the time.
Quite true. There's only a handful of times you really ever need to reboot a Smartphone. It's not like any other typical computer, same with tablets.
 
Old 05-17-2014, 08:51 AM   #58
dunric
Member
 
Registered: Jul 2004
Distribution: Void Linux, former Slackware
Posts: 498

Rep: Reputation: 100
Quote:
Originally Posted by Richard Cranium View Post
Well, the system requirements for ruby are at least 65 times greater than the current init. I'll admit that I didn't explicitly mention that, but if you are going to use an interpreted language more complicated than Forth (hello, OpenBoot!) then you can expect your runtime requirements to go up a bit.
Why should the dependency count even matter in this case? Is it so hard to understand that the article and the code are for learning purposes only, not a replacement for a real init?
 
Old 05-17-2014, 10:12 AM   #59
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Quote:
Originally Posted by ReaperX7 View Post
Quite true. There's only a handful of times you really ever need to reboot a Smartphone. It's not like any other typical computer, same with tablets.
I don't know about you, but my systems stay up for months... Only rebooting for power failures, or kernel updates... My last reboot was 119 days ago...

And in all the computer centers I worked in, uptime was required to be continuous if at all possible, with only scheduled downtime or hardware failures. Most of the hardware failures were handled without requiring the system to be shut down. Even memory failures were handled automatically (as were CPU failures), unless there was no more reserve memory (or CPUs) available to substitute for marked-bad pages (aren't mainframes typical computers too?). Even the Intel-based servers didn't cause outages for hardware, due to redundant servers and automatic failover - the VMs would just migrate to one of the still-running alternate servers without rebooting.

In our center even reinstalling an entire VM from scratch to operational only needed 15 minutes. Installing a host server took a bit longer: 20 minutes to install, and a day or three to validate and carry out a burn-in test before operational use. A new server with a new hardware configuration would take about the same amount of time, though setting up the operational configuration took longer (and we would reinstall from our configuration management server after the changes to make sure everything went as expected). But once the setup was finished, recorded, verified, and the operational burn-in complete - back to the 20 minutes for a full reinstall.

Boot times were really irrelevant. (As I recall, it was under 30 seconds for a VM, which was faster than I could think of the next thing I needed that particular VM for - unless it was testing a new filesystem or network interface where my test was already set up... so I had to try an ssh connection three or four times before a connection was made. Big deal.)

What was more important was reliability. If it worked once, it should work EVERY time. And that isn't a property of systemd.
 
Old 05-17-2014, 10:46 AM   #60
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: McKinney, Texas
Distribution: Slackware64 15.0
Posts: 3,858

Rep: Reputation: 2225
Quote:
Originally Posted by dunric View Post
Why should the dependency count even matter in this case?
I wasn't referring to the "dependency count" (whatever the hell that is) but to the sheer size of the executable and shared libraries required to run each one.

Quote:
Is it so hard to understand that the article and the code are for learning purposes only, not a replacement for a real init?
I fully understand that. I'm not certain that some of the people commenting on the article in this thread do, given the comments about "ohmygod, that still uses systemD stuff"
 
  


