View Poll Results: systemd vs. upstart, or else?
sysVinit: if they can't decide, just keep the status quo; it has worked for 40+ years. 43 58.90%
openrc: a more traditional init system 18 24.66%
upstart: the Canonical non-GNU way 3 4.11%
systemd: the RHEL/SuSE/Poettering way. 12 16.44%
Multiple init systems: just let the user decide and leave the nightmare to the maintainers. 10 13.70%
Don't know/don't really care. 6 8.22%
Multiple Choice Poll. Voters: 73.

Old 03-18-2014, 12:41 AM   #31
Randicus Draco Albus
Senior Member
 
Registered: May 2011
Location: Hiding somewhere on planet Earth.
Distribution: No distribution. OpenBSD operating system
Posts: 1,711
Blog Entries: 8

Rep: Reputation: 635

Replace a trusted and reliable initialisation system with a Comical creation? Even Buntu will not be using it. They will be going with systemd after failing to convince Debian to adopt their upstart.
 
Old 03-18-2014, 12:58 AM   #32
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,558
Blog Entries: 15

Rep: Reputation: 2097
I fail to see why there is such a rush to adopt an init and service management system that is still in development and, measured against its author's own long-term goals, still mostly incomplete. Upstart, Runit, OpenRC, and SysV+perp have all been in stable maintenance for years now.

Not to put down any efforts by the Debian developers, but they should focus on a long-term stable init and service management system rather than one that has not been completed and finalized. I understand a lot of projects are adding code for systemd, but they are still maintaining support for non-systemd code as well. Just because ConsoleKit was pushed into maintenance mode in favour of systemd-logind doesn't mean that developers are going to just scrap ConsoleKit support. Even they know people and systems all vary. They'd be downright stupid to just scrap legacy support, especially if other platforms use it as the norm.
 
Old 03-18-2014, 01:19 AM   #33
geox
Member
 
Registered: Jan 2012
Posts: 42

Rep: Reputation: 2
Quote:
Originally Posted by Randicus Draco Albus View Post
Replace a trusted and reliable initialisation system with a Comical creation? Even Buntu will not be using it. They will be going with systemd after failing to convince Debian to adopt their upstart.
Ouch. I don't like systemd. To me it is way too complicated and overdesigned, with too many dependencies and way too many design goals. Upstart has much simpler goals and meets them in a simpler fashion.
 
Old 03-18-2014, 05:16 AM   #34
Randicus Draco Albus
Senior Member
 
Registered: May 2011
Location: Hiding somewhere on planet Earth.
Distribution: No distribution. OpenBSD operating system
Posts: 1,711
Blog Entries: 8

Rep: Reputation: 635
Quote:
Originally Posted by ReaperX7 View Post
Not to put down any efforts by the Debian developers, but they should focus on a long-term stable init and service management system rather than one that has not been completed and finalized.
My guess is they see the writing on the wall. It is only a matter of time before Red Hat gets systemd and whatever else will be "attached to it" forced on everyone. Perhaps Debian's developers are unwilling to fight the inevitable alone.
 
Old 03-18-2014, 07:46 AM   #35
jens
Senior Member
 
Registered: May 2004
Location: Belgium
Distribution: Debian, Slackware, Fedora
Posts: 1,463

Rep: Reputation: 299
Quote:
Originally Posted by vl23 View Post
LOL no. Bare metal:
app --> kernel --> drivers --> hardware
VM: app in guest --> kernel in guest --> drivers in guest --> virtualization infrastructure and host kernel virtualization components --> on-metal hardware
It would only be useful if you were using a jail, and that is not what I am doing. The I/O wait and memory access time would be higher, since VM mapping has to go through two kernels instead of one and since the disk is a file on the host system.
Sorry for the late reply, but that's just ridiculous.

KVM can be instructed to do whatever you want when booting.
Real hardware needs to be accessed and validated (taking about 75% of your boot time).

GNU/Hurd boots in under 1 second in KVM for me (using neither sysvinit nor systemd); that's not even possible on bare metal ...
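
If anyone wants to check this instead of arguing, here is a minimal sketch of what I mean by timing a boot (the names are assumptions: qemu-system-x86_64 on your PATH, a bootable guest.img whose sshd is forwarded to host port 2222), measuring from process launch until the guest answers on the network:

Code:
# Minimal boot-timing sketch. Assumptions: qemu-system-x86_64 is
# installed, guest.img is a bootable image, and the guest runs a
# service (sshd here) that can be polled on forwarded port 2222.
import socket
import subprocess
import time

QEMU_CMD = [
    "qemu-system-x86_64", "-enable-kvm", "-m", "1024",
    "-drive", "file=guest.img,format=raw",
    "-netdev", "user,id=n0,hostfwd=tcp::2222-:22",
    "-device", "virtio-net-pci,netdev=n0",
    "-display", "none",
]

def wait_for_port(host, port, timeout=120.0):
    """Poll host:port until it accepts a connection; return elapsed seconds."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return time.monotonic() - start
        except OSError:
            time.sleep(0.1)
    raise TimeoutError("guest did not come up within %s seconds" % timeout)

guest = subprocess.Popen(QEMU_CMD)
try:
    print("guest reachable after %.1f seconds" % wait_for_port("127.0.0.1", 2222))
finally:
    guest.terminate()

Run the same measurement against the bare-metal machine (power-on to sshd answering) and you have numbers you can actually compare.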
 
Old 03-18-2014, 08:01 AM   #36
jens
Senior Member
 
Registered: May 2004
Location: Belgium
Distribution: Debian, Slackware, Fedora
Posts: 1,463

Rep: Reputation: 299
Quote:
Originally Posted by Randicus Draco Albus View Post
My guess is they see the writing on the wall. It is only a matter of time before Red Hat gets systemd and whatever else will be "attached to it" forced on everyone. Perhaps Debian's developers are unwilling to fight the inevitable alone.
You're confusing the actual init part (PID 1) with the whole stack on top.
Ubuntu's upstart is using logind as well ...

Even if this thing goes wrong, there's always room to replace systemd's PID 1 and run the other components on a different init (OpenRC and friends).
 
Old 03-18-2014, 09:36 PM   #37
ReaperX7
LQ Guru
 
Registered: Jul 2011
Location: California
Distribution: Slackware64-15.0 Multilib
Posts: 6,558
Blog Entries: 15

Rep: Reputation: 2097
Quote:
Originally Posted by Randicus Draco Albus View Post
My guess is they see the writing on the wall. It is only a matter of time before Red Hat gets systemd and whatever else will be "attached to it" forced on everyone. Perhaps Debian's developers are unwilling to fight the inevitable alone.
How is it inevitable? Look at ConsoleKit versus logind: it's technically the exact same service daemon, just rolled into systemd. ConsoleKit is also stable and hasn't needed maintenance or patches, even though the developers have deprecated it. It's viable and it still works, so how is logind any better or worse than ConsoleKit? It's neither; ConsoleKit is simply a stand-alone version of systemd-logind.

Systemd is nothing more than an amalgamated hypervisor that aims to eventually become the underlying OS of the entire Linux OS... and yes I said that correctly. Not GNU/Linux or even UNG/Linux, but Linux OS. Lennart's long term manifesto aims at having it control every underlying aspect of Linux as the systemd-OS.

The only inevitability is that sound, stable, and viable GNU/Linux projects are going to go unmaintained for such extended periods that they become useless compared to systemd when it finally takes over the entire OS.

Here's a good question:

If, by some far-reaching chance, Lennart somehow rewrote systemd tomorrow to completely replace every aspect of the GNU userland, compiler, libraries, and kernel tools, pulled the entire core OS and kernel into PID 1, and then acted as a hypervisor over the hardware resources, would you keep using it?

Or a better question: what choice would you have but to use it?
 
Old 03-19-2014, 09:46 AM   #38
vl23
Member
 
Registered: Mar 2009
Posts: 125

Original Poster
Rep: Reputation: 8
Quote:
Originally Posted by jens View Post
Sorry for the late reply, but that's just ridiculous.

KVM can be instructed to do whatever you want when booting.
Real hardware needs to be accessed and validated (taking about 75% of your boot time).

GNU/Hurd boots in under 1 second in KVM for me (using neither sysvinit nor systemd); that's not even possible on bare metal ...
Oh, so the I/O penalty (by far the biggest bottleneck in computing since forever) is magically smaller on virtualized hardware than on bare metal, eh? And a kernel somehow does not interact with virtualized hardware the same way it does with regular hardware? Do you have any tangible proof of your claims other than "oh, it is faster for me"?
Maybe a few unbiased, peer-reviewed articles not churned out by VMware or the other cloud pushers?
 
Old 03-19-2014, 12:13 PM   #39
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by vl23 View Post
Oh, so the I/O penalty (by far the biggest bottleneck in computing since forever) is magically smaller on virtualized hardware than on bare metal, eh? And a kernel somehow does not interact with virtualized hardware the same way it does with regular hardware? Do you have any tangible proof of your claims other than "oh, it is faster for me"?
Maybe a few unbiased, peer-reviewed articles not churned out by VMware or the other cloud pushers?
It should be easy to prove your claim that VMs have to boot slower than bare metal: just do a few benchmarks.
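
On a systemd machine the numbers are even handed to you. A minimal sketch (assuming systemd-analyze is available, so this obviously will not work on a sysvinit box) that can be run on bare metal and inside the VM alike:

Code:
# Read out systemd's own boot timing and split it by phase.
# Assumption: a systemd-based system with systemd-analyze installed.
import re
import subprocess

out = subprocess.run(["systemd-analyze", "time"],
                     capture_output=True, text=True, check=True).stdout
# Typical output:
#   Startup finished in 2.584s (kernel) + 5.364s (userspace) = 7.948s
for value, phase in re.findall(r"([\d.]+)s \((\w+)\)", out):
    print("%10s: %ss" % (phase, value))
total = re.search(r"= ([\d.]+)s", out)
if total:  # plain seconds only; a "1min 2.3s" total would need more parsing
    print("%10s: %ss" % ("total", total.group(1)))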
 
Old 03-19-2014, 12:31 PM   #40
vl23
Member
 
Registered: Mar 2009
Posts: 125

Original Poster
Rep: Reputation: 8
Quote:
Originally Posted by TobiSGD View Post
It should be easy to prove your claim that VMs have to boot slower than bare metal: just do a few benchmarks.
It is simple logic, damn it. A VM can't utilize the underlying hardware at 100%; even POWER and Z have at least a 5% penalty for their virtualization solutions, and those are light-years ahead of x86 virtualization capabilities, price- and performance-wise. And there is no way to have your I/O work faster when your OS is on a disk image that is basically just a file stuck in the host's FS, unless of course you are using some caching tricks, which I am not, and I really doubt the host page cache will help all that much. Besides, even on bare metal you still have DMA, the on-device buffer, and the disk's own readahead.

EDIT:

But please don't take my word for it; here is what Big Blue has to say on the subject:

Quote:
Originally Posted by "Kernel Virtual Machine (KVM): Best practices for KVM - IBM";
Most Linux file systems ensure that the metadata and files, that are stored within the file system, are
aligned on 4 KB (page) boundaries. The second extended file system (ext2) and the third extended file
system (ext3) are most commonly used. Both file systems use 4 KB alignment for all data.
Because a disk image file is a file in the file system, the disk image file is 4 KB aligned. However, when
the disk image file is partitioned using the default or established partitioning rules, partitions can begin
on boundaries or offsets within the image file that are not 4 KB aligned.
The probability of a partition, within a disk image file, starting on a 4 KB alignment is low for the
following reasons:
v Partitions within a disk image file typically start and end on cylinder boundaries.
v Cylinder sizes typically are not multiples of 4 KB.
Within the guest operating system, disk I/O operations occur with blocks that are offset in 4 KB
multiples from the beginning of the partitions. However, the offset of the partition in the disk image file
is typically not 4 KB aligned with respect to the following items:
v The beginning of the disk image file.
v The file system in which the disk image file is located.
As a result, when a guest operating system initiates I/O operations of 4 KB to the partition in the disk
image file, the I/O operations span two 4 KB pages of the disk image file. The disk image file is located
in the page cache of the hypervisor. If an I/O operation is a write request, the hypervisor must perform a
Read-Modify-Write (RMW) operation to complete the I/O request.
For example, the guest operating system initiates 4 KB write operation. The hypervisor reads a page to
update the last 1 KB of data. Then, the hypervisor reads the next page to update the remaining 3 KB of
data. After the updates are finished, the hypervisor writes both modified pages back to the disk. For
every write I/O operation from the guest operating system, the hypervisor must perform up to two read
operations and up to two write operations. These extra read and write operations produce an I/O
Best practices for KVM
9multiplication factor of four. The additional I/O operations create a greater demand on the storage
devices. This increased demand can affect the throughput and response times of all the software using
the storage devices.
To avoid the I/O affects from partitions that are not 4 KB aligned, ensure that you optimally partition the
disk image file. Ensure that partitions start on boundaries that adhere to the 4 KB alignment
recommendation. Most standard partitioning tools default to using cylinder values to specify the start
and end boundaries of each partition. However, based on default established values used for disk
geometry, like track size and heads per cylinder, cylinder sizes rarely fall on 4 KB alignment values. To
specify an optimal start value, you can switch the partition tool to the Advanced or Expert modes.
Here is the source document http://pic.dhe.ibm.com/infocenter/ln...ctices_pdf.pdf
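
To make the quoted RMW point concrete, here is the arithmetic as a small sketch (nothing assumed beyond 512-byte sectors and 4 KB host pages): a 4 KB guest write into a partition starting on the classic sector 63 straddles two host pages, while a 1 MiB-aligned partition does not.

Code:
# Worked example of the IBM quote: how many 4 KB host pages does one
# 4 KB guest write touch, depending on where the partition starts?
SECTOR = 512
PAGE = 4096

def host_pages_touched(part_start_lba, guest_block):
    start = part_start_lba * SECTOR + guest_block * PAGE
    end = start + PAGE - 1
    return end // PAGE - start // PAGE + 1

for lba in (63, 2048):  # cylinder-style start vs. 1 MiB-aligned start
    offset = lba * SECTOR
    aligned = "" if offset % PAGE == 0 else "not "
    print("partition at LBA %d: offset %d (%s4 KB aligned), "
          "host pages per 4 KB write: %d"
          % (lba, offset, aligned, host_pages_touched(lba, 0)))

Every write that touches two pages is the Read-Modify-Write case described above.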

Last edited by vl23; 03-19-2014 at 12:46 PM.
 
Old 03-19-2014, 01:18 PM   #41
jens
Senior Member
 
Registered: May 2004
Location: Belgium
Distribution: Debian, Slackware, Fedora
Posts: 1,463

Rep: Reputation: 299
Quote:
Originally Posted by vl23 View Post
It is simple logic, damn it. A VM can't utilize the underlying hardware at 100%; even POWER and Z have at least a 5% penalty for their virtualization solutions, and those are light-years ahead of x86 virtualization capabilities, price- and performance-wise. And there is no way to have your I/O work faster when your OS is on a disk image that is basically just a file stuck in the host's FS, unless of course you are using some caching tricks, which I am not, and I really doubt the host page cache will help all that much. Besides, even on bare metal you still have DMA, the on-device buffer, and the disk's own readahead.

EDIT:

But please don't take my word for it; here is what Big Blue has to say on the subject:



Here is the source document http://pic.dhe.ibm.com/infocenter/ln...ctices_pdf.pdf
That has absolutely NOTHING to do with boot time in KVM.
Stop trolling.
 
Old 03-19-2014, 01:35 PM   #42
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by vl23 View Post
It is simple logic, damn it. A VM can't utilize the underlying hardware at 100%; even POWER and Z have at least a 5% penalty for their virtualization solutions, and those are light-years ahead of x86 virtualization capabilities, price- and performance-wise. And there is no way to have your I/O work faster when your OS is on a disk image that is basically just a file stuck in the host's FS, unless of course you are using some caching tricks, which I am not, and I really doubt the host page cache will help all that much. Besides, even on bare metal you still have DMA, the on-device buffer, and the disk's own readahead.

EDIT:

But please don't take my word for it; here is what Big Blue has to say on the subject:



Here is the source document http://pic.dhe.ibm.com/infocenter/ln...ctices_pdf.pdf
With the advent of SSDs and Advanced Format hard disks, all modern OSes have switched to automatically aligning partitions on 4 KB boundaries. The only partitioning tool that still does not do that by default, for whatever reason its developers may have, is cfdisk, which still starts the first partition on logical sector 63; any other tool aligns correctly by default. Also, this is a partitioning issue that happens on bare metal too, for example on SSDs or Advanced Format disks (disks with a 4 KB physical block size), and is not really related to VMs.
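
For anyone who wants to check their own images, here is a minimal sketch (assuming a raw, MBR-partitioned image called disk.img; GPT would need different parsing) that reads the partition table and reports which partitions start 4 KB aligned:

Code:
# Minimal MBR alignment check. Assumption: disk.img is a raw image
# with a classic MBR partition table (GPT needs different parsing).
import struct

SECTOR = 512
PAGE = 4096

with open("disk.img", "rb") as f:
    mbr = f.read(512)

for i in range(4):  # four primary partition entries, 16 bytes each
    entry = mbr[446 + 16 * i:446 + 16 * (i + 1)]
    ptype = entry[4]
    if ptype == 0:
        continue  # empty slot
    start_lba, num_sectors = struct.unpack_from("<II", entry, 8)
    offset = start_lba * SECTOR
    state = "OK" if offset % PAGE == 0 else "MISALIGNED"
    print("partition %d: type 0x%02x, starts at sector %d (byte %d): %s"
          % (i + 1, ptype, start_lba, offset, state))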

I still recommend backing up your ideas about boot speed under different conditions with verifiable benchmark data. That is the easiest and fastest way to prove your hypothesis that booting in a VM has to be slower than on bare metal.
 
Old 03-19-2014, 03:10 PM   #43
vl23
Member
 
Registered: Mar 2009
Posts: 125

Original Poster
Rep: Reputation: 8
Quote:
Originally Posted by TobiSGD View Post
With the advent of SSDs and Advanced Format hard disks, all modern OSes have switched to automatically aligning partitions on 4 KB boundaries. The only partitioning tool that still does not do that by default, for whatever reason its developers may have, is cfdisk, which still starts the first partition on logical sector 63; any other tool aligns correctly by default. Also, this is a partitioning issue that happens on bare metal too, for example on SSDs or Advanced Format disks (disks with a 4 KB physical block size), and is not really related to VMs.

I still recommend backing up your ideas about boot speed under different conditions with verifiable benchmark data. That is the easiest and fastest way to prove your hypothesis that booting in a VM has to be slower than on bare metal.
I'd rather take IBM's word over yours; sorry, but the technology behemoth with a record number of patents and years at the top of Gartner's various rankings just beats you, for some reason.

Besides, if you've already done such benchmarks, or if jens has, please share them, since you are so strongly convinced.

Also, on a related matter, you do know that modern OSes actually allow a single page to be subdivided between files, so as to save on wasted space, right?
 
Old 03-19-2014, 05:06 PM   #44
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by vl23 View Post
I'd rather take IBM's word over yours; sorry, but the technology behemoth with a record number of patents and years at the top of Gartner's various rankings just beats you, for some reason.
Actually, I would recommend taking neither my word nor IBM's, but coming up with actual data. That is how it usually works: make a claim, back it up with data. I don't care how many patents IBM has, and neither should you.

Quote:
Besides, if you've already done such benchmarks, or if jens has, please share them, since you are so strongly convinced.
Nope, I have not run actual benchmarks. I can only report what I have seen when experimenting with VMs, where, for example, an Ubuntu installation in a VM started much faster than the same system on bare metal. I have not investigated the reasons for that.
Quote:
Also, on a related matter, you do know that modern OSes actually allow a single page to be subdivided between files, so as to save on wasted space, right?
This is not clear to me: are you speaking about memory pages or filesystem blocks? If you mean filesystem blocks, that is neither a new feature nor a feature of the OS, but a feature of the filesystem in use; ReiserFS has been able to do this for a long time already. Either way, this is not related to the performance of VMs vs. bare metal.
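
To put a number on what that buys you, here is a toy sketch of the slack-space arithmetic (the file sizes are made up, and ReiserFS-style tail packing is idealized here as packing all the tails perfectly):

Code:
# Toy estimate of slack space: whole-block allocation vs. idealized
# tail packing. File sizes below are invented for illustration.
import math

BLOCK = 4096
file_sizes = [100, 5000, 4096, 12345, 300, 8192]

whole_blocks = sum(math.ceil(s / BLOCK) for s in file_sizes) * BLOCK
tails = [s % BLOCK for s in file_sizes if s % BLOCK]
# Full blocks per file, plus all the tails packed together.
packed = (sum(s // BLOCK for s in file_sizes) * BLOCK
          + math.ceil(sum(tails) / BLOCK) * BLOCK)

data = sum(file_sizes)
print("data: %d B" % data)
print("whole-block allocation: %d B (slack %d B)" % (whole_blocks, whole_blocks - data))
print("idealized tail packing: %d B (slack %d B)" % (packed, packed - data))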

To make that clear: the quotes from the IBM article you presented are not about VMs in general being slower; they are tips to prevent performance loss due to wrong configuration. It may be possible that VMs boot slower than bare metal, but in my experience this is not the case. If you are that certain you are right, it shouldn't be a problem to back up your claims with actual data. Whether you do that is of course up to you, but without that data your claims are no more valid as evidence than my experiences.
 
1 member found this post helpful.
Old 03-20-2014, 09:42 AM   #45
vl23
Member
 
Registered: Mar 2009
Posts: 125

Original Poster
Rep: Reputation: 8
Quote:
Originally Posted by TobiSGD View Post
Actually, I would recommend taking neither my word nor IBM's, but coming up with actual data. That is how it usually works: make a claim, back it up with data. I don't care how many patents IBM has, and neither should you.

Nope, I have not run actual benchmarks. I can only report what I have seen when experimenting with VMs, where, for example, an Ubuntu installation in a VM started much faster than the same system on bare metal. I have not investigated the reasons for that.
This is not clear to me: are you speaking about memory pages or filesystem blocks? If you mean filesystem blocks, that is neither a new feature nor a feature of the OS, but a feature of the filesystem in use; ReiserFS has been able to do this for a long time already. Either way, this is not related to the performance of VMs vs. bare metal.

To make that clear: the quotes from the IBM article you presented are not about VMs in general being slower; they are tips to prevent performance loss due to wrong configuration. It may be possible that VMs boot slower than bare metal, but in my experience this is not the case. If you are that certain you are right, it shouldn't be a problem to back up your claims with actual data. Whether you do that is of course up to you, but without that data your claims are no more valid as evidence than my experiences.
So you are saying that you have no actual data to back up your and jens's claims, is that it?

As to the FS thing, yeah, the O was a typo; that is what happens when you are using stupidly small touchscreens and haven't disabled autocorrect.
 
  


