Debian: This forum is for the discussion of Debian Linux.
View Poll Results: systemd vs. upstart, or else?
sysVinit: if they can't decide, just keep the status quo; it has worked for 40+ years.
Replace a trusted and reliable initialisation system with a Comical creation? Even Buntu will not be using it. They will be going with systemd after failing to convince Debian to adopt their upstart.
I fail to see why there is such a rush to adopt an init and service management system that is still in development and, by the authors' own long-term goals, mostly incomplete. Upstart, Runit, OpenRC, and SysV+perp have all been in stable maintenance for years now.
Not to put down any efforts by the Debian developers, but they should focus on a long-term stable init and service management system rather than one that has not been completed and fully finalized. I understand a lot of projects are adding code for systemd, but they are still maintaining support for non-systemd code as well. Just because ConsoleKit got pushed into maintenance mode to focus on systemd-logind doesn't mean that developers are going to just scrap ConsoleKit support. Even they know people and systems all vary. They'd be downright stupid to just scrap legacy support, especially if other platforms use it as the norm.
Quote:
Replace a trusted and reliable initialisation system with a Comical creation? Even Buntu will not be using it. They will be going with systemd after failing to convince Debian to adopt their upstart.
Ouch. I don't like systemd. To me it is way too complicated and overdesigned, with too many dependencies and way too many design goals. Upstart has much simpler goals and achieves them in a simpler fashion.
Quote:
Not to put down any efforts by the Debian developers, but they should focus on a long-term stable init and service management system rather than one that has not been completed and fully finalized.
My guess is they see the writing on the wall. It is only a matter of time before Red Hat gets systemd and whatever else will be "attached to it" forced on everyone. Perhaps Debian's developers are unwilling to fight the inevitable alone.
LOL no. Bare metal:
app --> kernel --> drivers --> hardware
VM:
app in guest --> kernel in guest --> drivers in guest --> virtualization infrastructure and host kernel virtualization components --> on-metal hardware
It would only be useful if you were using a jail, and that is not what I am doing. The I/O wait and memory access times would be higher, since VM mapping has to go through two kernels instead of one, and since the disk is a file on the host's filesystem.
Sorry for the late reply, but that's just ridiculous.
KVM can be instructed to do whatever you want when booting.
Real hardware needs to be accessed and validated (taking about 75% of your boot time).
GNU/Hurd boots in under 1 second in KVM for me (using neither sysv nor sysd); that's not even possible on bare metal ...
Quote:
My guess is they see the writing on the wall. It is only a matter of time before Red Hat gets systemd and whatever else will be "attached to it" forced on everyone. Perhaps Debian's developers are unwilling to fight the inevitable alone.
You're confusing the actual init part (PID 1) with the whole stack on top.
Ubuntu's upstart is using logind as well ...
Even if this thing goes wrong, there's always room to replace systemd's PID 1 and run the other components on a different init (OpenRC and friends).
Quote:
My guess is they see the writing on the wall. It is only a matter of time before Red Hat gets systemd and whatever else will be "attached to it" forced on everyone. Perhaps Debian's developers are unwilling to fight the inevitable alone.
How is it inevitable? Look at ConsoleKit vs. logind. It's technically the same exact service daemon, just rolled into systemd. ConsoleKit is stable too and hasn't needed maintenance or patches, even though the developers have deprecated it. It's viable and it still works, so how is logind any better or worse than ConsoleKit? It's neither. ConsoleKit is simply a stand-alone version of systemd-logind.
Systemd is nothing more than an amalgamated hypervisor that aims to eventually become the underlying OS of the entire Linux OS... and yes I said that correctly. Not GNU/Linux or even UNG/Linux, but Linux OS. Lennart's long term manifesto aims at having it control every underlying aspect of Linux as the systemd-OS.
The only inevitability is that sound, stable, and viable projects for GNU/Linux are going to go heavily unmaintained for extended periods of time, to the point where they become useless compared to systemd when it finally takes over the entire OS.
Here's a good question:
If, by far reaching chance, Lennart somehow miraculously wrote up systemd tomorrow to literally and completely replace every aspect of the GNU userland, compiler, libraries, and kernel tools, pull the entire core OS and kernel into PID-1, and then act as a hypervisor to the hardware resources, would you keep using it?
Or a better question: what choice would you have but to use it?
Quote:
Sorry for the late reply, but that's just ridiculous.
KVM can be instructed to do whatever you want when booting.
Real hardware needs to be accessed and validated (taking about 75% of your boot time).
GNU/Hurd boots in under 1 second in KVM for me (using neither sysv nor sysd); that's not even possible on bare metal ...
Oh, so the I/O penalty (by far the biggest bottleneck in computing since forever) is magically lower on virtualized hardware than on bare metal, eh? And a kernel somehow does not interact with virtualized hardware the same way it does with regular hardware? Do you have any tangible proof of your claims other than "oh, it is faster for me"?
Maybe a few unbiased, peer-reviewed articles not churned out by VMware or the other cloud pushers?
Quote:
Oh, so the I/O penalty (by far the biggest bottleneck in computing since forever) is magically lower on virtualized hardware than on bare metal, eh? And a kernel somehow does not interact with virtualized hardware the same way it does with regular hardware? Do you have any tangible proof of your claims other than "oh, it is faster for me"?
Maybe a few unbiased, peer-reviewed articles not churned out by VMware or the other cloud pushers?
It should be easy to prove your claim that VMs have to boot slower than bare metal: just do a few benchmarks.
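For what it's worth, such a benchmark needs very little tooling. Below is a minimal Python sketch of a repeated-trial timer; the `sleep` command is only a stand-in workload, and pointing it at an actual guest boot (e.g. starting the VM and polling until SSH answers, or reading `systemd-analyze time` on a systemd machine) is an assumption about your particular setup:

```python
import statistics
import subprocess
import time

def bench(cmd, runs=5):
    """Run `cmd` (an argv list) several times and report wall-clock stats.

    For the VM-vs-bare-metal question, replace `cmd` with whatever starts
    your guest and waits for it to come up; this harness only measures.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Stand-in workload: a 50 ms sleep instead of a real guest boot.
mean, stdev = bench(["sleep", "0.05"], runs=3)
print(f"mean {mean * 1000:.0f} ms (stdev {stdev * 1000:.0f} ms)")
```

Several runs plus a spread estimate matter more than any single number, since boot times vary with caches warm or cold.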
Quote:
It should be easy to prove your claim that VMs have to boot slower than bare metal: just do a few benchmarks.
It is simple logic, damn it: a VM can't utilize the underlying hardware at 100%. Even POWER and Z have at least a 5% penalty for their virtualization solutions, and those are light-years ahead, price- and performance-wise, of x86 virtualization capabilities. And there is no way to have your I/O work faster when your OS is on a disk image that is basically just a file stuck in the host's FS, unless of course you are using some caching tricks, which I am not, and I really doubt that the host page cache will help all that much. Besides, even with bare metal you still have DMA, the on-device buffer, and the disk's own readahead.
EDIT:
But please don't take my word on it, here is what Big Blue has to say on the subject:
Quote:
Originally Posted by "Kernel Virtual Machine (KVM): Best practices for KVM" - IBM
Most Linux file systems ensure that the metadata and files that are stored within the file system are aligned on 4 KB (page) boundaries. The second extended file system (ext2) and the third extended file system (ext3) are most commonly used. Both file systems use 4 KB alignment for all data.
Because a disk image file is a file in the file system, the disk image file is 4 KB aligned. However, when the disk image file is partitioned using the default or established partitioning rules, partitions can begin on boundaries or offsets within the image file that are not 4 KB aligned.
The probability of a partition, within a disk image file, starting on a 4 KB alignment is low for the following reasons:
- Partitions within a disk image file typically start and end on cylinder boundaries.
- Cylinder sizes typically are not multiples of 4 KB.
Within the guest operating system, disk I/O operations occur with blocks that are offset in 4 KB multiples from the beginning of the partitions. However, the offset of the partition in the disk image file is typically not 4 KB aligned with respect to the following items:
- The beginning of the disk image file.
- The file system in which the disk image file is located.
As a result, when a guest operating system initiates I/O operations of 4 KB to the partition in the disk image file, the I/O operations span two 4 KB pages of the disk image file. The disk image file is located in the page cache of the hypervisor. If an I/O operation is a write request, the hypervisor must perform a Read-Modify-Write (RMW) operation to complete the I/O request.
For example, the guest operating system initiates a 4 KB write operation. The hypervisor reads a page to update the last 1 KB of data. Then, the hypervisor reads the next page to update the remaining 3 KB of data. After the updates are finished, the hypervisor writes both modified pages back to the disk. For every write I/O operation from the guest operating system, the hypervisor must perform up to two read operations and up to two write operations. These extra read and write operations produce an I/O multiplication factor of four. The additional I/O operations create a greater demand on the storage devices. This increased demand can affect the throughput and response times of all the software using the storage devices.
To avoid the I/O effects from partitions that are not 4 KB aligned, ensure that you optimally partition the disk image file. Ensure that partitions start on boundaries that adhere to the 4 KB alignment recommendation. Most standard partitioning tools default to using cylinder values to specify the start and end boundaries of each partition. However, based on default established values used for disk geometry, like track size and heads per cylinder, cylinder sizes rarely fall on 4 KB alignment values. To specify an optimal start value, you can switch the partition tool to the Advanced or Expert modes.
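The RMW effect IBM describes is easy to model. Here is a small Python sketch (the function and its name are mine, for illustration) that counts how many 4 KB host pages a guest write touches, given where its partition starts inside the image file:

```python
SECTOR = 512   # logical sector size in bytes
PAGE = 4096    # host page-cache page size in bytes

def pages_touched(partition_start_sector, write_offset, write_len=PAGE):
    """How many host pages a guest write spans.

    `partition_start_sector` is the partition's start inside the disk
    image (in 512-byte sectors); `write_offset` is the byte offset of
    the write within that partition.
    """
    first = partition_start_sector * SECTOR + write_offset
    last = first + write_len - 1
    return last // PAGE - first // PAGE + 1

# Partition at sector 2048 (the modern 1 MiB default): a 4 KB guest
# write maps cleanly onto a single host page, no read-modify-write.
print(pages_touched(2048, 0))  # -> 1
# Partition at the legacy sector 63: every 4 KB guest write straddles
# two host pages, which is exactly IBM's RMW scenario.
print(pages_touched(63, 0))    # -> 2
```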
Quote:
It is simple logic, damn it: a VM can't utilize the underlying hardware at 100%. Even POWER and Z have at least a 5% penalty for their virtualization solutions, and those are light-years ahead, price- and performance-wise, of x86 virtualization capabilities. And there is no way to have your I/O work faster when your OS is on a disk image that is basically just a file stuck in the host's FS, unless of course you are using some caching tricks, which I am not, and I really doubt that the host page cache will help all that much. Besides, even with bare metal you still have DMA, the on-device buffer, and the disk's own readahead.
EDIT:
But please don't take my word on it, here is what Big Blue has to say on the subject:
With the advent of SSDs and Advanced Format hard disks, all modern OSes switched to automatically aligning partitions on 4 KB boundaries. The only partitioning tool that still does not do that by default, for whatever reason the developers may have, is cfdisk, which still starts the first partition on logical sector 63. Any other tool aligns correctly by default. Also, this is a partitioning issue that happens on bare metal, too, for example on SSDs or Advanced Format disks (disks with a 4 KB physical block size), and is not really related to VMs.
I still recommend backing up your reasoning about boot speed under different conditions with verifiable data from benchmarks. That is the easiest and fastest way to prove your hypothesis that booting in VMs has to be slower than on bare metal.
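Checking whether an existing partition suffers from the sector-63 problem is a one-liner; given the start sectors that e.g. `fdisk -l` prints, a sketch like this (the function name is mine) flags misalignment:

```python
SECTOR = 512  # bytes per logical sector, as reported by fdisk

def is_4k_aligned(start_sector):
    """True if a partition starting at this logical sector begins on a
    4 KiB boundary, i.e. its byte offset is a multiple of 4096."""
    return (start_sector * SECTOR) % 4096 == 0

print(is_4k_aligned(63))    # -> False (the legacy DOS/cfdisk default)
print(is_4k_aligned(2048))  # -> True (the modern 1 MiB default)
```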
Quote:
With the advent of SSDs and Advanced Format hard disks, all modern OSes switched to automatically aligning partitions on 4 KB boundaries. The only partitioning tool that still does not do that by default, for whatever reason the developers may have, is cfdisk, which still starts the first partition on logical sector 63. Any other tool aligns correctly by default. Also, this is a partitioning issue that happens on bare metal, too, for example on SSDs or Advanced Format disks (disks with a 4 KB physical block size), and is not really related to VMs.
I still recommend backing up your reasoning about boot speed under different conditions with verifiable data from benchmarks. That is the easiest and fastest way to prove your hypothesis that booting in VMs has to be slower than on bare metal.
I'd rather take IBM's word over yours; sorry, but the technology behemoth with a record number of patents and years at the top of Gartner's various rankings just beats you, for some reason.
Besides, if you've already done such benchmarks, or if jens has, please share them, since you are very strongly convinced.
Also, on a related matter, you do know that modern OSes actually allow a single page to be subdivided between files, so as to save on wasted space, right?
Quote:
I'd rather take IBM's word over yours; sorry, but the technology behemoth with a record number of patents and years at the top of Gartner's various rankings just beats you, for some reason.
Actually, I would recommend taking neither my word nor IBM's, but coming up with actual data. That is how it usually works: make a claim, back it up with data. I don't care how many patents IBM has, nor should you.
Quote:
Besides, if you've already done such benchmarks, or if jens has, please share them, since you are very strongly convinced.
Nope, I have not done actual benchmarks; I can only report what I have seen when experimenting with VMs, where, for example, an Ubuntu installation in a VM started much faster than the same system on bare metal. I have not investigated the reasons for that nor made actual benchmarks.
Quote:
Also, on a related matter, you do know that modern OSes actually allow a single page to be subdivided between files, so as to save on wasted space, right?
This is not clear to me: are you speaking about memory pages or filesystem blocks? If you mean filesystem blocks, that is neither a new feature nor a feature of the OS, but a feature of the filesystem in use; ReiserFS has been able to do this for a long time already. In any case, this is not related to the performance of VMs vs. bare metal.
To make that clear: the quotes from the IBM article you presented are not about VMs in general being slower, but are tips to prevent performance loss due to wrong configurations. It may be possible that VMs boot slower than bare metal, but from my experience this is not the case. If you are that certain that you are right, it shouldn't be a problem to back up those claims with actual data. Whether you want to do that is of course up to you, but without that data your claims are no better evidence than my experiences.
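On the block sub-allocation point: the saving that tail packing buys can be estimated with a back-of-the-envelope model. This Python sketch is a crude simplification of mine (real ReiserFS tail packing is more involved), comparing slack space with and without packing file tails into shared blocks:

```python
BLOCK = 4096  # filesystem block size in bytes

def slack_bytes(file_sizes, tail_packing=False):
    """Bytes of allocated-but-unused space for a set of file sizes.

    Without tail packing, every file's final partial block wastes the
    remainder of that block. With tail packing (crudely modeled here),
    all partial tails are packed together into shared blocks.
    """
    tails = [size % BLOCK for size in file_sizes]
    if not tail_packing:
        return sum((BLOCK - t) % BLOCK for t in tails)
    packed = sum(tails)
    return (BLOCK - packed % BLOCK) % BLOCK

sizes = [100, 5000, 8192]  # three illustrative files
print(slack_bytes(sizes))                     # -> 7188 wasted bytes
print(slack_bytes(sizes, tail_packing=True))  # -> 3092 wasted bytes
```

As the poster above notes, either way this is a property of the filesystem, not of VMs vs. bare metal.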
Quote:
Actually, I would recommend taking neither my word nor IBM's, but coming up with actual data. That is how it usually works: make a claim, back it up with data. I don't care how many patents IBM has, nor should you.
Nope, I have not done actual benchmarks; I can only report what I have seen when experimenting with VMs, where, for example, an Ubuntu installation in a VM started much faster than the same system on bare metal. I have not investigated the reasons for that nor made actual benchmarks.
This is not clear to me: are you speaking about memory pages or filesystem blocks? If you mean filesystem blocks, that is neither a new feature nor a feature of the OS, but a feature of the filesystem in use; ReiserFS has been able to do this for a long time already. In any case, this is not related to the performance of VMs vs. bare metal.
To make that clear: the quotes from the IBM article you presented are not about VMs in general being slower, but are tips to prevent performance loss due to wrong configurations. It may be possible that VMs boot slower than bare metal, but from my experience this is not the case. If you are that certain that you are right, it shouldn't be a problem to back up those claims with actual data. Whether you want to do that is of course up to you, but without that data your claims are no better evidence than my experiences.
So are you saying that you have no actual data to back up yours and jens' claims, is that it?
As to the FS thing, yeah, the "O" was a typo; that is what happens when you are using stupidly small touchscreens and haven't disabled autocorrect.