Quote:
Originally Posted by never-never-land
I read that both use different kinds of "Service Management Facility": OpenRC (sabayon), Upstart (ubuntu). and I'm curious whether this is the cause of that behaviour or is it something else?
|
Hmmm, I haven't done any testing on this particular issue, but I have my doubts about it being upstart.
Firstly, as far as systemd is concerned: it is quite a lot 'cleverer' than the traditional SysV init (and 'cleverer' here is a distinctly double-edged sword; to be frank, I don't look forward to debugging it when it goes wrong), but its primary advantage is that the computer should boot more quickly. I'm not sure how upstart will survive now that systemd is on the block; from what I remember, Ubuntu was the only major distro that had committed to upstart, and if systemd proves trouble-free it seems likely to get more converts. Note that 'if', though.
So I think that there is a strong case for considering whether it might be something else.
Quote:
Originally Posted by never-never-land
Yet there are 2 distros that seem to have an extra edge over the others: Sabayon & Ubuntu. While Sabayon gives the fastest raw performance, Ubuntu has a unique way of dealing with multitasking, meaning I can open many tasks in the background yet my PC will barely slow down or lag. But the really interesting part is that in every other OS/distro, when your PC starts lagging you have to immediately stop executing more tasks!
|
...first report of Ubuntu as a speed demon, in the Unixy-world... That's worth noting.
Firstly, this kind of behaviour is likely to be influenced by 'swappiness', preload, and whether swapping is occurring at all. Whether swapping occurs depends primarily on how much memory is in your current working set and how much RAM you have. Your working set depends on the general bloat factors: which services you are running and how many (which is influenced by the distro and its attitude to 'stripped bare' vs 'everything you want, already there'), the GUI, which can vary from an absolute memory hog to rather less so, and even how the GUI is configured.
Playing with 'swappiness' is quite a frequent hobby amongst Ubuntu users; the easiest summary is that there is no free lunch here: a value of swappiness that makes things better under one set of circumstances may make things worse (possibly dramatically worse) under some other set of circumstances. So, if you fancy making things better in the 95% case of normal usage, at the cost of making things rather worse in exceptional overload conditions, this is probably something worth playing with.
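If you do want to experiment, the knob lives in /proc; here's a minimal sketch for a typical desktop install (the value 10 below is purely an illustrative choice, not a recommendation):

```shell
# Check the current swappiness (the usual default is 60;
# lower values make the kernel less eager to swap)
cat /proc/sys/vm/swappiness

# To change it for the current session only (needs root):
#   sysctl vm.swappiness=10
# To make it survive a reboot, add this line to /etc/sysctl.conf:
#   vm.swappiness=10
```

Remember the no-free-lunch point above: test under your own workload before making anything permanent.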
Preload does some of the work of loading common apps at boot, so that they appear to start faster when you do get around to clicking the icon. Again, somewhat deceptive, as you are sacrificing time at boot/initialisation for an advantage later on, but this is probably a trade-off worth making for many people.
(Note: 'have enough RAM' is probably the only performance tip that most people need. Measuring whether you have enough RAM is one of the first things you should do if there are any performance issues, and obviously that depends, amongst other things, on how much 'bloat' you are running. Also note that while 'bloated' GUIs have an impact on the amount of memory needed, if you need 4 Gigs for the applications you are running, no choice of GUI in the world will turn that into 3 Gigs.)
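Measuring it is straightforward; something like the following (standard tools that should be present on all the distros mentioned) answers "do I have enough RAM, and am I swapping?":

```shell
# Total/used/free memory and swap, in megabytes
free -m

# The raw numbers behind 'free', if you prefer them;
# if SwapFree is well below SwapTotal under normal load,
# you are probably short of RAM
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree)' /proc/meminfo
```

If swap usage keeps growing during ordinary desktop use, no amount of distro-hopping will fix it; more RAM (or less bloat) will.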
For Sabayon, Gentoo (the distro, not the file manager) and friends there is another factor: you compile for your particular processor, rather than accepting a 'lowest common denominator' build for a more common processor. This is nice, but in common usage it doesn't really make a night-and-day difference; have a look at the testing that Phoronix does from time to time in order to learn more (there are some special cases where there can be a noticeable gain, but it doesn't affect the normal experience).
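For illustration, on Gentoo/Sabayon those choices live in /etc/portage/make.conf; a minimal sketch (the exact flags here are an assumption for a generic modern CPU; check the Gentoo handbook for values appropriate to your hardware):

```shell
# /etc/portage/make.conf -- illustrative fragment only
# -march=native tells GCC to optimise for the CPU doing the compiling
CFLAGS="-O2 -march=native -pipe"
CXXFLAGS="${CFLAGS}"
# Number of parallel compile jobs; a common rule of thumb is cores + 1
MAKEOPTS="-j3"
```

As the Phoronix numbers suggest, expect single-digit percentage gains from this in everyday use, not a transformation.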
In fact, it is usually said that 'having control' is more important than 'custom compilation', but that rather depends on how much you can leave out because you know exactly which services you need to run and which you can omit. If you are going to include this service, and that service, and the other service because you don't know whether you'll end up needing them, then this won't really be a big gain.
There is more to be gained by ensuring disk mounts are sensible (noatime, or similar), though most distros do that by default these days, and by choosing a sensible filesystem type (again, often 'right' by default these days, but not guaranteed).
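Checking whether your root filesystem already has this is easy enough (the fstab line in the comment is a generic example, not your actual entry):

```shell
# See which options the root filesystem is mounted with;
# look for 'noatime' or 'relatime' in the output
mount | grep ' on / '

# An /etc/fstab entry with noatime looks something like:
#   UUID=<your-uuid>  /  ext4  defaults,noatime  0  1
```

If you only see plain 'atime' semantics, adding noatime saves a metadata write on every file read, which is a cheap win on both spinning disks and SSDs.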
And there are also regressions (sometimes incorrectly so described) between kernel versions, and those affect all distros using the affected versions, so there isn't really any 'Distro X is better than Distro Y' advantage here: all distros using a late-2011 kernel will get whatever is in a late-2011 kernel. The exception is the enterprise-style distros, which use much older kernels with subsequent patches backported, with whatever advantages or disadvantages that brings. Look again at Phoronix, where you can see that small advances and regressions are really quite common ('regression' is arguably the wrong description when a data-security issue is fixed and performance drops as a consequence, since the word is primarily used to mean 'something went wrong').
All this makes it difficult to take a 'Distro X is faster than Distro Y' report at face value without knowing the kernel versions and all sorts of details about how the systems were set up, whether that is about the defaults, which may differ, or about which settings have been 'tweaked' as a result of personal preference or prejudice.
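So, as a bare minimum before comparing two installs, record a couple of the variables discussed above on each machine (just a sketch; swappiness is one example tunable among many):

```shell
# The kernel version this install is running -- comparisons
# across different kernels are apples vs oranges
uname -r

# One example of a tunable that may silently differ
# between two distros' defaults
cat /proc/sys/vm/swappiness
```

If those differ between the two machines, you're benchmarking configurations, not distros.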