Alternative to 200 lines kernel patch, /sys/fs/cgroup/cpu missing
Slackware: This forum is for the discussion of Slackware Linux.
In the Slashdot discussion on the subject it was mentioned several times that this will only work for applications started from a tty, or a single bash session. This means, as Con Kolivas wrote, that "normal" GUI apps started from the desktop (e.g. via krunner or the KDE menu) will not see a difference (as all of them will be grouped together), and that in any case the window manager could be modified to do something similar to what the scripts do.
This has changed slightly: the latest version of the patch works per session.
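If you want to check what your own kernel is doing, here's a rough sketch. It assumes a kernel built with CONFIG_SCHED_AUTOGROUP; on anything else the proc files below simply won't exist:

```shell
# Probe for the autogroup feature; these proc files only exist when the
# kernel was built with CONFIG_SCHED_AUTOGROUP.
if [ -e /proc/sys/kernel/sched_autogroup_enabled ]; then
    autogroup_state=$(cat /proc/sys/kernel/sched_autogroup_enabled)
    echo "sched_autogroup_enabled = $autogroup_state"
    # Every process reports its own group here, e.g. "/autogroup-123 nice 0"
    cat /proc/self/autogroup
else
    autogroup_state=absent
    echo "this kernel was built without CONFIG_SCHED_AUTOGROUP"
fi
```

You can toggle the feature at runtime with `echo 0 > /proc/sys/kernel/sched_autogroup_enabled` (as root), which makes before/after comparisons easy.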
I can confirm that KDE 4.5.3 (which I have installed on 64-current) looks much smoother and more fluid with autogroup on: I'm trying it with the zen-stable git kernel, which includes the latest patch.
I think this patch is BS. It seems to be experimental, nobody can clearly explain how it works or what it is good for, and yet everyone praises it as a miracle. A bunch of BS. I have tried it and the alternative, and there's no difference. It's probably because everyone is using CFQ, which causes lags and stuttering, and this is just a workaround.
H_TeX, I'm with you on this one. Problem is, there are a whole lot of bloggers and journalists out there parroting the "Miracle patch makes Linux run faster" headline who don't really understand what they're talking about.
Blindly grouping processes into cgroups based on session id is going to cause as many problems as it solves. All I see happening is that instead of running things you don't want to impact your system with "nice <program>", you'll end up using "setsid <program>" to break processes that you do want to get a fair share out of the current session.
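To make that concrete, here's a rough sketch showing that setsid really does give a child its own session id, which is exactly the key the autogroup patch groups on. It assumes util-linux setsid and procps ps are installed, and uses an arbitrary sleep duration as a marker to find the child:

```shell
# The shell's own session id; children normally inherit it, so an
# autogroup kernel would schedule them in the same group.
parent_sid=$(ps -o sid= -p $$ | tr -d ' ')

# setsid detaches the child into a brand-new session, so an autogroup
# kernel would give it its own scheduling group (and its own fair share).
setsid sleep 7.77 &
sleep 1   # give setsid time to fork and exec

child_pid=$(pgrep -f 'sleep 7.77' | head -n 1)
child_sid=$(ps -o sid= -p "$child_pid" | tr -d ' ')
echo "parent sid: $parent_sid, setsid child sid: $child_sid"

kill "$child_pid"   # clean up the background sleep
```

The two session ids come out different, which is the whole point: under autogroup that child now competes as its own group rather than sharing your session's slice.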
Glad you have tried the patch. I was about to do it as well, and due to the hype in the media I had quite high expectations. Sorry to hear that, at the moment, it's a case of much ado about nothing. Clearly we're not there yet...
Yeah, I think using 'nice' properly will be a much better solution than this. This is just a crazy hack.
P.S.
You should try it, but know that it's not as miraculous as they say. I mean, I didn't notice any difference. Don't take my word for it, though; your system is set up differently, so it may actually work, who knows.
Last edited by H_TeXMeX_H; 11-28-2010 at 07:04 AM.
Tried it and it IS miraculous.
To see that, you have to overload your desktop.
But as we never use a make -j64, it's not really useful.
Anyway, this is a good thing.
Ok, so it would only make a difference if I were to use make -j64? Do you have any benchmarks, for example make -j64 versus make -j4 or -j5 on a quad core?
Shouldn't it compile slower with the patch? I thought this patch is for more responsiveness, not for actually accelerating things. I mean, if your desktop is more responsive during a make -j64, it has to get more of the CPU than without the patch/alternative solution. So the compile-time must be longer. Or didn't I get the point?
Actually, it doesn't have to get more time, it has to get the same amount of time, just in smaller, more frequent chunks.
glxgears is a good example for demonstrating this when run on a non-accelerated X server such as nouveau, fbdev, nv, or vesa. The nature of glxgears is that it will try to use as much CPU as it can get, but because 3D is not accelerated, the X server process will need to do the grunt work using Mesa.
Try this:
Open a terminal and run 'top' so you can see what's happening.
Then open a second terminal and run the following (it'll attempt to start 20 copies of glxgears):
Code:
for (( i=0 ; i < 20 ; i++ ))
do
    # start each instance in the background, discarding its output
    glxgears >/dev/null 2>&1 &
done
Try moving some of the windows so you can see them all (on my system it doesn't even manage to start them all). Responsiveness is terrible, and you'll notice that not all the gears are spinning at once. The reason for this is that the X server process is competing with each of the 20 glxgears instances for CPU time, since they are all at the same priority.
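The arithmetic behind that is straightforward. With everything at equal priority, CFS hands out roughly equal shares per runnable task, so the X server is 1 of 21; if the 20 gears were grouped into a single autogroup, X's group would be 1 of 2. A back-of-envelope sketch (equal weights assumed, ignoring sleeper fairness and other scheduler details):

```shell
# Rough CFS share estimate: one X server competing with 20 glxgears.
# Flat: 21 equal-weight tasks. Grouped: two equal-weight groups.
flat_share=$(awk 'BEGIN { printf "%.1f", 100 / 21 }')
grouped_share=$(awk 'BEGIN { printf "%.1f", 100 / 2 }')
echo "X server CPU share - flat: ${flat_share}%, grouped: ${grouped_share}%"
```

Going from roughly 5% of the CPU to roughly 50% is why the desktop feels so much snappier under heavy background load.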
Clear them up with "pkill glxgears", then try the same thing, but this time we'll nice them:
Code:
for (( i=0 ; i < 20 ; i++ ))
do
    # same as before, but at the lowest scheduling priority
    nice -n 19 glxgears >/dev/null 2>&1 &
done
Not only will all the gears be spinning at once, your desktop will remain responsive.
This demonstrates why I think this new patch is not all that miraculous. Simply running these tasks at a lower priority is enough, and unlike automated cgroups it won't have any unanticipated side effects by unintentional groupings.
If you want to automate this stuff, why not just add a "renice -n 19 $$" to your .bashrc? As far as I can see, that would have pretty much the same practical effect as the kernel patch would on a "make -j64".
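As a sketch, here's that one-liner and its effect. Raising your own nice value needs no privileges, though note it lowers the priority of every interactive shell and everything spawned from it, which you may not always want:

```shell
# Candidate ~/.bashrc line: drop this shell (and everything it spawns)
# to the lowest scheduling priority.
renice -n 19 -p $$ >/dev/null

# Confirm the new nice level of the current shell.
nice_level=$(ps -o ni= -p $$ | tr -d ' ')
echo "shell nice level is now $nice_level"
```

One caveat: nice values are one-way for unprivileged users; once the shell is at 19, you can't renice it back down without root.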
Well, if it isn't about performance, then what good is it (why not use nice)? My desktop is very responsive as is. I can be running 'make -j4' in the background and be browsing the web at the same time without issue or lag, and WITHOUT this patch. I think this may be about more than just CPU scheduling; it may also have to do with HDD I/O scheduling. I used to have some terrible lagging when using CFQ.
FYI, in 2.6.36-zen1 you can choose between CFS (patched for autogroups or not; best behaviour with CFQ here) or the latest BFS (best behaviour with BFQ here, the combination I'm using), plus aufs, squashfs+lzma, and a boatload of the same -zen stuff (you can even raise CONFIG_HZ; I tried 2000 and it works fine here).
And they include interesting patches that try to fix the I/O load problem.
If you're into measuring latency with different kernels, the best tool out there is probably Con's kernbench (benchmarks must run in init 1).
Last edited by ponce; 11-30-2010 at 01:21 PM.
Reason: best, not better ;P