Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux-related and doesn't seem to fit in any other forum, then this is the place.
I've been reading a lot of articles on the new scheduler, but there is something that really bugs me about the whole thing. I think that I am misunderstanding it, or I am missing something. Here is how I 'think' it works from what I've read...
Okay, so CPU time is divided into nanoseconds (1 billion per second), and if a system has, say, 10 processes on it, each process gets 1/10th of the CPU's time, or 100 million nanoseconds.
What I don't understand is how this works with CPU-intensive processes versus ones that mostly sleep. If I'm running a 3D FPS and there is a cron daemon or some other process that is mostly idle, how does this work out? Do both processes really get the same amount of time on the CPU? Can a process finish its current task and then give up control if it is able to complete that task before its allocated time (100 million nanoseconds) is used? What if my resource-intensive application needs a lot of CPU time? Is there any way it can get more than 1/10th of the CPU's time?
Sorry for being such an idiot. I'm just a user and don't follow kernel development that closely, but I am interested in learning how this all works.
A scheduler only allocates CPU resources to processes that are "runnable" (i.e., waiting for the CPU). Processes that are idle - sleeping, waiting for an interrupt, waiting for I/O, etc. - are not runnable. Processes can voluntarily give up the CPU before their time slice has ended, in addition to implicitly surrendering it as a result of calling a system function (like an I/O).
As a result, the 10 arbitrary processes would each get 1/10 of the CPU time only if all of them were CPU-intensive operations that repeatedly consumed their entire time slice. Think along the lines of 10 CPU-bound loops.
I think you meant "100 millionths of a second", not "100 million seconds", but 1/10 would be 100,000 millionths of a second.
Last edited by macemoneta; 08-03-2007 at 09:34 PM.
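The point above, that only runnable tasks share the CPU, can be sketched in a few lines of Python. This is an illustration, not kernel code; the function and task names are made up:

```python
# Illustration only (not kernel code): a fair scheduler divides CPU
# time among *runnable* tasks. Sleeping tasks are simply not on the
# run queue and consume nothing.

def fair_shares(tasks):
    """tasks maps name -> runnable flag; returns each task's CPU share."""
    runnable = [name for name, is_runnable in tasks.items() if is_runnable]
    share = 1.0 / len(runnable) if runnable else 0.0
    return {name: (share if name in runnable else 0.0) for name in tasks}

# Ten processes, but only the game and the compiler are actually
# runnable; the cron daemon and friends are asleep waiting on timers.
tasks = {"game": True, "compiler": True,
         **{f"daemon{i}": False for i in range(8)}}
shares = fair_shares(tasks)
print(shares["game"])     # 0.5, not 1/10
print(shares["daemon0"])  # 0.0
```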
CFS uses nanosecond granularity accounting and does not rely on any jiffies or other HZ detail. Thus the CFS scheduler has no notion of 'timeslices' and has no heuristics whatsoever.
Look, you can call it "nanosecond granularity accounting", quanta or timeslices. You can make them static, dynamic or morphing. You can put lipstick on a pig and call it your girlfriend - but it's still a pig.
The concept of a timeslice (as opposed to a specific implementation) is a limit on the maximum runtime of a single dispatched task. If you eliminate that, you might as well say that the system is completely broken. A single CPU loop will never release the CPU willingly, so you must have some method of limiting the resource consumed by the task in order to be fair to other runnable tasks.
The quote that you provided simply says that the timeslice interval is being varied. The reason it is being varied is so that the scheduler can be more fair in its allocation. It's still a timeslice - it's just not a static value.
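That "varying timeslice" idea can be sketched like this: the effective slice falls out of a target latency period divided among the runnable tasks, subject to a minimum. The constants here are made up for illustration and only loosely inspired by the kernel's latency/granularity tunables; this is not the actual kernel algorithm:

```python
# Hypothetical numbers for illustration; not the kernel's actual
# tunables or code.
SCHED_PERIOD_NS = 20_000_000    # target: every runnable task runs once per 20 ms
MIN_GRANULARITY_NS = 1_000_000  # never slice finer than 1 ms

def dynamic_slice(nr_runnable):
    """The 'timeslice' is not a constant: it shrinks as more tasks
    become runnable, down to a minimum granularity."""
    return max(SCHED_PERIOD_NS // nr_runnable, MIN_GRANULARITY_NS)

print(dynamic_slice(2))   # 10_000_000 ns (10 ms each)
print(dynamic_slice(10))  # 2_000_000 ns (2 ms each)
print(dynamic_slice(50))  # 1_000_000 ns (the floor kicks in)
```

Either way you name it, there is still an upper bound on how long a CPU-bound task runs before another runnable task gets its turn.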
Ahhh, that explains it. I was thinking that ALL processes would need a share of the CPU regardless of whether they were active or sleeping. My reasoning was that a process would need CPU time just to check for an interrupt or some kind of user action; that is probably the kernel's job, though. If I were running a high-end game and recompiling an application in the background, I would probably have to use nice to give the game a higher priority and the compiler a lower one if I didn't want the two to have a completely equal share of the CPU.
I meant to put "100 million nanoseconds" but I got ahead of myself there. Sorry about that. Thank you all for your help. It is very much appreciated.
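On the nice point: under CFS, nice values skew each task's weight, and CPU share is proportional to weight. The real kernel uses a precomputed weight table in which each nice step changes the weight by roughly a factor of 1.25; the sketch below approximates that curve with an explicit formula, so treat the numbers as illustrative:

```python
# Approximation of CFS nice handling: each nice step scales a task's
# weight by about 1.25x (the kernel uses a lookup table; this formula
# is an illustrative stand-in).
NICE_0_WEIGHT = 1024

def weight(nice):
    return NICE_0_WEIGHT / (1.25 ** nice)

def cpu_share(my_nice, other_nices):
    """Share of the CPU a task gets, given the nice values of the
    other runnable tasks."""
    total = weight(my_nice) + sum(weight(n) for n in other_nices)
    return weight(my_nice) / total

# Game reniced to -5, compiler to +5: the game gets the lion's share.
print(round(cpu_share(-5, [5]), 2))  # 0.9
```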
The concept of a timeslice (as opposed to a specific implementation) is a limit on the maximum runtime of a single dispatched task. If you eliminate that, you might as well say that the system is completely broken. A single CPU loop will never release the CPU willingly, so you must have some method of limiting the resource consumed by the task in order to be fair to other runnable tasks.
Nope.
That is classic "scheduler/dispatcher" design.
This is different. There is no pre-ordained (not even dynamic) time-slice to expire.
More a case of guaranteed "non-dispatch assurance".
You keep running until some other unit of work is determined to have a greater "right" to the CPU. Then you get pre-empted.
End of issue.
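That "you keep running until some other task has a greater right to the CPU" idea can be sketched as follows. Each task accumulates virtual runtime as it runs, and the scheduler always picks the task with the least accumulated runtime; a running task is preempted as soon as another task becomes more entitled. The names and tick-based accounting here are illustrative, not the kernel's actual data structures (CFS uses a red-black tree, not a heap):

```python
import heapq

def schedule(tasks, ticks, tick_ns=1_000_000):
    """Toy fair scheduler: at each tick, run the task with the smallest
    accumulated virtual runtime and charge it tick_ns."""
    queue = [(0, name) for name in tasks]  # (vruntime, name)
    heapq.heapify(queue)
    ran = []
    for _ in range(ticks):
        vruntime, name = heapq.heappop(queue)  # most entitled task
        ran.append(name)
        heapq.heappush(queue, (vruntime + tick_ns, name))
    return ran

# Two CPU hogs alternate perfectly: no fixed expiry is needed, because
# each is preempted as soon as the other becomes more entitled.
print(schedule(["a", "b"], 4))  # ['a', 'b', 'a', 'b']
```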
Ingo assures the world that all the pathological test cases perform better on this than on the current design.
And he managed to convince the kernel devs he was right - Andrew and Linus included, obviously.