LinuxQuestions.org
Linux - General This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.

Old 08-03-2007, 09:14 PM   #1
Darkhack
Member
 
Registered: Mar 2005
Location: Kansas City
Distribution: Ubuntu 7.10
Posts: 47

Rep: Reputation: 15
Process Scheduler (CFS)


I've been reading a lot of articles on the new scheduler, but there is something that really bugs me about the whole thing. I think that I am misunderstanding it, or I am missing something. Here is how I 'think' it works from what I've read...

Okay, so CPU time is divided into nanoseconds (1 billion per second), and if a system has, say, 10 processes on it, each process gets 1/10th of the CPU's time, or 100 million nanoseconds per second.

What I don't understand is how this works with CPU-intensive processes versus ones that mostly sleep. If I'm running a 3D FPS and there is a cron daemon or some other process that is mostly idle, how does this work out? Do both processes really get the same amount of time on the CPU? Can a process finish its current task and then give up control if it is able to complete that task before its allocated time (100 million nanoseconds) is used? What if my resource-intensive application needs a lot of CPU time? Is there any way it can get more than 1/10th of the CPU's time?

Sorry for being such an idiot. I'm just a user and don't follow kernel development that closely, but I am interested in learning how this all works.

Last edited by Darkhack; 08-04-2007 at 12:42 PM.
 
Old 08-03-2007, 09:33 PM   #2
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
A scheduler only allocates CPU resources to processes that are "runnable" (i.e., waiting for the CPU). Processes that are idle - sleeping, waiting for an interrupt, waiting for I/O, etc. - are not runnable. Processes can voluntarily give up the CPU before their time slice has ended, in addition to implicitly surrendering it as a result of calling a system function (like an I/O).

As a result, the 10 arbitrary processes that get 1/10 of the CPU time would all have to be CPU intensive operations in order to continuously consume their entire time slice repeatedly. Think along the lines of 10 CPU loops.
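[Editor's note: a toy sketch of the point above, my own illustration rather than anything from the kernel: the scheduler divides time only among *runnable* tasks, so a sleeping daemon simply never gets picked.]

```python
# Toy model (not the kernel's algorithm): CPU time is split evenly among
# runnable tasks only; tasks that are sleeping or blocked on I/O are not
# on the run queue and receive nothing.

def share_of_cpu(tasks, total_ns=1_000_000_000):
    """Split total_ns evenly among runnable tasks; sleepers get 0."""
    runnable = [name for name, state in tasks if state == "runnable"]
    slice_ns = total_ns // len(runnable) if runnable else 0
    return {name: (slice_ns if state == "runnable" else 0)
            for name, state in tasks}

tasks = [("game", "runnable"), ("compiler", "runnable"), ("crond", "sleeping")]
print(share_of_cpu(tasks))
# The two runnable tasks split the second; crond, being asleep, gets 0.
```

The task names here are made up; the point is only that the divisor is the number of runnable tasks, not the number of processes on the system.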

I think you meant "100 millionths of a second", not "100 million seconds", but 1/10 would be 100,000 millionths of a second.

Last edited by macemoneta; 08-03-2007 at 09:34 PM.
 
Old 08-03-2007, 09:57 PM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,127

Rep: Reputation: 4120
That's the way it used to work.
@Darkhack, you've probably seen this but it gives a reasonable overview.

Here's a little more on the design.
 
Old 08-03-2007, 10:07 PM   #4
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
Quote:
That's the way it used to work.
Nothing in the CFS changes what I've described. The CFS alters the method of selecting the runnable processes, not the function of the scheduler.
 
Old 08-03-2007, 10:38 PM   #5
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,127

Rep: Reputation: 4120
Quote:
CFS uses nanosecond granularity accounting and does not rely on any jiffies or other HZ detail. Thus the CFS scheduler has no notion of 'timeslices' and has no heuristics whatsoever.
From the horse's mouth.
 
Old 08-03-2007, 10:49 PM   #6
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
Look, you can call it "nanosecond granularity accounting", quanta or timeslices. You can make them static, dynamic or morphing. You can put lipstick on a pig and call it your girlfriend - but it's still a pig.

The concept of a timeslice (as opposed to a specific implementation) is a limit on the maximum runtime of a single dispatched task. If you eliminate that, you might as well say that the system is completely broken. A single CPU loop will never release the CPU willingly, so you must have some method of limiting the resource consumed by the task in order to be fair to other runnable tasks.

The quote that you provided simply says that the timeslice interval is being varied. The reason it is being varied is so that the scheduler can be more fair in its allocation. It's still a timeslice - it's just not a static value.
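[Editor's note: a sketch of what a "varied" timeslice might look like, in the spirit of this post. The constants and the exact formula are illustrative assumptions, not values taken from the kernel.]

```python
# Illustrative dynamic timeslice: a fixed latency target is divided among
# however many tasks are runnable, with a floor so slices never become
# absurdly small. Both constants below are invented for illustration.

SCHED_LATENCY_NS = 20_000_000    # target: every runnable task runs within 20 ms
MIN_GRANULARITY_NS = 4_000_000   # but never slice finer than 4 ms

def timeslice_ns(nr_running):
    if nr_running == 0:
        return 0
    return max(SCHED_LATENCY_NS // nr_running, MIN_GRANULARITY_NS)

print(timeslice_ns(2))   # 10 ms each: the slice shrank to fit the load
print(timeslice_ns(10))  # the 4 ms floor kicks in
```

On this reading the slice is still a limit on a single dispatch, as the post argues; it just varies with the length of the run queue instead of being a static value.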
 
Old 08-04-2007, 12:41 PM   #7
Darkhack
Member
 
Registered: Mar 2005
Location: Kansas City
Distribution: Ubuntu 7.10
Posts: 47

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by macemoneta
A scheduler only allocates CPU resources to processes that are "runnable" (i.e., waiting for the CPU). Processes that are idle - sleeping, waiting for an interrupt, waiting for I/O, etc. - are not runnable. Processes can voluntarily give up the CPU before their time slice has ended, in addition to implicitly surrendering it as a result of calling a system function (like an I/O).

As a result, the 10 arbitrary processes that get 1/10 of the CPU time would all have to be CPU intensive operations in order to continuously consume their entire time slice repeatedly. Think along the lines of 10 CPU loops.

I think you meant "100 millionths of a second", not "100 million seconds", but 1/10 would be 100,000 millionths of a second.
Ahhh, that explains it. I was thinking that ALL processes would need a share of the CPU regardless of whether they were active or sleeping. My reasoning was that the process itself would need CPU time to check for an interrupt or some kind of user action; that is probably the kernel's job, though. If I were running a high-end game and recompiling an application in the background, I would probably have to use nice to give the game a higher priority and the compiler a lower one if I didn't want the two to have a completely equal share of the CPU.

I meant to put "100 million nanoseconds" but I got ahead of myself there. Sorry about that. Thank you all for your help. It is very much appreciated.
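[Editor's note: the nice idea mentioned above can be sketched from Python's os module, which wraps the nice(2) system call. Raising your own niceness (lowering your priority) needs no special privilege, so a background compile could do this to itself.]

```python
# Lower this process's own scheduling priority, as a background compile
# might. os.nice(increment) adds to the nice value and returns the new one;
# an increment of 0 just reports the current value.
import os

before = os.nice(0)   # current nice value (typically 0)
after = os.nice(5)    # raise niceness by 5 => lower priority
print(before, after)
```

The shell equivalent would be starting the job with the nice command; doing it from inside the process is just the same mechanism seen from the other side.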
 
Old 08-04-2007, 06:06 PM   #8
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,127

Rep: Reputation: 4120
Quote:
Originally Posted by macemoneta
The concept of a timeslice (as opposed to a specific implementation) is a limit on the maximum runtime of a single dispatched task. If you eliminate that, you might as well say that the system is completely broken. A single CPU loop will never release the CPU willingly, so you must have some method of limiting the resource consumed by the task in order to be fair to other runnable tasks.
Nope.
That is classic "scheduler/dispatcher" design.
This is different. There is no pre-ordained (not even dynamic) time-slice to expire.
More a case of guaranteed "non-dispatch assurance".

You keep running until some other unit of work is determined to have a greater "right" to the CPU. Then you get pre-empted.
End of issue.

Ingo assures the world all the pathological test cases perform better on this than the current design.
And he managed to convince the kernel devs he was right - Andrew and Linus included obviously.
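[Editor's note: a toy simulation of the "run until someone has a greater right to the CPU" idea described above, as I understand CFS's virtual-runtime bookkeeping. The weights, tick size, and heap-based queue are my own simplifications, not the kernel's red-black-tree implementation.]

```python
# Each task accumulates "virtual runtime"; the scheduler always runs the
# task with the least vruntime so far, and a running task is preempted
# as soon as another task's vruntime falls behind its own. Heavier
# (higher-weight) tasks age their vruntime more slowly, so they run more.
import heapq

def simulate(weights, ticks, tick_ns=1_000_000):
    queue = [(0.0, name) for name in weights]   # (vruntime, task)
    heapq.heapify(queue)
    cpu_ns = {name: 0 for name in weights}
    for _ in range(ticks):
        vr, name = heapq.heappop(queue)         # leftmost = least vruntime
        cpu_ns[name] += tick_ns                 # run it for one tick
        heapq.heappush(queue, (vr + tick_ns / weights[name], name))
    return cpu_ns

# A weight-2 task should end up with roughly twice the CPU of a weight-1 task.
print(simulate({"game": 2, "compiler": 1}, ticks=300))
```

Note there is no slice expiry anywhere in the loop: a task keeps the CPU only for as long as its vruntime remains the smallest, which matches the "no pre-ordained time-slice" description.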
 
  

