Thread scheduling: is it guaranteed that the kernel will run threads on all cores?
I need to test a program that claims to restrict the number of processor cores it uses. My plan is to write a program that spawns 100 threads, each performing some long, CPU-intensive computation. Then, from a console, I could watch how many cores are busy: if all threads are running on no more than the allowed number of cores, the test passes. Is there a guarantee that the kernel will spread the threads across different cores, and use all of the cores? (If the scheduler used only one core when two are allowed, that would give me a false result.)
In a pthreads programming environment you typically have (in pseudocode):

launch thread A
launch thread B
...
launch thread Z
-----
thread A:
    initialization part of A
loop:
    get item from message queue (wait if necessary)
    do work
    if done or error then exit
    goto loop

When you want to add affinity pinning to the thread, the "initialization part of A" will require you to add code to determine what cores are available (how many hardware threads there are), what CPU loads have already been allocated to each core, what CPU load the functional code of A will add, and other factors (e.g. I/O, FPU vs. integer work, etc.). From that, a determination of which core (hardware thread) to use is made, then a system API is called to migrate the current software thread's execution to that core (or group of cores) and to restrict it to run on that core (or those cores). Then the load value for the functional "do work" code of A is added to that core's accounted load. This is repeated in each thread's initialization part (inside a critical section).