How does software know how to use multiple CPUs and cores?
I am writing a computationally intensive program. I have a serial version that runs on one machine, and a parallelized version that can run across several machines to reduce processing time.
Does the OS intelligently handle splitting my application across CPUs and cores, or is that something I have to program into my code? Are there any books explaining how applications are handled on multi-CPU/multi-core systems? This is the info for the machine I am working on: HP ProLiant DL380 G5, two dual-core Intel Xeon processors (3.00 GHz / 1333 MHz, 4 MB L2 cache), 8 GB memory, Fedora Core 6, running Linux kernel 2.6.20-1.2948.fc6. |
You basically have to write a threaded application. I don't know the specifics of what your program does or even what language it's in, but if you write it with the appropriate threading routines and libraries, and you create enough threads, the OS (if it is multi-core capable) will schedule them across all the available processor cores.
|
When you say you have a parallelized version, it sounds like you already use techniques such as multi-threading (e.g. via pthreads) or MPI. If so, the OS will do the rest: it schedules your threads or processes onto the available cores.
|
Quote:
Multi-threaded code that isn't designed to be multi-processor safe isn't safe.
Lots of appalling code runs just fine on one processor, because concurrency and race conditions never come into play there. Parallelism across different machines doesn't necessarily change this. |