LinuxQuestions.org
Old 10-09-2007, 06:54 AM   #1
DIGITAL39
Member
 
Registered: Sep 2003
Location: Virginia
Distribution: Slackware, CentOS, Red Hat
Posts: 48

Rep: Reputation: 15
Software and CPU Cores


I had to buy a replacement board while the board in my business machine is being RMA'd. I am considering keeping the new board when the old one comes back and buying a quad-core chip for it. My concern is software that will take advantage of all the cores. In Windows, a lot of applications do not use all the cores. Please excuse my ignorance on this issue, but I am under the impression it has to do with the way the application is written; I was curious whether compile-time options might also make a difference. Anyway, my real question is: if I buy a quad-core chip, what percentage of the more common Linux applications will support all the cores? Will binaries from CentOS support additional cores before those from another distribution like Ubuntu, since more Red Hat installs go on servers?

Pete
 
Old 10-09-2007, 07:17 AM   #2
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,103

Rep: Reputation: 4117
Quote:
Originally Posted by DIGITAL39
If I buy a quad-core chip, what percentage of the more common Linux applications will support all the cores?
Natively? Probably some number very close to zero.
Quote:
Will binaries from CentOS support additional cores before those from another distribution like Ubuntu, since more Red Hat installs go on servers?
No reason to expect so - the underlying code has to be written multi-threaded and thread-safe, and be smart enough to start more threads as more engines (cores, in this case) become available.
Things like make can be told how many engines it has available, and will use them - I use this on servers to speed up kernel compiles. I haven't timed my new Q6600, but I expect similar results there.
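The make trick mentioned above is its -j flag; a minimal sketch (assuming a Linux /proc and a Makefile in the current directory):

```shell
#!/bin/sh
# Ask the kernel how many logical CPUs it sees, then let make run
# that many compile jobs in parallel (e.g. for a kernel build).
cores=$(grep -c '^processor' /proc/cpuinfo)
echo "building with $cores parallel jobs"
if [ -f Makefile ]; then
    make -j"$cores"
fi
```

Each job is still a single-threaded compiler process; the speedup comes from running several of them at once.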

There are beneficial side effects: processes are more likely to run immediately rather than being put on the run queue, so concurrency benefits even single-threaded code. It would be hard to measure the benefit unless the system is maxed out, I imagine.
 
Old 10-09-2007, 08:21 AM   #3
DIGITAL39
Member
 
Registered: Sep 2003
Location: Virginia
Distribution: Slackware, CentOS, Red Hat
Posts: 48

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by syg00
Natively? Probably some number very close to zero. No reason to expect so - the underlying code has to be written multi-threaded and thread-safe, and be smart enough to start more threads as more engines (cores, in this case) become available.
Things like make can be told how many engines it has available, and will use them - I use this on servers to speed up kernel compiles. I haven't timed my new Q6600, but I expect similar results there.

There are beneficial side effects: processes are more likely to run immediately rather than being put on the run queue, so concurrency benefits even single-threaded code. It would be hard to measure the benefit unless the system is maxed out, I imagine.
Is there any way to lock a process to a certain core, so you could designate each core to a certain application?
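(For what it's worth, this is called CPU affinity; a sketch using taskset from util-linux, assuming it is installed, which pins a command to core 0 and then reads its affinity mask back:)

```shell
#!/bin/sh
# Pin a process to core 0, then query its affinity mask.
taskset -c 0 sleep 2 &
pid=$!
taskset -p "$pid"    # a mask of 1 means "core 0 only"
wait "$pid"
```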
 
Old 10-09-2007, 11:55 AM   #4
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
Digital

Just to be clear, if you are running multiple applications (at least four in your described case), all the cores will be used. Assuming four applications, each will run on one core. If an application was written multithreaded (some already are), that one application can use multiple cores. Unfortunately, not all processes are suited to multithreading: if the process is linear (each step depends on the previous step), multithreading will not gain much. A lot of the applications best suited to multithreading have already been rewritten (most of the video conversion applications, e.g. Avidemux).
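The first point (separate processes spread across cores with no special support in the programs) can be seen with plain shell background jobs:

```shell
#!/bin/sh
# Four independent CPU-bound processes; the kernel's scheduler
# spreads them across the available cores automatically.
for i in 1 2 3 4; do
    sh -c 'n=0; while [ $n -lt 100000 ]; do n=$((n+1)); done' &
done
wait    # returns once all four background jobs have finished
echo "all four jobs done"
```

Watching top while this runs (press 1 to show per-core load) shows the work landing on different cores.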

Lazlow
 
Old 10-09-2007, 12:16 PM   #5
DIGITAL39
Member
 
Registered: Sep 2003
Location: Virginia
Distribution: Slackware, CentOS, Red Hat
Posts: 48

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by lazlow
Digital

Just to be clear, if you are running multiple applications (at least four in your described case), all the cores will be used. Assuming four applications, each will run on one core. If an application was written multithreaded (some already are), that one application can use multiple cores. Unfortunately, not all processes are suited to multithreading: if the process is linear (each step depends on the previous step), multithreading will not gain much. A lot of the applications best suited to multithreading have already been rewritten (most of the video conversion applications, e.g. Avidemux).

Lazlow
That really helps. I was curious about something (obviously correct me if I am wrong, as I do not write server or desktop applications), but it seems like you could have one process, optimized for multicore processors, through which all the processes that do not make use of multiple cores could be channeled. Basically, it would appear as if it were a (CPU speed x cores) speed chip. I don't know; it's probably way off, since I know nothing about the roots of interacting with processors, but it was just an idea.
 
Old 10-09-2007, 01:17 PM   #6
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
That is essentially how the SMP kernel (the default on most distros now) handles things (sort of).

Think of a core as a person with a one-track mind. Now imagine a team of such people. On some tasks (assembling a car), this team will get things done much faster than an individual could. On other tasks (folding a paper airplane), using a team would just get in the way. However, if one wanted to fold a hundred paper airplanes, a team could get the job done much faster, even though each plane is made by only one person. Multi-core processors are just like this. When there is a real advantage to getting the job done by a team (multithreading), the software is usually rewritten to take advantage of it. This has taken, and will continue to take, time, as we are still learning how best to do tasks with multiple threads, but a lot (maybe most) of the tasks suited to multithreading have been converted. As a rule of thumb, I figure dual-core machines overall get 1.5 times as much done as a single-core processor of the same speed.
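That 1.5x rule of thumb lines up with Amdahl's law (my framing, not part of the thread): speedup = 1 / ((1 - p) + p/n), where p is the fraction of the work that parallelizes and n the number of cores. A quick check with awk, assuming p = 2/3:

```shell
#!/bin/sh
# Amdahl's law: if 2/3 of the work parallelizes, two cores give
# 1 / ((1 - 2/3) + (2/3)/2) = 1.5x -- the rule of thumb above.
awk 'BEGIN { p = 2/3; n = 2; printf "speedup: %.2f\n", 1/((1-p) + p/n) }'
```

The serial fraction (1 - p) is why doubling cores never doubles throughput for mixed workloads.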

Hope this clears it up a little.

Lazlow
 
Old 10-09-2007, 05:09 PM   #7
studioj
Member
 
Registered: Oct 2006
Posts: 460

Rep: Reputation: 31
Quote:
Originally Posted by lazlow
That is essentially how the SMP kernel (the default on most distros now) handles things (sort of).

Think of a core as a person with a one-track mind. Now imagine a team of such people. On some tasks (assembling a car), this team will get things done much faster than an individual could. On other tasks (folding a paper airplane), using a team would just get in the way. However, if one wanted to fold a hundred paper airplanes, a team could get the job done much faster, even though each plane is made by only one person. Multi-core processors are just like this. When there is a real advantage to getting the job done by a team (multithreading), the software is usually rewritten to take advantage of it. This has taken, and will continue to take, time, as we are still learning how best to do tasks with multiple threads, but a lot (maybe most) of the tasks suited to multithreading have been converted. As a rule of thumb, I figure dual-core machines overall get 1.5 times as much done as a single-core processor of the same speed.

Hope this clears it up a little.

Lazlow

Like what he said, except for the part about software needing to be rewritten. The idea that "software" is not ready to take advantage of multi-core or multi-processor machines is a myth perpetuated by the manufacturers of said chips to explain why they are such a waste of money on desktop machines.
ALL modern GUI software is extremely multithreaded, thread-safe, and all the rest of it. The problem is that human interaction with a desktop machine is mostly single-task and linear, which makes all those cores rather a waste.

Last edited by studioj; 10-09-2007 at 05:10 PM.
 
Old 10-10-2007, 02:08 AM   #8
otoomet
Member
 
Registered: Oct 2004
Location: Tartu, Århus, Nürnberg, Europe
Distribution: Debian, Ubuntu, Puppy
Posts: 619

Rep: Reputation: 45
Also note that memory speed is a much bigger issue with multicore. Even a single core can 'eat' more memory bandwidth than the MMUs and caches can feed, and the problem only worsens when you add processing power. You should buy a quad-channel motherboard and supply it with four memory modules (or dual-channel for dual-core).
 
Old 10-10-2007, 02:47 AM   #9
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
StudioJ

Try running versions of software from just a couple of years ago (say, Avidemux). There is a tremendous difference between software that was written multithreaded and software that was not (on the same hardware/OS). On a dual-core machine, the multithreaded Avidemux cut the time required to process a large video file by at least a third.

All the "modern" OSes are multithreaded and thread-safe (at least Linux is), but not all the software is.

otoomet

Memory and pathway bandwidth are rapidly becoming the limiting factor. I do not know if DDR3 will be of any help or if it will just be a stopgap until something else comes along. I still suspect it will come down to putting 1 GB of memory per core right on the CPU chip. If you go with dedicated off-chip memory (requiring a separate memory controller for each core), you run into the problem of what to do when you have to share or transfer data between cores (it gets ugly in a hurry).

Lazlow
 
  

