Old 12-08-2005, 12:47 AM   #16
purelithium
Member
 
Registered: Oct 2005
Location: Canada
Distribution: Mandriva 2006.0
Posts: 390

Rep: Reputation: 30

Quote:
Originally Posted by foo_bar_foo
Exactly what would this take to "catch up"?

Software design is a complex thing that has always attempted, from the very beginning, to make the most of resources. I'm not going to go into software threading issues because the concepts are HUGE, but statements like the one above are born straight out of the corporate propaganda machine and nothing else.

Of course it's complex, but why are you "not going to get into software threading"? That is the whole issue: software is behind hardware. The fact that most software is only single-threaded is NOT "making the most of [available] resources".

So I guess you're saying that software coded 10 years ago will "make the most of" newer technologies like SSE3 and HyperThreading? Of course not. It was coded to the standards of its time, and the programmers could not foresee those technologies.

First comes hardware, then comes software, because if there's nothing to run it on, or if the software supports unadopted standards, no one will buy it (or use it, in the case of FOSS).

Frankly, your response rests on a fallacy.
 
Old 12-08-2005, 09:10 PM   #17
foo_bar_foo
Senior Member
 
Registered: Jun 2004
Posts: 2,553

Rep: Reputation: 53
Quote:
The fact that most software is only single-threaded
Who says most software is single-threaded?
Oh yeah -- marketing execs who want to sell you something that's "hyperthreaded" and tell us we need to "change our mindset".

Charles Moore invented (indirect) threaded code in 1970, at the very beginning of computing. Nobody had to foresee the technology. Straight machine instructions were actually a later move away from threaded code, because early computers required MORE of making the most of [available] resources, not less. These threads might not have been "hyper", but they have been there from the beginning.
The Linux kernel makes proper use of hardware resources, and the compiler takes care of using stuff like SSE3 -- that's all application developers need to know in a modern computing environment. The left hand doesn't know or care what the right hand is doing. That's a basic object-oriented concept, and it has led to the development we see today.
Application programmers write code that is portable and scalable across a wide range of hardware, without ever knowing what the hardware is or what it will be in the future.

Look -- you can write a 10-line program today that uses either threads or fork() and it will run different loops on different processors. This is not rocket science; this is just basic crap.
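For illustration, a minimal fork() version might look something like this sketch (hypothetical toy code, not taken from any real application; the loop count is arbitrary, and it's the kernel, not the program, that decides which core each process lands on):

Code:
/* toy sketch: fork() two CPU-bound loops and let the SMP kernel
 * schedule the parent and the child on different cores */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static void busy_loop(const char *who)
{
    unsigned long i, sum = 0;
    for (i = 0; i < 1000000000UL; i++)   /* arbitrary CPU-bound work */
        sum += i;
    printf("%s done (sum=%lu)\n", who, sum);
}

int main(void)
{
    pid_t pid = fork();          /* two processes exist after this */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {              /* child: free to run on the second core */
        busy_loop("child");
        _exit(0);
    }
    busy_loop("parent");         /* parent: runs its own loop */
    wait(NULL);                  /* reap the child */
    return 0;
}

Run it on an SMP kernel and top(1) should show both cores busy.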
All Linux processes are a fork of init.
All shell processes are a fork of bash.
All GUI, network, and database applications are multithreaded, and some also use underlying multithreaded libraries and toolkits like Qt.
The Linux kernel is multithreaded.

So yeah, some programs like "ls" are single-threaded, because it would be slower and stupid for them to be anything else. No need for a "new mindset".

Last edited by foo_bar_foo; 12-08-2005 at 09:15 PM.
 
Old 12-08-2005, 09:49 PM   #18
purelithium
Member
 
Registered: Oct 2005
Location: Canada
Distribution: Mandriva 2006.0
Posts: 390

Rep: Reputation: 30
What I mean by "single-threaded" is that most software applications (I'm not talking about OS process management or the like) can only take advantage of the processing power of one processing unit (logical or physical) at a time. Of course, some small programs like ls will only ever be single-threaded, because it would be absurd for such a non-CPU-intensive program to use multiple processing units. I'm talking about huge CPU hogs: calculating MD5 sums, encoding audio or video, image rendering, games, and so on.

I understand that the kernel can assign separate processes to different processing units, but if one process takes up 99% of one of my cores while the other core sits at 10-15%, that is not making the most of available resources. If the software I was running were able to split the work between the cores, it would be able to take advantage of that second core. The fact is that most applications are not able to do this on their own.
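For illustration only, here is a rough sketch of what that splitting could look like (hypothetical toy code that just sums a buffer in two halves; a real encoder is obviously far more involved, and the buffer here is all zeros):

Code:
/* toy sketch: split one CPU-bound job between two threads,
 * one half per core on a dual-core machine (link with -lpthread) */
#include <pthread.h>
#include <stdio.h>
#include <stddef.h>

#define N 100000000UL
static unsigned char data[N];    /* stand-in for audio/video data */

struct chunk { size_t start, end; unsigned long sum; };

static void *work(void *arg)
{
    struct chunk *c = arg;
    size_t i;
    c->sum = 0;
    for (i = c->start; i < c->end; i++)
        c->sum += data[i];       /* stand-in for real per-byte work */
    return NULL;
}

int main(void)
{
    pthread_t t;
    struct chunk lo = { 0, N / 2, 0 };   /* first half -> worker thread */
    struct chunk hi = { N / 2, N, 0 };   /* second half -> main thread */

    pthread_create(&t, NULL, work, &lo);
    work(&hi);                           /* main thread does its share */
    pthread_join(t, NULL);

    printf("total = %lu\n", lo.sum + hi.sum);
    return 0;
}

The kernel still decides where each thread runs, but with two runnable threads it has something to put on the second core.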

Stop putting words in my mouth; I never once said that programmers need a new mindset.
 
Old 12-08-2005, 10:34 PM   #19
leandean
Member
 
Registered: Oct 2005
Location: Burley, WA
Distribution: Sabayon, Debian
Posts: 278

Rep: Reputation: Disabled
My statement "There is almost no consumer software at present written specifically to take advantage of dual-core (or 64-bit for that matter). The software guys have a lot of catching up to do." is actually the opposite of what the "corporate propaganda machine" would have you believe. The simple fact is that software at present cannot take advantage of dual-core or for that matter dual-channel memory. Current software 'runs' on dual-core, 64 bit and dual-channel. It isn't written (yet) to optimally perform with any of it.
 
Old 12-08-2005, 10:41 PM   #20
purelithium
Member
 
Registered: Oct 2005
Location: Canada
Distribution: Mandriva 2006.0
Posts: 390

Rep: Reputation: 30
Quote:
Originally Posted by leandean
My statement "There is almost no consumer software at present written specifically to take advantage of dual-core (or 64-bit for that matter). The software guys have a lot of catching up to do." is actually the opposite of what the "corporate propaganda machine" would have you believe. The simple fact is that software at present cannot take advantage of dual-core or for that matter dual-channel memory. Current software 'runs' on dual-core, 64 bit and dual-channel. It isn't written (yet) to optimally perform with any of it.

Exactly! An excellent summary of the point I'm trying to make.
 
Old 12-08-2005, 10:56 PM   #21
exvor
Senior Member
 
Registered: Jul 2004
Location: Phoenix, Arizona
Distribution: Gentoo, LFS, Debian, Ubuntu
Posts: 1,537

Rep: Reputation: 87
Reading this thread, I'm getting the impression that many who are responding don't have a good understanding of how a computer works. All the software mumbo jumbo aside, a single-processor computer's CPU does one thing at a time; having 2 cores means you can do 2 things at a time, in real time. OK, I bet that statement is confusing, so let me elaborate. Computers trick their human friends into thinking they can do more than one thing at a time by switching between jobs really quickly. Think of it like someone giving you 5 things to do: you prioritize them by what's most important, then you do a little part of one thing at a time. Having 2 cores is like having an assistant who can help you work on 2 of the current things at once, so they get done faster. Of course, in a computer it's much more complex, but the idea is the same.

Having more CPUs/cores only allows you to run more programs at once; it does not increase the speed of any one application. When you write software, unless it's a driver or a kernel, I don't see why you would care about the hardware. To a programmer the hardware is just a black box that he can do things with. Think of it like a train: putting another engine on it doesn't make the train faster, it lets it pull more cars faster than a single engine could.
 
Old 12-08-2005, 11:34 PM   #22
leandean
Member
 
Registered: Oct 2005
Location: Burley, WA
Distribution: Sabayon, Debian
Posts: 278

Rep: Reputation: Disabled
Very true. However, it's the software that distributes the tasks to the available processor resources. In theory, two cores should double performance, but without proper instructions this cannot occur.
 
Old 12-08-2005, 11:41 PM   #23
purelithium
Member
 
Registered: Oct 2005
Location: Canada
Distribution: Mandriva 2006.0
Posts: 390

Rep: Reputation: 30
Quote:
Originally Posted by exvor
Reading this thread, I'm getting the impression that many who are responding don't have a good understanding of how a computer works.
On the contrary: as an engineer, I have a good understanding (though by no means a complete one) of the way computers work and handle tasks.

Quote:
Having more CPUs/cores only allows you to run more programs at once; it does not increase the speed of any one application
This statement is correct if you take current (mainstream, common) programming standards into account. This is what I meant when I said:

Quote:
one process takes up 99% of one of my cores while the other [core] sits at 10-15%, that is not making the most of available resources
BUT if you change the way programs are built, then the one task performed by one program can be broken down into smaller tasks (threads) and delegated to separate processing units. This is not a new idea, as has been said before; it is just not mainstream practice. The Linux kernel does this in a rudimentary way by splitting processes between the cores, rather than the processes splitting themselves across the available cores.
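As an aside, a Linux process can even steer itself onto a specific core with glibc's sched_setaffinity() extension -- a minimal hypothetical sketch, assuming a 2.6-era kernel and a glibc recent enough to have the current prototype (early versions used a different one), and noting that mainstream applications rarely do this explicitly:

Code:
/* toy sketch: pin the calling process to the second core (CPU 1) */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    CPU_ZERO(&mask);
    CPU_SET(1, &mask);           /* allow only CPU 1 */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
        perror("sched_setaffinity");

    /* ...CPU-bound work here now stays on that core... */
    return 0;
}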

To extend your engine analogy further: this is like giving each wheel of a vehicle its own individual engine, so that the same overall task is still being done, but the work is spread over multiple engines to take the strain off any single one. This allows the engines to work together to propel the vehicle faster or farther.
 
Old 12-09-2005, 12:45 AM   #24
exvor
Senior Member
 
Registered: Jul 2004
Location: Phoenix, Arizona
Distribution: Gentoo, LFS, Debian, Ubuntu
Posts: 1,537

Rep: Reputation: 87
Well, yeah -- if by "software" you mean the operating system, because I'm pretty sure the applications don't.
 
Old 12-09-2005, 09:19 AM   #25
runlevel0
Member
 
Registered: Mar 2005
Location: Hilversum/Holland
Distribution: Debian GNU/Linux 5.0 (“Lenny”)
Posts: 290

Rep: Reputation: 31
Quote:
Originally Posted by Hammett
Now that dual-core processors are taking part of the market (especially those running WinXP Media Center), do you think Linux is really prepared to take advantage of those dual-core processors?
AFAIK Linux has been prepared for multi-core architectures since Linus wrote the first kernels... or at least for a very long time now. Remember that you are talking about the OS that is the standard choice for today's supercomputing.

You will only need to configure the "SMP" options in the kernel, or use an off-the-shelf SMP kernel. That's all.
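As a quick sanity check once that kernel is booted, a program can ask how many processors the kernel actually sees -- a minimal sketch using glibc's sysconf extension (cat /proc/cpuinfo shows the same information):

Code:
/* toy sketch: ask the running kernel how many CPUs are online */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);  /* glibc extension */
    printf("online processors: %ld\n", n);   /* 2 on a dual-core box */
    return 0;
}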
 
Old 12-09-2005, 11:08 AM   #26
purelithium
Member
 
Registered: Oct 2005
Location: Canada
Distribution: Mandriva 2006.0
Posts: 390

Rep: Reputation: 30
If you read the rest of this thread, you would understand that he wasn't talking about Linux as a kernel, but about Linux as an entire computing environment (WMs, applications, etc.).
 
Old 12-09-2005, 11:41 AM   #27
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Rep: Reputation: 492
Quote:
AFAIK Linux has been prepared for multi-core architectures since Linus wrote the first kernels...
Actually not; Linux kernels weren't multi-CPU aware at all for many years and many rewrites. The first kernels were barely able to run bash and gcc, which was an achievement anyway. I remember running the Linux 0.01 and 0.02 kernels on a 386 PC, and that was epic.

At that time, only SVR4 was providing SMP in the Unix arena.

BSD wasn't, by design -- one of the reasons it was dropped by most if not all major Unix hardware vendors.

Quote:
Remember that you are talking about the OS that is the standard choice for today's supercomputing.
One can argue Solaris is more advanced in that area.
 
Old 12-09-2005, 12:45 PM   #28
foo_bar_foo
Senior Member
 
Registered: Jun 2004
Posts: 2,553

Rep: Reputation: 53
Everybody seems to be saying that "applications" need to take more advantage of parallelism, yet everybody making that point is totally lacking in details about how to do it. Someone even says something about 64-bit without a single word about what "optimizing" for it would entail. Using 64 bits properly is a compiler thing and has next to nothing to do with application programming. Type "optimize for 64 bit" into Google and all you get is a bunch of Intel links anyway!

First, as I said earlier, "applications" do take advantage of parallelism WHERE IT IS APPROPRIATE! If you can interact with a GUI and have it not freeze while a task is being performed, then you are witnessing parallelism and multithreading in your application. When you download a web page and at the same time get GUI feedback on progress, and at the same time can still interact with menus, that's about as multithreaded as the application can be.

Most if not all GUI-type desktop applications are, by their nature of interaction with humans, linear and sequential. You, the user, essentially do one thing at a time with these applications. So yes, the applications in turn do one thing at a time and do not use parallel resources! Duh.

It may be possible to go all "hyper" with threading and get these applications to use dual resources a little more (not a lot more), but the result would be a lot of wasted overhead and locking to re-create the illusion of linear behaviour in a parallel setting, and the application would run slower, not faster. Generally, simple inline sequential code runs faster than anything else. There is also an overhead issue when you schedule a bunch of small tasks, because you begin to thrash the data in the CPU's memory caches.
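To make that overhead concrete, here is a toy sketch (hypothetical, not from any real application): two threads that must serialize on one mutex spend their time fighting over the lock and bouncing the shared counter's cache line between the cores, and can easily run slower than a single sequential loop doing all the increments itself:

Code:
/* toy sketch: "parallel" code that serializes on a single lock
 * (link with -lpthread); compare its wall time with a plain loop */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long counter;

static void *bump(void *arg)
{
    unsigned long i;
    (void)arg;
    for (i = 0; i < 10000000UL; i++) {
        pthread_mutex_lock(&lock);    /* contended on every step */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %lu\n", counter);
    return 0;
}

Time it against one plain loop doing 20,000,000 increments and the "parallel" version is likely to lose.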

One person does try to provide a vague reference to particulars:
Quote:
if you change the way programs are built, then the one task performed by one program can be broken down into smaller tasks (threads) and delegated to separate processing units
First, we MUST know what we are talking about here, rather than accepting advertising logic. I just did "grep -R fork ./*" on the Mozilla source code and got something on the order of 300 references. Then I did "grep -R thread ./*" on the Mozilla source code, and I can't even scroll back to the original command to see how many lines it returned -- most of them scrolled away, and I'm using rxvt set to rxvt*saveLines: 10000.

And this is what is being referred to here as "single threading" from the stone age!

This is an indicator of just how much untrue garbage is being tossed around here, with bizarre statements like:
Quote:
What I mean by "single-threaded" is that most software applications (I'm not talking about OS process management or the like) can only take advantage of the processing power of one processing unit (logical or physical) at a time.
It's simply not true -- no more true than the claim that Iraq's what's-his-name was trying to overthrow America. It's just not true at all, period.
 
Old 12-11-2005, 08:35 PM   #29
Hammett
Senior Member
 
Registered: Aug 2003
Location: Barcelona, Catalunya
Distribution: Gentoo
Posts: 1,074

Original Poster
Rep: Reputation: 59
Reading this thread, I realise:

1.- Not every application (say ls, say the Gimp) HAS to be multithreaded.
2.- Nobody knows where multithreaded software is headed.

When I made the first post, my idea was to ask you people where you think software development will go: will multithreading become common wherever it is needed, or will software stay as it is now, with nobody optimizing because it simply runs and it's the kernel's job to distribute the load across the cores?
 
  

