LinuxQuestions.org (/questions/)
-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   -   can i perfrom a parrale computation with a P4 with "HT" technology (https://www.linuxquestions.org/questions/programming-9/can-i-perfrom-a-parrale-computation-with-a-p4-with-ht-technology-357067/)

ztdep 08-25-2005 09:49 PM

can i perfrom a parrale computation with a P4 with "HT" technology
 
i want to know is the "HT" equavlent to the two cpu.

and i want conduct the parralel computation with fluent on the "HT" P4 machine/

regards

FreeThinkerJim 08-26-2005 02:53 AM

Re: can i perfrom a parrale computation with a P4 with "HT" technology
 
Quote:

Originally posted by ztdep
i want to know is the "HT" equavlent to the two cpu.
Not exactly. What hyperthreading does is allow two threads to be executed in parallel on one physical chip. A thread is like a small program within a program that carries out a particular task. If the program you're using splits its computations across more than one thread, you should be able to run them in parallel on an HT chip.
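
To make that concrete, here is a minimal sketch of a two-thread computation using POSIX threads. It is purely illustrative (the workload and names are invented, and it says nothing about how Fluent itself parallelizes): each thread crunches numbers independently, so on an HT chip the kernel can schedule the two threads on the two logical CPUs at the same time.

Code:

/* Illustrative only: two POSIX threads doing independent number-crunching.
 * On a Hyper-Threading P4 the kernel sees two logical CPUs and can run
 * both threads at once; on a plain single CPU they simply time-slice. */
#include <pthread.h>
#include <stdio.h>

#define N 50000000UL

static void *crunch(void *arg)
{
    double sum = 0.0;
    for (unsigned long i = 1; i <= N; i++)   /* made-up workload */
        sum += 1.0 / (double)i;
    *(double *)arg = sum;
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    double r1 = 0.0, r2 = 0.0;

    pthread_create(&t1, NULL, crunch, &r1);
    pthread_create(&t2, NULL, crunch, &r2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("results: %f %f\n", r1, r2);
    return 0;
}

Build with gcc -O2 -pthread; the point is simply that the work is split into threads the scheduler can place on separate logical CPUs.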

archtoad6 08-26-2005 01:24 PM

Hope I'm not too tactless -- this isn't IM, IRC, or SMS here; please learn to spell "I" & "parallel" correctly.

Thanks

sundialsvcs 08-26-2005 01:49 PM

No, hyperthreading is not equivalent to "two CPUs." Probably the best way to think about it is that it lets task-switching happen faster -- or, put better, lets two tasks partially overlap: the chip uses on-board execution units that would otherwise sit idle.

From answers.com:
Quote:

Hyper-Threading Technology (HTT) is Intel's trademark for their implementation of the simultaneous multithreading technology on the Pentium 4 microarchitecture. It is basically a more advanced form of Super-threading that first debuted on the Intel Xeon processors and was later added to Pentium 4 processors. The technology improves processor performance under certain workloads by providing useful work for execution units that would otherwise be idle, for example during a cache miss.
The advantages of Hyper-Threading are listed as improved support for multi-threaded code, allowing multiple threads to run simultaneously, improved reaction and response time, and increased number of users a server can support.
Hyper-Threading works by duplicating certain sections of the processor -- those that store the architectural state -- but not duplicating the main execution resources. This allows a Hyper-Threading equipped processor to pretend to be two "logical" processors to the host operating system, allowing the operating system to schedule two threads or processes simultaneously. Where execution resources in a non-Hyper-Threading capable processor are not used by the current task, and especially when the processor is stalled, a Hyper-Threading equipped processor may use those execution resources to execute the other scheduled task. (Reasons for the processor to stall include a cache miss, a branch misprediction and waiting for results of previous instructions before the current one can be executed.)
Except for its performance implications, this innovation is transparent to operating systems and programs. All that is required to take advantage of Hyper-Threading is symmetric multiprocessing (SMP) support in the operating system, as the logical processors appear as standard separate processors.
However, it is possible to optimize operating system behaviour on Hyper-Threading capable systems, such as the Linux techniques discussed in Kernel Traffic (http://www.kerneltraffic.org/kernel-...threading.html). For example, consider an SMP system with two physical processors that are both Hyper-Threaded (for a total of four logical processors). If the operating system's process scheduler is unaware of Hyper-Threading, it would treat all four processors the same. As a result, if only two processes are eligible to run, it might choose to schedule those processes on the two logical processors that happen to belong to one of the physical processors. Thus, one CPU would be extremely busy while the other CPU would be completely idle, leading to poor overall performance. This problem can be avoided by improving the scheduler to treat logical processors differently from physical processors; in a sense, this is a limited form of the scheduler changes that are required for NUMA systems.
According to Intel, the first implementation only used an additional 5% of the die area over the "normal" processor, yet yielded performance improvements of 15-30%.
and...
Quote:

Simultaneous multithreading, often referred to as SMT, is a technique for improving the overall efficiency of the hardware that executes instructions in a computer. This hardware is typically called the CPU. SMT permits multiple independent threads of execution to better utilize the resources provided by modern processor architectures.
Normal multithreading operating systems allow multiple processes and threads to utilize the processor one at a time, giving its exclusive ownership to a particular thread for a time slice in the order of milliseconds. Quite often, a process will stall for hundreds of cycles while waiting for some external resource (for example, a RAM load), thus wasting processor time.
A successive improvement is super-threading, where the processor can execute instructions from a different thread each cycle. Thus cycles left unused by a thread can be used by another that is ready to run.
Still, a given thread is almost surely not utilizing all the multiple execution units of a modern processor at the same time. Simultaneous multithreading allows multiple threads to execute different instructions in the same clock cycle, using the execution units that the first thread left spare. This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads can be decided by the chip designers, but practical restrictions on chip complexity usually limit the number to 2, 4 or sometimes 8 concurrent threads.
This technique dates to the 1950's. :study: ... An excellent timeline is available at http://www.cs.clemson.edu/~mark/multithreading.html.
The Denelcor HEP is notable, as is the Stellar GS-1000 which may have been when field installations of commercial SMT machines first reached the hundreds.
Early notable machines from the 1950's are the Bull Gamma 60 and Honeywell 800.
Every decade seems to rediscover the technique and put a new spin on it. :) Although it is primarily a throughput-enhancement technique, it has constantly been touted as "hiding latency", while not changing anything about the latency of any operation.
The actual effectiveness of it, then, is decidedly a mixed-bag. It's been written about several places. Here's probably the most-thorough writeup:
http://www-128.ibm.com/developerwork...library/l-htl/

The type of workload apparently matters a great deal. Single-user workload models showed no improvement, and some operations slowed down by as much as 30%. Various types of server performance fared much better. I would think that multi-CPU systems (that is, systems which actually needed to be multi-CPU and which could profitably exploit multiple engines) would have the best chance of seeing real benefit.
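
As the quoted material says, the logical processors show up to the operating system as ordinary separate CPUs, and an HT-aware scheduler has to tell siblings on one physical package apart from genuine second processors. Here is a minimal sketch of how to see that from user space on Linux, assuming the common sysfs topology files exist (their presence and format depend on the kernel version):

Code:

/* Sketch only: list each online CPU and its sibling set, assuming the
 * usual Linux sysfs layout.  HT siblings of one physical package share a
 * thread_siblings_list; CPUs on different packages do not. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs the OS sees */
    printf("logical CPUs online: %ld\n", ncpu);

    for (long cpu = 0; cpu < ncpu; cpu++) {
        char path[128], buf[64];
        FILE *f;

        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%ld/topology/thread_siblings_list",
                 cpu);
        f = fopen(path, "r");
        if (!f)
            continue;               /* older kernels may not expose this */
        if (fgets(buf, sizeof buf, f))
            printf("cpu%ld siblings: %s", cpu, buf);
        fclose(f);
    }
    return 0;
}

On a single HT P4 this typically reports two logical CPUs that are siblings of each other, which is exactly the distinction an HT-aware scheduler is expected to take into account.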

sirclif 08-26-2005 03:21 PM

Quote:

Originally posted by archtoad6
this isn't IM, IRC, or SMS here
...or English class?

addy86 08-27-2005 01:25 AM

Quote:

Originally posted by sirclif
...or English class?
That's right, but reading and understanding posts will become a challenge if you have to decrypt every single word (especially for non-native English speakers like me).
By the way, if the person with the question doesn't have the time to write with as less spelling errors as possible, why should the person with the answer have the time to write at all?

syg00 08-27-2005 02:15 AM

Quote:

Originally posted by sundialsvcs
The type of workload apparently matters a great deal. Single-user workload models showed no improvement and some operations slowed-down by as much as 30%. Various types of server-performance fared much better. I would think that multi-CPU systems (that is, systems which actually needed to be multi-CPU and which could profitably exploit multiple engines) would have the best chance of seeing real benefit.
Nice answer sundialsvcs.
Given the OP asks about parallel processing, presumably s/he has well-behaved, multi-threaded code. As such, I would expect it to see a reasonable improvement - similar to "server" code, which also generally fits this profile.

Not a 100% improvement (or even close, as one might see with two of everything), but useful nonetheless.

KimVette 08-27-2005 12:29 PM

Measurable, maybe.
Noticeable? Doubtful.

Go SMP and if your threaded app executes threads in parallel, you can reap an 80-100% performance increase.
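
To compare the two cases directly, a sketch like the one below pins two worker threads to explicit logical CPU numbers, so the same job can be run once on the two HT siblings of a single P4 and once on two separate physical CPUs of an SMP box. The CPU IDs are placeholders (which numbers are siblings is machine-specific; see the topology listing earlier in the thread), and pthread_setaffinity_np is a GNU/Linux extension:

Code:

/* Sketch only: pin two worker threads to chosen logical CPUs.  Run it with
 * the IDs of two HT siblings, then with the IDs of two physical CPUs, and
 * compare the elapsed times.  CPU numbers 0 and 1 are placeholders. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    volatile double x = 0.0;
    for (unsigned long i = 0; i < 100000000UL; i++)
        x += i * 0.5;               /* made-up busy work */
    return NULL;
}

static void pin(pthread_t t, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    if (pthread_setaffinity_np(t, sizeof set, &set) != 0)
        fprintf(stderr, "could not pin thread to cpu %d\n", cpu);
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pin(a, 0);                      /* placeholder CPU IDs */
    pin(b, 1);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Pinning right after creation is good enough for a rough comparison; a tidier version would set the affinity in a pthread_attr_t before the threads start.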

angkor 09-07-2005 05:11 PM

Quote:

Originally posted by addy86
......to write with as less spelling errors as possible, why should the person with the answer have the time to write at all?
Ouch! :rolleyes: What about errors in grammar?

jlliagre 09-07-2005 05:50 PM

Quote:

What about errors in grammar?
Please enlighten those of us for whom "as less as possible" seems acceptable English.

KimVette 09-07-2005 07:24 PM

Quote:

Originally posted by jlliagre
Please enlighten those of us for which "as less as possible" seems acceptable english.
It would be "as few as possible".

For example: when you refer to "less people" you are actually referring to, say, a measurement of soylent green, whereas "fewer" would indicate you are referring to a number of whole people. :D

jlliagre 09-07-2005 07:44 PM

Got it, thanks.

addy86 09-08-2005 04:08 AM

Quote:

Originally posted by angkor
Ouch! :rolleyes: What about errors in grammar?
They don't count :D But you're right.

However, I don't expect anybody to write without errors, but I do expect them at least to try.

sundialsvcs 09-09-2005 10:59 AM

I think this was a hyperthreading thread, not an English class. :rolleyes:

I have my doubts that a parallel computation would benefit from HT, but of course it depends entirely upon how wide the computation is, how many FPUs there actually are on-board the chip, and how "smart" Intel's HT implementation actually is.

I suppose the only way to know for sure is to benchmark it with the actual app.
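
A benchmark along those lines does not have to be elaborate. The sketch below (with an invented, fixed amount of work) just times the same job split across one and then two threads using gettimeofday; if the two-thread run is noticeably faster on an HT-only machine, the workload is of the kind HT actually helps. The real test is still the actual application.

Code:

/* Sketch only: time a fixed amount of made-up work split across 1 and
 * then 2 threads.  A clear drop in wall-clock time with 2 threads on an
 * HT-only machine means the workload benefits from the second logical CPU. */
#include <pthread.h>
#include <stdio.h>
#include <sys/time.h>

#define TOTAL 200000000UL

static void *chunk(void *arg)
{
    unsigned long n = *(unsigned long *)arg;
    volatile double x = 0.0;
    for (unsigned long i = 0; i < n; i++)
        x += 1.0;                   /* stand-in for the real computation */
    return NULL;
}

static double run(int nthreads)
{
    pthread_t t[2];
    unsigned long per = TOTAL / nthreads;
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, chunk, &per);
    for (int i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);
    gettimeofday(&t1, NULL);

    return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

int main(void)
{
    printf("1 thread : %.2f s\n", run(1));
    printf("2 threads: %.2f s\n", run(2));
    return 0;
}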

