Quote:
Originally Posted by foo_bar_foo
SMT is a kernel extension for SMP designed for better scheduling on intel dual core as opposed to actual dual processors. (processor switching is severly limited)
|
No; SMT (simultaneous multithreading) is hyperthreading.
Dual core CPUs do REAL SMP. They are "actual dual processors" - just on one die. The drawback cited with dual cores is a bottleneck to the memory bus when both processors need to fetch data from RAM.
Quote:
SMT might not be apropriate for AMD ?? i think amd dual core is actually two prcessors with seperate cache (seperate bus ?) who knows.
|
Let's straighten this out:
Intel dual core processors = two actual processors on one die
AMD dual core processors = two actual processors on one die
Intel quad core processors (announced) = four actual processors on one die
Hyperthreading (HT, or SMT) = two "virtual" processors handled by one actual processor. (And yes, some Intel dual cores also do hyperthreading, which gives you two "actual" processors but four "virtual" processors; with the upcoming quad-core Xeons, eight "virtual" processors.)
As far as I know, no AMD processors do hyperthreading, which is arguably not a disadvantage.
Last time I checked was a month ago; I haven't looked at AMD's roadmap since. Honestly, SMT was designed to work around the Pentium 4's inherent design flaws, and since Intel has been going back to Pentium III technology with the latest Centrino family (Pentium M), I don't think hyperthreading will be around much longer. Either that, or its implementation will be significantly different if it makes it to the Pentium M line.
Quote:
not sure about the other
I don't mean to be
what it seems you may be measuring -- not sure -- is non realtime context switching overhead or latency plus scheduling latency. obviously the latency
will increase from the schedular as the cue becomes larger.
|
I don't mean to be crass, but you need to look up some terms before you start using them. Look up latency, thread scheduling, and context switching before you use those terms. Also look up cue vs. queue.
Quote:
2 cues instead of 1 ? it's an interesting idea.
|
I think your post is a cue that you should check out wikipedia or howstuffworks, and there won't be much of a queue there, so you should be able to get right into those sites.
(just demonstrating cue vs. queue here, in an attempt to be funny, don't read this as a flame please!)
Quote:
rather than mutex lock you might try reading and writing something like tomeofday to and from a pipe ? just a thought.
i would also try fork instead of thread to see the difference
like i said before because dual core chips are generally (at least) intel using the same memory cache
|
Again, flat-out wrong. Go read ANY Intel specs, including marketing slicks intended for laypersons. You will see that the dual cores' processors have independent L1 caches and independent L2 caches, just like any halfway-intelligent multiple-core implementation should have. What they DO share is a common bus to system memory, which is an inherent weakness of nearly any multicore chip. Still, it's a vast improvement: a literal doubling of processor power per socket, which means a dual-processor socket/slot system can now achieve true quad-processing.
Quote:
and since threads as opposed to process do not actually get a new copy of address space what you might be seing also is cpu afinity ?
|
Another term for you to look up: affinity.
Affinity has nothing to do with address space. Affinity = which processor a thread is assigned to - and that assignment might not even be static, FYI. The OS's thread scheduler may decide that thread x8abe62 is on CPU1 for one cycle, and might move it to another processor for the next cycle, and you will never know it, because there is no reason for you to know unless you're the kernel or the MMU. Transparency is the whole point of SMP (and SMT), so you can focus on coding your application and managing your own threads rather than worrying about optimizing the OS's management of the actual scheduling.
Memory/address space = where in system RAM your program's data (executable, variable/pointer data, etc.) is located.
Quote:
if you are using SMT scheduling in the kernel to prevent thrashing the schedulat might be sceduling all threads on the same cpu basically.
it is an interesting line of inqury thats for shure.
|
If you're trying to drive the thread management yourself and override the kernel's thread manager, you're heading for a race condition at best, or data corruption and/or a kernel panic at worst.