I work on a simulator product in which multiple threads (when used) are all doing essentially the same thing. They might all be limited by floating-point performance, so enabling hyperthreading slightly reduces throughput, or they might all be limited by cache misses, in which case enabling hyperthreading dramatically reduces throughput. The two hardware threads on a core share its floating-point units and caches, so identical threads contend for exactly the resource that already limits them. I don't know of any case of any of my employer's simulator products running better with hyperthreading enabled.
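To make those two contention modes concrete, here is a minimal sketch (purely illustrative, not code from our product; the 4-core/8-thread machine, array size, and iteration counts are all assumptions you'd adjust for your hardware). It pits an FP-bound kernel with a serial dependency chain against a pointer-chasing kernel whose working set is far larger than any last-level cache, timing each at the physical-core count and the logical-core count:

```cpp
// Minimal sketch of the two contention modes -- illustrative only.
// Assumed machine: 4 physical cores / 8 hyperthreads.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <thread>
#include <utility>
#include <vector>

// FP-bound: a serial chain of multiply-adds, limited by FP-unit latency.
static double fp_kernel(long iters) {
    double x = 1.0;
    for (long i = 0; i < iters; ++i)
        x = x * 1.0000001 + 1e-9;  // each step depends on the previous one
    return x;
}

// Cache-miss-bound: walk a random single-cycle permutation much larger
// than the last-level cache, so nearly every load is a miss.
static std::size_t chase_kernel(const std::vector<std::size_t>& next,
                                long steps, std::size_t start) {
    std::size_t i = start;
    for (long s = 0; s < steps; ++s)
        i = next[i];
    return i;
}

// Run work(thread_index) on n threads; return wall-clock seconds.
template <typename F>
static double timed(unsigned n, F work) {
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) pool.emplace_back(work, t);
    for (auto& th : pool) th.join();
    return std::chrono::duration<double>(
        std::chrono::steady_clock::now() - t0).count();
}

int main() {
    // Sattolo's algorithm: a random permutation that is one big cycle,
    // so the chase cannot fall into a short loop that fits in cache.
    const std::size_t N = std::size_t{1} << 24;  // 16M entries = 128 MB
    std::vector<std::size_t> next(N);
    std::iota(next.begin(), next.end(), std::size_t{0});
    std::mt19937_64 rng{42};
    for (std::size_t i = N - 1; i > 0; --i) {
        std::uniform_int_distribution<std::size_t> d(0, i - 1);
        std::swap(next[i], next[d(rng)]);
    }

    for (unsigned n : {4u, 8u}) {  // physical cores vs. logical cores
        double fp = timed(n, [](unsigned) {
            volatile double r = fp_kernel(500'000'000); (void)r;
        });
        double cm = timed(n, [&](unsigned t) {
            volatile std::size_t r =
                chase_kernel(next, 20'000'000, (t * N) / 8);
            (void)r;
        });
        std::printf("%u threads: FP-bound %.2fs, cache-miss-bound %.2fs\n",
                    n, fp, cm);
    }
}
```

Build it optimized (e.g. g++ -O2 -std=c++17 -pthread). If a mode contends under hyperthreading, total wall time at 8 threads won't be much better than at 4, and can be worse; the single-cycle permutation is there so the pointer chase can't collapse into a cache-resident loop and quietly stop missing.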
BUT I actually spend a lot more computer time recompiling the product than running it. I often work on key parts of the code that are templated in .hpp files on which a large fraction of a very large project depends, so testing a small change requires recompiling a massive amount of code.
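For anyone who hasn't hit this: template bodies have to be visible at every point of instantiation, so they live in headers rather than in .cpp files. A hypothetical miniature of the situation (names invented) looks like this:

```cpp
// stepper.hpp -- hypothetical miniature, not code from our product.
// The template body must be visible wherever step() is instantiated,
// so it lives in the header; there is no .cpp file to hide edits in.
#pragma once

template <typename State>
State step(const State& s, double dt) {
    // Changing even this one line marks every translation unit that
    // includes this header (directly or transitively) as out of date.
    return s + dt * s;
}
```

When hundreds of .cpp files include such a header, directly or through other headers, a one-line tweak triggers a near-full rebuild, which is exactly the workload where parallel compilation throughput matters.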
All my tests with our build system show that running a lot of compilers in parallel with hyperthreading enabled gives better throughput than running the same number or fewer without it. Most of that testing has been under Windows, but the testing I have done under Linux shows similar results.
I expect there are a lot of other workloads in which hyperthreading helps. Running many compilers in parallel is surely not the workload hyperthreading was invented for, yet it benefits anyway.
Originally Posted by suicidaleggroll
I have a multi-processor modeling program at work.
"modeling" can mean enough different things that I shouldn't guess the performance characteristics of that product from that word. But based on my experience with a variety of different simulators (semiconductor, electromagnetic, mechanical, thermal, fluid, chemical, etc.) I think you might also be in some area where hyperthreading is harmful. That doesn't mean such results apply to everyone.