Old 05-05-2014, 02:43 PM   #16
Atterus
LQ Newbie
 
Registered: Apr 2014
Distribution: Redhat 6.9
Posts: 13

Original Poster
Rep: Reputation: Disabled

Alright, I got a few more specifics from the boss...

It looks like the main program we are using "pulls in large data matrices using RAM, then does a straight crunching of the numbers" (I should clarify, so I don't look incompetent, that this machine is for his work, not mine). It isn't software that was purchased; it was all written in-house over the course of decades, apparently in Fortran 66 (I believe it opens multiple other programs, but he wasn't sure). It ultimately processes thousands of combinations of FTIR spectra and identifies the best ones using an algorithm I can't really discuss. It is written in old Fortran, and according to my adviser (since I don't do Fortran) it alternates between floating point modes to optimize itself. Unfortunately, the programs we have do not support CUDA cores as suggested above, since I believe the programs were written WAY back in the day. I told him we could migrate the programs to run with CUDA cores, but he wants to "keep it simple" (I'm looking into that for my own PC now though, didn't know I could do that). Like I said, I think he is thinking of the days when 500 MB of RAM was massive, and he was amazed by the prospect of a single six-core CPU.

Something he also told me, and wishes he had sooner, is that our existing machine DOES have two dual-core Xeon CPUs, but he can only get one CPU to "work" (as in, the other is still working and... processing, but not interfacing with the program somehow). He says each iteration of optimization takes about 30 seconds (and there are a lot of iterations). I haven't checked it myself, but that immediately puts me off trying a multi-CPU system, unless the motherboard or the program he wrote is just messed up, but he IS really good at programming. I'm confident that we don't need something on the scale of genome sequencing or biometric processing, since we are only dealing with relatively low-dimensional data. I definitely think that fewer, more powerful cores and a large amount of RAM would be preferable, especially since he was effectively using a dual core at a "moderate" pace before. From that talk, it seems like unlike in my work, his work uses Matlab more as a means to an end rather than as the main program (that CUDA processing stuff is making me giddy).

Sorry if it sounds like I'm ignoring advice; like I said, I definitely am looking into a lot of the suggestions for my own use, but I think my adviser is more interested in a high-end PC he can upgrade and fix himself down the road. I figure that if he was running alright on an old dual core, a six-core high-end CPU with a ton of RAM would blow his mind for the next decade or so. I agree I should have asked more questions about the project initially, especially since the "requirements" he is now giving kinda make the task a lot simpler than I envisioned (I was also imagining genome-sequencing power and TITAN clusters initially lol).

But still, the question then remains whether I should be focusing on Xeon or i-series CPUs (since I'm thinking I only need at most eight cores) and whether the RAM frequency makes a difference here, since I've never bothered to go above 1600.

Thanks for all of your advice and suggestions!
 
Old 05-05-2014, 03:13 PM   #17
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2143
Quote:
Originally Posted by Atterus View Post
But still, the question then remains whether I should be focusing on Xeon or i-series CPUs (since I'm thinking I only need at most eight cores) and whether the RAM frequency makes a difference here, since I've never bothered to go above 1600.

Thanks for all of your advice and suggestions!
i# = desktop consumer grade crap
xeon = server hardware

You pay more for the server-level equipment, but if you want it to last more than a couple of years, that's the only place you should be looking. Ignore ALL i3/i5/i7, non-ECC, consumer-level hardware unless you want to be replacing failed motherboards, video cards, and/or RAM every few years. Sometimes you can get lucky and the consumer-grade hardware will last longer, but don't count on it.

You want to pay close attention to bus speed (QPI/DMI). Memory speed doesn't matter so much as long as it's fast enough to saturate the bus. Don't populate all of the DIMM slots either; leave some open so your boss can add more down the road without having to swap all of it out. The motherboard manual should tell you what the maximum necessary speed is for different DIMM layouts.

Check out this page for Intel's processor options, namely the Ivy Bridge-EP uni-processor or dual-processor:
http://en.wikipedia.org/wiki/List_of...icroprocessors

The E5-1650v2 or E5-1660v2 would probably be a good option, depending on how the budget stacks up once you take into account the chassis, mobo, RAM, and storage. You may even be able to swing the E5-1680v2 depending on the rest of the loadout. I prefer Supermicro for the mobo, chassis, and power supply, but there are other good options as well.

You still haven't mentioned form factor (rack vs tower), noise level, storage capacity/speed, redundancy, video capability (number of monitors), networking requirements, etc.

Last edited by suicidaleggroll; 05-05-2014 at 03:18 PM.
 
Old 05-05-2014, 03:22 PM   #18
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
That calls into question the Fortran program. How threaded is it, if at all? If it is not threaded, then fewer cores would be better.

I'd say a high-end desktop with an i5 processor and 16-32 GB of RAM should do well. You probably don't need the i7, because all it really adds is hyperthreading and more power. If your program is not highly threaded, I doubt that power will be put to any use. You could also get an E5-based server board with plenty of RAM, as suicidaleggroll suggests.

Last edited by metaschima; 05-05-2014 at 03:23 PM.
 
1 member found this post helpful.
Old 05-05-2014, 03:37 PM   #19
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Sounds like "the boss" is giving you confused and contradictory information. You should nail down the facts before making decisions.

Quote:
Originally Posted by Atterus View Post
- Built for multiple high speed tasks
Quote:
Originally Posted by Atterus View Post
it was all written in-house over the course of decades, apparently in Fortran 66...
Something he also told me, and wishes he had sooner, is that our existing machine DOES have two dual-core Xeon CPUs, but he can only get one CPU to "work" (as in, the other is still working and... processing, but not interfacing with the program somehow).
So he (and apparently you also) do not understand the basics of multi-processing.

Your old program only has one thread, so it will only ever use one core of one CPU. When that program is running alone on the system, a small part of another core is used for various OS bookkeeping and all the rest of the cores are WASTED.

If you want to run one instance at a time of an old program, the best you can do is the highest clock rate you can get. Two CPU-sockets is a waste. More than two cores is probably unavoidable, but also a waste.

But if many different users were running instances of that same program at the same time, then you make good use of as many cores as you have simultaneous copies running.

You could rewrite the program to use multiple threads for one instance of the program. But you have made it pretty clear that is not in the plan.

Quote:
his work uses Matlab more as a means to an end
That is not so informative, so let me guess. In my own work, Matlab is used to examine the results (search, sift, rearrange, plot, get statistics and outliers, etc.) but not to compute the results. My results are computed with C++ code and would be far too inefficient if computed in Matlab. So my guess is your "boss" is using Matlab in a similar way to mine. Even if the old Fortran could run faster translated to Matlab, it wasn't coded that way. Bottom line of my guess: Matlab is used for the parts of the task where you need flexibility, not speed.

Quote:
Originally Posted by Atterus View Post
It looks like the main program we are using "pulls in large data matrices using RAM, then does a straight crunching of the numbers"
We have "large" and we seem to have "many years ago". Multiply those together and you get "small". Knowing how big in MB would help. Also knowing what "crunching" means would help. Some kinds of crunching need very random access to the data. Other kinds don't.

With very random access, "small" might mean any L2 cache is enough, or it might mean no L2 cache is enough, or it might mean there is a very big performance difference available from choosing the CPU with the largest L2 cache you can get in your price range.

For size, I know you are thinking of total RAM. Maybe I'm wrong about "many years ago", and/or he wants to have a lot of users sharing the machine. In that case you do need to work the numbers on the RAM required. But my guess is that 16GB is enough, and it would be silly to get less than 16GB even if you don't need that much.

Last edited by johnsfine; 05-05-2014 at 04:02 PM.
 
1 member found this post helpful.
Old 05-05-2014, 04:15 PM   #20
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,353

Rep: Reputation: 3690
If you can't fix the symmetric multiprocessing (SMP) aspect, then you need to spend money on the fastest single-core path. Actually, multiple real cores may help some, since other tasks might be able to run on those cores. Going with a faster processor, one that runs single-threaded apps faster, would be the only way to improve this.

I'd still get a PCIe SSD. Not just an SSD, get a PCIe board.


I'd investigate whether you can run SMP apps, even if you have to buy them.


Different hardware may help, but not at those prices. It would be in the $20K range.

Last edited by jefro; 05-05-2014 at 04:17 PM.
 
Old 05-05-2014, 04:16 PM   #21
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
For a moment, let me guess that the "multiple tasks" at the start of this thread was either misinformation or didn't mean at the same time.

So you want to maximize the speed of old single threaded Fortran code.

Maybe an important part of the purchase is a new Intel Fortran compiler. Combined with a decent new Intel CPU, recompiling the code with a new Intel Fortran compiler will probably do a lot better than using a much more expensive CPU with the old compiler.

I have no clue about academic pricing for Intel's compilers, but probably the prices are not too bad. (I use an Intel compiler commercially under an enterprise-wide license, so I also don't know what it costs commercially, but I think a lot.)

I assume the old code was compiled 32-bit. If you went with a free compiler, that was probably GNU Fortran. GNU is actually pretty bad at 32-bit x86 code generation. Whether you switch to the non-free Intel compiler or not, recompiling with a 64-bit compiler will likely help a lot. You can run an old 32-bit program on a 64-bit Linux quite easily and well. Some programs actually run better that way than if recompiled in 64-bit, but I would give long odds that this is NOT one of those programs. This one will run faster if recompiled for 64-bit.

Again, only if you are focused on running one instance at a time: I think you would get a big payback from using OProfile to find out where the program spends most of its time. With OpenMP, it might be very easy to add multi-threading to a local hot spot of the program without the giant effort needed to widely multi-thread the whole program (a rough sketch of what that looks like is at the end of this post).

If you have multiple users running multiple instances of the program at the same time, you don't buy much by multi-threading each instance. But if only one instance is running and other cores are sitting idle, a little effort at multi-threading a key loop can have a big payback.
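
I obviously haven't seen your code, so treat the following as a minimal sketch rather than anything specific to your program: the routine name, array shapes, and the score() metric are all made up for illustration. The point is just that an independent hot loop over spectra combinations can often be parallelized with a single OpenMP directive (written here in modern free-form Fortran rather than Fortran 66):

Code:
! Hypothetical hot spot: scoring ncomb candidate combinations of spectra.
! All names and shapes here are assumptions for illustration only.
subroutine rank_combinations(ncomb, npts, spectra, scores)
  implicit none
  integer, intent(in)  :: ncomb, npts
  real(8), intent(in)  :: spectra(npts, ncomb)
  real(8), intent(out) :: scores(ncomb)
  integer :: i

  ! Each iteration is independent, so the loop can be split across cores.
  !$omp parallel do schedule(static)
  do i = 1, ncomb
     scores(i) = score(spectra(:, i))   ! score() stands in for the real metric
  end do
  !$omp end parallel do

contains
  pure function score(s) result(v)
    real(8), intent(in) :: s(:)
    real(8) :: v
    v = sum(s * s)                      ! placeholder calculation
  end function score
end subroutine rank_combinations

Build with OpenMP enabled (gfortran -fopenmp, or the corresponding OpenMP switch for the Intel compiler) and set OMP_NUM_THREADS at run time to control how many cores it uses. Without the OpenMP flag, the directives are just comments and the code runs serially as before.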

Last edited by johnsfine; 05-05-2014 at 04:21 PM.
 
Old 05-05-2014, 09:51 PM   #22
Atterus
LQ Newbie
 
Registered: Apr 2014
Distribution: Redhat 6.9
Posts: 13

Original Poster
Rep: Reputation: Disabled
The Fortran program is not hyperthreaded, but it has been compiled to 64-bit. As for the specs you wanted, suicidaleggroll: we would like a tower, noise isn't relevant (giant machine always making a ton of noise in the lab), likely going with a typical 1 TB SSD (the PCIe ones are significantly more expensive), I'm not familiar with redundancy but we have tons of UPSs, we only need one monitor and something capable of minor graphical processing (3D plotting at the 4000-point 3D PCA level), and the networking requirements can be basic; it looks like most of the motherboards have more than we would use anyways.

We definitely want multiple instances of the program running simultaneously, so it's good to hear that hyperthreading wouldn't factor in. johnsfine is right about the use of Matlab, and I was also curious about what exactly "crunching" meant, but at this point I'm assuming he is referring to the calculation step after the matrices are read in. The information about how exactly multiprocessing works really helped; I imagine that the other CPU wasn't doing anything purely because only one instance was running. In that case I can see the dual-CPU route being better, the more you know!

With your helpful comments, we decided to go ahead with the E5 server-component route, since it seemed like a logical step up from the existing system. It also sounds like, since we end up linking all of the computers in our lab together, it would be more useful to have it capable of multitasking efforts besides this one. Since that seems a bit more expensive, I got the budget bumped up a bit to around $4600, but it cannot go over $5k due to restrictions on the funding we have. I think that would let me do two E5-2650 v2s and plenty of RAM, but I'm well aware that "newer =/= better" (I've been rocking a used, overclocked 2500K at home through a few upgrades lol).

As for the 20k machine... it would likely try to lord itself over the other machines in the lab.
 
Old 05-05-2014, 10:46 PM   #23
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2143
Quote:
Originally Posted by Atterus View Post
As for the specs you wanted, suicidaleggroll: we would like a tower, noise isn't relevant (giant machine always making a ton of noise in the lab)
Lots of options there.

Quote:
Originally Posted by Atterus View Post
likely going with a typical 1 TB SSD (the PCIe ones are significantly more expensive), I'm not familiar with redundancy but we have tons of UPSs
UPSs are not related. If there's one thing you can count on a hard drive to do, it's to fail. It might last a year, it might last 5, you may even get lucky and make it to 10, but it's going to fail. SSDs seem better in that regard than HDDs in my experience, but they'll still fail eventually. Assuming it goes without warning and it's a total loss (this is the likely failure mode of an SSD), you might wake up one morning and the drive and everything on it is gone and 100% unrecoverable. The question is what do you want to happen on this day?

1) This is the only drive and you have no backups - it's a total loss, everything on the system is gone forever, all research, all work, all results, all codes, gone.

2) This is the only drive and you do have backups - the drive is a total loss, you need to order a new one and then spend a few days re-installing the OS, compilers, programs, and recovering all of your files from the backups, and you've lost all new data since the last backup (could be a day, could be a month, could be 6 months).

3) This is not the only drive and you have a RAID setup - you wake up to an email letting you know one of the drives in the array has failed. You order a new drive, when it arrives you swap out the bad one, and everything keeps going like normal and nobody is the wiser with zero downtime.

If you're diligent with backups and the few days of recovery time required for #2 are not important, then #2 is a fine option. If you are not diligent with backups, or if you can't stomach a few days of recovery time, then you need #3. The ideal scenario is #3 plus backups, to protect against user stupidity (file deletion) and acts of God in addition to your garden variety hard drive failures.

#1 is not a realistic option for any professional application, so if that's what you're building your budget around, you need to go back to the drawing board.
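
Also keep in mind that RAID-1 doesn't add capacity: two 1 TB drives in a mirror still present 1 TB of usable space; the second drive only buys you the ability to survive a single drive failure.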

On a side note - 1 TB? Total? Are you sure that's enough? 1 TB is fine for a normal machine, but for a processing machine doing database crunching and various numerical operations on large data sets, 1 TB sounds incredibly limiting to me. I can churn out 1 TB of analysis results on a single model run in a single day. My job and usage is I'm sure very different from yours, but with just 2-3 people doing runs and data analysis at my job we mow through about 20-30 TB a year.


Quote:
Originally Posted by Atterus View Post
we only need one monitor and something capable of minor graphical processing (3D plotting at the 4000-point 3D PCA level)
Sounds like your basic $100 Quadro NVS 3xx (300, 310, etc.) would do just fine.

Quote:
Originally Posted by Atterus View Post
and the networking requirements can be basic; it looks like most of the motherboards have more than we would use anyways.
Not an issue then.

Quote:
Originally Posted by Atterus View Post
I think that would let me do two E5-2650 v2s
Do you really need 16 cores? You lose a lot of processing speed by getting that many, going from 3.7 GHz with the E5-1660v2 to 2.6 GHz with the E5-2650v2. You do gain a lot more cores, but if the majority of your applications are single-threaded, how many do you really need? The 1660v2 would run a single-threaded application (and even a multi-threaded application up to 6 threads) around 50% faster than 2x 2650v2; the 2x 2650v2 could just run more of them at the same time. If you don't need the additional cores, then you'd be spending thousands more on a slower computer.
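
To put very rough numbers on it (base clocks only, ignoring turbo, hyperthreading, and memory effects):

single-instance speed: 3.7 / 2.6 ≈ 1.4, so each instance runs roughly 40% faster on the 1660v2
aggregate throughput: 6 x 3.7 = 22.2 core-GHz for one 1660v2 vs 16 x 2.6 = 41.6 core-GHz for 2x 2650v2

So the dual-socket route only pulls ahead once you're regularly keeping somewhere around nine or more instances busy at once (22.2 / 2.6 ≈ 8.5).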

Last edited by suicidaleggroll; 05-05-2014 at 10:56 PM.
 
1 member found this post helpful.
Old 05-05-2014, 11:21 PM   #24
Atterus
LQ Newbie
 
Registered: Apr 2014
Distribution: Redhat 6.9
Posts: 13

Original Poster
Rep: Reputation: Disabled
Thanks for the options!

For the HD, we really only need one, since the computer will be linked up to a separate system dedicated for that exact reason (storage and redundancy). Ironically, the PSU on that died and that was an interesting week. We keep backups of all the programs and scripts we've made on there until we need them on a particular machine. 1 TB on this machine should be fine since the data is usually moved off after it's analyzed, and the data ultimately doesn't take up a lot of space either.

It definitely is going to be a tradeoff between more cores and a faster CPU. We would be running many instances of this particular program, and more running is better, but I imagine at some point it would reach diminishing returns. On the other hand, the higher speed would get fewer instances done faster. We like both, but can't have both with this budget. I'm leaning towards more cores since we can simply have more stuff running, but faster certainly sounds nicer too. I'll just kick that issue to my boss: "more cores, or faster?" I would also go faster, but he wanted "more things running".

EDIT: It also looks like a lot of the motherboards that support the E5-16xx v2 only support "one socket". I'm not sure if that means only one CPU can be used or not. I'm probably just looking in the wrong place or misreading its meaning.

Last edited by Atterus; 05-05-2014 at 11:30 PM.
 
Old 05-06-2014, 07:39 AM   #25
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by suicidaleggroll View Post
If there's one thing you can count on a hard drive to do, it's to fail. It might last a year, it might last 5, you may even get lucky and make it to 10, but it's going to fail.
I won't argue against having good backups. But I find your statement absurdly pessimistic about hard drives. I've had at least one (usually two) workstations on my desk (at various jobs), with at least one (usually three) hard drives per workstation, for about 30 years. In that time ONE hard drive failed on me, and I easily recovered the data. It was a Seagate 40 MB drive (so not very recent) that had been spinning 24/7 for seven years and so had lost the ability to start spinning after a power-off. With those old drives, it just took a bit of skill to manually start it spinning, and then it would keep spinning and work.
The seven-year-old computer I'm typing this on has two drives the same age as the computer, plus one eleven-year-old drive. All have been spinning 24/7 for all those years and, unlike the ancient Seagate drives, they still spin up after a power failure.
I get involved when anyone in my team has computer problems, so I have seen quite a few failed hard drives over the years. But that is from a much larger pool of drives. The average drive is obsolete and discarded before it fails.
 
Old 05-06-2014, 08:08 AM   #26
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Atterus View Post
We definitely want multiple instances of the program running simultaneously, so it's good to hear that hyperthreading wouldn't factor in.
I think you are confusing hyperthreading with multi-threading.

Multi-threading means one instance of the program can make good use of multiple cores. If you are consistently running multiple instances of the program you don't really care. But I find it hard to believe you are running 16 instances at once. So I share suicidaleggroll's doubts about getting more rather than faster cores.

Hyperthreading is an Intel technology, available on some CPUs, that allows the OS to dynamically slice each core in half, creating two virtual cores that each run about half as fast. If you had that enabled in the BIOS (an important choice to make) and you had 16 real cores and 16 instances of your program running, each would get a real core. But if you added 4 more instances of the program, instead of time-slicing the available cores, the OS would split cores, so at any given instant 12 of the 20 instances would have whole cores and 8 of the 20 would have half cores.

Depending on the application, a "half core" might run significantly faster or slower than half the speed of a whole core. In my Linux work, I spend most CPU time recompiling massive projects for small header file changes. The compiler runs significantly faster than half speed on a half core, so enabling hyperthreading and running twice as many instances of the compiler makes the total recompile run a little faster. But I also run massive simulations sometimes, and almost any large (by modern standards) simulator runs much slower than half speed on half cores. If I have sufficient control to suspend other work and tell the simulation to use only the number of threads for which real cores exist, it is OK to have hyperthreading enabled. But process priority does not stop the OS from slicing cores: if a high-priority task is using your 16 real cores and one or more lower-priority tasks want 16 more cores, the OS will slice them all and the high-priority task will run much slower than half speed. So we need hyperthreading off on the systems that run both high-priority large simulations and anything lower priority.

I will guess your program is unlike a large simulator and likely to run a tiny bit faster than half speed on a half core. But if I'm wrong, you should turn hyperthreading off in the BIOS, because it would be faster to time slice 16 real cores (when running over 16 instances) than to split them.

Last edited by johnsfine; 05-06-2014 at 08:21 AM.
 
1 member found this post helpful.
Old 05-06-2014, 08:11 AM   #27
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Atterus View Post
EDIT: It also looks like a lot of the motherboards that support the E5-16xx v2 only support "one socket". I'm not sure if that means only one CPU can be used or not. I'm probably just looking in the wrong place or misreading its meaning.
Most motherboards only support one CPU. A motherboard that supports two CPUs is significantly more expensive than an ordinary motherboard (and the ones that support 4 CPUs are ridiculously expensive).

Edit: I think what I said in this post is still correct. But suicidaleggroll's answer to the same sub topic (see post #29) is a lot more relevant.

Last edited by johnsfine; 05-06-2014 at 12:47 PM.
 
Old 05-06-2014, 10:12 AM   #28
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2143
Quote:
Originally Posted by johnsfine View Post
I won't argue against having good backups. But I find your statement absurdly pessimistic about hard drives.
It is pessimistic, but pessimism is required if you plan to build a reliable system. Optimism is what gets you into trouble.

I have about 120 drives under my control right now, spread across 16 machines (I think that's right, I may be missing one or two) all running 24/7. Most of the machines are relatively idle most of the time, but 7 of the machines are incredibly active, and they account for 95 of those 120 drives.

In my experience, over the course of one of those active machines' ~10 year lifespan, it generally loses around 1/3 of its drives. I have one machine that's only 6 years old and has already lost 4 of its 9 drives. Of those 7 machines, each one loses a drive every couple of years, which means every year I'm losing about 2-3 of the 95 drives.

I won't build a high-reliability machine with just a single drive; it always gets RAID (and not RAID-0). Building one with a single drive is just asking for problems. And compared with the labor required to recover the data from a failed drive and get a machine back up and running to its previous level, combined with the effect of a few days of downtime while I work through the recovery process, the cost of sticking a second drive in RAID-1 is peanuts.

Now, all of the above experience is in reference to HDDs. I also have a lot of SSDs under my control, but I have yet to lose one. Until they start failing, I can't really comment on their reliability, but so far (just going by the numbers alone) they're better than HDDs; I just don't know how much better.

Last edited by suicidaleggroll; 05-06-2014 at 10:18 AM.
 
Old 05-06-2014, 10:17 AM   #29
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2143
Quote:
Originally Posted by Atterus View Post
EDIT: It also looks like a lot of the motherboards that support the E5-16xx v2 only support "one socket". I'm not sure if that means only one CPU can be used or not. I'm probably just looking in the wrong place or misreading its meaning.
The 16xx series is a single-socket processor; it can't be used in multi-socket systems. Notice they're classified under the uni-processor category on that Wikipedia page, versus the dual-processor category. The Xeons are easy: the first number after the E#- (e.g., E5-1xxx, E7-2xxx, and so on) tells you how many sockets can be used with that model. The E5-1xxx series are single-proc only. The E5-2xxx are dual (they can still be used single if you want, though), E5-4xxx are quad, E7-8xxx are oct, etc.
 
2 members found this post helpful.
Old 05-08-2014, 04:40 PM   #30
Atterus
LQ Newbie
 
Registered: Apr 2014
Distribution: Redhat 6.9
Posts: 13

Original Poster
Rep: Reputation: Disabled
Thanks everyone, I think this is enough for us to make a decision on the processor, which is really the major area of contention. Ironically, we decided on the dual E5-2680 v2 since it was faster than the 2650 v2 and had more cores anyway, a happy medium of sorts? My boss ended up wanting as many cores as possible since he said he planned on running a ton of other things through it too, not related to "the program". The other parts are pretty straightforward as far as I can tell: got a motherboard with all the right capabilities, a good 600 W PSU, RAM is cheap, and everything else is pretty basic in terms of what we need/want. We DID end up with a basic enterprise-model hard drive since the boss wasn't comfortable with an SSD, even though I did try to explain it was more stable and all... Give the man what he wants, I guess. Anyways, I certainly learned quite a bit, and unless someone comes yelling that those processors would explode in my face, I think we're set! Thanks again, this community never fails to impress!
 
  

