Since the other thread has been closed ... here goes my attempt at an answer. Note that I have no experience with FDS6, but I have quite a bit of experience building and running beowulf clusters.
- ROCKS is supposedly an easy way to build a Beowulf cluster with pre-rolled scripts. If the style of cluster you want to build and your management style mesh well with the "ROCKS" way of doing things, it will probably work well. If not, you'll find it a real pain. Note that there are other choices (e.g. Perceus and Warewulf). Personally, I just like to build all the nodes from scratch myself (using Kickstart + Puppet with Scientific Linux). It's really not as hard as some people would lead you to believe, and you get the advantage of knowing, and being able to tune, exactly how everything is set up. If you're even a semi-experienced Linux admin, setting up a basic Beowulf cluster without pre-rolled scripts is really not that difficult.
- We don't know your application. You should benchmark it on several commercially available processors and see which gives you the best computation per dollar. You might also ask other users of FDS6, or its manufacturer (if commercial) or developers (if open source), what they recommend. Chances are they will know better than random people on a message board who have never heard of the application :-). Also think about whether you will be running computations in parallel over the network (e.g. using MPI). If so, network performance may matter far more than CPU performance. The questions to ask in that case are whether the code is bandwidth-sensitive, latency-sensitive, or, worst of all, both. If it's latency-sensitive, you may never get good inter-node parallel performance without investing lots of money in specialized low-latency network hardware (special-purpose Ethernet or InfiniBand). Also consider what I/O rates the code requires: if FDS6 reads/writes massive data files, the fastest processors and networks in the world won't help you when the code is constantly stuck doing I/O against slow-as-molasses disks. Building an efficient, cost-effective HPC system is all about finding out what your workload needs, meeting those needs as well as possible, and skimping as much as possible on everything else to save money. Just asking "what's the best processor?" is a bit like asking "what kind of engine do I need in my car?" It depends on what kind of car you want to build, what gas mileage is required, how fast it needs to go, etc.
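To make the "computation per dollar" comparison concrete, here's a minimal sketch of how you might rank candidate CPUs after benchmarking. All the names and numbers (`cpu_a`, runtimes, prices, the 3-year lifetime) are made-up placeholders; substitute your own measured FDS6 runtimes and quotes:

```python
# Hypothetical benchmark results: wall-clock time (seconds) for one fixed
# FDS6 test case on each candidate node, plus the per-node price.
# Every number here is illustrative, not a real measurement.
candidates = {
    "cpu_a": {"runtime_s": 3600, "node_cost_usd": 2500},
    "cpu_b": {"runtime_s": 2700, "node_cost_usd": 4200},
    "cpu_c": {"runtime_s": 3100, "node_cost_usd": 2900},
}

def runs_per_dollar(runtime_s, node_cost_usd, lifetime_hours=3 * 365 * 24):
    """Completed runs over an assumed 3-year node lifetime, per dollar of
    hardware cost. Ignores power/cooling, which you can fold in similarly."""
    runs = lifetime_hours * 3600 / runtime_s
    return runs / node_cost_usd

# Rank best-value-first; the fastest CPU is often NOT the winner.
for name, c in sorted(candidates.items(),
                      key=lambda kv: -runs_per_dollar(**kv[1])):
    print(f"{name}: {runs_per_dollar(**c):.2f} runs/$")
```

Note how with these made-up numbers the most expensive, fastest node (`cpu_b`) ranks last: raw speed and value per dollar are different questions.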
- If you plan to run parallel calculations across nodes, the nodes should be matched in speed: in a tightly coupled parallel calculation, faster processors generally do no good when yoked to slower ones, because everything ends up waiting on the slowest node. There are some exceptions to this rule, but they're few and far between. As a rule of thumb, you want things as homogeneous as possible. If you will not be running parallel computations across multiple nodes, this is less critical, but homogeneity still eases the management and administration burden (only having one type of memory means you can keep a few spares on hand for when [not if] a DIMM fails).
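A toy model makes the "slowest node wins" effect obvious. Assuming the domain is split evenly and every timestep ends with a barrier (typical for MPI domain decomposition), the step time is set by the slowest node:

```python
# Toy bulk-synchronous model: work is split evenly across nodes and each
# timestep waits for the slowest node, so the effective per-node rate is
# min(node_speeds). Speeds are in arbitrary units (e.g. cells/s).
def parallel_rate(node_speeds):
    """Effective aggregate throughput under even splitting + a barrier."""
    return len(node_speeds) * min(node_speeds)

fast_only = parallel_rate([100, 100, 100, 100])      # 4 matched nodes -> 400
mixed     = parallel_rate([100, 100, 100, 100, 50])  # add one slow node -> 250
print(fast_only, mixed)
```

With these illustrative numbers, bolting a half-speed fifth node onto four fast ones *lowers* aggregate throughput from 400 to 250: the fast nodes spend half of each step idle at the barrier.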
- Not necessarily. It depends on which tasks are going to be running on the master node. If it will be running lots of set-up jobs, serving a file system to the cluster via NFS, and running a batch scheduler for the rest of the cluster, then you want it to be fairly beefy. If it's just handling user logins while the rest of the cluster does the "real work", then not so much. In some of the clusters I've built, the master node is about the slowest in the whole operation.
Other things to consider:
- ECC RAM is good if you can afford it. I've seen too many crap numerical results courtesy of passing cosmic rays.
- Think about how much memory each node will need.
- Think about your power and cooling requirements if you plan to put the cluster in a confined space (e.g. a network closet). Overheating can and does cause component failure. Also consider the cost of the electricity to run the thing...
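The electricity bill is easy to estimate up front. A back-of-the-envelope sketch, with assumed (not measured) numbers: 16 nodes drawing ~350 W each under load, $0.12/kWh, 24/7 operation, and no cooling overhead included:

```python
# Rough annual electricity cost for a cluster running 24/7.
# 350 W/node and $0.12/kWh are placeholder assumptions; measure your own
# nodes at the wall and use your local utility rate.
def annual_power_cost(n_nodes, watts_per_node, usd_per_kwh=0.12):
    kwh_per_year = n_nodes * watts_per_node * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

print(f"${annual_power_cost(16, 350):,.0f} per year")  # -> $5,887 with these assumptions
```

Note this omits cooling: if the room needs air conditioning, the real figure can be substantially higher, since the AC has to remove every watt the nodes dissipate.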
- It's best not to cheap out if at all possible; low quality components have higher failure rates.
- If your code will run well on GPUs, consider buying a few of these rather than a big cluster. You might get just as good performance for much less money.