Quote:
Originally Posted by sohel2009
2/4Gb of DDR3
By "2/4" you mean "2 or 4"?
Probably your compile tasks are much less demanding than ours. We (where I work) do a lot of compiles on several Linux systems and those with less than 2GB of ram per core are frequently limited by that, sometimes to the point that it isn't even effective to try to use all the cores.
Every day the code we build gets even bigger as multiple programmers add features.
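If you do run parallel builds on a RAM-limited box, one option is to cap the job count by memory rather than by core count. A minimal Python sketch, assuming the ~2GB-per-job rule of thumb above (that figure is our experience, not a universal number):

Code:
import os

def max_parallel_jobs(bytes_per_job=2 * 1024**3):
    """Pick a make -j value: the core count or the number of
    ~2GB compile jobs that fit in free memory, whichever is less."""
    cores = os.cpu_count() or 1
    # /proc/meminfo reports sizes in kB on Linux.
    with open("/proc/meminfo") as f:
        meminfo = dict(line.split(":", 1) for line in f)
    avail_kb = int(meminfo["MemAvailable"].strip().split()[0])
    mem_jobs = max(1, (avail_kb * 1024) // bytes_per_job)
    return min(cores, mem_jobs)

if __name__ == "__main__":
    print(max_parallel_jobs())

Then start the build with make -j$(python3 jobcount.py) or similar, where jobcount.py is whatever you name the script.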
Quote:
The system will be diskless
Why? Are you really trimming costs that aggressively? Disk drives are very cheap.
A few GB of swap space is very helpful on a compile server in case the most demanding compiles all happen at once. That's a lot better than exhausting RAM and failing.
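If you do add swap later, a build wrapper can at least refuse to start another job when memory plus swap are nearly exhausted. A rough Python sketch; the 1GB floor is an arbitrary illustration, not a tuned number:

Code:
def memory_headroom_kb():
    """Free memory plus free swap, in kB, read from /proc/meminfo."""
    with open("/proc/meminfo") as f:
        info = dict(line.split(":", 1) for line in f)
    avail = int(info["MemAvailable"].strip().split()[0])
    swap = int(info["SwapFree"].strip().split()[0])
    return avail + swap

# Refuse to launch another compile below ~1GB of headroom
# (hypothetical threshold) rather than let a big link step OOM.
if memory_headroom_kb() < 1024 * 1024:
    raise SystemExit("not enough memory headroom for another build job")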
Quote:
and to send the results via FTP.
Odd choice. We store the results via SMB or NFS. So how do you get the source code (which we also fetch via SMB or NFS)? Is that also by FTP? FTP seems even less convenient for getting the source code than for storing the binaries. Where do you store the source code during the compile?
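For what it's worth, if you stay with FTP, pushing the artifacts is easy to script with Python's standard ftplib. The host, credentials, and paths below are placeholders:

Code:
from ftplib import FTP
from pathlib import Path

# Hypothetical server, account, and directories; substitute your own.
with FTP("buildhost.example.com") as ftp:
    ftp.login("builder", "secret")
    ftp.cwd("/results")
    for artifact in Path("build/out").glob("*.tar.gz"):
        with artifact.open("rb") as fp:
            # STOR stores the file under the same name on the server.
            ftp.storbinary(f"STOR {artifact.name}", fp)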
Quote:
By the way, I have to build four of these systems in order to create a 16-core cluster
BTW, how will you distribute the work among those systems? I haven't dealt with that yet. We use the super crude approach of manually selecting one when starting a build job.
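One step up from manual selection would be a dispatcher that asks each box for its load average over ssh and picks the least loaded one. A crude Python sketch; the host names are placeholders and it assumes passwordless ssh keys:

Code:
import subprocess

HOSTS = ["build1", "build2", "build3", "build4"]  # placeholder names

def load_of(host):
    """1-minute load average of a host, fetched over ssh."""
    out = subprocess.run(
        ["ssh", host, "cat", "/proc/loadavg"],
        capture_output=True, text=True, timeout=10, check=True,
    ).stdout
    return float(out.split()[0])

def pick_build_host():
    return min(HOSTS, key=load_of)

print(pick_build_host())

That's still whole-job scheduling; it just automates the manual pick.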