Debian — This forum is for the discussion of Debian Linux.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I am trying to increase my parallel computational abilities for some dynamical-systems mathematical research. I am deciding whether to purchase one of Nvidia's GPU cards for my Intel Core 2 Quad, or to enlarge my 5-node cluster of mixed Debian Linux boxes with some headless nodes.
My main concern is how the GPU will appear to the system. Will the system be able to fork processes natively to the GPU? Has anyone successfully run an Nvidia GPU as a computational aid under Debian?
I have used Debian for years and am reluctant to switch to any other distro, especially Red Hat, which seems to be the one Nvidia supports, blah. But with my research taking the path it is, my computing system needs the maximum number of simultaneous processors available, and the GPU represents a good alternative if it can work under Debian.
Currently, Nvidia GPUs are not directly supported by any operating system (in the sense of using them as general-purpose processing units), and it would be difficult to do so.
One uses the CUDA development system to compile code that targets the graphics processor array. It is a cheap way of getting lots of processing power (for certain classes of applications), but the applications have to be explicitly developed to use it.
Last edited by neonsignal; 01-12-2010 at 08:08 AM.
Thanks for the reply. I was aware of the CUDA C extensions, and this is acceptable to me, as my programs are simple and easily portable at the moment... that is why it is important for me to get my hardware situated before I commit to a language and start developing more advanced programs. I see CUDA has libraries for both FFT and linear algebra, which is awesome. My main concern is: if I go buy a CUDA-capable GeForce card and slap it in my Debian quad-core box, how involved is the setup? I have seen only one instance of an attempt at this on Debian in my googling, and it seemed non-trivial.
I guess what I am asking is: are there any HPC Debian users successfully using a GPU for their calculations? How is it working out, and would you do it again? Does the performance exceed what the same money put towards mobo+CPU nodes would buy for extremely parallel computations? An example of one of my problems is calculating the norm between all possible pairs of points in a LONG list. Simple enough, but it takes FOREVER.
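The all-pairs norm problem described above can be prototyped on the CPU before porting it to CUDA. Here is a hypothetical NumPy sketch (the function name and sample points are illustrative, not from the thread); broadcasting computes every pairwise difference at once, which maps naturally onto the one-thread-per-pair style of a GPU kernel:

```python
import numpy as np

def pairwise_norms(points):
    """Return an (n, n) matrix of Euclidean distances between all pairs."""
    points = np.asarray(points, dtype=float)
    # Broadcast to shape (n, n, d): diff[i, j] = points[i] - points[j]
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

pts = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
d = pairwise_norms(pts)
# d[0, 1] is 5.0 and d[0, 2] is 10.0 (3-4-5 triangles)
```

Each entry of the output is independent of the others, which is exactly the kind of data-parallel structure that CUDA handles well.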
Also, if I get an nvidia geforce card with cuda extensions, can I use it for graphics as well, or will it need to be dedicated for calc?
sorry for all the questions, but I have limited funding, and don't know anyone in a HPC field.
The use of general-purpose GPUs is still fairly new, so it is going to involve significantly more work fitting your application to them (making the code non-portable). The benefit will depend very much on the application; the shader pipelines mean that you can have significant parallelism, but only where the data sets can be kept small and processed independently. For example, the Stanford 'Folding at Home' project is achieving around 70 times the performance from a typical graphics card compared with a typical CPU. But they have already been through two iterations of software to achieve that performance.
If your constraint is money rather than time, you could consider getting an older GeForce 8 series card to evaluate the use of CUDA before committing to one of the expensive cards.
Yes, I agree. I am currently looking at a few sub-$100 cards that *could* represent a significant improvement in parallelism (if they work) to test the water. Maybe after my university requirements are over I will have the time to utilize one of Nvidia's Tesla devices.
I am concerned that the current CUDA systems do not seem to be able to run disjoint kernels. This represents a worrying coding consideration for me. My current setup typically forks sub-kernels strapped with my "tools" to available CPUs on my network, sends an object, and combines the resulting objects. This has been the simplest way to utilize all available processing power for homogeneous problem sets. However, if my objects are of varying complexity, the card would often spend much of its time waiting, and I would have to construct a loop to batch jobs of similar estimated time together: wait, batch, wait, batch. Hmmmm.
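The batch-wait loop described above could be sketched roughly as follows. This is a hypothetical helper, not an existing API: `batch_by_cost`, the `estimate` callback, and the `tolerance` factor are all illustrative assumptions. Jobs are sorted by estimated cost and grouped so that no batch mixes fast and slow work, keeping the device from idling on one outlier:

```python
def batch_by_cost(jobs, estimate, tolerance=2.0):
    """Group jobs so that, within a batch, the largest cost estimate
    is at most `tolerance` times the smallest."""
    ordered = sorted(jobs, key=estimate)
    batches, current = [], []
    for job in ordered:
        # Start a new batch when this job is too expensive relative
        # to the cheapest job already in the current batch.
        if current and estimate(job) > tolerance * estimate(current[0]):
            batches.append(current)
            current = []
        current.append(job)
    if current:
        batches.append(current)
    return batches

groups = batch_by_cost([1, 10, 2, 12, 100], estimate=lambda j: j)
# groups == [[1, 2], [10, 12], [100]]
```

Each batch would then be dispatched as one kernel launch; the dispatcher waits for a batch to finish before launching the next.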
Secondary projects of mine involve mapping the same function over many initial conditions of the same complexity, without any recombination, and these seem easily portable to a GPU environment.
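That kind of embarrassingly parallel map can be prototyped with the standard library before moving to the GPU. A minimal sketch, using a logistic-map iteration as a stand-in for the real workload (the `evolve` function and its parameters are illustrative assumptions, not from the thread); on a GPU the same map would become a single kernel launch over the array of initial conditions:

```python
from multiprocessing import Pool

def evolve(x0, steps=1000, r=3.7):
    """Iterate the logistic map x -> r*x*(1-x) from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

if __name__ == "__main__":
    initial_conditions = [0.1, 0.2, 0.3, 0.4]
    # One worker per CPU by default; same function, many inputs,
    # no recombination between results.
    with Pool() as pool:
        results = pool.map(evolve, initial_conditions)
    print(len(results))  # one result per initial condition
```

Because there is no communication between tasks, the only tuning needed when porting is how many initial conditions to hand the device at once.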
Thanks very much for the wonderful responses, neon. May I ask what employment is like in the HPC community? I am a mathematician trying to decide what fields to enter...
I have seen the projects like Seti and Folding@home. I don't think that is the direction I am heading.
Well, the install is not going that well. I am getting the dreaded X freeze in combination with the Nvidia proprietary drivers on a previously rock-solid Debian Lenny box, Intel Q6600 and Foxconn G33M03 mobo. It freezes any time I try to play a video. OpenGL works fast as lightning, though! Still working on it... apparently this is common, so I just have to research/woodshed.
I will report back any solutions, to help document. I hear this is a problem with other distros too, so hopefully I can fix it on debian!
Some notes for others who are interested in this sort of thing on Debian.
1. It seems that the regular NVIDIA graphics driver for my hardware (GeForce GT220) supports CUDA and is a newer version than the driver on the CUDA page. The install went well with no errors, the system detects the presence of the CUDA device, and I can currently run some of the demo code.
2. **PROBLEM** Random freezes, consistently triggered by playing video. During these freezes the mouse pointer remains active, but the computer does not recognize any keyboard or mouse button input; even ctrl-alt-_______ sequences do not work. The computer must be hard powered off and rebooted.
I tried every software fix possible under Debian, and later under Windows: a newer BIOS, updating all hardware drivers, rolling back drivers, etc. So I dual-booted into Vista on the same PC, and guess what? After I installed the Nvidia drivers, the screen went blank and my fans quieted; frozen again, now under Windows. So I opened up the case, cleaned it up a bit, and reseated everything. Same problem. Searching, I found that someone at Nvidia gave a hint to a disgruntled customer: remove memory down to one stick, install the card, then replace the memory. Well, this worked with one stick of RAM, and did not work once I replaced all the RAM... so the best solution I have found so far:
**SOLUTION** I had 4 sticks of memory occupying four slots: 2x 1 GB and 2x 512 MB. I removed one 512 MB stick. No problems since. I do not understand what caused this and am disappointed about the situation, but for those experiencing lockups with Nvidia hardware, try fiddling with your memory. My memory is stock and not overclocked, so I would expect the Nvidia card to work out of the box... this also fixed the problem under Windows, so it is definitely a hardware issue and not a reflection on Debian.
As far as I can tell, even though Debian doesn't officially support the CUDA products, I had no software issues, and I can recommend this setup to others. I was very impressed by the computational prowess shown in some of the example codes, particularly the ocean surface simulation.