PCI Express 3.0 with consumer motherboard
Hello:
My interest is to upgrade from a GA-X79-UD3 motherboard, used for number crunching (no X server) with two GTX 680s and an Intel i7-3930K in an Antec Twelve Hundred case, driven by Debian amd64. That is, my interest is in a motherboard with real PCIe 3.0 FOR AT LEAST TWO GPUs, i.e., at least two PCIe 3.0 slots. I ask in the hope that the motherboard has actually been verified, as manufacturers have blurred the PCIe picture considerably, marketing as 3.0 what was really 2.0. As my case is limited to standard ATX, I am prepared to change to a full tower for more GPUs and a larger motherboard. I was happy with the large top fan of the Antec, which kept both the GPUs and the CPU at very mild temperatures, notwithstanding very heavy loading during at least 24 h of continuous use. Thanks a lot for advice. chiendarret |
The whole PCIe 2.0 vs 3.0 with Intel situation is very complex.
However, the Gigabyte X79-UD3 does support PCIe 3.0, just in some situations. Gigabyte X79-UD3 product page- Quote:
i7-3930K product page- Quote:
The new 'Ivy Bridge-E' LGA 2011 CPUs support PCIe 3.0. The i7-4820K is about $350 US, but it is only a quad core; to get the same number of cores as the i7-3930K means getting an i7-4930K or i7-4960X ($600/$1000+). |
For my type of computations, at least two CPU cores are needed for each GPU card, and as a safeguard, more than that is normally provided. That is, as far as I know, a quad core might not be enough.
As I could keep all the hardware except the CPU, I would be prepared to pay for the six-core i7-4930K. There is no point paying for an i7-4960X, because most of the acceleration for my computations is provided by the GPUs. Has PCIe 3.0 on the GA-X79-UD3 with the i7-4930K actually been verified by users? In my experience Gigabyte is very reliable, however.... Incidentally, do you think the case I mentioned, with an Arctic Cooling cooler, will provide enough heat dissipation for an i7-4930K/i7-4960X? I have no refrigerated chamber. Also, is a Corsair 850 W PSU enough for the new CPU? Thanks chiendarret |
Sorry, I see now that the motherboard also has to be changed. But that is not expensive.
chiendarret |
Ummm....why do you think the motherboard has to be changed?
Quote:
Quote:
Quote:
850watts should be more than enough. ;) |
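As a rough sanity check on that 850 W figure, a back-of-the-envelope DC budget using nominal TDPs (assumed values; actual draw varies and the `rest` allowance is a guess):

```shell
# Rough DC power budget; TDPs are nominal figures, not measured draw
total=$(awk 'BEGIN {
  gpus = 2 * 195   # two GTX 680s, 195 W TDP each
  cpu  = 130       # i7-4930K TDP
  rest = 100       # rough allowance for board, RAM, disks, fans
  print gpus + cpu + rest
}')
echo "estimated peak load: $total W of 850 W"
```

That leaves well over 200 W of headroom even at full load, which matches the "more than enough" verdict.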
Yes, this PCIe 3.0 business is a very tricky matter. What I have now found:
PCI Express Revision: 3.0; PCI Express Configurations ‡: 1x16, 2x8, 1x8+2x4; Max # of PCI Express Lanes: 16. That means I could have only one GPU card fully working at PCIe 3.0 x16. If true, it could hardly beat the configuration I have now with 2x16 PCIe 2.0. What about ASUS? They claim to be in pole position in this regard, but I have never used ASUS hardware. thanks chiendarret |
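For what it is worth, the per-lane arithmetic (line rate times encoding efficiency) suggests PCIe 3.0 x8 and PCIe 2.0 x16 are nearly equivalent in raw bandwidth, which supports the "could hardly beat 2x16 PCIe 2.0" point. A quick check using the spec line rates (5 GT/s with 8b/10b encoding for 2.0, 8 GT/s with 128b/130b for 3.0):

```shell
# Effective bandwidth per lane = line rate * encoding efficiency / 8 bits per byte
gen2=$(awk 'BEGIN { printf "%d", 5.0e9 * 8/10    / 8 / 1e6 }')  # MB/s per PCIe 2.0 lane
gen3=$(awk 'BEGIN { printf "%d", 8.0e9 * 128/130 / 8 / 1e6 }')  # MB/s per PCIe 3.0 lane
echo "PCIe 2.0 x16: $((gen2 * 16)) MB/s"
echo "PCIe 3.0 x8:  $((gen3 * 8)) MB/s"
```

So a 16-lane Gen3 CPU split as 1x8+1x8 gives each GPU roughly what a Gen2 x16 slot already provides.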
Sorry, I forgot to say that the above concerns the i7-4930K.
http://ark.intel.com/products/75133 chiendarret |
Very sorry, I have misinterpreted. Both the GA-X79-UD3 and the latest GA-X79-UD5 support PCIe 3.0, provided that a suitable CPU is installed. The one I have, the i7-3930K, gives PCIe 2.0 support. The new i7-4930K gives PCIe 3.0 support. Both processors have the same socket, LGA 2011, so I could keep the GA-X79-UD3 for the new CPU.
The only remaining obscure point is whether the i7-4930K with the GA-X79-UD3 provides PCIe 3.0 at 2x16, or at 1x16 and 1x8. In the latter case, I fear there would be an imbalance for the code I use, possibly hindering its use. As far as I understand, this depends on the CPU, not on whether the GA-X79-UD3 or GA-X79-UD5 is used. I hope not to have misunderstood again. thanks chiendarret |
Quote:
The i7-4930K specs are here- http://ark.intel.com/products/77780 As far as I know, the lane capabilities of the PCIe slots won't change from PCIe 2.0 (so if you are running 2 x PCIe 2.0 x16, changing to PCIe 3.0 should give you the same x16 lanes). |
I am going to order the replacement i7-4930K and change from 1066 MHz to 1600 MHz DIMM RAM. I am sticking with the GA-X79-UD3 because, for my purposes, I don't see any major reason to change to the GA-X79-UD5 (the type of calculations on this computer, statistical mechanics, requires a modest amount of RAM; 16 GB is more than enough).
Moreover, the UD5 is extended ATX, which would require a new case. All this in the perspective that these are fast-moving things: soon from LGA 2011 to LGA 2011-3, and maybe to an eight-core Haswell-E. As far as I could investigate on the NVIDIA forum, it seems that NVIDIA has restored software access to PCIe 3.0 in the latest Linux drivers. I use wheezy amd64 for reasons of stability; maybe I have to change to testing to get such NVIDIA drivers. Something still to investigate. I thank you for all your most valuable advice. It will be useful to other people too. chiendarret |
Quote:
Quote:
Part of the whole 'PCIe 3.0 with Intel situation is very complex' issue. You should be able to backport the NVIDIA drivers from testing/unstable to stable. ;) |
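A sketch of that backport route, assuming the NVIDIA packages are actually available in wheezy-backports (package names are assumptions; verify them with `apt-cache search nvidia` before installing):

```shell
# Add the wheezy-backports repo and pull the newer driver from it
# (sketch only; exact package names not confirmed for this release)
echo 'deb http://http.debian.net/debian wheezy-backports main contrib non-free' | \
    sudo tee /etc/apt/sources.list.d/wheezy-backports.list
sudo apt-get update
sudo apt-get -t wheezy-backports install nvidia-driver nvidia-kernel-dkms
```

Backports are not pulled in automatically, so the `-t wheezy-backports` target must be given explicitly.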
PCI Express 3.0 with consumer motherboard
Hello:
Today, with the Gigabyte X79-UD3, I replaced the i7-3930K with the i7-4930K, and the 1066 MHz RAM with 1866 MHz Corsair Vengeance. The BIOS was updated for the new 4930K. I had not carried out a GTX 680/RAM memory-bandwidth test under CUDA 4.2, the version at which the statistical-mechanics software (let me call it "code") that I use is compiled; 4.2 is also the version in Debian stable wheezy. However, I have comparative benchmarks from running the "code".

With a rather light job, there is no gain with the new configuration. Also, with the RAM set in the BIOS either at its correct 1866 MHz clock or at 1066 MHz, the job is executed by "code" at the same speed. With a very heavy job, there is a very small rate increase with the new configuration (0.12 s/step, which makes 1.4 days/ns) with respect to the previous one (0.14 s/step, which makes 1.6 days/ns). As far as I can understand, all this points to the same bottleneck between the GTX 680s and RAM.

The two GTX 680 cards in my system are working correctly, both on 16 lanes. This is indicated by the "code" log, showing that equal portions of the system to compute (a large protein) are distributed between the two GPUs (it is parallel computing). Moreover, nvidia-smi reports that both GPUs engage the same amount of memory. Take into account that the largest part of the job is carried out by the GPUs; however, AT EACH STEP, part of the job has to be carried out by the CPU cores, and two cores per GPU are needed. From what we know (largely kindly provided by CASCADE9), all of the above was expected.

What I can try now is either (a) installing the most recent NVIDIA drivers, if it is true that NVIDIA has restored PCIe 3.0 access with them (this involves some management, as I install the NVIDIA drivers the "Debian way"), or (b) installing an older NVIDIA driver, from before NVIDIA stopped providing PCIe 3.0 access. (a) or (b)? Thanks chiendarret |
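As a side note, the s/step to days/ns conversion above checks out if the MD timestep is 1 fs, i.e. 10^6 steps per simulated ns (an assumption; the post does not state the timestep):

```shell
# days per simulated ns = seconds/step * steps/ns / seconds/day
days_per_ns() { awk -v s="$1" 'BEGIN { printf "%.1f", s * 1.0e6 / 86400 }'; }
echo "old config: $(days_per_ns 0.14) days/ns"
echo "new config: $(days_per_ns 0.12) days/ns"
```

Both results match the figures quoted in the post, so the two benchmarks were converted consistently.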
PCIe 3.0 with consumer motherboards
CUDA driver 319.60 did not help. Same speed.
To test GPU-RAM bandwidth with the nvidia-cuda-toolkit, NVIDIA only offers SDK packages for Ubuntu, not Debian. I don't like trouble with Ubuntu, which, unlike Linux Mint, is not Debian compatible. What about "CUDA-Z-07.189.run"? It should offer a bandwidth test, but on my machines it is unable to find libXrender.so.1, although this lib is available. So, this PCIe 3.0 for scientific use remains a mystery. Perhaps there is some setting on the Gigabyte X79-UD3 for PCIe 3.0 which I was unable to detect? I set it to auto, which correctly sees the clock of the CPU and RAM. thanks chiendarret |
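One guess about the libXrender.so.1 failure: if the CUDA-Z binary is 32-bit, the 64-bit libXrender on an amd64 system will not satisfy it, and the i386 copy of the library is needed. This is only an assumption; checking the binary first would confirm it:

```shell
# Check whether the extracted binary is 32-bit ELF (path is hypothetical)
file ./cuda-z
# If it reports "ELF 32-bit", enable multiarch and install the i386 lib
sudo dpkg --add-architecture i386
sudo apt-get update
sudo apt-get install libxrender1:i386
```

If `file` reports a 64-bit binary instead, the problem lies elsewhere (e.g. library search path) and this fix does not apply.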
Quote:
You'd probably do best to check the link speed rather than guessing. I'd be more likely to try the newer driver than the older one. Even with the newer/older driver you might still need to hack around, and could cause instability- Quote:
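Checking the negotiated link directly is straightforward with lspci. The snippet below parses a captured LnkSta line (sample shown); on the live machine, pipe in the output of `sudo lspci -vv -s <GPU address>` instead:

```shell
# On a live system: sudo lspci -vv -s $(lspci -d 10de: | awk 'NR==1{print $1}')
# Here we parse a sample captured LnkSta line (8GT/s = Gen3, 5GT/s = Gen2)
lnksta='LnkSta: Speed 8GT/s, Width x16, TrErr- Train- SlotClk+ DLActive-'
speed=$(echo "$lnksta" | sed -n 's/.*Speed \([0-9.]*GT\/s\).*/\1/p')
width=$(echo "$lnksta" | sed -n 's/.*Width \(x[0-9]*\).*/\1/p')
echo "negotiated link: $speed $width"
```

Note that the link may train down to Gen1/Gen2 speeds at idle on some setups, so the check is most meaningful while the GPU is under load.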
|
PCIe 3.0 with consumer motherboard
I am late in answering. Had classes.
Well, I solved getting PCIe 3.0 (8 GT/s) a while ago by requesting it directly from the kernel, through a permanent GRUB boot setting. However, as far as number crunching is concerned, only a very heavy job shows some (marginal) gain. As to testing, of course I checked both LnkSta (the well-known Linux tools tell everything about the GPUs; no need of NVIDIA tools) and the MD performance, and both agree. I used the 319.60 driver, but passing trials with 304.xx gave the same result. In conclusion, as far as number crunching is concerned (where we do not call the X server, and activate the GPUs with nvidia-smi -L and nvidia-smi -pm 1), the change from Sandy Bridge-E to Ivy Bridge-E is not worth the money. chiendarret |
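For anyone finding this thread later: the "requesting it directly from the kernel" step likely refers to the NVIDIA module option NVreg_EnablePCIeGen3 passed on the kernel command line (an assumption; the post does not name the exact parameter). In /etc/default/grub that would look like:

```shell
# /etc/default/grub (sketch; option name assumed, not confirmed by the post)
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvidia.NVreg_EnablePCIeGen3=1"
# Then regenerate the GRUB config and reboot:
#   sudo update-grub
```

The same option can alternatively be set in a modprobe.d file instead of the kernel command line.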