LinuxQuestions.org


chiendarret 10-21-2013 03:45 AM

PCIExpress 3.0 with consumer motherboard
 
Hello:
My interest is to upgrade from a GA-X79-UD3 motherboard, used for number crunching (no X server) with two GTX 680s and an Intel i7-3930K in an Antec Twelve Hundred case, driven by Debian amd64.

That is, my interest is in a motherboard with real PCIe 3.0 for AT LEAST TWO GPUs, i.e., at least two PCIe 3.0 slots.

I am asking in the hope that the motherboard has been verified by users, as manufacturers have been quite vague about PCIe, marketing as 3.0 what was really 2.0.

As my case is limited to standard ATX, I am prepared to change to a full tower for more GPUs and a larger motherboard. I was happy with the large top fan of the Antec, which kept both the GPUs and the CPU at very mild temperatures notwithstanding very heavy loading during at least 24 hours of continuous use.

Thanks a lot for advice

chiendarret

cascade9 10-21-2013 04:20 AM

The whole PCIe 2.0 vs. 3.0 with Intel situation is very complex.

However, the Gigabyte X79-UD3 does support PCIe 3.0. Just in some situations.

Gigabyte X79-UD3 product page-

Quote:

* PCIe Gen. 3 is dependent on CPU and expansion card compatibility.
http://www.gigabyte.com/products/pro...px?pid=4050#ov

i7-3930K product page-

Quote:

PCI Express Revision 2.0
http://ark.intel.com/products/63697

The new 'Ivy Bridge-E' LGA 2011 CPUs support PCIe 3.0. The i7-4820K is about $350 US, and only a quad core; to get something with the same number of cores as the i7-3930K means getting an i7-4930K or i7-4960X ($600/$1000+).

chiendarret 10-21-2013 05:29 AM

For my type of computations, at least two CPU cores are needed for each GPU card, and as a safeguard, more than that should be provided. That is, as far as I know, a quad core might not be enough.

As I could keep all the hardware except the CPU, I would be prepared to pay for the six-core i7-4930K. There is no point in paying for an i7-4960X, because most of the acceleration for my computations is provided by the GPUs. Has the reality of PCIe 3.0 on the GA-X79-UD3 with an i7-4930K been proven by users? In my experience Gigabyte is very reliable, however....

Incidentally, do you think that the case I mentioned, with an Artic Coolink, will provide enough heat dissipation for an i7-4930K/i7-4960X? I have no climate-controlled room. Also, is a Corsair 850W PSU enough for the new CPU?

Thanks
chiendarret

chiendarret 10-21-2013 05:38 AM

Sorry, I see now that the motherboard also has to be changed. But that is not expensive.
chiendarret

cascade9 10-21-2013 06:03 AM

Ummm... why do you think the motherboard has to be changed?

Quote:

Originally Posted by chiendarret (Post 5049556)
As I could keep all the hardware except the CPU, I would be prepared to pay for the six-core i7-4930K. There is no point in paying for an i7-4960X, because most of the acceleration for my computations is provided by the GPUs.

i7-4960X is for very insane users, very rich users, or some balance between insane and rich.

Quote:

Originally Posted by chiendarret (Post 5049556)
Has the reality of PCIe 3.0 on the GA-X79-UD3 with an i7-4930K been proven by users? In my experience Gigabyte is very reliable, however....

I haven't checked to see if I can find a user who says that it works... but most people who are even looking at PCIe 2.0 vs. 3.0 are gamers on Windows.

Quote:

Originally Posted by chiendarret (Post 5049556)
Incidentally, do you think that the case I mentioned, with an Artic Coolink, will provide enough heat dissipation for an i7-4930K/i7-4960X? I have no climate-controlled room. Also, is a Corsair 850W PSU enough for the new CPU?

Artic Coolink? A typo for some 'Arctic Cooling' branded heatsink/fan, I guess. The i7-4930K/i7-4960X are rated at the same TDP as the i7-3930K, so just changing the CPU won't make any real difference to case temps, etc. (and the CPU TDP is 130 watts, which isn't much compared to 2 x 195 watt TDP GTX 680s).

850 watts should be more than enough. ;)
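
Rough numbers, using the nameplate TDPs already mentioned (an upper-bound sketch; real draw is usually lower): 130 watts for the CPU + 2 x 195 watts for the GTX 680s = 520 watts, and even allowing another ~100-150 watts for the board, RAM, drives and fans, you stay well under 850 watts at full load.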

chiendarret 10-21-2013 08:41 AM

Yes, this PCIe 3.0 business is a very tricky matter. What I have now found:

PCI Express Revision: 3.0
PCI Express Configurations: 1x16, 2x8, 1x8+2x4
Max # of PCI Express Lanes: 16

means that I could have only one GPU card fully working at PCIe 3.0. If true, it could hardly beat the configuration I have now with 2x16 PCIe 2.0.

What about ASUS? They claim to be in pole position in this regard; however, I have never used ASUS hardware.

thanks
chiendarret

chiendarret 10-21-2013 08:42 AM

Sorry, I forgot to say that the above concerns the i7-4930K:

http://ark.intel.com/products/75133

chiendarret

chiendarret 10-21-2013 10:24 AM

Very sorry, I have misinterpreted. Both the GA-X79-UD3 and the latest GA-X79-UD5 support PCIe 3.0, provided that a suitable CPU is installed. The one I have, the i7-3930K, gives PCIe 2.0 support. The new i7-4930K gives PCIe 3.0 support. Both processors have the same socket, LGA 2011, so I could keep the GA-X79-UD3 for the new CPU.

The only remaining obscure point is whether the i7-4930K with the GA-X79-UD3 provides PCIe 3.0 support at 2x16, or at 1x16 and 1x8. In the latter case, I fear an imbalance for the code I use, possibly hindering its use. As far as I understand, this is determined by the CPU, not by whether the GA-X79-UD3 or GA-X79-UD5 is used.

I hope I have not misunderstood again.

thanks
chiendarret

cascade9 10-22-2013 01:20 AM

Quote:

Originally Posted by chiendarret (Post 5049632)
Sorry, I forgot to say that the above concerns the i7-4930K:

http://ark.intel.com/products/75133

Wrong CPU. You've linked to the i7-4930MX, which is a mobile 4-core CPU, not a desktop LGA 2011 6-core.

The i7-4930K specs are here-

http://ark.intel.com/products/77780

As far as I know, the lane configuration of the PCIe slots won't change from PCIe 2.0 (so if you are running 2 x PCIe 2.0 x16, changing to PCIe 3.0 should give you the same x16 lanes).
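
If you want to see what the cards actually negotiate rather than trusting the spec sheets, the stock Debian tools will show it (a sketch; the 01:00.0 bus address is just an example, take the real ones from the first command):

Code:

# find the bus addresses of the nVidia cards
lspci | grep -i nvidia

# then compare capability vs. negotiated status (run as root for full -vv detail)
lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'

# LnkCap = what slot+card can do, LnkSta = what was actually negotiated;
# 2.5GT/s = PCIe 1.x, 5GT/s = 2.0, 8GT/s = 3.0, and 'Width xN' is the lane count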

chiendarret 10-22-2013 02:57 AM

I am going to order the replacement i7-4930K and a change from 1066MHz to 1600MHz DIMM RAM. I am sticking with the GA-X79-UD3 because I don't see - for my purposes - any major reason to change to the GA-X79-UD5 (the type of calculations for this computer, statistical mechanics, requires only a modest amount of RAM; 16GB is more than enough).

Moreover, the UD5 is Extended ATX, which would require a new case. All this in the perspective that these are fast-moving things: soon from LGA 2011 to LGA 2011-3, and maybe to an 8-core Haswell-E.

As far as I could investigate on the nvidia forum, it seems that nvidia has provided software access to PCIe 3.0 in the latest Linux drivers. I use wheezy amd64 for reasons of stability. Maybe I have to change to testing to get such nvidia drivers. Something still to investigate.

I thank you for all your most valuable advice. It will be useful to others too.

chiendarret

cascade9 10-28-2013 06:10 AM

Quote:

Originally Posted by chiendarret (Post 5050214)
As far as I could investigate on the nvidia forum, it seems that nvidia has provided software access to PCIe 3.0 in the latest Linux drivers. I use wheezy amd64 for reasons of stability. Maybe I have to change to testing to get such nvidia drivers. Something still to investigate.

PCIe 3.0 worked with the 295.33 drivers under Linux, but then nVidia disabled it from later 295.XX releases up to (at least) 310.XX. More info here-

Quote:

PCIe 3.0 was disabled because of reliability issues with the current generation of PCIe controllers, such as the one in the X79 chipset platform.
https://devtalk.nvidia.com/default/t...e-3-0-support/

Part of the whole 'PCIe 3.0 with Intel situation is very complex' issue.

You should be able to backport the nVidia drivers from testing/unstable to stable. ;)
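
A minimal sketch of that route, assuming wheezy-backports carries a new enough driver (package names from memory - check with apt-cache search nvidia):

Code:

# /etc/apt/sources.list - add the backports repository
deb http://http.debian.net/debian wheezy-backports main contrib non-free

# then pull the newer driver from backports
apt-get update
apt-get -t wheezy-backports install nvidia-kernel-dkms nvidia-glx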

chiendarret 11-09-2013 01:25 PM

PCIExpress 3.0 with consumer motherboard
 
Hello:
Today, on the Gigabyte X79-UD3, I have replaced the i7-3930K with the i7-4930K and the 1066MHz RAM with 1866MHz Corsair Vengeance. The BIOS was updated for the new 4930K.

I have not carried out a GTX 680/RAM memory bandwidth test under CUDA 4.2, against which the statistical mechanics software (let me call it "code") that I use is compiled; 4.2 is also the version in Debian stable wheezy.

However, I have comparative benchmarks from running the "code". With a rather light job, there is no gain with the new configuration. Also, whether the RAM is set in the BIOS at its correct 1866MHz clock or at 1066MHz, the job is executed by the "code" at the same speed.

With a very heavy job, there is a very small rate increase with the new configuration (0.12 s/step, which makes 1.4 days/ns) with respect to the previous configuration (0.14 s/step, which makes 1.6 days/ns).

All that suggests - as far as I can understand - that the same bottleneck exists between the GTX 680s and RAM.

The two GTX 680 cards in my system are working correctly, both on 16 lanes. This is indicated by the "code" log, which shows that equal portions of the system to compute (a large protein) are distributed between the two GPUs (it is parallel computing). Moreover, nvidia-smi shows that both GPUs engage the same amount of memory. Take into account that the largest part of the job is carried out by the GPUs; however, AT EACH STEP, a part of the job has to be carried out by the CPU. Two CPU cores per GPU are needed.

From what we know (largely kindly provided by cascade9), all of the above was expected. What I can try now is either

(a) installing the most recent nvidia drivers, if it is true that nvidia has restored PCIe 3.0 access in them. This involves some management, as I use the "Debian way" of providing the nvidia drivers,

or

(b) installing an older nvidia driver, from before nvidia stopped providing PCIe 3.0 access.

(a) or (b) ?

Thanks
chiendarret

chiendarret 11-13-2013 04:02 AM

PCIE 3.0 with consumer motherboards
 
CUDA driver 319.60 did not help. Same speed.

To test GPU-RAM bandwidth with the nvidia-cuda-toolkit, nvidia only offers SDK packages for Ubuntu, not Debian. I don't want trouble with Ubuntu, which, unlike Linux Mint, is not Debian compatible.
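
(That said, the SDK .run installer itself is not tied to any distro, so it should unpack on Debian too. A sketch of what I may try - filename and paths from memory, to be checked:)

Code:

# the .run SDK installer is not tied to Ubuntu packaging
sh gpucomputingsdk_4.2.9_linux.run

# build and run the bandwidth test from the samples
cd ~/NVIDIA_GPU_Computing_SDK/C/src/bandwidthTest
make
../../bin/linux/release/bandwidthTest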

What about the GPL "CUDA-Z-0.7.189.run"? It should offer a bandwidth test. On my machines it is unable to find libXrender.so.1, although this lib is available.
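
(One thing still to check: whether the CUDA-Z binary is a 32-bit build that wants the i386 copy of the library, which would explain "not found" despite the 64-bit lib being installed. A sketch, assuming multiarch on wheezy:)

Code:

# see which shared libraries the binary fails to resolve
ldd ./CUDA-Z-0.7.189.run | grep 'not found'

# if it is a 32-bit build, pull in the i386 libXrender via multiarch
dpkg --add-architecture i386
apt-get update
apt-get install libxrender1:i386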

So, this PCIe 3.0 for scientific use remains a mystery. Perhaps there is some setting on the Gigabyte X79-UD3 for PCIe 3.0 which I was unable to detect? I set it to auto, which correctly detects the clocks of the CPU and RAM.

thanks
chiendarret

cascade9 11-16-2013 04:29 AM

Quote:

Originally Posted by chiendarret (Post 5061432)
However, I have comparative benchmarks from running the "code". With a rather light job, there is no gain with the new configuration. Also, whether the RAM is set in the BIOS at its correct 1866MHz clock or at 1066MHz, the job is executed by the "code" at the same speed.

With a very heavy job, there is a very small rate increase with the new configuration (0.12 s/step, which makes 1.4 days/ns) with respect to the previous configuration (0.14 s/step, which makes 1.6 days/ns).

The minor change might be due to a different driver version... or it might just be the difference between PCIe 2.0 and 3.0.

You'd probably do best to check the link speed rather than guessing.
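
Besides the lspci check mentioned earlier, recent nVidia drivers report the negotiated link themselves (a sketch; the exact section layout varies a bit between driver versions):

Code:

# per-GPU PCIe generation and lane width, current vs. maximum
nvidia-smi -q | grep -A 6 'GPU Link Info'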

I'd be more likely to try the newer driver than the older one. Even then, you might still need to hack around, and it could cause instability-

Quote:

A future driver release will add a kernel module parameter, NVreg_EnablePCIeGen3=1, that will enable PCIe gen 3 when possible. Please note that this option is experimental and many systems are expected to have stability problems when this option is enabled. Use it at your own risk.

Aaron Plattner
NVIDIA Linux Graphics

#13
Posted 01/03/2013 10:05 PM
https://devtalk.nvidia.com/default/t...e-3-0-support/
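
If you try that parameter, wiring it up permanently on Debian would look roughly like this (a sketch - experimental per the nVidia quote above, use at your own risk):

Code:

# /etc/modprobe.d/nvidia-pcie3.conf
options nvidia NVreg_EnablePCIeGen3=1

# rebuild the initramfs so the option is seen at boot, then reboot
update-initramfs -u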

chiendarret 11-19-2013 12:21 PM

PCIe 3.0 with consumer motherboard
 
I am late in answering. Had classes.
Well, I solved getting PCIe 3.0 (8GT/s) a while ago by asking for it directly from the kernel, through a permanent grub boot option. However, as far as number crunching is concerned, only with a very heavy job is there some (marginal) gain.
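
For reference, module options can be passed on the kernel command line as module.option=value; roughly what my grub setup amounts to (a sketch - the actual GRUB_CMDLINE line will likely carry other options too):

Code:

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvidia.NVreg_EnablePCIeGen3=1"

# regenerate the grub configuration, then reboot
update-grub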

As to testing, of course I checked both LnkSta (the well-known Linux tools tell everything about the GPUs; no need for nvidia tools) and the MD performance. Both are in accordance. I used the 319.60 driver, but in passing, trials with 304.xx gave the same.

In conclusion, as far as number crunching is concerned (where we do not start the X server and instead activate the GPUs with nvidia-smi -L and nvidia-smi -pm 1), the change from Sandy Bridge-E to Ivy Bridge-E is not worth the money.
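
For the record, the headless activation mentioned above:

Code:

nvidia-smi -L      # list the GPUs (also initializes them)
nvidia-smi -pm 1   # enable persistence mode, so the driver stays loaded without X
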
chiendarret

