It may be easier for you to just install GCC 7.3 from source. After downloading the source tarball, run the contrib/download_prerequisites script from the source tree to fetch the prerequisites; I then built it using the following commands:
Code:
./configure -v --build=x86_64-linux-gnu --host=x86_64-linux-gnu \
--target=x86_64-linux-gnu --prefix=/opt/gcc-7.3 \
--enable-checking=release --enable-languages=c,c++,fortran \
--disable-multilib --program-suffix=-7.3
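For completeness, here is a minimal sketch of the whole build, assuming the tarball was unpacked into a gcc-7.3.0/ directory (the GCC docs recommend configuring from a separate build directory, so the sketch does that):

```shell
# Fetch GMP, MPFR, MPC, and ISL into the source tree
cd gcc-7.3.0
./contrib/download_prerequisites

# Configure out of the source tree, as the GCC docs recommend
mkdir ../gcc-7.3-build
cd ../gcc-7.3-build
../gcc-7.3.0/configure -v --build=x86_64-linux-gnu --host=x86_64-linux-gnu \
    --target=x86_64-linux-gnu --prefix=/opt/gcc-7.3 \
    --enable-checking=release --enable-languages=c,c++,fortran \
    --disable-multilib --program-suffix=-7.3

# Build and install (this takes a while, even with a parallel make)
make -j"$(nproc)"
sudo make install
```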
Then you need to point to this version of GCC when compiling CUDA code. For example, if your project uses CMake, exporting the following environment variables before the first CMake run will do:
Code:
CC=/opt/gcc-7.3/bin/gcc-7.3
CXX=/opt/gcc-7.3/bin/g++-7.3
CUDAHOSTCXX=$CXX
export CC CXX CUDAHOSTCXX
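If you prefer not to rely on environment variables, to my understanding the same compilers can be set as CMake cache variables on the first run instead; CMAKE_CUDA_HOST_COMPILER (available since CMake 3.8 with first-class CUDA language support) plays the role of CUDAHOSTCXX here:

```shell
# Run from a build directory; ".." is a placeholder for your source dir
cmake -DCMAKE_C_COMPILER=/opt/gcc-7.3/bin/gcc-7.3 \
      -DCMAKE_CXX_COMPILER=/opt/gcc-7.3/bin/g++-7.3 \
      -DCMAKE_CUDA_HOST_COMPILER=/opt/gcc-7.3/bin/g++-7.3 \
      ..
```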
Note that you need to specifically tell CUDA to use this version of GCC as its so-called host compiler (the CUDAHOSTCXX line).
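If you invoke nvcc directly (say, from a plain Makefile rather than CMake), the host compiler can be selected with nvcc's -ccbin option; a sketch, using a hypothetical kernel.cu:

```shell
# Tell nvcc to use the parallel GCC 7.3 installation as its host compiler
nvcc -ccbin /opt/gcc-7.3/bin/g++-7.3 -o kernel kernel.cu
```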
The advantage of this approach is that you don't touch your system-wide GCC installation: you have a parallel GCC 7 installation that you use only when needed.
NVIDIA is indeed usually slow to support the latest GCC version. Previously, it was typically enough to comment out the corresponding #ifdef check in a specific CUDA header file (host_config.h) to make the latest GCC usable, but with GCC 7 and GCC 8 things got more complicated. For GCC 7, there was a patch in circulation that made CUDA work, but for GCC 8 there is no such thing, as far as I know, so the only way is to use GCC 7. Similarly for LLVM: the highest Clang version supported is 6, while Slackware has switched to Clang 7. The next CUDA version will certainly fix this, as many important distributions have switched to GCC 8, and in particular the upcoming RHEL 8 should ship with GCC 8 too. I expect a new CUDA version at the next GTC conference in March 2019 at the latest, and maybe earlier.