[SOLVED] Slackware64 current (15.0) OpenCL added to stock Mesa amdgpu - call for testing
bassmadrigal replied "This will only work with supported AMD GPUs. It won't work for Nvidia."
I really didn't expect as much... So now I have to figure out either how to kludge CUDA into working with my unsupported GPU *or* update OpenCL so that it sees my GPU. Any suggestions on which track I should try first? This is important to me.
The proprietary Nvidia drivers (not nouveau) include all the OpenCL libs needed. However, CUDA will not work with OpenCL and vice versa. They are different, competing compute APIs (one proprietary, one an open standard). Also, Nvidia does not support CUDA development on all of its GPUs. If you need to build OpenCL applications on an Nvidia GPU, start here: https://developer.nvidia.com/opencl
Thanks a lot kingB. I am thinking about upgrading my GPU, but I can't decide whether to go with AMD or stay with Nvidia. Going with AMD has at least two big incentives:
1) Get a lot more GPU for my $
2) Freeing myself from the need to rebuild the proprietary Nvidia driver/kernel modules every time I upgrade my kernel (which I do often).
My current low-end Nvidia GPU [GT 1030], as it turns out, is supported and working with the CUDA toolkit and CUDA NN libs. CUDA facilitates building the neural-network-based app (https://github.com/lightvector/KataGo) that I would mainly be upgrading for. That app, however, simply will *not* compile against the current stock Slackware OpenCL libraries. The project site even mentions that Linux OpenCL drivers are problematic - yet the pre-compiled Windows binaries seem to work fine with the OpenCL-based version.
So if I were to make the jump to AMD, I would most certainly be relying on your SlackBuild (or the one on SlackBuilds.org) to support it.
I don't suppose you would be willing to offer some direction here? Maybe even see if you can build the above app on your box?
wirelessmc, the Mesa OpenCL libraries aren't quite ready (especially for AMD). You'll need the AMD OpenCL stack from AMDGPU-PRO, or perhaps ROCm (although newer NAVI cards are not supported, and its build system organization gives me hives). If you run AMD on Windows, that driver (Adrenalin 2020 Edition) will of course have all the OpenCL bits and work fine. The Linux equivalent is AMDGPU-PRO - thus my post here as a "workaround" for the F/OSS amdgpu kernel module/driver.
1) and 2) were also my reasons to drop Nvidia for AMD when Current hit kernel 5.4.x (5.8 would be better). I'm pleased with the decision so far, although it was mainly for games and BOINC. Now, after a kernel update, I need to do ... nothing ... to update amdgpu. Sweet!
I just did a big upgrade on the current boxes and updated to amdgpu-opencl-20.30. I'll give KataGo a shot soon while I'm updating and rebuilding all my SBo stuff. DM me the build options etc. you use. They also report it to be buggy on the RX 5700, so I will build and also test on an RX 590 I have. This will be on Slackware64-current, so you'll have a baseline.
Hi, I replaced my RX580 with an RX 5700 XT and OpenCL stopped working on the RX 5700 XT. After more digging, I got it working again.
It looks like the RX580 uses libamdocl-orca64.so for OpenCL while the RX 5700 XT uses libamdocl64.so, but I had to copy libdrm_amdgpu.so.1.0.0 from libdrm-amdgpu-amdgpu1_2.4.100-1109583_amd64.deb, replacing the Slackware one. I also left only amdocl64.icd in /etc/OpenCL/vendors.
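The vendors directory mentioned above is the OpenCL ICD registry: each `*.icd` file holds one line naming a driver library, and the ICD loader dlopen()s every one of them, so pruning the directory down to `amdocl64.icd` is what forces the loader onto the AMD PAL driver. A minimal sketch for inspecting that registry (the `/etc/OpenCL/vendors` path is the conventional one from the post; adjust if your ICD loader is configured differently):

```python
"""Sketch: list which driver library each registered OpenCL ICD points at."""
from pathlib import Path

def list_icds(vendors_dir="/etc/OpenCL/vendors"):
    """Return {icd filename: driver library named inside it}.
    Each .icd file is a one-line text file, e.g. 'libamdocl64.so'."""
    result = {}
    d = Path(vendors_dir)
    if d.is_dir():
        for icd in sorted(d.glob("*.icd")):
            result[icd.name] = icd.read_text().strip()
    return result

if __name__ == "__main__":
    for name, lib in list_icds().items():
        print(f"{name} -> {lib}")
```

On a box set up per the workaround, this would show only `amdocl64.icd -> libamdocl64.so`; a stock Mesa install would also list a Mesa/Clover entry, which is why clinfo can report two platforms.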
Number of platforms 1
Platform Name AMD Accelerated Parallel Processing
Platform Vendor Advanced Micro Devices, Inc.
Platform Version OpenCL 2.1 AMD-APP (3143.9)
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_icd cl_amd_event_callback cl_amd_offline_devices
Platform Host timer resolution 1ns
Platform Extensions function suffix AMD
Platform Name AMD Accelerated Parallel Processing
Number of devices 1
Device Name gfx1010
Device Vendor Advanced Micro Devices, Inc.
Device Vendor ID 0x1002
Device Version OpenCL 2.0 AMD-APP (3143.9)
Driver Version 3143.9 (PAL,LC)
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Board Name (AMD) AMD Radeon RX 5700 XT
Device PCI-e ID (AMD) 0x731f
Device Topology (AMD) PCI-E, 0b:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 20
SIMD per compute unit (AMD) 4
SIMD width (AMD) 32
SIMD instruction width (AMD) 1
Max clock frequency 2100MHz
Graphics IP (AMD) 10.10
Device Partition (core)
Max number of sub-devices 20
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 1024x1024x1024
Max work group size 256
Preferred work group size (AMD) 256
Max work group size (AMD) 1024
Preferred work group size multiple 32
Wavefront width (AMD) 32
Preferred / native vector sizes
char 4 / 4
short 2 / 2
int 1 / 1
long 1 / 1
half 1 / 1 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals No
Infinity and NANs No
Round to nearest No
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 8573157376 (7.984GiB)
Global free memory (AMD) 8306688 (7.922GiB) 8044544 (7.672GiB)
Global memory channels (AMD) 8
Global memory banks per channel (AMD) 4
Global memory bank width (AMD) 256 bytes
Error Correction support No
Max memory allocation 7059013632 (6.574GiB)
Unified memory for Host and Device No
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 2048 bits (256 bytes)
Preferred alignment for atomics
SVM 0 bytes
Global 0 bytes
Local 0 bytes
Max size for global variable 6353112064 (5.917GiB)
Preferred total size of global vars 8573157376 (7.984GiB)
Global Memory cache type Read/Write
Global Memory cache size 16384 (16KiB)
Global Memory cache line size 64 bytes
Image support Yes
Max number of samplers per kernel 16
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 256 bytes
Pitch alignment for 2D image buffers 256 pixels
Max 2D image size 16384x16384 pixels
Max 3D image size 2048x2048x2048 pixels
Max number of read image args 128
Max number of write image args 64
Max number of read/write image args 64
Max number of pipe args 16
Max active pipe reservations 16
Max pipe packet size 2764046336 (2.574GiB)
Local memory type Local
Local memory size 65536 (64KiB)
Local memory size per CU (AMD) 65536 (64KiB)
Local memory banks (AMD) 32
Max number of constant args 8
Max constant buffer size 7059013632 (6.574GiB)
Preferred constant buffer size (AMD) 16384 (16KiB)
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution No
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 262144 (256KiB)
Max size 8388608 (8MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Number of P2P devices (AMD) 0
Profiling timer resolution 1ns
Profiling timer offset since Epoch (AMD) 1599657823269325201ns (Wed Sep 9 09:23:43 2020)
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Thread trace supported (AMD) Yes
Number of async queues (AMD) 4
Max real-time compute queues (AMD) 1
Max real-time compute units (AMD) 0
printf() buffer size 4194304 (4MiB)
Built-in kernels (n/a)
Device Extensions cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_khr_gl_depth_images cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_gl_event cl_khr_depth_images cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_amd_copy_buffer_p2p
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) AMD Accelerated Parallel Processing
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) Success [AMD]
clCreateContext(NULL, ...) [default] Success [AMD]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx1010
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx1010
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) Success (1)
Platform Name AMD Accelerated Parallel Processing
Device Name gfx1010
ICD loader properties
ICD loader Name OpenCL ICD Loader
ICD loader Vendor OCL Icd free software
ICD loader Version 2.2.12
ICD loader Profile OpenCL 2.2
Quote: Originally posted by jedrek.b - "I replaced my RX580 with RX 5700 XT and opencl stopped working..." (full post above)
Well, heck. I was not expecting that. I was wondering what was up when I upgraded using amdgpu-pro 20.30. Given all the updates to current, I was looking in the wrong area...
I'm not able to get amdgpu-opencl based on amdgpu-pro-20.30 to work, so I rolled back to 20.10. I'm not quite sure what's missing; jedrek.b's workaround did not work for me. I'll update the top post. EDIT: it seems I can't edit the top post, so I'll just drop this here.
Last edited by kingbeowulf; 09-12-2020 at 09:31 PM.
Reason: additional comment
GPU: XFX Radeon RX 5700 XT THICC III Ultra (Factory overclocked, not a "stock" RX 5700 XT).
Stock amdgpu with amdgpu-opencl-20.10
Compilation:
Code:
$ cmake . -DUSE_BACKEND=OPENCL -DNO_GIT_REVISION=1
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Building 'katago' executable for GTP engine and other tools.
-- -DUSE_BACKEND=OPENCL, using OpenCL backend.
-- -DNO_GIT_REVISION=1 is set, avoiding including the Git revision in compiled executable
-- Looking for CL_VERSION_2_2
-- Looking for CL_VERSION_2_2 - found
-- Found OpenCL: /usr/lib64/libOpenCL.so (found version "2.2")
-- Found ZLIB: /usr/lib64/libz.so (found version "1.2.11")
-- Found Boost: /usr/lib64/cmake/Boost-1.74.0/BoostConfig.cmake (found version "1.74.0") found components: system filesystem
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Configuring done
-- Generating done
-- Build files have been written to: /home/beowulf/test/SlackBuilds/development/KataGo/cpp
$ make
Scanning dependencies of target katago
[ 1%] Building CXX object CMakeFiles/katago.dir/core/global.cpp.o
[ 2%] Building CXX object CMakeFiles/katago.dir/core/bsearch.cpp.o
[ 3%] Building CXX object CMakeFiles/katago.dir/core/config_parser.cpp.o
[ 5%] Building CXX object CMakeFiles/katago.dir/core/datetime.cpp.o
[ 6%] Building CXX object CMakeFiles/katago.dir/core/elo.cpp.o
[ 7%] Building CXX object CMakeFiles/katago.dir/core/fancymath.cpp.o
[ 9%] Building CXX object CMakeFiles/katago.dir/core/hash.cpp.o
[ 10%] Building CXX object CMakeFiles/katago.dir/core/logger.cpp.o
[ 11%] Building CXX object CMakeFiles/katago.dir/core/makedir.cpp.o
[ 13%] Building CXX object CMakeFiles/katago.dir/core/md5.cpp.o
[ 14%] Building CXX object CMakeFiles/katago.dir/core/multithread.cpp.o
[ 15%] Building CXX object CMakeFiles/katago.dir/core/rand.cpp.o
[ 17%] Building CXX object CMakeFiles/katago.dir/core/rand_helpers.cpp.o
[ 18%] Building CXX object CMakeFiles/katago.dir/core/sha2.cpp.o
[ 19%] Building CXX object CMakeFiles/katago.dir/core/test.cpp.o
[ 21%] Building CXX object CMakeFiles/katago.dir/core/threadsafequeue.cpp.o
[ 22%] Building CXX object CMakeFiles/katago.dir/core/timer.cpp.o
[ 23%] Building CXX object CMakeFiles/katago.dir/game/board.cpp.o
[ 25%] Building CXX object CMakeFiles/katago.dir/game/rules.cpp.o
[ 26%] Building CXX object CMakeFiles/katago.dir/game/boardhistory.cpp.o
[ 27%] Building CXX object CMakeFiles/katago.dir/dataio/sgf.cpp.o
[ 28%] Building CXX object CMakeFiles/katago.dir/dataio/numpywrite.cpp.o
[ 30%] Building CXX object CMakeFiles/katago.dir/dataio/trainingwrite.cpp.o
[ 31%] Building CXX object CMakeFiles/katago.dir/dataio/loadmodel.cpp.o
[ 32%] Building CXX object CMakeFiles/katago.dir/dataio/lzparse.cpp.o
[ 34%] Building CXX object CMakeFiles/katago.dir/dataio/homedata.cpp.o
[ 35%] Building CXX object CMakeFiles/katago.dir/neuralnet/nninputs.cpp.o
[ 36%] Building CXX object CMakeFiles/katago.dir/neuralnet/modelversion.cpp.o
[ 38%] Building CXX object CMakeFiles/katago.dir/neuralnet/nneval.cpp.o
[ 39%] Building CXX object CMakeFiles/katago.dir/neuralnet/desc.cpp.o
[ 40%] Building CXX object CMakeFiles/katago.dir/neuralnet/openclbackend.cpp.o
[ 42%] Building CXX object CMakeFiles/katago.dir/neuralnet/openclkernels.cpp.o
[ 43%] Building CXX object CMakeFiles/katago.dir/neuralnet/openclhelpers.cpp.o
[ 44%] Building CXX object CMakeFiles/katago.dir/neuralnet/opencltuner.cpp.o
[ 46%] Building CXX object CMakeFiles/katago.dir/search/timecontrols.cpp.o
[ 47%] Building CXX object CMakeFiles/katago.dir/search/searchparams.cpp.o
[ 48%] Building CXX object CMakeFiles/katago.dir/search/mutexpool.cpp.o
[ 50%] Building CXX object CMakeFiles/katago.dir/search/search.cpp.o
[ 51%] Building CXX object CMakeFiles/katago.dir/search/searchresults.cpp.o
[ 52%] Building CXX object CMakeFiles/katago.dir/search/asyncbot.cpp.o
[ 53%] Building CXX object CMakeFiles/katago.dir/search/distributiontable.cpp.o
[ 55%] Building CXX object CMakeFiles/katago.dir/search/analysisdata.cpp.o
[ 56%] Building CXX object CMakeFiles/katago.dir/program/gtpconfig.cpp.o
[ 57%] Building CXX object CMakeFiles/katago.dir/program/setup.cpp.o
[ 59%] Building CXX object CMakeFiles/katago.dir/program/playutils.cpp.o
[ 60%] Building CXX object CMakeFiles/katago.dir/program/playsettings.cpp.o
[ 61%] Building CXX object CMakeFiles/katago.dir/program/play.cpp.o
[ 63%] Building CXX object CMakeFiles/katago.dir/program/selfplaymanager.cpp.o
[ 64%] Building CXX object CMakeFiles/katago.dir/tests/testboardarea.cpp.o
[ 65%] Building CXX object CMakeFiles/katago.dir/tests/testboardbasic.cpp.o
[ 67%] Building CXX object CMakeFiles/katago.dir/tests/testcommon.cpp.o
[ 68%] Building CXX object CMakeFiles/katago.dir/tests/testrules.cpp.o
[ 69%] Building CXX object CMakeFiles/katago.dir/tests/testscore.cpp.o
[ 71%] Building CXX object CMakeFiles/katago.dir/tests/testsgf.cpp.o
[ 72%] Building CXX object CMakeFiles/katago.dir/tests/testnninputs.cpp.o
[ 73%] Building CXX object CMakeFiles/katago.dir/tests/testownership.cpp.o
[ 75%] Building CXX object CMakeFiles/katago.dir/tests/testsearch.cpp.o
[ 76%] Building CXX object CMakeFiles/katago.dir/tests/testtime.cpp.o
[ 77%] Building CXX object CMakeFiles/katago.dir/tests/testtrainingwrite.cpp.o
[ 78%] Building CXX object CMakeFiles/katago.dir/tests/testnn.cpp.o
[ 80%] Building CXX object CMakeFiles/katago.dir/command/commandline.cpp.o
[ 81%] Building CXX object CMakeFiles/katago.dir/command/analysis.cpp.o
[ 82%] Building CXX object CMakeFiles/katago.dir/command/benchmark.cpp.o
[ 84%] Building CXX object CMakeFiles/katago.dir/command/evalsgf.cpp.o
[ 85%] Building CXX object CMakeFiles/katago.dir/command/gatekeeper.cpp.o
[ 86%] Building CXX object CMakeFiles/katago.dir/command/gtp.cpp.o
[ 88%] Building CXX object CMakeFiles/katago.dir/command/lzcost.cpp.o
[ 89%] Building CXX object CMakeFiles/katago.dir/command/match.cpp.o
[ 90%] Building CXX object CMakeFiles/katago.dir/command/matchauto.cpp.o
[ 92%] Building CXX object CMakeFiles/katago.dir/command/misc.cpp.o
[ 93%] Building CXX object CMakeFiles/katago.dir/command/runtests.cpp.o
[ 94%] Building CXX object CMakeFiles/katago.dir/command/sandbox.cpp.o
[ 96%] Building CXX object CMakeFiles/katago.dir/command/selfplay.cpp.o
[ 97%] Building CXX object CMakeFiles/katago.dir/command/tune.cpp.o
[ 98%] Building CXX object CMakeFiles/katago.dir/main.cpp.o
[100%] Linking CXX executable katago
[100%] Built target katago
$ ./katago benchmark
2020-09-14 17:29:47-0700: Loading model and initializing benchmark...
2020-09-14 17:29:47-0700: nnRandSeed0 = 1511841109923074339
2020-09-14 17:29:47-0700: After dedups: nnModelFile0 = /home/beowulf/test/SlackBuilds/development/KataGo/cpp/default_model.bin.gz useFP16 auto useNHWC auto
2020-09-14 17:29:49-0700: Found OpenCL Platform 0: AMD Accelerated Parallel Processing (Advanced Micro Devices, Inc.) (OpenCL 2.1 AMD-APP (3075.10))
2020-09-14 17:29:49-0700: Found 1 device(s) on platform 0 with type CPU or GPU or Accelerator
2020-09-14 17:29:49-0700: Found OpenCL Platform 1: Clover (Mesa) (OpenCL 1.1 Mesa 20.1.7)
2020-09-14 17:29:49-0700: Found 1 device(s) on platform 1 with type CPU or GPU or Accelerator
2020-09-14 17:29:49-0700: Found OpenCL Device 0: gfx1010 (Advanced Micro Devices, Inc.) (score 11000200)
2020-09-14 17:29:49-0700: Found OpenCL Device 1: AMD Radeon RX 5700 XT (NAVI10, DRM 3.35.0, 5.4.65, LLVM 10.0.1) (AMD) (score 11000101)
2020-09-14 17:29:49-0700: Creating context for OpenCL Platform: AMD Accelerated Parallel Processing (Advanced Micro Devices, Inc.) (OpenCL 2.1 AMD-APP (3075.10))
2020-09-14 17:29:49-0700: Using OpenCL Device 0: gfx1010 (Advanced Micro Devices, Inc.) OpenCL 2.0 AMD-APP (3075.10) (Extensions: cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_khr_gl_depth_images cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_gl_event cl_khr_depth_images cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_amd_copy_buffer_p2p )
2020-09-14 17:29:49-0700: No existing tuning parameters found or parseable or valid at: /home/beowulf/.katago/opencltuning/tune8_gpugfx1010_x19_y19_c320_mv8.txt
2020-09-14 17:29:49-0700: Performing autotuning
2020-09-14 17:29:49-0700: *** On some systems, this may take several minutes, please be patient ***
2020-09-14 17:29:49-0700: Found OpenCL Platform 0: AMD Accelerated Parallel Processing (Advanced Micro Devices, Inc.) (OpenCL 2.1 AMD-APP (3075.10))
2020-09-14 17:29:49-0700: Found 1 device(s) on platform 0 with type CPU or GPU or Accelerator
2020-09-14 17:29:49-0700: Found OpenCL Platform 1: Clover (Mesa) (OpenCL 1.1 Mesa 20.1.7)
2020-09-14 17:29:49-0700: Found 1 device(s) on platform 1 with type CPU or GPU or Accelerator
2020-09-14 17:29:49-0700: Found OpenCL Device 0: gfx1010 (Advanced Micro Devices, Inc.) (score 11000200)
2020-09-14 17:29:49-0700: Found OpenCL Device 1: AMD Radeon RX 5700 XT (NAVI10, DRM 3.35.0, 5.4.65, LLVM 10.0.1) (AMD) (score 11000101)
2020-09-14 17:29:49-0700: Creating context for OpenCL Platform: AMD Accelerated Parallel Processing (Advanced Micro Devices, Inc.) (OpenCL 2.1 AMD-APP (3075.10))
2020-09-14 17:29:49-0700: Using OpenCL Device 0: gfx1010 (Advanced Micro Devices, Inc.) OpenCL 2.0 AMD-APP (3075.10) (Extensions: cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_khr_gl_depth_images cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_gl_event cl_khr_depth_images cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_amd_copy_buffer_p2p )
Setting winograd3x3TileSize = 4
------------------------------------------------------
Tuning xGemmDirect for 1x1 convolutions and matrix mult
Testing 56 different configs
Tuning 0/56 (reference) Calls/sec 278.418 L2Error 0 WGD=8 MDIMCD=1 NDIMCD=1 MDIMAD=1 NDIMBD=1 KWID=1 VWMD=1 VWND=1 PADA=1 PADB=1
Tuning 2/56 Calls/sec 3037.67 L2Error 0 WGD=8 MDIMCD=8 NDIMCD=8 MDIMAD=8 NDIMBD=8 KWID=1 VWMD=1 VWND=1 PADA=1 PADB=1
Tuning 3/56 Calls/sec 4288.08 L2Error 0 WGD=16 MDIMCD=8 NDIMCD=8 MDIMAD=8 NDIMBD=8 KWID=1 VWMD=1 VWND=1 PADA=1 PADB=1
Tuning 4/56 Calls/sec 5615.23 L2Error 0 WGD=32 MDIMCD=16 NDIMCD=16 MDIMAD=8 NDIMBD=8 KWID=2 VWMD=2 VWND=2 PADA=1 PADB=1
Tuning 5/56 Calls/sec 6356.66 L2Error 0 WGD=32 MDIMCD=8 NDIMCD=16 MDIMAD=8 NDIMBD=8 KWID=2 VWMD=2 VWND=2 PADA=1 PADB=1
Tuning 8/56 Calls/sec 6424.11 L2Error 0 WGD=32 MDIMCD=8 NDIMCD=16 MDIMAD=8 NDIMBD=8 KWID=2 VWMD=4 VWND=2 PADA=1 PADB=1
Tuning 10/56 Calls/sec 6430.62 L2Error 0 WGD=32 MDIMCD=8 NDIMCD=16 MDIMAD=8 NDIMBD=8 KWID=8 VWMD=2 VWND=2 PADA=1 PADB=1
Tuning 20/56 ...
Tuning 40/56 ...
Tuning 47/56 Calls/sec 6450.63 L2Error 0 WGD=32 MDIMCD=16 NDIMCD=8 MDIMAD=8 NDIMBD=8 KWID=8 VWMD=2 VWND=4 PADA=1 PADB=1
------------------------------------------------------
Tuning xGemm for convolutions
Testing 70 different configs
Tuning 0/70 (reference) Calls/sec 366.946 L2Error 0 MWG=8 NWG=8 KWG=8 MDIMC=1 NDIMC=1 MDIMA=1 NDIMB=1 KWI=1 VWM=1 VWN=1 STRM=0 STRN=0 SA=0 SB=0
Tuning 1/70 Calls/sec 367.095 L2Error 0 MWG=8 NWG=8 KWG=8 MDIMC=1 NDIMC=1 MDIMA=1 NDIMB=1 KWI=1 VWM=1 VWN=1 STRM=0 STRN=0 SA=0 SB=0
Tuning 2/70 Calls/sec 761.028 L2Error 0 MWG=8 NWG=8 KWG=8 MDIMC=8 NDIMC=8 MDIMA=8 NDIMB=8 KWI=1 VWM=1 VWN=1 STRM=0 STRN=0 SA=0 SB=0
Tuning 3/70 Calls/sec 1487.92 L2Error 0 MWG=16 NWG=16 KWG=16 MDIMC=8 NDIMC=8 MDIMA=8 NDIMB=8 KWI=1 VWM=1 VWN=1 STRM=0 STRN=0 SA=0 SB=0
Tuning 4/70 Calls/sec 2205.15 L2Error 0 MWG=32 NWG=32 KWG=32 MDIMC=16 NDIMC=16 MDIMA=16 NDIMB=16 KWI=2 VWM=2 VWN=2 STRM=0 STRN=0 SA=1 SB=1
Tuning 5/70 Calls/sec 3926.68 L2Error 0 MWG=64 NWG=64 KWG=16 MDIMC=16 NDIMC=16 MDIMA=16 NDIMB=16 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=0 SB=0
Tuning 14/70 Calls/sec 3941.97 L2Error 0 MWG=64 NWG=64 KWG=16 MDIMC=16 NDIMC=8 MDIMA=16 NDIMB=8 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=0 SB=0
Tuning 40/70 ...
Tuning 42/70 Calls/sec 3961.01 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=16 NDIMC=8 MDIMA=16 NDIMB=8 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=0 SB=0
Tuning 45/70 Calls/sec 4234.12 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=8 NDIMC=8 MDIMA=8 NDIMB=8 KWI=2 VWM=2 VWN=2 STRM=0 STRN=0 SA=1 SB=1
Tuning 59/70 Calls/sec 4436.6 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=8 NDIMC=8 MDIMA=8 NDIMB=8 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=0 SB=0
------------------------------------------------------
Tuning hGemmWmma for convolutions
Testing 146 different configs
FP16 tensor core tuning failed, assuming no FP16 tensor core support
------------------------------------------------------
Tuning xGemm16 for convolutions
Testing 70 different configs
Tuning 0/70 (reference) Calls/sec 519.453 L2Error 0 MWG=8 NWG=8 KWG=8 MDIMC=1 NDIMC=1 MDIMA=1 NDIMB=1 KWI=1 VWM=1 VWN=1 STRM=0 STRN=0 SA=0 SB=0
Tuning 1/70 Calls/sec 4516.71 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=8 NDIMC=8 MDIMA=8 NDIMB=8 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=0 SB=0
Tuning 4/70 Calls/sec 4743.46 L2Error 0 MWG=64 NWG=64 KWG=16 MDIMC=16 NDIMC=16 MDIMA=16 NDIMB=16 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=0 SB=0
Tuning 5/70 Calls/sec 5527.88 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=8 NDIMC=16 MDIMA=8 NDIMB=16 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=0 SB=0
Tuning 7/70 Calls/sec 5543.6 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=16 NDIMC=8 MDIMA=16 NDIMB=8 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=0 SB=0
Tuning 10/70 Calls/sec 9195.95 L2Error 0 MWG=64 NWG=64 KWG=16 MDIMC=16 NDIMC=8 MDIMA=16 NDIMB=8 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=1 SB=1
Tuning 17/70 Calls/sec 9424.76 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=8 NDIMC=8 MDIMA=8 NDIMB=8 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=1 SB=1
Tuning 40/70 ...
Tuning 50/70 Calls/sec 9425.38 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=16 NDIMC=8 MDIMA=16 NDIMB=8 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=1 SB=1
Tuning 54/70 Calls/sec 9647.03 L2Error 0 MWG=64 NWG=64 KWG=32 MDIMC=8 NDIMC=16 MDIMA=8 NDIMB=16 KWI=2 VWM=4 VWN=4 STRM=0 STRN=0 SA=1 SB=1
Enabling FP16 compute due to better performance
------------------------------------------------------
Using FP16 storage!
Using FP16 compute!
------------------------------------------------------
Tuning winograd transform for convolutions
Testing 47 different configs
Tuning 0/47 (reference) Calls/sec 2293.98 L2Error 0 transLocalSize0=1 transLocalSize1=1
Tuning 1/47 Calls/sec 2294.98 L2Error 0 transLocalSize0=1 transLocalSize1=1
Tuning 2/47 Calls/sec 29209.4 L2Error 0 transLocalSize0=16 transLocalSize1=8
Tuning 3/47 Calls/sec 33522.1 L2Error 0 transLocalSize0=64 transLocalSize1=1
Tuning 5/47 Calls/sec 33657.4 L2Error 0 transLocalSize0=64 transLocalSize1=4
Tuning 6/47 Calls/sec 35366.2 L2Error 0 transLocalSize0=32 transLocalSize1=4
Tuning 20/47 ...
Tuning 40/47 ...
------------------------------------------------------
Tuning winograd untransform for convolutions
Testing 111 different configs
Tuning 0/111 (reference) Calls/sec 4411.94 L2Error 0 untransLocalSize0=1 untransLocalSize1=1 untransLocalSize2=1
Tuning 2/111 Calls/sec 24665.6 L2Error 0 untransLocalSize0=16 untransLocalSize1=1 untransLocalSize2=16
Tuning 10/111 Calls/sec 27702.5 L2Error 0 untransLocalSize0=2 untransLocalSize1=16 untransLocalSize2=2
Tuning 12/111 Calls/sec 29562.5 L2Error 0 untransLocalSize0=8 untransLocalSize1=2 untransLocalSize2=16
Tuning 13/111 Calls/sec 32765.4 L2Error 0 untransLocalSize0=8 untransLocalSize1=2 untransLocalSize2=8
Tuning 40/111 ...
Tuning 60/111 ...
Tuning 80/111 ...
Tuning 99/111 Calls/sec 33185.8 L2Error 0 untransLocalSize0=8 untransLocalSize1=4 untransLocalSize2=1
Tuning 104/111 Calls/sec 36278.6 L2Error 0 untransLocalSize0=8 untransLocalSize1=2 untransLocalSize2=2
------------------------------------------------------
Tuning global pooling strides
Testing 73 different configs
Tuning 0/73 (reference) Calls/sec 13654.1 L2Error 0 XYSTRIDE=1 CHANNELSTRIDE=1 BATCHSTRIDE=1
Tuning 1/73 Calls/sec 13713.7 L2Error 0 XYSTRIDE=1 CHANNELSTRIDE=1 BATCHSTRIDE=1
Tuning 2/73 Calls/sec 90579.7 L2Error 2.1072e-13 XYSTRIDE=32 CHANNELSTRIDE=2 BATCHSTRIDE=2
Tuning 20/73 ...
Tuning 26/73 Calls/sec 91645.8 L2Error 2.1072e-13 XYSTRIDE=32 CHANNELSTRIDE=2 BATCHSTRIDE=1
Tuning 40/73 ...
Tuning 60/73 ...
Done tuning
------------------------------------------------------
2020-09-14 17:31:48-0700: Done tuning, saved results to /home/beowulf/.katago/opencltuning/tune8_gpugfx1010_x19_y19_c320_mv8.txt
2020-09-14 17:31:52-0700: OpenCL backend thread 0: Model version 8
2020-09-14 17:31:52-0700: OpenCL backend thread 0: Model name: g170-b30c320x2-s4824661760-d1229536699
2020-09-14 17:31:55-0700: OpenCL backend thread 0: FP16Storage true FP16Compute true FP16TensorCores false
2020-09-14 17:31:55-0700: Loaded config /home/beowulf/test/SlackBuilds/development/KataGo/cpp/default_gtp.cfg
2020-09-14 17:31:55-0700: Loaded model /home/beowulf/test/SlackBuilds/development/KataGo/cpp/default_model.bin.gz
Testing using 800 visits.
If you have a good GPU, you might increase this using "-visits N" to get more accurate results.
If you have a weak GPU and this is taking forever, you can decrease it instead to finish the benchmark faster.
You are currently using the OpenCL version of KataGo.
If you have a strong GPU capable of FP16 tensor cores (e.g. RTX2080), using the Cuda version of KataGo instead may give a mild performance boost.
Your GTP config is currently set to use numSearchThreads = 6
Automatically trying different numbers of threads to home in on the best:
2020-09-14 17:31:55-0700: nnRandSeed0 = 16727936483373699045
2020-09-14 17:31:55-0700: After dedups: nnModelFile0 = /home/beowulf/test/SlackBuilds/development/KataGo/cpp/default_model.bin.gz useFP16 auto useNHWC auto
2020-09-14 17:31:56-0700: Found OpenCL Platform 0: AMD Accelerated Parallel Processing (Advanced Micro Devices, Inc.) (OpenCL 2.1 AMD-APP (3075.10))
2020-09-14 17:31:56-0700: Found 1 device(s) on platform 0 with type CPU or GPU or Accelerator
2020-09-14 17:31:56-0700: Found OpenCL Platform 1: Clover (Mesa) (OpenCL 1.1 Mesa 20.1.7)
2020-09-14 17:31:56-0700: Found 1 device(s) on platform 1 with type CPU or GPU or Accelerator
2020-09-14 17:31:56-0700: Found OpenCL Device 0: gfx1010 (Advanced Micro Devices, Inc.) (score 11000200)
2020-09-14 17:31:56-0700: Found OpenCL Device 1: AMD Radeon RX 5700 XT (NAVI10, DRM 3.35.0, 5.4.65, LLVM 10.0.1) (AMD) (score 11000101)
2020-09-14 17:31:56-0700: Creating context for OpenCL Platform: AMD Accelerated Parallel Processing (Advanced Micro Devices, Inc.) (OpenCL 2.1 AMD-APP (3075.10))
2020-09-14 17:31:56-0700: Using OpenCL Device 0: gfx1010 (Advanced Micro Devices, Inc.) OpenCL 2.0 AMD-APP (3075.10) (Extensions: cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_gl_sharing cl_khr_gl_depth_images cl_amd_device_attribute_query cl_amd_media_ops cl_amd_media_ops2 cl_khr_image2d_from_buffer cl_khr_subgroups cl_khr_gl_event cl_khr_depth_images cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_amd_copy_buffer_p2p )
2020-09-14 17:31:56-0700: Loaded tuning parameters from: /home/beowulf/.katago/opencltuning/tune8_gpugfx1010_x19_y19_c320_mv8.txt
2020-09-14 17:32:01-0700: OpenCL backend thread 0: Model version 8
2020-09-14 17:32:01-0700: OpenCL backend thread 0: Model name: g170-b30c320x2-s4824661760-d1229536699
2020-09-14 17:32:03-0700: OpenCL backend thread 0: FP16Storage true FP16Compute true FP16TensorCores false
Possible numbers of threads to test: 1, 2, 3, 4, 5, 6, 8, 10, 12, 16, 20, 24, 32,
numSearchThreads = 5: 10 / 10 positions, visits/s = 385.99 nnEvals/s = 339.66 nnBatches/s = 136.54 avgBatchSize = 2.49 (20.8 secs)
numSearchThreads = 12: 10 / 10 positions, visits/s = 472.84 nnEvals/s = 425.73 nnBatches/s = 71.95 avgBatchSize = 5.92 (17.2 secs)
numSearchThreads = 10: 10 / 10 positions, visits/s = 444.73 nnEvals/s = 389.87 nnBatches/s = 78.94 avgBatchSize = 4.94 (18.2 secs)
numSearchThreads = 20: 10 / 10 positions, visits/s = 407.54 nnEvals/s = 367.63 nnBatches/s = 37.72 avgBatchSize = 9.75 (20.1 secs)
numSearchThreads = 8: 10 / 10 positions, visits/s = 457.40 nnEvals/s = 400.83 nnBatches/s = 101.17 avgBatchSize = 3.96 (17.6 secs)
numSearchThreads = 6: 10 / 10 positions, visits/s = 432.82 nnEvals/s = 382.11 nnBatches/s = 128.18 avgBatchSize = 2.98 (18.6 secs)
Ordered summary of results:
numSearchThreads = 5: 10 / 10 positions, visits/s = 385.99 nnEvals/s = 339.66 nnBatches/s = 136.54 avgBatchSize = 2.49 (20.8 secs) (EloDiff baseline)
numSearchThreads = 6: 10 / 10 positions, visits/s = 432.82 nnEvals/s = 382.11 nnBatches/s = 128.18 avgBatchSize = 2.98 (18.6 secs) (EloDiff +39)
numSearchThreads = 8: 10 / 10 positions, visits/s = 457.40 nnEvals/s = 400.83 nnBatches/s = 101.17 avgBatchSize = 3.96 (17.6 secs) (EloDiff +51)
numSearchThreads = 10: 10 / 10 positions, visits/s = 444.73 nnEvals/s = 389.87 nnBatches/s = 78.94 avgBatchSize = 4.94 (18.2 secs) (EloDiff +33)
numSearchThreads = 12: 10 / 10 positions, visits/s = 472.84 nnEvals/s = 425.73 nnBatches/s = 71.95 avgBatchSize = 5.92 (17.2 secs) (EloDiff +48)
numSearchThreads = 20: 10 / 10 positions, visits/s = 407.54 nnEvals/s = 367.63 nnBatches/s = 37.72 avgBatchSize = 9.75 (20.1 secs) (EloDiff -44)
Based on some test data, each speed doubling gains perhaps ~250 Elo by searching deeper.
Based on some test data, each thread costs perhaps 7 Elo if using 800 visits, and 2 Elo if using 5000 visits (by making MCTS worse).
So APPROXIMATELY based on this benchmark, if you intend to do a 5 second search:
numSearchThreads = 5: (baseline)
numSearchThreads = 6: +39 Elo
numSearchThreads = 8: +51 Elo (recommended)
numSearchThreads = 10: +33 Elo
numSearchThreads = 12: +48 Elo
numSearchThreads = 20: -44 Elo
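For anyone curious where those Elo numbers come from, the two rules of thumb printed above can be combined by hand: Elo gained from higher visits/s (roughly 250 per speed doubling) minus a per-thread MCTS penalty (assumed ~7 Elo per extra thread here, the 800-visit figure quoted above). A rough sketch using the visits/s figures from this run; the exact formula KataGo applies internally may differ:

```shell
#!/bin/sh
# Rough sanity check of KataGo's thread recommendation, using the two
# rules of thumb from the benchmark output. Baseline is the 5-thread run.
baseline_threads=5
baseline_vps=385.99

estimate() {
    threads=$1; vps=$2
    awk -v t="$threads" -v v="$vps" \
        -v bt="$baseline_threads" -v bv="$baseline_vps" \
        'BEGIN {
            speed_gain  = 250 * log(v / bv) / log(2)  # Elo from searching faster
            thread_cost = 7 * (t - bt)                # Elo lost to wider MCTS
            printf "numSearchThreads = %2d: %+.0f Elo (approx)\n", \
                   t, speed_gain - thread_cost
        }'
}

# visits/s figures measured in the benchmark above
estimate 6  432.82
estimate 8  457.40
estimate 12 472.84
estimate 20 407.54
```

The shape of the result matches the benchmark: mid-range thread counts come out ahead, while 20 threads loses more to the per-thread MCTS penalty than it gains in raw speed.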
During the benchmark run, the RX 5700 XT spiked to 106 C (3066 rpm) and 110 C (3215 rpm) for 20-30 seconds. Checking against clinfo, KataGo chose the correct OpenCL platform ("Creating context for OpenCL Platform: AMD Accelerated Parallel Processing"). The OpenCL GPU compute issues seem to have been patched in the AMDGPU-PRO 20.x drivers.
Wow! These are outstanding results KingB! I am so grateful for you making the effort and volunteering your time to do this!
The benchmark data is truly telling. As a data point, my measly little Nvidia GT 1030 GPU (KataGo compiled with the CUDA backend) is only optimal at 5 search threads, where your RX 5700 XT is optimal at 8 search threads. That's a 60% improvement. Huge! Were any other packages needed to build it, i.e. other than your custom AMDGPU SlackBuild? Thanks so much! This is all I needed to see to sway me over to AMD for my next investment in a GPU.
@wirelessmc:
Just added my amdgpu-opencl-20.10 package to a stock Slackware64-current (12-Sept-2020) multilib (due to my Steam/WINE gaming addiction). Stock X.org and Mesa. There are a few libs (WxPython, wxGTK3), codecs and apps from SBo, but nothing that KataGo needs AFAIK from the build log (I didn't look too closely, though). As soon as I find the RX 590 and the PCI-e riser adapter for the Ryzen 7 3800X miniITX system, I'll compile there to check, and maybe compare to the GTX 590 (fastest Nvidia I still own). That box is pure Slackware64-current.
By the way, I forgot that while the benchmark was running, I had a qemu VM running Slackware64-14.2 to compile QT5.
Well, many, many more kudos and appreciation out to you kingbeowulf!! This is some outstanding work you have done getting the AMDGPU code working on Slackware. The Slackware community is a better place because of your contributions! It will be interesting to see how the GTX 590 fares in this benchmark.
dchmelik, if you scroll up a bit to here: https://www.linuxquestions.org/quest...ml#post6165291
you'll see I had some issues with 20.30 not working, so I reverted to 20.10. Now, a whole heap of changes to current, but I haven't had a chance to revisit. Soon. As in, "real soon now" ;-)
Using amdgpu-pro-20.40-1147286-ubuntu-20.04.tar.xz, I was able to get an updated OpenCL SlackBuild working on Slackware64-current (15.0, Sun Nov 15 00:02:28 UTC 2020). Since my static web site is in read-only mode while we convert to a CMS, I'll post patches below to update amdgpu-opencl-20.10 to amdgpu-opencl-20.40. Note that the included clinfo no longer works; I think it has to do with Slackware using a different ICD loader library. Also, please remember, the whole point of this was to see if I could add OpenCL to Slackware's open-source amdgpu without replacing any libraries or packages. (That was the one thing I never liked about using Nvidia.) BOINC ran a few GPU OpenCL tasks overnight on the RX 5700 XT with this new package. I have not tested the RX 590 yet.
For a functional clinfo, use https://github.com/Oblomov/clinfo I'll probably create a buildscript for it if there isn't one already.
Code:
ls -l /usr/lib64/libOpenCL*
lrwxrwxrwx 1 root root 18 Oct 16 07:35 /usr/lib64/libOpenCL.so -> libOpenCL.so.1.0.0
lrwxrwxrwx 1 root root 18 Oct 16 07:35 /usr/lib64/libOpenCL.so.1 -> libOpenCL.so.1.0.0
-rwxr-xr-x 1 root root 47424 Oct 8 11:01 /usr/lib64/libOpenCL.so.1.0.0
while for Khronos we have:
Code:
ls -l libOpenCL*
lrwxrwxrwx 1 beowulf users 14 Nov 16 19:58 libOpenCL.so -> libOpenCL.so.1
lrwxrwxrwx 1 beowulf users 16 Nov 16 19:58 libOpenCL.so.1 -> libOpenCL.so.1.2
-rwxr-xr-x 1 beowulf users 49672 Nov 16 19:58 libOpenCL.so.1.2
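Whichever libOpenCL is installed, both loaders discover vendor drivers the same way: each driver registers a one-line *.icd text file naming its library, normally under /etc/OpenCL/vendors/. A minimal illustration in a scratch directory (the filenames below are made up; check /etc/OpenCL/vendors/ on a real install):

```shell
#!/bin/sh
# Sketch of ICD loader driver discovery: the loader scans *.icd files,
# each of which names one vendor OpenCL library to dlopen().
# Done in a scratch dir so it is safe to run anywhere.
vendors=$(mktemp -d)

# An amdgpu-pro OpenCL package would drop a file like this
# (illustrative name, not necessarily what 20.40 actually ships):
echo "libamdocl-orca64.so" > "$vendors/amdocl-orca64.icd"

echo "Registered ICDs:"
for f in "$vendors"/*.icd; do
    printf '%s -> %s\n' "$(basename "$f")" "$(cat "$f")"
done

rm -rf "$vendors"
```

If clinfo (or KataGo) sees no platforms after installing the package, an empty or missing vendors directory is the first thing to check.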
Also, on Slackware the header is installed as /usr/include/ocl_icd.h, while some OpenCL builds look for /usr/include/CL/ocl_icd.h; I am not sure whether that expectation comes from Khronos or from deb-ish distros.
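One possible workaround for builds that insist on the CL/ path is a compatibility symlink. Untested on a live system; shown here against a scratch prefix so it is safe to run, but the same two commands against /usr (as root) should apply it for real:

```shell
#!/bin/sh
# Hypothetical workaround: make /usr/include/CL/ocl_icd.h resolve to
# Slackware's /usr/include/ocl_icd.h via a relative symlink.
# $prefix stands in for /usr so this sketch touches nothing real.
prefix=$(mktemp -d)

mkdir -p "$prefix/include/CL"
touch "$prefix/include/ocl_icd.h"            # stand-in for Slackware's header
ln -s ../ocl_icd.h "$prefix/include/CL/ocl_icd.h"

ls -l "$prefix/include/CL/ocl_icd.h"         # shows the link and its target

rm -rf "$prefix"
```

A relative link (../ocl_icd.h rather than an absolute path) keeps working if the tree is ever relocated or inspected from a chroot.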
Patches for amdgpu-opencl.SlackBuild to convert 20.10 -> 20.40 amdgpu-opencl-script.diff: