
Gerard Lally 02-15-2016 02:06 PM

Slackware-specific guide to KVM-Qemu VGA passthrough
 
I had a stroke of luck in January and as a result I have finally been able to build a powerful new desktop.


I bought two fairly good video cards, and would like to assign one of them exclusively to a Windows guest under KVM-Qemu.

Thought I'd try here first for Slackware-specific instructions, before heading over to the virtualization forum. There are Debian and Arch guides floating around on the internet, but, as usual, there's always something in these to trip you up. So slacker suggestions, instructions, tips and whatnot welcome!

red_fire 02-17-2016 11:35 PM

Hi gezley,

I'm very interested in this as well!

With regards to the 2 video cards, are they identical or is one just for gaming and the other one just for the display/multimedia?

Cesare 02-18-2016 06:16 AM

Just some general hints:

1) Your mainboard's BIOS needs to support this, otherwise PCI-passthrough doesn't work.

2) After booting you have to unbind the video card from its host driver and attach it to the pci-stub driver instead. I did this with a simple script:

Code:

#!/bin/bash

echo "loading kernel module"
modprobe pci-stub
sleep 1

if lspci | grep -q '01:00\.0.*VGA.*AMD'; then
  echo "unbinding AMD VGA device"
  echo "1002 683f" > /sys/bus/pci/drivers/pci-stub/new_id
  echo "0000:01:00.0" > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
  echo "0000:01:00.0" > /sys/bus/pci/drivers/pci-stub/bind
fi

if lspci | grep -q '01:00\.1.*Audio.*AMD'; then
  echo "unbinding AMD Audio device"
  echo "1002 aab0" > /sys/bus/pci/drivers/pci-stub/new_id
  echo "0000:01:00.1" > /sys/bus/pci/devices/0000:01:00.1/driver/unbind
  echo "0000:01:00.1" > /sys/bus/pci/drivers/pci-stub/bind
fi

echo "status:"
dmesg | tail
echo

Obviously this only works in a very specific setup, but it should help to explain the concept.

3) Start qemu as root and pass it the PCI and USB IDs you want your virtual machine to handle.
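For illustration only, a minimal sketch of such an invocation, assuming the card at 01:00.0 has already been detached from its host driver, and a raw disk image (all names here are placeholders):

Code:

qemu-system-x86_64 -enable-kvm -m 4096 \
  -device vfio-pci,host=01:00.0 \
  -drive file=windows.img,format=raw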

It took me a while and many reboots to figure out the correct commands. After that everything ran fine as long as I didn't touch anything; restarting the software within the VM, for example, would result in a complete host lock-up. Some filesystems don't like this, so better have a backup ready. The situation might be improved with more modern hardware and newer QEMUs.

lopid 03-28-2016 07:51 AM

I have just been through this process. This is what you have to do on Slackware.

Be sure that your motherboard and the graphics card that you want to pass through both support UEFI, and that your CPU supports IOMMU. Most modern ones do. The disk image of your guest OS must also support UEFI. This rules out anything before, and some versions of, Windows 7. Windows 8+ should be OK.

Be sure that VT-d (for Intel CPUs) or AMD-Vi (for AMD CPUs) is enabled in BIOS. Also make sure that the graphics card that you want to pass through is not set as the primary one.

You will need to recompile the kernel, because the default "huge" one, at least in current, doesn't include CONFIG_VFIO_PCI_VGA, so enable it. You can do the few steps below before rebooting into the new kernel, although it's good to reboot first to make debugging easier in case you fucked something up. Remember to keep the old kernel!
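A minimal sketch of one way to do that, assuming the kernel source is unpacked in /usr/src/linux and you start from the running kernel's config (paths, names, and the exact dependency set are assumptions):

Code:

cd /usr/src/linux
zcat /proc/config.gz > .config            # start from the running kernel's config
./scripts/config --enable VFIO_PCI_VGA    # may also need VFIO, VFIO_PCI, VFIO_IOMMU_TYPE1
make oldconfig
make -j4 bzImage modules && make modules_install
cp arch/x86/boot/bzImage /boot/vmlinuz-custom   # and keep the old kernel in lilo.conf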

In lilo.conf, for an Intel CPU, add "intel_iommu=on" to the kernel parameters (that's the "append" line). I'm not sure what it should be for AMD CPUs. It might be "amd_iommu=on", but the kernel documentation isn't clear on how to enable it for AMD. You might also add "iommu=pt" (see here for why you might not want to).
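For example, merged into whatever append line you already have, followed by a run of lilo to make it take effect:

Code:

append=" intel_iommu=on iommu=pt"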

The idea is that you want the vfio-pci driver to take control of any PCI device that you want to pass through. Some guides will mention pci-stub. That's older tech, and you don't need it for Slackware. Sure, you can use it, but it adds an extra, unnecessary step. Or at least it did for me. You can see which driver is assigned to a device with "lspci -nnk". Look for "Kernel driver in use". The next steps are necessary to assign vfio-pci to the device(s).
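Sample "lspci -nnk" output for a card that has been claimed correctly (the card and IDs are just an example):

Code:

01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
        Kernel driver in use: vfio-pci
        Kernel modules: nouveau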

If you intend to pass through an Nvidia card, blacklist the nouveau driver in /etc/modprobe.d/nouveau.conf or some such. You don't want it claiming the card before you want to use it. Of course, also blacklist the official Nvidia binary driver, if you have it installed.

With an Nvidia card, you might notice that lspci shows it has an audio device as well as a VGA controller. I wanted this audio device to pass through too (in fact, I might have read that it makes the whole exercise easier, if not possible at all), but I noticed later that the snd_hda_intel driver was always claiming it instead of vfio-pci. To prevent that, I blacklisted snd_hda_intel. However, that meant that even my Intel motherboard's audio didn't work, so I simply loaded the driver at a stage after vfio-pci is loaded - in /etc/rc.d/rc.local ("modprobe snd_hda_intel").
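Putting that together, the blacklist file and the rc.local line might look like this (the filename is only a suggestion):

Code:

# /etc/modprobe.d/nouveau.conf
blacklist nouveau
blacklist snd_hda_intel

Code:

# appended to /etc/rc.d/rc.local: bring host audio back after vfio-pci has claimed the card
modprobe snd_hda_intel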

You'll need to tell the system to load the vfio-pci driver and assign it to the PCI devices (the Nvidia VGA device and, for me, its audio counterpart). To do that, you'll need the vendor and device IDs of the PCI devices. You can either use the IDs of the devices themselves or, as I did, use a string which accounts for all such devices. You can see the IDs with the lspci command above. They look like "10de:13c2". Create an entry in /etc/modprobe.d/vfio.conf, separating each ID with a comma, thusly:

Quote:

options vfio-pci ids=10de:13c2,10de:0fbb
Of course, those are the IDs of my own devices. Yours may differ. Alternatively, use the catch-all string, which matches any AMD or Nvidia VGA or audio function:

Quote:

options vfio-pci ids=1002:ffffffff:ffffffff:ffffffff:00030000:ffff00ff,1002:ffffffff:ffffffff:ffffffff:00040300:ffffffff,10de:ffffffff:ffffffff:ffffffff:00030000:ffff00ff,10de:ffffffff:ffffffff:ffffffff:00040300:ffffffff
You can reboot now. Check that the vfio-pci driver has been assigned to the devices. Check also that the new kernel is using the correct setting ("zgrep CONFIG_VFIO_PCI_VGA /proc/config.gz"), and that the IOMMU kernel parameters worked ("find /sys/kernel/iommu_groups/ -type l" - if you see lots of entries, it worked; if you see none, it didn't).

You'll need to install QEMU and its dependencies. I used the SBo 14.1 repository for this. Here are the contents of a queue file that you can load into sbopkg to make life easier:

Quote:

usbredir
vte3
vala
spice-protocol
pyparsing
celt051
spice
orc
gstreamer1
gst1-plugins-base
spice-gtk
gtk-vnc
ipaddr-py
tunctl
gnome-python2-gconf
yajl
urlgrabber
libvirt
libvirt-python
libvirt-glib
libosinfo
virt-manager
qemu
Maybe some of them can be found in a repository known to slackpkg...
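Assuming you save the list above as a queue file in sbopkg's queue directory, something like this should build the lot in order (the queue name is a placeholder):

Code:

cp kvm.sqf /var/lib/sbopkg/queues/
sbopkg -i kvm.sqf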

I had to change libvirt and virt-manager (which I didn't end up using, but it still might be necessary) to the latest versions. Here are the diffs for libvirt:

Quote:

$ diff /var/lib/sbopkg/SBo/14.1/libraries/libvirt/libvirt.info*
2c2
< VERSION="1.2.21"
---
> VERSION="1.3.2"
4,5c4,5
< DOWNLOAD="ftp://libvirt.org/libvirt/libvirt-1.2.21.tar.gz"
< MD5SUM="76ab39194302b9067332e1f619c8bad9"
---
> DOWNLOAD="ftp://libvirt.org/libvirt/libvirt-1.3.2.tar.gz"
> MD5SUM="poop"

$ diff /var/lib/sbopkg/SBo/14.1/libraries/libvirt/libvirt.SlackBuild*
8c8
< VERSION=${VERSION:-1.2.21}
---
> VERSION=${VERSION:-1.3.2}
The diffs for virt-manager:

Quote:

$ diff /var/lib/sbopkg/SBo/14.1/system/virt-manager/virt-manager.info*
2c2
< VERSION="1.2.1"
---
> VERSION="1.3.2"
4,5c4,5
< DOWNLOAD="http://virt-manager.org/download/sources/virt-manager/virt-manager-1.2.1.tar.gz"
< MD5SUM="c8045da517e7c9d8696e22970291c55e"
---
> DOWNLOAD="http://virt-manager.org/download/sources/virt-manager/virt-manager-1.3.2.tar.gz"
> MD5SUM="poop"
7c7
< MD5SUM_x86_64=""
---
> MD5SUM_x86_64="poop"

$ diff /var/lib/sbopkg/SBo/14.1/system/virt-manager/virt-manager.SlackBuild*
10c10
< VERSION=${VERSION:-1.2.1}
---
> VERSION=${VERSION:-1.3.2}
You'll need a copy of the latest OVMF UEFI BIOS. Get edk2.git-ovmf-x64 and extract OVMF-pure-efi.fd somewhere. I followed the steps here, but I spent too long trying to figure out how to get "UEFI" in the BIOS setting. I guess the RPM spec file would have set it up all nicely on an RPM based platform, but all I managed was seeing the full path to the OVMF file in virt-manager, which didn't work. I did that by changing the nvram variable in /etc/libvirt/qemu.conf. Instead, I gave up on virt-manager and used qemu-system-x86_64 directly, as below.
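For reference, the nvram variable in /etc/libvirt/qemu.conf takes CODE:VARS pairs, so what virt-manager would have needed is probably something along these lines (paths are assumptions):

Code:

nvram = [
  "/usr/share/OVMF/OVMF-pure-efi.fd:/usr/share/OVMF/OVMF_VARS-pure-efi.fd"
]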

I found I had to change stdio_handler to "file" in qemu.conf, and to uncomment cgroup_device_acl and add /dev/vfio/1:

Code:

cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio", "/dev/vfio/1"
]

Restart libvirt:
Quote:

/etc/rc.d/rc.libvirt restart
When you install a guest, it'll most likely need an external driver in order to recognise the VirtIO drive that you'll use. I found that the stable virtio-win ISO worked in Microsoft's Windows 7, and the latest virtio-win ISO worked in Microsoft's Windows 10.

Create this file as kvm-install.sh:

Code:

#!/bin/sh

INSTALLFILE=win10-uefi-x64_system.qcow2
FILESIZE=50G

INSTALLCD=/home/lopid/Win10.iso
# if you use a hardware CD-ROM drive, check for the device. In most cases it's /dev/sr0
#INSTALLCD=/dev/sr0

DRIVERCD=/home/lopid/virtio-win-0.1.113.iso

# PCI addresses of the passthrough devices
DEVICE1="01:00.0"
DEVICE2="01:00.1"

# create the installation image if it does not exist
if [ ! -e $INSTALLFILE ]; then
    qemu-img create -f qcow2 $INSTALLFILE $FILESIZE
fi

#QEMU_PA_SAMPLES=4096 QEMU_AUDIO_DRV=pa \
qemu-system-x86_64 \
-bios /usr/share/OVMF/OVMF-pure-efi.fd \
-cpu host,kvm=off \
-device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
-device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=3 \
-device usb-kbd \
-device usb-tablet \
-device vfio-pci,host=$DEVICE1,addr=0x8.0x0,multifunction=on \
-device vfio-pci,host=$DEVICE2,addr=0x8.0x1 \
-device virtio-blk-pci,scsi=off,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 \
-device virtio-net-pci,netdev=user.0,mac=52:54:00:a0:66:43 \
-drive file=$DRIVERCD,if=none,id=drive-ide0-1-0,readonly=on,format=raw \
-drive file=$INSTALLCD,if=none,id=drive-ide0-0-0,readonly=on,format=raw \
-drive file=$INSTALLFILE,if=none,id=drive-virtio-disk0,format=qcow2,cache=unsafe \
-enable-kvm \
-m 4096 \
-machine pc-i440fx-2.1,accel=kvm \
-netdev user,id=user.0 \
-rtc base=localtime,driftfix=slew \
-smp 1,sockets=1,cores=4,threads=4 \
-soundhw hda \
-usb \
-vga qxl

INSTALLFILE is the name of the guest image that will be created. Substitute DEVICE1 and DEVICE2 with your own device addresses ("lspci"). Notice that I commented out the QEMU_* audio driver line. The QEMU that I used didn't support PulseAudio, but I still heard sound from line out, if somewhat crackly. Edit the -smp argument according to your system. Notice also that -bios points to the .fd file that was downloaded earlier. Just go through the file and check it looks good for you.

Run kvm-install.sh as root, and you should see a QEMU guest window appear and Windows start to install. When it asks you to set up the disk, click "Load Driver" and point it to the drive where virtio-win is. You want to select the viostor driver. After Windows has installed, shut it down, don't reboot.

Create another file, kvm-start.sh:

Code:

#!/bin/sh

INSTALLFILE=win10-uefi-x64_system.qcow2
IMAGEFILE=win10-uefi-x64_system-01.qcow2

# PCI addresses of the passthrough devices
DEVICE1="01:00.0"
DEVICE2="01:00.1"

# create an image file backed by the installation image if it does not exist
if [ ! -e $IMAGEFILE ]; then
    qemu-img create -f qcow2 -o backing_file=$INSTALLFILE,backing_fmt=qcow2 $IMAGEFILE
fi

#QEMU_PA_SAMPLES=6144 QEMU_AUDIO_DRV=pa \
qemu-system-x86_64 \
-bios /usr/share/OVMF/OVMF-pure-efi.fd \
-cpu host,kvm=off \
-device qxl \
-device usb-host,hostbus=1 \
-device vfio-pci,host=$DEVICE1,addr=0x8.0x0,multifunction=on,x-vga=on \
-device vfio-pci,host=$DEVICE2,addr=0x8.0x1 \
-device virtio-blk-pci,scsi=off,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-device virtio-net-pci,netdev=user.0,mac=52:54:00:a0:66:43 \
-drive file=$IMAGEFILE,if=none,id=drive-virtio-disk0,format=qcow2,cache=none \
-enable-kvm \
-m 8192 \
-machine pc-i440fx-2.1,accel=kvm,iommu=on \
-netdev user,id=user.0 \
-rtc base=localtime,driftfix=slew \
-smp 4,cores=4,threads=1,sockets=1 \
-soundhw hda \
-usb \
-vga none

I changed some arguments here because I was tweaking it for my system. YMMV. Again, check it over for yourself, before running it as root. This time, the guest window should display a message saying that the display has not yet been initialised. That's good. Check the output of the video card that you're passing through. It should show Windows!

I had a go at passing through a USB host, so my peripherals would work directly in the guest as well, but this was hit and miss. Sometimes Windows would show, for example, my mouse as a generic device, and sometimes it would show as the Logitech mouse that it is. Anyway, that should be enough to get you started with PCI VGA passthrough on Slackware. I have to give credit to other guys who already did most of the actual work, I just put things together for Slackware. Their links below.

As far as performance goes, with the Unigine Valley benchmark I saw 61.7 average FPS at the basic settings on an Nvidia GTX 970 in a Windows 10 guest, whereas in Windows 7 running natively I had 89.1 average FPS. Bear in mind that I didn't do much tweaking, either in QEMU or in the guest. I note also that I couldn't get the guest to show all four of my CPU cores; they would only ever show as one.
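A hedged guess about the single visible core: in "-smp 1,sockets=1,cores=4,threads=4" the leading number is the vCPU count, so the guest is only given one vCPU regardless of the topology that follows it. Something like this should expose all four cores:

Code:

-smp 4,sockets=1,cores=4,threads=1 \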

archfan 03-29-2016 01:31 PM

Don't use libvirt. It's bloated FOSS crap that serves no purpose other than to waste your precious disk space. The syntax is plain awful imho.


As previously mentioned you need to recompile your kernel with:
- CONFIG_VFIO_PCI_VGA=y

I also suggest
- CONFIG_JUMP_LABEL=y (optimizes likely / unlikely branches, maybe gives a small perf. boost)
- CONFIG_HUGETLBFS=y
- CONFIG_HUGETLB_PAGE=y
- CONFIG_KSM=y
- CONFIG_MCORE2=y (depends on your arch, this is for Intel C2D and higher)
- CONFIG_HZ_1000=y
- CONFIG_HZ=1000
- CONFIG_PREEMPT_VOLUNTARY=y
- CONFIG_CC_STACKPROTECTOR_STRONG=y (this should be a default setting.)
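A quick sanity check that the rebuilt, running kernel actually carries these options:

Code:

zgrep -E 'CONFIG_(VFIO_PCI_VGA|HUGETLBFS|HUGETLB_PAGE|KSM)=' /proc/config.gz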

Here are my scripts:

/etc/rc.d/rc.vfio

Quote:

#!/bin/sh
# /etc/rc.d/rc.vfio
#
# TRAVEL:
# Something that makes you feel like you're getting somewhere.
#

set -e

source /etc/vfio_devices.conf

vfio_bind() {
    for dev in "$@"; do
        vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
        device=$(cat /sys/bus/pci/devices/$dev/device)
        # release the device from whatever driver currently owns it
        if [ -e /sys/bus/pci/devices/$dev/driver ]; then
            echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
        fi
        # let vfio-pci claim this vendor/device pair
        echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
    done
}

start_vfio() {
    if [ -n "$DEVICES" ]; then
        echo -n "binding" $DEVICES "to VFIO "
        vfio_bind $DEVICES
        echo " All done."
    else
        echo "WARNING: You have no devices specified in /etc/vfio_devices.conf"
    fi
}

stop_vfio() {
    echo "Unloading kvm modules. You can now use Virtualbox."
    modprobe -r kvm-intel kvm
}

reload_vfio() {
    echo "Reloading kvm modules."
    modprobe kvm-intel kvm
}

restart_vfio() {
    echo "Punishing your deed in 10 seconds."
    sleep 10
    shutdown -r now
}

case "$1" in
'start')
    start_vfio
    ;;
'stop')
    stop_vfio
    ;;
'reload')
    reload_vfio
    ;;
'restart')
    restart_vfio
    ;;
*)
    echo "usage $0 start|stop|reload|restart"
esac
/etc/vfio_devices.conf
Quote:

"DEVICES="0000:01:00.0 0000:01:00.1 0000:03:00.0"
-> lspci -nk gives you a nice tree with the device addresses and IDs.

/usr/local/sbin/winnet
Quote:

#!/bin/sh
# create a bridge, move eth0 into it, and add a tap device for the VM
/usr/sbin/brctl addbr br0
ip addr flush dev eth0
/usr/sbin/brctl addif br0 eth0
/usr/sbin/tunctl -u {USERNAME}
/usr/sbin/brctl addif br0 tap0
ip link set dev br0 up
ip link set dev tap0 up
# get an address via DHCP
dhcpcd
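Note that the -net bridge option used in winvm below goes through qemu-bridge-helper, which only allows bridges whitelisted in its ACL file (the path assumes qemu's default sysconfdir):

Code:

echo "allow br0" >> /etc/qemu/bridge.conf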
/usr/local/sbin/winvm
Quote:

#!/bin/sh

for i in {0..7}; do
    echo performance > /sys/devices/system/cpu/cpu${i}/cpufreq/scaling_governor
done

taskset -ac 2-7 qemu-system-x86_64 \
-qmp unix:/run/qmp-sock,server,nowait \
-serial none \
-parallel none \
-nodefaults \
-nodefconfig \
-enable-kvm \
-name Windows10 \
-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,check \
-smp sockets=1,cores=3,threads=2 \
-m 8000 -mem-path /dev/hug \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/home/{USERNAME}/VM/win/OVMF_VARS-pure-efi.fd \
-rtc base=utc \
-boot order=c \
-device virtio-scsi-pci,id=scsi \
-drive if=virtio,id=drive0,file=/dev/sdb,cache=none,aio=native,format=raw \
-net nic,model=virtio \
-net nic,vlan=0,macaddr=52:54:00:00:00:01,model=virtio,name=net0 \
-net bridge,vlan=0,name=bridge0,br=br0 \
-nographic \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-device vfio-pci,host=03:00.0 &


sleep 5

cpuid=2
for threadpid in $(echo 'query-cpus' | qmp-shell /run/qmp-sock | grep '^(QEMU) {"return":' | sed -e 's/^(QEMU) //' | jq -r '.return[].thread_id'); do
    taskset -p -c ${cpuid} ${threadpid}
    ((cpuid+=1))
done

wait

for i in {0..7}; do
    echo powersave > /sys/devices/system/cpu/cpu${i}/cpufreq/scaling_governor
done
If you intend to use QMP you need to get these files (qmp-shell and its helpers) from the QEMU source tarball. They're not included by default for some odd reason.
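A sketch of pulling them out of a source tree (the version is a placeholder; qmp-shell imports the qmp.py module sitting next to it, so copy the directory's contents together):

Code:

tar xf qemu-2.5.0.tar.bz2
cp qemu-2.5.0/scripts/qmp/* /usr/local/bin/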

Note: I have installed Windows directly on my secondary SSD. This gives me the option to run Windows natively, just in case something doesn't work as expected with QEMU.

Additionally you need to get edk2 from here: kraxel.org/repos/jenkins/edk2/

Simply download edk2.git-ovmf-x64-(...).rpm from there and run rpm2tgz to create a Slackware package.
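For example (filenames abbreviated; pick the current build from that page):

Code:

rpm2tgz edk2.git-ovmf-x64-*.noarch.rpm
installpkg edk2.git-ovmf-x64-*.tgz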

/etc/modprobe.d/vfio.conf
Quote:

options vfio-pci ids=1002:67b0,1002:aac8,1912:0014
After compiling make sure to get your initramfs right.
/etc/mkinitrd.conf
Quote:

# mkinitrd.conf.sample
# See "man mkinitrd.conf" for details on the syntax of this file
#
SOURCE_TREE="/boot/linux-tree"
CLEAR_TREE="1"
OUTPUT_IMAGE="/boot/initrd-generic.gz"
#KERNEL_VERSION="$(uname -r)"
KERNEL_VERSION="4.5.0"
KEYMAP="us"
MODULE_LIST="intel_agp:i915:ext4:vfio:vfio_iommu_type1:vfio_pci:vfio_virqfd"
ROOTDEV="/dev/sda2"
ROOTFS="ext4"
RAID="0"
LVM="0"
UDEV="1"
MODCONF="0"
WAIT="1"
Then use mkinitrd -F to create a new initramfs.
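Don't forget to point the boot loader at the result; with lilo that's a stanza like this (paths and label are assumptions), followed by a run of lilo:

Code:

image = /boot/vmlinuz-custom
  initrd = /boot/initrd-generic.gz
  append = "hugepagesz=1GB default_hugepagesz=1GB hugepages=8 intel_iommu=on iommu=pt"
  root = /dev/sda2
  label = kvm-pt
  read-only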

We also need to change the /etc/fstab file. Add this:
Quote:

hugetlbfs /dev/hug hugetlbfs mode=1770 0 0
Last but not least, I have changed the /etc/rc.d/rc.udev script in order to automatically mount hugetlbfs on boot and start the rc.vfio script.

Quote:

#!/bin/sh
# This is a script to initialize udev, which populates the /dev
# directory with device nodes, scans for devices, loads the
# appropriate kernel modules, and configures the devices.

PATH="/sbin:/bin"

check_mounted() {
    grep -E -q "^[^[:space:]]+ $1 $2" /proc/mounts
    return $?
}

mount_devpts() {
    if ! check_mounted /dev/pts devpts ; then
        mkdir /dev/pts 2> /dev/null
        mount -n -o mode=0620,gid=5 -t devpts devpts /dev/pts
    fi
}

mount_devshm() {
    if ! check_mounted /dev/shm tmpfs ; then
        mkdir /dev/shm 2> /dev/null
        mount /dev/shm
    fi
}

mount_devhug() {
    if ! check_mounted /dev/hug hugetlbfs ; then
        mkdir /dev/hug 2> /dev/null
        mount /dev/hug
    fi
}

mount_vfio() {
    if [ -x /etc/rc.d/rc.vfio ]; then
        /etc/rc.d/rc.vfio start
    fi
}

case "$1" in
start)
    # Sanity check #1, udev requires that the kernel support tmpfs:
    if ! grep -wq tmpfs /proc/filesystems ; then
        echo "Sorry, but you need tmpfs support in the kernel to use udev."
        echo
        echo "FATAL: Refusing to run /etc/rc.d/rc.udev."
        exit 1
    fi

    # Sanity check #2, make sure that a 2.6.x kernel is new enough:
    if [ "$(uname -r | cut -f 1,2 -d .)" = "2.6" ]; then
        if [ "$(uname -r | cut -f 3 -d . | sed 's/[^[:digit:]].*//')" -lt "32" ]; then
            echo "Sorry, but you need a 2.6.32+ kernel to use this udev."
            echo "Your kernel version is only $(uname -r)."
            echo
            echo "FATAL: Refusing to run /etc/rc.d/rc.udev."
            exit 1
        fi
    fi

    # Sanity check #3, make sure the udev package was not removed. If udevd
    # is not there, this will also shut off this script to prevent further
    # problems:
    if [ ! -x /sbin/udevd ]; then
        chmod 0644 /etc/rc.d/rc.udev
        echo "No udevd daemon found."
        echo "Turning off udev: chmod 644 /etc/rc.d/rc.udev"
        echo "FATAL: Refusing to run /etc/rc.d/rc.udev."
        exit 1
    fi

    # Disable hotplug helper since udevd listens to netlink:
    if [ -e /proc/sys/kernel/hotplug ]; then
        echo "" > /proc/sys/kernel/hotplug
    fi

    if grep -qw devtmpfs /proc/filesystems ; then
        if ! check_mounted /dev devtmpfs ; then
            # umount shm if needed
            check_mounted /dev/shm tmpfs && umount -l /dev/shm

            # Umount pts if needed, we will remount it later:
            check_mounted /dev/pts devpts && umount -l /dev/pts

            # umount hug if needed
            check_mounted /dev/hug hugetlbfs && umount -l /dev/hug

            # Mount tmpfs on /dev:
            mount -n -t devtmpfs devtmpfs /dev
        fi
    else
        # Mount tmpfs on /dev:
        if ! check_mounted /dev tmpfs ; then
            # umount shm if needed
            check_mounted /dev/shm tmpfs && umount -l /dev/shm

            # Umount pts if needed, we will remount it later:
            check_mounted /dev/pts devpts && umount -l /dev/pts

            # umount hug if needed
            check_mounted /dev/hug hugetlbfs && umount -l /dev/hug

            # Mount tmpfs on /dev:
            # the -n is because we don't want /dev umounted when
            # someone (rc.[06]) calls umount -a
            mount -n -o mode=0755 -t tmpfs tmpfs /dev
        fi
    fi

    # Mount devpts
    mount_devpts
    mount_devshm
    mount_devhug
    mount_vfio

(...)
And in case you're using grub, add this to /etc/default/grub:
Quote:

GRUB_CMDLINE_LINUX_DEFAULT="hugepagesz=1GB default_hugepagesz=1GB hugepages=8 intel_iommu=on iommu=pt"

Other useful kernel options are:
- pcie_acs_override=downstream (requires the acs kernel patch. Only add this if passthrough doesn't work.)
- hugepages=8 -> with default_hugepagesz=1GB this reserves eight 1 GB hugepages (8 GB total), so make sure you have enough RAM; you can verify the reservation as shown below.
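To check the hugepage reservation after boot:

Code:

grep Huge /proc/meminfo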

Richard Cranium 03-29-2016 01:38 PM

Opinions differ on libvirtd.

archfan 03-29-2016 01:45 PM

Indeed. :)

I did some benchmarks recently and here are some results. Might be of interest for some.

Benchmark on native Windows 10 x64: http://www.3dmark.com/fs/7942691
The same benchmark on QEMU: http://www.3dmark.com/fs/7931626

There wasn't really much difference in terms of performance between QEMU and the native run.

Richard Cranium 03-29-2016 02:17 PM

Quote:

Originally Posted by archfan (Post 5523120)
Indeed. :)

I did some benchmarks recently and here are some results. Might be of interest for some.

Benchmark on native Windows 10 x64: http://www.3dmark.com/fs/7942691
The same benchmark on QEMU: http://www.3dmark.com/fs/7931626

There wasn't really much difference in terms of performance between QEMU and the native run.

That's some useful information. Thanks for posting it.

archfan 03-30-2016 07:16 PM

Just one more quick tip. In case you're using an integrated Intel iGPU as the primary GPU and plan to use a secondary card for VT-d passthrough, you might encounter a strange error where grub is unable to boot the system - some error about "file '/grub2/locale/en.mo.gz' not found" or something along those lines.

Just uncomment "GRUB_TERMINAL=console" in /etc/default/grub and create a new config with grub-mkconfig.
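A sketch of those two steps (assuming grub's config lives in /boot/grub):

Code:

sed -i 's/^#GRUB_TERMINAL=console/GRUB_TERMINAL=console/' /etc/default/grub
grub-mkconfig -o /boot/grub/grub.cfg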

In extreme cases you might have to fix your ACPI tables and recompile the dsdt.hex into your kernel in order to be able to boot with the onboard GPU as primary device. If someone encounters this problem just PM me or ask here. I might know how to fix it but it requires further testing.

Cheers

Gerard Lally 04-11-2016 02:41 PM

Sorry for the late reply. One of the video cards I bought is a GeForce GTX 750 Ti; the other a Radeon R7 200.

The GeForce does not work properly under Linux. The fans power up like jet turbines every 20 seconds or so, making the computer unusable. I also had trouble installing Linux on the motherboard, an Asrock Extreme9 990FX. Admittedly this was probably down to my complete ignorance regarding UEFI.

At the end of the day, I have neither time nor inclination to fight these battles over and over again. In 2016 I expect not to have to fight the battles I was fighting with Linux in 2001. Gnome, Red Hat, the systemd cabal, KDE, Debian, Google, Ubuntu - all of them creating a never-ending stream of new bugs for future generations to tackle, but none of them remotely interested in solving today's bugs.

But that's what you get when Wall Street venture capital dictates the terms on which Linux should proceed. It's sad: what was an international project that had so much promise 15 years ago has now been hijacked and monopolised by the big bullies on the block for their own ends. Truth to tell, I am increasingly sick of Linux, sick of the immaturity that drives so-called progress in Linux, sick of the constant breakage in Linux and last but not least sick and tired of the fanboys making false claims about Linux.

At the moment I am back with Windows 8.1, though we all know where Microsoft are going, and undoubtedly they've had a hand in nudging Linux down the cul-de-sac it's in anyway, so Microsoft is not a long-term option either. NetBSD is a beautifully engineered project - sane, conservative, and predictable. The trouble is, you still need to slap a desktop on it for day-to-day use, so what do you go with: Xfce, which is struggling with Gnome's constant breakage and its bully-boy indifference to other projects? Gnome, which has been an insulting Fisher-Price PoS since version 3 was imposed by the powers-that-be in corporate America? Or KDE, which will eventually culminate in a stable version of 5, only to decide they want to abandon it and devote themselves exclusively to 6 instead? Great choice there. Of course we all know they're pushing us to use the oh-so-great cloud anyway, and the desktop is oh-so-nineties (when we were still in nappies) - why would you even care!

Well anyway, they're my thoughts. Slackware and Crux are great. Hard to see them holding out against the tide for the next decade though, and who can blame them if they eventually do succumb?

Sorry. Probably not the place to take out my anger on Linux, but I see the options narrowing, and that is not supposed to be what Linux was about. I am so, so angry with those responsible, and their stupid, brain-dead, immature pet projects designed to keep breakage to the fore in Linux.

Gerard Lally 04-16-2016 06:49 AM

Quote:

Originally Posted by lopid (Post 5522419)
I have just been through this process. This is what you have to do on Slackware.
...

Thank you for such a detailed write-up! I've had problems with the Nvidia card I bought so I've postponed this for the time being. When I get a replacement video card I will use these instructions.

Gerard Lally 04-16-2016 06:58 AM

Quote:

Originally Posted by archfan (Post 5523113)
Don't use libvirt. It's bloated FOSS crap that serves no purpose other than to waste your precious disk space. The syntax is plain awful imho.
...

Cripes, it's a lot more complex than I was expecting! Video card passthrough with NetBSD Xen is far less complex, but unfortunately the NetBSD team haven't yet updated Xen 4 to support it, and Xen 3, which does support it, is quite old now.

I will probably go picking a few brains here when I get the second video card replaced.

Thank you for your detailed reply.

archfan 04-16-2016 06:08 PM

I'm happy to help. Don't worry, it really looks harder than it is.

Though I don't necessarily disagree with your statement. It's a complex system, but once it works, it works. I haven't encountered any critical bugs or crashes during the two years I have used KVM-pt on my system. The developers have really put a lot of effort into this technology, and it shows. I'm still impressed by how smoothly it performs on my system.

When it comes to passthrough Radeon cards are probably the better choice. Nvidia has some nasty anti-features to prevent hyper-v enlightenments in their drivers whereas AMD has no such mechanisms. These extensions are IMHO necessary for flawless performance. Without them my system felt too unresponsive and sluggish under load.

mtslzr 04-23-2016 10:49 PM

While Radeon is preferred, is there anything major that prevents Nvidia cards from working?

Been on the fence about giving this a try for quite some time, and this thread (with Slack-specific instructions) may be what pushes me over the edge.

archfan 05-09-2016 09:50 AM

Well apparently the problem with Nvidia GPUs has been solved in Qemu.

Just quoting from the Arch wiki here:
Quote:

"Error 43 : Driver failed to load" on Nvidia GPUs passed to Windows VMs

Since version 337.88, Nvidia drivers on Windows check whether a hypervisor is running and fail if they detect one, which results in an Error 43 in the Windows device manager. Starting with QEMU 2.5.0, the vendor_id for the hypervisor can be spoofed, which is enough to fool the Nvidia drivers into loading anyway. All one must do is add hv_vendor_id=whatever to the cpu parameters in their QEMU command line.
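In terms of the qemu scripts earlier in this thread, that means extending the -cpu line; the vendor ID below is an arbitrary example (any string of up to twelve characters should do):

Code:

-cpu host,kvm=off,hv_vendor_id=123456789ab \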

Alien Bob 05-10-2016 04:48 AM

Quote:

Thought I'd try here first for Slackware-specific instructions, before heading over to the virtualization forum. There are Debian and Arch guides floating around on the internet, but, as usual, there's always something in these to trip you up. So slacker suggestions, instructions, tips and whatnot welcome!
So who of you guys is going to add this crucial information as a new article on the Slackware Documentation Project Wiki?

mtslzr 05-19-2016 01:22 PM

Do I need to do anything fancy re: mouse and keyboard? FWIW, I have two monitors, and my plan was to either split them, or maybe plug one into both cards and just switch its input when gaming under Windows. Can I move the mouse between the two, or do I need to set something up for that?

archfan 05-19-2016 01:36 PM

Not necessarily. A USB switch for keyboard and mouse and an additional USB card for passthrough are highly recommended, though. I also use an external USB sound card for my VM, as sound emulation is sort of terrible.

If you don't want to spend money you can always use Synergy or QEMU USB passthrough for your USB peripheral devices.
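For the QEMU USB passthrough route, a single device can be handed to the guest by vendor:product ID (the IDs below are placeholders; take yours from lsusb):

Code:

-usb -device usb-host,vendorid=0x046d,productid=0xc52b \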

mostlyharmless 05-19-2016 03:38 PM

Synergy works well. Although I am currently using Arch, I had VGA passthrough working on Slackware 14.1. I agree with the notes previously given. A QEMU script and libvirtd each have advantages beyond the GUI, but to each his/her own. My notes on the setup are on my LQ blog pages, though they're a bit dated now. I'd recommend the vfio-kvm mailing list for help, and make sure you check out the guide on vfio.blogspot.com.

kthxbye 06-17-2016 05:18 PM

Greetings!
I've recently dusted off my old GTX550Ti in order to try to prepare a virtual machine (x64 Windows 10) with GPU passthrough, but I've hit a few issues on the way.
I've followed the guides in this topic (as well as the linked ones) and I managed to get as far as binding the GTX550Ti to vfio-pci, but I cannot manage to get a working VM with qemu.

I'm using an up-to-date Slackware64-current installation. I had to recompile the kernel (I used the default kernel config and enabled vfio), blacklisted both snd_hda_intel and the nvidia drivers (later to be loaded with modprobe from /etc/rc.d/rc.local), and bound the graphics card - both the GPU and the audio function - to vfio.
lspci -nnk outputs the following (relevant entries only):
Code:

06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF116 [GeForce GTX 550 Ti] [10de:1244] (rev a1)
        Subsystem: ASUSTeK Computer Inc. GF116 [GeForce GTX 550 Ti] [1043:83be]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
06:00.1 Audio device [0403]: NVIDIA Corporation GF116 High Definition Audio Controller [10de:0bee] (rev a1)
        Subsystem: ASUSTeK Computer Inc. GF116 High Definition Audio Controller [1043:83be]
        Kernel driver in use: vfio-pci
        Kernel modules: snd_hda_intel

IOMMU groups are OK, as seen here:
Code:

IOMMU group 0
        00:00.0 Host bridge [0600]: Intel Corporation 4th Gen Core Processor DRAM Controller [8086:0c00] (rev 06)
IOMMU group 1
        00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
        01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 950] [10de:1402] (rev a1)
        01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fba] (rev a1)
IOMMU group 2
        00:14.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI [8086:8c31] (rev 05)
IOMMU group 3
        00:16.0 Communication controller [0780]: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 [8086:8c3a] (rev 04)
IOMMU group 4
        00:1a.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 [8086:8c2d] (rev 05)
IOMMU group 5
        00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 05)
IOMMU group 6
        00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 [8086:8c10] (rev d5)
IOMMU group 7
        00:1c.2 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 [8086:8c14] (rev d5)
IOMMU group 8
        00:1c.3 PCI bridge [0604]: Intel Corporation 82801 PCI Bridge [8086:244e] (rev d5)
        04:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)
IOMMU group 9
        00:1c.4 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 [8086:8c18] (rev d5)
IOMMU group 10
        00:1d.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 [8086:8c26] (rev 05)
IOMMU group 11
        00:1f.0 ISA bridge [0601]: Intel Corporation H87 Express LPC Controller [8086:8c4a] (rev 05)
        00:1f.2 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 05)
        00:1f.3 SMBus [0c05]: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller [8086:8c22] (rev 05)
IOMMU group 12
        03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 0c)
IOMMU group 13
        06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF116 [GeForce GTX 550 Ti] [10de:1244] (rev a1)
        06:00.1 Audio device [0403]: NVIDIA Corporation GF116 High Definition Audio Controller [10de:0bee] (rev a1)

The other graphics card (a GTX 950) works as usual with the nVidia proprietary drivers.
I'm using alien's qemu and vde packages, if that's relevant. I also checked that the GTX550Ti works properly: both a Windows installation (dual boot) and my Slackware system (prior to binding the GTX550Ti to vfio) could use the card without issues and output to the monitor I connected.

Once I reached this point, I started to run into several issues.

- Problem #1: no output on the GTX550Ti
I cannot output anything on the GTX550Ti using qemu (the monitor I hooked up keeps giving the NO SIGNAL warning).
I'm using this script to launch qemu:
Code:

#!/bin/sh

INSTALLFILE=/home/kthxbye/vm/win10-uefi-x64.qcow2
FILESIZE=200G

INSTALLCD=/home/kthxbye/archive/windows/win10/Win10_1511_1_English_x64.iso

DRIVERCD=/home/kthxbye/archive/windows/virtio-win-0.1.118.iso

# PCI addresses of the passthrough devices
DEVICE1="06:00.0"
DEVICE2="06:00.1"

if [ ! -e $INSTALLFILE ]; then
    qemu-img create -f qcow2 -o preallocation=metadata,compat=1.1,lazy_refcounts=on $INSTALLFILE $FILESIZE
fi

cp /usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd /tmp/my_vars.fd

# the variable must be on the same command line (or exported) to reach qemu
QEMU_AUDIO_DRV=alsa \
qemu-system-x86_64 \
  -enable-kvm \
  -m 8192 \
  -smp cores=4,threads=2 \
  -cpu host,kvm=off \
  -vga none \
  -soundhw hda \
  -boot menu=on \
  -usb \
  -device vfio-pci,host=$DEVICE1,multifunction=on \
  -device vfio-pci,host=$DEVICE2 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
  -drive if=pflash,format=raw,file=/tmp/my_vars.fd \
  -device virtio-scsi-pci,id=scsi \
  -drive file=$INSTALLCD,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
  -drive file=$INSTALLFILE,id=disk,format=qcow2,if=none,cache=writeback -device scsi-hd,drive=disk \
  -drive file=$DRIVERCD,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd

- Problem #2: BSOD on the VM if using -vga std
If I start the VM with the -vga std option I can install Windows 10; doing so allowed me to verify that the VM does actually see the GTX550Ti with no issues. Windows proceeded to download the latest updates and the drivers for the nVidia card, but at the next reboot it started going into a BSOD loop, giving one of the two following errors:
Code:

system_service_exception
or
Code:

system_service_exception nvlddmkm.sys
This happens whether I let Windows download the updates and the drivers by itself or install the latest drivers myself.

- Problem #3: freezes and garbled audio.
Prior to installing the drivers, the VM works reasonably well, although it's a bit slow at times, and I had better success using
Code:

  -cdrom $INSTALLCD \
  -hda $INSTALLFILE

instead of
Code:

  -device virtio-scsi-pci,id=scsi \
  -drive file=$INSTALLCD,id=isocd,format=raw,if=none -device scsi-cd,drive=isocd \
  -drive file=$INSTALLFILE,id=disk,format=qcow2,if=none,cache=writeback -device scsi-hd,drive=disk \
  -drive file=$DRIVERCD,id=virtiocd,if=none,format=raw -device ide-cd,bus=ide.1,drive=virtiocd

in the qemu script I use.
It does, however, freeze every time it needs to output sound; this happens with both alsa and pulseaudio. The terminal shows this warning:
Code:

main-loop: WARNING: I/O thread spun for 1000 iterations
On top of that, the sound is also heavily garbled. The sound coming from the host is not affected.

Does anyone have any idea about how to solve any of those issues, or where to look for a possible solution?

If I manage to get a working VM with proper passthrough and virtualisation, I'll also write a step-by-step guide covering the solutions to the aforementioned issues.

Thanks in advance!
Cheers,
~kthxbye

kthxbye 06-18-2016 04:28 PM

A quick update: I managed to get output from the GTX550Ti by NOT using the OVMF BIOS, which solves issue #1.
Issue #2 has changed: I no longer have to use -vga std to work on the VM; however, after installing the nVidia drivers the GTX550Ti does not output anything (the monitor remains blank for a while before giving the no-signal warning).
Issue #3 is unchanged.
I have a new issue as well: if I reboot the guest OS and/or restart the VM, I get no output on the GTX550Ti and the following warning on the terminal:
Code:

qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:06:00.0
Device option ROM contents are probably invalid (check dmesg).
Skip option ROM probe with rombar=0, or load from file with romfile=

It only works the first time the VM is started after a reboot of the host OS. I tried to extract the ROM to use with the romfile= option, but I've been unsuccessful so far: even on a fresh reboot of the host OS it says the ROM is invalid when I try to cat it to a new file.
For reference, I've tried this method:
Code:

# cd /sys/bus/pci/devices/0000:06:00.0/
# echo 1 > rom
# cat rom > /tmp/image.rom
# echo 0 > rom
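For reference, if a clean ROM copy is ever obtained, it is wired in through the romfile property of the vfio-pci device (the path is an example):

Code:

-device vfio-pci,host=06:00.0,multifunction=on,romfile=/tmp/image.rom \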


Gerard Lally 06-18-2016 04:31 PM

Quote:

Originally Posted by kthxbye (Post 5563046)
A quick update: I managed to get output from the GTX550Ti by NOT using the OVMF BIOS, which solves issue #1.

Hi - I'm the OP, and I'm still interested in getting this done, but I still haven't replaced the nvidia card that was giving me trouble so I'm not much use to you I'm afraid.

camerabambai 11-22-2016 12:09 PM

Some questions about VGA passthrough. I have two video cards: one is Nvidia, used by the host; the other is ATI, and I want to pass it through to qemu/libvirt. I have one TV/monitor with multiple inputs; the Nvidia card is connected to the VGA input and the ATI card to the DVI input. Using libvirt (I know what's been said about it, but it has a nice virt-manager for adding and removing stuff...) I have installed Windows 8.1, and on the display I can see Windows. But my dream is to see something like this (the pic is fake! I hope it becomes reality someday...):

https://s12.postimg.org/t4kgq4yzh/ricevuta_ram5.jpg

That is, the real GPU card shown on an emulated screen (VNC, Spice...). Is it possible to do something like this with qemu VGA passthrough?

Some YouTube videos make it seem possible, but they don't post a simple how-to, only the video... :(

https://www.youtube.com/watch?v=Qi1LdFkRzIs

camerabambai 11-22-2016 06:25 PM

I found the qemu config in another video by the same user; he gets a very nice result and can play a 3D game in a little window using the Spice client!

https://www.youtube.com/watch?v=_6K9Sxbb_lU

Lucky user; I don't understand how he gets the 3D output of the real card in the spicy client. As I understand it, this configuration works only with an Intel host GPU plus an AMD guest GPU. I have tried a similar configuration by running this script, but the spicy client connects only to the qxl card... without 3D:

Code:

sudo qemu-system-x86_64 -enable-kvm  \
-smp 2,sockets=1,cores=2,threads=1 \
-M pc \
-m 6192 \
-cpu host \
-rtc base=localtime \
-vga qxl \
-spice port=5902,disable-ticketing \
-device virtio-serial \
-chardev spicevmc,id=vdagent,name=vdagent \
-net nic,model=e1000 -net tap,ifname=tap0,script=no \
-device vfio-pci,host=06:00.0,multifunction=on \
-device vfio-pci,host=06:00.1 \
-drive file=.local/libvirt/images/win81.qcow2,format=qcow2 \
-cdrom 1.iso  \
-soundhw hda \
-boot c

I use vfio instead of pci-stub because I cannot find the "unbind" entry:

Code:

find  /sys/bus/pci/devices/0000\:0*|grep -i unbi
returns nothing.
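For what it's worth, the unbind node only exists while a driver is actually bound to the device, under its driver/ directory; for example (the address is illustrative):

Code:

ls /sys/bus/pci/devices/0000:06:00.0/driver/unbind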

lopid 08-08-2019 05:44 AM

Quote:

Originally Posted by Alien Bob (Post 5543076)
So who of you guys is going to add this crucial information as a new article on the Slackware Documentation Project Wiki?

I'd be happy to, but since I haven't done passthrough since I wrote that post, I'm afraid it would only be a copy-paste job.

