LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware
Slackware: This Forum is for the discussion of Slackware Linux.
Old 02-15-2016, 02:06 PM   #1
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Brú na Bóinne, IE
Distribution: Slackware, NetBSD
Posts: 1,862

Rep: Reputation: 1334
Slackware-specific guide to KVM-Qemu VGA passthrough


I had a stroke of luck in January and as a result I have finally been able to build a powerful new desktop.



I bought two fairly good video cards, and would like to assign one of them exclusively to a Windows guest under KVM-Qemu.

Thought I'd try here first for Slackware-specific instructions, before heading over to the virtualization forum. There are Debian and Arch guides floating around on the internet, but, as usual, there's always something in these to trip you up. So slacker suggestions, instructions, tips and whatnot welcome!
 
Old 02-17-2016, 11:35 PM   #2
red_fire
Member
 
Registered: Jan 2007
Location: Indonesia
Distribution: Slackware Linux
Posts: 67

Rep: Reputation: 11
Hi gezley,

I'm very interested in this as well!

With regards to the 2 video cards, are they identical or is one just for gaming and the other one just for the display/multimedia?
 
Old 02-18-2016, 06:16 AM   #3
Cesare
Member
 
Registered: Jun 2010
Posts: 63

Rep: Reputation: 97
Just some general hints:

1) Your mainboard's BIOS needs to support this, otherwise PCI-passthrough doesn't work.

2) After booting you have to unbind the video card from its host driver and assign it to the pci-stub driver. I did this with a simple script:

Code:
#!/bin/bash

echo "loading kernel module"
modprobe pci-stub
sleep 1

if lspci | grep -q '01:00\.0.*VGA.*AMD'; then
  echo "unbinding AMD VGA device"
  echo "1002 683f" > /sys/bus/pci/drivers/pci-stub/new_id
  echo "0000:01:00.0" > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
  echo "0000:01:00.0" > /sys/bus/pci/drivers/pci-stub/bind
fi

if lspci | grep -q '01:00\.1.*Audio.*AMD'; then
  echo "unbinding AMD Audio device"
  echo "1002 aab0" > /sys/bus/pci/drivers/pci-stub/new_id
  echo "0000:01:00.1" > /sys/bus/pci/devices/0000:01:00.1/driver/unbind
  echo "0000:01:00.1" > /sys/bus/pci/drivers/pci-stub/bind
fi

echo "status:"
dmesg | tail
echo
Obviously this only works in a very specific setup, but it should help to explain the concept.
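To make the concept a bit more general, the unbind-and-rebind step can be written as a function. This is only a sketch: the sysfs root is a variable so the logic can be exercised against a fake tree, and the device address and vendor/device pair passed in are placeholders for whatever lspci reports on your system.

```shell
#!/bin/sh
# Sketch: detach a PCI device from its current driver and hand it to pci-stub.
# SYSFS defaults to /sys on a real system; it is a variable here so the
# function can be tested against a fake directory tree.
SYSFS=${SYSFS:-/sys}

stub_bind() {
    dev=$1    # PCI address, e.g. 0000:01:00.0
    id=$2     # vendor/device pair, e.g. "1002 683f"

    # let pci-stub claim this vendor/device pair
    echo "$id" > "$SYSFS/bus/pci/drivers/pci-stub/new_id"

    # detach the device from whatever driver currently owns it
    if [ -e "$SYSFS/bus/pci/devices/$dev/driver" ]; then
        echo "$dev" > "$SYSFS/bus/pci/devices/$dev/driver/unbind"
    fi

    # hand the device to pci-stub
    echo "$dev" > "$SYSFS/bus/pci/drivers/pci-stub/bind"
}
```

Called as, say, `stub_bind 0000:01:00.0 "1002 683f"`, it does the same three writes as the script above, just for any device you name.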

3) Start qemu as root and pass it the PCI and USB IDs you want your virtual machine to handle.

It took me a while and many reboots to figure out the correct commands. After that everything ran fine as long as I didn't touch anything; restarting the software within the VM, for example, would result in a complete host lock-up. Some filesystems don't like this, so better have a backup ready. The situation might be improved with more modern hardware and newer QEMU versions.
 
2 members found this post helpful.
Old 03-28-2016, 07:51 AM   #4
lopid
Member
 
Registered: Jun 2008
Posts: 155

Rep: Reputation: Disabled
I have just been through this process. This is what you have to do on Slackware.

Be sure that your motherboard and the graphics card that you want to pass through both support UEFI, and that your CPU supports IOMMU. Most modern ones do. The disk image of your guest OS must also support UEFI. This rules out anything before, and some versions of, Windows 7. Windows 8+ should be OK.

Be sure that VT-d (for Intel CPUs) or AMD-Vi (for AMD CPUs) is enabled in BIOS. Also make sure that the graphics card that you want to pass through is not set as the primary one.

You will need to recompile the kernel, because the default "huge" one, at least in -current, doesn't include CONFIG_VFIO_PCI_VGA, so enable it. You can do the few steps below before rebooting into the new kernel, although it's good to reboot first to make debugging easier in case you fucked something up. Remember to keep the old kernel!

In lilo.conf, for an Intel CPU, add "intel_iommu=on" to the kernel parameters (that's the "append" line). I'm not sure what it should be for AMD CPUs. It might be "amd_iommu=on", but the kernel documentation isn't clear on how to enable it for AMD. You might also add "iommu=pt" (see here for why you might not want to).
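For an Intel system, the relevant lilo.conf stanza would end up looking something like this (a sketch only; the image path, root device, and label are placeholders for whatever your setup uses):

```
image = /boot/vmlinuz-custom
  root = /dev/sda2
  label = linux-vfio
  append = "intel_iommu=on iommu=pt"
  read-only
```

Remember to run lilo after editing, or the change won't take effect.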

The idea is that you want the vfio-pci driver to take control of any PCI device that you want to pass through. Some guides mention pci-stub; that's older tech, and you don't need it on Slackware. Sure, you can use it, but it adds an extra unnecessary step. Or at least, it did for me. You can see which driver is assigned to a device with "lspci -nnk"; look for "Kernel driver in use". The next steps are necessary to assign vfio-pci to the device(s).
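If you want to pull the vendor:device IDs out of "lspci -nn" output mechanically rather than by eye, a small sed filter does it (a sketch; the function name is mine, and the sample line in the commented usage is illustrative):

```shell
#!/bin/sh
# Sketch: extract the [vendor:device] ID from "lspci -nn" output. The ID is
# the last bracketed hex pair on each line, e.g. [10de:13c2]; the bracketed
# class code such as [0300] has no colon, so it is skipped.
extract_id() {
    sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'
}

# On a real system (illustrative):
#   lspci -nn | grep -i 'VGA' | extract_id
```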

If you intend to pass through an Nvidia card, blacklist the nouveau driver in /etc/modprobe.d/nouveau.conf or some such. You don't want it claiming the card before you want to use it. Of course, also blacklist the official Nvidia binary driver, if you have it installed.

With an Nvidia card, you might notice that lspci shows it has an audio device as well as a VGA controller. I wanted this audio device to pass through too (in fact, I might have read that it makes the whole exercise easier, if not possible at all), but I noticed later that the snd_hda_intel driver was always claiming it instead of vfio-pci. To prevent that, I blacklisted snd_hda_intel. However, that meant that even my Intel motherboard's audio didn't work, so I simply loaded the driver at a stage after vfio-pci is loaded, in /etc/rc.d/rc.local ("modprobe snd_hda_intel").

You'll need to tell the system to load the vfio-pci driver and assign it to the PCI devices (Nvidia VGA and, for me, its audio counterpart). To do that, you'll need the vendor:device IDs of the PCI devices. You can either use the IDs of the devices themselves, or, as I did, use a string which accounts for all Nvidia devices. You can see the IDs with the lspci command above. They look like "10de:13c2". Create an entry in /etc/modprobe.d/vfio.conf, separating each ID with a comma, thusly:

Quote:
options vfio-pci ids=10de:13c2,10de:0fbb
Of course, those are the IDs of my own devices. Yours may differ. Alternatively, use the AMD/Nvidia catch-all string:

Quote:
options vfio-pci ids=1002:ffffffff:ffffffff:ffffffff:00030000:ffff00ff,1002:ffffffff:ffffffff:ffffffff:00040300:ffffffff,10de:ffffffff:ffffffff:ffffffff:00030000:ffff00ff,10de:ffffffff:ffffffff:ffffffff:00040300:ffffffff
You can reboot now. Check that the vfio-pci driver has been assigned to the devices. Check also that the new kernel is using the correct setting ("zgrep CONFIG_VFIO_PCI_VGA /proc/config.gz"), and that the IOMMU kernel parameters worked ("find /sys/kernel/iommu_groups/ -type l" - if you see lots of entries, it worked; if you see none, it didn't).
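That IOMMU check can be wrapped in a tiny helper if you want a number rather than a wall of output (a sketch; the directory is a parameter purely so the logic can be tried against a fake tree, and on a real system it defaults to /sys/kernel/iommu_groups):

```shell
#!/bin/sh
# Sketch: count device links under an iommu_groups tree. Zero means the
# IOMMU is disabled, unsupported, or the kernel parameter didn't take.
count_iommu_devices() {
    find "${1:-/sys/kernel/iommu_groups}" -type l 2>/dev/null | wc -l
}
```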

You'll need to install QEMU and its dependencies. I used the SBo 14.1 repository for this. Here are the contents of a queue file that you can load into sbopkg to make life easier:

Quote:
usbredir
vte3
vala
spice-protocol
pyparsing
celt051
spice
orc
gstreamer1
gst1-plugins-base
spice-gtk
gtk-vnc
ipaddr-py
tunctl
gnome-python2-gconf
yajl
urlgrabber
libvirt
libvirt-python
libvirt-glib
libosinfo
virt-manager
qemu
Maybe some of them could be found at a repository in slackpkg...

I had to change libvirt and virt-manager (which I didn't end up using, but it still might be necessary) to the latest versions. Here are the diffs for libvirt:

Quote:
$ diff /var/lib/sbopkg/SBo/14.1/libraries/libvirt/libvirt.info*
2c2
< VERSION="1.2.21"
---
> VERSION="1.3.2"
4,5c4,5
< DOWNLOAD="ftp://libvirt.org/libvirt/libvirt-1.2.21.tar.gz"
< MD5SUM="76ab39194302b9067332e1f619c8bad9"
---
> DOWNLOAD="ftp://libvirt.org/libvirt/libvirt-1.3.2.tar.gz"
> MD5SUM="poop"

$ diff /var/lib/sbopkg/SBo/14.1/libraries/libvirt/libvirt.SlackBuild*
8c8
< VERSION=${VERSION:-1.2.21}
---
> VERSION=${VERSION:-1.3.2}
The diffs for virt-manager:

Quote:
$ diff /var/lib/sbopkg/SBo/14.1/system/virt-manager/virt-manager.info*
2c2
< VERSION="1.2.1"
---
> VERSION="1.3.2"
4,5c4,5
< DOWNLOAD="http://virt-manager.org/download/sources/virt-manager/virt-manager-1.2.1.tar.gz"
< MD5SUM="c8045da517e7c9d8696e22970291c55e"
---
> DOWNLOAD="http://virt-manager.org/download/sources/virt-manager/virt-manager-1.3.2.tar.gz"
> MD5SUM="poop"
7c7
< MD5SUM_x86_64=""
---
> MD5SUM_x86_64="poop"

$ diff /var/lib/sbopkg/SBo/14.1/system/virt-manager/virt-manager.SlackBuild*
10c10
< VERSION=${VERSION:-1.2.1}
---
> VERSION=${VERSION:-1.3.2}
You'll need a copy of the latest OVMF UEFI BIOS. Get edk2.git-ovmf-x64 and extract OVMF-pure-efi.fd somewhere. I followed the steps here, but I spent too long trying to figure out how to get "UEFI" in the BIOS setting. I guess the RPM spec file would have set it up all nicely on an RPM based platform, but all I managed was seeing the full path to the OVMF file in virt-manager, which didn't work. I did that by changing the nvram variable in /etc/libvirt/qemu.conf. Instead, I gave up on virt-manager and used qemu-system-x86_64 directly, as below.

I found I had to change stdio_handler to "file", in qemu.conf, and uncomment cgroup_device_acl and add /dev/vfio/1:

Code:
cgroup_device_acl = [
    "/dev/null", "/dev/full", "/dev/zero",
    "/dev/random", "/dev/urandom",
    "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
    "/dev/rtc","/dev/hpet", "/dev/vfio/vfio", "/dev/vfio/1"
]
Restart libvirt:
Quote:
/etc/rc.d/rc.libvirt restart
When you install a guest, it'll most likely need an external driver in order to recognise the VirtIO drive that you'll use. I found that the stable virtio-win ISO worked in Microsoft's Windows 7, and the latest virtio-win ISO worked in Microsoft's Windows 10.

Create this file as kvm-install.sh:

Code:
#!/bin/sh

INSTALLFILE=win10-uefi-x64_system.qcow2
FILESIZE=50G

INSTALLCD=/home/lopid/Win10.iso
# if you use a hardware CD-ROM drive, check for the device. In most cases it's /dev/sr0
#INSTALLCD=/dev/sr0

DRIVERCD=/home/lopid/virtio-win-0.1.113.iso

# PCI addresses of the passthrough devices
DEVICE1="01:00.0"
DEVICE2="01:00.1"

# create the installation image if it does not exist
if [ ! -e $INSTALLFILE ]; then
    qemu-img create -f qcow2 $INSTALLFILE $FILESIZE
fi

#QEMU_PA_SAMPLES=4096 QEMU_AUDIO_DRV=pa \
qemu-system-x86_64 \
-bios /usr/share/OVMF/OVMF-pure-efi.fd \
-cpu host,kvm=off \
-device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 \
-device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=3 \
-device usb-kbd \
-device usb-tablet \
-device vfio-pci,host=$DEVICE1,addr=0x8.0x0,multifunction=on \
-device vfio-pci,host=$DEVICE2,addr=0x8.0x1 \
-device virtio-blk-pci,scsi=off,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 \
-device virtio-net-pci,netdev=user.0,mac=52:54:00:a0:66:43 \
-drive file=$DRIVERCD,if=none,id=drive-ide0-1-0,readonly=on,format=raw \
-drive file=$INSTALLCD,if=none,id=drive-ide0-0-0,readonly=on,format=raw \
-drive file=$INSTALLFILE,if=none,id=drive-virtio-disk0,format=qcow2,cache=unsafe \
-enable-kvm \
-m 4096 \
-machine pc-i440fx-2.1,accel=kvm \
-netdev user,id=user.0 \
-rtc base=localtime,driftfix=slew \
-smp 1,sockets=1,cores=4,threads=4 \
-soundhw hda \
-usb \
-vga qxl
INSTALLFILE is the name of the image of the guest that will be created. Substitute DEVICE1 and DEVICE2 with your own device addresses ("lspci"). Notice that I commented out the QEMU_* audio driver line. The QEMU that I used didn't support PulseAudio, but I still heard sound from line out, if somewhat crackly. Edit the smp argument according to your system. Notice also that -bios points to the .fd file that was downloaded earlier. Just go through the file and check it looks good for you.

Run kvm-install.sh as root, and you should see a QEMU guest window appear and Windows start to install. When it asks you to set up the disk, click "Load Driver" and point it to the drive where virtio-win is. You want to select the viostor driver. After Windows has installed, shut it down, don't reboot.

Create another file, kvm-start.sh:

Code:
#!/bin/sh

INSTALLFILE=win10-uefi-x64_system.qcow2
IMAGEFILE=win10-uefi-x64_system-01.qcow2

# PCI addresses of the passthrough devices
DEVICE1="01:00.0"
DEVICE2="01:00.1"

# create the image file from the backing file if it does not exist
if [ ! -e $IMAGEFILE ]; then
    qemu-img create -f qcow2 -o backing_file=$INSTALLFILE,backing_fmt=qcow2 $IMAGEFILE
fi

#QEMU_PA_SAMPLES=6144 QEMU_AUDIO_DRV=pa \
qemu-system-x86_64 \
-bios /usr/share/OVMF/OVMF-pure-efi.fd \
-cpu host,kvm=off \
-device qxl \
-device usb-host,hostbus=1 \
-device vfio-pci,host=$DEVICE1,addr=0x8.0x0,multifunction=on,x-vga=on \
-device vfio-pci,host=$DEVICE2,addr=0x8.0x1 \
-device virtio-blk-pci,scsi=off,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-device virtio-net-pci,netdev=user.0,mac=52:54:00:a0:66:43 \
-drive file=$IMAGEFILE,if=none,id=drive-virtio-disk0,format=qcow2,cache=none \
-enable-kvm \
-m 8192 \
-machine pc-i440fx-2.1,accel=kvm,iommu=on \
-netdev user,id=user.0 \
-rtc base=localtime,driftfix=slew \
-smp 4,cores=4,threads=1,sockets=1 \
-soundhw hda \
-usb \
-vga none
I changed some arguments here because I was tweaking it for my system. YMMV. Again, check it over for yourself, before running it as root. This time, the guest window should display a message saying that the display has not yet been initialised. That's good. Check the output of the video card that you're passing through. It should show Windows!

I had a go at passing through a USB host, so my peripherals would work directly in the guest as well, but this was hit and miss. Sometimes Windows would show, for example, my mouse as a generic device, and sometimes it would show as the Logitech mouse that it is. Anyway, that should be enough to get you started with PCI VGA passthrough on Slackware. I have to give credit to other guys who already did most of the actual work, I just put things together for Slackware. Their links below.

As far as performance goes, with the Unigine Valley benchmark I saw an average of 61.7 FPS with the basic settings on an Nvidia GTX 970 in a Windows 10 guest, whereas in Windows 7 running natively I had an average of 89.1 FPS. Bear in mind that I didn't do much tweaking, either with QEMU or in the guest. Note also that I couldn't get the guest to show all four of my CPU cores; they would only ever show as one.
 
3 members found this post helpful.
Old 03-29-2016, 01:31 PM   #5
archfan
Member
 
Registered: Mar 2016
Location: /dev/hug
Distribution: Slackware 14.2 x64
Posts: 85

Rep: Reputation: 32
Don't use libvirt. It's bloated FOSS crap that serves no purpose other than to waste your precious disk space. The syntax is plain awful imho.


As previously mentioned you need to recompile your kernel with:
- CONFIG_VFIO_PCI_VGA=y

I also suggest
- CONFIG_JUMP_LABEL=y (optimizes likely / unlikely branches, maybe gives a small perf. boost)
- CONFIG_HUGETLBFS=y
- CONFIG_HUGETLB_PAGE=y
- CONFIG_KSM=y
- CONFIG_MCORE2=y (depends on your arch, this is for Intel C2D and higher)
- CONFIG_HZ_1000=y
- CONFIG_HZ=1000
- CONFIG_PREEMPT_VOLUNTARY=y
- CONFIG_CC_STACKPROTECTOR_STRONG=y (this should be a default setting.)
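Before recompiling, it's worth checking which of these your running kernel already has. A sketch of that check (the config path is a parameter so it can be pointed at any gzipped config; on a running Slackware kernel you'd pass /proc/config.gz, and the function name is mine):

```shell
#!/bin/sh
# Sketch: verify that a gzipped kernel config contains the wanted options.
check_config() {
    cfg=$1; shift
    for opt in "$@"; do
        if gzip -dc "$cfg" | grep -q "^${opt}=y"; then
            echo "$opt: ok"
        else
            echo "$opt: MISSING"
        fi
    done
}

# e.g.: check_config /proc/config.gz CONFIG_VFIO_PCI_VGA CONFIG_HUGETLBFS CONFIG_KSM
```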

Here are my scripts:

/etc/rc.d/rc.vfio

Quote:
#!/bin/sh
# /etc/rc.d/rc.vfio
#
# TRAVEL:
# Something that makes you feel like you're getting somewhere.
#

set -e

. /etc/vfio_devices.conf   # "source" is a bashism; use the portable dot with /bin/sh


vfio_bind() {
for dev in "$@"; do
vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
device=$(cat /sys/bus/pci/devices/$dev/device)
if [ -e /sys/bus/pci/devices/$dev/driver ]; then
echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
fi
echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
}

start_vfio() {
if [ -n "$DEVICES" ]; then
echo -n "binding" $DEVICES "to VFIO "
vfio_bind $DEVICES
echo " All done."
else
echo "WARNING: You have no devices specified in /etc/vfio_devices.conf"
fi
}

stop_vfio() {
echo -n "Unloading kvm modules."
echo -n "You can now use Virtualbox."
modprobe -r kvm-intel kvm
}

reload_vfio() {
echo -n "Reloading kvm modules."
modprobe kvm-intel kvm
}

restart_vfio() {
echo "Punishing your deed in 10 seconds."
sleep 10
shutdown -r now
}

case "$1" in
'start')
start_vfio
;;
'stop')
stop_vfio
;;
'reload')
reload_vfio
;;
'restart')
restart_vfio
;;
*)
echo "usage: $0 start|stop|reload|restart"
esac
/etc/vfio_devices.conf
Quote:
DEVICES="0000:01:00.0 0000:01:00.1 0000:03:00.0"
-> lspci -nk gets you a nice tree with device IDs.

/usr/local/sbin/winnet
Quote:
#!/bin/sh
/usr/sbin/brctl addbr br0
ip addr flush dev eth0
/usr/sbin/brctl addif br0 eth0
/usr/sbin/tunctl -u {USERNAME}
/usr/sbin/brctl addif br0 tap0
ip link set dev br0 up
ip link set dev tap0 up
dhcpcd
/usr/local/sbin/winvm
Quote:
#!/bin/bash
# bash, not sh: the script uses {0..7} brace expansion and (( )) arithmetic

for i in {0..7}; do
echo performance > /sys/devices/system/cpu/cpu${i}/cpufreq/scaling_governor
done

taskset -ac 2-7 qemu-system-x86_64 \
-qmp unix:/run/qmp-sock,server,nowait \
-serial none \
-parallel none \
-nodefaults \
-nodefconfig \
-enable-kvm \
-name Windows10 \
-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,check \
-smp sockets=1,cores=3,threads=2 \
-m 8000 -mem-path /dev/hug \
-drive if=pflash,format=raw,readonly,file=/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd \
-drive if=pflash,format=raw,file=/home/{USERNAME}/VM/win/OVMF_VARS-pure-efi.fd \
-rtc base=utc \
-boot order=c \
-device virtio-scsi-pci,id=scsi \
-drive if=virtio,id=drive0,file=/dev/sdb,cache=none,aio=native,format=raw \
-net nic,model=virtio \
-net nic,vlan=0,macaddr=52:54:00:00:00:01,model=virtio,name=net0 \
-net bridge,vlan=0,name=bridge0,br=br0 \
-nographic \
-device vfio-pci,host=01:00.0,multifunction=on \
-device vfio-pci,host=01:00.1 \
-device vfio-pci,host=03:00.0 &


sleep 5

cpuid=2
for threadpid in $(echo 'query-cpus' | qmp-shell /run/qmp-sock | grep '^(QEMU) {"return":' | sed -e 's/^(QEMU) //' | jq -r '.return[].thread_id'); do
taskset -p -c ${cpuid} ${threadpid}
((cpuid+=1))
done

wait

for i in {0..7}; do
echo powersave > /sys/devices/system/cpu/cpu${i}/cpufreq/scaling_governor
done
If you intend to use QMP you need to get the qmp-shell script and its helpers from the QEMU source tarball. They're not installed by default for some odd reason.

Note: I have installed Windows directly on my secondary SSD. This gives me the option to run Windows natively, just in case something doesn't work as expected with QEMU.

Additionally you need to get edk2 from here: kraxel.org/repos/jenkins/edk2/

Simply download edk2.git-ovmf-x64-(...).rpm from there and run rpm2tgz to create a Slackware package.

/etc/modprobe.d/vfio.conf
Quote:
options vfio-pci ids=1002:67b0,1002:aac8,1912:0014
After compiling make sure to get your initramfs right.
/etc/mkinitrd.conf
Quote:
# mkinitrd.conf.sample
# See "man mkinitrd.conf" for details on the syntax of this file
#
SOURCE_TREE="/boot/linux-tree"
CLEAR_TREE="1"
OUTPUT_IMAGE="/boot/initrd-generic.gz"
#KERNEL_VERSION="$(uname -r)"
KERNEL_VERSION="4.5.0"
KEYMAP="us"
MODULE_LIST="intel_agp:i915:ext4:vfio:vfio_iommu_type1:vfio_pci:vfio_virqfd"
ROOTDEV="/dev/sda2"
ROOTFS="ext4"
RAID="0"
LVM="0"
UDEV="1"
MODCONF="0"
WAIT="1"
Then use mkinitrd -F to create a new initramfs.

We also need to change the /etc/fstab file. Add this:
Quote:
hugetlbfs /dev/hug hugetlbfs mode=1770 0 0
Last but not least I have changed the /etc/rc.d/rc.udev script in order to automatically mount hugetlbfs on boot and initialize the rc.vfio script.

Quote:
#!/bin/sh
# This is a script to initialize udev, which populates the /dev
# directory with device nodes, scans for devices, loads the
# appropriate kernel modules, and configures the devices.

PATH="/sbin:/bin"

check_mounted() {
grep -E -q "^[^[:space:]]+ $1 $2" /proc/mounts
return $?
}

mount_devpts() {
if ! check_mounted /dev/pts devpts ; then
mkdir /dev/pts 2> /dev/null
mount -n -o mode=0620,gid=5 -t devpts devpts /dev/pts
fi
}

mount_devshm() {
if ! check_mounted /dev/shm tmpfs ; then
mkdir /dev/shm 2> /dev/null
mount /dev/shm
fi
}

mount_devhug() {
if ! check_mounted /dev/hug hugetlbfs ; then
mkdir /dev/hug 2> /dev/null
mount /dev/hug
fi
}

mount_vfio() {
if [ -x /etc/rc.d/rc.vfio ]; then
/etc/rc.d/rc.vfio start
fi
}




case "$1" in
start)
# Sanity check #1, udev requires that the kernel support tmpfs:
if ! grep -wq tmpfs /proc/filesystems ; then
echo "Sorry, but you need tmpfs support in the kernel to use udev."
echo
echo "FATAL: Refusing to run /etc/rc.d/rc.udev."
exit 1
fi

# Sanity check #2, make sure that a 2.6.x kernel is new enough:
if [ "$(uname -r | cut -f 1,2 -d .)" = "2.6" ]; then
if [ "$(uname -r | cut -f 3 -d . | sed 's/[^[:digit:]].*//')" -lt "32" ]; then
echo "Sorry, but you need a 2.6.32+ kernel to use this udev."
echo "Your kernel version is only $(uname -r)."
echo
echo "FATAL: Refusing to run /etc/rc.d/rc.udev."
exit 1
fi
fi

# Sanity check #3, make sure the udev package was not removed. If udevd
# is not there, this will also shut off this script to prevent further
# problems:
if [ ! -x /sbin/udevd ]; then
chmod 0644 /etc/rc.d/rc.udev
echo "No udevd daemon found."
echo "Turning off udev: chmod 644 /etc/rc.d/rc.udev"
echo "FATAL: Refusing to run /etc/rc.d/rc.udev."
exit 1
fi

# Disable hotplug helper since udevd listens to netlink:
if [ -e /proc/sys/kernel/hotplug ]; then
echo "" > /proc/sys/kernel/hotplug
fi

if grep -qw devtmpfs /proc/filesystems ; then
if ! check_mounted /dev devtmpfs ; then
# umount shm if needed
check_mounted /dev/shm tmpfs && umount -l /dev/shm

# Umount pts if needed, we will remount it later:
check_mounted /dev/pts devpts && umount -l /dev/pts

# umount hug if needed
check_mounted /dev/hug hugetlbfs && umount -l /dev/hug


# Mount tmpfs on /dev:
mount -n -t devtmpfs devtmpfs /dev
fi
else
# Mount tmpfs on /dev:
if ! check_mounted /dev tmpfs ; then
# umount shm if needed
check_mounted /dev/shm tmpfs && umount -l /dev/shm

# Umount pts if needed, we will remount it later:
check_mounted /dev/pts devpts && umount -l /dev/pts

# umount hug if needed
check_mounted /dev/hug hugetlbfs && umount -l /dev/hug

# Mount tmpfs on /dev:
# the -n is because we don't want /dev umounted when
# someone (rc.[06]) calls umount -a
mount -n -o mode=0755 -t tmpfs tmpfs /dev
fi
fi

# Mount devpts
mount_devpts
mount_devshm
mount_devhug
mount_vfio


(...)
And in case you're using grub:
GRUB_CMDLINE_LINUX_DEFAULT="hugepagesz=1GB default_hugepagesz=1GB hugepages=8 intel_iommu=on iommu=pt"

Other useful kernel options are:
- pcie_acs_override=downstream (requires the acs kernel patch. Only add this if passthrough doesn't work.)
- hugepages=8 -> eight 1 GB hugepages (8 GB total). Make sure you have enough RAM left over for the host.
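The hugepages= count is just guest RAM divided by the hugepage size, rounded up. A sketch of that arithmetic (sizes in megabytes; the function name is mine):

```shell
#!/bin/sh
# Sketch: how many hugepages to reserve for a guest of a given size,
# i.e. ceil(guest_mb / page_mb).
hugepages_needed() {
    guest_mb=$1
    page_mb=$2
    echo $(( (guest_mb + page_mb - 1) / page_mb ))
}

# An 8000 MB guest (the -m 8000 in winvm above) on 1 GB pages needs
# hugepages_needed 8000 1024, i.e. 8 pages, matching hugepages=8.
```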

Last edited by archfan; 03-29-2016 at 02:09 PM.
 
3 members found this post helpful.
Old 03-29-2016, 01:38 PM   #6
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.2
Posts: 3,748

Rep: Reputation: 2085
Opinions differ on libvirtd.
 
Old 03-29-2016, 01:45 PM   #7
archfan
Member
 
Registered: Mar 2016
Location: /dev/hug
Distribution: Slackware 14.2 x64
Posts: 85

Rep: Reputation: 32
Indeed.

I did some benchmarks recently and here are some results. Might be of interest for some.

Benchmark on native Windows 10 x64: http://www.3dmark.com/fs/7942691
The same benchmark on QEMU: http://www.3dmark.com/fs/7931626

There wasn't really much difference in terms of performance between QEMU and the native run.

Last edited by archfan; 03-29-2016 at 01:57 PM.
 
1 members found this post helpful.
Old 03-29-2016, 02:17 PM   #8
Richard Cranium
Senior Member
 
Registered: Apr 2009
Location: Carrollton, Texas
Distribution: Slackware64 14.2
Posts: 3,748

Rep: Reputation: 2085
Quote:
Originally Posted by archfan View Post
Indeed.

I did some benchmarks recently and here are some results. Might be of interest for some.

Benchmark on native Windows 10 x64: http://www.3dmark.com/fs/7942691
The same benchmark on QEMU: http://www.3dmark.com/fs/7931626

There wasn't really much difference in terms of performance between QEMU and the native run.
That's some useful information. Thanks for posting it.
 
1 members found this post helpful.
Old 03-30-2016, 07:16 PM   #9
archfan
Member
 
Registered: Mar 2016
Location: /dev/hug
Distribution: Slackware 14.2 x64
Posts: 85

Rep: Reputation: 32
Just one more quick tip. In case you're using an integrated Intel iGPU as your primary GPU and plan to use a secondary card for VT-d passthrough, you might encounter a strange error where grub is unable to boot the system. Some error about "file '/grub2/locale/en.mo.gz' not found", or something along those lines.

Just uncomment "GRUB_TERMINAL=console" in /etc/default/grub and create a new config with grub-mkconfig.

In extreme cases you might have to fix your ACPI tables and recompile the dsdt.hex into your kernel in order to be able to boot with the onboard GPU as primary device. If someone encounters this problem just PM me or ask here. I might know how to fix it but it requires further testing.

Cheers
 
1 members found this post helpful.
Old 04-11-2016, 02:41 PM   #10
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Brú na Bóinne, IE
Distribution: Slackware, NetBSD
Posts: 1,862

Original Poster
Rep: Reputation: 1334
Sorry for the late reply. One of the video cards I bought is a GeForce GTX 750 Ti; the other a Radeon R7 200.

The GeForce does not work properly under Linux. The fans power up like jet turbines every 20 seconds or so, making the computer unusable. I also had trouble installing Linux on the motherboard, an Asrock Extreme9 990FX. Admittedly this was probably down to my complete ignorance regarding UEFI.

At the end of the day, I have neither time nor inclination to fight these battles over and over again. In 2016 I expect not to have to fight the battles I was fighting with Linux in 2001. Gnome, Red Hat, the systemd cabal, KDE, Debian, Google, Ubuntu - all of them creating a never-ending stream of new bugs for future generations to tackle, but none of them remotely interested in solving today's bugs.

But that's what you get when Wall Street venture capital dictates the terms on which Linux should proceed. It's sad: what was an international project that had so much promise 15 years ago has now been hijacked and monopolised by the big bullies on the block for their own ends. Truth to tell, I am increasingly sick of Linux, sick of the immaturity that drives so-called progress in Linux, sick of the constant breakage in Linux and last but not least sick and tired of the fanboys making false claims about Linux.

At the moment I am back with Windows 8.1, though we all know where Microsoft are going, and undoubtedly they've had a hand in nudging Linux down the cul-de-sac it's in anyway, so Microsoft is not a long-term option either. NetBSD is a beautifully engineered project - sane, conservative, and predictable. The trouble is, you still need to slap a desktop on it for day-to-day use, so what do you go with: Xfce, which is struggling with Gnome's constant breakage and its bully-boy indifference to other projects? Gnome, which has been an insulting Fisher-Price PoS since version 3 was imposed by the powers-that-be in corporate America? or KDE, which will eventually culminate in a stable version of 5 only to decide they want to abandon it and devote themselves exclusively to 6 instead? Great choice there. Of course we all know they're pushing us to use the oh-so-great cloud anyway and the desktop is oh-so-nineties (when We were still in nappies), why would you even care!

Well anyway, they're my thoughts. Slackware and Crux are great. Hard to see them holding out against the tide for the next decade though, and who can blame them if they eventually do succumb?

Sorry. Probably not the place to take out my anger on Linux, but I see the options narrowing, and that is not supposed to be what Linux was about. I am so, so angry with those responsible, and their stupid, brain-dead, immature pet projects designed to keep breakage to the fore in Linux.

Last edited by Gerard Lally; 04-11-2016 at 03:02 PM.
 
1 members found this post helpful.
Old 04-16-2016, 06:49 AM   #11
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Brú na Bóinne, IE
Distribution: Slackware, NetBSD
Posts: 1,862

Original Poster
Rep: Reputation: 1334
Quote:
Originally Posted by lopid View Post
I have just been through this process. This is what you have to do on Slackware.
...
Thank you for such a detailed write-up! I've had problems with the Nvidia card I bought so I've postponed this for the time being. When I get a replacement video card I will use these instructions.
 
Old 04-16-2016, 06:58 AM   #12
Gerard Lally
Senior Member
 
Registered: Sep 2009
Location: Brú na Bóinne, IE
Distribution: Slackware, NetBSD
Posts: 1,862

Original Poster
Rep: Reputation: 1334
Quote:
Originally Posted by archfan View Post
Don't use libvirt. It's bloated FOSS crap that serves no purpose other than to waste your precious disk space. The syntax is plain awful imho.
...
Cripes it's a lot more complex than I was expecting! Video card pass-through with NetBSD Xen is far less complex but unfortunately the NetBSD team haven't yet updated Xen 4 to support it, and Xen 3, which does support it, is quite old now.

I will probably go picking a few brains here when I get the second video card replaced.

Thank you for your detailed reply.
 
Old 04-16-2016, 06:08 PM   #13
archfan
Member
 
Registered: Mar 2016
Location: /dev/hug
Distribution: Slackware 14.2 x64
Posts: 85

Rep: Reputation: 32
I'm happy to help. Don't worry, it really looks harder than it is.

Though I don't necessarily disagree with your statement. It's a complex system, but once it works, it works. I haven't encountered any critical bugs or crashes during the two years I have used KVM passthrough on my system. The developers have really put a lot of effort into this technology and it shows. I'm still impressed by how smoothly it performs on my system.

When it comes to passthrough, Radeon cards are probably the better choice. Nvidia's drivers have some nasty anti-features that refuse to cooperate with Hyper-V enlightenments, whereas AMD has no such mechanisms. These extensions are IMHO necessary for flawless performance; without them my system felt too unresponsive and sluggish under load.
 
Old 04-23-2016, 10:49 PM   #14
mtslzr
LQ Newbie
 
Registered: May 2005
Location: Austin, TX
Distribution: Slackware
Posts: 12

Rep: Reputation: Disabled
While Radeon is preferred, is there anything major that prevents Nvidia cards from working?

Been on the fence about giving this a try for quite some time, and this thread (with Slack-specific instructions) may be what pushes me over the edge.
 
Old 05-09-2016, 09:50 AM   #15
archfan
Member
 
Registered: Mar 2016
Location: /dev/hug
Distribution: Slackware 14.2 x64
Posts: 85

Rep: Reputation: 32
Well, apparently the problem with Nvidia GPUs has been solved in QEMU.

Just quoting from the Arch wiki here:
Quote:
"Error 43 : Driver failed to load" on Nvidia GPUs passed to Windows VMs

Since version 337.88, Nvidia drivers on Windows check if a hypervisor is running and fail if they detect one, which results in an Error 43 in the Windows device manager. Starting with QEMU 2.5.0, the vendor_id for the hypervisor can be spoofed, which is enough to fool the Nvidia drivers into loading anyway. All one must do is add hv_vendor_id=whatever to the cpu parameters in their QEMU command line.
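Applied to the -cpu flags used in the scripts earlier in this thread, the spoof would look something like this (a sketch; the vendor string is arbitrary, anything the driver doesn't recognise as a known hypervisor ID works):

```
-cpu host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=whatever123
```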
 
  

