Linux - Virtualization and Cloud
This forum is for the discussion of all topics relating to Linux Virtualization and Linux Cloud platforms. Xen, KVM, OpenVZ, VirtualBox, VMware, Linux-VServer and all other Linux Virtualization platforms are welcome. OpenStack, CloudStack, ownCloud, Cloud Foundry, Eucalyptus, Nimbus, OpenNebula and all other Linux Cloud platforms are welcome. Note that questions relating solely to non-Linux OS's should be asked in the General forum.
I'm dealing with a tricky puzzle. I want to get rid of my aging gaming PC and instead put a decent GPU inside my Ubuntu server, which is already running two VMs on KVM/QEMU.
So far, so good, but I'd really like to limit my contact with Windows to the bare minimum.
I thought about running the Windows VM with PCI passthrough, having it use the GPU directly. I could then virtualize Linux inside it and work on that.
The drawback is that all keystrokes would still pass through Windows.
Then there's having a Linux VM and virtualizing Windows on top of it. To game, I'd have to pass the GPU through twice. I don't know whether that is even possible, nor do I think it would be very efficient.
The best idea, I think, would be to put my old GPU in as well and have something like a KVM switch.
The problem is that the hardware is on another floor, in a different room from the peripherals. So it would have to be a KVM switch that can be switched over that distance. I have connected the peripherals through 10 to 15 m long DP and USB 3 cables.
Does anyone have a better idea on how to solve this? The ideal goal is to have two separate VMs on a more or less remote machine. Both should be accessible through my "terminal".
Regards and thanks in advance,
Marco
Edit: D'uh, I completely forgot that most KVM switches accept hotkeys to switch between machines. The only question that remains is whether there is such a hotkey solution integrated into the hypervisor, perhaps?
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680
Rep:
Could you not install the GPU in the server, tell the host not to use it, and then create a Windows VM and a Linux VM which cannot be running at the same time, and have both use GPU passthrough?
I've not been patient enough to get GPU passthrough working myself so perhaps I missed something?
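On that "both guests share one passed-through GPU" idea: a minimal sketch of what the shared device entry could look like in each guest's libvirt definition, assuming the card sits at host PCI address 0000:01:00.0 (check with lspci; the address here is an assumption):

```xml
<!-- Inside each guest's <devices> section (virsh edit <vm-name>).
     With managed='yes', libvirt rebinds the card to vfio-pci when the
     guest starts and hands it back when the guest shuts down. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```

Because libvirt only claims the device at guest start, the same entry can live in both domain definitions as long as only one of the two guests runs at a time.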
Yes, very funny.
If you have physical connections to the host OS, then you access the guests through that?
You access the Windows and Linux VMs you created using the same methods you use to access the other VMs, but you have to have a physical connection between the graphics card dedicated to them and the monitor you wish to use.
I'll freely admit I'm thinking of this logically rather than from experience, so if you know better, it may help your chances of an answer to chip in with what you do know.
The intention is to attach the monitor and the USB 3 cable (to which a docking station with keyboard, mouse and an audio card is attached) to the GPU and a USB port, both passed directly into the VM. Now if I were to have two GPUs in the machine and dedicate two USB ports, I could then pass them through to a gaming Windows VM and a working Linux VM respectively.
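For the docking-station side, passing the whole USB controller through as a PCI device (rather than individual USB gadgets) keeps hubs and hot-plugged devices on the dock working inside the guest. A sketch, with the controller's PCI address (here 0000:00:14.0) an assumption to verify with lspci; note that onboard USB controllers often share an IOMMU group with other devices, which can block this:

```xml
<!-- In the guest's <devices> section: the host USB controller passed
     through as a PCI device. 0000:00:14.0 is an assumed address. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
  </source>
</hostdev>
```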
Using a KVM switch, I could then switch between the two VMs as if they were physical machines. The host OS has nothing to do with this at that point.
Now an alternative that comes to mind is having only one GPU and writing a script that shuts down the running VM, detaches the GPU, reattaches it to the other VM, reassigns the USB port and then boots that second VM. Obviously, this switch would take longer. I would save money on the KVM switch, though.
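That switch script could stay quite small. A sketch, assuming the GPU and USB hostdev entries already exist in both domain definitions (so libvirt handles the vfio rebinding itself); the VM names "windows-gaming" and "linux-work" are placeholders for your actual guests:

```shell
#!/bin/sh
# Sketch of the "one GPU, two guests" switch. VM names and the nodedev
# name below are assumptions -- adjust to your setup.
GPU_NODEDEV="pci_0000_01_00_0"   # virsh nodedev name of the GPU (see: virsh nodedev-list)

switch_vm() {
    from="$1"; to="$2"
    virsh shutdown "$from" || return 1
    # Wait until the first guest has actually powered off; the GPU is
    # only released once the domain is down.
    while virsh list --name | grep -qx "$from"; do
        sleep 2
    done
    # Starting the second guest rebinds the card to vfio-pci automatically
    # when its XML contains the <hostdev> entry with managed='yes'.
    virsh start "$to"
}

# Example: switch_vm windows-gaming linux-work
```

The wait loop matters: issuing `virsh start` before the first guest has fully shut down fails because the device is still claimed.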
Third option, if this is supported, would be to hot-swap the GPU and leave the VMs running.
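libvirt does expose a mechanism for this via `virsh detach-device` / `attach-device` with `--live`, though whether it works in practice depends entirely on the guest drivers tolerating GPU hot-unplug, which is far from guaranteed. A sketch, with the VM names and PCI address as assumptions:

```shell
#!/bin/sh
# Sketch: move the GPU between two *running* guests. Guest-OS support for
# hot-unplugging a GPU is the big caveat here; address and names assumed.
cat > /tmp/gpu-hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF

hotswap_gpu() {
    from="$1"; to="$2"
    virsh detach-device "$from" /tmp/gpu-hostdev.xml --live || return 1
    virsh attach-device "$to"   /tmp/gpu-hostdev.xml --live
}

# Example: hotswap_gpu windows-gaming linux-work
```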
Quote:
Originally Posted by Marco2G
Now an alternative that comes to mind is having only one GPU and writing a script that shuts down the running VM, detaches the GPU, reattaches it to the other VM, reassigns the USB port and then boots that second VM. Obviously, this switch would take longer. I would save money on the KVM switch, though.