Hybrid graphics and GPU passthrough to VM

How do you guys set up hybrid graphics so that you can also use the dGPU on a virtual machine?

I’ve tried bumblebee, but it only works on the host if I unbind my dGPU from VFIO, which then means that I can’t run my Windows VM.

I don’t think this is possible. In a virtual machine you run virtual drivers for your hardware. You can use 3D acceleration if your kernel modules and extension packs (guest additions) are installed correctly, but your guest will NOT see your real graphics card. It just passes the OpenGL (or DirectX) calls from the guest to the host hardware to process.
It’s a complex topic, but maybe this link will help you understand the theory.

Is that possible? I thought the “discrete” graphics adapter in most hybrid/Optimus laptops wasn’t truly standalone and rendered through the integrated GPU.

He is talking about GPU passthrough, where you use one graphics card for Linux and a second card gets passed through to the VM. It is supported in QEMU.

@dalto

Maybe I misunderstood his intention? Because I’m used to Virtualbox, I described that situation. When it comes to qemu/kvm, I’m out. I tried a few times but the performance was a nightmare.

Passing through the GPU and other hardware devices is really the only way to get true gaming-level performance out of a VM. You can get really exceptional performance this way. Of course, it requires a somewhat complicated setup and a second physical GPU, so it isn’t perfect.

@dalto
I did some quick research. You’re right on both counts. It is possible as long as you have two physical GPUs, and the setup is anything but easy. That’s some low-level kernel stuff.

@His_Turdness
You should include words like vfio and IOMMU in your search. vfio is the driver used for passthrough, and IOMMU stands for input/output memory management unit.
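To see what you’re working with, you can list your IOMMU groups with a small loop over sysfs. A rough sketch (the optional path argument is only there so the snippet can be tried without real hardware; on a real host just run it as-is, with `intel_iommu=on` or `amd_iommu=on` on the kernel command line):

```shell
#!/bin/bash
# List every IOMMU group and the PCI devices in it.  For a clean
# passthrough your dGPU should sit in its own group (sharing it only
# with its own HDMI audio function is fine).
list_iommu_groups() {
    # Optional argument overrides the sysfs root (handy for testing);
    # on a real host call it with no arguments.
    local root="${1:-/sys/kernel/iommu_groups}"
    local group dev addr name
    shopt -s nullglob
    for group in "$root"/*/; do
        echo "IOMMU group $(basename "$group"):"
        for dev in "$group"devices/*; do
            addr=$(basename "$dev")
            # lspci -nns shows the [vendor:device] IDs you need for vfio-pci;
            # fall back to the bare PCI address if lspci can't resolve it
            name=$(lspci -nns "$addr" 2>/dev/null)
            echo "  $addr${name:+  ($name)}"
        done
    done
}
list_iommu_groups "$@"
```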

Are you trying to pass through both cards, or just the discrete GPU?

In case it’s only the discrete one you are trying to pass through, is it a newer NVIDIA card or an older one? The 16xx and 20xx series have some tricks that make it easier to accomplish what you want.
I managed to do it with a 1650 card.


I want to be able to pass through my dGPU to the VM, so that the VM uses the dGPU when it’s running. I also want to be able to use the dGPU on Linux when the VM is not in use, without any reboots in between.

I have managed to do a passthrough and gotten good performance on Windows VM, but this means that the dGPU isn’t available on Linux.

I have a 1070.
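A static bind like that usually comes down to a modprobe.d fragment along these lines. The IDs below are the usual pair for a GTX 1070 and its HDMI audio function, but verify yours with `lspci -nn` before using them:

```shell
# /etc/modprobe.d/vfio.conf
# Claim the GPU and its HDMI audio function for vfio-pci at boot,
# before the NVIDIA driver can grab them.
# 10de:1b81 / 10de:10f0 are typical GTX 1070 IDs -- confirm with lspci -nn.
options vfio-pci ids=10de:1b81,10de:10f0
softdep nvidia pre: vfio-pci
```

This is exactly why the card is then unavailable on the host: vfio-pci owns it from boot onward.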

That was to be expected. Passthrough gives the guest exclusive access to the hardware. It is also not possible to share the hardware if you want to run more than one guest simultaneously.

I found the opposite to be true. I’ve been using Virtualbox for years, and it was sluggish because of abysmal graphics performance, unfit for doing graphics work in Photoshop, for example.
I have recently tried out qemu/kvm and found the performance to be hugely improved. That’s to be expected, since KVM runs closer to the metal than Virtualbox (for example, I’ve passed two SSD partitions through to the VM and installed Windows directly onto them from within the VM, so there is no virtual filesystem overhead). But it also has a much better emulated graphics device implementation, and it allows more video memory to be allocated with some tinkering. The Windows KVM VM I am using now is exactly as fast as running Windows on bare metal and miles faster than the old Virtualbox incarnation I was using. Not to mention that Virtualbox is Qt-based, and I was experiencing some glitches when running it on XFCE due to less than perfect integration with the GTK environment.
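For anyone curious, the raw-partition part is just a block-device disk in the libvirt domain XML, roughly like this (the by-id path is a placeholder, substitute your own from `/dev/disk/by-id/`):

```xml
<!-- Raw partition handed to the guest as a block device: no image file,
     so no virtual-filesystem overhead.  A /dev/disk/by-id/ path is safer
     than /dev/sdXN because it survives device reordering across boots. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/ata-EXAMPLE_SSD-part3'/>
  <target dev='vda' bus='virtio'/>
</disk>
```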

I’ve also played with passing the dGPU through, and while I managed to do it successfully, I found there is no use for it unless I connect a second monitor directly to it; otherwise the machine cannot use it to improve the graphics performance of the host’s output. The NVIDIA driver detects that there is no monitor attached and gives a notice that it refuses to work without a display to output to, at least on consumer cards, as a strategy to push people toward workstation Quadro cards for such tasks. Not to mention I had to hide the fact that the OS runs in a virtualized environment by tweaking some environment strings, because otherwise the same driver refused to initialize the card altogether, again because only Quadro cards are supposed to work in virtualized scenarios.


Not what I want to do. I want the dGPU to be in use by the host when it’s not used by the guest. I read that it’s possible with some scripts. I’m going to look into that when I have the time.

I just plug the GPU into my display’s HDMI port. You could also use a dummy HDMI plug (a physical stub connector) to trick the GPU into thinking that it’s in use.

Having to swap video sources on the monitor is a bit annoying, but you can get around that with Looking Glass.

Following the guide below, I managed to set up libvirt hooks so that my NVIDIA GPU gets bound to VFIO/QEMU when my virtual machine starts and gets unbound when the virtual machine shuts down.

It takes some work but it’s worth it. Remember to chmod +x the scripts you place in the hook subfolders!!! I was stuck on that for way too long; I didn’t see anything about it in the guide.
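For anyone who lands here later, a rough sketch of what such hook scripts look like, assuming the common hook-helper layout where `/etc/libvirt/hooks/qemu` dispatches into `qemu.d/<domain>/` subfolders. The domain name `win10` and the PCI addresses are placeholders; get yours from the IOMMU listing and replace `:` and `.` with `_`:

```shell
#!/bin/bash
# /etc/libvirt/hooks/qemu.d/win10/prepare/begin/start.sh
# Runs when the "win10" domain starts: take the dGPU away from the
# host and hand it to vfio-pci.

# Unload the NVIDIA stack while nothing on the host is using the card
modprobe -r nvidia_drm nvidia_modeset nvidia_uvm nvidia

# Detach the GPU and its HDMI audio function from the host
virsh nodedev-detach pci_0000_01_00_0
virsh nodedev-detach pci_0000_01_00_1

# Load vfio-pci so QEMU can claim the card
modprobe vfio-pci

# ----------------------------------------------------------------------
# /etc/libvirt/hooks/qemu.d/win10/release/end/revert.sh
# Runs when the domain shuts down: give the card back to the host.

# Rebind the GPU and its audio function to their host drivers
virsh nodedev-reattach pci_0000_01_00_0
virsh nodedev-reattach pci_0000_01_00_1

# Reload the NVIDIA stack for the host
modprobe nvidia_drm nvidia_modeset nvidia_uvm nvidia
```

Both files need to be executable, as noted above.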