Hacker News new | past | comments | ask | show | jobs | submit login

Ya, the lockout is absolutely arbitrary. There is zero physical difference between the consumer and server chips for these features. Actually, I think there's a lot of benefit to consumers in having these features enabled! I talk about that a bit in our Xorg Developer Conference 2021 talk: https://www.youtube.com/watch?v=8pVrTyLqV_I

We're going to try to add support for more distributions in the coming days.

Right now we've got support in our install script for Ubuntu 20.04 hosts and arbitrary guest operating systems (Windows guests work best so far), but if people post issues on GitHub asking for support for other systems, I'll try my best to get to those.

I'm going to try to add official support for Arch, Pop!_OS, and Fedora, as I know some people who I think would use it on those systems, and a few others.
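For anyone curious what per-distro gating in an install script can look like, here's a rough sketch that parses /etc/os-release. To be clear, this is a hypothetical illustration (the helper names and the `SUPPORTED` table are mine), not libvf.io's actual script:

```python
from pathlib import Path

def parse_os_release(text: str) -> dict[str, str]:
    """Parse the KEY=value pairs of an os-release file into a dict."""
    info: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

# Hosts the install script currently targets (per the comment above).
SUPPORTED = {("ubuntu", "20.04")}

def host_supported(os_release_text: str) -> bool:
    """Check whether the host distro/version pair is on the supported list."""
    info = parse_os_release(os_release_text)
    return (info.get("ID", ""), info.get("VERSION_ID", "")) in SUPPORTED
```

On a real host you'd feed it `Path("/etc/os-release").read_text()`; adding a distro is then just another tuple in `SUPPORTED`.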




Is the process of unlocking these features on Nvidia GPUs similar to what the vgpu_unlock tool is doing? [1] No affiliation; I just came across it while trying to find a replacement for the deprecated RemoteFX vGPU and am out of my depth.

[1] https://github.com/DualCoder/vgpu_unlock


For replacing RemoteFX vGPU, what you might want is https://forum.level1techs.com/t/2-gamers-1-gpu-with-hyper-v-...

(which is the direct successor)

The advantage is that it ships inbox in Windows and doesn't need license hacks or anything. It works cross-vendor too.

However, it needs the host OS to be Windows (with Hyper-V being used).


I think Hyper-V's GPU-P is great! I think Microsoft's hypervisor team is one of the most talented in the world, and honestly I think I could learn a lot from them.

One of the benefits of using our approach instead of Microsoft's is that our tools are free, open source software, and (in my biased opinion) I think we have an easier user setup. :)

Some day I would love to read Hyper-V's GPU-P code as I think they did a rather good job overall.


Interesting! News to me — the Microsoft documentation I was looking at didn't make any mention of GPU-P, but that seems like a perfect fit. I was looking at an old Grid K2 to avoid Nvidia licensing, and at direct passthrough for high-use VMs, but as we're full Microsoft (for better or worse), their solution probably makes more sense.

edit: apparently, while this works on Server 2019, direct passthrough is the only officially "supported" option. I wonder if this is a stepping-on-partners'-toes sort of situation?


I'm not entirely sure how wide the vendor support is, but I wouldn't be entirely surprised if they were upsetting some folks with it. Happily, we're just a little team right now making stuff that's useful for ourselves, so we don't have all the same pressures big companies have. I hope that if we ever grow we'll act in the same spirit (I'll try my best to see that we do, anyway).


GPU-P is also the infrastructure used for GPU acceleration in WSL2 and in Windows Sandbox.


vgpu_unlock's merged driver is an optional package you can include, but if you don't want to use it there's no explicit dependency. We actually enable these features using a vendor-neutral API called VFIO-Mdev:

https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/drive...

Here's a few examples of YAML for use with different GPU vendors:

Intel: https://github.com/Arc-Compute/libvf.io/blob/master/example/...

Nvidia: https://github.com/Arc-Compute/libvf.io/blob/master/example/...

AMD: https://github.com/Arc-Compute/libvf.io/blob/master/example/...

The odd one out is AMD, which uses a different API because the vendor has largely ignored the standard open source interfaces in the kernel. We're still supporting that API, but unfortunately very few AMD cards work, since AMD refuses to release open source code to support its newer cards and has locked out these features at the firmware level on consumer cards. Fortunately, Nvidia and Intel GPUs are very well suited to this functionality, and we've got support for most recent consumer cards from both!
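If it helps to make VFIO-Mdev concrete: mediated devices are driven entirely through sysfs — a parent device advertises the types it supports under `mdev_supported_types`, and you create an instance by writing a UUID to that type's `create` node. Here's a minimal Python sketch of that documented kernel interface (the helper names are mine, and the exact type names vary per GPU and driver):

```python
import uuid
from pathlib import Path

def list_mdev_types(device_sysfs: Path) -> list[str]:
    """Enumerate the mediated device types a parent device advertises."""
    types_dir = device_sysfs / "mdev_supported_types"
    if not types_dir.is_dir():
        return []
    return sorted(p.name for p in types_dir.iterdir())

def create_mdev(device_sysfs: Path, type_name: str) -> str:
    """Create one mdev instance by writing a fresh UUID to the type's create node."""
    instance = str(uuid.uuid4())
    create_node = device_sysfs / "mdev_supported_types" / type_name / "create"
    create_node.write_text(instance)
    return instance
```

On real hardware `device_sysfs` would be something like `Path("/sys/bus/pci/devices/0000:00:02.0")`, and the writes need root.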


It would be helpful to note that Intel GVT-g is a dead end, with 10th-gen Comet Lake being the end of the road[0]. They do not support it with Xe and have instead decided to go with SR-IOV.

I'm curious to see if this could be used to virtualize macOS with GVT-g for 3D-accelerated guests. I know this was looked at a few years ago, and no one had made it work then.

0: https://github.com/torvalds/linux/blob/2f111a6fd5b5297b4e92f...


Based on my reading of the link you provided, I don't believe support for GVT-g has been removed.

You can see the GVT-Linux repository is also still receiving commits: https://github.com/intel/gvt-linux


To be clear, I never said it was dead, only a dead end.

As for GVT-g and Xe: according to a post in this issue[0] by one of the Intel devs, Rocket Lake (Xe) is not getting support and only does GVT-d.

Also in the same issue, someone pointed out that Intel themselves have stated as much here[1].

I hope I am proven wrong in the end and GVT-g comes to the entire Xe and Arc lineup. Intel's communication on this matter has been... lacking.

0: https://github.com/intel/gvt-linux/issues/190

1: https://www.intel.com/content/www/us/en/support/articles/000...


If they switch from GVT-g to SR-IOV, that shouldn't affect the use of VFIO-Mdev on consumer devices (the primary API we use). https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/drive...

It may be that they're changing the internal APIs used for device mediation, which wouldn't surprise me given that both AMD and Nvidia use SR-IOV but only Intel uses GVT-g.
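For contrast, SR-IOV partitioning also goes through sysfs, just via a different knob: you write the desired virtual function count to the device's `sriov_numvfs` node (bounded by `sriov_totalvfs`), and the kernel requires resetting to 0 before changing a nonzero count. A hedged sketch of that kernel interface (the function name is mine):

```python
from pathlib import Path

def enable_vfs(pci_device: Path, num_vfs: int) -> None:
    """Carve an SR-IOV capable PCI device into num_vfs virtual functions."""
    total = int((pci_device / "sriov_totalvfs").read_text().strip())
    if num_vfs > total:
        raise ValueError(f"device exposes at most {total} VFs")
    # The kernel rejects changing a nonzero VF count directly, so reset first.
    (pci_device / "sriov_numvfs").write_text("0")
    (pci_device / "sriov_numvfs").write_text(str(num_vfs))
```

On real hardware `pci_device` would be something like `Path("/sys/bus/pci/devices/0000:03:00.0")` — note this is static partitioning, unlike mdev's per-instance creation.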


Was on a call with Intel last week and they specifically confirmed there are no plans to bring GVT-g back. :-(

Makes sense, as on servers you probably want static partitioning anyway, but for desktop it was perfect.




