Looking Glass: Run a Windows VM on Linux in a window with native performance (looking-glass.io)
843 points by tambourine_man 15 days ago | 303 comments



I'm an engineering software contractor and every client has a whole bucket load of outdated IDEs and random USB interface drivers they want me to install on my machine. Often this stuff has weird specific version dependencies, kernel level drivers that cause weird things to happen (like the computer BSODing if you boot it up with other USB devices attached), random system level crashes, and when you go to the driver download page there is always an unsettling list of simple security vulns noted.

The #1 thing that I've ever wanted is a Linux that lives between the bootloader and Windows that lets me achieve native performance on the Windows VM, but gives me an environment where I can easily do all the things that you can usually do in HyperV like create snapshots, clone installs, share Sharepoint drives between images, etc. But I do need something that is perfectly stable and just works. This is the right technology (thank you so much for working on it), but just not yet at a maturity that makes me feel comfortable about putting my livelihood on the line.

Edit: actually, please, if anyone knows something that suits this use-case, even if it costs decent money, please leave a comment.


KVM? Virt-Manager? It might take an hour to learn virt-manager and some of its quirks, but it absolutely can do everything you request here; I do so very regularly.

If you want a low-maintenance version, you could just snag someone's NixOS config - you'd have an easy to reproduce environment that just ensures virt-manager and a really light-weight window manager are installed and then you're done.

Virt-Manager does shared directories (9p anyway; there's no UI for virtio-fs, but you can still use it by editing XML). It handles USB2 and USB3 forwarding. It does snapshots, it does clones, you can even leverage Linux filesystems to do far fancier things than possible on HyperV hosts. Etc. (Plus KVM won't trash your plain ole EXT4 partitions like countless people keep reporting under HyperV [and I've personally experienced twice]). It can even do graphics acceleration/virtualization for Linux guests.
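
If it helps, here's a minimal sketch of the same tasks from the CLI (the guest name "win10" is just an example):

  # snapshot / roll back / clone an existing libvirt guest
  virsh snapshot-create-as win10 pre-driver-install "before installing vendor tools"
  virsh snapshot-list win10
  virsh snapshot-revert win10 pre-driver-install
  virt-clone --original win10 --name win10-clientA --auto-clone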

Virt-Manager is so under-known and under-appreciated, but then again, it's got its rough edges. With some polish there is really no reason to ever mess with VBox under Linux.


Do you happen to know of any good online resources for virt-manager and features like this, other than tedious reading of manual pages and trial & error?

Libvirt manual is pretty good

https://libvirt.org/docs.html


You could netboot off an iscsi target on another machine, which is backed by an image file on a zfs volume. That way you get all the nice zfs features of snapshotting, and windows just thinks it's using a hard drive.

You could even skip zfs and just use a qemu image file, with qemu-nbd you can have it present as a block device, and you can then export that block device as an iscsi target. Then you can use qemu-img for snapshotting etc.
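
Roughly, and just as a sketch (image path, nbd device, and iSCSI target name are made up):

  modprobe nbd
  qemu-img create -f qcow2 /tank/images/win10.qcow2 200G
  qemu-nbd --connect=/dev/nbd0 /tank/images/win10.qcow2
  # export the block device as an iSCSI target, e.g. with targetcli:
  targetcli /backstores/block create win10 /dev/nbd0
  targetcli /iscsi create iqn.2021-07.lab.example:win10
  # snapshots happen at the image level (with the export stopped):
  qemu-img snapshot -c clean-install /tank/images/win10.qcow2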

The machine running the storage could be pretty low spec too, a nuc would do it for sure, maybe even an rpi or similar (if you go the qemu-img route, zfs on rpi is not feasible, ask me how I know :D)


You don't even have to let the guest know it is netbooting. I use ESXi with one Solaris (Illumos) guest, which runs ZFS over everything and exports iSCSI back to ESXi. The other guests all think they have native storage and I can do whatever I want with the underlying ZFS filesystem.


I was actually wondering about this a while ago: can one netboot Windows? i.e. no "local" storage, all accessed over the network? Or does it depend on the hypervisor, i.e. the hypervisor accesses the storage over the network, but to Windows it's a local disk?


You can, with no hypervisor at all, just running on bare metal, so long as your mobo / nic supports it (which I believe most do). I haven't ever personally done it though, so ymmv.


I would probably run proxmox if I were you and kit your device out with extra peripheral cards that you pass through to the VM. With ZFS you can snapshot your VM volumes, and proxmox has built-in support to do full backups and either store them on your local machine or ship them off to a remote location. If you pass the GPU through to Windows you're going to get essentially native speed along with great security and flexibility. If you want to have multiple windows machines you could just RDP into them unless you need GPU acceleration in them too.
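
For reference, the snapshot/backup side of that is roughly (the VM ID, dataset, and storage names are just examples):

  zfs snapshot rpool/data/vm-101-disk-0@before-vendor-tools
  vzdump 101 --mode snapshot --storage local --compress zstd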

If you're working on a laptop I would not recommend ZFS, ZOL doesn't implement freeze and thaw.


I could be remembering this wrong, but I'm pretty sure proxmox + zfs doesn't let you roll back to a snapshot that has child snapshots. If you're coming from vmware (or probably virtual box, I honestly can't remember), then the ability to jump back and forth between any snapshot and branch off from there easily is really convenient and a bit jarring to lose.


The thing with proxmox is that it's quite open, you could manage ZFS snapshots outside of the proxmox system if you want. I haven't tried this specifically with proxmox; I run NixOS with ZFS root and libvirt for managing the guests (virt-manager and virsh).


Would you still recommend this if it was a mobile workstation [1], doing a mix of at-desk work and mobile work, utilizing peripherals like docking stations, and I was running nearly constant compute intensive workloads (Matlab)?

1: https://www.dell.com/en-au/work/shop/workstations-isv-certif...


Proxmox is basically Debian plus some VM hypervisor stuff. So your question really is would someone recommend it if they're running Debian, as all that desktop environment stuff would be outside of what proxmox manages.

Debian is super reliable - potentially the most reliable linux distro - but this wouldn't be a turnkey solution or anything (for example PopOS, based on Debian via Ubuntu, focuses more on the out-of-the-box experience). I don't know if Debian can handle your particular needs, but my assumption is that if any distro can Debian can - it just might take some time.

Personally I love proxmox as a main operating system, as I can get everything from broader Debian/Ubuntu/PopOS environments and learn a lot about Linux too. But, it has taken a lot of my time, so I'd only recommend it if you wanted to invest the time.


I honestly have no idea, I would get another SSD and give it a shot; if it works it's really quite great. Not with ZFS on mobile though, go btrfs.


The specification you present is a bit unclear.

Any modern virtualization has very fast CPU virtualization (it's hard to say near-native, as there are always corner cases), and snapshotting tools. I don't know about Sharepoint; the "clone installs" is a bit fuzzy too, but one can clone installed systems by just copying the underlying image file and ensuring that it has a unique identifier (and updating the guest O/S license, if required).

If GPU matters, VFIO definitely is part of the solution.

However, if you're trying to achieve a sort-of full system passthrough (eg. because of drivers that have bugs related to certain hardware components, which seems to be your situation), this will never happen, because certain parts of the guest necessarily need to be emulated (e.g. the chipset). Even passing a USB port is not easy - one actually needs to pass through whole hubs (AFAIK a port may belong to two hubs - USB 2 and USB 3).


For the USB part, would adding an extra USB adaptor card (assuming it's a desktop machine) and using PCI-passthrough/VFIO on that adaptor make it easier? From my knowledge of QEMU, no USB emulation is involved - the guest OS sees the entire USB adaptor (and the host doesn't) - so one can have all the special USB hardware connected to that.


I'm not knowledgeable with this, but I think so, as it'd be a standard passthrough.

What needs to be kept in mind is that, as with standard passthrough, it's subject to IOMMU group handling (in the best case, there's no other device in the same group; otherwise, one needs some trickery, which AFAIK is not 100% guaranteed to work).
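
A quick way to check the grouping before committing to a particular card (the usual sysfs walk):

  for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    echo "IOMMU group $n: $(lspci -nns ${d##*/})"
  done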


I've read of people doing exactly this setup on the homelab and VFIO subreddits. PCIe passthrough is easier. I've no personal experience with this, but it seems to work well; the VFIO setups are hardcore anyways.

why don't you require clients to provide a lab machine for all their requirements?


Because I charge out at $X per hour and the calculations that take my machine 1 hour to do, typically take their machines 3 or 4 hours to do. From an executive level, a lot of clients would rather spend $2k on a cheap throwaway machine than spend say $5K, but then seem fine to have the labor component cost 2-3x as much.

I just build the price of the hardware into my hourly rate (which ends up being not very much .. $2 per hour), write off the cost of the machine instantly under AU tax law, tell them they don't need to buy a machine (which they are very happy about), and then everyone is surprised at how fast I can get the job done compared to the in-house teams which have their hands tied.


hi. i have the same problem. because those lab machines usually SUCK!

i have my own custom-built servers at home now and virtualize everything. every client has their own VM and I usually RDP/ssh into them for work.


If anyone is interested, the main developer, who goes by 'gnif' spends a decent amount of time on the dedicated Looking Glass sub-forum on the Level1Techs forum, here:

https://forum.level1techs.com/c/software/lookingglass/142

Level1Techs also has a ton of information and help both on the forum, and their YouTube channel, about setting up Looking Glass and VFIO. The main host of the YouTube channel, Wendell, has forgotten more about nitty-gritty system administration than I will likely ever know in my lifetime. He also just seems like a genuinely good human being.


Is that the gnif from EEVBlog?


Yes, I am :)


Also thanks for opening an account outside your community (here) just to help people.

Also, didn't know that any 4 letter usernames were available here on HN anymore :)


Haha, yeah, it's why I like this handle, it's very rarely used anywhere.

As for support, I do encourage people to join the Discord or head on over to the L1Techs forums as I won't really be monitoring this very closely.

Edit: I mean I am at the moment because I am super stoked to make #1 on HN :D


I was very surprised when my three-letter username was actually available just about two months ago.


Cool. Thanks for all your hard work on that too!


You're most welcome :)


Note we also have a very active Discord where most of the support and discussions happen.

https://discord.gg/52SMupxkvt


Just want to say that gnif is amazing. He is always happy to help out people on the looking glass support discord. Plus he's absolutely dedicated to continuing to improve the software he writes and the ecosystem around it. Top notch guy.


Thanks mate, your support is appreciated.


Gnif also has a patreon to support his work https://www.patreon.com/gnif (mentioning this because I only discovered it a long time after using looking glass)


I looked into this project (and related GPU pass through projects) a few years ago. Putting the discussion and even some issue tracking into a "support thread" in this forum makes getting information really difficult. It's the same experience when you want to install LineageOS and realize you have to go through a long thread on XDA and sometimes reddit to solve any issue you may have.


We have tried very hard to rectify this with the B4 release by adding a documentation project to the repository, which is available in HTML form here:

https://looking-glass.io/docs/stable/


Thank you! Although TBH linux gaming has been usable to the extent that I probably will never have to run windows ever again..

I run a VFIO setup with a single GPU - the linux host is headless, and Windows runs on top with the GPU. It's pretty awesome. Windows runs at native performance - no problem gaming or running other heavy workloads. The linux host acts as a devbox and runs a few other homeserver-style services.

It's difficult to set up right, but it taught me a lot about VMs and hardware. Once you get it set up well enough, it's relatively painless. Like I haven't messed with my VM settings in over a year, everything just continues to work smoothly. Including windows updates, driver upgrades, most online games with anti-cheat etc. If I upgrade my hardware, it might take a day or two of tinkering to get it back up. Based on my benchmarking it runs within ~5% of native perf.

This is still the best guide IMO if you want to set it up - https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF. Single GPU is basically the same as dual GPU, except you have to ensure the linux host does not take over the GPU or load any drivers for it during boot.
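
The "make sure the host doesn't take over the GPU" part usually boils down to binding the card to vfio-pci early, e.g. via modprobe config (the PCI device IDs below are just examples):

  # /etc/modprobe.d/vfio.conf
  options vfio-pci ids=10de:1b82,10de:10f0
  softdep nvidia pre: vfio-pci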


Same setup here! I am running Proxmox on the host to streamline managing VMs and storage. Proxmox comes with a nice web GUI, which makes it very easy to monitor system state.

I have a Windows VM for gaming that owns the single Nvidia GPU. I also have a few Linux VMs for development (via VS Code remote) and media management.

As far as storage goes, I don’t have anything too fancy. Proxmox is installed on an SSD. I have a second SSD for VM images. For all other storage (media, photos, VM image backups, etc.), I have a 3 disk ZFS pool consisting of a single RAID-Z1 vdev - yea, it’s risky, but losing the pool wouldn’t be the end of the world.

One of the cool things about this kind of setup is being able to easily restore VMs from backup. Some time back, I accidentally screwed up my Windows install by enabling Hyper-V (nested virt). I panicked at first, but then remembered that I have daily snapshots of the VM. I had it back up and running within 10 minutes :)
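
For anyone curious, the restore itself is basically one command (the archive name here is made up):

  qmrestore /var/lib/vz/dump/vzdump-qemu-101-2021_07_01-03_00_00.vma.zst 101 --force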

All in all, it was fun to set up and has been running very smoothly.


Would you mind talking a bit about the small decisions you took for a Proxmox setup? I am slowly learning and planning my (very)small server setup. Things like:

* Are the VM image backups you mentioned, done to your pool by means of the ZFS snapshots? Or done at the file-level with rsync or similar.

* Do you make backups of the Proxmox installation? Similar as before: is Proxmox itself on a ZFS volume, so backups can be done just by doing ZFS snapshots? The installer lets me choose between an LVM-Thin + ext4, or a ZFS filesystem, and I was wondering whether to choose one or the other, for maximum convenience.

* "Proxmox is installed on an SSD": isn't that a bit wasteful? I mean, doesn't the Proxmox system just take like 1 or 2 GB at most?

I have a Lenovo ThinkCentre m910q which brings a 160GB M.2 NVMe disk, and another 320GB SSD disk... so I an in the process of deciding where to put each thing. Although for bigger storage I'm also considering if adding a 1 or 2 TB USB3 external disk would make sense (to store user backups like photos, documents, and also for the server's system backups)


1. In my case, VM image SSDs are formatted using LVM-Thin. Based on this, Proxmox automatically takes care of snapshotting the VM images for backups. It would work the same if you used ZFS for image storage.

2. No, I do not currently back up the Proxmox config. There is a good script here: https://github.com/DerDanilo/proxmox-stuff

3. If you install Proxmox on a "thin" filesystem (ZFS or LVM-Thin), then yes, you will get snapshot functionality for free. Note that you would have to configure this yourself - Proxmox does not expose a backup feature for its own config.

4. I have two NVMe SSDs. The first is 500GB and is split into two partitions: 100GB for the Proxmox install (LVM + ext4), and 400GB for VM images and containers (LVM-Thin). The second is 900GB and is fully used for VM images. Both are using LVM.


Seeing how others have organized their systems helps me with doing my own thing. Thanks a lot for sharing!


Yes I love that an entire VM is basically 2 files - one disk image, one libvirt config. Makes backup and restore so easy :)
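
Backing one up is correspondingly simple (guest name and paths are just examples):

  virsh dumpxml win10 > win10.xml
  cp /var/lib/libvirt/images/win10.qcow2 /backup/
  # restore elsewhere with: virsh define win10.xml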


You can't mention VFIO without mentioning the amazing subreddit, r/VFIO[0].

[0]: https://www.reddit.com/r/VFIO/


Even more amazing on the old interface:

https://old.reddit.com/r/VFIO/


Or you can just go into your reddit settings and change back to the old interface permanently. I have that enabled and both these links are the same to me.


Indeed. The new Reddit interface is horrid.


This has been my dream setup for years, so that rather than a Windows host with Linux VMs I can run the other way round, enable ZFS and continually snapshot the Windows VM, and avoid the usual Windows failure as bits flip or sectors go bad.


I run the same setup but with esxi as the host OS. It's quite a bit easier to set up since esxi already runs headless by design, and PCI passthrough is easy to configure via the host client HTML5 UI.


> most online games with anti-cheat etc.

Any notable exceptions?

What's the performance like compared to the same games in a native Windows install? I tried proton several months ago and I had to go back to Windows because it wasn't even close to good enough as far as compatibility goes.

I'd really love to abandon Windows as a desktop OS and just game in a VM though.


A pretty notable example is probably BattlEye, which is used by several major FPS games [1] like Tarkov, Rainbow Six and PUBG.

They banned the use of virtual machines [2] back in 2020 and their AC solution does not work on Linux.

Valve seems to be working on it [3], though their solution might be tailored to their SteamDeck hardware instead of a generic proton fix.

[1]: https://www.battleye.com/ [2]: https://twitter.com/TheBattlEye/status/1289027672186720263?s... [3]: https://partner.steamgames.com/doc/steamdeck/faq


PUBG uses EasyAntiCheat these days, rather than BattlEye.


I use a vfio setup as well but with two GPUs (one integrated, one dedicated) and hotswap the dedicated GPU between the two whenever I’m playing games (I play on both Linux and Windows).


Is it possible to switch the GPU while the host is running, that is without rebooting?


It is, see: https://stty.io/2018/07/27/running-windows-with-pci-passthro... (Relevant scripts sub-heading)


Thanks a lot, I'll check it out.

If you have more than one GPU (even as in one on-board and one discrete) you can, with some effort, dedicate one of them to a VM and let the host use the other. Low-yield crypto mining is a common usage for that kind of arrangement, but I suppose VMs would work just fine too.

Or go full Unix and attach a serial terminal to the console port and happy linuxing from there while you ignore the built-in gaming console running off a VM. This is more or less what every modern console does anyway - a hypervisor under everything. ;-)


Which GPU do you have? I tried using a Mac Pro 5.1 as host, Debian Testing as OS and various Windows varieties as well as Linux (again Debian Testing) as guest - but I always, always run into that nasty PCI reset bug (https://www.nicksherlock.com/2020/11/working-around-the-amd-...) :(


Are you saying that that workaround doesn't help you? You could drop into their Discord on the vendor-reset channel and see if they can help you out.

https://discord.gg/FJs9ufyu


Which games do you play that require anti-cheat schemes?

The last time I checked, many such games explicitly do not support running in VMs and you risk being banned by obfuscating that you are running the game in a guest OS.


I've played PUBG, COD Warzone, Apex Legends, Quake Champions, maybe a few others. Of course it depends on the game, but I think the perception of how many games don't work under VMs is a bit skewed.

TBH every thread like this has the “but what about anti-cheat?” post, which makes it seem like a big problem, but I think only a tiny minority of games don't work under VMs.


Only online multiplayer shooters (or whatever they call them these days) come with anti cheat, I think.

So... 5% of the available titles?

Most are financed by IAPs so I just stay away from them anyway.


Thanks! That is more than I expected.

FWIW I tried to install Valorant in a VM recently (because I'm tired of installing the root-kit anti-cheat on my main PC in order to play it). It simply fails to launch the anti-cheat, and so doesn't launch the game.


Yes, unfortunately Valorant is one that blocks VM usage... not that it has helped them any.


Anti-cheat measures end up blocking mostly the well-meaning players.

I know it's sometimes too costly, but game studios should invest more on server-side security measures and mostly distrust the clients.


But distrusting the client doesn’t solve the problem. If a user can see through a wall locally how can ‘distrusting’ anything fix that? What is there to distrust? All inputs are genuine.


> If a user can see through a wall locally how can ‘distrusting’ anything fix that?

The fact that the client can remove the wall and see something behind it is due to the fact that the client is trusted to do the hiding. An untrusted client would not receive the enemy position until the enemy is visible. This, however, comes with the drawback that the server needs to do the culling - That's why pretty much nobody does it right now.

Aimhacks would still be possible, of course, but client-side anti-cheat can't prevent those either.


Valorant specifically actually does this to a degree, and there was a quick article written by one of their anti-cheat developers that roughly explains their approach.

https://technology.riotgames.com/news/demolishing-wallhacks-...


Right it’s not a realistic suggestion as not even the client does it in software!


Why do you think that? The server needs to at the very least trace a line for a shot. There is nothing difficult or slow about that. Before you say that it is done with a z-buffer or something similar, think about third person camera angles or how older games did the same thing. You might want to pull back on being so certain if you don't have experience with game engines or graphics.


> Why do you think that? The server needs to at the very least trace a line for a shot. There is nothing difficult or slow about that. Before you say that it is done with a z-buffer or something similar, think about third person camera angles or how older games did the same thing. You might want to pull back on being so certain if you don't have experience with game engines or graphics.

But... it is done with a z-buffer.

If an opponent is obscured behind a nearby pillar or something, that's not going to be culled in software - that's done by the hardware z-buffer as part of the render process.

You can see this for yourself if you look at a game being run with wireframe rendering. You'll see it's in the same render node so it's still rendered - it's just obscured by closer geometry. And it's how some cheats actually work - they basically turn the wireframe back on!

'Tracing a shot' is casting one single ray.

For example read this article someone else linked https://technology.riotgames.com/news/demolishing-wallhacks-... and look at the last animation.


You are conflating needing information about player position with visibility of individual polygons.

Also you are forgetting that you just said that line of sight was done in hardware and you didn't explain how that would work for a server testing if shots actually hit.

> You can see this for yourself if you look at a game being run with wireframe rendering. You'll see it's in the same render node so it's still rendered

What does this even mean? What is "it" here and what is a "render node" ? There are hierarchies of transforms and players are going to be separate from the environment. This doesn't actually mean anything.

> it's just obscured by closer geometry. And it's how some cheats actually work - they basically turn the wireframe back on!

Yes, you are restating the context of what people are talking about, not what is actually being talked about, which is the timing of when the server should send visibility information, which is what your link is actually about.

Your link actually directly contradicts what you are saying, since it uses both an expanded bounding box based motion extrapolation and precomputed visibility, neither of which has anything to do with a z-buffer.


Look at the last illustration in that article.

Can you see how the red outline of the opponent appears while they're obscured behind the pillar?

When that red outline appears it's showing that the opponent is now being rendered, and that the z-buffer is being used to obscure them from behind the pillar.

This discussion is about how to make the red outline not appear until the opponent is actually visible.

The article goes into lots of ways to make the red outline appear later, but it still appears before the opponent is actually visible on screen.

That's the issue that people want to solve.

Consider an example of an opponent with just one pixel of their gun visible around a corner. How do you send that information to the client without telling them there's an opponent there, so that the user has to actually see the pixel? You'd have to just send that one pixel, right? Now we're talking about rendering server-side!


" When that red outline appears it's showing that the opponent is now being rendered, and that the z-buffer is being used to obscure them from behind the pillar."

Yeah, that's game rendering in the engine. That's visualizing something, not illustrating how the server is doing it. Did you actually read and understand your own link?

"That's the issue that people want to solve."

No it isn't, you misunderstood your own link to the point that you have it backwards.

The server is not rendering the entire game from each person's perspective for every player every frame.

The problem is being able to see every player walking around all the time.

Think for a moment what would happen if the server actually had perfect visibility - by the time you can see them it is already too late. You should be able to see them and then the server starts sending you a position. By the time you know you should see them, you should have already seen them and the other player pops into frame.

That isn't even buried in your own link, it's at the very top.

"Consider an example of an opponent with just one pixel of their gun visible around a corner. How do you send that information to the client without telling them there's an opponent there, so that the user has to actually see the pixel? You'd have to just send that one pixel, right? Now we're talking about rendering server-side!"

This is gibberish and is a lot like Frank Abagnale trying to BS pilots. Once again your own link explains why this is nonsense from a lot of different angles, did you even read what you linked or did you just look at the pictures? It explains everything clearly.


> This is gibberish and is a lot like Frank Abagnale trying to BS pilots

Why are you so abusive in your replies? What causes you to talk to people like this?

> You should be able to see them and then the server starts sending you a position.

Yes that's what I'm saying you'd need for an untrustworthy client. But even that's not quite good enough - if you can 'see' them but it's just one pixel that the user might miss - should the client really get the full location information? It could highlight the enemy from that when a player would likely miss it otherwise.

> The problem is being able to see every player walking around all the time.

No that's a weaker version of the overall problem. If you give the player's location to the client when the player may not actually be able to see them then you're relying on a trustworthy client.


I can see we are at the "you're being mean to me" stage in the discussion instead of the "I should not have spread misinformation and then doubled down on it" stage. No one is abusing you and you aren't a victim when someone wonders why you're misinforming people. If what you are saying doesn't add up (temporal chicken and egg, partial location information etc), focus on that instead of attacking people that are giving you the feedback that what you are saying doesn't add up.

You originally said that a server would have to render the game and use the z-buffer to do any occlusion culling, but this is not only not correct, it is contradicted by something you yourself linked. Why not just admit that this was a guess and not from experience or research into how game engines work?

"But even that's not quite good enough "

You are the only one saying that. Going from seeing every player on the map all the time to only seeing players a few frames before you would have seen them anyway is a huge leap, which is again, what people are talking about and exactly what you linked.

"should the client really get the full location information? "

What partial location information are you envisioning here?

Again, focus on backing up what you originally said first instead of trying to shift the goalposts from how servers would "have to" do occlusion culling.


I don't agree - but I think you're really just trying to get a reaction by being as aggressive and contrary as possible rather than actually going on what I've written, so I'm going to leave you to it from here.

Parent: "An untrusted client would not receive the enemy position until the enemy is visible. This, however, comes with the drawback that the server needs to do the culling - That's why pretty much nobody does it right now."

You: "But... it is done with a z-buffer. If an opponent is obscured behind a nearby pillar or something, that's not going to be culled in software - that's done by the hardware z-buffer as part of the render process."

Then I explained why this doesn't make sense on the server as a generalization and isn't necessary from a technical angle.

Then you ignored that you were both snarky and wrong, provided your own source which directly contradicts what you originally said, and ultimately called yourself a victim of aggression when I pointed this out.


Then the server should never send them information on what's behind the wall.


In Counter-Strike there are footstep sounds with spatial audio. How can the server send me that info in a way that won't reveal the player's direction? Hearing players coming before you see them is a huge part of the game.


You want to render all graphics on the server? I’m not sure that’s really a tractable suggestion.


Why would that be necessary? You realize the server already has to do a line of sight calculation to determine if a shot hits right?


Think about how many times a second you have to trace a shot.

Now think about how many times a second you'd need to trace from every pixel on the screen to every part of the geometry on every opponent in order to check if it was visible or not to see if a player was legitimately able to view any part of their opponent.

For example read this article someone else linked https://technology.riotgames.com/news/demolishing-wallhacks-... and look at the last animation.


If you actually understood your own link you would see that there is no reason to trace every pixel on the screen when you can make a bounding box that covers motion and trace the vertices.

Anyone familiar with game engine programming would never consider what you are saying. That link is a more in depth version of what I just said, ray casts are being done on the server for visibility and have nothing to do with rendering the game to do it. It is literally demonstrating that they are already doing what people were wondering about.


A bounding-box is something we'd call an over approximation.

Using an over-approximation causes the opponent's location to be revealed to the client even when the opponent isn't quite on screen yet, requiring the client to be trusted to not show this information early, which is what people in this thread want to avoid.

That's the whole point of the discussion.

This is what the article is showing - can you see how the red outline of opponents appears early, and how the client is being relied upon to hide them until they're actually visible? That's what people don't want.


lol, who is "we" in this sentence?

You for some reason are ignoring what you originally said to focus on something else you seem to misunderstand the context of.

What you originally were saying was that you would have to render polygons in hardware for the server to have any idea about occlusion, which the link that you gave not only disproves, but assumes that no one would think in the first place.

The whole point is that wall hacks let you see people running around the whole level and it is just a matter of work for the server to only send positions a few frames before you are going to see a player.

Everyone else is on the same page, but you think the player position being sent right before they appear is a problem? That's the solution in your own link.


> The whole point is that wall hacks let you see people running around the whole level and it is just a matter of work for the server to only send positions a few frames before you are going to see a player.

...and when an untrustworthy client gets that info it can highlight the opponent just before they come into frame, or highlight them fully even when they're mostly concealed, giving you an advantage.

That's the point of the thread. That's what people want to avoid. That's what the link wants to avoid, and says it doesn't manage to quite do and explains why it's hard.


Question: why can users see through walls locally? Seems like there should be some sort of occlusion. I guess it's too slow to calculate and causes too much server-side processing?


> I guess it's too slow to calculate and causes too much server-side processing?

Bingo, game servers need to be as lightweight as possible because whatever calculations they have to run need to run per player per tick. Detailed occlusion calculations would be impractical, so at best it's very rough. And of course you don't want a situation where an opposing player isn't even seen until they've already shot you, so it needs to err on the side of visibility.

Every latency-sensitive online game has to make a bunch of tradeoffs between performance and security, and performance is generally more important.


The anti-cheat systems end up really just being more menace than use. Tons of money burned on something that cheaters will get around anyways.


It's one of those 'keeping honest people honest' things; if it were even easier I think even more people would do it?


They keep honest people from playing the game unless you have a clean install of Windows with no blacklisted drivers or software installed. Not to mention how these things basically hook themselves into critical system APIs, acting more like malware. Valorant is probably the worst example of this. Community-run servers are the best form of "anti-cheat".

The problem is that developers treat the PC like a locked console. This is just a completely fruitless uphill battle. The PC gives power to its users, while consoles give power to the developers. PCs are designed not to sandbox or lock you; you can do anything with them without having to break a sandbox first. The mindset of the developers that deploy intrusive anti-cheat is to have the users locked in so they can ship their centralized server model and hope they can deal with the hopefully smaller number of cheaters themselves, instead of giving the moderation power to the users.


If you enable nested virtualization in your host and shove Valorant in a VM with Hyper-V (through what I believe is a feature in Windows, but I forgot the name), Valorant should actually run. Or at least it did a few months ago, not sure if it still works now. Worth a try.


Most games work just fine; just a few odd ones that decided to block VMs don't, such as Tarkov.

Known to work without issue are titles such as

  * PUBG
  * Battlefield 1/3/4/5
  * Titanfall 1&2
  * Arma 3
  * 7 Days to Die
  * Ark
  * Fortnite
  * Apex
  * Halo: The Master Chief Collection
  * Star Wars Squadrons
And many many more.


The only games I play are Doom Eternal and Starcraft 2 — by chance do you know if those work or if there’s a list somewhere?


No idea about doom, but Starcraft 2 works for me fine.


SC2 works almost perfectly under Wine/Proton.

Only issue is that a few custom maps crash, and there's a weird, minor performance issue at some point in the LOTV campaign menus (but not the game itself). Haven't had an issue in a ladder game in years. And I play SC2 a lot (too much).


"Almost" is the key word here, under a VFIO VM, it's 100% flawless.


I recently finished the single player campaign on Doom Eternal - that worked perfectly, though I haven't tried multiplayer.


Really, is that why Tarkov drops me when I try to start a match? At least CS:GO told me their anti-cheat hated my setup (Windows 10 in a Xenserver VM) and I was able to get a refund.


CS:GO works fine in a libvirt/KVM VM last I checked. As for Tarkov, see: https://mobile.twitter.com/TheBattlEye/status/12890276721867...

Note that the game vendor selects what features they want to apply to their titles that are available. BattlEye allows you to stop people using VMs, if the game vendor opts in to this stupid feature.


Does running CS:GO in a VM impact trust factor? Trust and prime are pretty much the only thing to reduce cheaters encountered in your matches since CS:GO does not have a working anti-cheat.


Doesn’t CS:GO run on Linux? I exclusively play dota 2 on Linux and it performs better than in Windows imo.


it does, but you can only play matchmaking AFAIK, ESEA and FaceIT don't support linux for their anti-cheats.


Interesting, I didn't even know you had external parties providing league based (I'm guessing) match making. All matches in Dota 2 run through the official valve coordinator or are pre-made lobbies. There is a LAN only build iirc, but not widely available to the public.


Likely. There were some hacks in the last year or so that required looking glass.


Add Overwatch and Trackmania 2020 to this list too, they both work flawlessly.


For EAC, they require a variety of signals to ban someone (unless it's an obvious thing like detecting a known cheat). An honest VM setup that doesn't obfuscate probably counts as one potential signal that you are cheating; an obfuscated one that they are able to detect might be a stronger signal.


I do the vfio thing as well, do you happen to remember the registry editing you had to do to stop stuttering in games? There's a program that can set and unset all the stuff you need but evidently I deleted it.

It's the only thing I am missing to build another one or rebuild the one I have. I wish I had believed it was going to work perfectly when I started and wrote everything down.


I don’t recall doing any registry edits. Most of the perf work I remember doing was on the VM side - getting the little tweaks in libvirt settings, matching vCPU topology to physical, keeping VM cores from running linux system processes with GRUB flags, using a dedicated USB controller and sound card. I think I set the MSI stuff gnif mentioned as well though it wasn’t critical for me. Generally stuttering will be resolved by making sure time critical events (like interrupts) are delivered quickly to the guest.

+1 on the writing stuff down :D I did it twice and documented pretty thoroughly the second time around, notes before each change, testing performance delta, notes about if it worked. It really helps.


Likely setting the NVidia drivers to use MSI (Message Signalled Interrupts)


I fixed my stuttering by enabling cpu tuning and setting the correct cpu topology [1].

[1] https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF#CP...
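
For reference, the same pinning can also be done ad hoc with virsh (guest name and core numbers here are only an example for a 4-vCPU guest):

  virsh vcpupin win10 0 2
  virsh vcpupin win10 1 3
  virsh vcpupin win10 2 4
  virsh vcpupin win10 3 5
  lscpu -e   # see which host threads share a physical core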


I have a VFIO setup as well, and I'm not aware of any registry setting correcting stuttering.

There was an issue with AMD systems a few years ago, which caused microstuttering.


I have a VFIO setup as well but with dual GPUs. As you say, it is pretty awesome. I keep setting up other OS's for fun. I have an old nVidia NVS300 card that is supported natively under macOS, and also fired up a WindowsXP setup for grins. There were some interesting snags in getting each one going, and I learned a lot along the way.

The Arch wiki is indeed spectacular.


The host is headless? So you're streaming the display via VNC or something to another system, or do you mean the host has a head but it's just being passed thru directly to the guest?


Headless host means you can only access the host system via SSH or web interface (e.g., if you’re running Proxmox). The guest VM “owns” the single GPU.


The GPU is given to the guest, so the guest is driving the GPU directly making it a headless host.


Does this also work for audio?

E.g. could you run a high end DAW on the guest with the same performance?

If not, is it something that might be added later?


The host PC has 1 graphics card, the host OS is running headless, and the windows OS is using the graphics card.


Does it boot the windows UI by default then? Or you just boot into a plain shell?

Kind of defeats the point if your UI becomes windows again in my opinion.


@zaptheimpaler thank you for sharing this.


why don't you go the other way round, Windows native and WSL2? Besides wanting to hack and learn things, obviously.


This is an interesting question, since it's actually the underlying strategy of Microsoft (I don't imply it's a wrong thing).

My personal motivation is that Linux power users will miss the control, or at least, customizability, of the operating system, which is something Linux does, and Windows doesn't (as they have different targets).

Also, not to be underestimated, security (although for me it's only a very small factor).


WSL2 doesn't support hardware perf counters, which means `perf` and `rr` won't work.


Tried WSL, it was buggy and had some issues with filesystem performance I think. I prefer having 100% real Linux & Windows OSes that just work over slightly buggy workarounds. Plus I also host a lot of homeserver services (media server, SMB server, postgres db etc) on the linux host and it's cleaner having those run on the host. If I want to do something very demanding on the host, for example, I can shut down the VM.


WSL2 is using a real Linux kernel with almost no limitations. But it doesn't pass through the GPU. Therefore graphical applications are relatively slow.


Wouldn't it be easier to have 2 devices, one specifically designed for gaming and one running 24/7 for home server services? A cheap used Thinkpad could be used as the linux device so it doesn't have to be expensive. Plus if you value your own time and the effort you have to put into your current setup, then it might actually be cheaper.


It depends on what you mean by "easier".

Technically speaking, for machines where VFIO works, there is no maintenance, and the setup can be easy (excluding one significant issue due to a specific linux kernel upgrade, my VFIO setup procedure has been essentially the same for years, on multiple machines).

So, assuming that VFIO setup/maintenance is very easy, having a second machine is just redundant.

One convenience not to forget is safety. If a Windows VM gets infected for whatever reason, rolling back the system is performed by literally deleting one file.


You basically just need an extra gfx card (or even headless host). And why would you not want a fast Linux OS? Many SSD's and lots of RAM to make the IDE fast. Powerful CPU for compiling. Good GFX for better latency/hz.


It might, but I am cheap :) and I like to run heavy workloads on the home server/devbox as well, which would take a well-specced $700-800 PC.


Better anti cheats won’t allow setups like this.


‘Better’?

Why are VMs being blocked?


Because some cheats use a VM environment to hide from detection as they can act on the VM from outside of it.

The solution though is not to ban VMs, but to push vendors like AMD and Intel to enable access to, and enforce usage of technologies like SEV if running inside a VM.

https://www.kernel.org/doc/html/v5.6/virt/kvm/amd-memory-enc...


I wrote this in another comment, but instead of getting too much into an arms race, studios should invest more in server-side anti-cheat mechanisms.

You could correlate community feedback and some machine learning, while also picking easy-to-catch impossible actions.


Server-side solutions don't catch all cheats. They can block actions that are impossible according to the game rules but they cannot prevent clients from disclosing too much information to the player about other players, or automating actions that are technically possible, like using aimbots.


You can definitely handle some of those situations server side (the key word being "some") with enough engineering effort.

In regards to player positions: check which player locations are occluded and wouldn't be visible through the geometry, then only send the valid ones for each player. Of course, doing this on high tick servers could prove to be computationally intensive.

In regards to aimbots: the clients already send you information about where they're looking so that it can be displayed to other players. Attach some mouse movement metrics and from that you'll sometimes be able to infer the most naive aimbots instantly.


> In regards to player positions: check which player locations are occluded and wouldn't be visible through the geometry, then only send the valid ones for each player. Of course, doing this on high tick servers could prove to be computationally intensive.

What's your tolerance on this? Too low and players will complain that other players pop into view and kill them in the event of latency. Too high and cheaters still have access to the most valuable cases of information, when there's a chance for one player to get the drop on the other.

What about strategy games which rely on their lockstep simulation for performance? How would an RTS work if it's sending the locations of 100s of units in real time versus just player actions. Do you want to have to implement prediction and deal with warping in such a game?


A few approaches to consider:

  1) be fair and decide upon some value that should cover most cases, make the outliers suck it up, like some games kick those with higher pings
  2) don't be fair and base the threshold of visibility on some predictions about the movement of the entities in the following ticks, based on their probable movement speeds, as well as the ping times of the each player; the player with the higher ping value might receive the position of the other about 10 frames earlier before they round a corner - imperfect, but should still avoid ESP across the map
  3) don't be fair, base this tolerance on hidden metrics about how trustworthy each of the players is considered, based on whatever data about them you can get, a bit like hidden ELO - you can probably game or abuse this system with enough effort, but it shouldn't make a difference in the lives of most legit players, since it shouldn't matter whether a model that you're about to see was rendered 5 or 10 frames before you actually did
  4) enforce regional matchmaking by default and only show servers with acceptable ping times for your system (if any at all)
As for RTS games, that should be even simpler - most have some sort of a fog of war mechanic. Given that, you could probably come up with some data structure to represent everything that's visible to your side (like an octree) and send all of the models within it, without worrying about checking individual positions.

As for warping: the exact same way as in any online game, probably by some interpolation. If you receive a position from the server, the entity should be visible at a certain position, if you do not, then it shouldn't be visible (or maybe send the position in which it should disappear, with an additional flag). If you don't get the data for a while, handle it however you would stale data - like ARMA 3 does with entities just standing around or other games with them running in place, which is pretty funny.


Interestingly, given it was one of the strategy games I was thinking of when I made that comment, the Paradox devs for CK3 commented on why they use a lockstep architecture and not sharing the state of the game by server decided POV in their dev diary a couple of days after: https://forum.paradoxplaza.com/forum/threads/anatomy-of-a-ga...

>Attach some mouse movement metrics and from that you'll sometimes be able to infer the most naive aimbots instantly.

see? even you do not believe that this will work


Of course I don't believe that it'll work 100% of the time, since nothing will.

Fighting against cheating in online games is going to be a constant arms race.

That's not to say that detecting most of the naive implementations isn't worthy of the effort.

It won't always work consistently but it should be pretty obvious when someone is lerping between two quaternions. Then, you can build upon that and attempt to detect small bits of random noise that'd be applied upon said interpolation and go from there.


This is what Valorant does, and it just does not work. People saying "yeah game devs are lazy, why isn't everything done server side" - this is really a naive view of game dev.

The short version is that you can't have a great experience for online games if you try to create a client as a dumb terminal.


I didn't mean to say they're lazy. I generally dislike the studios but the developers there are brilliant, usually.

I was thinking that studios were being cheap. Why invest in a proper server infrastructure if you can make clients install abusive software... Maybe I'm wrong but it always looked to me that way.


Don't disclose to the client anything not in their view.

I know this is sometimes impossible and/or too costly to implement but it should be possible to find a compromise that prevents most of the blatant cheaters, eventually.

Also helpers like: In any score event, for randomly selected players, analyze the last actions taken.

You just cannot trust the clients. People will find creative ways of reading the memory of their own hardware, whatever you do.


> Don't disclose to the client anything not in their view.

Either full of edge cases (how do you efficiently compute visibility, and can you prevent models from popping in as a result of latency) or computationally expensive[0]. Valorant, CSGO, League of Legends, Dota 2 are some of the games that I know about that implement server-side occlusion to minimise the impact of wallhacks, but eventually a client will still need information like the position of an audio cue such as footsteps that cheats can make use of.

[0]: https://technology.riotgames.com/news/demolishing-wallhacks-...


> can you prevent models from popping in as a result of latency

Can you do that well enough on the client? The client can add some prediction on where someone is moving, but so can the server. And enemies killing you due to lag is happening already with current architectures.


> instead of getting too much into an arms race, studios should invest more on server-side anti-cheat mechanisms

End offline AAA gaming?


Offline games do not use or need "anti cheat".


One thing that isn't clear from the website is that it requires two GPUs, one for the host and one for the VM.


What about machines with dual graphics cards, like one intel on board and another nvidia?

Would that be OK?

I ask because many laptops used to have similar setups.


Yes, this works for many laptops but it depends on how the laptop is internally wired. We have many members that use their iGPU for Linux, and the dGPU for the VM.


I haven't tried in about 6 months but I had issues with reusing that same dGPU if I wanted to use it for Linux gaming when the VM is offline. I thought it would be easy to bind and unbind it as needed but had issues with doing so. Is this a possible/recommended setup, is there another alternative, or should I not bother trying?


In theory yes, but I concluded my attempts on a Dell 9570 with “forget it”.


I fear I'll get a similar result.

Wouldn't it be possible to let Linux/Xorg give up the GPU for the time Windows is used? Which could still let me access it via VNC from Windows. My native platform is Xubuntu on an AMD Renoir laptop.


That is already possible without the Looking Glass. Just regular libvirt GPU passthrough.

I would say the use case here would be a machine that does not have a monitor connected, for example some Bitcoin mining server in the attic. With Looking Glass the game screen can get streamed to an ultrabook in your living room, with the server's GPU performance.


It's technically possible, but not stable on my system, so I prefer to have two cards; YMMV.


If this uses vfio then no, the host never sees the GPU, it's blind to it from boot.


What is stopping GPU hot plugging from working?

There was this recent improvement at least: https://phoronix.com/scan.php?page=news_item&px=Linux-5.14-A...

edit: seems there are some howtos - eg https://github.com/joeknock90/Single-GPU-Passthrough


Two reasons (in the context of a standard desktop user, not headless server):

1. Desktop environment, which uses the video card; this is not a big deal, since one can terminate the session

2. video drivers correctly and fully releasing the card; this is possible, but in my opinion, not stable.


Not really a problem, you can unbind the VFIO driver
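
Roughly (the PCI address and target driver are examples; whether the card comes back cleanly varies by hardware):

  echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
  echo amdgpu > /sys/bus/pci/devices/0000:01:00.0/driver_override
  echo 0000:01:00.0 > /sys/bus/pci/drivers_probe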


> linux /vmlinuz-5.10.27-gentoo root=/dev/nvme0n1p2 ro amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction pci-stub.ids=10de:1b82,10de:10f0

the host never sees the card.


That's using the pci-stub driver, not VFIO.


Does an iGPU fill the same criteria?


Yes it does. Intel usually is the host adapter.


The host can use a $15 5450, doesn’t have to be fancy.


This is awesome. In theory you could absolutely minimize the latency penalty to just the overhead of the gpu1->memory->gpu2 copy, if the display sync signals from the display the passthrough window was on were passed through to the GPU driver on Windows, and that was combined with fullscreen compositor bypass (available on many Linux WMs) or low-latency compositing (available on sway and now mutter https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/1762 on Wayland).


Reminds me of the time I'd use a patched ACPI table on my gaming VM to mock a battery for the NVIDIA driver (similar to [0]). The drivers checked and deliberately failed when a battery wasn't present in conjunction with a mobile GPU.

Setting up looking glass itself wasn't much of a problem though. I got some AAA games cutting on my gentoo laptop like butter, though the mouse movement was more jelly unless I VFIOd my mouse/kb as well. Sound went through Scream over a socket.

Nowadays I play on my Xbox. I ditched gentoo. I've gotten old. Do you, reader, assume that I've also checked out and gone full windows?

If you do, then guess again. I'm about to port my whole gentoo gaming setup to NixOS and it will probably take me about 15 minutes, and ~ 5 - 10 minutes a year just to maintain. How much time do you spend fiddling with your windows and driver updates? Oh, wait...

0: https://old.reddit.com/r/VFIO/comments/ebo2uk/nvidia_geforce...


I'm doing this with NixOS too, and it was really simple. NixOS unstable hasn't been problem-free though, it breaks quite often in the unstable channel; usually I just roll the channel back and wait it out. Right now I'm stuck with kernel 5.12 because of Nvidia's shit Linux driver support.


I don't suppose you have a nix file to share? :)


These days many AAA games work well in Gentoo thanks to Proton, though I still dual boot (to Windows 7...) for a lot of games. Looking Glass is pretty cool, though perhaps it's a bit too late. My next Windows on a new computer will be either 10 Enterprise LTSC or 11 if 11 doesn't suck, but I still plan to run Gentoo as my primary. If your NixOS experience matches your expectations over the next year, it'd be neat to hear about it later -- I looked at Nix a long time ago but I've been a happy Gentoo user since 2007 and I still see no reason to move. I've gotten a bit old, but the maintenance is low, every time I use another distro it inevitably annoys me for not being like Gentoo, and anyway the upkeep sort of feels like gardening, not like a pain in the butt.


Just FYI, Looking Glass has been in development for over three years now, so it's not actually as late as you think :).

Also, Proton doesn't address the myriad productivity applications that people still need Windows for, such as Adobe products, AutoCAD, etc.


My Windows boot is generally low maintenance, but a Windows update did introduce a stutter in all games once, so I had to do a clean install (no restore point). It was annoying to troubleshoot. Windows is a mess. If only anti-cheats worked on Linux.


In the Steam Deck announcement this week Valve mentioned they were going to work with vendors to get anti-cheat finally working, so fingers crossed.


I guess it's inevitable. It makes no sense for Windows to be the PC gaming platform; it's only inertia.


It makes sense for almost every user and developer, due to better graphics support from GPU vendors and from Microsoft for Windows.


Well of course, but that's only popularity and inertia. There's no technical reason to prefer Windows over Linux as a gaming platform.

Edit: oh, I just rephrased grandparent comment, sorry.


Better technical support from GPU vendors for games isn't a technical reason?


No, that's logistics. Logistics is relatively easy to change.


It performs really well.

I love qemu/libvirt - the crazy thing is that if you have two disks and use one to boot Windows, you can attach that disk to a VM and boot it from Linux, running off another video card (I have two in my PC, one AMD and one Nvidia).


It's QEMU and the kernel that do the magic - libvirt is actually kinda annoying and ill-suited for these things; you have to edit lots of unnecessary XML just to change launch options for QEMU.


That sounds very interesting. Does that mean I should be able to load a VM containing my Windows partition from Linux, even without a dedicated GPU? Do you have a pointer to a tutorial explaining the setup?


Yes, I can confirm that this works. I have one windows installation and can either directly boot windows on the host or start it within Linux with VFIO.


Do you know where this is documented? This would be very useful to me.


I wrote a `viewsetup` tool for setting up a /dev/mapper device (and VMDK file pointing to it) that I use for exposing a set of native Windows partitions while masking off the active Linux partitions. It's enough to allow booting Windows natively or in a VirtualBox VM using the same partition. But the steps to create backing files, run dm-setup to create the /dev/mapper device, and create the VMDK are all separate, so you ought to be able to use this with any virtualization tool.

I sort of have it documented at https://github.com/hotsphink/sfink-tools/blob/master/doc/Vir... though that's really a set of instructions intended for a new PC that a bunch of us at my employer recently received. You just want the `viewsetup` stuff. You probably want to leave off the `--auto` flag, so that it'll prompt you per-partition.

The tool goes to some effort to only expose the specific partitions needed. It also makes the virtual disk exactly match the native disk, copying small partitions (and gaps between them) to files and exposing those over loopback interfaces, so that Windows can go crazy and write over everything and it won't break your Linux setup. (The Linux partitions themselves are exposed as loopback devices to empty sparse files of the right size.)
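
To give a flavour of what that mapping looks like under the hood, a single-partition read-write "view" boils down to one device-mapper table line; device names below are hypothetical, and the real script also maps the neighbouring partitions and gaps to loopback-backed files:

    # size of the Windows partition in 512-byte sectors
    SECTORS=$(blockdev --getsz /dev/nvme0n1p3)
    # expose it 1:1 as /dev/mapper/winview
    dmsetup create winview --table "0 $SECTORS linear /dev/nvme0n1p3 0"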

Get the single script file at https://hg.sr.ht/~sfink/sfink-tools/raw/bin/viewsetup?rev=ti... or check out the full repo at either https://hg.sr.ht/~sfink/sfink-tools/ (mercurial) or https://github.com/hotsphink/sfink-tools (git). I keep both up to date.


Thanks! I will test this out later.


> That sounds very interesting. Does that mean I should be able to load a VM containing my Windows partition from Linux, even without a dedicated GPU?

That works fine.

I do that at work to avoid nuking/tampering with the Windows installation provided to me by IT, while running Linux as my main OS from a second volume.


How is the IO (particularly random IO) performance in Windows? In my experience that's been where VMs still have a long ways to go. Even kvm2 Linux on Linux VMs have severe IO performance deficits compared to native.


I have a lot of respect for the VFIO crowd but speaking as an administrator of Linux systems myself, I didn't ever find the effort or pain worth it. I run a native Windows 10 system for games because the last thing I want after a workday of fixing corporate systems is having to fix my own system just so I can chill out and play a game.


I thought it was Project Looking Glass [1] from Java era. It was really cool back then.

[1] https://en.wikipedia.org/wiki/Project_Looking_Glass


There is also the Looking Glass server software that ISPs provide to debug routing issues.

https://en.wikipedia.org/wiki/Looking_Glass_server


Ah yes, I still remember little Jonathan Schwartz presenting project Looking Glass! Those felt like good days, even though it was clearly the end of Sun Microsystems.


I used it for a bit a while back and would only use it again if I absolutely had to. Cases like that are very limited though, using an external GPU enclosure without a monitor attached to it is one. Being dumb enough to try to use a gimped mining GPU that doesn't have any video outputs is another.


I think you forgot to mention why you didn't like it.


Not him but I'll echo the same thing. Unless I absolutely have no other choice, I'm not going back to this setup.

It's how I "gamed on Linux" for a couple years. Support is basically you and you alone. The dev for Looking Glass is active but the man isn't your personal tech assistant so often you're just doing A/B testing to make something work.

For me, I just didn't want to fiddle with my home desktop that much. I went back to Windows.


The fiddle factor is why I'm using W10 as well. Linux is great, but on the desktop I just need things to work as-is. Linux on the desktop still doesn't have that. Maybe next year ;-)


> Linux on the desktop still doesn't have that

It does have it if all you want to do is develop, browse the web and similar things.

Including Windows games among the things it needs to run without hassle is kind of unfair in my opinion, as that's pretty far from what this DE is used for regularly.

I went back to Windows on my home PC because of games too, but my work environment with Ubuntu/regolith was significantly less painful to set up than the WSL hassles I had to jump through on Windows before.


Sort of. I bought an HP ProBook laptop in 2018 and installed Ubuntu 18.04 LTS, which worked perfectly except for something minor that was fixed with a kernel update; however, when I updated to 20.04, sleep was just broken. Every time the machine sleeps it forgets that the keyboard exists. As much as I dislike Microsoft, stuff like that just doesn't happen with Windows.

I'm still suffering with Linux for largely philosophical reasons, but quite frankly, if I weren't such an opinionated nerd I'd just go to a normal Windows machine at this point.


It's too limiting: if you're doing the whole VFIO thing in the first place, you don't really want to limit yourself to just Windows in a specific configuration. You want the ability to run any OS you can and pass your hardware to it.

I wouldn't recommend VFIO just for gaming, there are better options.


What exactly is limiting about it? I use it on a daily basis in order to flip between high performance Linux and high performance Windows instantly with the flick of a hotkey.

I use it for gaming, software dev, and just in general it's nice to be able to switch OSs for any reason instantly. What better options are there?


I'm not familiar with VFIO and Looking glass and last time I used a VM was years ago. Could you explain how exactly did the 'flip between high performance Linux and high performance Windows instantly with the flick of a hotkey' actually work? What GPU setup did you have? Was the 'high performance Linux' in your case the host OS, or was it rather just another VM?


Flipping between OSes wasn't how I used VFIO. I had two different OSes running on two monitors, both with access to the underlying Linux shell.

Basically, it's not worth setting up VFIO just for gaming, but if you're already on top of that mountain, then use it for whatever. I stopped using VFIO a while back and just bought a laptop that runs Linux without any issues.

I don't really have time for games now, but if I wanted to play I would probably wait until next LTSC and install it + latest WSL on my PC. Or would buy a console, maybe that Steam Deck.


I would argue it's very much worth it just for gaming: now I get to run an OS I like every day and spin up the Windows VM whenever I want to play Overwatch. Now that I'm on NixOS it was literally just a couple of Nix lines and it's configured forever. (On NixOS stable it doesn't break all the time, but you also have 6+ month old packages.)


Litmus test for these kinds of things is - would you set this up for a family member in another part of town? I wouldn't. And I used Nixos too.


Well no, but I also don't like living my life by the lowest common denominator. I understand how it works, so I don't "break" it by forgetting how to use it. My father's Office 365 broke last month because of multiple mailboxes: not because something was wrong, but because he logged onto the wrong account without the right permissions.


Well, I hope you don't end up like me, asking yourself "Why am I still doing this?" and not finding a good answer.

To be honest, I'm doing it because I like the idea of owning my own machine, the flexibility of a VM and because I want to learn more about Nix & Linux.

Trouble is, now that my Nix setup is good for everyday use (not yet development; I wish to see Flakes mature soon), I barely ever tinker with it. Might be because it's summertime here too, which doesn't encourage me to geek out much.


> I wouldn't recommend VFIO just for gaming, there are better options.

What options? I'm interested.


Keeping work separate from entertainment is one better option. But if you have to have Linux and some casual gaming in one place, then I would stick to Lutris or Proton or some other easy to use wrapper.

Or install a less annoying edition of Windows like LTSC, configure Unified Write Filter or a similar feature to keep it under control, and try to live with the latest WSL as your Linux. Just buying a console is another option; if not for the general chip shortage, it would be a very good time to do it.


FWIW, I got native Windows performance on a Linux host just using VirtualBox and (the key part) a virtual disk file that I manually edited to point directly to a Windows partition on disk. This made it obvious that file IO was the bottleneck...
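
For anyone wanting to try this: you don't strictly have to hand-edit the VMDK, VirtualBox can generate a raw-disk descriptor for you. A sketch, with placeholder disk and partition numbers (you need read/write access to the device):

    # create a VMDK that maps only partition 3 of the physical disk
    VBoxManage internalcommands createrawvmdk \
        -filename ~/VMs/win-raw.vmdk \
        -rawdisk /dev/nvme0n1 -partitions 3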


Is it really native performance? I find it hard to believe but now I'm curious and will have to try this out...


Yeah, as sfink said (and I should have clarified and used different vocabulary), this was "native" performance for most typical tasks like programming; it wasn't a gaming setup. There was still hardware-accelerated graphics through VirtualBox, though.

Close to native disk performance, but graphics performance (and capability/compatibility) are not at all native.



The question everyone is wondering: is this ready for serious use for gaming and productivity (e.g., Photoshop or game development)? Or is it very hard to get a smooth workflow going? If the workflow is not great, is there a clear path to solving that?


You don't need this for general productivity software, only stuff where QEMU's emulated GPU is too slow or otherwise insufficient (so games, video editing, etc).

The caveats are basically:

1. Setup is a bit annoying

LG can feed input into any VM but requires guest support for capture, so you need to do the setup with a direct monitor.

2. The passed-through card must have a monitor connected.

IIRC, this is an API issue since it just captures what would go to that monitor.

3. Some things are only visible on the real monitor.

They need different capture strategies for the regular desktop, system desktops (such as lock screens), and secure desktops (UAC prompts). Sometimes the transition fails, sometimes there is no strategy implemented for what you're trying to show.

4. Your CPU and motherboard must support IOMMU passthrough.

For Intel this means using Skylake+ and ensuring that it supports VT-d. For AMD this means using Ryzen with an X-series motherboard. (A quick way to check is sketched after this list.)

5. You need separate GPUs for the host and guest.

Blame GPU vendors for making VT-g an enterprise-only feature.
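
Regarding point 4: a quick way to check whether the firmware and kernel actually expose IOMMU groups is the snippet commonly seen in VFIO guides (nothing Looking Glass specific); if it prints nothing, IOMMU support is missing or disabled:

    # list every IOMMU group and the devices inside it
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo -e "\t$(lspci -nns "${d##*/}")"
        done
    done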


    1. We are working on it
    2. Not if it's a vGPU or a Quadro where EDID spoofing is allowed
    3. No, we capture everything now, even the windows Login screen and windows updates, etc.
    4. Yes
    5. Very yes!


> 3. No, we capture everything now, even the windows Login screen and windows updates, etc.

B3 and B4 both have pretty huge improvements here, but there are still a few rough corners. For example, when logging out the host shuts down immediately upon request, so you can't see the "program X is blocking shutdown" dialog.

But ultimately this is a very minor papercut for a very impressive and useful project.


Really? I see this all the time with my VM when I shut down via LG.


Huh, so this is a bit more involved than I thought. I just tried again to confirm (still on B4).

The first time I tried I just got the Looking Glass splash as soon as I clicked shut down. When I cancelled that and tried again I was able to see both the throbber and the prompt. When I let the VM sit idle for a while before trying again, the LG splash was back. Rebooting the VM also seems to bring back the splash reliably.


Create a desktop shortcut to "shutdown -s -t 1"; it'll keep shutting down even if an application tries to keep the system running.


> 2. The passed-through card must have a monitor connected.

Regarding this, I don't know about LG, but with a standard QEMU VFIO setup, one can use a single monitor connected to both cards and switch the input when required.


Yup, this is a good option too, makes it easy to debug/diagnose when things go wrong with the guest VM too.

Just note that not all multi-input monitors are equal though, a small minority appear to the GPU as unplugged when the input selection is changed.


(I haven't used LG, so these are general VFIO considerations).

I mantain a guide for setting up VFIO (https://github.com/saveriomiroddi/vga-passthrough), which I frequently use.

My conclusion is: for machines that are compatible with it, VFIO works very well. The technology itself is stable, so working on Photoshop, game development, etc. (from a technological perspective, there's no distinction between the two) is indistinguishable from working natively.

I had VFIO on 4 machines I think, and one had problems which I couldn't solve, while the others worked well.

When used with QEMU, it requires some system settings, QEMU flags, etc., so it's a bit annoying, but it's a straightforward and documented process.
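
For a rough idea of what those flags look like, a stripped-down QEMU invocation for passing a GPU through might be something like this (PCI addresses, memory size and disk image are placeholders; a real setup also needs OVMF, CPU pinning, etc., as covered in the guide above):

    qemu-system-x86_64 \
        -enable-kvm -machine q35 -cpu host \
        -m 16G -smp 8 \
        -device vfio-pci,host=01:00.0,multifunction=on \
        -device vfio-pci,host=01:00.1 \
        -drive file=win10.qcow2,if=virtio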

To put it another way: if one wants to use VFIO seriously, it's best to use hardware known to work well, rather than trying to cram VFIO onto a not very compatible system.

And also: one needs to be practical. A USB sound card saves countless hours of trying to use the host's PulseAudio system (meh).


When gaming, as long as I don't have a lot of software running on the host, PipeWire works great with Pulse passthrough. It starts crackling if I run Firefox on the host; I guess it's something to do with nice levels.


It's quite easy to get it set up once you have a VM with GPU passthrough running (for which there are plenty of guides available online) -- just a double-click installation of a service on the Windows side, compiling a cmake project on the Linux side, and (optionally, for some extra performance) compiling a Linux kernel module.

After that it kind of just works, and continues working. I use it to play games and run Office apps, and have not had it break on me in a ~year of use. (Disclaimer: I occasionally contribute to the project now, but remember being impressed at how easy it was to get going when I first tried it out. Getting the VM working at all was the hardest part of the endeavor, but only took a few hours.)
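
For reference, the Linux-side build mentioned above is a fairly standard CMake flow; roughly the following, though check the project's documentation for the current dependency list:

    # from a Looking Glass source checkout
    mkdir client/build && cd client/build
    cmake ../
    make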


Just to tack on, I set up windows with GPU pass-through and looking glass on my new laptop over the course of about 3 hours of on-and-off work using the excellent guide over at https://asus-linux.org/wiki/vfio-guide/


Guess this is the "Linux Subsystem for Windows" some people have been missing... Too bad it requires 2 GPUs


No, it's nothing like that at all.

It's only that if you wear a fedora unironically and have a Slashdot account with a four digit user ID.


I see the comparison. WSL2 runs a full Linux kernel on top of a hypervisor, with an eye towards reducing the overhead of running both kernels (see the recent-ish discussion on LKML about Microsoft trying to upstream paravirtualized DirectX). Combine that with the fact that Windows's interface is both more GUI-forward and closed to interesting modifications, and the same user model ("I want to run both kernels and their user code with as little overhead as possible, in both UX and compute power") gives you exactly this for Windows on Linux, just as it gives you modern WSL for Linux on Windows.


Does anyone know if you can do color accurate work with this setup (non HDR) ?

To go into detail: do monitor color calibration from Linux (does that even exist?) and calibration from Windows conflict here?


Looking Glass is lossless, so provided you can do your color calibration under Linux then it would work fine.
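
If it helps, one way to apply a calibration on the Linux side is ArgyllCMS's dispwin, assuming you already have an ICC profile for the monitor (display index and profile path below are placeholders):

    # load the calibration curves from the profile into display 1's LUT
    dispwin -d 1 ~/.local/share/icc/my-monitor.icc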

I was confused for quite some time -- I thought this was the same company that makes the holographic display, and that this software was used to display content on the device. That didn't make sense to me, since it simply acts as an external monitor. The more I read, the more confused I got.

Just a warning to readers that it's not the same company :)

https://lookingglassfactory.com/


Sorry to ask, but I'm still using Win10 even though I could switch to Linux if I wanted.

Except for gaming, I personally don't see what prevents me from being "Linux first". I'm just using Windows 10 out of habit and laziness.

What is your experience? What makes you still need Windows? It's weird, because there are so many software alternatives that run on Linux, and also many initiatives that let Windows software run on Linux via emulators and whatnot.


I use Windows as my primary OS for work, even though much of my work ends up happening through a Linux command line (WSL/ssh).

For both work and casual use, there's a lot of small things that add up to it being very productive, and despite regularly giving the linux desktop a chance, I always gravitate back to Windows (and recently MacOS as well).

I feel like most of the QoL features I enjoy are mostly invisible, so it's hard to remember them unless I'm actually experimenting with the Linux desktop. Here are some that do come to mind, though.

- Windows supports right-click and drag with context menus. For multi directory file manipulation, this shortcut is shockingly useful. While much file manipulation can be faster on the CLI, certain operations like this are incredibly efficient.

- Binary blobs are really convenient compared to package managers/source building for my daily driver stuff. I have decade old games that just work, and I've never had to deal with version incompatibilities for my tooling, unlike in my Linux environments.

- Common actions like sleep/wake just seem to work better.

- Microsoft Office is really nice.


So, I don't think I'm understanding this correctly: does it have to be a separate machine? It seems to mention VMs and VNC, so I'm not entirely sure what it is!

Like, currently I dual boot into Windows for games, but I can also get into it from Virtualbox if I need to quickly use something on there.

Would I be able to use this with that? So I could use the internal GPU for my local Linux install, and the Nvidia card for Windows?


Yes, it would work perfectly for your setup: Linux on the iGPU and Windows on the Nvidia card.


Oh ok awesome! Looks like I need to do some proper research on this then - can it really be, the best of both worlds?!


I certainly think so.


Can you use this to play netflix at 4k on GNU/Linux?


Can rpcs3 do this for you?


The PS3 never supported 4K, so I don't see how its Netflix client would.


You're right. I wonder what would happen if you ran it and told rpcs3 to render in 4K.


I used to run a Windows Server 2016 box on KVM and used all the virtio drivers from Red Hat(?) to get paravirtualisation for better performance. It ran fine. I bet it wouldn't have been so great for, say, Windows 10 on a desktop and anything GPU intensive. So my question is: is this GPU passthrough, and is it novel/new? Trying to see how it's different from what I was running 5 years ago.


The problem is that you have to have a monitor plugged into the Windows card, otherwise it goes all wonky. At that point you could just use a KVM...


KVM switches are often slow and/or buggy and/or expensive. Plus, looking glass is more flexible since you can display both OSs at the same time.

As a sibling comment stated, you can get a dummy monitor plug on Amazon for like $5 which fools your GPU into thinking there's a monitor attached.


The blurb at the top says "allows the use of a KVM [...] without an attached physical monitor, keyboard or mouse." Is that not correct?


The abbreviation KVM can mean two things: Kernel-based Virtual Machine, and keyboard/video/mouse (switch). Gp was talking about the latter.


I mean it behaves like a software KVM, at which point you could just use a hardware KVM without the added latency penalty of the memory copy...


Judging from reviews of hardware keyboard-video-mouse switches, they tend to work MUCH worse than Looking Glass. All modern GPU connectors are digital, so those KVM switches have to include a mini-GPU (usually a costly and buggy one). They also end up re-implementing complete HID support, because otherwise switching USB devices between hubs takes forever.

In a nutshell, KVM switches aren't worth their money because they have to re-implement a lot of hardware already found in your PC.


You can always use a dummy plug to fool windows into believing there is a monitor attached. Then it's much more convenient than using a KVM switch.


Not only are there dummy plugs, but some cards (e.g., Quadro) let you spoof the EDID without one at all.

