I run a VFIO setup with a single GPU - the Linux host is headless, and Windows runs on top with the GPU. It's pretty awesome. Windows runs at native performance - no problem gaming or running other heavy workloads. The Linux host acts as a devbox and runs a few other homeserver-style services.
It's difficult to set up right, but it taught me a lot about VMs and hardware. Once you get it set up well enough, it's relatively painless. I haven't messed with my VM settings in over a year; everything just continues to work smoothly, including Windows updates, driver upgrades, most online games with anti-cheat, etc. If I upgrade my hardware, it might take a day or two of tinkering to get it back up. Based on my benchmarking it runs within ~5% of native perf.
This is still the best guide IMO if you want to set it up - https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF. Single GPU is basically the same as dual GPU, except you have to ensure the Linux host does not take over the GPU or load any drivers for it during boot.
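For the single-GPU case, the usual trick from that guide is binding the card to vfio-pci before any graphics driver can claim it. A minimal sketch of the modprobe config - the PCI IDs here are examples only; substitute your own from `lspci -nn`:

```
# /etc/modprobe.d/vfio.conf
# Bind the GPU and its HDMI audio function to vfio-pci at boot
# (example IDs for a hypothetical GTX 1070; use your own)
options vfio-pci ids=10de:1b81,10de:10f0
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
```

You'll also need IOMMU enabled on the kernel command line (intel_iommu=on or amd_iommu=on); the wiki covers the distro-specific details.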
Same setup here! I am running Proxmox on the host to streamline managing VMs and storage. Proxmox comes with a nice web GUI, which makes it very easy to monitor system state.
I have a Windows VM for gaming that owns the single Nvidia GPU. I also have a few Linux VMs for development (via VS Code remote) and media management.
As far as storage goes, I don’t have anything too fancy. Proxmox is installed on an SSD. I have a second SSD for VM images. For all other storage (media, photos, VM image backups, etc.), I have a 3-disk ZFS pool consisting of a single RAID-Z1 vdev - yea, it’s risky, but losing the pool wouldn’t be the end of the world.
One of the cool things about this kind of setup is being able to easily restore VMs from backup. Some time back, I accidentally screwed up my Windows install by enabling Hyper-V (nested virt). I panicked at first, but then remembered that I have daily snapshots of the VM. I had it back up and running within 10 minutes :)
All in all, it was fun to set up and has been running very smoothly.
Would you mind talking a bit about the small decisions you made for a Proxmox setup? I am slowly learning and planning my (very) small server setup. Things like:
* Are the VM image backups you mentioned done to your pool by means of ZFS snapshots, or at the file level with rsync or similar?
* Do you make backups of the Proxmox installation? Same question as before: is Proxmox itself on a ZFS volume, so backups can be done just with ZFS snapshots? The installer lets me choose between LVM-Thin + ext4 and ZFS, and I was wondering which to pick for maximum convenience.
* "Proxmox is installed on an SSD": isn't that a bit wasteful? I mean, doesn't the Proxmox system just take like 1 or 2 GB at most?
I have a Lenovo ThinkCentre M910q which comes with a 160GB M.2 NVMe disk and another 320GB SSD... so I am in the process of deciding where to put each thing. For bigger storage I'm also considering whether adding a 1 or 2 TB USB3 external disk would make sense (to store user backups like photos and documents, and also the server's system backups).
1. In my case, VM image SSDs are formatted using LVM-Thin. Based on this, Proxmox automatically takes care of snapshotting the VM images for backups. It would work the same if you used ZFS for image storage.
2. If you install Proxmox on a "thin" filesystem (ZFS or LVM-Thin), then yes, you will get snapshot functionality for free. Note that you would have to configure this yourself - Proxmox does not expose a backup feature for its own config.
3. I have two NVMe SSDs. The first is 500GB, split into two partitions: 100GB for the Proxmox install (LVM + ext4) and 400GB for VM images and containers (LVM-Thin). The second is 900GB and is fully used for VM images. Both use LVM.
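For what it's worth, those snapshot-mode backups go through Proxmox's vzdump tool; invoking it by hand looks roughly like this (the VMID and storage name are made up for the example):

```
# Snapshot-mode backup of VM 100 to a storage named "backups"
vzdump 100 --mode snapshot --storage backups --compress zstd
```

The web GUI's scheduled backup jobs generate the equivalent calls for you.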
Or you can just go into your reddit settings and change back to the old interface permanently. I have that enabled and both these links are the same to me.
This has been my dream setup for years: rather than a Windows host with Linux VMs, run it the other way round, enable ZFS and continually snapshot the Windows VM, and avoid the usual Windows decay as bits flip or sectors go bad.
I run the same setup but with ESXi as the host OS. It's quite a bit easier to set up since ESXi already runs headless by design, and PCI passthrough is easy to configure via the host client HTML5 UI.
What's the performance like compared to the same games on a native Windows install? I tried Proton several months ago and I had to go back to Windows because it wasn't even close to good enough as far as compatibility goes.
I'd really love to abandon Windows as a desktop OS and just game in a VM though.
I use a vfio setup as well but with two GPUs (one integrated, one dedicated) and hotswap the dedicated GPU between the two whenever I’m playing games (I play on both Linux and Windows).
If you have more than one GPU (even as in one on-board and one discrete) you can, with some effort, dedicate one of them to a VM and let the host use the other. Low-yield crypto mining is a common usage for that kind of arrangement, but I suppose VMs would work just fine too.
Or go full Unix and attach a serial terminal to the console port and happy linuxing from there while you ignore the built-in gaming console running off a VM. This is more or less what every modern console does anyway - a hypervisor under everything. ;-)
Which GPU do you have? I tried using a Mac Pro 5.1 as host, Debian Testing as OS and various Windows varieties as well as Linux (again Debian Testing) as guest - but I always, always run into that nasty PCI reset bug (https://www.nicksherlock.com/2020/11/working-around-the-amd-...) :(
Which games do you play that require anti-cheat schemes?
The last time I checked, many such games explicitly do not support running in VMs, and you risk being banned for obfuscating the fact that you are running the game in a guest OS.
I've played PUBG, COD Warzone, Apex Legends, Quake Champions, and maybe a few others. Of course it depends on the game, but I think the perception of how many games don't work under VMs is a bit skewed.
TBH every thread like this has the “but what about anti-cheat?” post, which makes it seem like a big problem, but I think only a tiny minority of games don’t work under VMs.
FWIW I tried to install Valorant in a VM recently (because I'm tired of installing the root-kit anti-cheat on my main PC in order to play it). It simply fails to launch the anti-cheat, and so doesn't launch the game.
But distrusting the client doesn’t solve the problem. If a user can see through a wall locally how can ‘distrusting’ anything fix that? What is there to distrust? All inputs are genuine.
> If a user can see through a wall locally how can ‘distrusting’ anything fix that?
The fact that the client can remove the wall and see something behind it is due to the fact that the client is trusted to do the hiding. An untrusted client would not receive the enemy position until the enemy is visible. This, however, comes with the drawback that the server needs to do the culling - That's why pretty much nobody does it right now.
Aimhacks would still be possible, of course, but client-side anti-cheat can't prevent those either.
Valorant specifically actually does this to a degree, and there was a quick article written by one of their anti-cheat developers that roughly explains their approach.
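To make "the server needs to do the culling" concrete, here's a toy 2D sketch (invented names and geometry, not from any real engine): the server withholds an enemy's position unless the segment from viewer to enemy clears every wall.

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 strictly crosses segment p3-p4 (2D)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible_enemies(viewer, enemies, walls):
    """Server-side cull: only return enemies whose line of sight from
    `viewer` is not blocked by any wall segment."""
    return [e for e in enemies
            if not any(segments_intersect(viewer, e, w0, w1) for w0, w1 in walls)]

# One wall hides the enemy at (10, 0); the enemy at (3, 3) is in the open.
walls = [((5, -1), (5, 1))]
print(visible_enemies((0, 0), [(10, 0), (3, 3)], walls))  # [(3, 3)]
```

A real server would ray-cast against actual level geometry (and add slack for latency, as discussed below), but the principle is the same: the occluded position simply never reaches the client.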
Why do you think that? The server needs to at the very least trace a line for a shot. There is nothing difficult or slow about that. Before you say that it is done with a z-buffer or something similar, think about third person camera angles or how older games did the same thing. You might want to pull back on being so certain if you don't have experience with game engines or graphics.
> Why do you think that? The server needs to at the very least trace a line for a shot. There is nothing difficult or slow about that. Before you say that it is done with a z-buffer or something similar, think about third person camera angles or how older games did the same thing. You might want to pull back on being so certain if you don't have experience with game engines or graphics.
But... it is done with a z-buffer.
If an opponent is obscured behind a nearby pillar or something, that's not going to be culled in software - that's done by the hardware z-buffer as part of the render process.
You can see this for yourself if you look at a game being run with wireframe rendering. You'll see it's in the same render node so it's still rendered - it's just obscured by closer geometry. And it's how some cheats actually work - they basically turn the wireframe back on!
You are conflating needing information about player position with visibility of individual polygons.
Also you are forgetting that you just said that line of sight was done in hardware and you didn't explain how that would work for a server testing if shots actually hit.
> You can see this for yourself if you look at a game being run with wireframe rendering. You'll see it's in the same render node so it's still rendered
What does this even mean? What is "it" here and what is a "render node" ? There are hierarchies of transforms and players are going to be separate from the environment. This doesn't actually mean anything.
> it's just obscured by closer geometry. And it's how some cheats actually work - they basically turn the wireframe back on!
Yes, you are restating the context of what people are talking about, not what is actually being talked about, which is the timing of when the server should send visibility information, which is what your link is actually about.
Your link actually directly contradicts what you are saying, since it uses both expanded-bounding-box motion extrapolation and precomputed visibility, neither of which has anything to do with a z-buffer.
Can you see how the red outline of the opponent appears while they're obscured behind the pillar?
When that red outline appears it's showing that the opponent is now being rendered, and that the z-buffer is being used to obscure them from behind the pillar.
This discussion is about how to make the red outline not appear until the opponent is actually visible.
The article goes into lots of ways to make the red outline appear later, but it still appears before the opponent is actually visible on screen.
That's the issue that people want to solve.
Consider an example of an opponent with just one pixel of their gun visible around a corner. How do you send that information to the client without telling them there's an opponent there, so that the user has to actually see the pixel? You'd have to just send that one pixel, right? Now we're talking about rendering server-side!
" When that red outline appears it's showing that the opponent is now being rendered, and that the z-buffer is being used to obscure them from behind the pillar."
Yeah, that's game rendering in the engine. That's visualizing something, not illustrating how the server is doing it. Did you actually read and understand your own link?
"That's the issue that people want to solve."
No it isn't, you misunderstood your own link to the point that you have it backwards.
The issue isn't the server rendering the entire game from each person's perspective for every player every frame.
The problem is being able to see every player walking around all the time.
Think for a moment about what would happen if the server enforced perfect visibility: the server wouldn't start sending you a position until you should already be able to see the player, so by the time the data arrives it's too late and the other player pops into frame.
That isn't even buried in your own link, it's at the very top.
"Consider an example of an opponent with just one pixel of their gun visible around a corner. How do you send that information to the client without telling them there's an opponent there, so that the user has to actually see the pixel? You'd have to just send that one pixel, right? Now we're talking about rendering server-side!"
This is gibberish and is a lot like Frank Abagnale trying to BS pilots. Once again your own link explains why this is nonsense from a lot of different angles - did you even read what you linked, or did you just look at the pictures? It explains everything clearly.
> This is gibberish and is a lot like Frank Abagnale trying to BS pilots
Why are you so abusive in your replies? What causes you to talk to people like this?
> You should be able to see them and then the server starts sending you a position.
Yes that's what I'm saying you'd need for an untrustworthy client. But even that's not quite good enough - if you can 'see' them but it's just one pixel that the user might miss - should the client really get the full location information? It could highlight the enemy from that when a player would likely miss it otherwise.
> The problem is being able to see every player walking around all the time.
No that's a weaker version of the overall problem. If you give the player's location to the client when the player may not actually be able to see them then you're relying on a trustworthy client.
I can see we are at the "you're being mean to me" stage of the discussion instead of the "I should not have spread misinformation and then doubled down on it" stage. No one is abusing you, and you aren't a victim when someone wonders why you're misinforming people. If what you are saying doesn't add up (temporal chicken-and-egg, partial location information, etc.), focus on that instead of attacking people who are giving you the feedback that what you are saying doesn't add up.
You originally said that a server would have to render the game and use the z-buffer to do any occlusion culling, but this is not only not correct, it is contradicted by something you yourself linked. Why not just admit that this was a guess and not from experience or research into how game engines work?
"But even that's not quite good enough "
You are the only one saying that. Going from seeing every player on the map all the time to only seeing players a few frames before you would have seen them anyway is a huge leap, which is, again, what people are talking about and exactly what you linked.
"should the client really get the full location information? "
What partial location information are you envisioning here?
Again, focus on backing up what you originally said first instead of trying to shift the goalposts from how servers would "have to" do occlusion culling.
I don't agree - but I think you're really just trying to get a reaction by being as aggressive and contrary as possible rather than actually going on what I've written, so I'm going to leave you to it from here.
Parent: "An untrusted client would not receive the enemy position until the enemy is visible. This, however, comes with the drawback that the server needs to do the culling - That's why pretty much nobody does it right now."
You: "But... it is done with a z-buffer.
If an opponent is obscured behind a nearby pillar or something, that's not going to be culled in software - that's done by the hardware z-buffer as part of the render process."
Then I explained why this doesn't make sense on the server as a generalization and isn't necessary from a technical angle.
Then you ignored that you were both snarky and wrong, provided your own source which directly contradicts what you originally said, and ultimately called yourself a victim of aggression when I pointed this out.
In Counter-Strike there are footstep sounds with spatial audio. How can the server send me that info in a way that won't reveal the player's direction? Hearing players coming before you see them is a huge part of the game.
Think about how many times a second you have to trace a shot.
Now think about how many times a second you'd need to trace from every pixel on the screen to every part of the geometry on every opponent, to check whether a player was legitimately able to view any part of them.
If you actually understood your own link you would see that there is no reason to trace every pixel on the screen when you can make a bounding box that covers motion and trace the vertices.
Anyone familiar with game engine programming would never consider what you are saying. That link is a more in depth version of what I just said, ray casts are being done on the server for visibility and have nothing to do with rendering the game to do it. It is literally demonstrating that they are already doing what people were wondering about.
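To make the bounding-box idea concrete, here's a hedged sketch (all names and numbers invented): expand the enemy's box by their worst-case movement over one network round trip, then only the corners need a line-of-sight test rather than every pixel.

```python
def expanded_box(center, half_size, max_speed, latency):
    """2D axis-aligned box grown by worst-case movement over `latency` seconds."""
    r = half_size + max_speed * latency
    cx, cy = center
    return [(cx - r, cy - r), (cx + r, cy - r), (cx + r, cy + r), (cx - r, cy + r)]

def should_reveal(viewer, corners, blocked):
    """Reveal the enemy if any corner of the expanded box has a clear line of
    sight. `blocked` stands in for a real ray-cast against level geometry."""
    return any(not blocked(viewer, c) for c in corners)

# 500 ms of latency is exaggerated for readability; it pads the box by 3 units.
corners = expanded_box((10.0, 0.0), half_size=0.5, max_speed=6.0, latency=0.5)
print(corners[0])  # (6.5, -3.5)

# Toy occluder: everything past x > 8 is behind a wall.
print(should_reveal((0.0, 0.0), corners, lambda v, c: c[0] > 8))  # True
```

The padding is exactly why the position can arrive a few frames before the model is on screen - the trade-off the rest of this thread is arguing about.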
A bounding box is what we'd call an over-approximation.
Using an over-approximation causes the opponent's location to be revealed to the client even when the opponent isn't quite on screen yet, requiring the client to be trusted to not show this information early, which is what people in this thread want to avoid.
That's the whole point of the discussion.
This is what the article is showing - can you see how the red outline of opponents appears early, and how the client is being relied upon to hide them until they're actually visible? That's what people don't want.
You, for some reason, are ignoring what you originally said to focus on something else you seem to misunderstand the context of.
What you originally were saying was that you would have to render polygons in hardware for the server to have any idea about occlusion, which the link that you gave not only disproves, but assumes that no one would think in the first place.
The whole point is that wall hacks let you see people running around the whole level and it is just a matter of work for the server to only send positions a few frames before you are going to see a player.
Everyone else is on the same page, but you think the player position being sent right before they appear is a problem? That's the solution in your own link.
> The whole point is that wall hacks let you see people running around the whole level and it is just a matter of work for the server to only send positions a few frames before you are going to see a player.
...and when an untrustworthy client gets that info it can highlight the opponent just before they come into frame, or highlight them fully even when they're mostly concealed, giving you an advantage.
That's the point of the thread. That's what people want to avoid. That's what the link wants to avoid, and says it doesn't manage to quite do and explains why it's hard.
Question: why can users see through walls locally? It seems like there should be some sort of occlusion. I guess it's too slow to calculate and causes too much server-side processing?
> I guess it's too slow to calculate and causes too much server-side processing?
Bingo, game servers need to be as lightweight as possible because whatever calculations they have to run need to run per player per tick. Detailed occlusion calculations would be impractical, so at best it's very rough. And of course you don't want a situation where an opposing player isn't even seen until they've already shot you, so it needs to err on the side of visibility.
Every latency-sensitive online game has to make a bunch of tradeoffs between performance and security, and performance is generally more important.
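To put a rough number on "per player per tick": even the cheapest scheme needs an ordered visibility check for every (viewer, target) pair, every tick. A back-of-envelope sketch with invented lobby sizes:

```python
def checks_per_second(players, tick_rate):
    """Ordered (viewer, target) visibility checks per second: each of the
    `players` viewers must be checked against every other player, per tick."""
    return players * (players - 1) * tick_rate

print(checks_per_second(10, 64))   # 5760 for a 10-player lobby at 64 ticks
print(checks_per_second(100, 30))  # 297000 for a 100-player battle royale
```

Each of those checks is itself a ray-cast or precomputed-visibility lookup, which is why servers lean on cheap approximations rather than exact occlusion.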
They keep honest people from playing the game unless you have a clean install of Windows with no blacklisted drivers or software installed. Not to mention how these things basically hook themselves into critical system APIs, acting more like malware. Valorant is probably the worst example of this. Community-run servers are the best form of "anti-cheat".
The problem is that developers treat the PC like a locked console, which is a completely fruitless uphill battle. The PC gives power to its users, while consoles give power to the developers. PCs are designed not to sandbox or lock you in; you can do anything with them without having to break out of a sandbox first. The mindset of developers that deploy intrusive anti-cheat is to keep users locked in so they can ship their centralized server model and hope to deal with a hopefully smaller number of cheaters themselves, instead of giving moderation power to the users.
If you enable nested virtualization in your host and shove Valorant in a VM with Hyper-V (through what I believe is a feature in Windows, but forgot the name) Valorant should actually run. Or at least it did a few months ago, not sure if it does work now. Worth a try.
Only issue is that a few custom maps crash, and there's a weird, minor performance issue at some point in the LOTV campaign menus (but not the game itself). Haven't had an issue in a ladder game in years. And I play SC2 a lot (too much).
Really, is that why Tarkov drops me when I try to start a match? At least CS:GO told me their anti-cheat hated my setup (Windows 10 in a Xenserver VM) and I was able to get a refund.
Note that the game vendor selects which of the available features they want to apply to their titles. BattlEye allows you to stop people using VMs, if the game vendor opts in to this stupid feature.
Does running CS:GO in a VM impact trust factor? Trust and prime are pretty much the only thing to reduce cheaters encountered in your matches since CS:GO does not have a working anti-cheat.
Interesting, I didn't even know you had external parties providing league based (I'm guessing) match making. All matches in Dota 2 run through the official valve coordinator or are pre-made lobbies. There is a LAN only build iirc, but not widely available to the public.
For EAC, they require a variety of signals to ban someone (unless it’s an obvious thing like detecting a known cheat). An honest VM setup that doesn’t obfuscate probably counts as one potential signal that you are cheating; an obfuscated one that they are able to detect might be a stronger signal.
I do the vfio thing as well, do you happen to remember the registry editing you had to do to stop stuttering in games? There's a program that can set and unset all the stuff you need but evidently I deleted it.
It's the only thing I am missing to build another one or rebuild the one I have. I wish I had believed it was going to work perfectly when I started and wrote everything down.
I don’t recall doing any registry edits. Most of the perf work I remember doing was on the VM side - getting the little tweaks in libvirt settings, matching vCPU topology to physical, keeping VM cores from running Linux system processes with GRUB flags, using a dedicated USB controller and sound card. I think I set the MSI stuff gnif mentioned as well, though it wasn’t critical for me. Generally, stuttering will be resolved by making sure time-critical events (like interrupts) are delivered quickly to the guest.
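For anyone reproducing this, the vCPU pinning lives in the libvirt domain XML; a rough sketch (core numbers are examples for a hypothetical 8-core CPU, yours will differ):

```
<!-- Pin 4 vCPUs to physical cores 4-7; keep the emulator thread off them -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
  <emulatorpin cpuset='0-3'/>
</cputune>
```

Pair this with something like isolcpus=4-7 on the kernel command line (or systemd's CPUAffinity) so the host doesn't schedule its own work on those cores.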
+1 on the writing stuff down :D I did it twice and documented pretty thoroughly the second time around - notes before each change, performance deltas from testing, notes on whether it worked. It really helps.
I have a VFIO setup as well, but with dual GPUs. As you say, it is pretty awesome. I keep setting up other OSes for fun. I have an old Nvidia NVS 300 card that is supported natively under macOS, and I also fired up a Windows XP setup for grins. There were some interesting snags in getting each one going, and I learned a lot along the way.
The host is headless? So you're streaming the display via VNC or something to another system, or do you mean the host has a head but it's just being passed thru directly to the guest?
Headless host means you can only access the host system via SSH or web interface (e.g., if you’re running Proxmox). The guest VM “owns” the single GPU.
This is an interesting question, since it's actually the underlying strategy of Microsoft (I don't imply it's a wrong thing).
My personal motivation is that Linux power users will miss the control, or at least, customizability, of the operating system, which is something Linux does, and Windows doesn't (as they have different targets).
Also, not to be underestimated, security (although for me it's only a very small factor).
Tried WSL; it was buggy and had some issues with filesystem performance, I think. I prefer having 100% real Linux and Windows OSes that just work over slightly buggy workarounds. Plus, I also host a lot of homeserver services (media server, SMB server, Postgres DB, etc.) on the Linux host, and it's cleaner having those run on the host. If I want to do something very demanding on the host, for example, I can shut down the VM.
WSL2 is using a real Linux kernel with almost no limitations.
But it doesn't pass through the GPU. Therefore graphical applications are relatively slow.
Wouldn't it be easier to have two devices, one specifically designed for gaming and one running 24/7 for home server services? A cheap used ThinkPad could serve as the Linux device, so it doesn't have to be expensive. Plus, if you value the time and effort you have to put into your current setup, it might actually be cheaper.
Technically speaking, for machines where VFIO works, there is no maintenance, and the setup can be easy (excluding one significant issue due to a specific linux kernel upgrade, my VFIO setup procedure has been essentially the same for years, on multiple machines).
So, assuming that VFIO setup/maintenance is very easy, having a second machine is just redundant.
One convenience not to forget is safety. If a Windows VM gets infected for whatever reason, rolling back the system is performed by literally deleting one file.
You basically just need an extra graphics card (or even a headless host). And why would you not want a fast Linux OS? Many SSDs and lots of RAM to make the IDE fast, a powerful CPU for compiling, a good GPU for better latency/Hz.
Because some cheats use a VM environment to hide from detection as they can act on the VM from outside of it.
The solution though is not to ban VMs, but to push vendors like AMD and Intel to enable access to, and enforce usage of technologies like SEV if running inside a VM.
Server-side solutions don't catch all cheats. They can block actions that are impossible according to the game rules but they cannot prevent clients from disclosing too much information to the player about other players, or automating actions that are technically possible, like using aimbots.
You can definitely handle some of those situations server side (the key word being "some") with enough engineering effort.
In regards to player positions: check which player locations are occluded and wouldn't be visible through the geometry, then only send the valid ones for each player. Of course, doing this on high tick servers could prove to be computationally intensive.
In regards to aimbots: the clients already send you information about where they're looking so that it can be displayed to other players. Attach some mouse movement metrics and from that you'll sometimes be able to infer the most naive aimbots instantly.
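A naive version of that mouse-metric idea, as a hedged sketch (thresholds and names invented): flag view-angle traces that turn an implausibly large angle in a single tick with no ramp-up.

```python
def max_tick_turn(yaw_samples):
    """Largest per-tick change in yaw (degrees), wrapped to [-180, 180]."""
    return max(abs((b - a + 180.0) % 360.0 - 180.0)
               for a, b in zip(yaw_samples, yaw_samples[1:]))

def looks_like_snap(yaw_samples, threshold_deg=90.0):
    """Crude flag: did the view ever turn more than `threshold_deg` in one tick?"""
    return max_tick_turn(yaw_samples) > threshold_deg

human = [10.0, 12.5, 16.0, 21.0, 24.0]   # smooth tracking
bot = [10.0, 11.0, 175.0, 175.0, 175.0]  # instant 164-degree snap to target
print(looks_like_snap(human), looks_like_snap(bot))  # False True
```

A real system would look at acceleration profiles and injected noise too, since any cheat author can smooth the snap; this only catches the most naive ones, as the comment above says.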
> In regards to player positions: check which player locations are occluded and wouldn't be visible through the geometry, then only send the valid ones for each player. Of course, doing this on high tick servers could prove to be computationally intensive.
What's your tolerance on this? Too low and players will complain that other players pop into view and kill them in the event of latency. Too high and cheaters still have access to the most valuable cases of information, when there's a chance for one player to get the drop on the other.
What about strategy games which rely on their lockstep simulation for performance? How would an RTS work if it's sending the locations of 100s of units in real time versus just player actions. Do you want to have to implement prediction and deal with warping in such a game?
1) be fair and decide upon some value that should cover most cases, make the outliers suck it up, like some games kick those with higher pings
2) don't be fair and base the threshold of visibility on predictions about the movement of the entities in the following ticks, based on their probable movement speeds as well as the ping times of each player; the player with the higher ping might receive the position of the other about 10 frames before they round a corner - imperfect, but it should still prevent ESP across the map
3) don't be fair, base this tolerance on hidden metrics about how trustworthy each of the players is considered, based on whatever data about them you can get, a bit like hidden ELO - you can probably game or abuse this system with enough effort, but it shouldn't make a difference in the lives of most legit players, since it shouldn't matter whether a model that you're about to see was rendered 5 or 10 frames before you actually did
4) enforce regional matchmaking by default and only show servers with acceptable ping times for your system (if any at all)
As for RTS games, that should be even simpler - most have some sort of a fog of war mechanic. Given that, you could probably come up with some data structure to represent everything that's visible to your side (like an octree) and send all of the models within it, without worrying about checking individual positions.
As for warping: the exact same way as in any online game, probably by some interpolation. If you receive a position from the server, the entity should be visible at a certain position, if you do not, then it shouldn't be visible (or maybe send the position in which it should disappear, with an additional flag). If you don't get the data for a while, handle it however you would stale data - like ARMA 3 does with entities just standing around or other games with them running in place, which is pretty funny.
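The fog-of-war filtering described above can be sketched in a few lines (sight radius and positions invented; a real RTS would use a spatial index like the octree mentioned, not this brute-force loop):

```python
def visible_units(friendly, enemy, sight_radius):
    """Fog-of-war filter: only enemy units within `sight_radius` of any
    friendly unit get sent to this player."""
    r2 = sight_radius * sight_radius
    return [(ex, ey) for ex, ey in enemy
            if any((ex - fx) ** 2 + (ey - fy) ** 2 <= r2 for fx, fy in friendly)]

friendly = [(0, 0), (20, 0)]
enemy = [(3, 4), (50, 50), (22, 1)]
print(visible_units(friendly, enemy, sight_radius=6))  # [(3, 4), (22, 1)]
```

The unit at (50, 50) never leaves the server, so a wallhack on the client has nothing to reveal.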
Interestingly, given it was one of the strategy games I was thinking of when I made that comment, the Paradox devs for CK3 commented on why they use a lockstep architecture and not sharing the state of the game by server decided POV in their dev diary a couple of days after: https://forum.paradoxplaza.com/forum/threads/anatomy-of-a-ga...
Of course I don't believe that it'll work 100% of the time, since nothing will.
Fighting against cheating in online games is going to be a constant arms race.
That's not to say that detecting most of the naive implementations isn't worthy of the effort.
It won't always work consistently but it should be pretty obvious when someone is lerping between two quaternions. Then, you can build upon that and attempt to detect small bits of random noise that'd be applied upon said interpolation and go from there.
This is what Valorant does, and it just does not work. People saying "yeah, game devs are lazy, why isn't everything done server side" have a really naive view of game dev.
The short version is that you can't have a great experience for online games if you try to create a client as a dumb terminal.
I didn't mean to say they're lazy. I generally dislike the studios, but the developers there are usually brilliant.
I was thinking that studios were being cheap. Why invest in a proper server infrastructure if you can make clients install abusive software... Maybe I'm wrong but it always looked to me that way.
Don't disclose to the client anything not in their view.
I know this is sometimes impossible and/or too costly to implement but it should be possible to find a compromise that prevents most of the blatant cheaters, eventually.
Also helpers like: In any score event, for randomly selected players, analyze the last actions taken.
You just cannot trust the clients. People will find creative ways of reading the memory of their own hardware, whatever you do.
> Don't disclose to the client anything not in their view.
Either full of edge cases (how do you efficiently compute visibility, and can you prevent models from popping in as a result of latency?) or computationally expensive[0]. Valorant, CS:GO, League of Legends, and Dota 2 are some of the games that I know implement server-side occlusion to minimise the impact of wallhacks, but eventually a client will still need information like the position of an audio cue such as footsteps, which cheats can make use of.
> can you prevent models from popping in as a result of latency
Can you do that well enough on the client? The client can add some prediction on where someone is moving, but so can the server. And enemies killing you due to lag is happening already with current architectures.