The #1 thing I've ever wanted is a Linux that lives between the bootloader and Windows, one that lets me achieve native performance in the Windows VM but gives me an environment where I can easily do all the things you can usually do in Hyper-V: create snapshots, clone installs, share SharePoint drives between images, etc. But I do need something that is perfectly stable and just works. This is the right technology (thank you so much for working on it), but it's just not yet at a maturity that makes me feel comfortable putting my livelihood on the line.
Edit: actually, please, if anyone knows something that suits this use-case, even if it costs decent money, please leave a comment.
If you want a low-maintenance version, you could just snag someone's NixOS config - you'd have an easy to reproduce environment that just ensures virt-manager and a really light-weight window manager are installed and then you're done.
Virt-Manager does shared directories (9p, anyway; there's no UI for virtio-fs, but you can still use it by editing the XML). It handles USB 2 and USB 3 forwarding. It does snapshots, it does clones, and you can even leverage Linux filesystems to do far fancier things than are possible on Hyper-V hosts. Etc. (Plus, KVM won't trash your plain ole EXT4 partitions like countless people keep reporting under Hyper-V [and I've personally experienced twice].) It can even do graphics acceleration/virtualization for Linux guests.
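For reference, a hand-edited virtio-fs share in the libvirt domain XML looks roughly like this (the host directory and mount tag below are placeholders; virtiofs also requires the shared memory backing shown):

```xml
<!-- inside <domain>: virtiofs needs shared memory backing -->
<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<!-- inside <devices>: the share itself -->
<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/path/on/host'/>
  <target dir='hostshare'/>
</filesystem>
```

A Linux guest can then mount the tag with `mount -t virtiofs hostshare /mnt`; a Windows guest needs the virtio-fs driver and service from the virtio-win package.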
Virt-Manager is so under-known and under-appreciated - but then again, it's got its rough edges. With some polish there would really be no reason to ever mess with VBox under Linux.
You could even skip ZFS and just use a QEMU image file (qcow2): with qemu-nbd you can present it as a block device, and you can then export that block device as an iSCSI target. Then you can use qemu-img for snapshotting, etc.
The machine running the storage could be pretty low-spec too; a NUC would do it for sure, maybe even an RPi or similar (if you go the qemu-img route, that is - ZFS on an RPi is not feasible, ask me how I know :D).
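A sketch of that qcow2 + qemu-nbd route, with placeholder paths and sizes (the iSCSI export step depends on which target software you use, so it's omitted here):

```sh
# Create a qcow2 image to hold the storage.
qemu-img create -f qcow2 /srv/vmstore.qcow2 500G

# Present it as a block device (which you can then export as an iSCSI target).
modprobe nbd
qemu-nbd --connect=/dev/nbd0 /srv/vmstore.qcow2

# Internal snapshots live inside the image itself; take them while the image
# is not in use (disconnect first with: qemu-nbd --disconnect /dev/nbd0).
qemu-img snapshot -c before-upgrade /srv/vmstore.qcow2
qemu-img snapshot -l /srv/vmstore.qcow2
```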
If you're working on a laptop I would not recommend ZFS; ZoL (ZFS on Linux) doesn't implement freeze and thaw.
Debian is super reliable - potentially the most reliable Linux distro - but this wouldn't be a turnkey solution or anything (for example, PopOS, based on Debian via Ubuntu, focuses more on the out-of-the-box experience). I don't know if Debian can handle your particular needs, but my assumption is that if any distro can, Debian can - it just might take some time.
Personally I love Proxmox as a main operating system, as I get everything from the broader Debian/Ubuntu/PopOS environments and learn a lot about Linux too. But it has taken a lot of my time, so I'd only recommend it if you wanted to invest the time.
Any modern virtualization has very fast CPU virtualization (it's hard to say near-native, as there are always corner cases) and snapshotting tools. I don't know about SharePoint; "clone installs" is a bit fuzzy too, but one can clone installed systems by just copying the underlying image file and ensuring it has a unique identifier (and updating the guest OS license, if required).
If GPU matters, VFIO definitely is part of the solution.
However, if you're trying to achieve a sort of full-system passthrough (e.g. because of drivers that have bugs related to certain hardware components, which seems to be your situation), this will never happen, because certain parts of the guest necessarily need to be emulated (e.g. the chipset). Even passing a USB port is not easy - one actually needs to pass whole hubs (and AFAIK a port may belong to two hubs - USB 2 and USB 3).
What needs to be kept in mind is that, as with standard passthrough, it's subject to IOMMU group handling (in the best case, there's no other device in the same group; otherwise, one needs some trickery, which AFAIK is not 100% guaranteed to work).
I just build the price of the hardware into my hourly rate (which ends up being not very much - $2 per hour), instant-write-off the cost of the machine under AU tax law, tell them they don't need to buy a machine (which they are very happy about), and then everyone is surprised at how fast I can get the job done compared to the in-house teams, which have their hands tied.
I have my own custom-built servers at home now and virtualize everything. Every client has their own VM, and I usually RDP/SSH into them for work.
Level1Techs also has a ton of information and help both on the forum, and their YouTube channel, about setting up Looking Glass and VFIO. The main host of the YouTube channel, Wendell, has forgotten more about nitty-gritty system administration than I will likely ever know in my lifetime. He also just seems like a genuinely good human being.
Also, didn't know that any 4 letter usernames were available here on HN anymore :)
As for support, I do encourage people to join the Discord or head on over to the L1Techs forums as I won't really be monitoring this very closely.
Edit: I mean I am at the moment because I am super stoked to make #1 on HN :D
It's difficult to set up right, but it taught me a lot about VMs and hardware. Once you get it set up well enough, it's relatively painless. I haven't messed with my VM settings in over a year; everything just continues to work smoothly, including Windows updates, driver upgrades, most online games with anti-cheat, etc. If I upgrade my hardware, it might take a day or two of tinkering to get it back up. Based on my benchmarking it runs within ~5% of native perf.
This is still the best guide IMO if you want to set it up: https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF. Single GPU is basically the same as dual GPU, except you have to ensure the Linux host does not take over the GPU or load any drivers for it during boot.
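Keeping the host off the GPU usually comes down to binding the card to vfio-pci before the normal driver can claim it. A minimal sketch following the Arch wiki approach - the PCI vendor:device IDs below are placeholders, look yours up with `lspci -nn`:

```
# Kernel command line: enable the IOMMU (pick the option for your CPU)
intel_iommu=on iommu=pt
# or: amd_iommu=on iommu=pt

# /etc/modprobe.d/vfio.conf: claim the GPU and its HDMI audio function early
options vfio-pci ids=10de:1b81,10de:10f0

# Make sure vfio-pci loads before the regular GPU drivers
softdep nouveau pre: vfio-pci
softdep nvidia pre: vfio-pci
```

After rebuilding the initramfs and rebooting, `lspci -nnk` should show `vfio-pci` as the kernel driver in use for the passed-through card.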
I have a Windows VM for gaming that owns the single Nvidia GPU. I also have a few Linux VMs for development (via VS Code remote) and media management.
As far as storage goes, I don’t have anything too fancy. Proxmox is installed on an SSD. I have a second SSD for VM images. For all other storage (media, photos, VM image backups, etc.), I have a 3 disk ZFS pool consisting of a single RAID-Z1 vdev - yea, it’s risky, but losing the pool wouldn’t be the end of the world.
One of the cool things about this kind of setup is being able to easily restore VMs from backup. Some time back, I accidentally screwed up my Windows install by enabling Hyper-V (nested virt). I panicked at first, but then remembered that I have daily snapshots of the VM. I had it back up and running within 10 minutes :)
All in all, it was fun to setup and has been running very smoothly.
* Are the VM image backups you mentioned done to your pool by means of ZFS snapshots? Or done at the file level with rsync or similar?
* Do you make backups of the Proxmox installation? Similar to before: is Proxmox itself on a ZFS volume, so backups can be done just by taking ZFS snapshots? The installer lets me choose between LVM-Thin + ext4 and a ZFS filesystem, and I was wondering which to choose for maximum convenience.
* "Proxmox is installed on an SSD": isn't that a bit wasteful? I mean, doesn't the Proxmox system just take like 1 or 2 GB at most?
I have a Lenovo ThinkCentre M910q which has a 160GB M.2 NVMe disk and another 320GB SSD... so I am in the process of deciding where to put each thing. For bigger storage I'm also considering whether adding a 1 or 2 TB USB 3 external disk would make sense (to store user backups like photos and documents, and also the server's system backups).
2. No, I do not currently back up the Proxmox config. There is a good script here: https://github.com/DerDanilo/proxmox-stuff
3. If you install Proxmox on a "thin" filesystem (ZFS or LVM-Thin), then yes, you will get snapshot functionality for free. Note that you would have to configure this yourself - Proxmox does not expose a backup feature for its own config.
4. I have two NVMe SSDs. The first is 500GB and is split into two partitions: 100GB for the Proxmox install (LVM + ext4), and 400GB for VM images and containers (LVM-Thin). The second is 900GB and is fully used for VM images. Both are using LVM.
Any notable exceptions?
What's the performance like compared to the same games in a native Windows install? I tried Proton several months ago and I had to go back to Windows because it wasn't even close to good enough as far as compatibility goes.
I'd really love to abandon Windows as a desktop OS and just game in a VM though.
They banned the use of virtual machines back in 2020, and their AC solution does not work on Linux.
Valve seems to be working on it, though their solution might be tailored to their Steam Deck hardware instead of being a generic Proton fix.
Or go full Unix and attach a serial terminal to the console port and happy linuxing from there while you ignore the built-in gaming console running off a VM. This is more or less what every modern console does anyway - a hypervisor under everything. ;-)
The last time I checked, many such games explicitly do not support running in VMs and you risk being banned by obfuscating that you are running the game in a guest OS.
TBH every thread like this has the "but what about anti-cheat?" post, which makes it seem like a big problem, but I think only a tiny minority of games don't work under VMs.
So... 5% of the available titles?
Most are financed by IAPs so I just stay away from them anyway.
FWIW I tried to install Valorant in a VM recently (because I'm tired of installing the rootkit anti-cheat on my main PC in order to play it). It simply fails to launch the anti-cheat, and so doesn't launch the game.
I know it's sometimes too costly, but game studios should invest more in server-side security measures and mostly distrust the clients.
The fact that the client can remove the wall and see something behind it is due to the fact that the client is trusted to do the hiding. An untrusted client would not receive the enemy position until the enemy is visible. This, however, comes with the drawback that the server needs to do the culling - that's why pretty much nobody does it right now.
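A minimal 2D sketch of that idea - the server withholds an opponent's position until an unobstructed line of sight exists. All names here are illustrative, and a real engine would use precomputed visibility or conservative raycasts rather than this brute-force segment test:

```python
# Toy server-side visibility culling: only send opponents whose line of
# sight from the viewer is not blocked by any occluding wall segment.

def ccw(a, b, c):
    # True if the points a, b, c make a counter-clockwise turn.
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    # Proper segment intersection test (ignores collinear edge cases).
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4)
            and ccw(p1, p2, p3) != ccw(p1, p2, p4))

def visible_opponents(viewer, opponents, walls):
    # Keep only opponents with a clear sight line from the viewer.
    return [opp for opp in opponents
            if not any(segments_intersect(viewer, opp, w1, w2)
                       for w1, w2 in walls)]

walls = [((5, -5), (5, 5))]   # one vertical wall at x = 5
viewer = (0, 0)
hidden = (10, 0)              # behind the wall: position withheld
exposed = (3, 2)              # same side as the viewer: position sent
print(visible_opponents(viewer, [hidden, exposed], walls))  # [(3, 2)]
```

Real implementations also have to send positions slightly early to absorb network latency, which is where the remaining information leak comes from.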
Aimhacks would still be possible, of course, but client-side anti-cheat can't prevent those either.
But... it is done with a z-buffer.
If an opponent is obscured behind a nearby pillar or something, that's not going to be culled in software - that's done by the hardware z-buffer as part of the render process.
You can see this for yourself if you look at a game being run with wireframe rendering. You'll see it's in the same render node so it's still rendered - it's just obscured by closer geometry. And it's how some cheats actually work - they basically turn the wireframe back on!
'Tracing a shot' is casting one single ray.
For example read this article someone else linked https://technology.riotgames.com/news/demolishing-wallhacks-... and look at the last animation.
Also you are forgetting that you just said that line of sight was done in hardware and you didn't explain how that would work for a server testing if shots actually hit.
> You can see this for yourself if you look at a game being run with wireframe rendering. You'll see it's in the same render node so it's still rendered
What does this even mean? What is "it" here and what is a "render node" ? There are hierarchies of transforms and players are going to be separate from the environment. This doesn't actually mean anything.
> it's just obscured by closer geometry. And it's how some cheats actually work - they basically turn the wireframe back on!
Yes, you are restating the context of what people are talking about, not what is actually being talked about, which is the timing of when the server should send visibility information, which is what your link is actually about.
Your link actually directly contradicts what you are saying, since it uses both an expanded-bounding-box-based motion extrapolation and precomputed visibility, neither of which has anything to do with a z-buffer.
Can you see how the red outline of the opponent appears while they're obscured behind the pillar?
When that red outline appears it's showing that the opponent is now being rendered, and that the z-buffer is being used to obscure them from behind the pillar.
This discussion is about how to make the red outline not appear until the opponent is actually visible.
The article goes into lots of ways to make the red outline appear later, but it still appears before the opponent is actually visible on screen.
That's the issue that people want to solve.
Consider an example of an opponent with just one pixel of their gun visible around a corner. How do you send that information to the client without telling them there's an opponent there, so that the user has to actually see the pixel? You'd have to just send that one pixel, right? Now we're talking about rendering server-side!
Yeah, that's game rendering in the engine. That's visualizing something, not illustrating how the server is doing it. Did you actually read and understand your own link?
"That's the issue that people want to solve."
No it isn't, you misunderstood your own link to the point that you have it backwards.
The issue is the server not rendering the entire game from each person's perspective, for every player, every frame.
The problem is being able to see every player walking around all the time.
Think for a moment what would happen if the server actually had perfect visibility - by the time you can see them it is already too late. You should be able to see them and then the server starts sending you a position. By the time you know you should see them, you should have already seen them and the other player pops into frame.
That isn't even buried in your own link, it's at the very top.
"Consider an example of an opponent with just one pixel of their gun visible around a corner. How do you send that information to the client without telling them there's an opponent there, so that the user has to actually see the pixel? You'd have to just send that one pixel, right? Now we're talking about rendering server-side!"
This is gibberish, a lot like Frank Abagnale trying to BS pilots. Once again your own link explains why this is nonsense from a lot of different angles - did you even read what you linked, or did you just look at the pictures? It explains everything clearly.
Why are you so abusive in your replies? What causes you to talk to people like this?
> You should be able to see them and then the server starts sending you a position.
Yes that's what I'm saying you'd need for an untrustworthy client. But even that's not quite good enough - if you can 'see' them but it's just one pixel that the user might miss - should the client really get the full location information? It could highlight the enemy from that when a player would likely miss it otherwise.
> The problem is being able to see every player walking around all the time.
No that's a weaker version of the overall problem. If you give the player's location to the client when the player may not actually be able to see them then you're relying on a trustworthy client.
You originally said that a server would have to render the game and use the z-buffer to do any occlusion culling, but this is not only not correct, it is contradicted by something you yourself linked. Why not just admit that this was a guess and not from experience or research into how game engines work?
"But even that's not quite good enough "
You are the only one saying that. Going from seeing every player on the map all the time to only seeing players a few frames before you would have seen them anyway is a huge leap, which is again, what people are talking about and exactly what you linked.
"should the client really get the full location information? "
What partial location information are you envisioning here?
Again, focus on backing up what you originally said first instead of trying to shift the goalposts from how servers would "have to" do occlusion culling.
You: "But... it is done with a z-buffer.
If an opponent is obscured behind a nearby pillar or something, that's not going to be culled in software - that's done by the hardware z-buffer as part of the render process."
Then I explained why this doesn't make sense on the server as a generalization and isn't necessary from a technical angle.
Then you ignored that you were both snarky and wrong, provided your own source which directly contradicts what you originally said, and ultimately called yourself a victim of aggression when I pointed this out.
Now think about how many times a second you'd need to trace from every pixel on the screen to every part of the geometry on every opponent in order to check if it was visible or not to see if a player was legitimately able to view any part of their opponent.
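To put rough numbers on that per-pixel idea (all figures here are illustrative assumptions - 1080p, ten opponents, a 64-tick server):

```python
# Back-of-the-envelope cost of per-pixel visibility tracing.
# All figures are illustrative assumptions, not measurements.
pixels = 1920 * 1080        # one 1080p frame
opponents = 10              # players that might be visible
tick_rate = 64              # server simulation ticks per second

rays_per_second = pixels * opponents * tick_rate
print(f"{rays_per_second:,} rays/second per player")  # ~1.3 billion
```

And that is per player, before any actual game simulation - which is why servers use coarse precomputed visibility instead.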
Anyone familiar with game engine programming would never consider what you are saying. That link is a more in depth version of what I just said, ray casts are being done on the server for visibility and have nothing to do with rendering the game to do it. It is literally demonstrating that they are already doing what people were wondering about.
Using an over-approximation causes the opponent's location to be revealed to the client even when the opponent isn't quite on screen yet, requiring the client to be trusted to not show this information early, which is what people in this thread want to avoid.
That's the whole point of the discussion.
This is what the article is showing - can you see how the red outline of opponents appears early, and how the client is being relied upon to hide them until they're actually visible? That's what people don't want.
You, for some reason, are ignoring what you originally said to focus on something else you seem to misunderstand the context of.
What you originally were saying was that you would have to render polygons in hardware for the server to have any idea about occlusion, which the link that you gave not only disproves, but assumes that no one would think in the first place.
The whole point is that wall hacks let you see people running around the whole level and it is just a matter of work for the server to only send positions a few frames before you are going to see a player.
Everyone else is on the same page, but you think the player position being sent right before they appear is a problem? That's the solution in your own link.
...and when an untrustworthy client gets that info it can highlight the opponent just before they come into frame, or highlight them fully even when they're mostly concealed, giving you an advantage.
That's the point of the thread. That's what people want to avoid. That's what the link wants to avoid, and says it doesn't manage to quite do and explains why it's hard.
Bingo, game servers need to be as lightweight as possible because whatever calculations they have to run need to run per player per tick. Detailed occlusion calculations would be impractical, so at best it's very rough. And of course you don't want a situation where an opposing player isn't even seen until they've already shot you, so it needs to err on the side of visibility.
Every latency-sensitive online game has to make a bunch of tradeoffs between performance and security, and performance is generally more important.
The problem is that developers treat the PC like a locked console. This is a completely fruitless uphill battle. PC gives power to its users, while consoles give power to the developers. PCs are designed not to sandbox or lock you in; you can do anything with them without having to break a sandbox first. The mindset of developers that deploy intrusive anti-cheat is to have the users locked in so they can ship their centralized server model and hope they can deal with the hopefully smaller number of cheaters themselves, instead of giving moderation power to the users.
Known to work without issue are titles such as
* Battlefield 1/3/4/5
* Titanfall 1&2
* Arma 3
* 7 Days to Die
* Halo: The Master Chief Collection
* Star Wars Squadrons
Only issue is that a few custom maps crash, and there's a weird, minor performance issue at some point in the LOTV campaign menus (but not the game itself). Haven't had an issue in a ladder game in years. And I play SC2 a lot (too much).
Note that the game vendor selects which of the available features they want to apply to their titles. BattlEye allows you to stop people using VMs, if the game vendor opts in to this stupid feature.
It's the only thing I am missing to build another one or rebuild the one I have. I wish I had believed it was going to work perfectly when I started and wrote everything down.
+1 on the writing stuff down :D I did it twice and documented pretty thoroughly the second time around, notes before each change, testing performance delta, notes about if it worked. It really helps.
There was an issue with AMD systems a few years ago, which caused microstuttering.
The Arch wiki is indeed spectacular.
E.g. could you run a high end DAW on the guest with the same performance?
If not, is it something that might be added later?
Kind of defeats the point if your UI becomes Windows again, in my opinion.
My personal motivation is that Linux power users will miss the control, or at least, customizability, of the operating system, which is something Linux does, and Windows doesn't (as they have different targets).
Also, not to be underestimated, security (although for me it's only a very small factor).
Technically speaking, for machines where VFIO works, there is no maintenance, and the setup can be easy (excluding one significant issue due to a specific Linux kernel upgrade, my VFIO setup procedure has been essentially the same for years, on multiple machines).
So, assuming that VFIO setup/maintenance is very easy, having a second machine is just redundant.
One convenience not to forget is safety. If a Windows VM gets infected for whatever reason, rolling back the system is literally a matter of deleting one file.
Why are VMs being blocked?
The solution though is not to ban VMs, but to push vendors like AMD and Intel to enable access to, and enforce usage of technologies like SEV if running inside a VM.
You could correlate community feedback and some machine learning, while also picking easy-to-catch impossible actions.
In regards to player positions: check which player locations are occluded and wouldn't be visible through the geometry, then only send the valid ones for each player. Of course, doing this on high tick servers could prove to be computationally intensive.
In regards to aimbots: the clients already send you information about where they're looking so that it can be displayed to other players. Attach some mouse movement metrics and from that you'll sometimes be able to infer the most naive aimbots instantly.
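A toy sketch of that kind of heuristic - flagging a huge single-tick rotation that immediately "locks on". The function name and thresholds are invented for illustration; real detectors are statistical and far more careful about false positives:

```python
# Naive server-side snap detection from per-tick view angles (degrees).
# Thresholds are illustrative assumptions, not tuned values.

def looks_like_snap(yaw_samples, snap_deg=40.0, settle_deg=0.5):
    # Flag a single-tick rotation larger than snap_deg that is
    # immediately followed by almost no movement (a "snap and lock").
    deltas = [abs(b - a) for a, b in zip(yaw_samples, yaw_samples[1:])]
    for i in range(len(deltas) - 1):
        if deltas[i] > snap_deg and deltas[i + 1] < settle_deg:
            return True
    return False

human = [10.0, 14.0, 21.0, 30.0, 36.0, 38.5, 39.0]   # gradual tracking
bot = [10.0, 10.2, 95.0, 95.0, 95.1]                 # instant 85-degree snap
print(looks_like_snap(human), looks_like_snap(bot))  # False True
```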
What's your tolerance on this? Too low and players will complain that other players pop into view and kill them in the event of latency. Too high and cheaters still have access to the most valuable cases of information, when there's a chance for one player to get the drop on the other.
What about strategy games which rely on their lockstep simulation for performance? How would an RTS work if it's sending the locations of 100s of units in real time versus just player actions. Do you want to have to implement prediction and deal with warping in such a game?
1) be fair and decide upon some value that should cover most cases, make the outliers suck it up, like some games kick those with higher pings
2) don't be fair and base the threshold of visibility on predictions about the movement of the entities in the following ticks, using their probable movement speeds as well as the ping times of each player; the player with the higher ping might receive the position of the other about 10 frames before they round a corner - imperfect, but it should still prevent ESP across the map
3) don't be fair, base this tolerance on hidden metrics about how trustworthy each of the players is considered, based on whatever data about them you can get, a bit like hidden ELO - you can probably game or abuse this system with enough effort, but it shouldn't make a difference in the lives of most legit players, since it shouldn't matter whether a model that you're about to see was rendered 5 or 10 frames before you actually did
4) enforce regional matchmaking by default and only show servers with acceptable ping times for your system (if any at all)
As for warping: the exact same way as in any online game, probably by some interpolation. If you receive a position from the server, the entity should be visible at a certain position, if you do not, then it shouldn't be visible (or maybe send the position in which it should disappear, with an additional flag). If you don't get the data for a while, handle it however you would stale data - like ARMA 3 does with entities just standing around or other games with them running in place, which is pretty funny.
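The interpolation mentioned here is the standard trick: render remote entities between their last two known server updates instead of teleporting them. A minimal sketch (names are illustrative):

```python
# Client-side interpolation sketch: render the remote entity between
# its last two server updates rather than snapping to the newest one.

def interpolate(prev_pos, next_pos, t):
    # t in [0, 1]: fraction of the way from the previous update to the
    # next; clamped so stale data just holds the last known position.
    t = max(0.0, min(1.0, t))
    return tuple(a + (b - a) * t for a, b in zip(prev_pos, next_pos))

# Entity moved from (0, 0) to (4, 2) between two server ticks;
# halfway through the interval the client renders it at (2, 1).
print(interpolate((0.0, 0.0), (4.0, 2.0), 0.5))  # (2.0, 1.0)
```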
See? Even you do not believe that this will work.
Fighting against cheating in online games is going to be a constant arms race.
That's not to say that detecting most of the naive implementations isn't worthy of the effort.
It won't always work consistently, but it should be pretty obvious when someone is lerping between two quaternions. Then you can build upon that and attempt to detect the small bits of random noise that'd be applied on top of said interpolation, and go from there.
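A simplified sketch of that "perfect slerp" tell, using plain yaw angles in place of quaternions: scripted interpolation produces per-tick deltas with essentially zero variance, while human aim is noisy. The threshold here is an invented illustration:

```python
# Detect suspiciously uniform rotation: a scripted lerp/slerp advances
# by an almost perfectly constant angle per tick.
import statistics

def suspiciously_uniform(samples, tol=1e-6):
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return len(deltas) > 1 and statistics.pvariance(deltas) < tol

scripted = [0.0, 2.5, 5.0, 7.5, 10.0, 12.5]    # constant 2.5 deg/tick
human = [0.0, 1.9, 5.2, 6.8, 10.3, 12.5]       # noisy tracking
print(suspiciously_uniform(scripted), suspiciously_uniform(human))  # True False
```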
The short version is that you can't have a great experience for online games if you try to create a client as a dumb terminal.
I was thinking that studios were being cheap. Why invest in a proper server infrastructure if you can make clients install abusive software... Maybe I'm wrong but it always looked to me that way.
I know this is sometimes impossible and/or too costly to implement but it should be possible to find a compromise that prevents most of the blatant cheaters, eventually.
Also helpers like: In any score event, for randomly selected players, analyze the last actions taken.
You just cannot trust the clients. People will find creative ways of reading the memory of their own hardware, whatever you do.
It's either full of edge cases (how do you efficiently compute visibility, and can you prevent models from popping in as a result of latency?) or computationally expensive. Valorant, CS:GO, League of Legends, and Dota 2 are some of the games I know of that implement server-side occlusion culling to minimise the impact of wallhacks, but eventually a client will still need information like the position of an audio cue such as footsteps, which cheats can make use of.
Can you do that well enough on the client? The client can add some prediction on where someone is moving, but so can the server. And enemies killing you due to lag is happening already with current architectures.
End offline AAA gaming?
Would that be OK?
I ask because many laptops used to have similar setups.
I would say the use case here would be a machine that does not have a monitor connected, for example some Bitcoin mining server in the attic.
With Looking Glass the game screen can be streamed to an ultrabook in your living room, with the server's GPU performance.
There was this recent improvement at least: https://phoronix.com/scan.php?page=news_item&px=Linux-5.14-A...
edit: seems there are some howtos - eg
1. Desktop environment, which uses the video card; this is not a big deal, since one can terminate the session
2. video drivers correctly and fully releasing the card; this is possible, but in my opinion, not stable.
the host never sees the card.
Setting up Looking Glass itself wasn't much of a problem though. I got some AAA games running on my Gentoo laptop like butter, though the mouse movement was more jelly unless I VFIO'd my mouse/kb as well. Sound went through Scream over a socket.
Nowadays I play on my Xbox. I ditched Gentoo. I've gotten old. Do you, reader, assume that I've also checked out and gone full Windows?
If you do, then guess again. I'm about to port my whole Gentoo gaming setup to NixOS, and it will probably take me about 15 minutes, plus ~5-10 minutes a year to maintain. How much time do you spend fiddling with your Windows and driver updates? Oh, wait...
Also, Proton doesn't address the myriad productivity applications that people still need Windows for, such as Adobe products, AutoCAD, etc.
Edit: oh, I just rephrased grandparent comment, sorry.
I love qemu/libvirt - the crazy thing is, if you have two disks and use one to boot Windows, you can then boot that same disk as a VM when using Linux, and have it running off another video card (I have two in my PC, one AMD and one Nvidia).
I sort of have it documented at https://github.com/hotsphink/sfink-tools/blob/master/doc/Vir... though that's really a set of instructions intended for a new PC that a bunch of us at my employer recently received. You just want the `viewsetup` stuff. You probably want to leave off the `--auto` flag, so that it'll prompt you per-partition.
The tool goes to some effort to only expose the specific partitions needed. It also makes the virtual disk exactly match the native disk, copying small partitions (and gaps between them) to files and exposing those over loopback interfaces, so that Windows can go crazy and write over everything and it won't break your Linux setup. (The Linux partitions themselves are exposed as loopback devices to empty sparse files of the right size.)
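The placeholder trick described above can be sketched generically (this is not the author's actual tool - just the underlying technique of standing in for a partition with a sparse file of the same size):

```sh
# Create a sparse file matching the real partition's size; it occupies
# almost no disk space until written to.
truncate -s 100G linux-root-placeholder.img

# Expose it as a block device that can be handed to the VM in place of
# the real Linux partition.
losetup -f --show linux-root-placeholder.img   # prints the device, e.g. /dev/loop0
```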
Get the single script file at https://hg.sr.ht/~sfink/sfink-tools/raw/bin/viewsetup?rev=ti... or check out the full repo at either https://hg.sr.ht/~sfink/sfink-tools/ (mercurial) or https://github.com/hotsphink/sfink-tools (git). I keep both up to date.
That works fine.
I do that at work to avoid nuking/tampering with the Windows installation provided to me by IT, while running Linux as my main OS from a second volume.
It's how I "gamed on Linux" for a couple years. Support is basically you and you alone. The dev for Looking Glass is active but the man isn't your personal tech assistant so often you're just doing A/B testing to make something work.
For me, I just didn't want to fiddle with my home desktop that much. I went back to Windows.
It does have it if all you want to do is develop, browse the web and similar things.
Including windows games into the things it needs to run without hassle is kinda unfair in my opinion, as that's pretty far from what this DE is used for regularly.
I went back to Windows on my home PC because of games too, but my work environment with Ubuntu/Regolith was significantly less painful to set up than the WSL hassles I had to jump through on Windows before.
I'm still suffering with Linux for largely philosophical reasons at this point, but quite frankly if I wasn't such an opinionated nerd I'd just go to a normal Windows machine at this point.
I wouldn't recommend VFIO just for gaming, there are better options.
I use it for gaming, software dev, and just in general it's nice to be able to switch OSs for any reason instantly. What better options are there?
Basically it's not worth setting up VFIO just for gaming, but if you're already on top of that mountain, then use whatever. I stopped using VFIO a while back and just bought a laptop that runs Linux without any issues.
I don't really have time for games now, but if I wanted to play I would probably wait until the next LTSC and install it + the latest WSL on my PC. Or I would buy a console, maybe that Steam Deck.
Trouble is, now that my Nix setup is good for everyday use (Not yet development, I wish to see Flakes mature soon) I barely ever tinker with it, might be because it's summertime here too, doesn't encourage me to geek out too much.
What options? I'm interested.
Or install a less annoying edition of Windows like LTSC, configure Unified Write Filter or a similar feature to keep it under control, and try to live with the latest WSL as your Linux. And just buying a console is another option; if not for the general chip shortage, it would be a very good time to do it.
The caveats are basically:
1. Setup is a bit annoying
LG can feed input into any VM but requires guest support for capture, so you need to do the setup with a direct monitor.
2. The passed-through card must have a monitor connected.
IIRC, this is an API issue since it just captures what would go to that monitor.
3. Some things are only visible on the real monitor.
They need different capture strategies for the regular desktop, system desktops (such as lock screens), and secure desktops (UAC prompts). Sometimes the transition fails, sometimes there is no strategy implemented for what you're trying to show.
4. Your CPU and motherboard must support IOMMU passthrough.
For Intel this means using Skylake+ and ensuring that it supports VT-d. For AMD this means using Ryzen with an X-series motherboard.
5. You need separate GPUs for the host and guest.
Blame GPU vendors for making VT-g an enterprise-only feature.
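A quick way to check caveat 4 up front (and to see whether your GPU is isolated well enough for passthrough) is to walk the sysfs IOMMU tree. A minimal sketch, assuming the standard sysfs layout and that `lspci` (pciutils) is installed:

```shell
# Sketch: list each IOMMU group and the PCI devices in it.
base=/sys/kernel/iommu_groups
if [ -z "$(ls -A "$base" 2>/dev/null)" ]; then
  # Empty or missing: enable VT-d/AMD-Vi in firmware and boot with
  # intel_iommu=on (or amd_iommu=on) on the kernel command line.
  echo "No IOMMU groups found"
else
  for group in "$base"/*; do
    echo "Group ${group##*/}:"
    for dev in "$group"/devices/*; do
      lspci -nns "${dev##*/}"
    done
  done
fi
```

The GPU you pass through should ideally sit in its own group (or share it only with its own audio function); if unrelated devices land in the same group, you'd have to pass them through as well.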
1. We are working on it
2. Not if it's a vGPU or a Quadro where EDID spoofing is allowed
3. No, we capture everything now, even the Windows login screen and Windows updates, etc.
5. Very yes!
B3 and B4 both have pretty huge improvements here, but there are still a few rough corners. For example, when logging out the host shuts down immediately upon request, so you can't see the "program X is blocking shutdown" dialog.
But ultimately this is a very minor papercut for a very impressive and useful project.
The first time I tried I just got the Looking Glass splash as soon as I clicked shut down. When I cancelled that and tried again I was able to see both the throbber and the prompt. When I let the VM sit idle for a while before trying again, the LG splash was back. Rebooting the VM also seems to bring back the splash reliably.
Regarding this, I don't know about LG, but with a standard QEMU VFIO setup, one can use a single monitor connected to both cards and switch the input when required.
Just note that not all multi-input monitors are equal: a small minority appear to the GPU as unplugged when the input selection is changed.
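For the switching itself, if the monitor speaks DDC/CI you can often change inputs from the host without touching the monitor's buttons. A sketch using ddcutil (VCP feature 0x60 is the input source; the value codes are monitor-specific, so 0x11 below is just an example — check what your own monitor reports first):

```shell
# Show which input-source values this monitor accepts
ddcutil capabilities | grep -A6 "Feature: 60"

# Switch to that input (0x11 is commonly HDMI-1, but varies per monitor)
ddcutil setvcp 60 0x11
```

Bound to a hotkey, this effectively turns a dual-input monitor into a software KVM between the host and guest GPUs.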
I maintain a guide for setting up VFIO (https://github.com/saveriomiroddi/vga-passthrough), which I frequently use.
My conclusion is: on machines that are compatible with it, VFIO works very well. The technology itself is stable, so working on Photoshop, game development, etc. (from a technological perspective, there's no distinction between the two tasks) is indistinguishable from working natively.
I had VFIO on 4 machines I think, and one had problems which I couldn't solve, while the others worked well.
When used with QEMU, it requires some system settings, QEMU flags, and so on, so it's a bit annoying, but it's a straightforward and documented process.
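For the curious, the host-side pieces usually boil down to something like the following sketch (the PCI IDs and addresses are made-up examples — substitute your own from `lspci -nn`):

```shell
# 1. Kernel command line (e.g. GRUB_CMDLINE_LINUX_DEFAULT): enable the IOMMU
#      intel_iommu=on        # or amd_iommu=on for AMD

# 2. Bind the guest GPU (and its audio function) to vfio-pci at boot,
#    so the host driver never claims it:
echo "options vfio-pci ids=10de:1b80,10de:10f0" > /etc/modprobe.d/vfio.conf

# 3. Hand the device to the VM:
qemu-system-x86_64 -enable-kvm -machine q35 \
  -cpu host -smp 8 -m 16G \
  -device vfio-pci,host=01:00.0 \
  -device vfio-pci,host=01:00.1
```

Tools like virt-manager generate the equivalent libvirt XML for you, so steps 1–2 are typically the only manual part.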
To put it another way: if one wants to use VFIO seriously, it's best to use hardware known to work well, rather than trying to cram VFIO onto a not-very-compatible system.
And also: one needs to be practical. A USB sound card saves countless hours of attempts to use the host's PulseAudio system (meh).
After that it kind of just works, and continues working. I use it to play games and run Office apps, and have not had it break on me in a ~year of use. (Disclaimer: I occasionally contribute to the project now, but remember being impressed at how easy it was to get going when I first tried it out. Getting the VM working at all was the hardest part of the endeavor, but only took a few hours.)
It's only that if you wear a fedora unironically and have a Slashdot account with a four digit user ID.
To go into detail: do monitor color calibration from Linux (does that even exist?) and from Windows conflict here?
Just a warning to readers that it's not the same company :)
Except for gaming, I personally don't see what prevents me from being "Linux first". I'm just using Windows 10 out of habit and laziness.
What is your experience? What makes you still need windows? It's weird because there are so many software alternatives that run on linux and also many initiatives that allow one to be run on linux via emulators and whatnots.
For both work and casual use, there are a lot of small things that add up to it being very productive, and despite regularly giving the Linux desktop a chance, I always gravitate back to Windows (and recently macOS as well).
I feel like most of the QoL features I enjoy are mostly invisible, so it's hard to remember them unless I'm actually experimenting with the Linux desktop. Here are some that do come to mind though.
- Windows supports right-click and drag with context menus. For multi directory file manipulation, this shortcut is shockingly useful. While much file manipulation can be faster on the CLI, certain operations like this are incredibly efficient.
- Binary blobs are really convenient compared to package managers/source building for my daily driver stuff. I have decade old games that just work, and I've never had to deal with version incompatibilities for my tooling, unlike in my Linux environments.
- Common actions like sleep/wake just seem to work better.
- Microsoft Office is really nice.
Like, currently I dual boot into Windows for games, but I can also get into it from VirtualBox if I need to quickly use something on there.
Would I be able to use this with that? So I can use the internal GPU for my local Linux install, and then use the nvidia for the windows?
As a sibling comment stated, you can get a dummy monitor plug on Amazon for like $5 which fools your GPU into thinking there's a monitor attached.
In a nutshell, KVMs aren't worth their money because they have to re-implement a lot of hardware already found in your PC.