My solution to this problem is to run Windows in a VM, with a graphics card passed through to it. That way I can run (and test!) games and other apps I still need every once in a while without dual booting.
I use qemu/kvm, with OVMF for the firmware to get around some VGA problems, and pass a discrete card to the guest while using the integrated Intel graphics for the host. It requires CPU and motherboard support, but those aren't too uncommon on newer hardware.
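For the curious, the qemu invocation ends up looking roughly like this. This is just a sketch: the OVMF paths, memory size, disk image, and the 01:00.0/01:00.1 PCI addresses are placeholders you'd swap for whatever your card and distro actually use.

    qemu-system-x86_64 \
      -enable-kvm -machine q35,accel=kvm -cpu host -m 8192 \
      -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
      -drive if=pflash,format=raw,file=/path/to/my_OVMF_VARS.fd \
      -device vfio-pci,host=01:00.0,multifunction=on \
      -device vfio-pci,host=01:00.1 \
      -drive file=/path/to/windows.img,format=raw

The two vfio-pci lines are the GPU and its HDMI audio function; both usually have to go to the guest together.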
Be careful with GPU passthrough in general; you are giving Windows in that VM direct memory access. [1] In the last year or so, a group was able to overwrite Xen (in order to gain additional privileges outside the VM) merely by being given GPU passthrough. [2] Even if you aren't using Xen, PCI passthrough necessarily entails the in-VM OS getting direct memory access, which more or less spoils any security separation between the in-VM OS and the host OS.
To be clear, I don't think very many people are likely to have to deal with this as a security problem. I just think it's appropriate to be aware of the power/danger inherent in the seemingly benign nature of GPU/PCI passthrough.
[1] "you can use PCI passthrough to assign a PCI device (NIC, disk controller, HBA, USB controller, firewire controller, soundcard, etc) to a virtual machine guest, giving it full and direct access to the PCI device" (http://wiki.xenproject.org/wiki/XenPCIpassthrough)
[2] Sorry, I can't find a link.
My setup passes through a discrete PCI-Express graphics card to Windows running under KVM, and the card is restricted to the address space allocated to the VM using the motherboard's IOMMU.
You are correct that it depends on whether your motherboard has an IOMMU [1]. Nevertheless, given the following paragraph, it seems to be the reverse: it is generally unwise to hand out PCI pass-through willy-nilly unless you have confirmed that your hardware has an IOMMU.
Xen documentation explicitly states how to determine whether you have an IOMMU on your hardware.[2] The hardware listed here [3], while out of date, conspicuously notes, for example, that "Core i7...most "K" versions don't support VT-d", which, according to Intel [4], means the i7-4770K does not support IOMMU (VT-d). This tells me that if you are concerned about this issue, you must check your hardware, even if it is new.
[4] The i7-4770K probably lacks IOMMU because supporting it would negatively affect performance on a CPU that is sold as a near top-of-the-line gaming CPU, a market that generally excludes most virtualization use cases.
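For what it's worth, on a machine you already have in front of you the check is quick. A rough sketch (exact messages vary by kernel and firmware; under Xen you look at the hypervisor log rather than the kernel's):

    # does the CPU advertise virtualization extensions at all?
    grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
    # did the kernel find and enable an IOMMU? (Intel prints DMAR lines)
    dmesg | grep -i -e dmar -e iommu
    # under Xen, check the hypervisor log instead
    xl dmesg | grep -i 'i/o virtualisation'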
The IOMMU hardware ensures security of the VM's raw device access while the VM is running. The only significant security risk I'm aware of is if the VM has a chance to update some firmware (like an option ROM) on the device you give it access to, which may enable it to do evil things when the host system next reboots.
Right, Intel's an ass about product segmentation. But most hypervisors don't offer the option of doing any sort of PCI passthrough without a functional IOMMU, so they're not accidentally exposing anyone.
It would be sensible for hypervisors to require IOMMU for all PCI passthrough, but it looks like Xen allows it without IOMMU for paravirtualised guests:
"VT-d Pass-Through is a technique to give a domU exclusive access to a PCI function using the IOMMU provided by VT-d. It is primarily targeted at HVM (fully virtualised) guests because PV (paravirtualized) pass-through does not require VT-d (altough it may be utilized too)."[1]
Yeah, I ran into that wall 2 weeks ago. I tried to switch to Linux for the host and run my Windows 8 VMs in VMware on Mint. After fighting with the nvidia drivers at install for a bit I got it working, but VMware with dual monitors is super slow on nvidia (780 here) in Mint: the windows in the guest VM were drawn as if by a single CPU core at half speed, instead of on the GPU. With a single monitor, no problem.
I had to roll back to Windows for the host to make the VMs work nicely. A bummer, considering that moving away from Windows takes that first step, and now I have to stick around for longer. Perhaps next year...
This is pretty much the preferred cure from what I've gathered. If you can build a PC with two GPUs (or just one PCIe card plus integrated graphics), you can game and use Photoshop nearly natively.
But as you have said, it's not as straightforward as I'd like. I haven't tried it yet, but it's nice to see others having success.
Are there any up-to-date guides and best practices for this?
https://bbs.archlinux.org/viewtopic.php?id=162768 is the main point of discussion for it - too much to read through entirely, but the top posts cover the essentials, and it contains a lot of valuable detail on quirks for specific hardware if you search. Some of it is outdated though (e.g. the kernel now includes a lot of the patches), and it's targeted at Arch, but most of the instructions should apply to any distro.
Another key point: you need VT-d/IOMMU support on both your motherboard and CPU, which isn't universal (only the last few generations of Intel CPUs have it, with some exceptions, and motherboard support can vary between OEMs). Check https://docs.google.com/spreadsheet/ccc?key=0Aryg5nO-kBebdFo... to see if others have had any luck with your hardware, or before making any purchases.
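If you already own the hardware, you can also check directly whether the IOMMU comes up and how devices are grouped (a sketch, Intel naming assumed; AMD uses amd_iommu/AMD-Vi instead):

    # after booting with intel_iommu=on on the kernel command line, list the
    # groups the kernel built; the GPU and its HDMI audio function should
    # ideally sit in a group of their own
    find /sys/kernel/iommu_groups/ -type l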
Two different nvidia cards should work, but I don't think it's possible to do it with two cards of the same model.
You can't switch a GPU between host and VM without rebooting; you have to assign one GPU to the VM through kernel boot options, which hide it from the host.
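Something along these lines (the vendor:device IDs below are placeholders; get yours from lspci -nn):

    # kernel command line, e.g. in the GRUB config:
    intel_iommu=on vfio-pci.ids=10de:13c2,10de:0fbb
    # or with the older pci-stub approach:
    intel_iommu=on pci-stub.ids=10de:13c2,10de:0fbb
    # note: vfio-pci.ids only works from the kernel command line if vfio-pci
    # is built into the kernel; otherwise pass ids= as a module option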
A lot of people also just use a dedicated (Nvidia/AMD) GPU for the Windows VM for gaming and heavy-duty things, and only use their onboard Intel GPU for the Linux host, which is sufficient for normal usage if you're doing all your gaming through the VM anyway.
I believe it should be possible to use two cards of the same model, using pci-stub instead of blacklisting the driver. I've never gotten it to work, but I also think it should be possible to move a card from the guest back to the host (though that might require some more work on the kernel and/or other components).
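Untested on my end, but in principle the trick for identical cards is to claim one of them by its PCI address rather than by vendor:device ID, something like (addresses are placeholders):

    modprobe vfio-pci
    # unbind one specific card from its current driver...
    echo 0000:02:00.0 > /sys/bus/pci/devices/0000:02:00.0/driver/unbind
    # ...and steer just that one to vfio-pci, leaving its identical twin alone
    echo vfio-pci > /sys/bus/pci/devices/0000:02:00.0/driver_override
    echo 0000:02:00.0 > /sys/bus/pci/drivers_probe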
Do you know if it is viable to run Windows 8 in a VM for minor Photoshop and Illustrator work if you only have integrated graphics (Intel HD 5500)?
I recently switched from OS X to Linux on the new Dell XPS 13, and I'm not sure if I should stay with my current dual-boot setup, use a VM, or get rid of Windows completely and invest time in learning Gimp and Scribus as alternatives.
I've been running Windows 7 as a Xen guest using Intel GVT-g on my i3-5005U (2C4T, HD Graphics 5500) for the past week. I've pinned 2 vcpus each to Dom0 (Ubuntu 15.04) and DomU, and use both simultaneously in a dual-monitor setup.
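Roughly, the pinning looks like this (the domain name and CPU numbers are placeholders for my setup):

    # Dom0: limit and pin its vcpus via the Xen boot line
    #   dom0_max_vcpus=2 dom0_vcpus_pin
    # DomU: pin all of the guest's vcpus to the other two threads
    xl vcpu-pin win7 all 2-3
    xl vcpu-list          # verify the affinity took effect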
I can't comment on Photoshop or Illustrator as I have no experience with those programs, though I do a fair bit of work with PCB CAD tools: OrCAD, EAGLE. I haven't run into any major issues so far. Media playback works fine btw; mpv does not report any skipped frames when playing 1080p webm files downloaded from YouTube with youtube-dl.