
Ask HN: Why no hardware virtualisation of a dual OS boot? - iammyIP
I need to use Windows for work and like to dual-boot into GNU/Linux for everything else.

Why do I have to reboot the machine to switch the operating system?

Software virtualisation is not an option.

Couldn't there be a BIOS-level implementation of some real dual-boot function, that assigns half of my CPU cores to each OS, splits the available RAM in half for each OS, and then boots them both at the same time, so that afterwards I can switch between them near-instantaneously with a simple button press, as if I were running 2 computers?

What's the big technical challenge here?
Or is this just a fringe desire?
======
jdan
Accessing all of the hardware simultaneously from both systems without
conflicts would prove to be a big challenge (there is more than just CPU and
RAM in a box). Effectively you'd have to implement a complex mechanism for
resource sharing and a way to manage it. Such a thing already exists: it is
called a hypervisor.

------
87
Yes, your desire to not use virtualization is pretty fringe. Modern
hypervisors easily achieve performance almost equivalent to bare-metal. And
virtualization comes with numerous benefits making it outright preferable to
bare-metal for many use cases.

I think you'd be perfectly happy using something like KVM if you have the
right hardware and are willing to figure a few things out.
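For what it's worth, the "right hardware" part is easy to confirm on Linux: KVM needs the CPU virtualization extensions (Intel VT-x shows up as the `vmx` flag, AMD-V as `svm`). A minimal check:

```shell
# Does this CPU advertise the hardware virtualization extensions KVM needs?
# (They can also be disabled in firmware even when the CPU supports them.)
if grep -Eq 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    echo "hardware virtualization available"
else
    echo "not available (or disabled in firmware)"
fi
```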

------
Someone
Apart from the technical problems (who gets mouse or keyboard input? What
prevents OS#1 from low-level formatting a drive that is mounted on OS#2? Etc.):
do you really want a setup where, if one OS and its applications use only a
quarter of your RAM and CPUs, the other OS is still limited to half of both,
with a quarter of the machine lying idle?

You need some kind of hypervisor to solve the hardware sharing problems. Why
not use it for memory and CPU, too?

------
exrook
This is totally possible and is very similar to how I have my desktop
currently set up. Check out this subreddit[0] and this Arch wiki page[1] for a
step-by-step tutorial.

The tricky thing is that for optimal performance you have to give a guest OS
exclusive access to a graphics card. This means you need two graphics cards,
one for the host (or a Linux VM) and one for Windows. However, many CPUs have
integrated graphics that can be driven by Linux while a discrete graphics
card is given to the Windows guest. Additionally, you need some sort of KVM
switch, since two graphics cards means two (or more) display outputs, but
there is another option[2].
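To give a rough idea of what "exclusive access" means in practice: you typically keep the host's driver off the card entirely by binding it to vfio-pci at boot. A sketch of the usual modprobe config (the PCI vendor:device IDs below are placeholders; find yours with `lspci -nn`):

```
# /etc/modprobe.d/vfio.conf -- example IDs only, substitute your own
options vfio-pci ids=10de:1b80,10de:10f0
# Make sure vfio-pci claims the card before the regular GPU driver loads
softdep nouveau pre: vfio-pci
```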

In my case, my processor (Ryzen 2700X) does not have integrated graphics and I
don't have a spare video card to dedicate to the host system. I solve this
with the following setup:

Host (H) - Arch Linux installation running KVM hypervisor

Guest (A) - Primary Linux Install

Guest (B) - Windows Install

On boot I pass kernel arguments[3] to H to instruct it not to touch the
graphics card. This gives me a completely headless boot and leaves my screen
frozen on whatever the bootloader last displayed. Once H boots up, it
autostarts a VM for A, with the GPU passed in as well as the sound card and a
USB controller. From A I can use my Linux install as normal. The interesting
part is when I want to switch to Windows: I ssh into H and kick off a
script[4] that tells A to suspend to disk (aka hibernate), which, as far as
the GPU and other hardware are concerned, is the same as powering off, and
then starts B with all the same devices passed through. I can then do the
same process to go back to my Linux guest.
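Assuming the guests are libvirt domains, the switch script could look something like this sketch (domain names are made up, not my actual config; `virsh dompmsuspend ... disk` needs qemu-guest-agent running inside the guest):

```shell
#!/bin/sh
# Sketch of a switch-to-Windows script, run on the host (H) over ssh.
LINUX_DOM="${LINUX_DOM:-linux-guest}"
WIN_DOM="${WIN_DOM:-windows-guest}"

# Fall back to a dry run (just print the commands) when libvirt isn't present.
command -v virsh >/dev/null 2>&1 || DRY_RUN=1
DRY_RUN="${DRY_RUN:-0}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi
}

# Tell the Linux guest to hibernate (suspend to disk).
run virsh dompmsuspend "$LINUX_DOM" disk

# Wait until libvirt reports the domain off before reusing the GPU/USB/audio.
if [ "$DRY_RUN" != "1" ]; then
    while virsh domstate "$LINUX_DOM" | grep -q running; do sleep 1; done
fi

# Start Windows; its domain XML references the same passthrough devices.
run virsh start "$WIN_DOM"
```

Switching back is the same script with the domain names swapped.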

I haven't done this yet but I'm planning to put the swap/hibernate drives for
both guests on an NVMe SSD so the suspend/resume should be super fast.

The reason I have my main system as a Linux guest instead of just using the
hypervisor directly is that loading and unloading the GPU drivers on the host
is kinda wonky and has resulted in kernel panics for me (which I can't debug
without a display). Additionally, as far as I can tell there's no way to
hot-swap GPUs with display outputs in X, so I'd have to restart all my
desktop programs anyway.

If you have an nVidia GPU you also might run into issues installing the
drivers[5] inside the VM, and have to change some settings in the hypervisor
to get around them.

Let me know if you have any questions, I'm happy to help or provide more
information. Ask here or see email in bio.

[0] [https://reddit.com/r/vfio](https://reddit.com/r/vfio)

[1]
[https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVM...](https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF)

[2]
[https://github.com/gnif/LookingGlass](https://github.com/gnif/LookingGlass)

[3] video=efifb:off (I don't have a good source on this; different systems
may need different args)

[4] this script is not yet written, I have been using SSH from my laptop until
now, I've only had this set up for like a week

[5] search error code 43, it's also mentioned in [1]

~~~
jdan
This is the right answer and definitely what the OP should do, but it isn't
"hardware virtualization" in the sense the original question asked, IMHO.
Still, totally the way to go for what the OP wants; great answer.

------
wmf
This is kind of similar to the Jailhouse hypervisor, but you still need to
virtualize all the I/O and for desktop usage I doubt it would be better than
normal virtualization.

------
sn
Why is software virtualization not an option?

