
Ask HN: What hypervisor are you using? - johnnycarcin
We are looking at bringing our apps back in-house and running them on some
type of hypervisor on bare metal, so I was curious what others are using. I
know a while back KVM was the way to go because it was in the mainline
kernel, but I believe that has changed and Xen is now pretty painless to get
up and going. Docker is not an option, sadly.

We are not looking to build a "cloud" or anything, as most of our
applications do not need that kind of architecture. Basically just a
physical cluster we can deploy VMs to.

We use Consul for service discovery, so we essentially just need some type
of solution that lets us hit an API (or run a command locally) to fire up a
pre-defined virtual machine on one of our various servers. Performance of
the virtual machines is our top priority, with ease of management a close
second.

Currently we use Salt, which appears to have good support for working with
KVM. That is a major bonus, but not something that ties us to KVM 100%.

Anyone out there using barebones hypervisors in production who can share
their experiences? Right now it honestly seems like a toss-up from what I've
been reading.
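
For reference, a "run a command, get a VM" workflow with Salt's virt
execution module looks roughly like this. The minion name, VM name and image
path are made up; check the salt.modules.virt docs for the exact arguments
on your Salt version:

```shell
# Fire up a pre-defined VM on a specific hypervisor minion
# (hypothetical names throughout):
salt 'hv01.example.com' virt.init vm01 2 2048 salt://images/base.qcow2

# See what is running on that host:
salt 'hv01.example.com' virt.list_domains

# Start/stop an existing definition:
salt 'hv01.example.com' virt.start vm01
salt 'hv01.example.com' virt.shutdown vm01
```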
======
windowsworkstoo
Having used, extensively and in anger, Hyper-V 2008/R2/2012, VMware ESXi 4
through 5.5 and (Citrix) XenServer, for Windows and Linux VMs, my personal
favourite is XenServer.

Hyper-V works, but has terrible admin tooling, especially if you're not on a
domain.

VMware is a fine hypervisor, and if you are happy with what it provides then
ESXi free is hard to beat.

But if you want a cluster on the cheap, with proper clustering support,
XenServer is hard to beat. I use it extensively, especially in SMB and local
dev environments: a couple of XenServer hosts backed by fast-ish NFS SANs.
Easy to maintain and administer, with good tooling from both a GUI and a CLI
point of view, without actually having to dish out any money.

That said, if someone else was paying, ESXi would be just as good if you
needed vMotion/Storage vMotion etc.

~~~
lewisl9029
Agreed on the horrible Hyper-V tooling.

I still use it begrudgingly though, because my dev machine is also a gaming
machine, and Hyper-V's GPU passthrough to the primary OS (Windows) is nigh
seamless and doesn't require any configuration. My dev machine also happens
to be a laptop, and Hyper-V supports many advanced power management features
(like Connected Standby) that are not well supported on any other hypervisor
I'm aware of; many of them had trouble even entering or resuming from
sleep/hibernation when I last tried them.

------
fractalcat
I've run barebones hypervisors in production for years; KVM is head and
shoulders above everything else I've tried. libvirt and virsh provide a
reasonably nice interface if you don't need web-based tooling, and of course
config management makes everything a lot easier to maintain. I haven't used
salt, but if it has mature tooling for what you need, KVM would be a no-
brainer for me.
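
For the curious, the virsh side of that workflow is roughly the following
(domain name, paths and hostnames are illustrative):

```shell
# Define a VM from an XML description, start it, and list domains:
virsh define /etc/libvirt/qemu/vm01.xml
virsh start vm01
virsh list --all

# Live-migrate to another KVM host (needs shared storage, or add
# --copy-storage-all):
virsh migrate --live vm01 qemu+ssh://host2.example.com/system
```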

~~~
chei0aiV
oVirt is a web interface on top of libvirt; virt-manager is a desktop GUI
for it.

Ganeti is an alternative to libvirt.

------
nwilkens
At [https://mnx.io](https://mnx.io) we are using
[http://smartos.org](http://smartos.org) from the great folks at
[http://Joyent.com](http://Joyent.com)

You can look at [https://project-fifo.net](https://project-fifo.net) for
managing smartos.

I'd be happy to discuss in detail if it looks interesting!

------
vmorgulis
I use libguestfs and virsh to inject files into a VM (without network).

[http://libvirt.org/virshcmdref.html](http://libvirt.org/virshcmdref.html)
[http://libguestfs.org/](http://libguestfs.org/)
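
A minimal sketch of that injection workflow with the libguestfs tools (guest
name and paths are examples; the guest should be shut down first):

```shell
# Copy a file into the guest's filesystem, no guest networking needed:
virt-copy-in -d vm01 ./app.conf /etc/myapp/

# Or interactively with guestfish:
guestfish -d vm01 -i <<'EOF'
upload ./app.conf /etc/myapp/app.conf
EOF
```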

------
SwellJoe
We run KVM on all of our current machines (about 20-25 VMs at any given time),
managed by Cloudmin (eating our own dog food). Before that, it was Xen, which
also worked well. The decision is mostly one of least resistance; KVM has the
best out of the box support on CentOS 7, so it's what we're using. Back when
Xen met that description, we used Xen.

------
Ologn
I have used KVM. I have a standard Ubuntu workstation, but sometimes want to
do something like have a Debian sid system building the trunk version of
GNOME via jhbuild. If it falls apart, my base system is not affected.
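
A throwaway sandbox like that can be stood up with virt-install, something
like the following (names, sizes and the installer URL are examples; check
your osinfo database for valid --os-variant values):

```shell
# One-shot Debian sid sandbox VM under KVM:
virt-install \
  --name sid-sandbox \
  --memory 4096 --vcpus 2 \
  --disk size=20 \
  --location http://deb.debian.org/debian/dists/sid/main/installer-amd64/ \
  --os-variant debiantesting
```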

I have VPSes at Linode and RamNode; both companies use KVM as well. Even if
you're not deploying a cloud, you might look at what the big VPS providers
are using: they depend on a sturdy hypervisor, so a big provider betting on
a particular one is a good indicator.

I have worked at places that used VMware's hypervisors, and they worked
fine. You'll be shelling out more money for the support and such, although
they do have some free options on the lower end. I am not up to date enough
on VMware to know if there are any catches on the free offerings now in
2016.

------
linc01n
SmartOS is a good choice. It provides KVM, and Joyent has open-sourced their
SmartDataCenter:
[https://github.com/joyent/sdc](https://github.com/joyent/sdc)

So you can provision VMs with an API.

They are also developing LXC support, which will give you better performance
in the future.
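
On SmartOS itself, provisioning is driven by vmadm with a JSON manifest,
roughly like this (the image UUID below is a placeholder; in practice you
would import a real image with imgadm first):

```shell
# Describe the zone in a manifest, then create it:
cat > vm.json <<'EOF'
{
  "brand": "joyent",
  "image_uuid": "00000000-0000-0000-0000-000000000000",
  "alias": "web01",
  "max_physical_memory": 1024,
  "nics": [{ "nic_tag": "admin", "ip": "dhcp" }]
}
EOF
vmadm create -f vm.json
vmadm list
```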

~~~
qwertyuiop924
...Wait. Did you just say they're developing LXC? That is hilarious.

They're SmartOS! They're built on the Solaris codebase! They've had
containers since 2005, literally before LXC existed!

~~~
melloc
linc01n's comment was a little unclear. SmartOS isn't doing anything with
the LXC project. What SmartOS has done is revive the branded zones code from
Sun for emulating Linux system calls. This means that you can run Docker
containers on SmartOS as zones, as well as use DTrace, mdb and ptools on
your Linux binaries. You can read up some more here:

[https://wiki.smartos.org/display/DOC/LX+Branded+Zones](https://wiki.smartos.org/display/DOC/LX+Branded+Zones)

~~~
qwertyuiop924
I saw Bryan Cantrill's talk on that! It did sound neat.

------
orionblastar
I use VirtualBox, which is partly based on QEMU code.

I use QEMU sometimes because it supports MIPS and SPARC emulation, so it can
run old operating systems for those platforms.

I once had a task to modify QEMU to get networking working on a Solaris 7
SPARC virtual machine, but there is a bug in that kernel that makes it crash
with the NIC QEMU emulates for SPARC. There used to be a Sun Y2K CD-ROM that
fixed it, and nobody seems to have it anymore. I couldn't modify QEMU in
time to make it work and lost that gig. There are Russian hackers trying to
get Solaris 7 and older releases working in QEMU without getting anywhere,
but that was a few years ago; maybe someone has made progress since.

QEMU runs IBM OS/2 quite well.

------
joshmn
KVM. SolusVM on top for clients / people who don't want to screw with the
command line.

------
VSpike
We've been using Proxmox VE (
[https://pve.proxmox.com/wiki/Main_Page](https://pve.proxmox.com/wiki/Main_Page)
) in production for a while.

It's open source and free: a tweaked Debian with KVM/QEMU virtualization and
LXC container support. It puts a nice web interface on top of it all, but
the underlying OS and CLI are fully available too.

The hosts can be clustered and VMs can be migrated. It doesn't require
shared storage, but can use it if you have it (you do need shared storage
for live migrations).

We pay for support since the cost is reasonable, but you don't have to.
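
For a flavour of the CLI, creating and migrating a guest looks roughly like
this (the VMID, storage names and ISO path are examples):

```shell
# Create a KVM guest and start it:
qm create 100 --name vm01 --memory 2048 --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/debian.iso,media=cdrom --scsi0 local-lvm:20
qm start 100

# Live-migrate between cluster nodes (shared storage required for live):
qm migrate 100 node2 --online
```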

~~~
johnnycarcin
Is there a reason you went with that option? I've never heard of Proxmox VE so
I'm curious to hear why you went that route vs just plain KVM.

------
gnulnx
Depending on your needs, LXC might fit the bill. Otherwise, KVM.
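
A minimal LXC container lifecycle, for reference (container name and the
distro/release/arch arguments are examples):

```shell
# Create a container from the download template, start it detached,
# get a shell inside it, then stop it:
lxc-create -n test -t download -- -d debian -r jessie -a amd64
lxc-start -n test -d
lxc-attach -n test
lxc-stop -n test
```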

------
jlgaddis
Somewhat related: anyone know if the "API" for ESXi/vSphere works on the free
version?

I've considered putting the free hypervisor on a server here at home instead
of using VirtualBox on my laptop for my test VMs but I'd want to be able to
create/provision/etc. VMs using my own tools (and not have to fire up a
Windows machine to run the vSphere client).

~~~
The_Reckoning
As of a few months ago, no. They only open up the API when you license it.
They may change that in the future now that they no longer have a
stranglehold on the hypervisor market, but traditionally they withhold that
API functionality in the free version.

------
im_down_w_otp
Xen, but only because it's the primary target of pretty much every current
unikernel build/deploy chain.

As that becomes less and less the case we may migrate or expand what we adopt.

------
scprodigy
Have you checked with hyper.sh?

------
victorhugo31337
KVM is king (Linux/OpenStack), Xen is dead, Docker is unstable.

~~~
tw04
>Xen is dead

Other than that whole AWS thing...

~~~
toomuchtodo
Which doesn't really count. It's fine that AWS uses Xen; they have the
budget and the team to maintain their fork, warts and all.

