
Linode turns 12, transitions from Xen to KVM - alexforster
https://blog.linode.com/2015/06/16/linode-turns-12-heres-some-kvm/comment-page-1/
======
ksec
Linode is great; however, there are three things I would really love to see.

Object Storage - LiquidWeb, RackSpace, and AWS already have it, and many
other hosting companies are providing it as well.

Memory Optimized Plan - Everything is moving in-memory, but most don't need
20 cores to get 96GB of memory. There should be a low-CPU-count plan with
128GB+ and maybe up to 512GB (or higher).

CDN - Please resell a decent CDN or even build your own, so we can get
everything in one place.

~~~
Acconut
I can relate to your point about the memory-optimized plan. Are you aware of
any cloud provider, similar to Linode or DigitalOcean, which allows you to
specify the number of cores and GBs of memory yourself? All the services I
know of only provide fixed machine types.

~~~
corobo
I'd love to be able to fully recommend gandi.net here. I've never had any
issue with them reliability-wise; however, they've switched to some
credit-based payment plan that I just can't seem to get my head around, so
I've stopped using them myself. If you're able to decipher their "no
bullshit" credit system, have at it.

[https://www.gandi.net/hosting/iaas/buy](https://www.gandi.net/hosting/iaas/buy)

Edit: Oh nice, they've added an "Or approximately £xx.xx per month" now.
Might have to give them another go.

~~~
Acconut
Gandi seems to be exactly what I was referring to, thanks.

~~~
corobo
Glad to hear it! Just noticed the price on the lowest tier drops
significantly if you forgo the IPv4 address too, nice.

I may have recommended a service to myself here, an odd moment for sure.

------
btrask
No mention of security? Xen isn't perfect, but according to the Qubes team
it's the best we've got.

> We still believe Xen is currently the most secure hypervisor available,
> mostly because of its unique architecture features, that are lacking in any
> other product we are aware of.

[https://raw.githubusercontent.com/QubesOS/qubes-
secpack/mast...](https://raw.githubusercontent.com/QubesOS/qubes-
secpack/master/QSBs/qsb-018-2015.txt)

~~~
InclinedPlane
That's hypothetical security, of course. In a practical sense Xen has seen
several major security vulnerabilities, such as Venom.

Not that other VM systems haven't suffered similar problems. But when real-
world experience shows that, for example, Xen is no less vulnerable to the
most damaging exploits than any other VM manager, the hypothetical security
advantages evaporate and stop being a useful justification for preferring
Xen.

~~~
walterbell
Venom was a vulnerability in Qemu, not Xen, and Xen has a containment
mechanism (stub domains) for this class of vulnerability. In addition, PV
Linux VMs do not use Qemu, so they were not vulnerable. Qubes contained Qemu
in stub domains, so it was not vulnerable:
[https://groups.google.com/forum/m/#!topic/qubes-
users/uRg6gk...](https://groups.google.com/forum/m/#!topic/qubes-
users/uRg6gkssfX4)

------
mwcampbell
Anyone know why the performance difference is so dramatic? My guess was that
the difference would go the other way -- that Xen would be more efficient,
because it was designed for paravirtualization rather than hardware emulation,
and the guest kernels had to be modified to accommodate it.

~~~
tekacs
I believe it's because Xen adds overhead[0] in its process of working around
the need for hardware virtualisation support[1], whereas KVM has much less
overhead, but _requires_ hardware support to run efficiently.

[0]: [http://dtrace.org/blogs/brendan/2013/01/11/virtualization-
pe...](http://dtrace.org/blogs/brendan/2013/01/11/virtualization-performance-
zones-kvm-xen/)

[1]: Xen was built before such support was widely adopted.
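
To make that hardware dependency concrete, here's a minimal C sketch (not
from the article): the kvm module only loads when VT-x/AMD-V is present, and
everything goes through the /dev/kvm ioctl interface, so the open() below
simply fails on a host without hardware virtualization support.

    /* Sketch: probe the KVM ioctl API. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("open /dev/kvm");  /* no hardware virt, or module missing */
            return 1;
        }

        /* Stable API version; the kernel has returned 12 for a long time. */
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        /* Recommended maximum number of vCPUs per VM on this host. */
        printf("Recommended max vCPUs: %d\n",
               ioctl(kvm, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS));

        close(kvm);
        return 0;
    }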

~~~
alexforster
This is about right. A lot of the performance gain seems to come from lower
hypervisor CPU overhead and better I/O virtualization with virtio.
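
As a rough illustration (just a sketch, nothing Linode-specific): one quick
way to check whether a Linux guest is actually using virtio devices is to
look at the virtio bus in sysfs, assuming the standard /sys/bus/virtio
layout.

    /* Sketch: list virtio devices visible inside a Linux guest. */
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        DIR *dir = opendir("/sys/bus/virtio/devices");
        if (!dir) {
            puts("No virtio bus found; this guest may be on emulated devices.");
            return 0;
        }

        struct dirent *entry;
        while ((entry = readdir(dir)) != NULL) {
            if (entry->d_name[0] == '.')
                continue;

            char path[512], id[16] = "?";
            snprintf(path, sizeof path, "/sys/bus/virtio/devices/%s/device",
                     entry->d_name);

            FILE *f = fopen(path, "r");
            if (f) {
                if (fgets(id, sizeof id, f))
                    id[strcspn(id, "\n")] = '\0';
                fclose(f);
            }
            /* e.g. 0x0001 = net, 0x0002 = block, per the virtio spec */
            printf("%s: device id %s\n", entry->d_name, id);
        }
        closedir(dir);
        return 0;
    }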

------
alexforster
There's also a new Singapore datacenter that launched recently:
[https://blog.linode.com/2015/04/27/hello-
singapore/](https://blog.linode.com/2015/04/27/hello-singapore/)

~~~
hibala
Have you tried it? How is the performance at the Singapore DC?

~~~
alexforster
Hardware-wise, you'll be using the same spec hypervisors that are in all of
the other datacenters. Network-wise, it's difficult to get consistent, low
latency connectivity to the entire Asia Pacific region, but we're already
doing pretty well with that and we're working hard to be one of the best. I'd
recommend checking out a speedtest to see if it works for you. If it doesn't,
check back again in the future, because we're making constant improvements.

~~~
hibala
Thanks for the pointers, we are planning to give it a try for our next
project.

------
joeyh
I hope this will make it much easier to run your own (or your distro's own)
kernel on Linode

While possible currently (and I do), it requires some pv-grub configuration,
and IIRC recent distro kernels don't work with Linode's pv-grub version, and
so quite complex a pv-grub chaining is needed.

WRT security, I'm much more concerned with getting prompt kernel upgrades from
my distro or rolled by hand, when there are network exploitable bugs, than
with hypervisor bugs that might allow the small group who share the physical
hardware to do something naughty.

~~~
joeyh
Tried it, and pv-grub gets converted to "grub (legacy)", and "grub 2" is also
available as a "kernel" after switching to KVM. As well as "direct disk" :)

Edit: However, a system that had the Debian pv-grub-menu package installed to
create its menu.lst won't boot with "grub (legacy)". Installing grub-pc and
using the grub 2 option also failed, with "error: hd0 cannot get C/H/S
values". Interesting, I've never had trouble booting grub from KVM before.

------
vfclists
What is the difference between paravirtualized KVM and fully virtualized KVM?
I thought KVM had always been fully virtualized, which is why it is capable
of running any OS.

Is KVM paravirtualization a new feature?

~~~
exacube
I think paravirtualization just means that there are drivers installed on the
guest OS that make interaction with the hardware faster (perhaps by reducing
the number of copies and/or interrupts) by being cooperative.

Paravirtualization is just an extension of what you think of as "full
virtualization".

BTW, in case this article was misleading: I think Xen already has paravirt
support that VPS providers already take advantage of.

~~~
cthalupa
Paravirtualization is not an extension of full virtualization. If anything,
it's moving in the other direction - instead of virtualizing the parts of the
system that are difficult or perform poorly when virtualized, you offer up a
software device. In practice, this is generally your IO devices - without
SR-IOV support, your IO is quite slow through an emulated device.

With paravirtual IO drivers, you have a front-end driver in the guest and a
back-end driver elsewhere. In Xen, this is a dom0 or stub domain that also
includes the actual hardware driver. With Xen, the front end and back end
communicate through a shared memory segment used as a ring buffer. KVM does
things a bit differently, but the core concept is pretty similar.

With SR-IOV, you have a virtual function of the device that is passed through
directly to the guest, eliminating the need for the PV IO drivers at all.
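
To make the shared-ring idea concrete, here is a simplified single-producer /
single-consumer ring in C. This is only an illustrative sketch of the
concept, not Xen's actual ring protocol (the real thing lives in
xen/include/public/io/ring.h and also carries responses, handles notification
suppression, and so on).

    #include <stdint.h>

    #define RING_SLOTS 64  /* must be a power of two */

    struct request { uint64_t sector; uint32_t len; uint32_t id; };

    /* Lives in a page shared between the guest (front end) and dom0 or a
     * stub domain (back end). */
    struct ring {
        volatile uint32_t prod;   /* advanced only by the front end */
        volatile uint32_t cons;   /* advanced only by the back end  */
        struct request slots[RING_SLOTS];
    };

    /* Front end: queue a request if a slot is free, then (in real Xen)
     * notify the back end over an event channel. */
    static int ring_push(struct ring *r, const struct request *req)
    {
        if (r->prod - r->cons == RING_SLOTS)
            return -1;                        /* ring is full */
        r->slots[r->prod % RING_SLOTS] = *req;
        __sync_synchronize();                 /* publish data before index */
        r->prod++;
        return 0;
    }

    /* Back end: consume the next pending request, if any. */
    static int ring_pop(struct ring *r, struct request *out)
    {
        if (r->cons == r->prod)
            return -1;                        /* ring is empty */
        *out = r->slots[r->cons % RING_SLOTS];
        __sync_synchronize();
        r->cons++;
        return 0;
    }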

~~~
exacube
Thanks for the clarification!

------
orthecreedence
Is KVM burstable? From what I know about Xen (very little), at least in
Linode's case, CPUs were not burstable. I always thought of this as a
feature: while it's nice that I can have a little extra juice if I need it, I
don't want my neighbors parking on my lawn every time they have a party. My
point is that I'd rather have predictability than peak performance. Is this
still the case?

------
Veratyr
Previous post:
[https://news.ycombinator.com/item?id=9726789](https://news.ycombinator.com/item?id=9726789)

------
baudehlo
Are they really going to keep the same number of Linodes per host machine,
given they can now get more out of a server? It would be great if they do,
but I have my doubts.

~~~
developer1
It's unlikely they'll squeeze more Linodes onto a physical machine. The
overhead being saved isn't _that_ enormous. Not to mention there might be
more CPU, but there isn't necessarily enough memory, disk I/O, and/or network
available to make it worth it. I wouldn't worry. Plus, Linode has a solid
track record of _improving_ the user experience, not putting it at risk. If
they find a way to squeeze in a little more, you won't be losing any
performance.

------
drzaiusapelord
> The kernel build time dropped from 573 to 363 seconds. That’s 1.6x faster.

Wow, that's quite a nice upgrade, especially considering the price.

------
diminish
Here is the type of technology I love:

> Essentially, our KVM upgrade means you get a much faster server just by
> clicking a button.

------
aladine
Great service. I absolutely love that. Customer support is fast and
informative.

