Linode turns 12, transitions from Xen to KVM (linode.com)
168 points by alexforster on June 16, 2015 | 49 comments



Linode is great; however, there are three things I'd really love to see.

Object Storage - which LiquidWeb, RackSpace, and AWS already have, and which many other hosting companies are providing.

Memory Optimized Plan - Everything is moving in-memory, but most people don't need 20 cores to go with 96GB of memory. There should be a low-CPU-count plan with 128GB+ and maybe up to 512GB (or higher).

CDN - Please resell a decent CDN or even build your own, so we can get everything in one place.


I can relate to your problem regarding the memory optimized plan. Are you aware of any cloud provider, similar to Linode or DigitalOcean, that lets you specify the number of cores and GBs of memory yourself? All the services I know of only provide fixed machine types.


I'd love to be fully recommending gandi.net here. I've never had any issue with them reliability-wise; however, they've switched to some credit-based payment plan that I just can't seem to get my head around, so I've stopped using them myself. If you're able to decipher their "no bullshit" credit system, have at it:

https://www.gandi.net/hosting/iaas/buy

Edit: Oh nice, they've added an "Or approximately £xx.xx per month" now. Might have to give them another go


Gandi seems to be exactly what I was referring to, thanks.


Glad to hear! Just noticed the price on the lowest tier drops significantly if you lose the IPv4 address too, nice

I may have recommended a service to myself here, an odd moment for sure


Linode used to let you pay extra to add just a bit more memory (or bandwidth or disk), but they took that option away a few years ago. I would imagine it makes provisioning the hardware a lot more difficult.


Web search returns https://rimuhosting.com/

I have no idea if they're any good.


You've brought up some good points about where Linode would need to expand to be an option for some people who are using, say, AWS. But I wonder if that's the right direction for them to go.

Object storage is a hard area to compete in. Amazon drives prices very low, and when you're talking about object storage, durability is really important. Rackspace and LiquidWeb charge $0.10 and $0.08 per GB, which is around 3x more than Amazon charges. I'm not sure Linode offering object storage for $0.08/GB would attract much business. HP offers it for $0.09 and Joyent for $0.043. Linode customers can use S3 for their storage in many cases. The cases it doesn't work for are workloads where you're going to want the data more local, like running Hive queries off S3 data. Would an expensive Linode S3 competitor be worth it?

A memory optimized plan could make a bit of sense, but then Linode would really need to hammer out what CPU you're paying for on the standard plans. A lot of VPS providers are giving you "vCPU" ratings, but who knows what that translates to. Linode tells us that their servers use Ivy Bridge E5-2680 v2 processors. But how many? Let's say a box has two processors. That means 20 cores and 40 virtual cores via HT. OK, how many Linodes can fit on one of these boxes? At least 96GB of Linodes. If it's 96GB of Linodes, 96 1-GB Linodes would mean 96 vCPUs - way more than there are processors. However, 1 96-GB Linode would have 20 vCPUs, less than the processors have. Amazon is a lot more thorough about what CPU resources you're getting. If Linode were to make a distinction between high-memory and high-CPU instances, you'd want it to be more than just "cores". If you're creating a compute cluster, the compute resources you're getting matter. Maybe this is more of a generalization of your suggestion: Linode needs to provide more resource options and make what resources you're paying for clearer.
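To make that arithmetic concrete, here's a rough back-of-the-envelope sketch. The host spec (dual E5-2680 v2, 96GB of RAM) and the vCPUs-per-plan figures are the assumptions from the paragraph above, not published Linode numbers:

    # Hypothetical host: 2 sockets x 10 cores x 2 HT threads (an assumption, not a published spec)
    host_threads = 2 * 10 * 2   # 40 hardware threads
    host_ram_gb = 96            # assumed RAM per host

    plans = {"1GB": {"ram_gb": 1, "vcpus": 1}, "96GB": {"ram_gb": 96, "vcpus": 20}}

    for name, plan in plans.items():
        guests = host_ram_gb // plan["ram_gb"]         # how many guests fit by memory alone
        ratio = guests * plan["vcpus"] / host_threads  # vCPU oversubscription ratio
        print(name, guests, "guests,", round(ratio, 2), "x CPU oversubscription")

    # 1GB:  96 guests, 2.4x oversubscription (96 vCPUs on 40 threads)
    # 96GB:  1 guest,  0.5x (20 vCPUs on 40 threads)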

A CDN would be an easy add-on for Linode, but without object storage, is it that interesting? I'm sure Fastly or someone else would let Linode white-label, but is it so important to have a CDN from the same provider that does your other infrastructure?

I think some of these are wanting Linode to be something that it isn't. Amazon has made AWS into a general store for compute, network, and storage stuff. You want to analyze petabytes of information? Stick it onto S3, bring it down to compute nodes with the right balance of IO, memory, disk, and CPU to do your analysis, etc. Linode is more "you want a VPS? We have VPSs!" And they're pretty great at it. They're fast, the SSDs are wonderful, and they've been a reliable member of the community for over a decade. Heck, you can even get a load balancer to handle 10,000 simultaneous connections for $20. The stuff you need to run a decent site. Digital Ocean, who likely has more funding at its disposal, hasn't gone beyond this.

Maybe it will be the next step for Linode (and DO). To get there, I think those VPS providers will have to get more serious about what resources a user actually gets.


Object Storage - I don't think the cost of object storage matters at the scale Linode is trying to compete at, because anyone who needs a few hundred TB or PBs of storage is already outside Linode's best-fit range. The reason for wanting object storage from the same host is the saving on bandwidth cost. And purely in terms of instances:

There are Intel single-socket 10-core CPUs that support up to 768GB of memory, if I remember correctly, although I am not sure the pricing works in their favor.

CDN - I was thinking more of Linode building their own with their DCs around the world, mainly for bulk transfer. For pure speed it would probably be EdgeCast or Fastly. But this was more a matter of convenience than necessity.


Accessing S3 from Linode is prohibitively expensive. The base storage fee is the cheap part: it costs three times as much to transfer one gigabyte of data off of S3 ($0.09) as it does to actually store that gigabyte there for a month ($0.03). Linode doesn't have to come anywhere near S3's storage prices in order to be competitive for Linode compute customers, as long as data transfer between Linode VMs and Linode storage is free (as it is between EC2 and S3). I'd gladly pay $0.10/GB/month for a Linode object hosting service.
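A quick sketch of that math, using the 2015 prices quoted above and an assumed example workload (the dataset size and read frequency are made up for illustration):

    s3_storage = 0.03   # $/GB-month to store on S3 (price quoted above)
    s3_egress = 0.09    # $/GB to transfer out of S3 to another provider

    dataset_gb = 500    # assumed dataset size
    full_reads = 2      # assumed: the whole dataset is pulled down to Linode twice a month

    s3_monthly = dataset_gb * s3_storage + dataset_gb * full_reads * s3_egress
    linode_monthly = dataset_gb * 0.10    # hypothetical $0.10/GB store with free internal transfer

    print(s3_monthly, linode_monthly)     # 105.0 vs 50.0 dollars per month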


I just really want VPC-type resources.

I'm happy to use object storage and CDNs from other providers, but I'd rather not have to do foo to make my internal connectivity private.

Please do correct me if this is a solved problem now though.


No mention of security? Xen isn't perfect, but according to the Qubes team it's the best we've got.

> We still believe Xen is currently the most secure hypervisor available, mostly because of its unique architecture features, that are lacking in any other product we are aware of.

https://raw.githubusercontent.com/QubesOS/qubes-secpack/mast...


That's hypothetical security, of course. In a practical sense Xen has seen several major security vulnerabilities, such as Venom.

Not that other VM systems haven't suffered similar problems. But when real-world experience shows that, for example, Xen is no less vulnerable to the most damaging exploits than any other VM manager, then the hypothetical security advantages evaporate and they're no longer a useful justification for preferring Xen.


Venom was a vulnerability in Qemu, not Xen, and Xen has a containment mechanism (stub domains) for this class of vulnerability. In addition, PV Linux VMs do not use Qemu, so they were not vulnerable. Qubes contained Qemu in stub domains, so it was not vulnerable: https://groups.google.com/forum/m/#!topic/qubes-users/uRg6gk...


Case in point: I received 12 hours of notice on a Sunday before my master database was rebooted to patch a Xen security flaw: http://status.linode.com/incidents/2dyvn29ds5mz On the plus side, it was a great effort by Xen to roll out fixes before it became a public zero day.


wait, but didn't you just contradict yourself?

> cool that xen rolled out fixes before zero day

> not cool that your vm was promptly rebooted to apply the fix


    > not cool that your vm was promptly rebooted to apply fix
Not sure where you got the "not cool" part; I believe the parent post just said "this happened", not "it sucks that this happened". I could be wrong though.


They rolled out the patch over two weeks' time. The first fixes were after 12 hours, and could likely have waited at least one business day to give me proper notice.


The majority of the serious Xen security issues affected HVM, not PV.


Good point.


You're surprised that Linode isn't addressing security?

I've been a Linode customer for most of their 12 years, but my only complaint (and the reason I don't use them for anything I _really_ care about) is that they have always been very opaque about security.


Opaque isn't the word I'd use to describe Linode security.


Yes, Linode being opaque was my experience too. I remember a really weird discussion with their support about HTTPS load balancing.


Anyone know why the performance difference is so dramatic? My guess was that the difference would go the other way -- that Xen would be more efficient, because it was designed for paravirtualization rather than hardware emulation, and the guest kernels had to be modified to accommodate it.


I believe it's because Xen adds overhead[0] in its process of working around the need for hardware virtualisation support[1], whereas KVM has much less overhead, but requires hardware support to run efficiently.

[0]: http://dtrace.org/blogs/brendan/2013/01/11/virtualization-pe...

[1]: Xen was built before such support was widely adopted.


This is about right. A lot of the performance seems to be coming from lower hypervisor CPU overhead and better I/O virtualization with virtio.


Running "xen" vm is way too broad to know. http://www.brendangregg.com/blog/2014-05-07/what-color-is-yo.... PVH should be as good as KVM.


Came here to say exactly this.


Blame AMD. They ruined PV performance on x86_64 when they removed two of the protection rings.

Hardware extensions still move the performance needle towards Xen (PV)HVM/PVH and KVM, but the extra context switches required due to the inferior (for paravirtualization) architecture are a major performance hit.

(PV, of course, performs IO better when there is no SR-IOV access to the IO devices)


There's also a new Singapore datacenter that launched recently: https://blog.linode.com/2015/04/27/hello-singapore/


Have you tried it? How is the performance at the Singapore DC?


Hardware-wise, you'll be using the same spec hypervisors that are in all of the other datacenters. Network-wise, it's difficult to get consistent, low latency connectivity to the entire Asia Pacific region, but we're already doing pretty well with that and we're working hard to be one of the best. I'd recommend checking out a speedtest to see if it works for you. If it doesn't, check back again in the future, because we're making constant improvements.


Thanks for the pointers; we are planning to give it a try for our next project.


I hope this will make it much easier to run your own (or your distro's own) kernel on Linode.

While this is possible currently (and I do it), it requires some pv-grub configuration, and IIRC recent distro kernels don't work with Linode's pv-grub version, so quite complex pv-grub chaining is needed.

WRT security, I'm much more concerned with getting prompt kernel upgrades from my distro or rolled by hand, when there are network exploitable bugs, than with hypervisor bugs that might allow the small group who share the physical hardware to do something naughty.


Tried it, and pv-grub gets converted to "grub (legacy)", and "grub 2" is also available as a "kernel" after switching to KVM. As well as "direct disk" :)

Edit: However, a system that had the Debian pv-grub-menu package installed to create its menu.lst won't boot with "grub (legacy)". Installing grub-pc and using the grub 2 option also failed, with "error: hd0 cannot get C/H/S values". Interesting; I've never had trouble booting grub from KVM before.


I didn't have any problem getting Ubuntu 14.04 working on Linode with pv-grub. I had to go that route in order to use ksplice.


What is the difference between paravirtualized KVM and fully virtualized KVM? I thought KVM has always been fully virtualized which is why it is capable of running any OS.

Is KVM paravirtualization a new feature?


"Paravirtualization" can refer to two somewhat different things, both of which count as guest-assisted virtualization. The original idea behind paravirtualization, and originally the only type of virtualization in Xen, was one where Xen presented itself as its own PC-incompatible machine type using the x86 instruction set, and you had to port the boot code, memory management code, etc. over to Xen hypercalls, and you don't have access to real mode, the x86 MMU, etc. But you also need to port all your driver code (disks, networking, graphics) to Xen hypercalls, since you also no longer have access to the BIOS interrupts, PCI bus, etc.

It's possible to isolate the second part -- paravirtualized drivers -- without requiring the first part -- paravirtualized bootup. Since bootup isn't guest-assisted any more, the hypervisor now has to emulate the entire PC-compatible boot process, including minimal emulation of BIOS interrupts for disk access, etc. But once that's completed, you can switch to paravirtualized drivers for optimized performance, and the actual performance benchmarks people care about are the steady-state disk and network bandwidth.

The only somewhat tricky thing is that you need to handle MMU updates somehow. I believe (but don't know for certain) that with nested page table support at the processor level, you can just safely give x86 hardware-virtualized guests access to their nested page tables, and they can use the same instructions to modify that as they would on native hardware. So you don't need paravirtualization for that. This support has been in hardware since around 2008.

One of the benefits of using paravirtualized drivers alone, instead of an entire paravirtualized boot process, is that you can support OSes where you can write custom drivers but you can't change the boot code. So, for instance, the KVM project has Windows disk and network drivers that use the paravirtualized interface (virtio):

http://www.linux-kvm.org/page/WindowsGuestDrivers

You can continue to use Windows without this support, which will use the slow, emulated disk and network hardware. But if you have the drivers, things will get much faster. This is a best-of-both-worlds approach: you can continue to run any OS (since full virtualization support remains present), but you can switch to the paravirt drivers and get steady-state performance competitive with paravirt-only hypervisors.


No, KVM has had virtio (paravirtualization) for quite some time now.

Paravirtualized KVM uses virtio devices rather than emulated devices. So, instead of a virtual e1000 device, you'll see a virtio-net device. Performance gains are very, very significant.
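If you're curious whether a particular Linux guest is actually using virtio, one quick way (a sketch, assuming a Linux guest and Python 3; nothing Linode-specific) is to check which kernel driver each network interface is bound to under /sys:

    import os

    # Print each network interface and the kernel driver bound to it.
    # On a KVM guest with paravirtualized networking, expect "virtio_net";
    # on an emulated NIC you'd see something like "e1000".
    for iface in sorted(os.listdir("/sys/class/net")):
        driver_link = "/sys/class/net/%s/device/driver" % iface
        if os.path.islink(driver_link):
            print(iface, "->", os.path.basename(os.readlink(driver_link)))
        else:
            print(iface, "-> (no backing device, e.g. loopback)")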


I think paravirtualization just means that there are drivers installed on the guest OS that make the interaction with hardware faster (perhaps by reducing the number of copies and/or interrupts) by being cooperative.

Paravirtualization is just an extension of what you think of as "full virtualization".

BTW, in case the article was misleading: I think Xen already has paravirt support that VPS providers take advantage of.


Paravirtualization is not an extension of full virtualization. If anything, it's moving in the other direction: instead of virtualizing the parts of the system that are difficult or slow when virtualized, you offer up a software device. In practice, this is generally your IO devices - without SR-IOV support, your IO is quite slow through an emulated device.

With paravirtual IO drivers, you have a front-end driver in the guest and a back-end driver elsewhere. In Xen, that's a dom0 or stub domain that also includes the actual hardware driver. With Xen, the front end and back end communicate over a shared memory segment using a ring buffer. KVM does things a bit differently, but the core concept is pretty similar.
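To give a feel for the shared-ring idea (a toy single-producer/single-consumer sketch, not Xen's actual ring protocol, which uses shared memory pages and event channels rather than a Python list):

    # Front end (guest) pushes IO requests into a fixed-size ring;
    # back end (dom0/host) consumes them and talks to the real hardware.
    class Ring:
        def __init__(self, size=8):
            self.slots = [None] * size
            self.prod = 0   # producer (front-end) index
            self.cons = 0   # consumer (back-end) index

        def push(self, request):            # front end: queue an IO request
            if self.prod - self.cons == len(self.slots):
                return False                # ring full, must wait for the back end
            self.slots[self.prod % len(self.slots)] = request
            self.prod += 1
            return True

        def pop(self):                      # back end: take the next request
            if self.cons == self.prod:
                return None                 # ring empty
            request = self.slots[self.cons % len(self.slots)]
            self.cons += 1
            return request

    ring = Ring()
    ring.push({"op": "read", "sector": 42})
    print(ring.pop())                       # back end services the request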

With SR-IOV, you have a virtual function of the device passed through directly to the guest, eliminating the need for PV IO drivers entirely.


Thanks for the clarification!


Is KVM burstable? From what I know about Xen (very little), at least in Linode's case, CPUs were not burstable. I always thought of this as a feature: while it's nice that I can have a little extra juice if I need it, I don't want my neighbors parking on my lawn every time they have a party. My point is that I'd rather have predictability than peak performance. Is this still the case?



Are they really going to keep the same number of guests per host server, given they can now get more out of a server? It would be great if they did, but I have doubts.


It's unlikely they'll squeeze more Linodes onto a physical machine. The overhead being saved isn't that enormous. Not to mention there might be more CPU available, but there isn't necessarily enough memory, disk I/O, and/or network to make it worth it. I wouldn't worry; Linode has a solid track record of improving the user experience, not putting it at risk. If they find a way to squeeze in a little more, you won't be losing any performance.


>The kernel build time dropped from 573 to 363 seconds. That’s 1.6x faster.

Wow, that's quite a nice upgrade, especially considering the price.


Here is the type of technology I love:

> Essentially, our KVM upgrade means you get a much faster server just by clicking a button.


Great service. I absolutely love that. Customer support is fast and informative.



