Object Storage - LiquidWeb, Rackspace, and AWS already have it, and many other hosting companies are offering it too.
Memory Optimized Plan - Everything is moving into memory, but most workloads don't need 20 cores to go with 96GB of memory. There should be a low-CPU-count plan with 128GB+ and maybe up to 512GB (or higher).
CDN - Please resell a decent CDN or even build your own, so we can get everything in one place.
Edit: Oh nice, they've added an "Or approximately £xx.xx per month" now. Might have to give them another go.
I may have recommended a service to myself here, an odd moment for sure
I have no idea if they're any good.
Object storage is a hard area to compete in. Amazon drives prices very low, and when you're talking about object storage, durability is really important. Rackspace and LiquidWeb charge $0.10 and $0.08 per GB, which is around 3x more than Amazon charges. I'm not sure Linode offering object storage for $0.08/GB would attract much business. HP offers it for $0.09 and Joyent for $0.043. Linode customers can use S3 for their storage in many cases. The cases it doesn't work for are workloads where you want the data more local, like running Hive queries off S3 data. Would an expensive Linode S3 competitor be worth it?
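Just to make the gap concrete, here's a back-of-the-envelope comparison. The per-GB rates are the ones quoted above; the S3 figure is inferred from the "around 3x" claim rather than an exact quote, and the 5 TB workload is made up:

```python
# Back-of-the-envelope object storage cost comparison.
# Rates are per GB-month; the S3 rate is an inference, not a quote.
RATES_PER_GB_MONTH = {
    "Amazon S3 (inferred)": 0.03,
    "Joyent": 0.043,
    "LiquidWeb": 0.08,
    "HP": 0.09,
    "Rackspace": 0.10,
}

STORED_TB = 5  # hypothetical workload: 5 TB stored

for provider, rate in sorted(RATES_PER_GB_MONTH.items(), key=lambda kv: kv[1]):
    monthly = rate * STORED_TB * 1024  # TB -> GB-months
    print(f"{provider:>22}: ${monthly:,.2f}/month for {STORED_TB} TB")
```

At that scale you're looking at roughly $150/month versus $400-500/month, and the gap only widens as the data grows.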
A memory-optimized plan could make a bit of sense, but then Linode would really need to hammer out what CPU you're paying for on the standard plans. A lot of VPS providers give you "vCPU" ratings, but who knows what that translates to. Linode tells us that their servers use Ivy Bridge E5-2680 v2 processors. But how many? Let's say a server has two. That means 20 cores and 40 virtual cores via HT. OK, how many Linodes can fit on one of these boxes? At least 96GB worth. If it's 96GB of Linodes, 96 1-GB Linodes would mean 96 vCPUs, way more than the hardware has. However, 1 96-GB Linode would have 20 vCPUs, fewer than the hardware has. Amazon is a lot more thorough about what CPU resources you're getting. If Linode were to make a distinction between high-memory and high-CPU instances, you'd want it to be more than just "cores". If you're creating a compute cluster, the compute resources you're getting matter. Maybe this is more of a generalization of your suggestion: Linode needs to provide more resource options and make what resources you're paying for clearer.
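To put numbers on that oversubscription argument, here's a quick sketch. The two-socket host and the 96GB fill are the same guesses as above; the 1-vCPU-per-1GB-plan figure comes from the arithmetic in the paragraph:

```python
# Oversubscription arithmetic for a hypothetical 2x E5-2680 v2 host
# (10 cores/socket, HT on => 40 hardware threads), assuming the box
# is filled with 96 GB worth of Linodes, as speculated above.
HW_THREADS = 2 * 10 * 2  # sockets * cores * HT

plans = {
    # plan: (RAM in GB, vCPUs)
    "1GB": (1, 1),
    "96GB": (96, 20),
}

host_ram_gb = 96
for name, (ram, vcpus) in plans.items():
    count = host_ram_gb // ram
    total_vcpus = count * vcpus
    ratio = total_vcpus / HW_THREADS
    print(f"{count:>2} x {name} Linodes -> {total_vcpus} vCPUs "
          f"on {HW_THREADS} threads ({ratio:.1f}x oversubscribed)")
```

So the same physical box is anywhere from 0.5x to 2.4x subscribed depending on the plan mix, and as a customer you have no way to know which end of that range you're on.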
A CDN would be an easy add-on for Linode, but without object storage, is it that interesting? I'm sure Fastly or someone else would let Linode white-label, but is it so important to have a CDN from the same provider that does your other infrastructure?
I think some of these suggestions are wanting Linode to be something that it isn't. Amazon has made AWS into a general store for compute, network, and storage. You want to analyze petabytes of information? Stick it on S3, bring it down to compute nodes with the right balance of IO, memory, disk, and CPU to do your analysis, etc. Linode is more "you want a VPS? We have VPSs!" And they're pretty great at it. They're fast, the SSDs are wonderful, and they've been a reliable member of the community for over a decade. Heck, you can even get a load balancer that handles 10,000 simultaneous connections for $20. The stuff you need to run a decent site. Digital Ocean, which likely has more funding at its disposal, hasn't gone beyond this either.
Maybe that will be the next step for Linode (and DO). To get there, I think these VPS providers will have to get more serious about specifying what resources a user actually gets.
There are single-socket 10-core Intel CPUs that support up to 768GB of memory, if I remember correctly, although I'm not sure the pricing works in their favor.
CDN - I was thinking more of Linode building their own with their DCs around the world, mainly for bulk transfer. For pure speed it would probably be EdgeCast or Fastly. But this is more of a convenience than a necessity.
I'm happy to use object storage and CDNs from other providers, but I'd rather not have to jump through hoops to keep my internal connectivity private.
Please do correct me if this is a solved problem now though.
> We still believe Xen is currently the most secure hypervisor available, mostly because of its unique architecture features, that are lacking in any other product we are aware of.
Not that other VM systems haven't suffered similar problems. But when real-world experience shows that Xen is no less vulnerable to the most damaging exploits than any other VM manager, the hypothetical security advantages evaporate, and they're no longer a useful justification for preferring Xen.
> cool that xen rolled out fixes before zero day
> not cool that your vm was promptly rebooted to apply the fix
I've been a Linode customer for most of their 12 years, but my only complaint (and the reason I don't use them for anything I _really_ care about) is that they have always been very opaque about security.
Xen was built before such hardware support was widely adopted.
Hardware extensions still move the performance needle towards Xen (PV)HVM/PVH and KVM; the extra context switches that classic PV requires, because the architecture is a poor fit for paravirtualization, are a major performance hit.
(PV, of course, performs IO better when there is no SR-IOV access to the IO devices.)
While it's possible currently (and I do it), it requires some pv-grub configuration, and IIRC recent distro kernels don't work with Linode's pv-grub version, so a fairly complex pv-grub chain is needed.
WRT security, I'm much more concerned with getting prompt kernel upgrades from my distro (or rolling them by hand) when there are network-exploitable bugs than with hypervisor bugs that might allow the small group who share the physical hardware to do something naughty.
Edit: However, a system that had the Debian pv-grub-menu package installed to create its menu.lst won't boot with "grub (legacy)". Installing grub-pc and using the grub 2 option also failed, with "error: hd0 cannot get C/H/S values". Interesting; I've never had trouble booting grub from KVM before.
Is KVM paravirtualization a new feature?
It's possible to isolate the second part -- paravirtualized drivers -- without requiring the first part -- paravirtualized bootup. Since bootup isn't guest-assisted any more, the hypervisor now has to emulate the entire PC-compatible boot process, including minimal emulation of BIOS interrupts for disk access, etc. But once that's completed, you can switch to paravirtualized drivers for optimized performance, and the actual performance benchmarks people care about are the steady-state disk and network bandwidth.
The only somewhat tricky thing is that you need to handle MMU updates somehow. I believe (but don't know for certain) that with nested page table support at the processor level, you can just safely give x86 hardware-virtualized guests access to their nested page tables, and they can use the same instructions to modify that as they would on native hardware. So you don't need paravirtualization for that. This support has been in hardware since around 2008.
One of the benefits of using paravirtualized drivers alone, instead of an entire paravirtualized boot process, is that you can support OSes where you can write custom drivers but you can't change the boot code. So, for instance, the KVM project has Windows disk and network drivers that use the paravirtualized interface (virtio).
You can continue to use Windows without this support, which will use the slow, emulated disk and network hardware. But if you have the drivers, things will get much faster. This is a best-of-both-worlds approach: you can continue to run any OS (since full virtualization support remains present), but you can switch to the paravirt drivers and get steady-state performance competitive with paravirt-only hypervisors.
Paravirtualized KVM uses virtio devices rather than emulated ones. So, instead of a virtual e1000 device, you'll see a virtio-net device. The performance gains are very, very significant.
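For what it's worth, here's roughly how that choice shows up when launching a guest with QEMU/KVM. The disk path is a placeholder; the flags themselves are standard QEMU options:

```python
# Launching the same guest two ways with QEMU/KVM (illustrative; the
# disk path is a placeholder, the flags are standard QEMU options).
common = ["qemu-system-x86_64", "-enable-kvm", "-m", "2048"]

# Emulated hardware: the guest sees an IDE disk and an e1000 NIC.
# Any unmodified OS boots, but every device access is trap-and-emulate.
emulated = common + [
    "-drive", "file=/path/to/disk.img,if=ide",
    "-net", "nic,model=e1000",
]

# Paravirtualized: the guest sees virtio-blk and virtio-net devices.
# Requires virtio drivers in the guest, but IO goes through shared
# rings instead of device emulation, which is much faster.
paravirt = common + [
    "-drive", "file=/path/to/disk.img,if=virtio",
    "-net", "nic,model=virtio",
]

print(" ".join(paravirt))  # swap in `emulated` for the fallback path
```

Same guest image, same hypervisor; only the device models differ, which is what makes the "install the drivers, flip the switch" upgrade path possible.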
Paravirtualization is just an extension of what you think of as "full virtualization".
BTW, in case this article was misleading: I think Xen already has paravirt support that VPS providers take advantage of.
With paravirtual IO drivers, you have a front-end driver in the guest and a back-end driver elsewhere. In Xen, this is a dom0 or stub domain that also includes the actual hardware driver. With Xen, the front end and back end are just a shared memory segment used as a ring buffer. KVM does things a bit differently, but the core concept is pretty similar.
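A toy sketch of that ring-buffer idea, with a plain Python list standing in for the shared memory segment (a real Xen ring also deals with grant tables, memory barriers, and event-channel notifications):

```python
# Toy single-producer/single-consumer ring, sketching the front-end/
# back-end split described above. A real Xen ring lives in a shared
# memory page; here a plain list stands in for the shared segment.
class Ring:
    def __init__(self, size=8):
        self.slots = [None] * size
        self.prod = 0  # advanced only by the front end (guest)
        self.cons = 0  # advanced only by the back end (dom0/stub domain)

    def push(self, req):            # front-end driver: queue a request
        if self.prod - self.cons == len(self.slots):
            raise BufferError("ring full")
        self.slots[self.prod % len(self.slots)] = req
        self.prod += 1              # then notify the back end (event channel)

    def pop(self):                  # back-end driver: service a request
        if self.cons == self.prod:
            return None             # ring empty
        req = self.slots[self.cons % len(self.slots)]
        self.cons += 1
        return req

ring = Ring()
ring.push({"op": "read", "sector": 42})  # guest side
print(ring.pop())                        # host side hands it to the real driver
```

The point is that moving a request costs a memory write and a notification, not a trap into the hypervisor for every register poke the way emulated hardware does.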
With SR-IOV, a virtual function of the device is passed through directly to the guest, eliminating the need for the PV IO drivers entirely.
Wow, that's quite a nice upgrade, especially considering the price.
Essentially, our KVM upgrade means you get a much faster server just by clicking a button.