
A 20-30% hit in I/O? It hasn't been at that level for a long time. With recent KVM versions and good Intel processors the performance hit can be as low as 5% these days. That's not necessarily to say your particular workload will see that, but 10% is generally an 'at-worst' level at this point.



It's definitely 20-30% for realistic workloads that run VM-based tech on top, i.e. the CLR/JVM or a database engine. This is on VMware; I can't speak for Xen.

The outcome is pretty grim.


Exactly. I often see claims of 5-10%, but I've yet to see any reliable set of benchmarks with those results. Too often, people are using dd and testing throughput instead of actual IOPS. Even the benchmarks that show 20%+ tend to be skewed in favour of virtualization, as they tend to be run with a single VM instead of multiple VMs.

Even if there were a 0% performance penalty from virtualization, you'd still see suboptimal allocation of hardware resources just from taking an abstracted view of the hardware. Different applications have different performance profiles. You either end up with overbuilt hardware to support the virtualization environment and the different performance profiles of the different applications, or with multiple VMs for the same application on the same hardware, which is totally unnecessary overhead. Virtualization is just not meant for large scale.


Here [1] is a great paper about nested virtualization for KVM. It combines hardware capabilities with software tricks to allow running multiple levels of VMMs. It may not have intense IOPS testing, but it has a couple of benchmarks that are representative of real-world workloads. Keep in mind that this paper was published in 2010 and virtualization performance has improved dramatically over the last several years.

Jump to the results section. The more relevant bullets here are 'single guest' (either virtio or using direct mapping).

Highlights (or lowlights, depending on your perspective):

  kernbench - 9.5% overhead
  SPECjbb - 7.6% overhead

I don't agree with your point about suboptimal allocation of hardware resources. Virtualization does not require you to divide a machine any differently than processes do (you could easily have one VM consuming nearly all CPU cycles, another consuming nearly all I/O capacity, etc.). IMO, the key difference is that virtualization lets you more easily establish hard, enforceable limits and concrete policies around resource usage (not to mention the ability to account for usage across all kinds of different applications and users). And it lets you do that for arbitrary applications on arbitrary operating systems, so users don't have to write to one particular framework/language/runtime/OS. That's all pretty important for large scale.
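
To make the "hard, enforceable limits" point concrete, here's a rough sketch using the libvirt Python bindings; the domain name 'webapp' and the exact numbers are made up for illustration:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('webapp')   # hypothetical guest name

    # hard memory cap of 4 GiB (libvirt takes KiB here)
    dom.setMemoryParameters({'hard_limit': 4 * 1024 * 1024})

    # cap CPU at roughly two cores' worth: 200ms of CPU time per 100ms period
    dom.setSchedulerParameters({'vcpu_period': 100000, 'vcpu_quota': 200000})

The guest never sees these knobs; the host enforces them regardless of what runs inside.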

[1] http://static.usenix.org/event/osdi10/tech/slides/ben-yehuda...


What distinction does KVM or the kernel make for a single guest?

Is there a system that would allow mapping part of an I/O device (such as a block range or a LUN) into a guest, with the same level of overhead, when multiple guests are running?


I'm not sure I follow your question 100%, but I'm gonna take a stab...

The distinction being made here isn't between a single guest and multiple guests; it's between a single guest OS and nested guests (i.e. a VM running another VM). To expose the hardware virtualization extensions to the guest VMM, they must be emulated by the privileged domain (host). There are software tricks that allow this emulation to happen pretty efficiently (and map an arbitrary number of guest levels onto the single level provided by the actual hardware). It's not a common use case, but for a few very specific things it's very useful.
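
If you're curious whether your own host exposes this, here's a rough sketch (assumes an Intel box with KVM; AMD uses kvm_amd instead of kvm_intel, and turning it on means reloading the module with nested=1):

    from pathlib import Path

    # 'Y' (or '1' on newer kernels) means kvm_intel will emulate VMX for its guests
    p = Path('/sys/module/kvm_intel/parameters/nested')
    print('nested:', p.read_text().strip() if p.exists() else 'kvm_intel not loaded')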

There are a few different ways to map I/O devices directly into domains, and some definitely allow for part of an I/O device. For example, many new network devices support SR-IOV, which effectively allows you to poke the device and create new virtual devices (which may be constrained in some way) that can be mapped directly into guests.
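
As a rough illustration, creating the virtual functions is just a sysfs write (the NIC name 'eth0' and the VF count are placeholders; it needs root and an SR-IOV capable device/driver). Each VF can then be handed to a guest as its own PCI device:

    from pathlib import Path

    nic = 'eth0'  # placeholder interface name
    # ask the driver to create 4 virtual functions on this device
    Path(f'/sys/class/net/{nic}/device/sriov_numvfs').write_text('4')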


Ah, VMware is the problem; that explains it.

Parties that care a lot about fine performance margins apparently need to be using Xen, KVM, or illumos then.


Can't speak for the parent poster's company, but the numbers don't match my experience with VMware from many years back. It's possible they've had a sharp regression, but we were maxing out gigabit Ethernet and local RAID arrays in 2006.


Well, don't confuse I/O (IOPS) with throughput here. You can look at performance numbers for just about anything and tweak them in one direction or the other.

For instance, it's easy to make a benchmark showing huge throughput to any given storage solution (and many NAS providers sell on this basis), but your IOPS might be terrible because, to get that throughput, you're maxing out the CPU (etc.). Likewise, you can change your benchmark to show high IOPS, but then the throughput is 'terrible'.
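
As a rough illustration (not a rigorous benchmark), here are two fio runs against the same file, driven from Python; one is tuned to produce a big MB/s number, the other to produce a big IOPS number. The path, sizes and queue depths are placeholders:

    import subprocess

    common = ['fio', '--filename=/mnt/test/fio.dat', '--size=4g', '--direct=1',
              '--ioengine=libaio', '--runtime=60', '--time_based',
              '--group_reporting']

    # sequential 1M reads: impressive throughput, says little about IOPS
    subprocess.run(common + ['--name=seq', '--rw=read', '--bs=1m',
                             '--iodepth=8'], check=True)

    # random 4k reads: the IOPS figure that databases actually care about
    subprocess.run(common + ['--name=rand', '--rw=randread', '--bs=4k',
                             '--iodepth=32'], check=True)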


The parent very clearly specified I/O, which is what I was commenting on.


Virtualization that cares about IOPS needs SSDs. Hard drives can't be sanely virtualized.



