

Basecamp, now with more vroom - adamhowell
http://www.37signals.com/svn/posts/1819-basecamp-now-with-more-vroom

======
datums
With faster CPUs and the virtualization layer removed, these results are as
expected: on identical hardware you will always do better without the
virtualization layer, because virtualization is not about performance. If
you're moving to a cloud (VPS/virtualized hardware) infrastructure, take into
consideration that there is no exact equivalent of dedicated hardware in a
cloud/shared environment.

~~~
joshwa
Virtualization: it's not about performance, it's about provisioning.

------
omouse
Did they test the hardware by putting it in production, or did they run a
series of benchmarking tests (so that the experiment can be replicated)?

~~~
trafficlight
From my understanding of the article, it looks like they added the new
machines into their cluster and directed a higher percentage of traffic to
them (to account for their 8 cores versus the 4 cores allotted to the
virtual machines).
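
In practice that kind of weighting is just proportional routing. A minimal
sketch of the idea in Python (the hostnames and weights here are made up;
the article doesn't say what load balancer they actually used):

    import random

    # Hypothetical backend pools: weights proportional to core count, so
    # the 8-core dedicated boxes draw twice the traffic of the 4-core VMs.
    backends = [
        ("vm-1", 4), ("vm-2", 4),        # virtual machines, 4 cores each
        ("metal-1", 8), ("metal-2", 8),  # dedicated hardware, 8 cores each
    ]

    def pick_backend():
        hosts = [host for host, _ in backends]
        weights = [weight for _, weight in backends]
        return random.choices(hosts, weights=weights, k=1)[0]

    print(pick_backend())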

~~~
imbriaco
That's exactly right. I'm not a big believer in synthetic benchmarks, and I'm
not concerned with repeatability of my experiments. The purpose of my test was
to determine, for our specific workload, how much of a difference we would see
if we went to dedicated hardware -- and, further, what the difference is
between a couple of very specific hardware configurations under that workload.
I think it accomplished that nicely.
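
For what it's worth, with real production traffic split across both pools,
the comparison can come straight from the access logs. A rough sketch in
Python (the hostnames and numbers are invented):

    import statistics
    from collections import defaultdict

    # Hypothetical parsed access-log records: (backend_host, response_ms).
    records = [
        ("vm-1", 220), ("metal-1", 95), ("vm-2", 310),
        ("metal-2", 120), ("vm-1", 180), ("metal-1", 110),
    ]

    by_pool = defaultdict(list)
    for host, ms in records:
        pool = "dedicated" if host.startswith("metal") else "virtual"
        by_pool[pool].append(ms)

    for pool, times in sorted(by_pool.items()):
        times.sort()
        p95 = times[min(len(times) - 1, int(0.95 * len(times)))]
        print(pool, "median:", statistics.median(times), "p95:", p95)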

~~~
omouse
_The purpose of my test was to determine, for our specific workload,_

Define "specific workload". The workload depends on how many users are using
the system, what parts of the website they're hitting (some may require more
backend processing than others), etc.
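
To make that concrete, characterizing a workload means something like
tallying the request mix per endpoint, not just counting requests. A toy
sketch (the log format and paths are invented):

    from collections import Counter

    # Hypothetical access-log lines: "METHOD PATH STATUS". A fuller
    # workload profile would also track per-endpoint latency, payload
    # size, and the time-of-day mix.
    log_lines = [
        "GET /projects/42/messages 200",
        "POST /projects/42/todos 201",
        "GET /projects/42/messages 200",
    ]

    mix = Counter(tuple(line.split()[:2]) for line in log_lines)
    for (method, path), count in mix.most_common():
        print(method, path, count)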

By not telling us the specifics and by killing the repeatability of this
benchmark, you can only say with a minimal amount of certainty that the
dedicated hardware improved performance.

I love how the industry functions on magic, whether it's on the software or
hardware side.

~~~
teej
> By not telling us the specifics and by killing the repeatability of this
> benchmark, you can only say with a minimal amount of certainty that the
> dedicated hardware improved performance.

I can agree that this test gives only a small amount of certainty that actual
performance of the application was higher on the new hardware. So what? It's
possible to use correlated metrics, such as engagement, time on site, and
pageviews per user, to determine whether the move was "successful".

37signals' end goal was most likely to _improve user experience_. If you can
confidently say that a valid random sample of users on A servers has a better
experience than those on B servers, raw speed numbers don't matter.
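
Concretely, that comparison is a plain two-sample test on any per-user
metric. A minimal sketch (the numbers are invented; a real test would also
account for sample size and degrees of freedom):

    import math
    import statistics

    # Hypothetical per-user pageview counts sampled from each pool.
    pool_a = [12, 9, 15, 11, 14, 10, 13]  # users on dedicated hardware
    pool_b = [8, 11, 7, 10, 9, 8, 12]     # users on virtual machines

    def welch_t(a, b):
        # Welch's t-statistic: difference of means over its standard error.
        var_a, var_b = statistics.variance(a), statistics.variance(b)
        se = math.sqrt(var_a / len(a) + var_b / len(b))
        return (statistics.mean(a) - statistics.mean(b)) / se

    # |t| well above ~2 suggests the gap is unlikely to be noise.
    print("t =", round(welch_t(pool_a, pool_b), 2))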

I guess my point is: test everything on your own. In the end, it's your
users' happiness that matters most, not server speed.

------
smithjchris
Why do people virtualize the hardware and OS? Shouldn't scaling be a software
architecture concern? That's why we have web clusters, load balancers,
content-based caches, database clusters, distributed hash tables, etc., and
not big-iron, mainframe-style partitions. Look at Google!
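
One of those software-level tools, the distributed hash table, boils down to
consistent hashing. A toy sketch of the idea (the node names are invented):

    import hashlib

    # Minimal consistent hashing: a key maps to the first node at or past
    # its position on a hash ring, so adding or removing a node only
    # remaps a fraction of the keys. (Real implementations add virtual
    # nodes for smoother balance.)
    def ring_position(name):
        return int(hashlib.md5(name.encode()).hexdigest(), 16)

    nodes = sorted(["cache-1", "cache-2", "cache-3"], key=ring_position)

    def node_for(key):
        pos = ring_position(key)
        for node in nodes:
            if ring_position(node) >= pos:
                return node
        return nodes[0]  # wrapped past the last node on the ring

    print(node_for("project:42"))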

I am slightly biased against this, as I run a large ESX deployment with about
100 virtual hosts. Performance sucks compared to native, and it merely shifts
the management effort from one concern to another, saving a grand total of
nothing. Blades, however, have some credibility.

Usually by now someone mentions security, which a certain Mr. Theo de Raadt
concisely debunks:

"x86 virtualization is about basically placing another nearly full kernel,
full of new bugs, on top of a nasty x86 architecture which barely has correct
page protection. Then running your operating system on the other side of this
brand new pile of shit. You are absolutely deluded, if not stupid, if you
think that a worldwide collection of software engineers who can't write
operating systems or applications without security holes, can then turn around
and suddenly write virtualization layers without security holes."

