
The ABCs of virtual private servers, Part 1: Why go virtual? - evo_9
http://arstechnica.com/business/news/2011/02/virtual-private-servers.ars
======
prodigal_erik
A hypervisor is just an operating system (arbiter of resource use) relying on
legacy operating systems as compatibility shims, because it has a native API
nobody wants to deal with. It seems strange that we see resource contention
problems when applications try to coexist on an operating system, and our
response is to give each app its own guest O/S and then try to make _those_
coexist. Why do we expect an improvement? I can't believe the existing O/S
schedulers and paging policies were so poor that anything similar written from
scratch for a hypervisor will automatically make better decisions despite less
visibility into what's going on.

That said, I can certainly see a win in virtualization for testing. If your
production system will have a large set of discrete machines communicating,
you can simulate that with a much smaller pool of QA machines (maybe just
one).

~~~
regularfry
I could be wrong, but I don't _think_ the argument for virtualisation was that
it would fix resource contention, except possibly under some fairly rarefied
conditions.

------
axod
Downsides to VPS:

      1. You pay stupidly high *monthly* costs based on RAM.
         This makes little sense, as RAM is ridiculously cheap.
      2. You pay masses for bandwidth if you go over.

~~~
regularfry
1\. You'll always pay more renting than buying in the long run. The
difference is a function of how efficient the market is, I guess. A
high-memory quad XL instance on Amazon has near enough 64GB of RAM, which
you can have for $2.00 per hour, or $5,300 for a reserved instance for a
year. If you wanted to buy that much outright, you'd be paying a couple of
thousand _just for the DIMMs_. Then you've got to buy and maintain the box
to put them in, knowing that it'll have depreciated _significantly_ over
the year. I guess it depends on what you're actually planning to do with
the RAM once you've got it, but it doesn't look like too bad a deal to me
(rough arithmetic in the sketch below).

2\. That depends on your VPS host, really.
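
For what it's worth, the arithmetic behind point 1 fits in a few lines of
Python; the figures are the ones quoted above (2011 pricing), so substitute
your own:

      # Back-of-the-envelope rent-vs-buy, using the figures quoted above.
      on_demand_per_hour = 2.00
      reserved_per_year = 5300
      dimms_only = 2000  # "a couple of thousand just for the DIMMs"

      hours_per_year = 24 * 365
      print(f"On-demand, full year: ${on_demand_per_hour * hours_per_year:,.0f}")
      print(f"Reserved, full year:  ${reserved_per_year:,}")
      print(f"DIMMs alone, bought:  ${dimms_only:,} (plus box, power, upkeep)")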

~~~
axod
2\. Show me a VPS host that has cheap bandwidth.

For example, let's say we need 10TB of transfer:

      Slicehost: $3,000 ($0.30/GB !!! WTF are they smoking?)
      Amazon:    $1,250 (assume 5TB in, 5TB out, in the US)
      Linode:    $1,000 ($0.10/GB; good for a VPS, but still crazy)

      Typical dedicated server: $99 with 10TB included.

Amazon is ridiculously expensive. I know it's really "hip" to use it, but
it's throwing money away. I guess if it's VCs' money then who cares, so
long as you get to say "We're cloud hosted!!!".

Bandwidth pricing on VPS hosts is just crazily expensive.
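
The per-GB maths behind that list, as a quick Python sketch (the 2011 rates
quoted above; note the figures assume decimal terabytes, 1TB = 1,000GB, and
the Amazon in/out split is the one that reproduces the $1,250 figure):

      # Reproducing the 10TB price list above from the per-GB rates.
      transfer_gb = 10 * 1000  # 1TB = 1,000GB in the figures above

      for name, per_gb in [("Slicehost", 0.30), ("Linode", 0.10)]:
          print(f"{name}: ${transfer_gb * per_gb:,.0f}")

      # Amazon billed inbound and outbound separately in 2011; 5TB each
      # way at $0.10/GB in and $0.15/GB out gives the $1,250 above.
      print(f"Amazon: ${5 * 1000 * (0.10 + 0.15):,.0f}")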

~~~
tasaro
10TB/month averages out to 32Mbps. From what I've heard, try using anywhere
near that on your typical "all you can eat" provider and you'll either be
QoS'd enough to never achieve it or you'll suddenly be in violation of some
finely printed Terms of Service.
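
The conversion, for anyone who wants to check it:

      # 10TB/month spread evenly over a 30-day month, in megabits/sec.
      bytes_per_month = 10 * 1000**4     # decimal terabytes
      seconds_per_month = 30 * 24 * 3600
      print(f"{bytes_per_month * 8 / seconds_per_month / 1e6:.0f} Mbps")  # ~31

Roughly 31Mbps sustained, so the same ballpark as the figure above.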

~~~
axod
I do around 5Mbps on dedicated servers without issue; I've peaked at
100Mbps without too much fuss.

------
uggedal
Though a bit dated, I wrote a performance comparison of some of the
providers mentioned in this article:
<http://journal.uggedal.com/vps-performance-comparison>

------
latch
Wrote an introduction to hosting a while ago if anyone's interested:
<http://openmymind.net/2010/10/26/An-Introduction-To-Hosting>

Goes beyond VPSs and tries to look at the different hosting options
typically available.

------
fleitz
VPSs are great if you need less than the resources of one server, need a
number of machines for a short period of time, or have tasks that are not
disk-IO intensive.

All that work you did to make your IO sequential goes to waste as soon as you
put it on a VPS. It will be interesting to see if SSDs overcome the typical
VPS limitations.

~~~
btmorex
I'd say VPSs are great if you're CPU bound, okay if you're memory bound, and
terrible if you're disk IO bound.

Basically, CPU time is an abundant resource that's easily partitioned, but can
be shared. So, you're guaranteed your share, but you often actually get more
than your share.

Memory is strictly partitioned. You always get your share, no more, no less.

Disk is almost impossible to partition fairly (performance, not space).
Furthermore, the more active disk users there are, the worse the total
performance gets, so if you're unlucky enough to have a neighbor that does
a lot of disk IO, then you effectively get double screwed.

What would be interesting is if someone came up with a VPS with dedicated disk
resources, but as far as I know no one has done that.

~~~
tres
For your better-quality VPS providers, memory isn't really an issue because
memory is dedicated. For your old-time OpenVZ/Virtuozzo low-end box, that's
not the case. You can oversell every resource on an OpenVZ box... Back in
the day, SWsoft was claiming you could provision hundreds of VPSs on a
32-bit server. I never saw anything close to that, but I've seen some
amazingly oversold servers struggling to keep up with disk I/O.

One of the nice things about Xen is that disk I/O can be controlled
somewhat, because each domU has a specific process in dom0 for disk access.
So you can ionice things & provide somewhat more controlled access to the
disk.
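
A minimal sketch of that idea, assuming the domU's backend shows up in dom0
as a kernel thread named blkback.<domid>.<device> (the naming varies across
Xen versions, so check ps on your own dom0 first; the domU ID here is
hypothetical):

      import subprocess

      # Hypothetical domU ID; adjust the pattern to match your dom0.
      domid = 12
      pids = subprocess.run(["pgrep", f"blkback.{domid}."],
                            capture_output=True, text=True).stdout.split()
      for pid in pids:
          # ionice -c 3 puts the thread in the idle I/O class, so it
          # only gets disk time when nothing else is waiting.
          subprocess.run(["ionice", "-c", "3", "-p", pid])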

~~~
sparky
Memory capacity is typically dedicated (e.g., Linode), but memory bandwidth is
difficult to allocate statically, and can be a huge problem even if you're
fine on capacity. For example, a simulator I like to run has a 50-60MB working
set, much larger than on-chip cache but well under my allocated 512MB of RAM.
However, other concurrent users can disproportionately use up DRAM bandwidth,
depending on their access pattern and the OS scheduler.
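
A crude way to see the effect, as a minimal sketch (the 64MB buffer is an
arbitrary size chosen to blow past on-chip cache, like the simulator's
working set; run it on a quiet box and a busy one and compare):

      import time
      import numpy as np

      # Stream over a buffer much larger than cache and time it; the
      # effective GB/s drops when neighbours are hammering DRAM.
      buf = np.zeros(64 * 1024 * 1024 // 8)  # 64MB of float64
      best = float("inf")
      for _ in range(10):
          start = time.perf_counter()
          buf += 1.0  # one read and one write per element
          best = min(best, time.perf_counter() - start)

      print(f"~{2 * buf.nbytes / best / 1e9:.1f} GB/s effective")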

~~~
tres
I've never seen a hardware node get anywhere near saturating the bus before
disk I/O becomes an issue. So yes, there is a potential for saturation;
however, I'd be really happy if that were my capacity bottleneck.

------
epynonymous
I use VPSs for development and beta environments. Anything production
quality needs to go on physical hardware.

~~~
latch
needs? Is there some type of law for this where you live?

Have you seen the list of _production_ sites running on EC2?
(<http://aws.amazon.com/solutions/case-studies/>) Linode also has some pretty
impressive sites...linode.com probably being the most obvious one.

------
bradleyland
This really has nothing to do with the content of the article, but it's one
of the reasons I respect Ars so much. There is not one mention of "cloud"
in the entire article outside of vendors' specific product names. In other
words, Ars' authors and editors don't play buzzword bingo.

