

With Software, Small is the new Big - meghan
http://blog.wordnik.com/with-software-small-is-the-new-big

======
jules
So you re-architected your application and ran it on different hardware, and
it performed better. I wonder whether that's due to the re-architecting or due
to the hardware, since generally AWS hardware is extremely expensive for what
you get. It would be interesting to see how well the new architecture performs
on beefy dedicated boxes for the same money as the AWS hardware, especially
since you can get a beefy dedicated server for less than a standard EC2
instance.

~~~
fehguy
There is no question that physical machines are faster than VMs. The main
issues are fixed cost, burst capacity, multi-datacenter deployments and linear
scaling. That was the main point of the migration. There is nearly zero unused
capacity now--that is a very tough thing to achieve with physical machines.

------
mhd
That sounds more like "with servers, small is the new big" to me. From the
looks of it, it doesn't sound like the (custom) software itself got any
smaller.

------
tlack
Though I like the idea of modularity, wouldn't having so many different
services running at once in completely independent compartments make it much
harder to monitor and manage the software as a whole? I feel like that's a
whole lot of quirks interacting in the course of a given transaction.

~~~
fehguy
Yes! For communication this is part of the motivation for developing Swagger.
For configuration & monitoring, our Caprica configuration tool keeps all the
servers talking to the right services. It's not terribly complicated but not
something to overlook. I'll blog about it soon.

------
pdhborges
You got 1/10th of the I/O performance, so you traded disk seeks for network
latency?

~~~
ww520
You can parallelize I/O with more servers. They split their one big data set
into multiple small data sets across many more cheap boxes. Each box may have
1/10th the I/O performance of the beefy one, but there is less contention, and
the setup is more fault tolerant and more scalable while delivering the same
overall performance. And it seems they found it cheaper as well.
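This is not Wordnik's actual code, just a minimal sketch of the idea, assuming a hash-partitioned key/value store: one big data set is split across many small boxes, so lookups hit independent disks in parallel instead of contending on one.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

NUM_SHARDS = 8  # assumption: many cheap boxes instead of one beefy one

def shard_for(key: str) -> int:
    """Stable hash so a given key always lands on the same box."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# stand-in for each small server's local data store
shards = [dict() for _ in range(NUM_SHARDS)]

def put(key, value):
    shards[shard_for(key)][key] = value

def get(key):
    return shards[shard_for(key)].get(key)

def multi_get(keys):
    # each lookup can run against a different box concurrently,
    # so aggregate throughput scales with the number of shards
    with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
        return list(pool.map(get, keys))

put("apple", 1)
put("banana", 2)
print(multi_get(["apple", "banana"]))  # [1, 2]
```

Any single shard is slow, but with enough of them the reads never queue up behind each other, which is the "same overall performance" argument.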

------
nchuhoai
Interesting, usually you see people move away from EC2 over time with growth

~~~
fehguy
Netflix is an example of the opposite.

But it is worth noting that a great model is a hybrid physical/cloud, once you
have established predictable, steady load.

------
karol314
Modularization rediscovered.

------
j45
Hmm, building apps to be service oriented instead of object oriented....
because the web is service oriented.

I've noticed a few times that some of the problems we run into with web apps
are due to OO designs not lending themselves to an SOA-friendly architecture.

Why not have a bunch of little heroku apps? For most web apps that start out
small this isn't a big deal, and you can grow the sub-apps that need it.

------
huggyface
_If you wrote software to take advantage of monster physical servers, it will
almost certainly fail to run efficiently in the cloud._

I find this statement kind of incredible when the solution for the terrible
I/O of EC2 is to spin up ranks and ranks of virtual machines, trying to make
up the difference in aggregate. That is efficiency?

When I have a problem with I/O, I improve I/O: A gorgeous cluster of Nimble
SANs with some supporting local Fusion IO cards. When I have a problem with
memory or caching servers, I add memory or caching servers (just got a
relatively low cost _dev_ server with 192GB...just incredible). When I need
more processing power, I add more processing power. Just added some Xeon E5s
to the mix, and boy do they set new thresholds of power.

That's the world of controlling my own hardware.

This story is really one about restrictions forcing a more efficient platform.
Yet you don't _need_ restrictions to have an efficient platform, and the two
are only loosely correlated. This is the story of the alcoholic cheering on
prohibition without which they couldn't contain themselves.

~~~
fehguy
And when you need another data center, you shell out major coin. That's what
we needed to avoid.

At some point, even your finest physical server has limits. If you can split
the work up into smaller pieces that execute in "parallel" fashion, you have a
more scalable architecture. This holds true in VMs as well as physical
servers. Think map reduce, twitter blender, or nearly any parallel system.
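As a toy illustration of that split-the-work idea (my sketch, not anything from the article): word count in map-reduce style, where each chunk of input could just as well live on its own small VM.

```python
from collections import Counter
from functools import reduce
from concurrent.futures import ThreadPoolExecutor

def map_phase(chunk: str) -> Counter:
    # each worker counts words in its own chunk, independently of the others
    return Counter(chunk.split())

def reduce_phase(a: Counter, b: Counter) -> Counter:
    # merging partial counts is associative, so the merge parallelizes too
    return a + b

chunks = ["the quick brown fox", "the lazy dog", "the fox again"]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(map_phase, chunks))

totals = reduce(reduce_phase, partials, Counter())
print(totals["the"])  # 3
```

No single step needs a monster server; adding chunks (and workers) is how you scale.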

Rope is strong because it has many threads.

~~~
huggyface
_Rope is strong because it has many threads._

Rope made out of wet tissue paper is not strong. Rope made out of threaded
rebar is very strong.

You are presenting a somewhat absurd dichotomy -- that having good servers
precludes parallel operations or scalability. I thought we had discarded that
sort of silly foundation a few years ago.

