
Heroku Isn't for Idiots - niallsmart
http://rdegges.com/heroku-isnt-for-idiots
======
pbiggar
I love this post, and hope it keeps coming up. Some day, developers will learn
that even though they could do it as well as Heroku, given enough time and
effort, it's just a lot cheaper not to!

This comes up all the time with my company. We make hosted Continuous
Integration (<https://circleci.com>), and often hear "can't I just set up
Jenkins?". And the answer is the same, "you could, but ...". Run it on EC2,
where your tests fail because the IO is bad? What about when three people push
at once and you want to get results ASAP? Are you going to manually compile
Postgres 9.1? And again when you add a second box to your cluster?

I could go on.

~~~
heretohelp
The persistence scaling story for Heroku seems pretty questionable to me. Once
you've maxed out what they offer for MySQL and Postgres, what exactly are you
supposed to do? Start using Amazon RDS?

Heroku seems to be like a more useful Google App Engine, a good place to host
a blog or experimental project if you're not into dev-ops.

If you have a knack (at all) for dev-ops, you're not saving yourself anything.

The downtime is pretty bleh too. The moment you start doing multi-provider to
offset this, you'll end up doing all the dev-ops work you would've had to do
before. Except now, you have to do it all at once, at what is probably a time
of high stress.

If you do the dev-ops/automation yourself from the start, you can start
small/simple and grow that as you go, deploying your services to arbitrary
hosting providers (EC2, Linode, dedicated boxes, whatever).

This is why whenever anybody asks me my opinion of Heroku, I respond, "it's a
great place to host that blog engine you wrote in
Haskell/Clojure/{hipster_language_of_choice}".

~~~
j-kidd
> The persistence scaling story for Heroku seems pretty questionable to me.

Agree. The people who tell horror stories about Heroku/EC2 usually give
solid numbers, i.e. this was how much I spent, this is how much I saved by
moving away, and our response time is now X% faster.

On the other hand, we have an article like this that shows a pretty graph for a
web app serving 16.2 requests per minute, and makes bold claims that everything
will scale.

~~~
benologist
Here's my pretty graph for serving ~500,000 requests per minute on Heroku:

<https://api.playtomic.com/load.html>

Savings aren't exclusively because of Heroku, I also switched the underlying
architecture during that migration from C# / ASP.NET to NodeJS which is
exceptionally well suited to what I'm doing there.

Previously: Dedicated servers, ~$1600 a month

8x dedicated servers at ~$200 each, each running 3 to 5 instances of the API
depending on how many IP addresses they were provisioned with. Uploading was
done via a simple hand-rolled script that'd just FTP everything to each
server.

Now: Heroku, current usage $400 - $500 a month

With Heroku I don't have to worry about concurrent connections (typically 200
- 400 thousand people at once), I don't have to maintain all those servers and
I don't have to fuck around with all the stupid things that can go wrong when
you're operating at scale.

It was a lot of work to get to this point and I made a lot of mistakes like
having a heavy redis pub/sub outside of the EC2 network that cost me $350 in
excess bandwidth providing inter-dyno communication, and I saturated database
connections lots of times because in the old days those dedicated servers each
had a local mongodb which could keep up with ordinary connection pooling, but
it was totally worth it.

~~~
zeeg
So let's assume that's $500 in dynos; that's approximately 14 dynos. You have
30k concurrent users per dyno using Node.JS?

It's not that I don't believe you, I just think you don't understand what
you're saying.

(Actually, it is that I don't believe you)

That said, if you're doing 500 requests/sec (that's very different from 500
concurrent users) per dyno, good for you. My main bottleneck was not
so much CPU on the web machines (I hit memory limitations), but the database
layer.
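A back-of-the-envelope version of that math (assuming a then-typical price of roughly $36/month per dyno; that figure is not stated in the thread):

```python
# Rough sanity check of the dyno math above.
# Assumption (not from the thread): one standard Heroku dyno cost ~$36/month.
DYNO_MONTHLY_COST = 36.0

spend = 500.0                      # claimed monthly spend on dynos
dynos = spend / DYNO_MONTHLY_COST  # about 14 dynos

concurrent_users = 400_000         # upper end of the claimed 200-400k
users_per_dyno = concurrent_users / dynos

print(round(dynos))           # 14
print(round(users_per_dyno))  # 28800
```

So the "30k concurrent users per dyno" figure follows directly from the claimed spend and user counts, under that price assumption.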

~~~
zeeg
    
    
        >>> 20000000 / 24 / 60 / 60.0
        231.48148148148148
    

~231 requests a second?

~~~
benologist
That's _people_, not requests. :)

~~~
zeeg
Is your graph showing concurrent (as in active connection) users?

~~~
benologist
The graph shows http requests, each individual person sends 'event.general...'
requests approximately 2x per minute.

In the last minute ~460,000 of those requests were made which means somewhere
around 230,000 people sent data, although there's room for that to be higher or
lower depending on sessions starting, sessions ending, and just what period of
time you want 'concurrent' to live within.
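The requests-to-people conversion described above, spelled out:

```python
# Convert requests per minute to people, using the rate stated above.
requests_last_minute = 460_000
requests_per_person_per_minute = 2  # each person sends ~2 requests/minute

people = requests_last_minute // requests_per_person_per_minute
print(people)  # 230000
```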

------
norswap
Previous discussion here: <http://news.ycombinator.com/item?id=4062364>

------
aaronbrethorst
I'd love to see an example of someone saying Heroku is for idiots. Although
it's a bit pricey[1], I love it to pieces because it lets me focus on what I
like to do: shipping product.

That said, I'd be delighted if Heroku would introduce a high-memory dyno. I've
been working on something for the past few days where their soft'ish 512MB cap
has been biting me in the ass.

[1] Assuming you value your time at something around $0/hour.

edit: thanks for the link!

~~~
wmf
This rant is a reply to
<http://justcramer.com/2012/06/02/the-cloud-is-not-for-you/>, which is indeed
trashing Heroku.

~~~
zeeg
I wasn't aiming to trash Heroku specifically, but the post by Randall is very
uninformed.

------
fusiongyro
This is just like the garbage collection versus manual memory management
debate.

------
benologist
I also save significant time and money using Heroku and it's awesome being
able to scale up and down automatically using HireFireApp. Most of this
article rings true with my experience, however some of it doesn't feel that
valid for me, running a high-volume NodeJS app, not to mention the entire
"Let's talk about bad ideas" section is just lame.

1) If your app crashes, it takes _ages_ to restart dynos "automatically";
there's nothing at all instant about it, and if it's a bug and you have a high
volume of requests it's going to hit every dyno, which means you are offline.

2) Performance can be variable and it can be hard to be sure an optimization
has done anything. This will be easier when New Relic supports NodeJS but
right now you're stuck using less elegant solutions. It is shared hosting and
it's not necessarily anybody's fault if something is slow, and I know this
because the same requests frequently have orders of magnitude difference in
response time for me.

    
    
        dyno=web.10 queue=0 wait=0ms service=3ms status=200 bytes=25
        dyno=web.7 queue=0 wait=0ms service=207ms status=200 bytes=28
    

Between Heroku and NodeJS I run my API server usually on just 8 dynos doing
6,000 - 10,000 requests per second and having come from C# and dedicated
Windows servers it is a dream - nothing to maintain and easy deployment and
easy debugging on Heroku's side, and NodeJS is just amazing once you start
realizing what's possible with it.
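For context, those figures work out to the following per-dyno throughput (simple division, nothing Heroku-specific):

```python
# Per-dyno request rate implied by the numbers above.
dynos = 8
for total_rps in (6_000, 10_000):
    print(total_rps / dynos)  # 750.0, then 1250.0 requests/sec per dyno
```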

~~~
rapind
I use Hirefire as well for several sites. For the most part I like it, however
it can be awfully slow to react to traffic spikes. I highly recommend leaving
a bit of a buffer (few extra dynos) during prime time traffic.

~~~
benologist
You are completely correct, the default settings are not good and it will
aggressively scale your app down too far if you let it.

------
liveoneggs
<quote> Heroku is Just Unix

At its core, Heroku is just a simple unix platform; specifically, Ubuntu 10.04
LTS. </quote>

I just threw up a little

~~~
rbanffy
Why?

~~~
orthecreedence
I'm going to guess it's because Ubuntu was called "unix" (which actually
raised my eyebrow as well). Although similar, unix != linux, and Ubuntu
especially != unix, as opposed to a distro like Slackware, which is more or
less unix with a linux kernel slapped in.

I got what the author meant though, and it's not that big a deal.

~~~
rbanffy
There is "Unix", an operating system that originated at AT&T, and "unix", a generic
name that usually refers to a family of operating systems that are based more
or less on the same ideas. Linux is a lot closer to AT&T's ideas of Unix than
other certified Unixes like OSX and AIX. Linux is not Unix, but it certainly
is a unix.

When I really want to annoy my BSD friends, I call it a "Linux-like" operating
system. It never fails.

------
jeremyjh
"Each instance (Heroku calls them dynos), has:

512MB of RAM, 1GB of swap. Total = 1.5GB RAM. 4 CPU cores (Intel Xeon X5550 @
2.67GHz)."

That is very interesting. So you get 200% of the CPU from a c1.medium but only
1/6 that memory? For 1/8 the price that Amazon charges? Maybe this is the
new math I keep hearing about.

~~~
wmf
I would guess that Heroku runs ~29 dynos per m1.xlarge, so each dyno gets a
minimum of ~1/7th of a core or ~366 MHz. Just because you can see 4 cores
doesn't mean you can use them.
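Spelling out that estimate (using the 4 × 2.67 GHz cores quoted from the article, and the ~29-dynos-per-instance guess above; the small difference from ~366 MHz is just rounding):

```python
# The per-dyno CPU-share estimate above, made explicit.
cores = 4                # per instance, from the article's Xeon X5550 figure
clock_mhz = 2670         # ~2.67 GHz per core
dynos_per_instance = 29  # the guess above

dynos_per_core = dynos_per_instance / cores  # 7.25, i.e. ~1/7th of a core each
min_share_mhz = clock_mhz / dynos_per_core
print(round(min_share_mhz))  # 368
```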

~~~
exogen
One day I noticed that Heroku gives you access to netstat, so I started trying
to figure out the actual number.

Running this:

    
    
        heroku run "netstat -l | grep lxc | wc -l"
    

...seems to imply that it's actually 100 ± 25 dynos per instance.

(It's possible I could be entirely misunderstanding the netstat output, in
which case someone please speak up.)

------
juniorer
Honest question: Why do startups continue to set up their own infrastructure on
AWS with ops? Scaling? Cost?

------
npguy
"Heroku Isn't for Idiots " But going by current trends there could soon be a
"Heroku for Idiots" :-)

