
So basically nothing has gone wrong yet, therefore it's a slam dunk case for Doing It Yourself.

I'm sorry, but no.

You don't pay an ops guy or Heroku buckets of $$ for when things are going well, just as you don't pay $$ for software that only handles the happy case.

You pay $$ for someone who has fixed shit that went horribly wrong and has the scars to prove it. "That deep purple welt on my lower ego ... is where I only had a backup script and never tested the backups. This interesting zigzag is where I learnt about things that can go wrong with heartbeat protocols ..."

Edit: though see below for a more nuanced discussion of reasons from OP.



You're saying before Heroku (or the cloud) everyone paid an operations guy to do anything?

Nope.

I don't need to suffer the same acts of terror that operations people have gone through in order to avoid, prevent, or recover from them. I'm paying myself to be the operations guy, as well as the engineer.


It's about value delivered and opportunity cost paid.

You're betting foregone engineering time against Heroku's value-as-delivered.

I don't think that the hours you will inevitably spend fixing the ops infrastructure will turn out to be very profitable.

For a large business, moving off Heroku to inhouse operations is probably justifiable, because they can capture sufficient value from a smart ops team to offset the potentially very high monthly bill Heroku would levy for their business.

But for a smaller firm, Heroku so abundantly oversupplies value compared to the bill that you will simply never be able to capture that value from internal work at anything like the same price.

It's like saying "I should stop buying food from the supermarket, it would be cheaper to grow my own vegetables!"

In a naive dollar-cost analysis this is true. But once you actually account for the risk of going hungry, the poor results of inexperience, and the hundreds of hours of labour a vege patch requires, letting a professional farmer do it is much smarter.

Gains from trade are gains.

I have a pretty strong opinion on this given how much time I've already sunk on similar work: http://chester.id.au/2012/06/27/a-not-sobrief-aside-on-reign...

And for my secondary startup I am thinking I will just let Heroku handle it.


[deleted]


The blog post doesn't explicitly mention the performance problems that you are now saying prompted your move.

Then you go on to say it was simple and talk about how much cheaper it is.

I feel like I was replying to what you said in the first instance, not the more interesting underlying cause.

Did you talk to Heroku about your performance problems? I'd be interested to see how much leeway they would grant.

Edit: for people wondering why the comment was deleted, I think because zeeg accidentally replied to me instead of a different poster; nothing silly going on.


I talked to them (the people I knew) a bit, but the problems were mostly characteristics of the app.

So various events were like this:

* Perf problem w/ the code (e.g. didn't handle this kind of spike)

* Perf problem with the service (e.g. had the $200 db instead of the $400 one)

* Couldn't max CPU due to lack of memory

* Couldn't max CPU due to IO issues (db)

* Couldn't maintain a reasonable queue (had to use RedisToGo, which is far from cheap)

The biggest one I couldn't get around was that my queue workers required too much memory to operate (likely because they deal with larger JSON loads). Too much was like 600MB (or something along those lines) total on the Dyno (not just from the process). I routinely saw "using 200% of memory" etc. in the Heroku logs, and that's when things would start going downhill.

Things could have been a lot better if I had more insight into the capacity/usage on Dynos (without something like NewRelic, which doesn't expose it well enough).
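For what it's worth, a rough sketch of the kind of visibility I mean, assuming a Python worker on Linux (where ru_maxrss is reported in kilobytes): have each job log its own peak RSS, so memory growth at least shows up in your own logs.

    import logging
    import resource

    log = logging.getLogger("worker")

    def log_peak_rss(job_name):
        # On Linux, ru_maxrss is the process's peak resident set size in KB.
        peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
        log.info("job=%s peak_rss_mb=%.1f", job_name, peak_kb / 1024.0)

    # Call at the end of each queue job, e.g.:
    #   process_large_json(payload)
    #   log_peak_rss("process_large_json")

It's crude (peak RSS only, and only for that process), but it's often enough to spot which jobs blow past a Dyno's limit.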

A great analogy for me is this:

If SQL isn't scaling there are several options:

1. Stop using it (switch to another DB)

2. Shard it

3. Buy better hardware

Guess which one we always go to first? :)


Thanks, this makes more sense to me.

Looks like I misread your purpose.


Speaking of backups, you might want to run wal-e:

https://github.com/heroku/wal-e

It is not the best program I ever principally authored (since I know you are a very discriminating Python programmer), but it does work very well and has an excellent reliability record so far, seemingly for both Heroku and users of wal-e, per most reports. I also tried quite hard to make the program easy to use and administer, from one server and one operator up to many servers and one operator, and on that front I'm pretty satisfied, given both the feedback I've received and my personal experience.

If someone is not doing continuous archiving and can't use our service for any reason, we do try to urge them to use something like wal-e or any of the other continuous archiving options. And to that end, I tried to make a pretty credible wal-e set-up only take a few minutes for someone who already knows how to install Python programs (I would love for someone to contribute more credible packaging).
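To make that concrete, here is a rough sketch (my assumptions, not the project's documented install guide) of driving wal-e from Python once it's installed: PostgreSQL's archive_command streams WAL segments via "wal-e wal-push %p", and a scheduled job pushes base backups and prunes old ones. The bucket prefix, data directory, and credentials below are placeholders; in a real setup they'd normally live in an envdir rather than in code.

    import os
    import subprocess

    # Assumption: wal-e is on PATH, and postgresql.conf already has
    #   archive_mode = on
    #   archive_command = 'wal-e wal-push %p'   (usually wrapped in envdir)
    # so WAL segments stream continuously; this script only handles base backups.

    PGDATA = "/var/lib/postgresql/9.1/main"  # placeholder: your data directory

    env = dict(
        os.environ,
        WALE_S3_PREFIX="s3://example-backups/my-db",  # placeholder bucket/prefix
        AWS_ACCESS_KEY_ID="replace-me",               # normally supplied via envdir,
        AWS_SECRET_ACCESS_KEY="replace-me",           # not hard-coded like this
    )

    # Push a full base backup to S3.
    subprocess.check_call(["wal-e", "backup-push", PGDATA], env=env)

    # Prune: keep the five most recent base backups (and the WAL they need).
    subprocess.check_call(["wal-e", "delete", "--confirm", "retain", "5"], env=env)

Run something like that nightly from cron and you have the base-backup half; the archive_command takes care of the rest.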


Actually before you embark on such things, you should probably be "the scarred person who knows how to fix shit that goes wrong". Then you know which things are likely to hurt you and you can avoid them.

To be in control of your venture means knowing all the corner cases.

I'd trust dedicated hardware more than Heroku, because when it does go wrong you're at the mercy of yourself rather than of others (other than the colo facility).



