Hacker News

I too was shocked at that. Also that 6 dynos is apparently the average size to handle that load.

It takes $179/mo (6 dynos) to handle 17 requests/second? That's insane.

Didn't intend to imply that. Number of dynos needed varies extremely widely, with the app's response time and language/framework being used as the main variables.

There are apps on Heroku that serve 30k–50k reqs/min on 10–20 dynos, typically written in something like Scala/Akka or Node.js and serving incredibly short (~30ms) response times with very little variation. But these are unusual.

The more common case of a website, written in non-threadsafe Rails, with median response times of ~200ms but 95th percentile at 3+ seconds, would probably use those same 10 dynos to do only a few thousand requests per minute. Whether or not you use a CDN and page caching also makes a big difference (see Urban Dictionary for an example that does it well).
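The gap between those two cases can be sketched with back-of-envelope math, assuming single-threaded dynos that each handle one request at a time (illustrative numbers, not Heroku measurements):

```python
# Back-of-envelope: steady-state requests/min a fleet of single-threaded
# dynos can sustain, given a mean response time. Illustrative only.

def capacity_rpm(num_dynos, mean_response_sec):
    """Max requests per minute if each dyno serves one request at a time."""
    per_dyno_rps = 1.0 / mean_response_sec
    return num_dynos * per_dyno_rps * 60

# Fast Scala/Node app: ~30ms responses on 10 dynos
print(round(capacity_rpm(10, 0.030)))   # 20000 req/min

# Typical Rails app: ~200ms median responses on the same 10 dynos
print(round(capacity_rpm(10, 0.200)))   # 3000 req/min
```

Same dyno count, roughly an order of magnitude difference in throughput, driven entirely by response time, which is the point about it being the main variable.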

But it really depends. We were trying to quantify when you should be worried. If you're running a blog that serves 600 rpm / 10 reqs/sec off of two dynos, you don't need to sweat it.

And if you get into any sort of slowness (say, Mongo decides to pull something from disk instead of RAM), it's instantly H12s all over the place and there is nothing you can do about it.

This comes back to visibility: knowing where the problem lies (especially when you're using a variety of add-on services or calling external APIs) and being able to understand what's happening, or what happened in retrospect.

Visibility is hard no matter where you run your app. But this is an area where Heroku can get a lot better, and we intend to.

Visibility is one part of the problem. 50k requests/min is only ~833/s. The reality is that a single dyno should be able to more than handle that sort of load, especially if it is a simple app. People are doing 10k connections on a single laptop, so 833/s should be a piece of cake. So, yes, visibility is a big issue here because you have no idea if you need 10, 11, 12 or 20 dynos to serve 50k requests/min. You just guess, and when you guess wrong, it ends up with a cascading failure of H12s and other issues. Never mind that very few apps have a steady stream of traffic and most have big dips depending on the time of day and HN popularity... and now we are back to the auto-scaling discussion.
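The arithmetic behind that can be checked with Little's law (in-flight requests = arrival rate × response time). A sketch with the thread's assumed numbers, not anything measured on Heroku:

```python
# Little's law: L = lambda * W
# (concurrent in-flight requests = arrival rate * response time)

requests_per_min = 50_000
arrival_rate = requests_per_min / 60      # ~833 req/sec
response_time = 0.030                     # assume 30 ms per request

in_flight = arrival_rate * response_time  # average concurrent requests
print(round(in_flight))                   # 25

# Spread over 10-20 dynos, that's only 1-3 requests in flight per dyno
# on average -- which is why an evented server (Node.js, Akka) could in
# principle absorb this load on far fewer machines.
```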

Another key part of your statement is 'with very little variation'. The code pretty much can't be doing anything other than serving up some static content, because as soon as it does anything that requires any sort of IO or CPU, the system is instantly thrown into H12 hell. Yes, a CDN will take load off your Heroku dynos, because god forbid your dyno actually do anything itself. Except that you forget that not all apps are webapps, and in my case there is no reason to add a CDN when I'm just serving requests and responses to an iPhone app.

The other part of the problem is being able to actually do something about it. I've tried anywhere between 50 and 300 dynos (yes, we got that limit increased). If we could just throw money at the problem, that would be one thing, but nothing was able to resolve the H12s that we see, and our paid support contract was no help either.

"If you're running a blog that serves 600 rpm / 10 reqs/sec off of two dynos, you don't need to sweat it."

Once again, we are back at the same conclusion... don't use Heroku if you want to run a production system.

How could one single-threaded dyno serve more than 833 requests/sec?

Where do you get that dynos are single-threaded? Please read: https://devcenter.heroku.com/articles/dynos#dynos-and-reques...

500–800 req/sec over 10–20 servers with ~30ms response times: roughly one concurrent request per server seemed plausible.

Thanks. I admit I'm not familiar with the platform.
