
I already agreed that Rails' architecture is bad (though the reason it causes this problem is its memory usage, not any of the other reasons you mention). Heroku's architecture is bad as well. It's the combination of the two that causes the problem. But that does not mean it's impossible, or even hard, to solve the problem at Heroku's end.

> I'm not sure there are such providers, and if there aren't, I think it's safe to point the finger towards Rails.

This is not sound logic. I described above two methods for solving the problem: (1) increase the memory per Dyno (see below: they're doing this, going from 512MB to 1GB per Dyno IIRC, which, although still low, will be a great improvement if it means your app can now run 2 concurrent processes per Dyno instead of 1), or (2) do intelligent routing for small groups of Dynos. Do you understand the problem with random routing, and why either of these two would solve it? If not, you might find the paper I linked to previously very interesting:

"To motivate this survey, we begin with a simple problem that demonstrates a powerful fundamental idea. Suppose that n balls are thrown into n bins, with each ball choosing a bin independently and uniformly at random. Then the maximum load, or the largest number of balls in any bin, is approximately log n / log log n with high probability. Now suppose instead that the balls are placed sequentially, and each ball is placed in the least loaded of d >= 2 bins chosen independently and uniformly at random. Azar, Broder, Karlin, and Upfal showed that in this case, the maximum load is log log n / log d + Θ(1) with high probability [ABKU99].

The important implication of this result is that even a small amount of choice can lead to drastically different results in load balancing. Indeed, having just two random choices (i.e., d = 2) yields a large reduction in the maximum load over having one choice, while each additional choice beyond two decreases the maximum load by just a constant factor."

-- http://www.eecs.harvard.edu/~michaelm/postscripts/handbook20...
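The "least loaded of d choices" dispatch from the quote is simple to express. Here's a minimal sketch of method (2), assuming a hypothetical `queue_depth` attribute per dyno; Heroku's actual router does not expose such an interface, and the `Dyno` class and names are invented for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class Dyno:
    name: str
    queue_depth: int  # requests currently waiting on this dyno (assumed visible)

def route_request(dynos, rng, d=2):
    # Sample d dynos uniformly at random and send the request to the
    # least loaded of them; d=1 degenerates to purely random routing.
    candidates = rng.sample(dynos, d)
    return min(candidates, key=lambda dyno: dyno.queue_depth)

rng = random.Random(0)
dynos = [Dyno(f"web.{i}", rng.randrange(5)) for i in range(10)]
chosen = route_request(dynos, rng)
```

Note the router only needs load information for the d sampled dynos per request, not a globally consistent view of the fleet, which is what makes this practical for small groups of Dynos.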

I understand that one approach to dispatching requests at the load balancer is superior to the other, just as I understand that one way of absorbing requests at the app server is better than the other.
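The gap between the two dispatch approaches is easy to reproduce with a toy balls-into-bins simulation (requests as balls, dynos as bins, all parameters invented for illustration):

```python
import random

def max_load(n, d, rng):
    # Throw n balls into n bins; each ball goes to the least loaded
    # of d bins chosen uniformly at random (d=1 is random routing).
    bins = [0] * n
    for _ in range(n):
        i = min((rng.randrange(n) for _ in range(d)),
                key=lambda k: bins[k])
        bins[i] += 1
    return max(bins)

rng = random.Random(42)
n = 100_000
one_choice = max_load(n, 1, rng)   # grows like log n / log log n
two_choices = max_load(n, 2, rng)  # grows like log log n / log 2
```

With n = 100,000 the one-choice maximum load is typically around twice the two-choice maximum, matching the log n / log log n versus log log n / log d behavior in the quoted result.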

Most things are inferior to other substitutable things! :)


That's a mild way of putting it. With the current way of dispatching requests, you need exponentially many servers to handle the same load at the same queuing time if your application uses too much memory to run multiple instances concurrently on a single server.
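One way to see the queuing-time effect at a fixed fleet size is a toy model (not Heroku's actual dispatcher; it assumes one single-threaded Rails process per dyno, all requests arriving at once, and unit service time):

```python
import random

def mean_wait(n_dynos, n_requests, d, rng):
    # Each single-threaded dyno serves its queue at one request per
    # time unit, so a request landing behind j others waits j units.
    # Routing: least loaded of d randomly sampled dynos (d=1 = random).
    queues = [0] * n_dynos
    total_wait = 0
    for _ in range(n_requests):
        i = min((rng.randrange(n_dynos) for _ in range(d)),
                key=lambda k: queues[k])
        total_wait += queues[i]  # requests already ahead of this one
        queues[i] += 1
    return total_wait / n_requests

rng = random.Random(7)
random_wait = mean_wait(10_000, 10_000, 1, rng)   # random routing
smarter_wait = mean_wait(10_000, 10_000, 2, rng)  # two choices
```

Under this model the two-choice dispatcher cuts the mean wait roughly in half at the same dyno count; equivalently, the random dispatcher needs more dynos to match the smarter one's queuing time.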

