dkuebric 383 days ago | link | parent

Yeah, I had the same thought when the discussion came around last time. I actually took the source to the simulations (thanks for sharing, rapgenius!) and did some experiments on how random routing performs if it's routing to backends that can handle various #s of concurrent requests.

Here's the writeup: https://www.appneta.com/2013/02/21/the-taming-of-the-queue-m...

(Spoilers: turns out it gets a lot better very quickly.)
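Not the actual simulation code from the writeup, just a minimal sketch (Python) of the kind of experiment it describes: each request goes to a uniformly random backend, each backend serves up to `concurrency` requests at once, and anything beyond that waits in that backend's queue. All numbers (total slots, arrival rate, service time) are made up for illustration.

    import heapq
    import random

    def simulate(n_backends, concurrency, n_requests=100000,
                 arrival_rate=3000.0, service_time=0.05):
        in_flight = [0] * n_backends   # requests currently being served, per backend
        waiting = [0] * n_backends     # requests queued behind a busy backend
        completions = []               # min-heap of (finish_time, backend)
        t = 0.0
        queued = 0

        for _ in range(n_requests):
            t += random.expovariate(arrival_rate)        # Poisson arrivals
            # retire anything that finished before this arrival
            while completions and completions[0][0] <= t:
                done, b = heapq.heappop(completions)
                if waiting[b] > 0:                       # a queued request takes the slot
                    waiting[b] -= 1
                    heapq.heappush(completions, (done + service_time, b))
                else:
                    in_flight[b] -= 1
            b = random.randrange(n_backends)             # random routing
            if in_flight[b] < concurrency:
                in_flight[b] += 1
                heapq.heappush(completions, (t + service_time, b))
            else:
                waiting[b] += 1
                queued += 1
        return queued / n_requests

    # same total capacity (200 slots), increasing concurrency per backend
    for k in (1, 2, 4, 8):
        frac = simulate(n_backends=200 // k, concurrency=k)
        print("%d slots/backend: %.1f%% of requests queued" % (k, 100 * frac))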



thu 383 days ago | link

Thanks for the writeup. It seems Heroku could easily implement a two-layer routing mesh to accomplish what you describe, though.

-----

jcampbell1 383 days ago | link

I think that is the wrong conclusion. You still have the problem of designing a system where the dynos communicate their queue depth to the second layer of the routing mesh.

The solution is to have dynos large enough that you can run 8+ workers and let the operating system do all the complex scheduling.
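For what it's worth, this is roughly the pre-fork pattern, and it's easy to sketch in Python: the parent opens the listening socket once, forks N workers that all accept() on it, and the kernel hands each connection to whichever worker is idle. The worker count and the canned response below are illustrative; a real dyno would run unicorn/gunicorn-style workers instead.

    # One "big" process, several workers, OS does the scheduling.
    import os
    import socket

    N_WORKERS = 8
    PORT = int(os.environ.get("PORT", 5000))

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", PORT))
    listener.listen(128)

    def worker():
        while True:
            conn, _addr = listener.accept()   # kernel picks which idle worker wins
            conn.recv(65536)                  # read (and ignore) the request
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
            conn.close()

    for _ in range(N_WORKERS):
        if os.fork() == 0:    # child: serve forever
            worker()

    for _ in range(N_WORKERS):
        os.wait()             # parent just babysits the children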

-----

thu 383 days ago | link

> and let the operating system do all the complex scheduling.

So Heroku could spawn workers with a $FD environment variable instead of $PORT, and then the "complex scheduling" done by the OS _is_ the second routing layer.
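A sketch of what the worker side of that could look like, assuming a hypothetical FD environment variable holding the file descriptor of an already-bound listening socket (not something Heroku actually provides; it's the same trick as systemd-style socket activation):

    # Hypothetical worker: accept on an inherited socket instead of binding $PORT.
    import os
    import socket

    fd = int(os.environ["FD"])           # FD is assumed, not a real Heroku variable
    listener = socket.socket(fileno=fd)  # wrap the inherited descriptor, don't re-bind

    while True:
        conn, _addr = listener.accept()  # contention resolved by the kernel --
        conn.recv(65536)                 # this is the "second routing layer"
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        conn.close()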

But really, they could still do second-level routing even outside a single OS: the scale of the distribution is much smaller, so having the routing mesh be aware of worker availability seems feasible again.
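If that second layer sits outside a single OS, "aware of worker availability" could be as simple as least-busy dispatch: track in-flight requests per worker and always pick the idlest one. A toy Python sketch (names and structure are mine, not anything Heroku ships):

    class LeastBusyRouter:
        def __init__(self, workers):
            self.in_flight = {w: 0 for w in workers}

        def pick(self):
            # route to the worker with the fewest requests in flight
            worker = min(self.in_flight, key=self.in_flight.get)
            self.in_flight[worker] += 1
            return worker

        def done(self, worker):
            self.in_flight[worker] -= 1

    router = LeastBusyRouter(["worker-%d" % i for i in range(8)])
    w = router.pick()   # dispatch the request to w ...
    router.done(w)      # ... and report back when it finishes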

-----



