I've read through the OP, and all of the comments here. Our job at Heroku is to make you successful and we want every single customer to feel that Heroku is transparent and responsive. Getting to the bottom of this situation and giving you a clear understanding of what we’re going to do to make it right is our top priority. I am committing to the community to provide more information as soon as possible, including a blog post on http://blog.heroku.com.
Anyone who wants to like Heroku would hope that the OP is flat out, 100%, wrong. The fact that Heroku's official answer requires a bit of managing implies otherwise.
On a related tangent, I would also encourage future public statements to be a little less opaque than some that Heroku has put out previously.
For instance, the cause of the outage last year was attributed to "...the streaming API which connects the dyno manifold to the routing mesh". While that statement is technically decipherable, it's far from clear.
What we want to know:
- is the OP right or wrong? That is, did you switch from smart to naive routing, for all platforms, and without telling your existing or future customers?
- if you did switch from smart to naive routing, what was the rationale behind it? (The OP is light on this point; there must be a good reason to do this, but he doesn't really say what it is or might be)
- if the OP is wrong, where might his problems come from?
> What's the point of posting a link to the front page of your blog, where the most recent article is 15 days old (4 hours after the comment above)?
I think OP is saying 'I am going to investigate the situation; when I am finished here [the blog] is where I will post my response', not that there is something there already.
That said, it's all a little too PR-Bot for my taste (although there are probably only so many ways to say the same info without accidentally accepting liability or something).
Me, I'm the swarthy pirate. Arrrh.
Well, he promised a detailed blog post, at which point that link will be extremely helpful.
I do not think it is fair to expect an immediate detailed response to those questions. If I were CEO of Heroku, I wouldn't say anything definite until after talking to the engineers and product managers involved--even if I was already pretty sure what happened. The worst thing you could do at this point is say something that's just wrong.
But a link that doesn't point anywhere useful, introduced by a PR phrase that sounds a little like "Your call is important to us", was a little annoying, especially after reading the OP, where they say they have contacted Heroku multiple times about this issue.
Most probable cause: smart routing is hard to scale. Multiple routers, each doing random distribution independently of the others, will still produce a globally random distribution. No need for inter-router synchronization.
If multiple routers try smart routing, they must do quite a bit of state sharing to avoid situations where N routers try to schedule their tasks on a single dyno. And even if you split dynos between routers then you need to move requests between routers in order to balance them.
Managing a distributed queue is hard, for reasons similar to ones making the original problem hard - DQs require global state in a distributed environment. There are tradeoffs involved - the synchronization cost might become a bottleneck in itself.
Pushing the problem onto the distributed brokers is making a big bet on the queuing solution. Nope, definitely not in the "just use" category.
But they will end up building a pull rather than push system in the end.
We've always been of the opinion that queues were happening on the router, not on the dyno.
We consistently saw performance problems that we could tie down to a particular kind of user request (file uploads, for example, now moved to direct-to-S3), but we could never figure out why those requests would result in queuing given Heroku's advertised "intelligent routing". We mistakenly thought the occasional slow request couldn't create a queue... although the evidence pointed to the contrary.
Now that it's apparent that requests are queuing on the dyno (although we have no way to tell from what I can gather) it makes the occasional "slow requests" we have all the more fatal. e.g. data exports, reporting and any other non-paged data request.
I'm pissed. Spent way too much time unable to explain it to coworkers, thinking I just didn't understand Heroku's platform and that it was my fault.
Turns out, I didn't understand it, because Heroku never thought to clearly mention something that's pretty important.
Easiest fix: moving to EC2 next week. I've wanted to ever since our issues became evident but it's hard to make a good argument from handwaving about 'problems'.
Of course, then you need to solve all these problems yourself. That sounds pretty easy, you'll have it done next week no problem!
That was sarcastic, but this isn't: good luck, let us know how it goes.
But if you want to let the various PaaS providers put the fear into you, that's your cowardice.
Let the others learn as they may.
To clarify, Heroku is making the problem harder on themselves than it would be for an individual serving their own needs, because of the complexity of managing so many customers and apps.
You don't have to be Heroku to do for yourself what they offer.
I agree with this, actually. I know it's not simple to run your own servers when you're growing. Yet I'd rather improve my existing ops skills a bit than have to set up everything as async APIs (on EC2, anyway). That's the only way I can see that I can solve this.
You're going to discover that a lot of "ops skills" boils down to "do things asynchronously whenever possible". And while nearly any smart engineer can think of the "right" way of doing something, finding the time to do it all is a huge opportunity cost.
That's what the parent is trying to say. It's not that you can't do it; it's that it's a really bad idea to do it, at first.
This "bad idea" is how 90% of the web works...
I can only imagine how these guys must have been beating their heads against the wall. Heroku charges a premium price and should be providing a premium service.
Depending on the complexity of their setup, they COULD have it done next week no problem.
After all tens of thousands of other sites have. It's not like everybody except Google and Facebook is using Heroku.
"After reading the following RapGenius article (http://rapgenius.com/James-somers-herokus-ugly-secret-lyrics), we are reevaluating the decision to use Heroku. I understand that using a webserver like unicorn_rails will alleviate the symptoms of the dyno queuing problem, but as a cash-strapped startup, cost-efficiency is of high importance.
I look forward to hearing you address the concerns raised by the article, and hope that the issue can be resolved in a cost-effective manner for your customers."
Currently, though, I believe it's just fed as a number of milliseconds: https://github.com/newrelic/rpm/blame/master/lib/new_relic/a...
This solves the issue of the application seeing out-of-whack queue times if there's clock skew between the front-end routing framework and the actual dyno box, but misses all the queued time spent in the dyno-queue per rap genius's post.
There's a facility for that in the Agent to allow multiple copies of the header, using whichever came first (for the beginning) and whichever came last (for the end); it'd be relatively easy to hook metrics into each of those.
Why does it not use some kind of scheduling system to handle other tasks while one task is waiting on I/O?
EDIT: will need to look into our memory perf though, looks like we'll need to do some work to get more than a couple of workers.
Also, the nodes are virtual machines anyway and may be contending with each other for IO, and for most apps these days you spend more time waiting for IO than spinning the CPU (unless you have a lot of static content and so don't need to hit the db for many requests, but such requests are better handled by a caching layer above the one that handles the fancier stuff). So the benefit of running multiple processes per node is going to be a lot less noticeable than if the nodes were physical machines with dedicated storage channels.
The routing dynamics should be explained better in Heroku's documentation. From an engineering perspective, they're a very important piece of information to understand.
We're with https://bluebox.net now and are very happy.
But really, throwing in the towel at intelligent routing and replacing it with "random routing" is horrific, if true.
It's arguable that the routing mesh and scaling dynamics of Heroku are a large part, if not -the- defining reason for someone to choose Heroku over AWS directly.
Is it a "hard" problem? I'm absolutely sure it is. That's one reason customers are throwing money at you to solve it, Heroku.
The thing is, their old "intelligent routing" was really just "we will only route one request at a time to a dyno." In other words, what changed is that they now allow dynos to serve multiple requests at a time. When you put it that way, it doesn't sound as horrific, does it?
The problem is when you send a dyno that has all its threads stuck on long-running computations a new request, because it won't be able to even start processing it. The power is orthogonal to the problem.
The only mitigation is that if a dyno can handle a large number of threads, it probably won't get clogged. But if it can only handle 3 and gets new requests at random, you're in a bad place.
That is utter madness, and the validity of the argument depends on whether it's Heroku's fault or this dude's fault that the VM is serving only a single request at a time (and taking >1 sec to handle a request).
At some point you hit memory limits, disk IO limits, or simply a connection limit. It doesn't matter what limit:
If you have some requests that are longer running than others, random load balancing will make them start piling up once you reach some traffic threshold.
You can increase the threshold by adding more backends or increasing the capacity of each backend (by optimizing, or picking beefier hardware if you're on a platform that will let you), and maybe you can increase it enough that it won't affect you.
But no matter what you do, you end up having to allocate more spare resources at it than what you would need with more intelligent routing.
If you're lucky, the effect might be small enough to not cost you much money, and you might be able to ignore it, but it's still there.
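To make that concrete, here's a toy simulation (numbers entirely made up, not anyone's real traffic): 20 single-request dynos, ~100 req/s, 2% of requests take 5 seconds instead of 50ms. It compares pure random assignment against an idealized "send to the dyno that frees up soonest" policy, just to show where the wait comes from:

    DYNOS = 20
    GAP   = 0.01                               # seconds between arrivals (~100 req/s)
    SLOW  = 0.02                               # 2% of requests take 5s instead of 50ms

    def simulate(durations, dynos)
      free_at = Array.new(dynos, 0.0)          # time each dyno next becomes free
      waits   = []
      durations.each_with_index do |duration, i|
        now   = i * GAP
        dyno  = yield(free_at)                 # the routing policy picks a dyno
        start = [now, free_at[dyno]].max       # queue behind whatever is already there
        waits << (start - now)
        free_at[dyno] = start + duration
      end
      waits.sort!
      "mean #{(waits.sum / waits.size * 1000).round}ms, p95 #{(waits[(waits.size * 0.95).to_i] * 1000).round}ms"
    end

    durations = Array.new(10_000) { rand < SLOW ? 5.0 : 0.05 }
    puts "random routing: " + simulate(durations, DYNOS) { |f| rand(DYNOS) }
    puts "least-busy:     " + simulate(durations, DYNOS) { |f| f.each_index.min_by { |d| f[d] } }

The random policy piles requests up behind the occasional slow one even though total capacity is fine; the least-busy policy keeps waits near zero at the same load.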
I think we have to remember that the "intelligent routing" in question here is actually marketing-speak for "one request per server." Are you saying that when your servers can only receive one request at a time, you will necessarily need fewer than if your servers can handle three requests at a time but are assigned requests randomly?
Sorry to be a bit harsh, but I find it a bit shocking how even in this field where we can basically play god and do whatever we want and what we think is best on increasingly powerful bit-cruncher-and-storer-machines, so many here seem to behave like a herd of sheep and just do what 'everyone else' does. Just sit down for a moment and think! What are my requirements right now? What could be a requirement in the near future? What technologies are there which can help me? Am I sure about these features? Better read up on them first! How difficult is it to get them to behave in ways that are or can be important for me?
Now list that stuff down. If it's puzzling sleep over it, forget it for a few days. Then suddenly, for example under a hot shower you get an idea - that requirement I had isn't really one, I can solve it differently! Come back, take the now fitting piece of the puzzle and do your job in 20% of the time that would have been needed if you would just have blindly followed some path. That's how it usually works for me. Be picky, be exact, but be lazy.
Now about that routing dispatcher problem: Couldn't we solve that in one to two weeks on a generic platform, but specifically for a certain use case? Let's say you want to have a worker queue of rails request handlers that work in parallel. Just write that damn router! Maybe I'd be lazy, learn Erlang for a week and think about it afterwards.
Java is almost entirely made from "love". Rails is made from the "love" of those who do not "love" Java. I don't "love" Java or Rails.
People are throwing money at Heroku because it's really easy to use, not because it's the best long-term technology choice. Seriously - what percentage of Heroku paying users do you think actually read up on the finest technical details like routing algorithms before they put in their credit card? Heroku knows. They know you can't even build a highly-available service on top of it, since it's singly-homed, and they're still making tons of money.
I think heroku does want to be a long-term technology choice.
> I think heroku does want to be a long-term technology choice.
Oh, I'm sure Heroku wants to be a long-term technology choice. That doesn't mean they're trying to be one with their current product offerings.
Consider their product choices since launch: they've added dozens of small value-added technology integrations. Features for a few bucks a month like New Relic to upsell their smallest customers. The price drop was also a big move to reduce barriers to using their platform - which also targets smaller customers. They launched Clojure as their third supported language! Meanwhile, they're singly-homed and have had several protracted outages, and have no announced plans to build a multi-homed product. Scalability has gotten worse with this random routing development.
I think Heroku has known for a long time that they don't have a long-term platform product and that they can't keep big accounts until they build one.
Percentage of the requests served within a certain time (ms)
100% 30029 (longest request)
Why in the world would a company spend $20,000 a month for service this awful?
* 89/100 requests failed (according to
* Heroku times out requests after 30 seconds, so the 30000ms numbers may be timeouts (I've forgotten if *ab* includes those in the summary).
* That said, the *ab* stats could be biased by using overly large concurrency settings (not probable if you're running 50 dynos...).
Uncertainty is DiaI (death-in-an-infrastructure). I just created a couple of projects on Heroku and love the service, but this needs to be addressed ASAP (even if addressing it is just a blog post).
Also, I've never understood using round-robin or random algorithms for load balancers if you have fewest-connections available...
LeastConns/FastestConn selection is very dangerous when a backend host fails. Imagine a host has a partial failure that still allows health checks to pass. This host now fast-fails and returns a 500 faster than the other hosts in the pool can generate 200s. The poison host will have fewer active connections, so your LB will route more requests to it. A single host failure just became a major outage.
I like WRR selection for backends, then use a queue or fast-fail when your max active conns is exceeded. Humans prefer latency to errors, so let the LB queue (to a limit) on human-centric VIPs. Automated clients deal better with errors, so have your LB throw a 500 directly, or RST, or direct to a pool that serves static content.
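To see the failure mode concretely, here's a toy model (invented numbers): one backend 500s in 5ms while the healthy ones take 100ms, and a least-connections picker funnels almost everything into it:

    service_time = [0.005, 0.1, 0.1, 0.1]      # backend 0 is the fast-failing host
    active       = Array.new(4) { [] }          # finish times of in-flight requests
    hits         = Array.new(4, 0)

    10_000.times do |i|
      now = i * 0.01                            # one request every 10ms
      active.each { |conns| conns.reject! { |t| t <= now } }   # retire finished work
      backend = (0..3).min_by { |b| [active[b].size, rand] }   # least conns, random tie-break
      hits[backend] += 1
      active[backend] << now + service_time[backend]
    end

    hits.each_with_index { |n, b| puts "backend #{b}: #{n} requests" }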
Or even, an alarm threshold if responses are averaging /too fast/, based on your expected load & response times.
I've not done any deployment/ops beyond the trivial/theoretical though, so I don't know how this would work in reality.
Nope, don't do this either. Unless you like getting pages because things are working?
But you're now running a stateful L7 application proxy. That's waaaaaay more expensive than a TCP proxy with asynchronous checks.
Unless something has changed recently, ab doesn't handle dynamic pages very well. It takes the first pageload as a baseline, and any subsequent request with any portion of the page that is randomized, or is a CSRF token, or reflects most recent changes, etc., is marked as "failed" because it doesn't match the baseline's length.
The page in question does have a block in the footer reflecting "hot songs", which I'm guessing changed a bit during the run.
I imagine the rationale was something along the lines of many servers/apps are written to incorrectly return 200 with a descriptive error page rather than 500 or whatever the appropriate status code would be. And at the time ab was first written, pages were a lot more static than they are now, so a different page would be more likely to indicate an incorrect response.
I suspect that the reason they've been pushed to do this is financial, and it makes me think that Nodejitsu's model of simply not providing ANY free plans other than one-month trials is a good one. I realize it's apples and oranges, since NJ is focused on async and this wouldn't even be a problem for a Node app, but from a business perspective I feel like this would alleviate pressure. How many dynos does Heroku have running for non-paying customers? Do these free dynos actually necessitate this random routing mesh bullshit? If not, what?
Of course the random routing mesh isn't necessitated by anything, this problem is already solved by bigger companies.
Maybe this is the start of Off the Rails Rap.
The solution here is to figure out why your 99th is 3 seconds. Once you solve that, randomized routing won't hurt you anymore. You hit this exact same problem in a non-preemptive multi-tasking system (like gevent or golang).
1) Once per minute (or less often if you have a zillion dynos), each dyno tells the router the maximum number of requests it had queued at any time over the past minute.
2) Using that information, the router recalculates a threshold once a minute that defines how many queued requests is "too many" (e.g. maybe if you have n dynos, you take the log(n)th-busiest-dyno's load as the threshold -- you want the threshold to only catch the tail).
3) When each request is sent to a dyno, a header field is added that tells the dyno the current 'too many' threshold.
4) If the receiving dyno has too many, it passes the request back to the router, telling the router that it's busy ( http://news.ycombinator.com/item?id=5217157 ). The 'busy' dyno remembers that the router thinks it is 'busy'. The next time its queue is empty, it tells the router "i'm not busy anymore" (and repeats this message once per minute until it receives another request, at which point it assumes the router 'heard').
5) When a receiving dyno tells the router that it is busy, the router remembers this and stops giving requests to that dyno until the dyno tells it that it is not busy anymore.
I haven't worked on stuff like this myself; do you think that would work?
I imagine the long tail disappears in a similar way that a traffic jam is prevented by lowering the speed limit.
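Very roughly, in code, the scheme above might look like this (a single-process sketch with made-up names; in reality the router and dynos would be separate boxes exchanging these signals over the wire, and the numbered steps refer to the list above):

    class Dyno
      attr_reader :id, :queue
      def initialize(id)
        @id, @queue, @max_queued = id, [], 0
      end

      def enqueue(req, threshold)
        return :busy if @queue.size >= threshold     # step 4: too many queued, bounce it
        @queue << req
        @max_queued = [@max_queued, @queue.size].max
        :accepted
      end

      def report_and_reset                           # step 1: once-a-minute report
        m, @max_queued = @max_queued, 0
        m
      end
    end

    class Router
      def initialize(dynos)
        @dynos, @busy, @threshold = dynos, {}, 5
      end

      def recalc_threshold                           # step 2: take the log(n)th-busiest load
        loads = @dynos.map(&:report_and_reset).sort.reverse
        @threshold = [loads[Math.log(@dynos.size).floor] || 1, 1].max
      end

      def route(req)                                 # step 3 is implicit: the threshold rides along
        candidates = @dynos.reject { |d| @busy[d.id] }
        return @dynos.sample.queue << req if candidates.empty?   # everyone busy: queue anyway
        dyno = candidates.sample
        if dyno.enqueue(req, @threshold) == :busy
          @busy[dyno.id] = true                      # step 5: remember, stop sending it work
          route(req)                                 # step 4: request bounced back, try elsewhere
        end
      end

      def mark_free(dyno_id)                         # dyno emptied its queue: "not busy anymore"
        @busy.delete(dyno_id)
      end
    end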
There's a relatively easy fix for Heroku. They should do random routing with a backup second request sent if the first request fails to respond after a relatively short period of time (say, 95th percentile latency), killing any outstanding requests when the first response comes back in. The amount of bookkeeping required for this is a lot less than full-on intelligent routing, but it can reduce tail latency dramatically since it's very unlikely that the second request will hit the same overloaded server.
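A rough sketch of the idea, done client-side here just for illustration (Heroku would do it in the routing layer; the host is a placeholder and Thread#kill is a blunt stand-in for properly cancelling the losing copy):

    require 'net/http'

    def hedged_get(uri, hedge_after: 0.3)        # hedge_after ~ p95 latency, in seconds
      results = Queue.new
      workers = 2.times.map do |attempt|
        Thread.new do
          begin
            sleep(hedge_after * attempt)          # the backup copy starts late...
            results << Net::HTTP.get_response(uri)
          rescue StandardError => e
            results << e
          end
        end
      end
      winner = results.pop                        # ...so it only fires if we're still waiting
      workers.each(&:kill)                        # drop whichever copy is still outstanding
      raise winner if winner.is_a?(StandardError)
      winner
    end

    # hedged_get(URI("https://example.com/some-idempotent-path"))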
Even ignoring the POST requests problem (yup, it tried to replay those) properly cancelling a request on all levels of a multi-level rails stack is very hard/not possible in practice. So you end up DOSing the hard to scale lower levels of the stack (e.g. database) at the expense of the easy to scale LB.
HAProxy is a lot better than nginx + more flexible if you want to introduce non-HTTP traffic to your stack.
Shouldn't the request be canceled on all levels if you cut the HTTP connection to the frontend?
Alternately, heroku can introduce a third layer between the mesh routers and the inbound random load balancer. This layer consistently hashes (http://en.wikipedia.org/wiki/Consistent_hashing) the api-key/primary key of your app, and sends you to a single mesh router for all of your requests. Mesh routers are/should be blazing fast relative to rails dynos, so that this isn't really a bottleneck for your app. Since the one mesh router can maintain connection state for your app, heroku can implement a least-conn strategy. If the mesh router dies, another router can be automatically chosen.
The 'tied request' idea from the Dean paper is neat, too, and Heroku could possibly implement that, and give dyno request-handlers the ability to check, "did I win the race to handle this, or can this request be dropped?"
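For the consistent-hashing layer described a couple of paragraphs up, a minimal ring might look like this (illustrative only, obviously not Heroku's mesh; the virtual-replica count and names are arbitrary):

    require 'digest'

    class Ring
      def initialize(routers, replicas: 100)
        @ring = {}
        routers.each do |r|
          replicas.times { |i| @ring[point_for("#{r}:#{i}")] = r }
        end
        @points = @ring.keys.sort
      end

      def router_for(api_key)
        h = point_for(api_key)
        point = @points.bsearch { |p| p >= h } || @points.first   # wrap around the ring
        @ring[point]
      end

      private

      def point_for(s)
        Digest::MD5.hexdigest(s)[0, 8].to_i(16)
      end
    end

    # ring = Ring.new(%w[router-a router-b router-c])
    # ring.router_for("some-app-api-key")   # => always the same router while it's alive

If a router dies, only the keys that hashed to it move to the next point on the ring; everyone else keeps their existing router and its connection state.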
Your solution doesn't work if requests aren't idempotent.
For mutating requests, there's a solution as well, but it involves checksumming the request and passing the checksum along so that the database layer knows to discard duplicate requests that it's already handled. You need this anyway if there's any sort of retry logic in your application, though.
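A tiny sketch of that dedup (an in-process Set standing in for whatever shared, durable store the database layer would really use; perform_write is a hypothetical stand-in for the actual mutation):

    require 'digest'
    require 'set'

    PROCESSED = Set.new                    # stand-in for a shared, durable store

    def apply_once(request_body)
      checksum = Digest::SHA256.hexdigest(request_body)
      return :duplicate_ignored unless PROCESSED.add?(checksum)   # add? is nil if already seen
      perform_write(request_body)          # hypothetical: the actual mutation
      :applied
    end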
Yeah, it's a lot more practical than implementing QoS, isn't it?
As for the intelligent routing, could you explain the problem? The goal isn't to predict which request will take a long time, the goal is to not give more work to dynos that already have work. Remember that in the "intelligent" model it's okay to have requests spend a little time in the global queue, a few ms mean across all requests, even when there are free dynos.
Isn't it as simple as just having the dynos pull jobs from the queue? The dynos waste a little time idle-spinning until the central queue hands them their next job, but that tax would be pretty small, right? Factor of two, tops? (Supposing that the time for the dyno-initiated give-me-work request is equal to the mean handling time of a request.) And if your central queue can only handle distributing to say 100 dynos, I can think of relatively simple workarounds that add another 10ms of lag every factor-of-100 growth, which would be a hell of a lot better than this naive routing.
What am I missing?
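For concreteness, the pull model I have in mind is roughly this (single process, Thread::Queue standing in for a real shared broker): dynos ask for work when idle instead of having work pushed at them, so a slow request never has others stuck behind it while sibling dynos sit idle.

    jobs = Queue.new
    100.times { |i| jobs << "request-#{i}" }

    dynos = 4.times.map do |id|
      Thread.new do
        loop do
          req = begin
            jobs.pop(true)                 # non-blocking pop: pull work only when idle
          rescue ThreadError
            break                          # queue drained, this dyno is done
          end
          sleep(rand < 0.05 ? 1.0 : 0.01)  # the occasional slow request
          puts "dyno #{id} finished #{req}"
        end
      end
    end
    dynos.each(&:join)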
Your solution would likely work if you had some higher level (application level? not real up on Heroku) at which you could specify a push vs. pull mechanism for request routing.
Given that, according to TFA (and it's consistent with some other things I've read) Heroku's bread and butter is Rails apps, and given that, according to TFA, Rails is single-threaded, that (valid) point about concurrency in a single dyno is perhaps not that relevant? You'd think that Heroku would continue to support the routing model that almost all of their marketing and documentation advertises, right? Even if it's a configurable option, and it only works usefully with single-threaded servers?
And if you did do it pull-based, it wouldn't be Heroku's problem to decide how many concurrent requests to send. Leave it to the application (or whatever you call the thing you run on a dyno).
And it doesn't need to be pull-based, if the router can detect HTTP connections closing in dynos, or whatever.
But the idea of pull-based work distribution is pretty straightforward. It's called a message queue.
Animations and results are in the explanation at http://rapgenius.com/1502046
It's not an insurmountable problem by any measure, and it's definitely worth it.
I'm not sure this applies to the OP. His in-app measurements were showing all requests being handled very fast by the app itself; the variability in total response time was entirely due to the random routing.
Even if you work on narrowing the fat tails, shouldn't you still be upfront and clear that adding a new dyno only gives you an increased chance of better request-handling times as you scale?
The Golang runtime uses non-blocking I/O to get around this problem.
You could write a pthreads-compliant threading library without using threads at all, just epoll.
> But elsewhere in their current docs, they make the same old statement loud and clear:
> The heroku.com stack only supports single threaded requests. Even if your application were to fork and support handling multiple requests at once, the routing mesh will never serve more than a single request to a dyno at a time.
They pull this from Heroku's documentation on the Bamboo stack, but then extrapolate and say it also applies to Heroku's Cedar stack.
However, I don't believe this to be true. Recently, I wrote a brief tutorial on implementing Google Apps' openID into your Rails app.
The underlying problem with doing so on a free (single-dyno) Heroku app is that while your app makes an authentication request to Google, Google turns around and makes an "oh hey" request to your app. With a single-concurrency system, your app times out waiting for Google to get back to it, and Google won't get back to your app until your app gets back to Google; so, hey, deadlock.
However, there is a work-around on the Cedar stack: configure the unicorn server to supply 4 or so worker processes for your web server, and the Heroku routing mesh appropriately routes multiple concurrent requests to Unicorn/my app. This immediately fixed my deadlock problem. I have code and more details in a blog post I wrote recently. 
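Roughly, the work-around is just a few lines of Unicorn config (a generic sketch rather than the exact code from the post; WEB_CONCURRENCY is just an illustrative env var, and the ActiveRecord fork hooks only matter if you preload the app):

    # config/unicorn.rb
    worker_processes Integer(ENV['WEB_CONCURRENCY'] || 4)
    timeout 30
    preload_app true

    before_fork do |server, worker|
      ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
    end

    after_fork do |server, worker|
      ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
    end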
This seems to be confirmed by Heroku's documentation on dynos:
> Multi-threaded or event-driven environments like Java, Unicorn, and Node.js can handle many concurrent requests. Load testing these applications is the only realistic way to determine request throughput.
I might be missing something really obvious here, but to summarize: their premise is that Heroku only supports single-threaded requests, which is true on the legacy Bamboo stack but I don't believe to be true on Cedar, which they consider their "canonical" stack and where I have been hosting Rails apps for quite a while.
However, dumb routing is still very problematic – even if your dyno can work on two requests simultaneously it's still bad for it to get sent a third request when there are other open dynos.
Also, for apps with a large-ish memory footprint, you can't run very many workers. A Heroku dyno has 512MB of memory, so if your app has a 250MB footprint, then you can basically only have two workers.
Another key point is that the routing between Cedar and Bamboo is essentially unchanged. They simply changed the type of apps you can run.
Also, if the unicorn process is doing something cpu intensive (vs waiting on a 3rd party service or io etc) then it won't serve 3 requests simultaneously as fast as single processes would.
It would be if all requests were equal. If all your requests always take 100ms, spreading them equally would work fine.
But consider if one of them takes longer. Doesn't have to be much, but the effect will be much more severe if you e.g. have a request that grinds the disk for a few seconds.
Even if each dyno can handle more than one request, those requests share resources: if one of them slows down due to some long-running request, response times for the other requests are likely to increase, and as response times increase, its queue is likely to grow further, making it more likely to pile up more long-running requests.
> Followup question: also, how would intelligent routing work if it previously just checked to see which dyno had no requests? That seems like an easy thing to do; now you would have to check CPU/IO or whatever and route based on load. Not specifically targeted at you but at everyone reading the thread.
There is no perfect answer. Just routing by least connections is one option. It will hurt some queries that end up piled up on servers processing a heavy request in high-load situations, but pretty soon any heavily loaded servers will have enough connections all the time that most new requests will go to lighter-loaded servers.
Adding "buckets" of servers for different types of requests is one option to improve it, if you can easily tell by url which requests will be slow.
I am using it on a small production environment with Heroku and I like it, but when we officially launch the app, should we switch to Unicorn?
This seems to be missing from most of these project sites, which are often just marketing (look! It's better!!), and therefore not very trustworthy.
From the outside it looks like the biggest differentiator in each generation of ruby servers (and, I guess, db management systems :) is not that the new one is better or worse, but simply that it has different trade-offs.
Although as noted in the comments, I neglected to run threadsafe! and should probably have tried Rubinius or JRuby. I have been meaning to redo it. Take it with a grain of salt.
Overall really solid, though more useful if you can use something other than MRI.
With two Unicorn workers we found that 25 was the best backlog threshold to accept (it refuses additional requests). When we were able to go to 5 Unicorn workers on Heroku we had to start to adjust that.
You have to remove the port declaration from the line for Unicorn in your Procfile, and then add a line like this to your unicorn.rb file to define the listener port along with adjusting the backlog size:
listen ENV['PORT'], :backlog => Integer(ENV['UNICORN_BACKLOG'] || 100)
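Putting both pieces together, the setup described above looks roughly like this (worker count and backlog are just the values mentioned here; UNICORN_BACKLOG and WEB_CONCURRENCY are env vars you define yourself, tuned to your app's memory footprint).

Procfile (no port flag, so unicorn.rb owns the listener):

    web: bundle exec unicorn -c config/unicorn.rb

config/unicorn.rb:

    worker_processes Integer(ENV['WEB_CONCURRENCY'] || 2)
    listen ENV['PORT'], :backlog => Integer(ENV['UNICORN_BACKLOG'] || 25)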
With Puma, define 4:8 (min:max) threads; with Unicorn, 3 workers.
Normally when I read "X is screwing Y!!!" posts on Hacker News I generally consider them to be an overreaction or I can't relate. In this case, I think this was a reasonable reaction and I am immediately convinced never to rely on Heroku again.
Does anyone have a reasonably easy to follow guide on moving from Heroku to AWS? Let's keep it simple and say I'm just looking to move an app with 2 web Dynos and 1 worker. I realize this is not the type of app that will be hurt by Heroku's new routing scheme but I might as well learn to get out before it's too late.
To whom it may concern,
We are long time users of Heroku and are big fans of the service. Heroku allows us to focus on application development. We recently read an article on HN entitled 'Heroku's Ugly Secret' http://s831.us/11IIoMF
We have noticed similar behavior, namely increasing dynos does not provide performance increases we would expect. We continue to see wildly different performance responses across different requests that New Relic metrics and internal instrumentation can not explain.
We would like the following:
1. A response from Heroku regarding the analysis done in the article, and
2. Heroku-supplied persistent logs that include information about how long requests are queued before being processed by the dynos
Thanks in advance for any insight you can provide into this situation and keep up the good work.
I've been reading through all the concerns from customers, and I want every single customer to feel that Heroku is transparent and responsive. Our job at Heroku is to make you successful. Getting to the bottom of this situation and giving you a clear and transparent understanding of what we’re going to do to make it right is our top priority. I am committing to the community to provide more information as soon as possible, including a blog post on http://blog.heroku.com.
This reminds me of the excellent 5 stages of hosting story shared on here from a while back:
That amount buys a whole lotta dedicated servers and the talent to run them. (Sidenote: Every time I price AWS or one of its competitors for a reasonably busy site, my eyes seem to pop out at the cost when compared to dedicated hardware and the corresponding sysadmin salary.)
The larger issue is: Invest in your own sysadmin skills, it'll pay off in spades, especially when your back's up against the wall and you figure out that the vendor-which-solves-all-your-problems won't.
1. Employees are expensive. A good ops guy who believes in your cause and wants to work at an early stage startup can be had for $100k. (Maybe NYC is much cheaper than the bay area, but I'll use bay area numbers for now because it's what I know). That's base. Now add benefits, payroll taxes, 401k match, and the cost of his options. So what... $133k? That's one guy who can then never go on vacation or get hit by the proverbial bus. Now buy/lease your web cluster, database cluster, worker(s), load balancers, dev and staging environments, etc. Spend engineering time building out Cap and Chef/Puppet scripts and myriad other sysops tools. (You'd need some of that on AWS for sure, but less on Heroku which is certainly much much more expensive than AWS)
2. When you price-out these AWS systems are you using the retail rates or are you factoring in the generous discount larger customers are getting from Amazon? You realize large savings first by going for reserved instances and spot pricing and stack on top of that a hefty discount you negotiate with your Amazon account rep.
3. I've worked at 2 successful, talented Bay Area startups in the last few years: one that was built entirely on AWS, and now one that owns all of its own hardware. Here's what I think: It's a wash. There isn't a huge difference in cost. You should go with whatever your natural talents lead you towards. You have a founding team with solid devops experience? Great, start on the cloud and then transition early to your own hardware. If not, focus on where your value-add is and outsource the ops.
They should go dedicated for now, it's too early to colo IMO.
Heroku is a great company, and I imagine there was some technical reason they did it (not an evil plot to make more money). But not having a global request queue (or "intelligent routing") definitely makes their platform less useful. Moving to Unicorn helped a bit in the short term, but is not a complete solution.
We went with a metal cluster setup and everything ran super smooth. I never did figure out what the problem was with Heroku though and this article has been a very illuminating read.
But they might also be able to self-help quite a bit. RG makes no mention of using more than 1 unicorn worker per dyno. That could help, making a smaller number of dynos behave more like a larger number. I think it was around the time Heroku switched to random routing that they also became more officially supportive of dynos handling multiple requests at once.
There's still the risk of random pileups behind long-running requests, and as others have noted, it's that long-tail of long-running requests that messes things up. Besides diving into the worst offender requests, perhaps simply segregating those requests to a different Heroku-app would lead to a giant speedup for most users, who rarely do long-running requests.
Then, the 90% of requests that never take more than a second would stay in one bank of dynos, never having pathological pile-ups, while the 10% that take 1-6 seconds would go to another bank (by different entry URL hostname). There'd still be awful pile-ups there, but for less-frequent requests, perhaps only used by a subset of users/crawler-bots, who don't mind waiting.
Assume each unicorn can tell how many of its workers are engaged. The 1st thing any worker does – before any other IO/DB/net-intensive work – would be to check if the dyno is 'loaded', defined as all other workers (perhaps just one, for workers=2) on the same dyno already being engaged. If so, the request is redirected to a secondary hostname, getting random assignment to a (usually) different dyno.
The result: fewer big pileups unless completely saturated, and performance approaching smart routing but without central state/queueing. There is an overhead cost of the redirects... but that seems to fit the folk wisdom (others have also shared elsewhere in thread) that a hit to average latency is worth it to get rid of the long tails.
(Also, perhaps Heroku's routing mesh could intercept a dyno load-shedding response, ameliorating pile-ups without taking the full step back to stateful smart balancing.)
Added: On even further thought: perhaps the Heroku routing mesh automatically tries another dyno when one refuses the connection. In such a case, you could set your listening server (g/unicorn or similar) to have a minimal listen-backlog queue, say just 1 (or the number of workers). Then once it's busy, a connect-attempt will fail quickly (rather than queue up), and the balancer will try another random dyno. That's as good as the 1-request-per-dyno-but-intelligent-routing that RapGenius wants... and might be completely within RapGenius's power to implement without any fixes from Heroku.
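In Unicorn terms the whole experiment is a couple of lines (assuming the routing mesh really does retry refused connections, which is the open question here):

    # config/unicorn.rb
    worker_processes 2
    listen ENV['PORT'], :backlog => 1   # refuse, rather than queue, when all workers are busy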
I'm unaware of how Heroku does things. I'd guess they dropped the global queue because it's impractical (failure-prone, not scalable, as it's a single point of contention).
I'm mostly surprised to see people happy being able to handle 1 or 2 requests in parallel per instance in general. That sounds absolutely insane to me.
Not strictly true; imagine that they can query the load state of a dyno, but at some non-zero cost. (For example, that it requires contacting the dyno, because the load-balancer itself is distributed and doesn't have a global view.)
Then, contacting 2, and picking the better of the 2, remains a possible win compared to contacting more/all.
See for example the 'hedged request' strategy, referenced in a sibling thread by nostradaemons from a Jeff Dean Google paper, where 2 redundant requests are issued and the slower-to-respond is discarded (or even actively cancelled, in the 'tied request' variant).
That'd probably be significantly better than the case of (request i => dyno picked out of hat) for all i
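A balls-into-bins toy shows the difference (requests never complete in this model, which exaggerates things, but the shape is right): probing just two dynos and keeping the lighter one dramatically reduces the worst-case load compared with picking one out of a hat.

    DYNOS = 30
    load_random = Array.new(DYNOS, 0)
    load_two    = Array.new(DYNOS, 0)

    10_000.times do
      load_random[rand(DYNOS)] += 1               # pick one dyno out of a hat
      a, b = rand(DYNOS), rand(DYNOS)              # probe two...
      winner = load_two[a] <= load_two[b] ? a : b  # ...keep the less loaded one
      load_two[winner] += 1
    end

    puts "max load, random:       #{load_random.max}"
    puts "max load, two choices:  #{load_two.max}"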
Very annoying when I want to concentrate on the technical details. So we'll see once again - everyone's different.
I got started on Heroku for a project, and I also ran into limitations of the platform. I think it can work for some types of projects, but it's really not that expensive to host 15m uniques/month on your own hardware. You can do just about anything on Heroku, but as your organization and company grow it makes sense to do what's right for the product, and not necessarily whats easy anymore.
FYI I wrote up several posts about it, though my reasons were different (and my use-case is quite a bit different from a traditional app):
OTOH, having a customer have a serious problem like this AND still say "we love your product! We want to remain on your platform", just asking you to fix something, is a pretty ringing endorsement. If you had a marginal product with a problem this severe, people would just silently leave.
It probably doesn't hurt RG as much as lower overall performance during normal operations does, though.
PG can scale up pretty well on a single box, but scaling PG on AWS can be problematic due to the disk io issue, so I suspect they just don't do it. I'd love to be corrected :)
The issue with the number of connections is that each connection creates a process on the server. We cap the connections at 500, because at that point you start to see problems with O(n^2) data structures in the Postgres internals that start to make all kinds of mischief. This has been improved over the last few releases, but in general, it's still a good idea to try and keep the number of concurrent connections down.
*EDIT: thanks. not a thread. :)
In theory this is horrible, since PG connections are so expensive. In practice the cost of establishing a connection is negligible for a Rails app.
I do suspect this will make performance "fall off the cliff" as you get close to capacity.
However, contrary to the author, I'm serving 25,000 real requests per second with only 8 dynos.
The app is written in Scala and runs on top of the JVM. And I was dissatisfied that 8 dynos seem like too much for an app that can serve over 10K requests per sec on my localhost.
You running zynga on Heroku or something?
And 25K is not the whole story. In a lot of ways it's similar to high frequency trading. Not only do you need to decide in real time if you want to respond with a bid or not, but the total response time should be under 200ms, preferably under 100ms, otherwise they start bitching about latency and they could drop you off from the exchange.
And the funny thing is 25K is actually nothing, compared to the bigger marketplace that we are pursuing and that will probably send us 800K per second at peak.
But we haven't seen Heroku's comments, and while some parts of RapGenius's complaint are compelling, I'm not sure their apparent conclusions - that 'intelligent routing' is needed, and its lack is screwing Heroku customers — are right. I strongly suspect some small tweaks, ideally with Heroku's help, but perhaps even without it, can fix most of RapGenius's concerns.
Perhaps there was a communication or support failure, which led to the public condemnation, or maybe that's just RG's style, to heighten the drama. (That's an observation without judgement; quarrels can benefit both parties in some attention- and personality-driven contexts.)
This is run from inside AWS on an m1.large:
For the 50 dyno test, this was the second run, making the assumption that the dynos had to warm up before they could effectively service requests.
You'll see that with 49 more dynos, we only managed to get around 400 more requests/second on an app that isn't even close to real world.
(By no means is this test scientific, but I think it's telling)
New Relic could be much more appealing if they had a pricing model that was based on usage instead of number of machines.
Heroku is used by tons of people around the world. Some of them are paying good money for the service. Given the amount of scrutiny under which they operate, what is the incentive for them to turn an algorithm into a less effective one and still charge the same amount of money in a growing "cloud economy" where companies providing the same kind of service are a dime a dozen (AWS, Linode, Engine Yard, etc)?
How does that benefit their business if "calling their BS" is as easy as firing up Apache Bench, collecting results, drawing a few charts, and flat out "proving" that they're lying about the service they provide??
I mean, I doubt Heroku is that stupid, they know how their audience doesn't give them much room for mistakes. So as nice as the story sounds on paper, I'd really like another take on all this, either from other users of Heroku, independent dev ops, researchers, routing algorithms specialists or even Heroku themselves before we all too hastily jump to sensationalist conclusions.
The only conclusion I jumped to was that they ditched the routing they originally said they had (without telling anyone) and that their routing is worse than what you get as a default from passenger.
Meanwhile, AWS has been a dominant presence in the "Stupid Poor" market segment.
The other two quadrants do not exist.
I'm also quite wary of the incentive a 20K monthly bill would give you to try and shake Heroku down for a rebate. By the way, the figure in itself seems very high, but out of context it's impossible for me, the reader, to evaluate if that's actually good money or not. Maybe other solutions (handling everything yourself) would actually be WAY more costly, maybe Heroku actually provides a service that is well-worth the money or maybe the author is right and it's actually swindling on Heroku's part, no way to know.
I wish Heroku would tell us more about what they tried. I can imagine a few cockamamie schemes off the top of my head; it would be good to know whether they thought of those.
Taking the case of minimally loaded, you need to keep track of how many active requests each node/replica is serving, as well as globally keeping track of the min. (which past a certain load, will suffer a lot of contention to update)
To do choice of 2, all you need is to keep track of active requests per node/replica.
Under spiky workloads, there is also a problem with choosing minimally loaded. The counter for numRequests of a node might not update fast enough, so that a bunch of requests will go to that node, quickly saturating its capacity.
Choice of 2 doesn't suffer this problem because of its inherent randomization.
My best guess is that they hit a scaling problem with doing smart load balancing. Smart load balancing, conceptually, requires persistent TCP connections to backend servers. There's some upper limit per LB instance or machine at which maintaining those connections causes serious performance degradations. Maybe that overhead became too great at a certain point, and the solution was to move to a simpler random load balancing scheme.
I'd love to hear a Heroku employee weigh in on this.
Quite a few people still report a myriad of issues with Rails applications. I really want EB to work well with Rails, but just didn't have confidence in it.