To work around the missing features, I use the ELBs only for SSL termination (well, and DNS, and autoscaling). For anything fancier, I've built a coprocess that manages HAProxy behind the scenes. It uses Auto Scaling notifications to keep the backend instance list in sync, exposes a REST API for driving configuration, and runs in a master-master configuration.
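As a rough sketch of the sync piece (the function name, template, and instance format here are hypothetical, not the actual coprocess): on each Auto Scaling notification, regenerate the HAProxy backend stanza from the current instance list and do a graceful reload.

```python
def render_backend(name, instances, port=8080):
    """Render an HAProxy backend stanza from the current instance list.

    `instances` is a list of (instance_id, ip) pairs, e.g. as parsed
    out of an Auto Scaling SNS notification. Sorting keeps the output
    stable so reloads only happen when membership actually changes.
    """
    lines = [f"backend {name}", "    balance roundrobin"]
    for instance_id, ip in sorted(instances):
        lines.append(f"    server {instance_id} {ip}:{port} check")
    return "\n".join(lines)

# Example: two instances currently in the group.
cfg = render_backend("web", [("i-0abc", "10.0.1.5"), ("i-0def", "10.0.1.6")])
```

The coprocess would then write this into haproxy.cfg and reload HAProxy gracefully (e.g. `haproxy -sf <oldpid>`, which lets old processes finish their connections).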
It's been running in production for about 9 months now, and has proven invaluable. Having hooks at this layer is incredible. I'm able to get amazing debugging information, I can "bumper" the site at the frontend, I can do 100% zero downtime deployments with quick rollback, I can tarpit and rate limit, etc.
Once I added enough "sugar", I started to realize that maybe AWS is doing it right -- it'd be impossible to add every feature that every customer wants to the ELB. Still, WebSocket support and request draining are low-hanging fruit, as was support for generic HTTP methods, which was implemented some time ago.
Draining is a huge issue for us. We're investigating ways to add a secondary layer of balancing behind ELB to mitigate it, which is completely silly -- we shouldn't need to do this. ELB should have supported gracefully terminating connections from Day 1, IMO. Last I heard it was being 'considered'.
It is called WRR (Weighted Round Robin). Here are some docs to get you started:
^-- BenF@AWS commented on June 10th saying that Amazon is now actively working on it!
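For illustration only (this is the general WRR idea, not the ELB's actual algorithm), a naive weighted round-robin can be as simple as expanding each backend by its weight:

```python
from itertools import cycle

def weighted_round_robin(backends):
    """Yield backends in proportion to their weights.

    `backends` is a list of (name, weight) pairs; this naive expansion
    repeats each backend `weight` times in the rotation. Real balancers
    use a "smooth" variant to interleave picks more evenly.
    """
    expanded = [name for name, weight in backends for _ in range(weight)]
    return cycle(expanded)

# Example: 'a' is weighted to get twice the traffic of 'b'.
rr = weighted_round_robin([("a", 2), ("b", 1)])
picks = [next(rr) for _ in range(6)]
# picks == ["a", "a", "b", "a", "a", "b"]
```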
I always add an additional check: if a file called /tmp/down exists, the health check returns a 500. Existing clients will continue to be served, but that instance gets no new connections.
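This pattern is easy to script; here's a minimal sketch in Python (the port and sentinel path are illustrative, not from the original setup):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

DOWN_FILE = "/tmp/down"  # sentinel file; `touch /tmp/down` to start draining

def health_status(down_file=DOWN_FILE):
    # 500 tells the balancer to stop routing new connections here;
    # connections already established keep being served by the app.
    return 500 if os.path.exists(down_file) else 200

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status = health_status()
        self.send_response(status)
        self.end_headers()
        self.wfile.write(b"OK" if status == 200 else b"DOWN")

if __name__ == "__main__":
    # Point the load balancer's health check at this port.
    HTTPServer(("0.0.0.0", 8000), HealthHandler).serve_forever()
```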
It's hard to find load balancing as a service.
In addition to weighting, we need an nginx layer for different app pools, custom routing options, max-connection limits, request queueing, URL rewriting, and static content serving for specific requests. The list goes on, but these things could easily be brought up into the ELB layer.
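Several of those items fit in a short nginx fragment -- this is a hedged sketch with made-up pool names, addresses, and paths, just to show the shape (note `max_conns` requires a reasonably recent nginx):

```nginx
# Hypothetical app pool with weighting and per-server connection limits.
upstream app_pool_api {
    server 10.0.1.10:8080 weight=3 max_conns=200;
    server 10.0.1.11:8080 weight=1 max_conns=200;
}

server {
    listen 8080;

    # Custom routing: send /api traffic to its own pool.
    location /api/ {
        proxy_pass http://app_pool_api;
    }

    # URL rewriting.
    rewrite ^/old-path/(.*)$ /new-path/$1 permanent;

    # Static content served directly for specific requests.
    location /assets/ {
        root /var/www;
        expires 7d;
    }
}
```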
I think a LOT of people fall back to HAProxy on an EC2 instance because of this.