Your biggest problem was that the configuration of your services was not sized/tuned for the hardware resources you have. As a result, your servers became unresponsive and, instead of fixing the problem, you had to wait 30+ minutes until they recovered.
In your case you should have limited Solr's JVM memory to the amount of RAM your server can actually spare for it (check your heap settings and possibly the PermGen space allocation).
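For example (just a sketch: the numbers are made up and depend on what else runs on the box, and the PermGen flag assumes a pre-Java-8 JVM):

  java -Xms2g -Xmx2g -XX:MaxPermSize=256m -jar start.jar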
If all services are sized properly, under no circumstances should your server become completely unresponsive; only the overloaded services should be affected. That would let you or your System Administrator log in and fix the root cause, instead of waiting 30+ minutes for the server to recover or be rebooted. In the end it keeps you able to react to and interact with the system.
The basic principle is that your production servers should never swap (that's why the vm.swappiness=0 sysctl setting is so important). The moment your services start swapping, performance suffers so much that the server can't handle any of the requests, and they keep piling up until a total meltdown.
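For example:

  sysctl -w vm.swappiness=0                      # apply immediately
  echo 'vm.swappiness = 0' >> /etc/sysctl.conf   # persist across reboots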
In your case the OOM killer terminating the Java process actually saved you by letting you log in to the server. I wouldn't consider setting the OOM reaction to "panic" a good approach: if a similar problem happens and the server reboots, you will have no idea what caused the memory usage to grow in the first place.
You're a development shop, not scalable system builders. Deciding to build your own systems has already potentially cost you the success of this product - I doubt you'll get a second chance on HN now. If you were on appengine, you'd be popping champagne corks instead of blood vessels, and capitalising on the momentum instead of writing a sad post-mortem.
I'd recommend you put away all the Solr, Apache, Nginx and Varnish manuals you were planning to study for the next month, and check out App Engine. Get Google's finest to run your platform for you, and concentrate on what you do best.
I know that I know little to nothing about sysadmin work, so when I built a recent app I used AppEngine for this very reason. And when it got onto the HN front page it scaled ridiculously easily without any configuration changes. (No extra dynos, no changes at all.)
And when I've occasionally screwed up and done stupid stuff, it still doesn't go down. (To be honest, I first saw the problem when I noticed my weekly bill was ~$5 instead of the baseline $2.10. It helped that being a paid app pushed the limits up a lot higher.)
I mean, careful configuration and capacity planning is important. But what happened to straightforward conservative hardware purchasing where you get a much bigger system than you think you need? It's not like bigger hosts are that expensive: splurge for an EC2 2XL ($30/day I think) or three for the week you launch and have a simple plan in place for bringing up more to handle bursty load.
The OOM killer picked a 2.7G Java process to kill. It usually picks the biggest thing available, so I'm guessing the box had about 4G total.
2. Set vm.swappiness = 0 to make sure crippling hard-drive swapping doesn't start until it absolutely has to.
3. Use iptables' xt_connlimit to make sure people aren't abusing connections, even by accident: no client should have more than 20 connections to port 80, maybe even as low as 5 if your server is under a "friendly" DDoS. If you are reverse-proxying to Apache, connlimit is a MUST (a sample rule is sketched below).
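Something along these lines (a sketch: tune the threshold to your own traffic):

  # drop new connections from any IP that already has 20 open to port 80;
  # dropped SYNs get retried by the client, so this queues rather than blocks
  iptables -I INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j DROP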
One must be careful when setting connection limits like this. A lot of people still use proxy servers, and with modern browsers it's quite easy to hit 20 concurrent connections per IP address.
It's important to understand that connlimit causes people to queue, not to get blocked, and if 20 people really are connecting at the exact same millisecond from the same IP, it cannot hurt to queue them for the sake of server stability.
HTTP keep-alive connections play a big role in the number of open connections as well, so factor that in when choosing the limit.
You can use the optional second argument of the keepalive_timeout directive in nginx to send a timeout hint to modern browsers, e.g.
keepalive_timeout 10 10;
Bad idea. You're actively increasing the latency of your site: for each asset that has to be fetched, you're forcing the client to open a new connection, which can add more than 150 ms of delay per item (thanks to the three-way TCP handshake).
What I would suggest is setting the keep-alive timeout to a value that covers an individual page load. That way all the page elements have a chance to reuse connections that have already been opened.
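For instance (the numbers are purely illustrative):

  keepalive_timeout  15 15;   # long enough for a full page load; second value is the header hint
  keepalive_requests 100;     # let many assets reuse a single connection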
It's great for an AJAX-heavy site, especially when it's all behind SSL. Using a CustomLog I record the total request time, from connection until the request has been served (logged conditionally, only when the request is handled by the backend), and I can see it has halved since I switched to the threaded Apache.
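Roughly like this (a sketch: %D is Apache's time-to-serve in microseconds, and the "/app/" prefix standing in for "handled by the backend" is hypothetical):

  SetEnvIf Request_URI "^/app/" backend
  LogFormat "%h %t \"%r\" %>s %D" timing
  CustomLog "logs/timing_log" timing env=backend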
Currently I have 100 threads per Apache process, and ~20 of them handling 2000 idle connections. I'm sure this can be tweaked some more.
Apache manages SSL etc. and just proxies to my application servers.
There's also an event-based Apache module which I haven't tried.
There might be some magic value at which KeepAlive will be helpful during non-peak periods without crippling the server during peak periods, but for a well-engineered site, the extra 10ms delay per request shouldn't be a big enough deal to warrant risking a full-on site outage later on.
Also, this has already been discussed to death on HN: http://www.hnsearch.com/search#request/all&q=keepalive...
I was fairly careful with the test(s), and the 10ms difference seemed to be consistent. So that's odd. I need to investigate that further.
Then, ensure that Varnish is actually caching every element of the page, and that you are seeing the cache being hit consistently.
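A couple of quick ways to check (assuming stock Varnish headers and the bundled tools):

  curl -sI http://example.com/ | egrep 'X-Varnish|Age'   # two IDs in X-Varnish, or Age > 0, indicate a hit
  varnishstat -1 | egrep 'cache_hit|cache_miss'          # watch the hit/miss counters grow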
You should expect over 10,000 unique visitors within 24 hours, with most coming in the 30 minutes to 2 hours after you've hit the front page on HN.
You need not do your whole site... but definitely ensure that the key landing page can take the strain.
Unless you've put something like Varnish in front of your web servers, there's a good chance your web server is going down, especially if your pages are quite dynamic and require any processing.
A few weeks ago I got on the front page and within a 24 hour period was hit with 29,000 unique visitors with 38,000 page views. The page itself is image heavy with 1.3 MB on first load. I'm running Wordpress with the Quick Cache plugin by PriMoThemes. I'm hosted on a shared 1and1 server.
I've been hit before and went down, that's when I installed the Quick Cache plugin. Also 1and1 moved me to another server at some point but I'm not sure if it was before or after. Either the cache plugin is really good or I'm on a rockin server all by my lonesome. Or both.
If you're self-hosting a Wordpress site, grab the Hyper Cache plugin or the very, very simple Quick Cache plugin by PriMoThemes.
Knowing that I can serve 10 requests per second and likely withstand a frontpage on HN is more useful to me than knowing that I need a way out if and when my web-servers are being totally overwhelmed.
Graphical post-mortem here:
During some large events I was seeing 5-7 page views per second and Wordpress did just fine (I think I had 120k page views in 24 hours). But I made sure to test as much of my site as possible to confirm it performed well: using various page analyzers to check that the proper headers were returned so objects were cached on the client, tuning Wordpress to cache pages properly, turning on PHP APC (opcode cache), and running various stress tests on the site (apache benchmark and loadimpact).
I'd say the page analysis tools and extended apache-benchmark runs really helped me tune my OS and services properly so I could handle a huge load.
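A representative run looks like this (URL and numbers are illustrative):

  ab -n 2000 -c 50 -k http://example.com/
  # watch "Requests per second" and the percentile latency table at the bottom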
AwStats screen shot: https://unavailable.s3.amazonaws.com/20121130_WP-Super-Cache...
Plug: we've built several products around Varnish, so we have a good handle on how and where Varnish can be leveraged. Here's a list of Varnish things we've built at unixy:
Varnish load balancer: http://www.unixy.net/advanced-hosting/varnish-load-balancer
Varnish for cPanel and DirectAdmin: http://www.unixy.net/varnish
Varnish w/ Nginx for cPanel: http://www.unixy.net/advanced-hosting/varnish-nginx-cpanel
Email in profile. I'll be more than happy to help out.
You could come up with a generic VCL that works for most websites out there, but its cache effectiveness diminishes as you try to account for the most common corner cases. In fact, we did come up with such a VCL; we distribute it with the cPanel Varnish plugin.
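The core of such a VCL is only a few lines. A minimal sketch (Varnish 3-era syntax; the cookie names are placeholders):

  sub vcl_recv {
      # pass anything that isn't a simple read
      if (req.request != "GET" && req.request != "HEAD") { return (pass); }
      # pass logged-in users (cookie names are hypothetical)
      if (req.http.Cookie ~ "sessionid|wordpress_logged_in") { return (pass); }
      # strip leftover (analytics) cookies so anonymous pages stay cacheable
      unset req.http.Cookie;
      return (lookup);
  }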
If you ever have a question or need a hand with Varnish/VCL drop me an email. I'll be more than happy to help out.
So in theory we were positioned to serve from Nginx cache.
I often try to think of ways to avoid full-text search. This case is a little more difficult, but why not create a list of common recipe names (e.g. "grilled cheese" would be a facet for all grilled cheese sandwiches) and store it as static JSON? It would take some taxonomy work on the backend, but the list as JSON could easily be less than a MB, and then you wouldn't have to worry about full-text search as much. Full search could still be an option, just not a top-of-the-front-page option.
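Something like this hypothetical recipe-names.json (the slugs are made up):

  {
    "grilled cheese": ["classic-grilled-cheese", "smoked-gouda-grilled-cheese"],
    "pancakes": ["buttermilk-pancakes", "banana-oat-pancakes"]
  }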
In our defense this page was supposed to be HTTP cached and that didn’t happen which led to this domino effect.
I think, however, the need to write something like this speaks to an incorrect assumption: that you need a "launch". Of course, TC and HN can give you a nice bump in traffic and even signups. However, in the long run, this really doesn't accomplish much for you. It gives you the kind of traffic that will likely leave and move on to the next article, skewing your metrics. There are certainly qualified prospects in there, but it's hard to decipher them amid all the noise.
Again, the concept of a "launch" speaks to poor business models. It really benefits businesses where the word "traction" is more important than "revenue". Build a business that provides a service that others will pay for and grow as fast as the business can bear, bringing in those visitors that are truly valuable to you.
Cucumbertown has some notions like forking recipes, called "Write a variation", which lets you take a recipe, fork it, and make changes. Additionally, Cucumbertown has a shorthand notation for writing recipes (think stenography for recipes) for advanced users. Things like these appeal to the HN crowd a lot.
Also, don't you think quite a few hackers like me are also cooks?
This is probably my favorite feature: http://www.cucumbertown.com/tribe/ (Jane Jojo's really flying ahead)
Cucumbertown is very UX-focused, and from our research we came to the conclusion that our "aha" moment is getting you to write a recipe in the fastest possible way and giving you that bout of joy. Now, being hackers, we'd want to see forking and stenography front and center. But that's been a struggle: balancing the simplicity of the "aha" moment against differentiation from others. Our primary audience comes to Cucumbertown because they are frustrated writing recipes in Dropbox, Google Docs, Wordpress and Tumblr blogs as "blobs of text".
However, if your site goes down for any reason a postmortem of this sort is definitely warranted. The word "launch" is not signifying much more than a point in time in this case, and I think you're jumping to a lot of conclusions about what hopes they were pinning on this event.
The most challenging piece of a new business is, well, new business. And it's about growing your value proposition organically, one customer at a time, and refining the business. Analyzing a bump in media attention won't really help you with that piece of the search.
Once you've nailed down the search, and you're simply focused on getting more publicity as you scale, then perhaps that sort of analysis will be of more use. But, I doubt it.
> HN community’s remarks and constructive criticism are pearls of wisdom
Granted, that's mostly laziness -- apparently I've got a rule that matches "strange words near the top of the post" to "probably the name of the product".
Regardless of what you do, a little bit of respect for English is always a good thing to have.
Call this a hacker's laziness plus Yahoo-chat-room-era slang.
Ampersands are often used to mark brands, names and cultural items made up of multiple components: Johnson & Johnson, Dungeons & Dragons, bread & butter, fish & chips, Gold, Smith & Associates.
If I say "I had some fish & coleslaw" that would make a few people wonder if this is some popular recipe they should google.
(it's strange that they recommend enabling swap when they also recommend enabling reboot-on-oom, which is pretty much the complete opposite philosophy)
But the modern wisdom on that is that, in general(+), it may be good to have no swap at all on your server, and instead address things by running parallel instances and load balancing.
After all, swap space will also run out eventually if some service is leaking memory, and until it does it will make the system slow for everybody. It's better to let the culprit processes simply die, and make things easier for everybody else.
On my server my Jetty instances keep dying when they run out of memory. Thankfully there are a lot of other instances around to process requests.
It's a trade-off you make: dropping some requests that are currently hitting the errant service (a Jetty instance in my case) versus slowing things down for everybody, to the point that even the developers can't help until something finally runs out of swap space too and dies (like the Solr case you and the OP mention).
+ I say in general because there definitely could be reasons when you need swap.
Edit: Added explanation for (+)
Increasing the swap, which is the suggested solution, is, however, a terrible idea. As soon as you hit high memory usage, your IO load will go through the roof and everything will grind to a halt.
The solution here is separation of services - i.e. put Solr on a different box, so that if it spirals it doesn't take out other services.
The OOM killer is your friend for recovering from horrible conditions, but as soon as you hit it, or hit swap, something's gone wrong.
I'm pretty sure we disable swap at Google. Maybe swap was necessary back in the days when memory was really tight, but it seems like a terrible idea now. Especially since the scheduling is completely oblivious to swap AFAIK, which means that a heavily swapped system will spend most of its timeslices just swapping program code back into memory. It's the worst kind of thrashing.
2. Isn't swapping bad? I don't think I've ever had a situation in which swap more than say 100MB was helpful. Once the machine starts swapping, a bigger swap just prolongs the agony.
3. If you couldn't ssh, why didn't you just reboot the machine?
1. What did you use for the graphs?
2. What is the stack?
Stack is Python, Django, PostgreSQL, Redis, Memcache, etc.
It doesn't take more than an hour, and you quickly know what your upper limits are and where the bottlenecks are.
I use Gatling in preference to JMeter:
Running everything on one box? Using swap? No caching? It's like a laundry list of junior admin mistakes.
* If memory serves correctly, if your system runs out of memory, shouldn't the kernel's OOM killer kill the processes that are using too much of it? If so, the system should recover from the OOM condition and no restart should be needed.
* OOM errors aren't the only way to get a system into a state where you cannot SSH into it. It would be great to have a more general solution.
* Even if you do restart, unless you had some kind of performance monitoring enabled, the system is no longer in the high-memory state so it will take a bit of digging to determine the root cause. If OOM errors are logged to syslog or something, I guess this isn't a big deal.
I suppose the best fail-safe solution is to ensure you always have one of the following:
* physical access to the system
* a way to access the console indirectly (something like vSphere comes to mind)
* services like Linode allow you to restart your system remotely, which would have been useful in this scenario
* I've never seen any sort of virtual hosting service without either a remote console or a remote reboot. Usually both.
I really like your website, especially the simplicity of the presentation to the user.
"Note that in 0.8.44 behaviour was changed to something considered
more natural. As of 0.8.44 nginx no longer caches responses with
Set-Cookie header and doesn't strip this header with cache turned
on (unless you instruct it to do so with proxy_hide_header and
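So if you do want nginx to cache such responses, you have to say so explicitly, e.g. (a sketch: assumes a proxy_cache zone is configured elsewhere, and the upstream name is hypothetical):

  location / {
      proxy_pass http://backend;
      proxy_ignore_headers Set-Cookie;   # cache despite Set-Cookie
      proxy_hide_header Set-Cookie;      # and don't forward it to clients
  }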
I think you should put a description up front to describe what Cucumbertown is. I think that main image should be a slider with multiple feature images, and the Latest Recipes should be the first section after this. Just my 2c!
Putting a dedicated Varnish server in front of the search servers helped a lot. Using a CDN may also be a viable option, but I haven't tried it myself.
One last thing you can do, if none of that is possible, is use a better VM like JRockit (http://www.oracle.com/technetwork/middleware/jrockit/overvie...). JRockit with the right GC is, in my experience, much better at running in lower-memory situations.
Obviously, it's easy to say that when you're on the bench. Congratulations on the launch by the way.
At my previous firm we had this culture where, whenever traffic peaked, we spun up new instances. Tools like RightScale and Chef make it ridiculously simple, so our style was to do that rather than optimize strained code paths, because it is just so convenient.
And before you know it, this notion that hardware is cheap becomes a culture. Soon enough, if you grow, you'll be serving 100K users with 250 machines.
I do agree that Chef (and RightScale? Never used it) makes it easy to spawn new instances and, through load balancing, average out your load.
I was talking in terms of the trade-off in the first few weeks of a new service with an MVP. Obviously you reassess your needs before you get to 100k users, and probably use something other than EC2.
In any case, both ideas are equally good. I don't claim better knowledge in any way.
If it was better for your previous firm to pay more than to optimize, it was actually preferring developers' time (which costs money, as you know) over server costs.
This can be cost-effective up to a point, but I don't think it gets you to the level of "100k users on 250 servers".
If the other side of the coin is that you waste dev time AND your site is down for a few hours...
Is it really worth the "culture fear"?
Also, in some instances of runaway memory, there will always be a point where all the memory in the world isn't enough.
1. HN should let you pay them $10 and have them hammer your server(s) before your story goes live. Good for you, good for them.
2. There's a deal at lowendbox right now for a 2GB VPS for $30 a YEAR. You could have a healthy server farm for pretty cheap.
http://news.ycombinator.com/item?id=4847949
http://www.joedog.org/siege-home/
http://httpd.apache.org/docs/2.2/programs/ab.html
Addendum: Oh, load impact is pricey though.
Cooking/food articles consistently do well here.