Close. In 2011, all the engineering desktops were upgraded to 36 GB. At the time, the eng department still hadn't figured out how to deploy without duplicating hundreds of jar files everywhere.
Then their hearts sank when they realized why.
Somebody suggested LinkedIn is one large application, which is absolutely false. It's composed of many small applications, each deployed as a WAR file since the non-Node.js part of the site is on the JVM. There's a lot of overhead that comes from deploying this way. It doesn't matter in production or staging, but it adds up on a desktop.
These days, though, dev machines have stopped growing, because we have shared stacks: you can deploy just a couple of pieces of the application against them when you want to test the whole thing.
Sure, they ended up with 20x greater speed, but they never said it was because of Node.js. They even provide a fairly detailed, high-level description of everything they did with their new approach to achieve this performance.
The headline "LinkedIn Moved from Rails to Node: 27 Servers Cut and Up to 20x Faster" should say something like "LinkedIn did a rewrite and rearchitecture that cut 27 servers and increased speed by up to 20x".
It's also a great post illuminating how obvious some things can be in hindsight (that building a high-capacity web service on a single-threaded server will give you problems down the road), while at the time it's not always easy to see the forest for the trees.
For me though, the big takeaway was that one line summary: "You’re comparing a lower level server to a full stack web framework." Node.js has a pretty nice library/module ecosystem now, but for a complete full-stack solution with maximum productivity I would venture that there is nothing out there that compares to Rails currently.
But their old technology was requiring towers with 36 GB of RAM. It seems like more of a choice between the devil you know and the devil you don't. Obviously there needs to be a cost-benefit analysis, even if it's imperfect. Otherwise you'll get stuck in a local maximum, and before you know it you're paying $10 per million cycles on a mainframe running batch COBOL jobs maintained by old men who are dying faster than you can replace them.
At the end of the day, the Rails stuff got the job done. LinkedIn stayed up and was able to grow and add mobile features during that time. The current solution, the node.js stack, is even newer than Rails. So no, I don't think this validates management desires to stay with old technology.
Um, the new solution is single-threaded too. Threading and concurrency are not the same thing.
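To illustrate the distinction, here's a minimal sketch (the `fakeFetch` helper is made up for this example): a single thread can still serve many operations concurrently by interleaving them while each one waits on I/O.

```javascript
// A single-threaded event loop still gives you concurrency: both
// simulated "requests" below are in flight at the same time, yet
// only one piece of JavaScript ever runs at once.
function fakeFetch(name, delayMs) {
  // Stand-in for a network call (hypothetical helper for this sketch).
  return new Promise((resolve) => setTimeout(() => resolve(name), delayMs));
}

async function main() {
  const start = Date.now();
  // Started together, awaited together: total wall time is roughly
  // 100 ms, not 200 ms, with no threads involved.
  const results = await Promise.all([
    fakeFetch("profile", 100),
    fakeFetch("feed", 100),
  ]);
  console.log(results.join(","), Date.now() - start);
}

main();
```

What you don't get from this model is parallel CPU work: one long-running callback blocks everything else, which is its own failure mode.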
But I completely agree with "the rewrite thing". I guess the other factors still made it necessary to do it...
Was there a recent discussion? I'm fairly certain everyone has moved on from Rails, and I'm not sure if they're still using it anywhere at LinkedIn. There are a few folks using JRuby, but I believe they're using Sinatra:
Never once did I feel this article was advising that node.js was superior to RoR; they only ever justified, at a high level, a way better approach (in terms of server load) to an "MVC"-like architecture: leveraging client-side frameworks and techniques to lessen server load.
The author of this article also makes it clear at the end by stating that comparing the solutions is apples to oranges, but so did the original article...so I don't get the need for "clarification".
EDIT: I retract my "way better" statement - I mean "way better" in the sense of server load.
That may explain why they had spam and security issues.
Billy Madison: "High school is great. I mean I'm learning a lot. And all the kids are treating me very nice. It's great."