That's not to say that Rails doesn't make a great API, or that Alex doesn't have a point. There certainly is a place for frontend applications built on frameworks like Spine, Backbone or Ember (all great projects). Those types of applications have their advantages in some cases. But it's prudent to be pragmatic and to recognize that the times where you truly need to build a client-driven application are few, and for the other times Rails is still great at serving up HTML.
Exactly the same work is taking place on the server, apart from template processing. It makes no sense. If your API is going to take 2 secs to build a response, so will your server-side rendered one.
There's no real overhead to DOM insertion and simple template output. If you're doing crazy stuff, it's going to take a long time wherever you do it.
Admittedly it's still very tricky to get it right at the moment. I also don't really agree with the author's statement about desktop experiences in a browser. I'm beginning to believe it's impossible, especially without the native UX of the OS. I don't think we ever will have that experience as long as there's an address bar, back buttons and all the other cruft at the top of the screen. I think users may always see it as a web page and expect it to behave like a web page. Hopefully I'm wrong.
Let's take your solution: it's really difficult to build a development environment that allows a large team to work efficiently and that isn't based on having essentially a single bootstrap file when deployed in production. When you break your files apart into smaller chunks, you are asking your developers to understand the intricacies of asynchronously loading each file, and you impose dependency management on everyone. This is a huge problem at Twitter's scale.
FWIW, Twitter did break apart their files into smaller chunks pretty effectively using the Loadrunner project (https://github.com/danwrong/loadrunner).
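Loadrunner's real API has more to it, but the core bookkeeping of dependency-aware loading is surprisingly small. As a rough sketch (made-up `define`/`load` names, not Loadrunner's actual interface):

```javascript
// Minimal dependency-aware module registry (a sketch, not Loadrunner's
// actual API): each module declares its dependencies, and load() resolves
// them depth-first before running the factory.
var registry = {};   // name -> { deps, factory }
var cache = {};      // name -> resolved module

function define(name, deps, factory) {
  registry[name] = { deps: deps, factory: factory };
}

function load(name) {
  if (name in cache) return cache[name];       // already resolved
  var mod = registry[name];
  if (!mod) throw new Error('Unknown module: ' + name);
  var args = mod.deps.map(load);               // resolve dependencies first
  cache[name] = mod.factory.apply(null, args);
  return cache[name];
}

// Modules can be declared in any order; load() sorts out the ordering.
define('templates', [], function () {
  return { tweet: '<li>{{text}}</li>' };
});
define('timeline', ['templates'], function (templates) {
  return {
    render: function (text) { return templates.tweet.replace('{{text}}', text); }
  };
});
```

Each file registers itself with its dependencies, and the loader resolves ordering, so individual developers don't have to reason about load order by hand.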
However, Twitter is a perfect example of how not to do it: they inadvertently shot themselves in the foot with the hashbangs, so they never know from the first request what's being asked for. That's kind of a mix between HTML5 history being so slow to come out and Google encouraging the # stuff in the first place. But everyone's still learning. Full-page, postback-less JS applications are still a pretty new field.
Twitter were a very early trailblazer. As much as people hate it, kudos to them for having the balls to try it and let us all learn from their mistakes. I've got a project that's a horrible mix between postback and postback-less stuff. It's almost DailyWTF-worthy, but I don't regret doing it, as each iteration gets closer to doing it right. And there are people using it right now, and it works, as much as I cringe about it.
For example, if you look at the 1.5MB JS file they send when I make an anonymous request, it's got every possible action I could ever take, regardless of whether you're on a tweet page, the main page, a profile page, etc. It doesn't matter if I'm signed in or not; they're all in there as templates. And then for some reason they whack in a load of compressed jQuery stuff, etc. It has how to sign up, how to add people to follow, the congratulations screen for when you make your first few follows, etc.
That, to me, is just nuts. Why is there no split in the JS dependent on the kind of user they're going to be? On reflection, they probably think that too.
With regards to development environments, I do think you should force your programmers to understand your infrastructure. Give new programmers a couple of simple examples to play with on how to manage dependencies and loading files and that's that. In my view it's no different to saying 'we use these parts of C++, not these' or 'this is our coding style, you must follow it or your checkins will be rejected'. Or 'this is how the industry this app is for works, these are the general business rules most of them have, this is the general workflow'.
New programmers to your organisation always have to pick up some contextual information over time. It's your job to train them in the most relevant key information asap. I think far too often it's just a case of 'you're a programmer you're smart, let's just throw you in at the deep end and see if you can swim. Oh, why did you swim into the shark pen? Silly n00b.'. Very relevant: http://news.ycombinator.com/item?id=3736800
I think that's the big thing. I randomly arrive at tweets all the time; I open Gmail once or twice a day and leave it open. Even if you are a heavy Twitter user, you are still going to end up loading the page multiple times organically from browsing, even if you leave a main Twitter tab open.
The lines are blurring between dynamic content driven websites and web applications, but they are still there. IMO Twitter (and Gawker) got distracted by the new hotness, even if it wasn't appropriate for what they actually are.
No, I'm not twitter. Since when is their incompetence a metric for the capabilities of a technology stack?
You could hand them a top-10 supercluster and they'd still failwhale it.
Remember, this is the same company that failwhaled for the better part of 2 years(?) on a pubsub app (one of the most researched and understood areas in computing, cf. telco industry, financial industry).
It's nonsensical to conclude "Twitter can't do it so it isn't possible".
I maintain twitter.com is slow because twitter is incompetent or doesn't care about their product.
These are all symptoms of sloppy engineering.
In any case, Gmail is not a good example to support your case since it loads just as slowly as Twitter does, with a progress bar and everything.
For me the point of the discussion was the claim that the first-load performance of a fat-client app is inherently terrible.
This is false.
It is a straightforward optimization problem. Twitter didn't care to optimize.
I don't think it is under debate that the overall responsiveness (after first load) of a client-side app is head and shoulders above anything you can achieve in the request/response paradigm. Network latency is real; AJAX can mitigate it, but instant response is only possible when you don't hit the server.
So all we're talking about here is the specific (important) case of the initial page-load. As said above, that case has tons of optimization potential, up to the point where it's near indistinguishable from a regular HTML page-load.
Gmail is not a good example to support your case
This may be subjective. Yes, GMail is slow. But it loads faster than Twitter for me, despite being significantly more complex. I also imagine Google has less incentive to optimize the first load because, unlike with Twitter, users rarely follow deep links to Gmail.
Now imagine twitter applied only the little optimization that google has to their much smaller app - the latency problem would probably not exist. And if that's not enough, I can only repeat: It's a valid approach to serve static HTML for direct tweet links (or just for everything) and upgrade it asynchronously.
My point is that it's very possible to optimize this problem away where it matters (deep links to tweets). I've done it myself a couple times. It's nasty gruntwork, involving endless Firebug sessions. Crying Foul and "this is not possible" is a lazy cop-out.
What you're suggesting is usually called progressive enhancement. Yeah it's great, but sadly when you are using a system like Spine, Backbone or Ember, it's not just a matter of "endless Firebug sessions" to resolve the initial bootstrap problem.
No one claims that this is not possible, nor is anyone "crying foul". On the contrary I know first hand how much of this work was done at Twitter and how difficult it is. But as you stand by your petulant claim that "twitter.com is slow because twitter is incompetent or doesn't care about their product." I stand by mine, that there are tradeoffs when embracing one style of development over another.
Feel free to disagree with me but back it up with experience, not false assumptions, blanket statements and the denigration of many good people.
Oh, what else is the matter then?
back it up
You mean like you just backed up your claim of Spine/Backbone/Ember having some mythical, unspecified problem that prevents bootstrap optimization?
You say "static sites can spit back pages in like 200ms (across the network!!)". This time is entirely dominated by network latency. There is no way you're getting data to the client in 200ms if they're based in Australia and you're in, say, east coast US. With the client-side model you can push your app's bootstrap to a CDN to get ~200ms latency just about anywhere. You can render the UI and show a "Loading data" spinner to hide the latency of the initial request to your application server. This gives the user the appearance of a snappy site even though network latency might be 1s or more. You can't do this if you go the server-side route.
It doesn't have to. Load the HTML first, which shows something to the user, and load the data asynchronously, and no one will notice.
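As a rough sketch of that pattern (hypothetical `/api/timeline` endpoint, with a callback-style AJAX helper assumed):

```javascript
// "Show something first, fill in data later": render a static shell
// immediately, then swap in real content when the (assumed) JSON endpoint
// responds. fetchJSON is a stand-in for any AJAX helper.
function renderShell() {
  return '<div id="tweets">Loading\u2026</div>';   // visible immediately
}

function renderTweets(tweets) {
  return '<ul>' + tweets.map(function (t) {
    return '<li>' + t.text + '</li>';
  }).join('') + '</ul>';
}

function loadPage(fetchJSON, onUpdate) {
  onUpdate(renderShell());                         // user sees the page now
  fetchJSON('/api/timeline', function (tweets) {
    onUpdate(renderTweets(tweets));                // data arrives later
  });
}
```

The shell paints right away, so perceived latency is the shell render, not the full round trip for data.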
Charles Nutter is doing genius work with JRuby, btw; that's where Ruby has a future with the enterprise: on the JVM. Twitter dumping Rails for Scala was a major win for Odersky & friends, and a big hit to Rails (though Ruby/Rails continues to innovate; nothing has changed there).
Not sure why client-side applications take a second or 2 to render a page, but if that is the case, cached html on front end server would be dee way 2 go, just avoid hitting the application server entirely...
Seriously, I'm curious: what public-facing, or any-facing, components does Twitter use that are written in Ruby and/or Rails?
A Google search for "twitter rails" brings up the usual "got dumped" threads. A similar search, this time for "twitter scala", brings up as its first result "Scala School" for Twitter engineers, followed by a bunch of threads on the Twitter + Scala marriage.
Don't worry, Scala has its own issues (Yammer, for example, ditched Scala for Java due to, ironically enough, scalability issues).
The backend (i.e. Rails) still does almost everything it used to do: validations, access control, session management, data crunching, and everything else that you can't blindly trust a client to do. The real difference with client-side applications is that instead of stitching together view templates and sending back HTML documents, JSON objects are sent back for the client to represent.
More (sometimes duplicate) stuff is added to the client - things like client-side validations, and all the business logic and template code for the purposes of presentation. But no one in their right mind would let their backend blindly consume whatever it gets and persist it without question.
There's still plenty of responsibility for the backend.
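To make that concrete, here's a hedged Express-style sketch (hypothetical route and fields): the server still owns validation and persistence; the only change from a server-rendered app is that it answers with JSON instead of HTML.

```javascript
// Sketch of an API-style create action. The server still rejects bad
// input; it just replies with a JSON object for the client to template,
// rather than a rendered page. Field names and limits are made up.
function validateTweet(params) {
  var errors = [];
  if (!params.text) errors.push('text is required');
  else if (params.text.length > 140) errors.push('text is too long');
  return errors;
}

function createTweet(req, res, save) {
  var errors = validateTweet(req.body);
  if (errors.length) {
    return res.status(422).json({ errors: errors }); // client re-renders the form
  }
  var tweet = save(req.body);                        // persistence stays server-side
  res.status(201).json(tweet);                       // client templates this object
}
```

Everything a client can't be trusted with still happens here; only the last mile of presentation moved to the browser.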
Leave the rendering of the app to the client. Leave the ins and outs of the data to the API (driven by Rails, Node/Express/Railway, Django, what-have-you).
That's definitely applicable to mobile and web apps.
It would make more sense just to pipe JSON through IRC since it's much less verbose and was designed to be symmetrical (and session based) from the start.
Because it's the best option. When WebSockets or whatever socket protocol becomes ubiquitous, you'll probably see less HTTP use.
From my perspective we're coming full circle back to client/server desktop apps... only instead of C++, we're doing it with js inside a browser container... I've done ActiveX controls and Flash components... It's not that much of a stretch.
I would rather see more people deploying native applications with great UX backed by REST APIs than shoe-horning apps into browsers (which breaks the web).
1) Developers are worried that a user won't try their app if they have to install something on their computer.
2) Compatibility: support anything that can render HTML, although this may no longer be the case given the number of Chrome-only apps I see.
3) Firewalls: you can get out on port 80 pretty much anywhere; that random port number you decided to use for your app, not so much.
4) A few years back web programming was seen as the "easy" way to get into development, since writing relatively limited PHP was a lot easier than wrangling C++ and the Windows SDK. Therefore, web developers reached critical mass.
Java applets and Flash did a reasonable job with 1 and 2, but they seem to be some of the most hated parts of the web. I think part of this may be because they are seen as "too powerful".
People want the web to be lightweight.
The web, as it was originally envisioned, makes perfect sense: HTTP, URIs and hypertext to provide navigable content, period. On the other hand, building interactive applications by manipulating the DOM while reinventing UI patterns and widgets over and over again seems like a hack that grew to enormous proportions.
There's no magic bullet.
I'm not sure how or even if the infrastructure of a distributed system like the web could be engineered so as to prevent this kind of situation. Perhaps the solution is to build in a system of financial incentives -- not unlike what the Bitcoin folks have done to solve the Byzantine Generals problem. It's an interesting problem.
It's in both mobile apps, browser based JS app, etc.
My talk was rejected. PUNKS!
This will make it much easier to only use Rails as an API and get rid of a lot of the bloat/complexity people seem to complain about.
This was posted on edgeguides but it's since been removed for some reason (http://edgeguides.rubyonrails.org/api_app.html).
I'm bearish on Rails because its maintainers don't want it to just be an API. The fight to become the best backend API is much different than the fight to become the best html server. Rails isn't even participating in the fight. Rails has a lot of cruft not needed in an API, Sinatra or Express feel much better for that.
Rails isn't even participating in the fight
Rails became popular because with it developers had the ability to cut down on soul-sucking activities in their day-to-day jobs, like building yet another authorization system, or yet another admin. That some people ran with it and scaled it to its limits, that's only because they fell in love with its ease of getting things done.
That's its main advantage. When it comes to getting things done, there's no better alternative.
The fight to become the best backend API is much different than the fight to become the best html server.
So you see, there is no fight. That fight was won long ago by the JVM and the frameworks that run on top of it. Every big website on the web right now (except Microsoft-stuff), runs either on top of the JVM or with custom-baked solutions written in C/C++.
That didn't stop people from building cool stuff on top of PHP, or Rails, or Django, or every other platform that brought instant gratification, but that's another point entirely and we are talking about "battles" here.
I beg your pardon, but... what?! Out of the box, Rails has none of this. Rails is almost a meta-framework. Honestly, taking your two examples: Devise is horribly complicated, and the half-dozen admin frameworks that had a hard time crossing the 3.x line (which is not encouraging for the future) are either generators, too incomplete to be useful outside of a hello world, or frameworks in themselves. I defy anyone beginning with Rails to set up Devise and an admin system in less than half a day, let alone an hour. I'm not trying to pick a fight, but anecdotally, compare this to Django, where setting up both auth and admin takes well under 30 minutes for a newcomer, easily, with ample possibilities left ahead.
So although Rails helps a lot in various areas (like resource routing, respond_to/with), it still ends up being soul-sucking in numerous others, where you have to either delve into needlessly complicated stuff or implement it yourself.
This is a false equivalence IMO. With Rails I spend far less time on incidental complexity than I do with anything else I've tried.
What does that make Wikipedia (PHP), Craigslist (Perl), Wordpress (PHP) and YouTube (Python)?
Wordpress is an interesting one, I wonder if they aren't doing some C++ pre-compilation a la Facebook's HipHop? Maybe not, WP is pretty light code-wise (compared to the slow, bloated dog that is Drupal), and for the most part personal blogging sites are not handling Twitter level bandwidth last I checked.
@bad_user is a cool thinker @icebraining, no hot headedness to be found, he's just stating the facts in regard to industry trends for enterprise level applications: it's an M$ and JVM world.
Can you define big? I've worked on some big sites that used MRI ruby for APIs and we served a ton of traffic with strict SLAs for a max of 250ms at the 99th percentile and things like that.
I think by big that's a code word for enterprise.
The big sites you have worked on are comparatively small if Ruby is backing the show. That's not to take away from the ton of traffic that you guys were able to serve; it's just that there are few enterprise level Ruby backed sites running these days.
Github, I believe is one, but lately I've been getting the "unicorn is angry" icon when viewing repositories, so I wonder about scalability issues. For the record, I have never, ever seen a "unicorn is angry" icon on Twitter. Maybe switching to the JVM got rid of all the magic ;-)
"The big sites you have worked on are comparatively small if Ruby is backing the show."
That's a big assumption. You have no idea who I am or what I've worked on. One of the sites I worked on was yellowpages.com. That's a top 1000 site, but even that doesn't tell the full story. When I was there we were serving ads for most requests to bing maps. Do you consider bing maps comparatively small?
I currently work for Disney, who runs espn.go.com. The person sitting next to me right now worked on espn.go.com before she transferred to my group. I can assure you that espn.go.com could easily be served with Ruby instead of Java.
I'm questioning a couple of specific claims made by bad_user: that all big sites use the JVM, C/C++, or .NET. I think that is false, or the definition of "big" is so narrow as to be meaningless for 99.9% of programmers. I'm also questioning the claim that the best backend is the one that is the fastest. I'd argue that the best backend is the one that is fast enough and the cheapest, wouldn't you agree? As I mentioned before, I was working on an API that served requests for Bing Maps with a very tight SLA. We ran it on MRI Ruby, and we met the SLA.
Here's something that people often forget: you can put things like Varnish in front of your API. This doesn't work for everyone, but if your API is easily cached, then you shouldn't have any problems scaling it, even if you're using a language like MRI Ruby, which has a GIL.
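The hard part is just being deliberate about which responses the proxy may hold on to. As a hedged sketch (hypothetical helper; the headers themselves are standard HTTP, which Varnish's default configuration respects):

```javascript
// Sketch: mark API responses as cacheable by a reverse proxy like Varnish.
// A shared "public, max-age" lets the proxy absorb repeat GETs without the
// Ruby process ever seeing them. The TTL value here is a made-up example.
function cacheHeaders(method, isPublic, seconds) {
  if (method !== 'GET' || !isPublic) {
    return { 'Cache-Control': 'no-store' };   // never cache per-user data
  }
  return { 'Cache-Control': 'public, max-age=' + seconds };
}
```

So an anonymous timeline GET might carry `public, max-age=60`, while anything signed-in or mutating stays `no-store`.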
Rails/Backbone will work well for the time being but I wonder which way the balance will swing? Will we have fat Rails models or fat Backbone models? If it is a fat Backbone model, then maybe direct database queries is the solution, cutting out Rails entirely. CouchDB and others already support this and could make it even simpler in the future.
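CouchDB already speaks plain HTTP/JSON, so a client-side model could in principle read its data with no application server in the middle. A rough sketch (hypothetical database and view names; this just builds the view URL for any AJAX helper to GET):

```javascript
// Sketch: CouchDB exposes documents and map/reduce views over HTTP, at
// /<db>/_design/<ddoc>/_view/<view>. The names below are made up.
function couchViewUrl(base, db, design, view, key) {
  return base + '/' + db + '/_design/' + design + '/_view/' + view +
         '?key=' + encodeURIComponent(JSON.stringify(key));
}
```

A Backbone model's `url` could point straight at something like this, which is exactly the "cutting out Rails entirely" scenario.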
We'll always need some sort of server-side, last-mile, tamper-proof validation. So given that, I'm not sure of the benefit of duplicating it on the client side.
You duplicate validation logic on the client side in order to provide rapid feedback to the user. But it's a huge mistake to not let the backend have the final say.
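One way to keep the duplication cheap is to express the rules once as data, so the same checks can run in the browser (for instant feedback) and on the server (for the final say). A sketch, with hypothetical field names:

```javascript
// Sketch: validation rules as data, usable on both client and server.
// The client runs validate() before submitting for fast feedback; the
// server runs the same validate() (plus checks the client can't be
// trusted with, like uniqueness) before persisting anything.
var tweetRules = [
  { field: 'text',
    test: function (v) { return !!v; },
    msg: 'text is required' },
  { field: 'text',
    test: function (v) { return !v || v.length <= 140; },
    msg: 'text is too long' }
];

function validate(rules, params) {
  return rules
    .filter(function (r) { return !r.test(params[r.field]); })
    .map(function (r) { return r.msg; });
}
```

The backend still has the final say; the client copy only exists to save the user a round trip.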
- Debugging on every browser was quite painful. I tried to support IE7+ as well. If server-side code works, it works for everyone.
- Frameworks on the client side are not as developed as the server-side ones. Also, I am reluctant to introduce more client-side libraries than necessary, so I will end up coding a lot of stuff that I could otherwise get for free in Django.
- The user will need to wait for the JS to load and execute until they see something on the first pageload (at least in my implementation).
- If there is a bug in the JS it's harder to get it logged.
I usually just go with wherever the Rails team is taking the framework. For some legacy sites, I may stick with wherever they're headed.
For new projects, I really view Rails as just the web api that feeds my Backbone framework.
Yes, of course Ajax and the interactivity and convenience it offers are awesome, but they come at a massive expense, and it will be very saddening if Rails makes this a major part of its core sooner rather than later.
 Alright so it's not just America, but my point stands.
can't get much leaner than that. IE 10 supports pushState(), so the future is looking bright.
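For contrast with the hashbang approach, a pushState navigation sketch (the `loadContent` callback is a placeholder for whatever fetches the new view):

```javascript
// Sketch: pushState keeps real URLs (no #! fragments), so the very first
// request already tells the server which page is wanted. Browsers without
// history.pushState just fall back to a normal full page load.
function navigate(path, loadContent) {
  if (typeof history !== 'undefined' && history.pushState) {
    history.pushState({ path: path }, '', path);  // update the address bar
    loadContent(path);                            // swap content in place
    return 'pushState';
  }
  return 'full-reload';                           // older browsers: plain link
}
```

That's exactly why hashbangs hurt Twitter: with `#!` URLs the server only ever sees `/`, whereas with pushState the deep link arrives intact.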
I'm all-in now; give the poor ole' JVM beast a break, it might OOME on me if I actually put it to work...
Put node.js in front of something like grape and you have the best of both worlds.
Now I don't even want an invite.
Oh wait. That already happened, and it was called the switch from mainframes to PCs (or whatever the fuck you want to call it). The point is: developers adapted, and got over it, and you will too.
My answer: no, not at all. But it's hard as hell to fight the momentum of an unstoppable force. Even with an immovable object.
That leaves authentication and CRUD operations for the application server to work on; the rest lives in-memory (via google mod_pagespeed, for example) on the front end server.
I'm taking this approach as I like speed, and I don't yet trust the JVM enough to handle tons of live requests and OOME on me while I'm out surfing ;-)
Now, put that aside, I do agree :)
I was not talking about "Web Applications" either, which, FYI, are not the same thing as dynamic content. I realize that he mentions in the post that he's mostly talking about Web Applications, but then he's just stating the obvious and the post title should be "For Web Applications, Rails is just an API".