> It wasn’t just fancy HTML we had to build; it was a port of a
> graphic- and animation-intensive Flash app, with the polish
> intact, and it had to work on iOS. It had to work on IE7.
> It had to work on Android 2.1 on one of the clients’ phones.
> And we had six weeks. We did the maths and figured that at
> twelve hours a day, six days a week for six weeks, we might
> just sneak in.
> There were also three levels of stakeholders above us, with
> different priorities, and constantly shifting (and always
> growing) specs cascading down from above as the site took shape.
> After some discussion with my ‘if I get hit by a truck’ backup
> developer, I decided to add CoffeeScript to this, despite no
> real experience with it.
Then you're a brave, brave man. After this intro, I fully expected to hear a disaster story of missed deadlines and feature cuts ... not a successful project and a Super Bowl launch.
You mention that you're thinking of adding an explicit validation step -- great idea. For CS & JS (which can be compiled and validated quickly), even better than having it run when you build is having it run every time you save the file in your text editor...
A DHH type would've slammed this guy for not giving the language its due; you are way too nice J!
Given no tooling that I liked in the Java realm I work in, I made my own compiler using Rhino and Coffeescript.js.
My biggest critique of the article is the process. The author picked a tool he wasn't familiar with, probably produced sub-par code, and for the next project aims to pick something else he appears equally unfamiliar with. That's great for padding the resume and getting the next gig, but terrible for producing high-quality work.
To reply to you and jashkenas, we weren't completely delusional. Sure, the project was big and tight, but we were pretty confident that we could nail it; none of the particulars were new, just the scale and the timeframe.
There was never any question of "can we actually do this?", just "can we do this in time?" Hence picking a tool that looked like it might offer some shortcuts. Yeah, it was a hell of a lot of work, but that was always going to be the case, and I accepted the project on those terms.
(My post perhaps came across as more extreme than the reality. We delivered. The site works. The code never got to 'unmanageable'. Given the opportunity to refactor this particular site, I'd leave it in Coffee, although I'd rework the build and deployment process.)
This article gives a perfect example of why you would want to take advantage of the git index: the author was talking about different versions of coffee sometimes producing different output, which led to commits going back and forth between the two versions.
The author is right: this is annoying.
Now, the correct way to solve this would probably be either to not check in generated files at all (leave the JS out of the repo), or, if that's not possible due to politics, to use the same version of coffee everywhere. I'd highly recommend the latter anyway, because it gets rid of really WTFy issues where a bug in one person's compiler manifests as an application bug that only appears when the last commit was made by the person with the broken compiler.
Anyways. Let's say you can't use the same compiler and you have to check in JS in addition to coffee (you are not trying to sneak in coffee by ONLY committing the JS, are you?)
Now we have the perfect opportunity to show off the git index:
Instead of committing the whole file, you would use add -p to only select the changes which are actual code changes and not changes in compiler output.
Then you would commit only those changes, undo the unneeded stuff, and test again to ensure that you added the right lines to the index (commit --amend or rebase -i if not).
This will give you a much cleaner history, which will be a huge help when blaming or reviewing the code. No more annoying flip-flopping between compiler output styles. No more meaningless changes in a commit. All the changed lines in the diff are the lines that count. The lines you should look at when reviewing. No risk of missing the trees in the forest.
But, you might say, history rewriting is bad. I don't want to commit something I haven't tested.
Remember though: you haven't pushed yet. You haven't altered public history yet. Nobody knows about those commits yet. You have all the time you need for testing, patching and massaging commits. Only once you've pushed do your changes become (or should become) set in stone. Only then is history written. Only then can it no longer be changed.
This is why I love rebase and I'm really glad I finally found a good real-world example to show why.
Let me readily acknowledge that our process wasn't optimal. The CoffeeScript version-change happened while I was completely offline for a week, having dumped the next dev right in the middle of things with insufficient handover, so we caught that a bit late.
In hindsight (1) I should have used something other than Make for my build process and (2) I should have figured out a way to move some of the build upstream.
I submitted primarily because it's a great read but also because I know the firm Matt does a lot of work for (Sons & Co in Christchurch, NZ) is co-founded by a buddy of mine. They are all really nice guys.
For what it's worth, facing a client-side project of similar magnitude, we chose Google Web Toolkit and learned Java.
Two years and 60k LOC later, (www.activityinfo.org) I'm not entirely satisfied with the tool -- sometimes more abstraction introduces new problems to be solved, and it's hard to find good, affordable java devs to work on the project here in NL, but it does IMHO bring tremendous advantages in terms of modularity, dependency management, and generally keeping the code base maintainable. And the optimizing compiler is nothing short of amazing.
Having worked on a project where GWT was used, I found it really does make things easier to work with.
I found it really easy to pick up and start doing some actual work. The only complaint I have so far is that every time I wanted to expose a service from the server to call in the client, I needed to edit around 4/5 files; I'm not sure if that's really needed or if at least 2 of them could be avoided, since there was already a small codebase to follow.
We recently switched over to using CoffeeScript in production and haven't looked back. One thing we found out real fast was to not compile the CS until you go to staging or QA. The way we get around this is using RequireJS (http://requirejs.org/) and the CoffeeScript plugin (https://github.com/jrburke/require-cs). This buys you the ability to compile at runtime in the browser, so you don't have this mess of CoffeeScript files and compiled JS files in your source. When you are ready to build for production or QA you use RequireJS' optimization tools to concatenate, minify, obfuscate and compile the code. Hope that helps.
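A minimal sketch of that setup, assuming the require-cs plugin and a browser-ready coffee-script.js have been dropped into a local lib/ directory (the paths and module names here are hypothetical). This is a browser config fragment, not a standalone script:

```javascript
// main.js -- RequireJS config (sketch; paths are hypothetical).
require.config({
  paths: {
    // the require-cs plugin and the CoffeeScript compiler it loads
    cs: 'lib/cs',
    'coffee-script': 'lib/coffee-script'
  }
});

// The cs! prefix tells RequireJS to fetch app/main.coffee and
// compile it in the browser at load time.
require(['cs!app/main'], function (app) {
  app.start();
});
```

For production you'd point the r.js optimizer at the same config; it compiles the CoffeeScript once at build time and inlines plain JS.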
Found out the same thing. Developed mostly via TDD/QUnit in FF with Firebug only, occasionally running the entire suite with multiple browsers. Even IE6 (via IETester) would compile several thousand lines in a few seconds or so.
Used TDD for almost everything, even clicking UI widgets to populate from a REST resource and then validating the result. CS rocks!
I am now on my third project using coffeescript together with BackboneJS.
On the second and third project, I am also teaching coffeescript to 3 people who had never written any coffeescript OR BackboneJS before.
This may be due to Coffeescript, and it may be due to Backbone adding some much needed structure, but I'd like to believe it's a combination of both and even using one in isolation should yield an improvement.
On a closing note, you say that JS is ridiculously powerful which leads me to believe that you feel using CS means sacrificing some of that power. Since CS translates directly into JS, I think you'll find this fear is unwarranted.
I appreciate the article but disagree about it being well written. It doesn't take enough of a stance and when it tries to, it is still swinging around many directions.
> On the whole, though, I don’t think CoffeeScript adds quite enough benefit to outweigh the costs
It's not clear what those costs are. The --watch mode bug?
Or the generated JS, which is perhaps less a complaint about CoffeeScript and more a warning about versioning and standardising all components of a project? Or the author's trouble with indentation rules?
Maybe I'm just feeling cynical this morning but I don't understand why the OP came to his conclusion. I read it as him making a bunch of mistakes which led him to conclude he shouldn't use CoffeeScript.
He took on a large project on short notice with a tight deadline, browser compatibility issues, and growing requirements (client is a big company after all) using a language he's "not terribly confident with". This alone is a really Bad Idea. But I'll give him the benefit of the doubt and assume he recognizes this in hindsight and felt he could handle it at the time.
Then he decides to do it in CoffeeScript, something he has never used before. It seems he had expectations that CS was much more heavy-handed, providing library-like code structure and compensating for his lack of JS knowledge. Mistake #2.
He finds a bug and some other small annoyances. This is understandable. There are some things about CS that annoy me too. Every language has its problems. I suppose this alone could be enough to drive someone away from a language. Personally I find the trade offs worth it for increased productivity, but that's just my opinion.
His conclusion about CoffeeScript is that "it just doesn’t feel quite robust enough (as a language or as a tool) that I’m confident in it at this sort of scale, and at smaller scales it doesn’t confer enough benefit to be worth the added complexity." Again, problems with confidence.
I've posted this before, but it's relevant again here, as the author mentions the dangling comma issue. Discussion on preferred syntax for a parameter spanning multiple lines followed by another parameter: https://gist.github.com/1215863
There are so many things wrong with this approach I really don't know where to begin.
> CoffeeScript was a godsend here, simply as an explicit compile and validation step.
> Automatic local scoping (no var necessary) is a sane choice, and safe loop scoping with for x in y do (x) -> erases a whole category of errors.
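The loop scoping quoted above refers to the classic closure-capture bug; a minimal sketch in plain JS of the problem and of what the do (x) -> idiom compiles down to:

```javascript
// Classic closure-in-loop bug: every callback sees the final value
// of i, because `var` is function-scoped, not block-scoped.
var broken = [];
for (var i = 0; i < 3; i++) {
  broken.push(function () { return i; });
}
// broken.map(f => f()) → [3, 3, 3]

// What CoffeeScript's `for x in y then do (x) ->` compiles to:
// an immediately-invoked function capturing each value in turn.
var fixed = [];
for (var j = 0; j < 3; j++) {
  (function (j) {
    fixed.push(function () { return j; });
  })(j);
}
// fixed.map(f => f()) → [0, 1, 2]
```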
While not addressed to me, I'd like to chime in with some comments of my own here.
For a wonderful counterpoint, have a look at how ClojureScript implements its own collection types, complete with value identity, identity partitions and proper maps with object keys.
Particularly I'm thinking of the inescapable async-all-the-way down, which necessitates endless lambdas. In other languages it's possible to pick a point at which to invert control back to blocking-style; JS's single-thread model makes that impossible. So for instance all calls to the server API have to be callback style; if you have to make serial calls, you can't avoid nested callbacks.
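The inescapable nesting described above can be sketched with synchronous stubs standing in for real server calls (all names here are hypothetical, and real calls would of course be asynchronous):

```javascript
// Stubs for "server" calls; they invoke their callbacks immediately
// so the nesting shape is easy to see and test.
function getUser(name, cb) { cb(null, { name: name, id: 1 }); }
function getPosts(userId, cb) { cb(null, ['post-a', 'post-b']); }
function getComments(post, cb) { cb(null, [post + ':comment']); }

var result;
getUser('ada', function (err, user) {
  if (err) throw err;
  getPosts(user.id, function (err, posts) {        // depends on user
    if (err) throw err;
    getComments(posts[0], function (err, comments) { // depends on posts
      if (err) throw err;
      result = comments;                            // three levels deep
    });
  });
});
```

Each serial dependency adds one level of nesting, and with no way to block, there is no point at which to flatten it back out.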
I miss a set type, I miss solid iteration (although CoffeeScript mitigates that), and immutable datatypes would be nice (I've got used to them in Clojure). I miss a solid FP stdlib, although jQuery has map and filter, which goes a long way.
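For the missing set type, the usual pre-ES6 workaround is to abuse object keys; a minimal sketch (string keys only; StringSet is a made-up name, not a library):

```javascript
// Poor man's set using object keys. Works only for string-ish
// values, since object keys are coerced to strings.
function StringSet() { this.items = {}; }
StringSet.prototype.add = function (x) {
  this.items[x] = true;
  return this; // allow chaining
};
StringSet.prototype.has = function (x) {
  return Object.prototype.hasOwnProperty.call(this.items, x);
};
StringSet.prototype.size = function () {
  var n = 0;
  for (var k in this.items) if (this.has(k)) n++;
  return n;
};

var s = new StringSet();
s.add('a').add('b').add('a'); // duplicate 'a' is a no-op
// s.has('a') → true, s.size() → 2
```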
> So for instance all calls to the server API have to be callback style; if you have to make serial calls, you can't avoid nested callbacks.
I was bummed `await` and `defer` were not merged in. However, after thinking about it, the same result can be had using a good async library. The code is self-documenting if you choose good property names.
follow = (from, to, cb) ->
  # Four async operations with dependencies -- this would require four
  # `await`/`defer` pairs using iced. Note how error handling is handled
  # in one place rather than at each step; needing it at each step is a
  # big drawback for me with await/defer.
  async.auto
    user: (cb) -> User.find name: from, cb
    followee: (cb) -> User.find name: to, cb
    addFollowing: ['user', (cb, results) -> User.save results.user, cb]
    addFollower: ['followee', (cb, results) -> User.update results.followee, cb]
  , (err, results) ->
    cb err, results
After getting over my fear of client-side programming, I finally built a project in JS last week.
It's not that any one thing is especially broken, it's just that all added up I feel like it's not finished, and I'm not Getting Stuff Done as I do with Python.
Google Closure seems to go some of the way there, but I get the feeling that's probably just adding some syntactic sugar to make swallowing the bitterness a bit easier.
This is why I've been dismayed to see JS taking off on the server side. Maybe there's some clever engineering behind Node and maybe taking a free ride on V8 is a lot less work than building a similarly robust VM for Ruby or Python but it just seems wrong to build the next generation of apps on such a flawed language.
The explosion of the web opened the doors to new languages after a long stagnation of client-side code. Why let an accident of browser development history dictate the tools we use for the next 5-10 years?
I've not used them in production yet but it looks like a very cool way to ensure code quality. The syntax of the Contracts resembles Haskell type definitions. Contracts are very flexible -- for example, you could create and enforce a "Prime Number" type (or whatever type you wish).
Also, you don't have to include the contracts in your production code -- you can compile to JS for production and omit the contracts once you've validated the code against them.
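The "Prime Number" idea can be illustrated without contracts.coffee's actual syntax; here's a hand-rolled sketch in plain JS (guard and isPrime are hypothetical helpers, not the library's API):

```javascript
// Runtime contract sketch: reject non-prime arguments at call time.
function isPrime(n) {
  if (typeof n !== 'number' || n < 2 || n % 1 !== 0) return false;
  for (var d = 2; d * d <= n; d++) if (n % d === 0) return false;
  return true;
}

// guard(pred, fn) wraps fn so a bad argument throws immediately,
// instead of silently producing garbage somewhere downstream.
function guard(pred, fn) {
  return function (x) {
    if (!pred(x)) throw new Error('contract violation: ' + x);
    return fn(x);
  };
}

var nextAfterPrime = guard(isPrime, function (p) { return p + 1; });
// nextAfterPrime(7) → 8; nextAfterPrime(8) throws
```

Stripping the contracts for production then just means exporting the raw function instead of the guarded one.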
Actually, the more I look at this, the more I think I'll use it for my next CoffeeScript project.
Interestingly, I think that quality is entirely orthogonal to production worthiness and any implied or explicitly stated guarantees about utility and unbreakiness.
I've personally thrown my weight behind up and coming technologies in the past (Merb to be specific, back in the Merb vs Rails insurgency), and ended up supporting a technology stack that was left abandoned. In spite of that, my company continued running happily on Merb for 3 years after that (until a move to Scala).
That jashkenas warns people with a "caveat emptor" doesn't mean a tool shouldn't be used in production. It means that you should carefully evaluate how you use your tools in production, make sure you write modular testable code, be willing to dive down into the weeds, and most importantly always have a backup plan.
(my own caveat is that I don't use CoffeeScript in production. At least, not yet.)