The only interesting thing would be to see how much of a performance improvement each of the separate changes brought.
Phrases like "Most of the time was spent by the server to compile the necessary dataset of time entries, but also related data." seem to hint at:
a) missing profiling (was it database queries, actual computation, network I/O, disk I/O, ... ?)
b) architectural problems (a database schema that didn't perform, missing indexes, ...)
I'm sure Go is a nice language, but I don't see how Go could improve any of those, besides maybe offering a more manageable approach to parallel computations.
Given the number of seemingly parallel changes that happened, it's pretty much impossible to determine what actually caused the improvement (at least from the information in this blog post).
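The point about missing profiling can be made concrete. Here's a minimal sketch using Ruby's stdlib `Benchmark` to separate database time from computation time; the phase methods and data are hypothetical stand-ins, not anything from the post:

```ruby
require 'benchmark'

# Hypothetical stand-ins for the report's phases. In a real Rails app
# these would be the ActiveRecord query and the in-process aggregation.
def fetch_entries
  (1..1_000).map { |i| { duration: i } } # pretend this hits the database
end

def aggregate(rows)
  rows.sum { |r| r[:duration] }
end

rows = nil
db_time      = Benchmark.realtime { rows = fetch_entries }
compute_time = Benchmark.realtime { aggregate(rows) }

puts format('db: %.4fs, compute: %.4fs', db_time, compute_time)
```

If the database phase dominates, switching languages won't help; an index or a better data model will.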
I've got nothing against Go, but this smells fishy. 5-20 second load times on a page in Rails? That doesn't sound like an issue with the programming language, it sounds like an underpowered server, lack of proper caching, or an under-tuned database (or data model).
I switched from Ruby to Clojure because the JVM is awesome and it's a Lisp. I also use CoffeeScript, Backbone, RequireJS, and Raphael. I've been using this stack for about a month and a half; it's awesome.
Does this help? How does this help you solve your problem? I'm going to go with "It doesn't". All of these posts boil down to that form.
The good ones start with: "Over the last year we deployed [insert stack] into production..". And then tell me what sucked because not everything is awesome.
This is very cyclical. We're in the honeymoon period with some of these technologies. Two years ago you saw a lot of posts proclaiming that Node.js and MongoDB were fantastic, world-changing technologies.
Now you see a lot of posts saying "We're switching from MongoDB to Postgres/Riak because of X, Y, and Z". I suspect we'll start seeing similar posts about Node.js very soon.
No telling what will happen with Go, but just enjoy the honeymoon period for what it is and stay tuned to find out what happens later.
I'm curious about the LABjs usage. In my own testing it ends up not having any significant impact on load times; sometimes it's even slower (same for other loaders). Just concatenating and minifying files already has a huge effect, and the added complexity doesn't justify itself.
Agreed. I recently moved from LABjs to RequireJS. I regret ever going down the LABjs route because it really doesn't encourage or enable you to modularize your JS. It was complicated to get my head around RequireJS, but once I did, it really cleaned up a lot of my code, and I'm super glad I made the switch.
I don't know if Go or Toggl fans rallied and upvoted this article, but I have nothing against that.
That being said, I'd like to see HN provide some kind of fraud protection for upvotes/downvotes. I know that, for example, ad providers have algorithms for detecting whether a click is valid. Does anyone know of something like that in use on community recommendation sites like HN?
As it is now, it would be fairly easy to game HN for karma or marketing reasons. All it takes for an article to take off is less than 10 votes at the right moment.
A layman's question, as I'm not a Ruby expert, but I was wondering: did you try JRuby and find the performance lacking? Or do you already run on the JVM? My narrow notion of the Ruby world was that if it doesn't scale, you put it on the JVM and you're done. Am I living in fantasy land?
If you look at the stream of articles that have invaded HN in the last few weeks, it's usually someone who implemented a web site in Ruby or Python, hit scalability problems, and found that with Go everything is heaven on earth.
Of course, if you're already doing server-side development with more performant languages and runtimes, there is little incentive to switch.