How We Went from 30 Servers to 2: Go (iron.io)
688 points by treeder 1413 days ago | 495 comments

It's not really "Go" that makes the difference; it's how the runtimes and frameworks are used and/or made.

Frameworks on top of frameworks, all over-engineered with a poor understanding of what the system actually does, result in super slow apps that you then go to AWS to scale.

It's not the first time I've seen people replace a dozen servers that were "always maxed out" with a couple of servers that "can barely feel the load".

Yeah, throwing hardware at the issue is fine and all, but we've been way over the limit too many times in too many directions.

Go provides a clear start/APIs/framework, and the language enforces good habits. Also, it doesn't have things like global interpreter locks.

The lesson? Stop using cool tech just because there are blogs about it. UNDERSTAND the tech before using it.

I'm very sympathetic to your point about tech-on-tech-on-tech-on-tech hogwashery promulgated by open sourcers & vendors proselytizing their wonderful solutions, and the other acculturation factors that permit developers approaches other than trying & seeing & exploring-

That said, you are wrong.

I'm not a Go user and I don't care for it, but Go is fundamentally different and better than most runtimes. Go has its own green threading, which allows it to start new tasks using extremely small amounts of memory, and it can switch between tasks without the normal cost of context switching full-on threads.

That is a huge difference. Node has some elements in play here, with its event loop and callbacks, but really the only thing in the field that compares to Go that's in use is Erlang, and there's no frameworking on top of Erlang that wouldn't cause most PHP, or nine tenths of Python, developers to curl into a weeping ball at the Eldritch horrors they'd been exposed to: even ASM code is not nearly so frightening, being clearly a low-level, somewhat harmless thing.

Being able to kick off small processes that can sit around for a really long time awaiting data- that's a model of computation we haven't done in decades, and never with this mass popularity. There's a TLA for this kind of processing- Communicating Sequential Processes- and Go is it. It's entirely different from the procedural model, from the Python and Perl and PHP scripting, where concurrency is a carefully-waded-into thing. Go is a concurrent-first runtime, about ongoing, enduring, concurrent processes, and that's huge, HUGE I say, huge.

> I'm very sympathetic to your point about tech-on-tech-on-tech-on-tech hogwashery promulgated by open sourcers & vendors proselytizing their wonderful solutions, and the other acculturation factors that permit developers approaches other than trying & seeing & exploring-

Are you studying for the SATs, or just trying to make your argument sound more compelling through the use of unnecessarily complex words?

Yet different, and sometimes more complex, words can have entirely different connotations that much more effectively convey the message the author is trying to get across. Sometimes, simpler words are just that—too simple to properly describe the concept the author would like to capture, and the emotions he would like to encode within it.

That's not to say complicated is always better. Like in programming, the best tools are those suited to the job.

In the end, you consider your audience. Can I add more to this sentence to better capture the feeling I'm trying to convey, without detracting from its ability to be understood? The parent most likely decided that on a forum like HN, the answer was yes.

It's how I feel, I'm so very very sorry it's tripped your acceptable tolerance levels. I'll bite my lip and not add "you prat" to the end. Thanks for your considered moderation, and I see your point.

My only point is that being obtuse is not the best way to get your point across. I assume your goal in expressing your opinion is to convince others, and to do so it helps to be clear, concise, and direct.

That's a very narrow assumption as to why someone would express their opinion. The OP might be sharing their experience (in what was a humorous way) purely to be sociable in this discussion...

Not everyone is out to convince others.

Righto, he could be here to make himself feel good by using words he doesn't usually get to.

Dare I call into question what you are here for? How do you feel your contribution- attempting to heap more shame- plays for the whole audience? Is what you are doing of value?

On a forum like HN, where the audience spends a great deal of time reading and engaging in self-improvement, choosing words that are not simplistic should be fine.

I'm going to go out on a limb and guess that even in a community where "acculturation factors" is something people actually say (crit lit? sociology?) it's something that causes people to play that community's equivalent of buzzword bingo.

Ia, Ia, Erlang Ftagn!

I agree though, I just wish that Go had developed from Erlang as a runtime.

The Erlang virtual machine and systems are beautiful things - I used them for some wire-level work on a project and bitfields are awesome, and it provided a clean interface to a more 'normal' language to display to the user, fail-fast fault tolerance on a public safety project, neat stuff.

It's just erlang-the-programming-language that has all the Prolog warts that scares people, I think.

Per your queueing, I'll toss in the Elixir chip- http://elixir-lang.org/ - it's a rather rote, boring, routine language that we all can be semi-familiar with, with some aberrations carried over - pattern-matched dispatch is still there, for example, making people wonder wtc. The web page highlights its advancements, its "features," which is mildly amusing to me in that the biggest draw is that it's an Erlang that feels familiar to us Algol-derived unwashed/don't-know-anything-but hordes.

Kind of as a foil to that normality, it's similarly remarkable to me that the Prolog asserted-messages/facts & reactive processing cycle of Erlang isn't itself better captured and shown off. It's kind of a middle-stage introductory tutorial, after you've made it through guarded dispatches, yet by itself I think it'd make a great "start with the hard stuff" intro tutorial to Erlang, and I haven't really seen it presented independently, up front.

> It's just erlang-the-programming-language that has all the Prolog warts that scares people, I think.

I must be strange but I actually like Erlang's syntax. Its pattern matching is really hard to beat. Also with Erlang, it is not just about the language unless one just wants to learn academic FP or Actor model concepts -- it is about the framework. Debugging, tracing, distribution all those come as part of the package.

Think of Erlang as a tank. It might not be as pretty and slick as a new Mercedes, but when you go into battle you need to learn to operate a tank; a shiny Mercedes will only take you so far.

Another one here... I like Erlang's syntax as well. Of course, we could have some things better (like record syntax), but it's really not bad at all.

I think most people have curly braces fetish (we call them "tits" at times where I work). If it doesn't have "tits", no thanks, it's ugly and bad.

There has been talk about frames and maps for a while. I noticed there is an upcoming Erlang Factory talk from Ericsson about this feature. I suspect those might end up used in place of records in some cases.

Correct me if I'm wrong, but the Haskell concurrency model is a very efficient one which is also based on lightweight threads. Moreover, if CSP is all you care about, you might want to have a look at CHP.

Go is... fundamentally different than most runtimes? Probably. Better? Maybe. Better than the comparable ones? I think it's losing that war. You don't even need a special runtime, it can help, but I think the amount it helps is overstated in Go's case.

Concurrency isn't that hard in practice. If you're willing to forgo automatic multiplexing Perl will give you Go's concurrency model in a library written ten years ago, and it works as naturally as it does in Go. Python seems to have similarly good support but it probably doesn't look much like Go. I'm not sure what Lua's doing these days, but in the past I've found concurrency dead simple. If you happen to be writing C, time-slicing the work isn't even that hard. If you know enough about Go's concurrency to tell whether a single goroutine will cause its scheduler to deadlock all by itself, you can easily write concurrent programs in all these languages.

I know this will be a controversial claim, but all of these solutions are easier to reason about than Go. Usually not by much, and it has primarily to do with the nature of its multi-threading support, but that's just the concurrency model being weakened by the support for convenient parallelism.

There is a case where Go is best: when you like writing Go more than the others. You can guarantee a little less about concurrency than most, but at least you can usually guarantee more about types than most. If you like writing Go you will either luckily avoid the problems or develop habits that prevent you from producing concurrency bugs, possibly even without realizing it.

Go's standard lib is built around its communications and concurrency primitives. Sure, one can extend other languages with new capabilities to take them into Go's realm, but that implies a movement away from the stock standard core, whereas Go users, at their core, share a common massively concurrent message-passing beast of a runtime and a concurrency-friendly standard library that they are all unified behind. A community will never mobilize behind a concurrency library the way it will when the platform does it up front. Go has a large community of practitioners all writing massively concurrent code. That is huge and has never been done before.

But let's go look at patterning: look at Node: everything is hinged around re-using Node's core patterns, callbacks and, every now and then, slower events. There's promises work around, but it's not used heavily: indeed it'd be hard to, with inter-module work, because A) it's not the standard lib B) nor are promises standardized. People seem to prefer something I loathe- the horrible Async library- which gives them a bunch of composition tools to make worse the awful, awful crime that is callbacks: because callbacks are what Node is, people pick up tools to do it more, to a further extreme, rather than going to the foundation & reshaping the landscape. (PS: Callbacks are A) awful B) actually & entirely the right decision.)

I was reasonably seriously into Perl, but I don't know what Perl CSP-style library you seem to be alluding to that A) you seem to indicate people used & B) got on par with Go 10 years ago. But I'm interested to hear of it! <3 me some CSP.

> but that implies a movement away from stock standard core [...] never [etc.]

I wouldn't have said anything if I didn't disagree very strongly with these sentiments (see below about growing your own CSP/etc, and later about Coro in particular.) But first, about Node: let's not look at Node (unless you really want to, but as long as you understand it has nothing to do with my comment.) Problems with JS are usually founded on people complaining they don't like callbacks (translation: "Go is better because I like it better"—in which case please dispense with the cognitive dissonance and just be honest and say as much♯) and are inapplicable to most other languages regardless.

♯Edit: That's not necessarily directed specifically at you, but people often vilify every aspect of things they don't like. I'm not convinced that's what you're trying to say about JS.

I don't view concurrency as this mysterious thing; if you know CSP you can write it in any language, usually fairly naturally. In languages like JavaScript, Perl, Tcl, you can even make it look like it was already part of the language, tucking away calls to the scheduler so the user doesn't need to write them explicitly (heck, you can do that part in C... but... don't), like in Go (save for pure-computation loops, also just like in Go.) All Go really does over the general case is add implicit calls to the scheduler (not nicely possible in some languages) and treat the scheduler as a second-class primitive feature.

To go on about a unified implementation reaches my ears much like an argument that you'd better use `for` loops and never `map`s (guess what you can't do in Go.)

I wasn't going to bring this up, but since you're familiar with Perl you will be less likely to assume limitations. The lib is Coro. People do use it, but it is controversial because of #dramaintheperlcommunity (#whatelseisnews.) I don't know how popular it is overall, but if popularity is an issue for you let me know because it's a stupid argument to get into and I don't want to have it without realizing it. People rarely pay attention to concurrency anyway, but it's been around forever.

It is not a parallel library, for which I am thankful. (It's not incompatible with parallelism, and importantly: it doesn't make Go's mistakes in that area.) It amounts to a scheduler, facilities for spawning coroutines, wrappers to make calling the scheduler implicit (eg. at IO points), do async IO and every single thing you would need to make Perl automatically do the things Go automatically does. It will look exactly like Perl with Go's concurrency model, in broad strokes you do the same things, in the fine details you use Perl modules and methods and a new keyword or two.

"if you know CSP you can write it in any language"

If you don't know CSP you still have no choice but to write it in Go: that's the core unity I've been trying to describe, something the entire standard library is built around, like callbacks in Node. I don't see that you've added anything to your case for other languages in this recent reply: I think this excerpt captures well how your line of thought remains focused on what is possible, while ignoring my core argument that it's all about the prime interface of the Turing machine before one, not what they set up for themselves on the machine.

"In languages like JavaScript, Perl, Tcl, you can even make it look like it was already part of the language,"

And newcomers may be able to imitate/cargo-cult your styling blindly without straying too far off the gods-sworn path if they are really lucky!

But rather than risk it, making a language CSP first would, IMO, be a clearer way to keep unity.

I really don't care what can be done with a Turing-complete machine: I care what use of that machine looks like to the important audience: its practitioners, all of them, practiced and experienced, and those with other backgrounds seeking only practical results.

Go has created an entire language all of whose practitioners trade in CSP. I'm happy to de-rate from "a language" to "a massively successful, widely known, active (let's say 30k+ active developers), shared and intercompatible practice of doing CSP" (libraries or what not). Until I hear that challenged, my argument that Go is fundamentally different for having a runtime and standard lib based around CSP stands untouched, unless you have something specific there to contend.

    And newcomers may be able to imitate/cargo-cult your styling blindly without straying too far off the gods-sworn path if they are really lucky!

I guess they should count themselves lucky if they know what an API is at all. C'mon this is a 101-level topic. You can't expect everyone to suddenly forget how to use an API (with or without native syntax) just because it's for concurrency... or do you?

As for Coro, it doesn't matter how many people use it; you can write the exact same things you would write in Go and it will work in the exact same way. It's not cargo-culting, it's writing CSP - which you still have to actually write in Go if that's what you want, and it's perfectly possible not to.

The runtime distinction isn't very important either, in fact when I hear "runtime" I ask "why not library?" Some things are impossible in libraries but CSP is not among them, so I'm still asking "why not a library?" The scheduler, probably the most interesting part, can always be implemented as a library. Can you say anything particular to Go's runtime that makes it so special? I can't think of a single thing. The scheduler itself is about as boring as it gets. Not that there's anything wrong with that—it's not a difficult subject after all.

As far as the popularity contest goes, we might as well be talking about an iteration protocol. There are many different ones but you'd rarely notice it. The most commonly used ones are relatively less-than-great. You can always use a for-loop, like you can always write a coroutine or time slicer in the worst-case scenario. It works, and very well.

The other thing is CSP. Up to now, I wasn't worried about the concurrency model in particular, but now you're saying we can't use actors, agents, etc.—it must be CSP. That's BS. CSP is nice when you have to build it yourself, but it's not all there is and it's no easier to reason about than other good options. In fact I would mostly only choose it when I had to build it myself or support for something else wasn't available, which is understandably common. Really though, this sounds like a deceitful way to rule out Erlang and friends.

    And newcomers may be able to imitate/cargo-cult your styling blindly without
    straying too far off the gods-sworn path if they are really lucky!

  I guess they should count themselves lucky if they know what an API is at all.
  C'mon this is a 101-level topic. You can't expect everyone to suddenly forget how
  to use an API (with or without native syntax) just because it's for concurrency...
  or do you?
Taking Perl as an example, they probably know how Perl does file I/O. Yet if we're switching to a concurrent system, the entire practice they have of doing file I/O needs to be re-grokked. Your proposal, that learning an API is all it takes to master a form of concurrency, is a fucking joke, sir. Concurrency is greater than proceduralism: it shapes systems of code, whereas writing code and hitting APIs is a lower-order operation: shaping code.

  As for Coro, it doesn't matter how many people use it, you can write the exact same
  things you would write in Go and it will work in the exact same way. It's not
  cargo-culting, it's writing CSP, which you still have to actually write CSP in Go
  if that's what you want, and it's perfectly possible not to.
And how many CPAN modules can I download that I won't have to hand-wrap in Coro? How many modules will expose Coro-based concurrency patterns?

I don't know about you, but most of my work is not writing code: it's using code. You miss this essential fact again and again.

What is possible is not at all my interest here. I am interested in the practice of writing code. My sole contention against you is that the availability of libraries is irrelevant, and my basis for this point is that I agree with:

  The runtime distinction isn't very important either,  [[snip]]
Thank you for repeating what I said last time in a dumbed-down way. I continue to agree we have Turing-complete runtimes: we can do anything. Yes, the runtime distinction is largely unimportant (although performance, natch). Adoption is important. Availability of code is. Having other practitioners is important.

  in fact when I hear "runtime" I ask "why not library?"
Indeed, you've done it three times now!

And every time you've gotten a response that agrees with your conjecture that it might be possible!

So why are you still making this same uncontested assertion!

So maybe you need to find something else to do, other than re-describe how possible libraries make it to be Go-like or otherwise concurrent, because- with some niggling performance issues- there's been naught but agreement, and you've avoided clashing every time with my fundamental point: Go has a practicing community with many thousands of libraries written for it, the libraries you are talking about don't, using your libraries will create a schism between you and the other practitioners of the parent language and the modules written for it, and Go users have no impedance when working with one another w/r/t their concurrent systems.

  Some things are impossible in libraries but CSP is not among them, so I'm still
  asking "why not a library?" 
FFS I ended my last post with a Turing equivalency argument that could not have been more explicit about agreeing with you. It's not about what is possible, unless you are a god and live forever and don't mind doing it all yourself. If that's in fact the case, fuck you, I'll see your ass at the fucking fields of Ragnarok. Now get off my lawn or start saying something you haven't that might be useful.

  The scheduler, probably the most interesting part, can always be implemented as a
Yes, Turing completeness. Good and acceptably performant? Meh, maybe; often, actually, sure. I really couldn't care less: I think you are so very far off in the weeds trying to discuss this, as what can be done is the most irrelevant topic when dealing with Turing machines, aka programming languages.

  Can you say anything particular to Go's runtime that makes it so special? I can't
  think of a single thing. The scheduler itself is about as boring as it gets. Not
  that there's anything wrong with that—it's not a difficult subject after all.
You are a masterful troll, sir. The runtime itself is not novel. The important thing I highlighted from the beginning about Go is that its standard library is written for a distinct communications-oriented set of concurrency primitives, and thus all uses of the language flow from this reference library's standard, and that creates an interesting practicing community of people all using the same communications-oriented concurrency constructs. Thanks for making me spell out the value again; I hope I'm clearer this time.

  As far as the popularity contest goes, we might as well be talking about an iteration protocol. There are many different ones but you'd rarely notice it. The most commonly used ones are relatively less-than-great. You can always use a for-loop like you can always write a coroutine or time slicer in the worst case scenario. It works, and very well.

  The other thing is CSP. Up to now, I wasn't worried about the concurrency model in particular, but now you're saying we can't use actors, agents, etc.—it must be CSP. That's BS. CSP is nice when you have to build it yourself, but it's not all there is and it's no easier to reason about than other good options. In fact I would mostly only choose it when I had to build it myself or support for something else wasn't available, which is understandably common. Really though, this sounds like a deceitful way to rule out Erlang and friends.

Please cite where I went hardline and demanded CSP as the only acceptable communications/concurrency primitive. I'll amend that such that your argument here is thoroughly unnecessary- I certainly don't see myself as opposing what you are saying, but you didn't actually tell me what the clash was that raised this contention- all you said was "The other thing is CSP. Up to now,"- and that doesn't actually inform me about what changed your mind & made you think I was hardlining for any specific thing; so, I'm not sure how to clarify and reinforce my agreement with you, and my willingness to embrace any practitioner's use of a good, solid, helpful concurrency construct, be it of the communications variety (which I do myself enjoy, as concurrency in my artisan's mind is modeled as non-locality) or otherwise.

  Taking Perl as an example, they probably know how Perl does file I/O. Yet if we're switching to a concurrent system, the entire practice they have of doing file I/O needs to be re-grokked.

I see you didn't look into Coro or read much of what I said. You do IO in the regular way and the scheduler is automatically invoked.

(If you want to know what needs wrapping, just use your head. If it does IO, make sure you have that kind of IO wrapper. If you screw this up your program will still work.)

  You are a masterful troll, sir. The runtime itself is not novel.

You yourself stressed the importance of the fundamental difference in the Go runtime, two or three times I believe. It seemed to have something to do with your argument against libraries.

The one argument you claim to have left (userbase) is the one you've been ignoring my comments about since the beginning. I'm not interested in arguing it anymore, but if you're interested in it, feel free to re-read my comments on that matter.

From the Coro intro docs-

  Using only ready, cede and schedule to synchronise threads is difficult,

  Coro supports a number of primitives to help synchronising threads in easier ways.

Coro adds threading to Perl but falls short of being a concurrency practice- it's tooling for doing concurrency.

Go establishes two fundamental primitives, goroutines and channels, and gets rid of worn-out notions of threads.

This isn't a novel runtime, because it's been done- in Newsqueak, just to name one- but the entire platform, built from the ground up and mobilized around not giving you tools to find your own concurrency solutions but having a deliberated answer to all concurrency problems, a masthead banner saying "THIS IS THE ANSWER", is the differentiation and novelty I'm ascribing to Go.

That approach, over our discussion, I've come to realize is more characteristic of the standard library of Go than it is the runtime. That said, you're holding my feet awfully cruelly to the fire and I take issue with you using this newly gained outlook in such a mean-spirited and cruel fashion to stab at me.

Talking with you is incredibly frustrating, as you toss away my points with a single dismissive wave and do not cite material things; you pick tiny little fragments to quip at as springboards to launch into whatever you wanted to talk about anyway. As I said before, masterful trolling; it's very hard to hold a conversation with you. I'd beg of you to yield, because otherwise we might indeed find ourselves on the field of Ragnarok, at that end that never comes after all infinity.

You also seem very ill-informed about Coro- which you champion-

  Fortunately, the IO::AIO module on CPAN allows you to move these I/O calls into the background, letting you do useful work in the foreground

So, no, _You do IO in the regular way and the scheduler is automatically invoked_ is a bullshit claim; this has all the weaknesses of Node.js's event loop, and the need to write small things that yield, or here in Coro world, cede, regularly. And if you use a CPAN module that does block, you are toast, making this fundamentally incompatible with everything else ever done in Perl in a very serious way.

"A simple module that implemented a specific form of first-class continuations called Coroutines. These basically allow you to capture the current point of execution and jump to another point" is in no way a parallel to the runtime of Go, because the stdlib of Perl is not meant to operate in a non-blocking way, and Go's is. That, I think, is safe to say.

  You also seem very ill-informed about Coro

On the contrary, I've used it. And Go. I know what it's like in practice, which you do not, and this makes your claims sound very ignorant to me.

  So, no, _You do IO in the regular way and the scheduler is automatically invoked_ is a bullshit claim; this has all the weaknesses of Node.js's event loop, and the need to write small things that yield, or here in Coro world, cede, regularly.

This is proof of your ignorance. It isn't like working with an event loop (but it is compatible with them, ultimately giving you a better option for parallelism than Go.) I was going to mention how rarely `cede` is necessary, but I had honestly forgotten what it was called. You need it exactly as often as you need to write `runtime.Gosched()` in Go. Seriously, you can write the exact same program as far as where and when calls to concurrency facilities occur, and you will get the exact same concurrency profile in your program.

I'm not sure what you think the wrappers are for, but they introduce non-blocking behaviour into the library calls you make by, as I've said a few times now, invoking the scheduler—that's how this all works after all. There's no compatibility problem either, that's BS you came up with like just about everything else you've said about it.

Oh, you've used it! Well, I take back all the other points I made! And the documentation from Coro that I cited explaining how it works, how it requires different I/O patterns, and that it's just a GOTO that doesn't do anything about the synchronous long-running-code problems one might have when trying to interface with the rest of Perl! Your message is received loud and clear; I get where you are coming from now!

The fact is everything you said about it is wrong, and you drew the wrong conclusions from the docs. I would forgive a person for making that mistake from a lack of experience, but if you want to play it the other way I could just accuse you of lying.

The best part is when you say it's just a GOTO that doesn't do anything about synchronous long running code problems—how do you think Go's concurrency actually works under the hood? It works the same way—and has the same problem you describe.

_it's just a GOTO that doesn't do anything about synchronous long running code problems_

_how do you think Go's concurrency actually works under the hood? It works the same way—and has the same problem you describe._

I think Go, in most uses, makes use of a thread pool to decouple its green threads from the OS threads, such that a single busy green thread does not seize up the rest of the runtime, and I think Go manages the assignment of green threads to OS threads in a way that is hands-off for the programmer and on by default.

I also think one gets a runtime in Go where I/O is multiplexed on an event loop, such that most common syscalls a Perl program might make don't hold up execution, and are instead handled in a non-blocking way by default.

Please let me know what else I can clarify so that we can find agreement.

That's an acceptable approximation for now at least, although Go doesn't know which threads will hold up the runtime.

I'm not sure what's left to clarify on your part. I think you misunderstand what working with Coro is like. In practice the Perl code that holds up execution is the same code that holds up execution in Go, things like:

    while(1) { $i++ }
have the same problem in both languages. Throw a `cede` (or, in Go, a `runtime.Gosched()`) on it and the problem goes away. Might want to unroll it a bit first.

I'm not sure what there is to agree on. I could perhaps eventually convince you that working in Coro is very much like working in Go, or you could do that on your own time if you really want. But you seem to have an issue with a concurrency implementation that isn't widely popular, and that sounds like a matter of taste to me. To my ears it sounds like an incredibly stupid thing to get hung up on—CSP is so easy—but if that's your bag I'll only ask you not to use it to convince me of anything.

It's like this-

Concurrency is the root of all code construction, at a more meta level than writing code. Layering it in with libraries is feasible, but you've already imposed some structure from the code, and then you're adding an underlying structure on top of that structure, and that's... well... that's something Go avoided doing.

You probably stopped halfway through my comment, because I did write that Go has advantages (over the likes of Ruby & friends, anyway).

But it's the wrong way to think about it, which is why it's in the second half of the comment. Otherwise, people will just use Go because they're told it's the cool kid on the block. WRONG way of thinking.

That is all :)

"Also, it doesnt have things like global interpreter locks." is basically the only thing you said that is anything like what I'm asserting. And it demonstrates knowledge of a thing I contrasted Go against, without demonstrating knowledge of what Go is and why it is different. Further, let me add that I don't think "clear start/APIs/framework" captures the essence at all; there is something deeper than framing and end users here. There is something deep in the bowels that is using the machine unlike how everything else does, and that is sorcery that is deeply important.

Alright - note that I generally attempt to make one point per comment, hence the succinct part on Go. Your comment goes deeper, and that is good. I don't believe we have diverging opinions, however, hence my reply, as you did write that I was "wrong". But being wrong is difficult if we (mostly) share the same opinion... in my opinion ;-)

Then again, I do believe a clean start helps, and it's very possible that I didn't properly "capture the essence", or at least not well enough.

They went from a language with a factor of forty cost compared to C++ to one with a factor of four cost. They saw a factor of ten improvement in CPU utilization. Seems like all Go really brought to the table here was speed. I don't see any evidence that its unusual features were a factor, besides its base-level increased efficiency.

Nice rant, and I mostly agree with it, but there are a few slow things about the ways that some languages are usually implemented that don't really have much to do with over-engineering. Let's take this line of Python code, for example, and look at how the CPython interpreter runs it:

    return x + 42
This turns into the following bytecode instructions:

    LOAD_FAST       0 (x)
    LOAD_CONST      1 (42)
    BINARY_ADD
    RETURN_VALUE
First we get the x variable from the local namespace. This is an array of PyObject pointers; the LOAD_FAST operation simply does an array lookup and puts the result on the stack, incrementing the reference count. Pretty fast. Next is LOAD_CONST, which is even faster; it takes an already-allocated PyObject and puts a pointer to it on the stack, incrementing the reference count. BINARY_ADD removes two numbers from the stack, dereferences the pointers to get the integer values, allocates a new PyObject with the resulting integer inside, bumps up its reference count, decrements the reference count of the two operands, and pushes the result on the stack. Finally, RETURN_VALUE jumps back to the caller of the function.

In Go, the corresponding code would be compiled to two, maybe three machine code instructions. There are good reasons why Python does it this way, but it does suffer some inherent slowdown.

A more interesting comparison would be the PyPy-generated native code for the expression versus the compiled code from Go. Comparing an interpreter with a compiler is generally uninteresting; the compiler almost always wins.

If you look at the code that comes out of various modern JITs you often will find really interesting differences in the native code they produce, even for simple constructs like arithmetic. Type checks, barriers, bailouts, etc - in addition to more mundane differences, like one using SSE to do floating point and the other using x87, and one JIT having a better register allocator.

> dereferences the pointers to get the integer values

Doesn't it find a method of x that implements addition?

Nope! There's a branch in the interpreter that checks if both operands are Python's built-in int objects. If they are, and the result fits in a C long without overflow, the interpreter adds the numbers directly. This is by far the most common case.
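Sketched in Go rather than the actual C from ceval.c (all names here are invented for illustration), the shape of that check is:

```go
package main

import "fmt"

// PyObject stands in for CPython's boxed values; only the shapes
// of the types matter for this sketch.
type PyObject interface{}

type PyInt int
type PyString string

// slowAdd stands in for the generic protocol: type lookup,
// __add__ dispatch, and so on.
func slowAdd(a, b PyObject) PyObject {
	if x, ok := a.(PyString); ok {
		if y, ok := b.(PyString); ok {
			return x + y
		}
	}
	panic("unsupported operand types")
}

// binaryAdd mirrors the check the interpreter performs on every +.
func binaryAdd(a, b PyObject) PyObject {
	// Fast path: both operands plain ints? Add directly,
	// no method dispatch.
	if x, ok := a.(PyInt); ok {
		if y, ok := b.(PyInt); ok {
			return x + y
		}
	}
	return slowAdd(a, b) // slow path: generic dispatch
}

func main() {
	fmt.Println(binaryAdd(PyInt(40), PyInt(2)))          // 42 (fast path)
	fmt.Println(binaryAdd(PyString("a"), PyString("b"))) // ab (slow path)
}
```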

How can the interpreter know, at bytecode compilation time, that x is bound to an int? Surely it generates two code paths, one for the "common case" and another for when x is a general object with an __add__ method?

The test is in the interpreter, not the compiler to byte-code. Every time the + is evaluated, the interpreter checks for the common case before resorting to finding the first object's method.

Okay, I think I get it now. The check you refer to is done inside the BINARY_ADD implementation. That's why the bytecode is the same for both cases. Am I right?

Yes, as you say, it's not known at compile-time and is checked at interpretation time, each time. http://hg.python.org/cpython/file/b49971a1e70d/Python/ceval....

Thanks. And wow, the CPython source code seems to be very much readable. I've been meaning to dive further into Python internals for a while, maybe this is the time to do so.

Interesting, thanks.

> It's not really "go" that makes the difference. it's how the runtimes and frameworks are used and/or made.


For the type of product they sell, if the processing time isn't pretty much all going to the actual running of the customer-supplied job code and network latency, they're doing something very wrong. The language choice shouldn't be much more than noise in a setup like that. The framework choice, on the other hand... Rails is hardly well optimised for a high traffic API that can't readily be cached.

I've done queuing and job processing in Ruby. Spending 90% of the time in kernel space handling network traffic or waiting is not hard to achieve for a pure queuing server. If you add delegating jobs to external code to do the actual work, you spend even less time in userspace in the code actually processing the messages.

If language choice nets you more than 10%-20% in a setup like that, something else has changed beyond language.

And that's fine. But a post focusing on the language change then makes it seem like they didn't understand what caused the performance problem in the first place.

Maybe I'm just a new and terrible programmer, but I feel like if you wait to fully understand whatever tech is out there in terms of what to use, you'll never get started.

Let's be new and terrible programmers together, because I feel like the only way to fully understand whatever tech is out there is to actually use it.

In reality you have to follow the buzz in the tech to understand new technology. If something sounds like you could use it, take a few days out to toy with it. If you still want to use it, try it out for a small project and the rest will come automatically if it is the right choice.

> It's not really "go" that makes the difference

But from what I've seen and read about Go, it does seem to have things in it that do make a difference.

For example, from day one Go was designed with concurrency in mind, and that's a big plus.

To see how this can help, this Rob Pike video does a good job of showing off Go's version of concurrency:


Now, you can implement concurrency lots of different ways using many different languages. But I've used a lot of languages and I've never seen concurrency done as easily as shown in that video.

> But I've used a lot of languages and I've never seen concurrency done as easily as shown in that video.

So I take it you never used Erlang.

Erlang is big and complicated compared to Go. It's not easy, and besides, it is functional (not easy), which already makes it unusable by 90% of people.

> Erlang is big and complicated compared to Go.

No. OTP is big (arguably), Erlang isn't even remotely. It's also completely irrelevant to OP's remark.

> It's not easy, and besides it is functional (not easy)

That doesn't make sense. What's hard in Erlang? That you don't have a for loop?

OTP is big and complicated... Erlang really isn't.

Go's definitely picking up momentum. I know that Mozilla is shifting much of its services infrastructure to Go.

We're porting over our Arabic sentiment engine, currently in Python/Cython to Go. If you're dealing with simple data structures, going from Python to Go is almost a line-for-line port, but the performance benefits are, of course, massive. Our benchmarks show a 40x speed improvement so far.

Lastly, for anyone thinking about taking the plunge, use Go tip (from their source code repo) - don't use their "stable" releases. They fix bugs so fast you'll always want to be current.

Do you have a citation for Mozilla using Go? I would be a bit surprised given their (Mozilla's) development of Rust.

Rust and Go are not similar or competing. They are both new, but aside from that the differences are huge. So if a project uses one, that doesn't mean the other was a possibility for that project too.

They're not similar at all, but both were designed to replace C++. That sort of means that they're competing for the use-case of "things I would write in C++ if it wasn't a god-awful nightmare to do so".

From there, though, you're right that they take very, very different approaches.

> They're not similar at all, but both were designed to replace C++.

In practice, the main use for Go appears to be like this story - an alternative to RoR, Node, Python/Django, etc. - and not a C++ alternative. Go and Rust may both start from the idea of doing C++ or something similar "right", but end up in very different places.

>They're not similar at all, but both were designed to replace C++.

Well, Rust WAS designed to replace C++, both in the intention behind its design and the way it was done.

Go's designers had this vague intention of "replacing C++", but the way they designed the language, they only really replace Java or some scripting language. Which they admit (they get mostly Python etc. converts rather than C/C++ converts).

The Rust team agrees: https://github.com/mozilla/rust/wiki/Doc-language-FAQ (control-f for 'this Google language')

Indeed they are not similar, but wildly different tools are often used in similar or competing use cases (e.g. the blog post's company using Ruby and converting to Go). I had asked because, if you were to reverse the usage, do you think Google is likely using much Rust, since it is developing Go?

Just to clarify, I see a bright future for both languages.

It would be interesting to see Rust used in Android in some of the parts that are now C or C++.

The way I would put it is that Go was designed to incorporate some of the simplicity and speed of C as well as the ease of use of Python, while Rust was designed to combine the speed and flexibility of C++ with the safety of Haskell.

... and the concurrency of Erlang. (and OCaml might be more apt than Haskell).

> Rust and Go are not similar or competing.

Rust is more "kitchen sink" (done as elegantly and cleanly as possible) and Go is more minimalist.

Being "kitchen sink" isn't a design goal of Rust. Rather, Rust is a lower-level, safer language. Go is higher-level.

Rust strives to be as minimalist as possible without sacrificing the goals of low-level control over memory and C++ performance (optional GC), memory safety, race-free concurrency, and type safety (no null pointers).

I don't think he meant kitchen sink, more that there are fundamental aspects from many parts of languages that are bound together well, which makes

> Rust strives to be as minimalist as possible without sacrificing the goals of low-level control over memory and C++ performance (optional GC), memory safety, race-free concurrency, and type safety (no null pointers).

somewhat funny.

Why funny? (coming from someone who's thinking about giving Rust a try)

He said "rust doesn't have the kitchen sink", and then listed the kitchen sink. Rust is awesome, you should use it.

Alright, I'll download the compiler/etc right now :D

Go is from 2009. Ruby is from 1995. Although it took about 10 years for Ruby to really start to be popular in the English speaking world.

Rust, not Ruby.

Sure, check out their github pages (here's a few):

* https://github.com/mozilla-services/heka-mozsvc-plugins

* https://github.com/mozilla-services/heka

My friend works for Mozilla - that's how I knew.

Thanks waterside81. Can anyone explain the difference between Mozilla and Mozilla Services? A cursory glance suggests some of the GitHub repositories overlap.

Mozilla Services is the team within Mozilla that builds/runs much of the backend infrastructure, e.g. the firefox sync servers, marketplace servers etc. The existence of separate "mozilla" and "mozilla-services" github projects is largely a historical accident, since different teams started moving to github organically at different times.

(Source: I work for Mozilla, on the Services team)

Source: I work for Mozilla-Services

Just curious, what were the tradeoffs for profiling the Python code and rewriting slow parts in C vs total rewrite in Go? For high level languages, the usual argument has been to rewrite just the slow parts in C or some other low level language.

And that's what we did during our first pass: identify the slow parts and port them to C. This gave us about 30% better performance. Without getting too much into the nitty gritty of our specific case, this wasn't enough.

We needed some massive speed improvements, I'm talking on the order of 100X faster. The nature of our algorithms was such that they could be done in parallel (i.e. map/reduce), an ideal candidate for Go's goroutines. We actually tried to make it parallelizable at first in Python, using gevent (and even just multiprocessing), and the results were not great.

One other aspect that really guided us towards Go was memory usage. Python was just sucking up so much memory whereas our Go implementation thus far is so much thinner.
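For anyone curious what that map/reduce shape looks like in Go, here's a minimal hypothetical skeleton (not their actual code; `score` is a stand-in for whatever per-document work the engine does):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// score is a placeholder for the real per-document computation.
func score(doc string) int { return len(doc) }

// mapReduce fans work out to one goroutine per CPU (map), then
// combines the per-worker partial results (reduce).
func mapReduce(docs []string) int {
	workers := runtime.NumCPU()
	jobs := make(chan string)
	partial := make(chan int)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sum := 0
			for d := range jobs {
				sum += score(d) // map step
			}
			partial <- sum
		}()
	}
	go func() {
		for _, d := range docs {
			jobs <- d
		}
		close(jobs)
	}()
	go func() { wg.Wait(); close(partial) }()
	total := 0
	for p := range partial {
		total += p // reduce step
	}
	return total
}

func main() {
	fmt.Println(mapReduce([]string{"ab", "cde", "f"})) // 6
}
```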

> One other aspect that really guided us towards Go was memory usage. Python was just sucking up so much memory whereas our Go implementation thus far is so much thinner.

It's funny that you mention that, because in computationally intensive work memory management is an issue with Go. E.g. I wrote a maximum entropy parameter estimator in Go, which was terribly slow until I circumvented the garbage collector by preallocating a huge block of memory and doing my own management. In C malloc() and free()-ing had practically no overhead. After putting the Go garbage collector out of the game, the Go version was approximately within 2x of the C version.
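The workaround amounts to something like this simplified arena (a sketch of the general technique, not the actual estimator code):

```go
package main

import "fmt"

type vec [128]float64

// arena hands out slots from one big up-front allocation, so the
// hot loop creates no garbage for the collector to trace.
type arena struct {
	pool []vec
	next int
}

func newArena(n int) *arena { return &arena{pool: make([]vec, n)} }

// get returns the next free slot; no per-call allocation happens.
func (a *arena) get() *vec {
	v := &a.pool[a.next]
	a.next++
	return v
}

// reset reuses the whole pool instead of freeing anything.
func (a *arena) reset() { a.next = 0 }

func main() {
	a := newArena(1024)
	for iter := 0; iter < 3; iter++ {
		v := a.get()
		v[0] = float64(iter) // work happens in preallocated memory
		a.reset()            // pool reused on the next iteration
	}
	fmt.Println(a.next) // 0
}
```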

I am interested how Go gave you one or two orders of magnitude speedup, while rewriting hot spots in C didn't...

> I am interested how Go gave you one or two orders of magnitude speedup, while rewriting hot spots in C didn't...


One can easily do that in C with OpenMP.

If you can get away with only parallelizing the C code, that'll work, but if you're replacing individual Python function calls, it won't so much. You could use a lot of Python processes, but that might be awkward depending on how much they need to coordinate, and might lead to too much memory usage.

One thing you don't get in the C/Python combo is total static typing. The other big win of Go is elimination of a lot of runtime errors.

So it comes down to which will be more of a win for you and your product. Go has a big advantage here IMO though because of static typing AND performance increase.

Like any other language with static typing and available compiler implementations.

Is that relevant though? The parent is asking about a comparison between python+c and Go for the OP's software, so doesn't it make sense to answer to that?

Of course, you are correct. Rewriting in any other language with static typing or an available compiler implementation would also provide the benefits of static typing and detection of runtime errors.

Because those benefits are not some special feature of Go.

On the other hand, I think the younger generation of programmers lacks exposure to programming languages, like we used to have in the old days, hence they make quite limited comparisons.

Hi, I'm in the market* for an Arabic sentiment engine for some academic research, can you contact me? My email is in my profile.

*Note that "in the market" doesn't mean I have a lot of money to spend :D

What is an "Arabic sentiment engine" :-D? Like textual analysis as to whether a piece of Arabic text is positive or negative?

I went briefly through the comments, and it seems that no one is bothered by the lack of detail in this post. Things are not logical at all, almost as if it was written by a PR agency in charge of promoting Go.

Except it is an established business, so I would assume things are true, but the real motives are hidden. Go is obviously a good language for some specific things, but it's not as if Ruby is pure trash. How come you did 'everything' in Ruby, and then didn't know about EventMachine, but just had to rewrite things in Go?

And rewrite happened overnight, right?

A lot is missing from this story. I will definitely look more into Go, mostly because someone compared it in comments with TurboPascal, and I have fond memories of Borland tools, Pascal especially.

Go and Rails are so different that there is almost no point in comparing them.

> but real motives are hidden

Could you elaborate on this please? Are you referring to the OP's motives of publishing the article? Or of switching to Go?

> Go and Rails are so different that there is almost no point in comparing them.

Except when one solves the same problem better than the other.

- They did not say why they did not choose one of the other alternatives.

- They did not say how they used the concurrent features of Go, only mentioned that they were there.

- They did not say how long it took to rewrite.

- They did not say what they changed in the API.

I'm not saying the story isn't true, but for a true story it lacks a lot. You could summarize the article with the title and you won't be missing much.

They didn't say why Ruby was using a lot of CPU either. Some basic investigation into the root cause might've revealed some intractable problems that are tied to Ruby (or at least the MRI interpreter) like long GC pauses (something which can be mitigated by going to a different interpreter like JRuby versus a complete rewrite in a different language)

Instead, it's "OMG we're maxing out CPU, time to completely rewrite our app in a different language"

And how big is this app that it needs 30 servers? We probably serve an order of magnitude more traffic where I work than these guys do (just guessing) and don't need that many.

I was referring to switching to Go. I am not disputing the languages' performance; it's just that the whole article is very strangely written.

I was originally very excited about Go when I first learned about it. But then I got tired and frustrated quickly after having to listen to the other Gophers telling me that I don't need this or that feature because there is a better way to do it in Go.


I don't need exceptions because Go function can return multiple values.

I don't need a mocking framework like Mockito because Go has interfaces.

I don't need an interactive debugger because I can debug with command line using gdb.

I don't need named arguments because I can instantiate a struct and call my function with it. (Have you seen Ruby or Javascript code? Almost every function takes 'opts' as a single argument. Go is probably going down this path too.)

Then I learned about Scala. I'm not saying that Scala is better than Go. However, it has everything that I need. :)

Comments like this are weird. Go does not like exceptions. They've built a whole idiom around error returns. When you said, "I need exceptions", what did you expect them to say? "Oh, sorry, we forgot that, we'll get right on it"?
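For anyone who hasn't seen it, the idiom looks roughly like this (`parsePort` is a made-up example):

```go
package main

import (
	"errors"
	"fmt"
)

// Errors are ordinary return values, checked at each call site,
// rather than exceptions that unwind the stack.
func parsePort(s string) (int, error) {
	var p int
	if _, err := fmt.Sscanf(s, "%d", &p); err != nil {
		return 0, fmt.Errorf("bad port %q: %w", s, err)
	}
	if p < 1 || p > 65535 {
		return 0, errors.New("port out of range")
	}
	return p, nil
}

func main() {
	if p, err := parsePort("8080"); err == nil {
		fmt.Println("listening on", p) // listening on 8080
	}
	if _, err := parsePort("http"); err != nil {
		fmt.Println("error:", err)
	}
}
```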

I also don't understand what you mean by "interactive debugger". What is gdb if not an interactive debugger? Gdb is my go-to debugger for Ruby as well.

> Gdb is my go-to debugger for Ruby as well.

Unless you are debugging the ruby process (and not the running script), what does gdb buy you over ruby-debug?

Something like RubyMine IDE debugger or Firebug javascript debugger.

If you are a Ruby developer then I highly recommend trying out RubyMine. I promise that you will never debug Ruby code using gdb again!

I got all excited when I saw your comment and went and checked out the latest RubyMine video on YouTube, because having something like Firebug for Ruby would be awesome. But it looks just like what gdb gives you when being driven by a graphical frontend like DDD or Emacs. Actually it looks a lot less powerful than DDD. What am I missing?

> Something like RubyMine IDE debugger or Firebug javascript debugger.

That's not for the go team to develop.

> If you are a Ruby developer then I highly recommend trying out RubyMine. I promise that you will never debug Ruby code using gdb again!

I haven't used RubyMine, but I have used Visual Studio and Eclipse debuggers, and I still debug using gdb, ruby-debug, pdb et al.

It's pretty typical of the Plan9 mentality. See this post on the Acme text editor: http://9fans.net/archive/2008/08/134

Is that a parody?

wow, worse than the irc python channel.

Why would you say something like this? The IRC Python channel is a kind of state machine which goes like:

- How do I do this?

- Why do you want to do this?

- Because XXX

- Then that's not really what you should be doing, it's dangerous/inefficient/etc., do this instead

The post about Acme could have been a one-line answer: "If you want to do any of this, just use a better editor".

There are many times when I said "I need to do X, I know it's ugly, but I've considered all the alternatives and, for reasons too lengthy to go into right now, I just need to do X. How can I do it?" only to get people treating me like an idiot who doesn't know anything, and only giving an answer after I've responded to every single one of the alternatives with my reasons.

Sometimes, people aren't new.

It's a state machine without a direct transition from "question" to "answer" :)

We also checked out both Go and Scala and picked Scala due to better IDE support (IntelliJ) and existing Java lib ecosystem. It also is typically faster than Go (but does use much more memory): http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

Overall very happy with our choice. I will probably learn Go too, but I also don't really like the way it does error handling and the tooling support seems limited vs what I can get for the JVM.

Yup. This is my exact conclusion too.

Scala fan here. But still, I can sympathize with Go developers on getting rid of exceptions.

I don't know if Go would quite feel the same, but Scala's Try, Pattern Matching and Curried functions are stupid nice and make handling errors without exceptions feel very natural.

Took me a couple months to get there, but there you go. "Folding" out different state feels like I'm writing more robust code. Where I'm coding for a range of possibilities instead of just the golden path.

Even when I _am_ focusing on the Golden Path, features like Try[A] force me (well, warn me, at least) to at least stub out the other options, so robustness feels like something composed rather than something I have to hammer in. I can focus on making the ideal X, and handle the possibility of needing a notX as alternative realities, instead of trying to build a cyborgXYZ that can handle every potential state. If that makes any sense.

Same thing happen to me, I even did some small contributions around 2010.

Then I discovered D and Rust are better languages for my purposes.

Just a guess, I'm guessing python isn't your favorite language either? They seem to be rather unopinionated languages that let you have access to lots of tools. I hear they manage to keep it pretty simple despite having a "kitchen sink", so definitely interested in both.

Python is my favorite scripting language, not for application development.

D's syntax is nicer than Go's, but it's also dead compared to Go. Where are you going to put your chips?

> D's syntax is nicer than Go's, but it's also dead compared to Go.

Because it is full of Google PR and happens to have some cool names on the team?

> Where are you going to put your chips?

In languages that the designers don't throw away the last decades of improvements in language design.

Yes, because it's full of Google MONEY! It's about the libraries...

Which will get canned like many other Google projects when they get fed up with it...

In the light of recent events, I'm agreeing with you...

tl;dr: I tried Go, but it wasn't like Ruby, so now I'm trying Scala.

Ha. Is there a high performance, compiled, and concurrent friendly language that looks like Ruby? Please share. :)

Rust might be your best bet.

I think better advice for you is to just stay away from opinionated languages. I don't think Rust falls into that category, but I'm not sure.

Looks like Ruby? You mean syntactically like "end" instead of "}", I assume.

Sorry, but the C syntax is dominating the native-compiled world. You could try Pascal, they have "end". Or use C with preprocessor magic:

  #define END     ;}
If you mean Ruby like terse, generic code, then try Rust, Go, and D.

Yes, have a look at Crystal.


It is still an ongoing development, though.

I personally am looking very heavily into Rust, but if you haven't seen Elixir, you should.

For large projects, it'd probably make more sense to rewrite the critical code sections in C. rubyinline makes this really easy, and ruby C extensions aren't that bad either.

One time on a contracting gig I had some code that was interacting with cairo drawing and ImageMagick. They didn't have a compatible raw image format at the time (one was RGBA and the other was ABGR, IIRC). It's trivial to convert between the two in Ruby, but it was taking 60 seconds per image. Once I switched that one loop to C, it took 0.02 seconds.
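For the curious, that inner loop is just a byte shuffle. Here it is sketched in Go (the original was Ruby-then-C; this is only to show how little work is involved, which is exactly where interpreter overhead multiplies):

```go
package main

import "fmt"

// rgbaToABGR reverses the byte order of each 4-byte pixel in place:
// R<->A and G<->B. One linear pass over the buffer.
func rgbaToABGR(px []byte) {
	for i := 0; i+3 < len(px); i += 4 {
		px[i], px[i+3] = px[i+3], px[i]     // R <-> A
		px[i+1], px[i+2] = px[i+2], px[i+1] // G <-> B
	}
}

func main() {
	px := []byte{1, 2, 3, 4}
	rgbaToABGR(px)
	fmt.Println(px) // [4 3 2 1]
}
```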

I've got nothing against Go, just suggesting an alternative that might make more sense in some situations.

This definitely does make a lot more sense than throwing out your whole app and rewriting it in many situations, but I'm always a little hinky about moving stuff into C just because it has so many ways to shoot yourself in the foot. You can obviously create bugs in any language (I know I certainly have!), but Go really does hit a nice spot between safety, productivity, and speed, so I can definitely see the attraction.

For web servers there are other things more important than the raw processing speed of a single routine. What you want is to get the CPU usage to 100%, CPU resources used preferably in actual processing and less on things like garbage collecting or blocking for I/O. For web servers in particular it is tricky since processing requests also involves a fair bit of I/O interwoven with CPU processing (in the case of apps that aren't just CRUD over a DB).

I've seen a lot of dumb Rails server configs and even dumber usages of Rails without any tuning at all.

With a couple weeks or a month of work I could shrink the hosting fees or resources consumed by a factor of 5 for most Rails apps sitting at around 30 servers... and probably a factor of 10. Leaving aside whatever rookie or even intermediate mistakes were made in their Ruby code or their database, this post indicates a lack of understanding of what happened when their servers fell over. Proper tuning of a deployment should not trigger a 100% failure mode like this.

These folks were itching to get off of Ruby for whatever reason... after all their roots were in Java. If your goal is to do a rewrite and learn a new language and gain some notoriety why waste time learning what you did wrong with Ruby or your server config?

Your inability to understand their clear description of a basic cascading failure mode under load speaks poorly of your actual knowledge and experience. Given that, I have to take everything else that you say with a large helping of salt.

Their description of the root problem was very superficial:

> At some threshold above 50%, our Rails servers would spike up to 100% CPU usage and become unresponsive.

Yes, but why? What exactly were those processes doing? Why the sudden change at a particular threshold?

Their lack of detailed investigation into this makes their post useless to me -- I have no way of knowing (1) what specific aspect of Ruby's architecture makes it unfit for their problem?, or (2) is their application doing something stupid that causes the problem in first place?

If you read that sentence in context, your questions are answered. Here are the previous two sentences for context.

> The bigger problem was dealing with big traffic spikes. When a big spike in traffic came in, it created a domino effect that would take down our entire cluster.

Given the article to that point, it is clear what is happening. They are maintaining a steady state of X% (where X is above 50), then a big traffic spike comes along. That traffic load is not distributed equally (my guess is because requests are not created equal) and there is a hot spot. The hot spot fails, then increases the load on everything else. After this repeats a few times, there is insufficient capacity.

In other words at some steady state threshold above 50%, you wind up without sufficient capacity to handle traffic spikes. Nothing in this failure scenario is specific to Rails. It is a well-known failure mode for a cluster under load with a push based architecture (which http requests are).

The fact that you did not understand that description speaks poorly of your problem solving skills. Now they may well have been doing something trivial that is fixable to cause load to be less than it was. But you haven't convinced me that you're the person to be trusted to figure that out.

TLDR: I'm butt-hurt that people are dumping Ruby. And because I hate Java I'll blame it on it also.

Not quite. He's saying that people who think Ruby is slow are often people who have no idea how to performance-tune it.

Well, more of the time it's because Ruby actually is, in fact, slow.

I though the implementations were slow or fast, not the languages.

Typically a language is considered slow or fast based on its most popular implementation, sadly.

True, though language design can have implications on how fast things can be implemented.

Agreed, but it is still an implementation issue, because one can eventually discover ways to optimize such cases without changing the language.

Ruby is not, in fact, slow. This myth has been debunked so many times that I'll just leave the googling to you as an exercise.

I'm getting the same experience here by entirely rewriting an old PHP web application [1] in Haskell using the Yesod web framework.

I've been using JMeter to benchmark both versions of the application. On a 10€/month dedicated server [2], the Haskell one was able to generate 220 dynamic pages per second [3] whereas the PHP one tops at 35 pages per second on a equivalent page.

Moreover, the concurrency capabilities of Haskell are also pretty sweet: while I was benchmarking the web app using 2,000 concurrent connections, the application server was only using around 90MiB of RAM. I was not able to increase the number of concurrent connections, as the client application I was using started to kill my quad-core desktop. I suspect Haskell is able to manage A LOT MORE concurrent connections, as I didn't see any decrease in the throughput of the application as I increased the number of concurrent connections.

[1] http://files.getwebb.org

[2] http://www.kimsufi.com/fr/

[3] Static content has been ignored as modern servers like NGinx seem to be able to carry the static content (CSS, images, ...) at more than 5,000 req/sec on the same machine.

Interesting to hear of Go being used in production. It'd be great to hear some more details on your setup when deploying the go processes - how are you managing failover, what's your load balancer, and how are you handling swapping out processes etc? Are you compiling on the server or local machines before deployment? Most other languages have lots of solutions on the deployment side now but Go is so new there isn't much info out there.

Also, as you are running an API, presumably your Rails app was pretty simple. Did you have the impression things would be more complex in Go if you were writing an app with an extensive front end and UI, and using SQL? I'd miss all the view helpers etc. available at present, I think. Going from 30 servers to 2 certainly sounds like a huge improvement, so it was definitely worth it for you; are you thinking of writing any front-end apps in Go?

I've been playing around with Go recently and it is a fantastic language for someone coming from dynamic languages like Ruby. I particularly liked interfaces as a way to define a contract for implementations to follow, and the simple package system which encourages you to make your code modular.

We actually have our own deployment tools for Go (and for Rails, our databases, etc.). We built them before all the new hip options that are around today. We build on the target machines, although that's not a requirement since we all run the same architecture on our dev machines too (64-bit Linux).

I'm not sure about writing a front end in Go; there's not a lot in terms of UI frameworks, and the Iron.io front end (HUD) is still in Rails, so I can't really say much about it.

Interesting to know that you're building on target machines. I have yet to explore cross-compilation, but it'd be great to be able to build before deploying.

Sorry, deployment was probably the wrong word - I was not so worried about getting the files up there (like you I have a simple home-grown solution for that), but more how you were handling process management, swapping out processes etc.

Perhaps with Go this is less of an issue because startup time for new processes is minimal, and you can simply kill one process, start another, and not really miss any requests (unlike, say, Rails, with startup times of 10 seconds or so per instance)? Did you find this wasn't a big issue in practice, and that something very simple works for you?

I'm currently playing with Go, but if considering it for client use I would have to be sure that things like this were not an issue. Are you using any off-the-shelf tools for process management/load balancing, or is it something you have built yourself? Did you run into any problems?

For my (toy) apps, I've been compiling locally and pushing to the server. It's trivial to compile for a target platform and architecture, and you get a single compiled binary. Certainly my response isn't getting all the way to what you're looking for, but it's not difficult to write shell scripts that manage the deploy process from here.

Thanks, I use rsync wrapped in a script usually (because I'm dealing with more files than just a binary), haven't tried cross compiling yet but was going to give that a go. I suppose I was more concerned about what happens when things are on the server and how multiple processes can be managed. As soon as you have more than one and want to swap out server workers seamlessly and load balance etc it gets a little more complicated.

Yes, I think you're right on the mark re: swapping out workers, etc. Those nuances I haven't had to deal with due to these being toy apps.

Ok, this bit has left me confused.

They were Java devs that liked ruby, they wrote applications in Ruby on Rails and the ruby apps were hitting limits so they immediately started looking at other languages.

But they don't mention the most obvious (to my mind) simple option: JRuby.

It is Ruby (they like Ruby), most Ruby apps can run on JRuby with very few changes (no need for a big rewrite), and it runs on the JVM, with which they are familiar, which is very fast (and has true multi-threading).

Maybe they did look into it but I would have thought that would be #1 on their "We tried this but discounted it" list.

In my experience, rewriting for JRuby is as much work as re-writing in a new language. Last I looked was a year or two ago, so it may have all changed, but it's not as simple as just moving your code over. Many really important gems don't work, or don't work properly.

That used to be the case, but a lot has happened in the last few years. Most of the time you'll be able to move a large Rails application to JRuby without any changes at all, except adding a Warbler config for deployment.

We use JRuby for one of our backend applications in production, but we develop with plain MRI Ruby. The only real problem we've run into was some of our badly performing (badly written) parsing code that ran even worse under JRuby.

The only gems that do not work are those which are not pure Ruby, and for the popular gems I think there have been rewrites. I haven't dug into JRuby deeply, but it has changed quite a bit in the last year from what I read.

I don't think that's true. There are plenty of threadsafe gems with C extensions, and plenty of non-threadsafe pure Ruby gems that mutate class/class-instance variables willy-nilly.

gems with C extensions are not supported https://github.com/jruby/jruby/wiki/C-Extension-Alternatives

I forgot about thread safety in pure ruby gems but that is not a huge problem - https://github.com/jruby/jruby/wiki/Gems-known-not-to-be-thr...

I'd like to see some real benchmarks with a real app, but I doubt moving to jruby would bring the magnitude of improvement mentioned in the post.

They also did not mention why they didn't try any of the other, more performant implementations of Ruby.

They probably needed a rewrite anyway, after learning more about the problem space they weren't happy with the api they'd made thus far.

Go wasn't the only thing that made their new stack better, but you can bet that Go would have made it very difficult for them to achieve performance as bad as ruby gave them.

That said, they are STILL using rails for the frontend stuff and Go for their more computationally intensive stuff IIRC.

They still kept some of their rails code, for things that were easier to keep in rails without hurting performance.

> "We also weren't sure if we would be able hire top talent if we chose Go, but we soon found out that we could get top talent because we chose Go."

I feel[1] that a smart/talented C/C++/anything developer can go from someone who has never seen or heard of golang to a proficient and productive Go developer in a matter of a few weeks, maybe even _days_, if not less.

That's how long it takes to go through the following materials (and fav some for later reference) and play with the language a bit:

http://golang.org/pkg/ - use as reference

And some more similar things that you can mostly get to from the golang.org site. The beauty of how concise the language and even its website are is that you can literally just go through everything there, one thing after another.

[1] This is my personal opinion based on playing with Go over the last few weeks/months. I'd love to verify this theory. It's not yet the primary language in which I do things (I use C++11 atm), but for all my side tasks[2] it has proved indispensable. And I found it very easy to pick up. I can't wait until I start doing all my work in Go; that will be a true test of its productivity.

[2] https://gist.github.com/shurcooL

It's quite strange to me that people would identify as or look for a "[language] programmer". Sure, I happen to write more C++, Python, and C than anything else, but I've dabbled in just about everything and could reach comfortable proficiency in a matter of weeks. Most of programming and all of computer science is universal.

Any serious programmer should be a polyglot by default.

> Any serious programmer should be a polyglot by default.

It depends if you're going to spend years training someone or if you need an expert right now.

My experience is that it is impossible to maintain expert level skills in more than one or two language + library environments. You can remain familiar with other environments but you don't have the time to be an expert.

While I sometimes switch between C-family languages for different projects, it can take months to get up to speed with the changes in a language and its environment since you last programmed in it. I'm talking about situations where I know the language well but the environment has changed. Languages change a little, but the libraries they use can change dramatically. And along with the change comes a whole body of implied knowledge about how to safely and effectively use it all, and this impacts how you use the language itself.

If you've literally never programmed in a language before, it can be a few years before you know about all the eccentricities, before you understand why the language follows certain patterns, before you understand the risks with certain behaviors.

If you need an expert now, not in 12 months time, then they need to know both the language and the environment. If you can wait a couple months, then they still need to know the language.

"It depends if you're going to spend years training someone or if you need an expert right now."

I'm not going to claim that there are no legitimate reasons to hire people who are narrow experts in Blub (and only Blub), but I'm having a hard time thinking of any.

At an established company, you've got the luxury of time -- there's rarely a good reason to "need an expert right now" that isn't just a contractor. At a startup, where hiring the wrong person is a disaster, hiring an "expert right now" is like holding a loaded gun to your head. Ideally, you should be hiring "T" people -- lots of breadth, with lots of depth in at least one area.

A truly good programmer will pick up your language/framework in days, and be at full productivity in months, even from scratch. It's hard to do better than that.

Even at an established company, a really smart language newbie could write a bunch of ad-hoc code that does almost the same thing as well-understood libraries that everyone more experienced in the language uses. Even if his code was relatively high-quality, now you have a bunch of extra stuff to maintain, and everyone who interacts with it will have to learn this thing instead of just using the library everyone knows. I've done this in languages I'm unfamiliar with and will probably continue to do so.

Not Invented Here syndrome goes away with general experience. It doesn't come back every time you switch to a new language.

Just because I've never written Erlang doesn't mean that I will automatically try to write a random-number generator (say) the first time I need one in Erlang. I have enough experience to look for a library function first.

Empirically, NIH tends to be more common in single-language developers, not less. People who place a lot of value on their "expertise" in Blub tend to do so because they're over-weighting the importance of their memory of the API details. When they don't automatically remember something, they leap to the conclusion that it doesn't exist. They're also typically a less-experienced cohort than people who have written in lots of different languages.

I wasn't talking about NIH syndrome, I was talking about "I don't know that a common library exists for this standard use-case so I'm going to write my own one-off because I have a job to do". I mean, you can google for libraries but sometimes you just don't find them and then find out a few weeks later what you should have used.

"I don't know that a common library exists for this standard use-case so I'm going to write my own one-off because I have a job to do."

That's called "laziness", and is even more likely than NIH syndrome to be mitigated by experience. Certainly, having lots of coding time with a single language doesn't make you less lazy, and having lots of experience with different languages doesn't make you lazier.

Well, I'm less lazy than most devs and have worked in lots of languages and did it just the other week -- maybe I'm just not that bright :)

It's certainly possible that you'll miss some oddball idiomatic way of doing things in a new language (e.g. Python itertools or using C++ STL algorithms or something like that), but this is rarely a real problem. The job gets done correctly -- and in any case, we're all learning new ways of doing things. It's not as if you gain total prescience after N days on the job with Blub.

The point isn't that the generalist programmer will be 100% correct in all the details in new language, it's that they'll be able to quickly (and correctly) implement the important parts in whatever language you're using. Idioms tend to be the low-order bit of a solution anyway.

Could you give a specific example? I am still thinking "random number generator" and want to balance it with your "standard idiom or library that everyone steeped in $lang knows".


Things like the Counter in python collections. Actually, the python collections library in general. You don't need that stuff to do the job, but it makes the code a lot clearer.

It probably took a couple of months full time in python before I had enough understanding of the std lib to know where to find everything (I sat down and read through a description of most of the functions in there).

There are still plenty of libraries that would have made my life easier had I known about them, and I'm sure there are plenty more.

Having said all that, that's just an example. When it comes to hiring I'd still pick a good experienced developer over a specific-language expert for a full-time position. For me it's not so much the understanding of a specific language as it is the appetite for understanding computers and systems in general that makes truly good developers.

Ultimately the task is going to dictate the best type of hire. Short term contract, get a specific expert. Long term hire, find someone who actually enjoys developing software :)

I dunno how well-known the things I didn't know about are, but an example of me doing it: https://news.ycombinator.com/item?id=5113202

I think there is a fine line between using a library and writing your own to better understand the domain.

It's something to do with how critical the library functions are to you / your system. I would never write my own compression software, but I can see why people would just to learn about the trade offs.

Okay, but if I had known about the alternatives I would have used them, so I'm not sure how that's relevant.

Philosophising, really. I guess on the "actually helpful advice" level, I'm all out if the googles can't help.

I have nothing interesting to say, I just wanted to see how far the left indentation can go

That's where code review can be such a valuable tool. I was learning Python coming from Ruby about a year ago and, sure, the languages are similar and it wasn't too hard to get started, but having code review sped up how quickly I became proficient by a lot. It's great having a bunch of people around you that can say things like "There's this thing that exists, don't do that." or "It's more idiomatic to do this that way instead of how you're doing it." or "People hate that because, do it like this.". It makes a huge difference.

This is unlikely with Go, as it has a good standard library and a central place for documentation of all the packages.


Third party libraries are also listed here:


And the ecosystem is not so broad or mature that you would miss significant libraries because there are too many options and you just didn't find them.

It depends. If you're the only one knowing the language at this company... it's a bad idea, because the bus factor becomes unacceptable.

Otherwise you have a methodology problem. You don't do daily standups, where you'd have the opportunity to say "Yesterday, I started implementing X because I couldn't find a library" and more experienced coworkers would tell you "Use library Y instead".

Being a programmer is not about knowing everything. With experience one should be able to separate the library functions that should always exist from those that are unique to each language. Everything else should just be a matter of information searching.

That's just an imbalance in programming virtues: too much hubris, not nearly enough laziness (unknown impatience).

You talk as if we develop code by chiseling it out in stone. We write it in text files, usually with IDEs. No one needs to learn the new libraries the newbie made, they can talk to the newbie, point out what the library already gives (and where to look in future) and write to well known interfaces.

Any project should have time allocated for "tech debt" and that's where this sort of thing gets addressed.

"I'm not going to claim that there are no legitimate reasons to hire people who are narrow experts in Blub (and only Blub), but I'm having a hard time thinking of any."

Blub is a very useful tool but has a bunch of quite esoteric gotchas that an expert will be able to avoid due to long experience.

Expert has used almost all of the Blub framework over the years, including the more popular third-party addons and already knows what's best and fastest in a variety of situations without having to think about it much.

"A truly good programmer will pick up your language/framework in days, and be at full productivity in months, even from scratch. It's hard to do better than that."

Yeah, but then HR won't be able to tick off the 'Blub expert' box and, well, nobody would ever get hired! Or something...

> impossible to maintain expert level skills in more than one or two language + library environments

And the crappier your language and libraries, the more time it takes to be an "expert."

By this definition, Ruby/Rails are pretty crappy (I kind of disagree). As a newcomer, the amount of stuff going on in a Rails app and the stack trace when there's an error are pretty overwhelming. Meanwhile, people expound on how simple and elegant Rails is. Lately I've been thinking that this is because those people started using it 5 years ago when it was small and their knowledge has built incrementally with the environment.

My other theory is that it's a lot harder to track down bugs for a newcomer because like C (and unlike Python in most cases), importing functionality from another file is implicit. That is, when you 'require' a bunch of files, there's no indication which functions are coming from where. For me, this is one of the things Python solves marvelously (it's generally considered bad form to 'import *').

Rails is pretty crappy if you ask me. It's "omakase" which is Japanese for "acts according to what DHH wants despite what the community wants". And there's a lot of magic happening that isn't explained very well. There are better frameworks in Ruby. Ruby itself doesn't take all that long to be an expert at.

>Rails is pretty crappy if you ask me. It's "omakase" which is Japanese for "acts according to what DHH wants despite what the community wants".

This sounds great. I'd hate software made in the way some "community" wants. Community is the other name for committee.

Though I could argue that all languages take pretty much the same amount of time to become an expert in. Just because some languages are supposedly higher-level than others doesn't mean that the complexity they let you tackle (and the associated challenges as a programmer or "expert") is any less.

Especially since being a good programmer is more about design, choice of interfaces, reactivity to change, etc. which are by definition language agnostic.

I firmly disagree that Ruby requires any less effort than say C, C++, Java or LISP to become an expert.

I didn't mean to say that Ruby was less complex than C; I meant that if you take out the magic it is comprehensible, and one can get good at it quickly enough, just like a straightforward language like C. I agree that learning design, etc. is the real key in any language; thanks for pointing that out, because that's what everyone should focus on.

Ruby doesn't take long to be productive in. It is an utter pain to become expert in, for any reasonable definition of "expert". Fortunately it's fun enough that I don't mind the slog.

"it's a lot harder to track down bugs for a newcomer"

This is FUD. It's a different way of doing things, not harder. I live inside Pry which makes it rather easy to figure out what's going on.

That's kind of what I'm talking about. I've never heard of Pry. With Rails I have to learn about the 100 most common gems (devise, paperclip, mongomapper or mongoid), Pry (thanks for that), bundler, rvm, and ActionEverything before I can be productive (or understand a simple app) . With Node.js, something newer with less "maturity", I figure out npm and I'm good to go.

I really don't think it's FUD to say that Rails has gotten much bigger in the past 5 years, and it's definitely not FUD to say that as codebases and tooling grows, so does barrier to entry.

The specific thing I guess you're objecting to is that it's harder for a noob to understand implicit imports and where something is coming from if you don't know much about what you're importing. If you use a tool to solve that language deficiency, that doesn't remove the deficiency from the language. By that logic, adding an IDE to Java makes it a very concise language.

Most of what you said above is a strawman. I said one specific thing.

"it's a lot harder to track down bugs for a newcomer"

This statement is FUD. You seem to be defining "language deficiency" as "hard for a noob before they become proficient". I think that is absurd.

However, you do not need to be an expert to be productive. You can write Django web apps perfectly well without knowing about Python's metaclass tricks, for example.

An expert still needs time to become familiar with a large codebase and architecture. I wonder if a competent programmer could simply learn the language in addition during this period.

Until your form isn't coming out quite right, and you need to start digging around in the source for the form class to figure out why. The happy path generally doesn't need expert-level skills, but debugging often results in a quick deep plunge.

Go as of today doesn't have too many libraries available. So there is really nothing to learn besides the language itself.

That's not really true.

Granted Go doesn't have as many libraries as the more established languages, but to say there aren't many and there's "nothing to learn besides the language" is just flat out wrong.

Just for starters there's http://code.google.com/p/go-wiki/wiki/Projects

A seasoned Java developer is expected to have worked with one of the mainstream ORMs, like Hibernate or Ebean. But a Go developer gets away with not having to know an ORM because there is no mainstream ORM in Go. :)

So what, the lack of a mainstream ORM implies "So there is really nothing to learn besides the language itself"? I don't think so.

And there's probably more to the lack of an ORM than "Go is immature." It's a fairly common opinion in the Go community that ORMs are not worth their complexity. I tend to share that opinion myself, after having worked with a few in a couple of different languages.

Not sure why you were downvoted; I tend to agree that ORMs are not the best way to communicate with databases.

As to whether only the lack of an ORM would make Go immature? Honestly I don't know. There is a ton of software written in JavaScript; does that make JavaScript mature? Hell no.

>If the language is strictly OO (as Java and C# are) then an ORM is pretty much required.

Really? Because we managed to get just fine without one for decades...

If the language is strictly OO (as Java and C# are) then an ORM is pretty much required. What is the preferred solution in Go? And please don't say writing out raw SQL.

I've moved away from ORMs in Java because they don't really solve the problem at hand. Hibernate could be a really good solution for providing a blanket system for SQL interaction, except for the performance issues. If one were really using domain objects (not POJOs or dumb DTOs), the typing ability of Hibernate would be great. Unfortunately, for simple CRUD apps the issues presented by Hibernate (like loading it in an EAR) are such that it requires too much maintenance time on complex projects.

I know that I've used Hibernate for years. I've used it with DTOs. I know that it does provide a certain benefit for simple reads and writes. Unfortunately, Hibernate was designed to work in a disconnected manner like the Web. As a result you get into complications of session management. This is just one example of Hibernate maintenance costs.

Now I use Spring JDBC. This removes the noise of the checked exceptions, connection management, transactions, just like Hibernate. But I write simple or complex SQL in one place and map in and out contently.

Well, I can't speak to Java here but NHibernate works fantastic. I've checked the generated SQL on complex joins and it always ended up writing what I would have by hand.

In C# the session management is no big deal: you access everything via the Repository pattern, turn on trait injection and just make sure any repository method that will need a session has a transaction attribute. If you need to do a series of read/writes in one transaction you throw that whole method into the repository class and put a transaction trait on that method.

That's a helluva lot of abstraction for writing to a database.

It's not. I create a DTO and I set up an exact mapping of how that appears in the database. Then I create a type generic Repository pattern that makes the DB access explicit as opposed to NHibernate's magic database access. Then the rest is just one time wiring I set up so I don't have to worry about actually connecting to the DB in my code.

In any solution you come up with you'll either need a repository that handles sessions and so on, or you'll have to explicitly connect to the db every time you need it. Either case will be more work than what I've done with this solution.

What's wrong with using SQL in your program? As long as your database layer is able to perform parameter substitutions to avoid SQL injection, this is a pretty efficient way to get stuff out of the database (and only the stuff you want). Why would using an ORM be a 'requirement' for OO-oriented languages?

There are quite a few problems with using SQL directly in your program: separation-of-concerns issues, typing issues (e.g. the compiler can't tell you the USER table isn't appropriate here because it has no way to see what you're doing).

In a multi-paradigm language you wouldn't be tied to using an ORM, but you should still use something that provides some kind of anti-corruption layer. In an OO language you can have nothing but objects, so there is nothing else your database library can return. It may as well return objects that represent the records rather than, say, a list of strings.

Even though it is terribly out of fashion, the performance of database interface classes (a database access layer) that wrap and abstract SQL prepared statements and/or stored procedure calls is almost always much better than ORM when the SQL is written by someone minimally competent in SQL. ORM can be much faster to implement; ah, classic tradeoffs.

ORMs are broadly considered antipatterns in Go.

Source? I have never heard that claim. And contrary to that, Go has facilities for implementing ORMs, like field annotations in structs for specifying column bindings.

A seasoned java developer stays away from orm anyway ;)

What, you don't know about The Programmer Hierarchy?



I find a lot of people who use high-level languages are terrified of tediously direct contact with the machine, and a lot of people who use low-level languages are terrified of the performance costs of abstraction. I'm terrified of both.

I think that, in general, C++ is a lot harder to get your head around than C, being for all intents and purposes a superset, and easily five times bigger at that.

Love this comment, "I'm terrified of both."

That's how you gauge the experience of a programmer: how much of the field she/he's terrified by.

EDIT: I originally meant this as a joke, but seriously, if someone accurately knows where the gotchas are, that's valuable. Also note if they're biased to false positives and/or false negatives, and by how much. Are their heuristics for dealing with unknown territory efficient and likely to converge on good approximate results?

> That's how you gauge the experience of a programmer: how much of the field she/he's terrified by.

I'm terrified by what I don't know, especially by what I don't know that I think I should know, to be "in the know".

Thanks guys, I guess I really have had some rough times. I like how Lisp and Assembler at the top of the hierarchy capture the two extremes. Some hypothetical language in the middle would be great, but maybe the best we can do is to straddle that point, e.g. with C++ and Python.

Rather than straddle the middle, I think I'd go with as high-level a language as I could get, coupled with a simple low-enough-level language to get whatever performance benefits I needed. Some combo like Python/C, Clojure/Java, or maybe some other lisp dialect and C.

Lua is known for pairing really well with C.

> I like how Lisp and Assembler at the top of the hierarchy capture the two extremes.

Not really. I'd place Agda or Coq above Lisp.

That is somewhat orthogonal. Lisp is better at abstraction than Agda or Coq or Isabelle or any of those ML/Haskell theorem provers. To maintain the theme:

C if you are terrified about performance

Lisp if you are terrified about boilerplate

Agda if you are terrified about correctness

If you are terrified about all of these, then welcome to the world of engineering.

If you wrote an S-expression syntax on top of Agda (or Haskell), you would gain all of Lisp's anti-boilerplatitude for free. There's of course also Template Haskell.

As it is, static typing helps to avoid some boilerplate too, independent of macros. In some sense, static typing removes, among other things, boilerplate tests.

If you are terrified about all these, there is ADA.

Ada gives you neither the performance of C nor the abstraction- and boilerplate-removing power of Lisp nor the provability of Agda. This is why it is not used.

The performance is pretty close, actually (far closer than Go's, for example): http://benchmarksgame.alioth.debian.org/u32/benchmark.php?te...

And yes, it is not used, only in some obscure and low profile projects : https://www.adacore.com/customers

http://benchmarksgame.alioth.debian.org/u32/benchmark.php?te... might be a better link to show how Ada performance compares to C: ranging from twice as fast to three times as slow, with a median in favor of C. It's true that it's pretty close. I don't know if those numbers are representative of how performance works out in the real world; any insights from your experience?

Golang, it's true, is in the 3×–10× ballpark.

https://github.com/languages/Ada shows Ada as the #52 most popular language on GitHub. The #10 most popular is Objective-C at 3%. Using R:

    summary(lm(log(c(25, 13, 8,8,8, 7, 6, 4, 4, 3)) ~ log(1:10)))
I derive that the Nth most popular language on Github is used in 24% × N^-0.81 of projects, with an R² of 0.93. This suggests that Ada should be in use on about 0.98% of projects on Github, which makes me wonder why https://github.com/languages/Ada/updated?page=10 can only find 200 Ada projects that have been updated in the last nine months. (JS, the #1 language, has 200 projects updated in the last 22 minutes.)

Ada. Not an acronym.

>Not really. I'd place Agda or Coq above Lisp.

Because more than 10 people have heard of and use Agda or Coq?

I'm already having a hard enough time convincing the team at work to use Clojure!

Oh, Clojure is probably more useful in practice. We also stick to a Haskell dialect at work, and don't dabble in Agda for production.

C++ and Python pretty much run everything, everywhere. Standard, open languages are hard to beat when you want to fully control your development stack and not worry about future control issues. I wish Go were an ISO standard like C++; I'd be more interested if it were.

> C++ and Python pretty much run everything, everywhere.

I don't think so. Last I heard, there was still a lot of COBOL out there.

Yes, yes there is.

Most banks, and many other businesses that have been around 30+ years, are still running on COBOL.

It was popular because of its highly readable syntax and its fixed-point packed decimal math for financial calculations.

COBOL gets all the hate, but if they have a big-iron mainframe then they'll most likely be running RPG.

RPG was more oriented towards mini-computers than mainframes, from what I recall. Its niche was the IBM S/36, S/38, AS/400 and iSeries boxes. I'm sure you probably could get RPG for your S/360 or S/390 or whatever, but from what I've seen over the years, it was mostly COBOL, PL/I, and MVS Assembler on the mainframes.

Only if it is AS/400 based systems.

Fantastic! I'm primarily a C# programmer and have first-hand experience of this being true.

I agree with the premise that, as a professional software engineer, it is my responsibility to be a polyglot.

As for myself, when I started writing Python, I mainly wrote C or Java code in Python - in a similar way, perhaps, to how early C programs were littered with __asm__() constructs.

It took a long time to learn how to write things in (though I hate the term) a "pythonic" way. That is, to learn the common language idioms that are not taught in any tutorial (some are codified in PEP 8), the common patterns in Python code, the ins and outs of PYTHONPATH, etc.

So while I agree that, in a weekend, a reasonably proficient programmer can pick up reasonable proficiency in a given language, being a "Python Programmer" to me means that one has developed an intuition for the common patterns, libraries, pitfalls, platforms, and clever specific features of a given language and its ecosystem.

I agree, but I think the age of the language and community is another factor. Java, Python, C++, these are all old languages with decades of history and habits. Newer things like Node.js and Go have no history, no baggage to learn or avoid. I think starting in something with such a clean slate is somewhat easier because there is less ecosystem to learn.

I once heard a quote like this "programming languages are frozen knowledge about software engineering". (Anybody got a clue who said something like this?)

New languages usually improve upon older languages by making certain errors impossible by design. For example, Go allows pointers, but not pointer arithmetic. With C we learned pointer arithmetic is often harmful (though sometimes necessary), so Java removed pointers from the language (not completely, though). Go takes a sensible middle way, because having no pointers at all is sometimes ugly. Such a clean slate is good, because with older languages programmers fight their language's deficiencies with habits (e.g. if (5 == x) instead of if (x == 5) to prevent if (x = 5)).

On the other hand, we will find the pitfalls and dark corners of Go over time, but currently we do not know them enough. Go will acquire its own conventions, but we do not know all of them yet. This is the risk of a clean slate.

> On the other hand, we will find the pitfalls and dark corners of Go over time, but currently we do not know them enough. Go will acquire its own conventions, but we do not know all of them yet. This is the risk of a clean slate.

That's bullshit. Go is not a "clean slate", and a number of design issues of Go are known and have been known from the first release, regardless of its designers' refusal to acknowledge them. We know pervasive nullable types are a source of errors, we know shared-memory concurrency is an error-prone default, we know a lack of generics makes userland code painful and generics are hard to retrofit in an established language (and even Go's designers know it; why do you think they built special-case generic collections into the language?), we know allowing errors to be implicitly ignored is a bad idea, and making it easier to ignore errors than to handle them also is. These are not recent issues; they're well known, and there are a number of possible strategies for handling them.

And Go's worst sin, to me: we know that foisting complexity and repetitiveness upon the user leads to forgetting, and forgetting leads to mistakes. And that's exactly Go's approach to errors, resources management and shared structures mutability. Human error is something you can very reliably bet on, human infallibility... not so much.

Funny you should take Go as an example, since it's often bashed for not having taken the last decade of language design into account.

Go does take the last decade of language design by Pike, Thompson, and Griesemer into account. Mostly Pike, as I am not sure the other two even designed a language in that time.

Personally, I consider gofmt the biggest achievement of Go, if it manages to make that mainstream. While there are equivalent tools for C they are not widely used.

> Go does take the last decade of language design by Pike, Thompson, and Griesemer into account.

Decade which ended 20-odd years ago relative to the rest of the world, kind-of the point.

You mean by designing a language that is basically Alef from Plan9(1992) with a few changes?

Yeah, really actual.

The English term for "actual" is "up-to-date". The English word "actual" means "not imaginary".

thanks for the correction.

Happy to help!

There's a difference between not taking it into account, and not believing it is worth including.

This has been my experience learning Go as well.

This is why Joel Spolsky correctly observed that the replacement of C/C++ and functional languages with Java in university CS curricula is a tragedy. If you don't understand pointers or recursion, you are not a polyglot and you cannot pick up just any language in a matter of weeks.

The "[language] programmer" (where [language] usually = Java or .NET) trend is a misguided attempt by industry to commoditize programmer talent.

My intro programming course (in high school) was in C++, and I think it would've scared me off programming entirely if I hadn't picked up mIRC-script (a very different language) on my own as a hobby. There is so much accidental complexity in C++ that it's a pretty terrible introductory language. We literally spent several weeks of the semester on how to get input and output to work properly through the giant mess that is iostreams.

At the introductory level, I think the biggest issue is teaching "computational thinking" [1] or "procedural literacy" [2]: getting people thinking about the idea that you're writing specifications for a machine to carry out computations. From that perspective, it's best to pick a language that lets you get to algorithmic logic as quickly as possible.

[1] http://www.cs.cmu.edu/afs/cs/usr/wing/www/publications/Wing0...

[2] http://dm.lcc.gatech.edu/~mateas/publications/MateasOTH2005....

Me too (well, OK, my first programming class was in Fortran, taught by an 85-year-old man who spent most of the time telling us about how much harder it was back when you had to use Fortran).

I hated C++. I still think it's a fairly terrible language. But, for fun I took the Harvard CS50 course to refresh my knowledge of C and I found that WAY better than my C++ course. I think C is brilliant for introducing programming because it's very very simple, yet also very very difficult. There's not much to learn, except a lot of concepts (memory usage, data structures, etc).

I also think Objective-C is a really great language though, so I might be crazy. But, you give me a choice between C++ and a language that is basically C with a few additional keywords and garbage collection, and I find that an easy choice...

C++ is terrifying, and teaching an introductory programming course in C++ is an awful idea. Using plain C would make much more sense. C is also a better choice than Java because Java doesn't force you to learn about pointers, or memory in general, really. Countless Java bugs are introduced by programmers who don't understand what operations give you a copy of something, and which ones give you a reference to something (especially if that something is mutable).

I also think Objective-C is pretty good though, especially with the addition of ARC. It's like a better C, with Smalltalk-style objects and a lot more (mutable and immutable) datatype options. Much saner and less painful than C++, but more challenging than Java.

Java doesn't force you to learn about pointers, or memory in general, really.

Countless Java bugs are introduced by programmers who don't understand what operations give you a copy of something, and which ones give you a reference to something

These two statements seem to be at odds with each other.

The Objective-C garbage collection is deprecated.

The problem complicates further with the fact that IDE fever grips you very early on while using a language like Java.

When I started with Eclipse + Java, within hours my perception of Java was that it was really more "fill in the blanks" than programming.

This became clear to me when suggesting using a language other than that supported by Visual Studio to one of my previous teams. Their reply of, "Can't do it; no IntelliSense" shocked me completely.

And I think that is a valuable lesson too. As a developer, the end-goal is to produce something of value, working software, not spend hours frowning over pointers and references. That was my early experience with C/C++, in any case. I did end up finishing my assignments (ranging from a fractal generator via a microcontroller operating an RC boat to a simple 3D FPS game), but I did feel a lot of overhead with non-functional things.

Of course, you get the same thing in Java when you first encounter a NullPointerException ;)

I've seen C/C++ programmers fail at writing good, readable code.

C/C++ programmers tend to optimize prematurely, a habit born of the culture and their past problem domains: device drivers, kernel code, game development.

How important are pointers for learning, say, a high-level programming language that doesn't have them (pretty much everything outside C/C++/Objective-C)?

Sometimes I felt that knowing pointer arithmetic and tricks is some sort of chest-thumping that old programmers tend to do. (My background is systems programming, so I know how pointers work.)

It doesn't matter what you know or learn at university. What matters is whether you continue to hone your skills and hold yourself to a high bar of quality.

I think he was talking about pointers as an idea, not as a language feature.

References in high level languages are mostly hidden pointers. A good knowledge of how pointers work is really important for a good understanding of how high level languages work (a lot of Java programmers don't fully understand the distinction between value and reference types, for instance).

Likewise, being at ease with recursion is also very important, as many complex problems are recursive by nature and so are much easier to solve using functional techniques.

Yes. Pointers are always there in programming and you need to understand how they work, even if you are using a language where you don't have to declare them as such explicitly.

I feel like this article on generalization could be a biography of my life based solely on the introductory paragraph, but I don't have enough time to finish reading it to be certain.

"Sometimes I felt that knowing pointer arithmetic and tricks is some sort of chest-thumping that old programmers tend to do. (My background is systems programming, so I know how pointers work.)"

Or the best, easiest and clearest way to do things... Sure you can write confusing crap with pointers, but they are very useful.

> The "[language] programmer" (where [language] usually = Java or .NET) trend is a misguided attempt by industry to commoditize programmer talent.

From what I experience in the multi-site enterprise projects I participate in, the industry is being quite successful at that.

Successful at commoditizing programmer talent? Perhaps.

Successful at producing high-quality software, relative to the resources they are expending on it? Not so much.

> Successful at commoditizing programmer talent? Perhaps.

Yes this is what I mean.

In most enterprise companies nowadays, developers are seen as easy to replace programming cogs.

> Successful at producing high-quality software, relative to the resources they are expending on it? Not so much.

Who cares about quality when the price is right?! :)

Being sarcastic here; I am of the opinion that the software industry should be under the same quality regulations and expectations as other industries.

Usually people return products that don't work properly, while with software they just live with whatever bugs it has.

I agree that any serious programmer should be able to be productive in any language, but mastery of a style of programming takes time.

It's easy to move about in the same style of programming, but trying to move someone from enterprise-y OO (Java, C#) to functional (Haskell, Lisp) or even Go is a bit of a leap. Concepts often just don't cross over. A Go channel, for example, makes sense to an Erlang developer, but not a Java developer. Goroutines don't make sense to someone familiar with traditional threads.

There's a certain amount of rewiring that needs to be done. Looking for a "[language] programmer" is strange, but looking for someone proficient in the style of programming used makes sense to me.

Even within languages, styles are not set in stone; e.g. are you the OO-style Scala programmer, or are you the FP-style Scala programmer?

Java has had channels for a while now at the library level, they just lack pretty syntax. But this just reinforces my point.

> It's quite strange to me that people would identify as or look for a "[language] programmer".

I understand people doing this.

First, you can be more sure of what you're getting. It's sad that tests like FizzBuzz are so useful, but it's a fact. If I hire a Java developer for a Java position, I can figure out their Java skills. If I look at a PHP developer, it's more of a crapshoot. They may have a lot of PHP experience, but can they convert that to Java, or have they just been doing cargo-cult stuff? This would be a smaller issue with something like C#, which is closer to Java.

Second, and I suspect more common, is time investment. It might take someone quite a bit of time to switch languages. If they haven't done it before, you might find out it's a weakness for them. In my experience, at least in smaller companies, the fact that you're hiring means you need someone now. That extra time could be a killer, because you waited too long to start hiring.

I would at least expect candidates to look into what we're using. When I changed jobs 2 years ago, one place I applied was a Python shop. I've never used Python professionally, but I've tinkered with it. If you're applying for a Java position, I expect you to have at least looked at Java before the interview. Sadly, I bet that would happen a non-trivial number of times.

How did you convince them to hire you when you had no professional experience? Did you have to take a pay cut to do it?

I didn't get hired for the job, after the first interview I bowed out. I ended up in another mostly Java position (although we've also started doing some Obj-C).

I had 4+ years of working on a reasonably sized and complicated Java application. In addition, a few years before I used Python for personal projects so I could show some experience.

But it turned out the position was largely for front end development, and not some of the more back-end stuff I'm interested in. I think they liked me (don't know how I compared to other candidates) but I thanked them for their time and told them the position wouldn't be a fit for me.

If your code in that language is good enough then you can usually get a job for it. Of course, I'm talking about places where programmer/HR department have a big overlap. I don't much like working for the other places.

If you're looking for a good programmer, it often pays to be looking for one in a niche language. Paul Graham referred to this as the Python paradox: http://www.paulgraham.com/pypar.html

(The inverse of this is if you're looking for a PHP programmer, and posting ads on monster.com ;) )

Try a Lisp. You'll be humbled. In my opinion, anything resembling C is easy stuff. Pointers, threads ... it's all pretty easy, really. Truly breaking out of that C mold is the differentiator though, in my opinion. Anybody can learn C, Python or Javascript.

Lisp is a huge paradigm shift, but once you get that writing code in Lisp is basically just writing parse trees directly, it's not so scary.

If you don't already get recursion, though, you will struggle. Everyone struggles with recursion at first.

I completely agree.

I think to completely explain this phenomenon, one would have to make reference to people's skill levels, and it's hard to explain in a few words. So I'll just not say anything at all on this subject beyond that.

If only the morons that hire could figure this out. I almost NEVER use Java, but obviously I have and CAN use Java. Yet interview after interview demands that I use Java day to day in my current job to be considered. And this is for data transformation/analysis jobs, in which my use of Python makes me dramatically faster and more productive than my Java using colleagues.

Of course use of the word "moron" might be poisoning the well a bit. "Possessing incomplete information" might be a better way to put it.

Generally I don't understand "C++ programmer" to mean a programmer who is an expert in C++, and C++ only, but rather a programmer who may be comfortable in many languages but has a special expertise in C++. All my team identifies as C++ programmers, but we all speak Python and Java, and some can do Haskell, etc.

It takes a long time to become really good at C++.

"Any serious programmer should be a polyglot by default."

The language itself is not usually a problem for me. I mean, an array is an array in any language. It's the environment. Example: C# (which I think is MS's best work, by the way). C# was easy to pick up; the .NET framework was another matter. That took time. Knowing where the resources are and, hell, even getting comfortable with the documentation and the IDE took time.

So now, when I hire a programmer, I'm not too interested in his language skills. I assume he can code well enough not to embarrass himself. It's his knowledge of the environment that we run. That's what I look for.

>Any serious programmer should be a polyglot by default.

My own experience is that there are PLENTY of programmers who have used, say, PHP and Python, but who you wouldn't want to touch your C codebase in a pair-programming session.

If you've used C++ and C, then sure, you can probably jump to just about any language. Someone who's only used Python (or worse, Java or PHP) will likely be a danger to themselves and others for the first year or three using C.

Starting out with Python as a newbie is disastrous.

I started out with assembly, then went to C and then all the C-family based languages. Non C-family languages are actually rare enough to neglect for 99% of all programming tasks today.

Always remember going from something difficult to simple is easy, but going from something simple to difficult is not easy.

I disagree. My path went like this:

HTML->CSS->php->javascript->C#->Visual Basic->Java->Ruby->Python->C->Go->Scheme->Lisp->Haskell

I believe the "python way" and "pythonic" code that the community pushes is great for newbies, since it gets them thinking about not just writing code, but writing it idiomatically.

This, in turn, makes most think about why doing it the "pythonic" way works better.

>Always remember going from something difficult to simple is easy, but going from something simple to difficult is not easy.

Great. There are plenty of hard working people to make this "piece of wisdom" completely worthless.

>Starting out with Python as a newbie is disastrous.

... why?

>My own experience is that there are PLENTY of programmers who have used, say, PHP and Python, but who you wouldn't want to touch your C codebase in a pair-programming session.

There are plenty of programmers that wouldn't want to touch C to begin with, regardless of past experience. Anyway, why would you ask somebody with no experience in C to develop your C codebase?

>If you've used C++ and C, then sure, you can probably jump to just about any language.


>why would you ask somebody with no experience in C to develop your C codebase?

The message I replied to said:

>It's quite strange to me that people would identify as or look for a "[language] programmer".

So what I was saying was that you would want a C programmer to develop in a C codebase.

>There are plenty of programmers that wouldn't want to touch C to begin with, regardless of past experience.

I wouldn't want to do a lot of original development in C myself. I have used C -- and assembly language -- to do nontrivial things in the past. But I'm ready to be done with C myself.

>>If you've used C++ and C, then sure, you can probably jump to just about any language.


Umm...[citation needed].

I doubt that. Learn a language's syntax and semantics, perhaps, if it's not C++. Learn its standard library well enough to use it aptly rather than reinvent it: less likely, depending on the size of that library. Become proficient in the language's idioms, know its gotchas and the more efficient of the choices it presents: less likely still.

Code in it, yes. And I agree with the polyglot.

>>Any serious programmer should be a polyglot by default.

The quality of a serious programmer is to get things done. Not tool tricks.

When I go out to hire a carpenter, I just ask him to show me some of his work. If I like it I buy from him or hire his services. I don't go around and ask him to demonstrate how he uses a hammer.

Getting things done means using the best tool for the job. Sometimes that's a language outside of your immediate preference, because that's what the best tool/library/platform is written in.

Today I wrote Java (LDAP server plugin) and assembly (for unit testing some code that has to minimally interpret x86 assembly). I'll also occasionally need to write some JS/HTML/CSS for the web, work on systems programming for other architectures (ARM, AVR), write kernel code and drivers (C/C++), write/extend a Python script (HTTP test client), write Tcl (to drive simulavr), ObjC (iOS apps), and occasionally write some functional code for fun (I have no business case for it, sometimes it's just nice to work on something clean).

I've done and do all those things (and more) not because I have 'tool tricks', but because in nearly every case, I needed to use the tool most suited to getting things done in that problem domain.

Programmers that refuse to adapt to the given problem domain do their users a disservice. It's like the old joke about which nationalities run heaven and hell -- the worst possible people set to tasks for which they're innately ill-suited. However, programmers can learn more -- if they're willing -- and adapt to better suit themselves to the problem at hand.

He doesn't use a hammer though, he prefers to use his screwdriver for everything. What you didn't see from the pictures of his work is that they have now burnt to the ground.

Burnt to the ground on purpose because there were too many bugs or it became an impossible to maintain tangled mess of meaningless symbols.

By the way, since he only uses his screwdriver and you project requires nails, he'll take 5 times longer than your deadline.

Your end product will work for about 2 months, then you'll pour the gasoline on it and burn it down yourself.

Absolutely. Even if you're a blub grandmaster, you really ought to know a dozen other things fairly well.

>> Any serious programmer should be a polyglot by default.

I can pick up a new language very fast but won't consider myself a polyglot (know a handful of languages). I do still consider myself a serious programmer.

> Any serious programmer should be a polyglot by default.

Fully agree.

At the end of my CS degree, I was able to code in:

- Pascal

- C

- C++

- Prolog

- Smalltalk

- Camllight

- Java

- Assembly (x86 and MIPS)


Many programmers are religious about their languages.

> I feel[1] that a smart/talented C/C++/anything developer can go from someone who has never seen or heard of golang to a proficient and productive Go developer in a matter of a few weeks, maybe even _days_, if not less.

Agreed! My career is mostly database development. Very high level. Yet I felt comfortable in Go in a few weeks. In a few months I was handling low-level stuff (to me, anyway) like tweaking Go's web server. Go's official online documentation is excellent, although I wish it had lots more examples. The next best learning resource I have on it is the book Programming in Go (also get the Go Programming Language Phrasebook).

Re [1]: I think that's a reasonable statement. My issue with it, however, is that after just a few weeks (or even a few months), said developer might still not know about various design pitfalls or best practices in the new language.

In short, they might not be writing idiomatic code, and you'll end up maintaining it for a long time. IMHO, becoming basically proficient with a language is very different to being able to create software which will scale well to large teams, while also aging gracefully over time.

In general I agree, but I think it's important to note that while a lot of language skills are interchangeable, going from an object-oriented language to a functional one is a pretty big step.

It took me 1 week each to go from 0 to being productive in javascript & ruby and it took roughly the same time for some other developers in my team. I think it's more about knowing where to start, knowing various paradigms, etc than the language itself.

Thanks for the links man.

Yea, michaelochurch has talked about that one before.

wow. these are very good... thanks.

Excellent success story with Go.

Love this quote... "We also weren't sure if we would be able to hire top talent if we chose Go, but we soon found out that we could get top talent because we chose Go."

That's really obvious but overlooked. Developers just looking for their paycheck will only have learnt the established languages.

Developers who enjoy their profession will look to the future, excitement gleaming in their eyes.

I've heard of businesses developing in Haskell with a good amount of success. I suspect, for the same reason.

It could also be said that developers who use established languages are excited to get things done rather than reinvent wheels continuously.

No, it can't. People don't use only Java because they "just want to get things done." They use only Java because that's all they learned, and all they need for their day job. If not, then they will invariably step over languages that are very interesting but obscure.

I'll give an anecdote. I've spent the last 7 years using Ada, a language that is /both/ interesting and obscure, almost exclusively. It's really a fantastic language that is frequently neglected, much like poor Go, but with more merit. Anyway, in preparing the launch of my company's first iteration of its podcast network (http://76streetnetwork.com), I absolutely wanted to use Ada, but the allure of the utterly fantastic tool support and breadth of libraries available for Java had me up and running in a day or two, despite having never written any more than basic hello-world-like crap with it in the past. There's really nothing crazy about Java, you're right. But I live for coding and have a deep interest in quality software development, yet I guess I'm just looking for a paycheck for this side project that will never make me any money! Ironically, Ada is what pays my bills.

Don't get me wrong, I'm not saying you shouldn't use Java--just that "I've only ever used Java" is a pretty good indicator of somebody not being genuinely interested in software development, and different ways to do things. I'd take somebody who learned Ada for fun over somebody who learned Java in school (and did nothing else) for a Java job any day.

The Lisp community used to hold this as a truism: there are some languages so good (powerful, satisfying, and pleasurable to work with) that they become a compelling reason to take a job regardless of the nature of the actual product.

Yaron Minsky of Jane Street said pretty much the same thing about OCaml in one of his talks. And it makes sense. Go and OCaml are not mainstream, so the people who know these languages are likely interested in improving their own skillset, and it makes sense that they would be above average programmers. If they like the language enough, they would of course want to work with it rather than other languages.

“the entire process started up with only a few hundred KB's of memory (on startup)”

There is no way this could be true: last time I checked, a bare minimal Go HTTP server requires at least 2.8MB of memory on 64-bit machine. Are you using the default net/http library?

Yeah, my HTTP server made in Go usually starts up at around 6MB or so.

This story sounds mighty familiar. I realize the topic is about one language over another, and most of the comments here seem to be focusing on that, but there's another part which I find fascinating: the cascading failure mode when they'd lose a web head.

"This would in turn cause the load balancer to think it failed and take it out of the pool, thereby applying the load that the unresponsive server would have been handling to the remaining servers. And since the remaining servers are now handling the load of the lost server plus the spike, inevitably a second server would go down, the load balancer would take it out of the pool and so on."

It was 8 years ago, and the customer was running PHP, not Ruby, but otherwise it's the same basic story that led to the creation of "Surge Protector", or its actual name, "Suicide Pact". http://rachelbythebay.com/w/2011/06/28/sp/

Keeping in theme with the suicide pact, STONITH, the other kind of pact for high-availability machines comes to mind - Shoot The Other Node In The Head.

Why did go "win" over erlang, scala, and clojure? Because it's becoming trendy and "fun?"

And, why are "fun" and "joy" terms ruby bloggers use for languages? Is this code for "easy" and "familiar?" As in, "this language is easy to learn.. It does not require us to learn difficult but mind-expanding concepts to become proficient."

Replacing ruby/rails with something (just about anything!) results in far fewer servers. Isn't this obvious to all? Is this really news to the average HN reader?

If that's an I/O bound API server, bet those two servers could be brought to one in a language/runtime that's more productive but a little less "fun" to learn.

One could adopt some kind of event-based system in Scala or Clojure, but Erlang and Go are alike in being the lone runtimes where running millions and millions of very small messaging processes is no problem at all, and not something one has to work really hard for.

Fun and joy are terms applicable here because the alternative is Erlang. Zing!

Go is rather ideal for these guys' use case: if it wasn't an easy win, given what a snug fit their use case is, there would have had to be red flags all over, as this really is a nearly idealized workload for Go: a hell of a lot of processes which sit around doing nothing, where one occasionally gets a message and forwards it along. Perfect Go story, as Go's lightweight processes (goroutines) are ideal for this kind of Communicating Sequential Processes routing workload.

>One could adopt some kind of event-based system in scala or clojure, but Erlang and Go are alike in being the lone runtimes where running millions and millions of very small messaging processes is AOK no problem for the runtime

Is that really the case? Scala's Akka actor library seems like it would qualify just fine: http://letitcrash.com/post/20397701710/50-million-messages-p...

50m msg/s is super happy making times, I agree. I'd love some absurd number counting on Go's front, but Go is entirely not a word which one can get anything useful out of from Google, so fuck whoever picked that awful name & cursed their language to being unindexed in any useful way. What happened to great names like Newsqueak? :/ Anyways!

I do have a couple things to point out:

First: "JVM settings: -server -XX:+UseNUMA -XX:+UseCondCardMark -XX:-UseBiasedLocking -Xms1024M -Xmx2048M -Xss1M -XX:MaxPermSize=128m -XX:+UseParallelGC"

Second: "The test was run with 96 actors"

I am massively a fan of actors, I hound for good interesting Akka use cases being talked about. I'd also suggest digging into Kilim if one really wants to go batshit crazy in JVM world. But Akka actors are not like what Go does- Akka went crazy fast at tossing messages around with a 2:1 oversubscription of Actors:Cores (50m msg/s was on a 48 core machine, with 96 actors), but this is a highly qualified situation- first it was done with some very well chosen black magic command line spices, and second, more importantly, it was done with a very limited number of actors, something short of the millions or tens of millions Go or Erlang might be ok with while still sustaining stupidly fantastically high message rates. I don't have any idea about goroutines, but in Erlang the time sharing is mad- lightweight processes can be interrupted all over the place, allowing messages to flow in extremely robust & reliable fashion even when all the millions of processes on the system are going ape demanding CPU right now.

I have great respect for the JVM actor crew, for Akka, but read the post mentioned by & preceding the one you linked to- Akka tapped Java concurrency supreme guru Doug Lea to come in and build them an executor engine to make Akka run fast, real fast. And it does, phenomenally well, and you can reap that too, with nearly no real thinking-about-it cost- but if your parameters deviate too heavily, if you happen outside the bounds of this carefully set up JVM environment, who knows. What happens to your message rate when you start trying to use CPU too? What happens to the message rate when you have a million actors? Erlang, and so I've been told about Go, is by nature AOK with keeping millions upon millions of lightweight processes floating about passing each other messages over channels, without needing a masterfully reworked engine (the JVM) to run in those kinds of configurations.

That said, although Erlang & Go are the ones natively friendly to massively concurrent systems and I still charge the JVM with not being designed for this up front, it's clearly capable. At scale it's not about how fast or how good your runtime is, once you are in the running. For most people Akka will work great. If we were to say there's a 30% difference in msgs/s between Java and Go for X situation, and your company is make or break on that difference, your product is broken. Fix it. Learn how to scale out. Find what your development sweet spot is, what makes your workers happiest- be it Akka or Go or Kilim or Erlang or managing your own Node scale out- and dev to that. The runtime is drastically less relevant than how you get many runtimes working together to tackle the problem. And Akka here has some great answers, as does Erlang. Go, otoh, I haven't heard anything interesting out of. Node has some interesting stories and available options, but it's far from ready-to-roll-out and no one has open sourced anything that threatens to gain serious traction.

Haskell's runtime makes for cheaper threads than Erlang. Possibly cheaper than Go's, too.

Are they, in Haskell as in Erlang, pre-emptible by the runtime?

There's certainly some cost in this design decision- more context that has to be swapped in and out, and more state kept- but it's an important capability that allows blind design & use: think Node, where everyone has to keep re-iterating how important it is to keep yielding to the event loop, to not do a lot of CPU work in a handler: irrelevant in Erlang world- in spite of the threading model not being OS threads, Erlang is happy to preempt your lightweight thread on the fly when it sees fit.

> Are they, in Haskell as in Erlang, pre-emptible by the runtime?

Yes. [1]

[1] - http://www.haskell.org/ghc/docs/latest/html/libraries/base/C...

> irrelevant in Erlang world

Exactly. That is a subtle distinction, but for cases where responsiveness and low latency are important, that is key. Another thing Erlang has is isolation of process heaps. If one process crashes, it won't affect others. No shared data structures between processes. It all goes to fault tolerance, but it's also a major win for completely concurrent garbage collection.

> If one process crashes, it won't affect others.

Unless they're linking or monitoring one another, another important property of fault-tolerant and distributed systems.

But that is on purpose. Sometimes you do want that. In other words they are explicitly set up to monitor/link each other.

> But that is on purpose.

Indeed, but I was pointing it out because it's important to be able to do it.

> Haskell's runtime makes for cheaper threads than Erlang. Possibly cheaper than Go's, too.

What's your measure of cost? Because if it's size in memory, goroutines seem more expensive than Erlang processes: http://en.munknex.net/2011/12/golang-goroutines-performance.... indicates goroutines were measured at ~4k; the default start size for an Erlang process is ~310 words, or ~2k on a 64-bit machine.

1kb in ghc, iirc.

Fun and joy doesn't always mean easy and familiar. For instance: Haskell is fun, but it isn't always easy or familiar.

> Why did go "win" over erlang, scala, and clojure?

Erlang sucks at string handling, which is a no-go in web development.

Scala, Clojure: people don't even bother to set up the JVM. Python is popular to some degree because it's installed by default on Linux and OS X.

This could easily be rephrased as "How We Went from 30 Servers to 2: Static Typing and Compiled Code".

I rewrote my DNS checking tool (http://www.dnsinspect.com/) in Go and I saw huge differences in resource usage (previously it was implemented with Ruby on Rails + EventMachine), my memory usage went down from 128MB per background worker to a few KB.

Now I'm able to run hundreds to thousands of concurrent reports using a small VPS, the Go application is using 36MB of RAM (24MB front end + 12MB for background workers). Go language is well suited for my particular case (many concurrent IO operations).

This was my first Go project; in a week I was comfortable with it. I had many alternatives but I really liked the simplicity of Go, the fast compilation and the ease of deployment. Because I missed some pieces from the Ruby world I've combined Jekyll (Compass, HAML, SASS, RedCarpet, etc) with Go. :)

Looks great, bookmarked.

Did you also use the default net/http support libraries?

Yes, I'm using net/http.

I'm missing the connection between poor API performance under Rails and the decision to do a ground-up rewrite. Was the initial performance problem due to an API problem or a platform problem? Was there any profiling of the poorly performing API server? I'd love to hear the story behind that analysis and how it impacted the tale told in TFA.

I ask all this since I've lost count of how many times I've participated in or witnessed an averted rewrite via a good dose of profiling and a few key bugfixes. I'll acknowledge that's not as fun as a clean-slate project, but vast amounts of engineering time were saved.

I agree with the 'Go is fun' (and easy to pick up) sentiment. I spent a few weeks recently writing an NTLM library for Go for a consulting client (I hope they'll let us open source it at some point). All the parsing and bit manipulation was extremely straightforward due to the excellent standard libraries and I was able to quickly move from not knowing Go, to a fully implemented NTLMv1/v2 library!

Another day, another Go PR post on HN.

What is new about porting code running in one of the slowest interpreters around to a compiled implementation of another language?

I was already porting Perl/TCL code to C++ back in 2002 with similar performance results.

Don't kids nowadays learn anything about performance in their CS degrees?

It's an interesting tale, and I honestly have no bias toward Ruby or Go, but the cause/effect relationship is poorly illustrated (we switched languages). We all know runtime / performance / scalability is more complex than that. I'd like to hear where the bottleneck(s) were and how Go solved them.

Last month I picked up NodeJS and built a couple of sites with it. This month, I want to pick up something new. I was hesitating between Python and Go. Can you answer a couple of questions about Go from someone coming from NodeJS:

1- What is the state of the external Go libraries, especially DB (MySQL), caching libraries (memcached), and protocol libraries (OAuth)? Are they 100% stable?

2- How easy are logging and tracing in Go?


Memcached's author (Brad Fitzpatrick) is a Go user, and wrote a memcache client for Go.

Go has a bunch of good stuff surrounding SQL. I haven't used MySQL, personally, so I can't comment on which MySQL driver is best, but I know YouTube uses Go for something related to MySQL, though I can't say for certain exactly what. http://code.google.com/p/vitess

The goauth2 (http://code.google.com/p/goauth2) library is written by Brad Fitzpatrick and Andrew Gerrand (two members of the Go team), and is stable.

Logging is great. Interfaces make it really, really flexible. I'm not sure what you mean by tracing. Stacktraces? Those are easy to retrieve: http://golang.org/pkg/runtime/debug/#Stack

Well, Go has a bunch of libraries for different SQL servers. I wouldn't go so far as to say it has lots of good SQL stuff; it's in approximately the same place as C is w.r.t. databases.

Having to interface with complicated SQL is a reason not to use Go (but not an insurmountable one).

> it's in approximately the same place as C is w.r.t. databases.

I think the existence of a standardized interface like `database/sql` makes things strictly better.

An anecdote: I started making a new web application using MySQL. As soon as I realized the error of my ways (not because of the driver), I swapped out a single import from the MySQL driver to the PostgreSQL driver, and everything continued to work as it did before.

[EDIT] Anyone care to explain the down-votes?

The downvotes are likely because they're reading your MySQL-to-Postgres anecdote as the firing of a shot in an unrelated battle many love to hate, if not a mild troll.

Ah, right. My bad. The folly is still fresh, and there is still a bit of pent-up rage that I hadn't made this discovery sooner.

I think it has lots of good SQL stuff just because I think it laid a nice foundation for developers to build SQL libraries on, and I think there's a lot of potential for using SQL in Go to get dramatically better in the next year or so. That said, I'm not an SQL expert or even very competent, so I'm probably not the best to comment on it--I just included it for the sake of addressing all the points.

I would agree, though--working with SQL is the worst part of the software I'm currently writing in Go. I think that's more a praise of working with Go than a condemnation of its SQL support, though.

Brad Fitzpatrick is not just a Go user, he is a core Go developer :-).

Yeah, I noted that later on in the comment. :)

There's also the mgo library for MongoDB which is probably one of the best Mongo libraries out there (even 10gen thinks so): http://labix.org/mgo

And http://github.com/lib/pq for Postgres.

I'm the author of a memcache client and server in Go [1]. The client is much faster than bradfitz's client when serving hundreds of simultaneous workers - try comparing them with go-memcached-bench [2]. There is also a memcache server [3] written in Go, which can cache much more data compared to the original memcached, thanks to the ybc library [4].

[1] memcache client and server library https://github.com/valyala/ybc/tree/master/libs/go/memcache
[2] go-memcached-bench https://github.com/valyala/ybc/tree/master/apps/go/memcached...
[3] go-memcached https://github.com/valyala/ybc/tree/master/apps/go/memcached
[4] ybc library https://github.com/valyala/ybc

Python and Go are my languages of choice. I have found both share the philosophy that less typing is more and that more standard library is more. For web apps I have found Python has everything I can think of and more as far as third party libraries. As for Go package management and deployment: once I got it, I wondered how I could ever live without it. 1 - SQL is supported by Go; MySQL specifically has a couple of drivers to choose from. After using sqlalchemy [1], everything else I have ever used just seemed like the dark ages. 2 - Go logging is in the standard library, and there are a couple of third party options depending on your tastes. As for tracing, I think you mean stack traces, which are pretty amazing: when my app crashes it is as easy to debug as any VM (JVM, Python VM or Ruby VM).

Python gives me a lot of choices based on the type of application I am building, monolithic (Django) or light (Bottle), but also many package management and environment setup tasks (as with Rails and Java, nightmares, etc). Go on the other hand takes care of all of that; package management and environment setup are almost non-existent. YMMV, good luck. [1] http://www.sqlalchemy.org/

Can someone tell me why they would use memcached over redis? Isn't redis a superset of memcached? I don't see a use case for memcached anywhere in a stack any more.

Has anyone had experience running websites on Go; or more specifically, how do you handle non-HTML content?

I've been considering porting my CMS from mod_perl to Go, but I'm not sure how you work with the other files (CSS et al). I did read somewhere that you can run Go behind Apache, but that it's not recommended.

I've just started creating toy projects in Go. I put these Go projects (and Apache, which runs older PHP projects) behind nginx.

For hosting CSS, you either generate it programmatically and send the right header, or you can use http.FileServer from the standard library [1]. (Surely other approaches are possible, but those are the two I've played with so far.)

[1] https://code.google.com/p/go-wiki/wiki/HttpStaticFiles

Thanks for that.

The issue I have is that I would prefer to keep Apache around if I can (I have a few sites - some of which I host for friends). While my web server does have a few IPs attached, I'd rather not have to buy more IPs just to separate Apache from Go.

And to be perfectly honest, I do quite like Apache. (each to their own I know, but I've had little reason to complain about it).

Right, you're describing my situation exactly.

I used to run Apache solo.

Now I run Nginx, which forwards requests to Apache or to my various go apps based on the domain name. Apache and the go apps just listen on different ports internally. The server only has 1 IP address.

You are an absolute star. If I could up vote you a hundred times I would.

Thank you

I use Apache too (it's all I've ever known), and run a few Go web apps. Everyone lives quite peacefully together. With Apache, I just set up a simple reverse proxy.

Here's an example of one I use to run my own `godoc` server:

    <VirtualHost *:80>
        ServerAdmin admin@burntsushi.net
        ServerName godoc.burntsushi.net
        ProxyPreserveHost On
        ProxyPass / http://burntsushi.net:8080/
        ProxyPassReverse / http://burntsushi.net:8080/
    </VirtualHost>

Nice idea.

I might try that before dipping my toes into Nginx

Yeah, it's a nice holdover. I keep hearing great things about Nginx though. Someday I'll have time to check it out :-)

Seems very complicated. Why not just host the CSS files statically?

My website, niflet.com, is written in Go. It uses Go's web server. While in learning mode for Go I saw that others were using Nginx instead. Now that I've gone through the development process I can see why. Go's web server is basic. Things like compressing files, setting header values and other niceties you have to write yourself or look for code written by contributors [1]. It's not as bad as it sounds though. Go has a function for serving static files like CSS files, which I wrapped in a function to serve a zipped version if the client supports that.

If I had it to do over again I would still use Go's web server, as it seems to work great. My site can conservatively handle 1 million page views hourly, using a cheap box.

[1] https://groups.google.com/forum/?fromgroups=#!topic/golang-n...

I created a project that automatically does compression and supports a simple proxy system [1]. It is a pretty simple system but it allows for dynamic proxying, and the compression is amazingly fast. I have found Go amazingly fast and stable. That it's fun to code in is just a bonus.

[1] https://bitbucket.org/lateefj/httphacks

It's funny that they say they dumped the JVM alternatives like Scala, but won't exactly tell why, nor support their argument with any sort of data/logs.

I am genuinely curious to know why they chose Go over Scala. If it was the syntax, etc., I can partially agree because it's one of Scala's weak points, but then they pitch performance as the main reason, so I'm genuinely curious.

He mentions JVM memory usage.

I think he edited it after I and many others posted similar comments, because I didn't notice it the first time...

I got the same feeling from the odd dismissal of Erlang.

Exactly. You know, my advice would be to take these blog posts with a pinch of salt. There is something called the 'mob' mindset. The mob in general can be easily manipulated to believe in something that is not true. (Remember Julius Caesar?) For example, a year or two ago we saw the whole world of start-ups adopt MongoDB with so much vigor, and almost everyone started writing "Why we moved away from MySQL to MongoDB". And two years later, we have a bunch of posts saying the opposite - "Why we moved away from MongoDB..", etc. The same thing applies to Node.JS and Go too. They are still new technologies, so I think they deserve some time to be tried and battle-tested, instead of zero-data-driven blog posts like these being written at such an early stage. I would have been much happier to read something like "After x years of using Go, Erlang and Scala, here is our comparison of what works well and what doesn't and how each of them performs under different conditions" instead of "Hey, we just reduced our server count to 2 using Go, hence it's better than everything out there.."

Personally, just like you, I think these guys could have achieved more with Erlang or Scala, but given the fact that they chose Go because it works well for their architecture, I am not complaining.

>> "After x years of using Go, Erlang and Scala, here is our comparison on what works well and what doesn't and how each one of them perform under different conditions"


"We decided to rewrite the API" and now it is over 300x faster. Go had nothing to do with this, it was obviously some boneheaded original design. Probably something like fork+exec on a Ruby VM on each request to enter a chroot.

I know HN has a hard-on for Go but come on.

Are you the same asshole making throwaways to shit on all of the Go postings, or are there actually more than one of you going around making assumptions, accusations and spreading falsehoods about Go?

I can't get over how pathetic it is to make a throwaway for that comment, I'm more embarrassed five people validated your comment and attitude.


why we needed 30 servers: Rails

I've read about this domino effect on server clusters in several HN postmortems and I've seen various flavors of it on our own web servers. I'd like to think there's a simple configuration in most servers that prevents 100% CPU utilization from stopping the server from telling its cluster that it's still alive. Anyone have any experience with this?

The problem is queueing at the worker end and not the proxy side.

Using a fair queueing system (such as Nginx HttpUpstreamFairModule) can limit the number of concurrent requests sent to workers so they don't get overloaded. It will make requests queue up at the proxy and then get distributed evenly.

See: recent discussion on problems caused by Heroku's random order load balancing.
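As a sketch (the `fair` directive comes from the third-party nginx-upstream-fair module, so treat the exact syntax as an assumption; backend addresses are made up), the proxy-side setup looks roughly like:

```nginx
upstream app_workers {
    fair;                  # upstream-fair module: route each request to
                           # the least-busy worker instead of round-robin
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_workers;
    }
}
```

Excess requests then queue inside nginx rather than piling up in worker processes that are already saturated.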

In Linux, cgroups (control groups) allow you to allocate resources to groups of processes, which makes it easy to ensure that things like ssh or NRPE or your load balancer ping (as long as you run it out of band) still have CPU/memory even when your main server cgroup gets pegged.

It's not really possible without adding resources (ie a new server).

The problem is, once your servers get saturated, requests queue up, processing slows down and users start to hit refresh which artificially and exponentially increases traffic. Then to compound things, servers buckle, which means you lose a resource right when you needed it the most.

A domino effect is one analogy; another might be how a small hole in a dam can buckle and blow open from the force of the water pushing through.

> It's not really possible without adding resources (ie a new server).

That's not quite true. With the right architecture, performance will degrade gracefully. You'll get slower and slower responses as the load goes up, and eventually start dropping requests, but the servers will not die and there will be no cascading failure.

One way to achieve this is to make sure the queuing happens at the load balancer, and no large queues are allowed to build up in the individual application servers.

Well yeah. You can do all sorts of tricks from TCP/IP hacks to streamline HTTP requests through to disabling queuing entirely. But my point is you cannot entirely prevent your site from saturation without adding extra servers to your web farm (and more so, that siavosh's method of taking servers out of service has the inverse effect of what he was trying to achieve).

Thus all you can do is slow the escalation in the hope that the traffic peaks before your resources buckle.

The only method I'd found that is "guaranteed" to prevent such outages is the use of sorry pages (ie a static page stating "We're experiencing high volumes" which users are directed to if the dynamic page connections are maxed out). However even that is just essentially a prettier version of a page time out - and I mean this in terms of usability rather than technicality. ie the site is still unavailable, but you're killing the connection in a user friendly way rather than allowing connections to stack or just flat out disallowing "> n" active TCP/IP connections.

I think it really has more to do with a lack of spare capacity - each server is so maxed out that it can't handle a 1/n increase in load when one server goes away.

Can you describe your workload a bit as well as any benchmarking you did to determine (and perhaps optimize) hotspots in the ruby code before you started the port?

Personally, I've been interested in moving to Go on a Python-based project of mine. Thus far, I've avoided it because 1) the extra work required to self-implement a few third party libs I rely on and 2) I've been able to eke out sufficient performance using C extensions and Cython.

This first rewrite in Go was for the IronWorker API so all the operations are here: http://dev.iron.io/worker/reference/api/

There were fewer endpoints back then, but you can get the idea. The most heavily used operation is queuing up tasks/jobs.

The science and engineering from Bell Labs behind Go are what make this possible. It is not just a "different language"; it is the carefully selected design decisions and ideas behind it.

It relies on a principle of being good enough, of not stuffing everything in, as "feature sellers" and "buzzword shouters" tend to do.

There is also a lot of work by great minds behind the Lisps and Erlang, with the same principles at their foundation.

"So then the decision came down to which language to use."

I find this tendency to constantly jackknife between talking about languages and talking about frameworks a little dizzying.

Shouldn't the decision have been what web stack to use? I'm sure you could get halfway there with a bespoke Ruby web stack, maybe built on top of JRuby or with a sprinkling of C extensions. You could get the same poor scalability with some badly designed framework on top of Go [1].

There's a lack of meat in the post (hence why people are calling it out as PR). How do these features of Go make it easier to write applications or frameworks that cater to this particular scalability need? Could extra effort be put in up front to get something similar out of Ruby or any of the other languages, or will they always be sub-par? What is it about Go that makes it light up the rest of the stack in a way other languages don't?

Or have you just traded one trendy technology for another because it promises to be the magic bullet for your current itch?

[1] This isn't saying Rails is badly designed.

Hey that's my photo of the GOpher! :)

[1] http://www.flickr.com/photos/jianshen/8080852738/in/photostr...

Hah, nice photo! Are you ok with that? Found it on google images.

It's cool. :)

:) I don't know if the small plastic or larger plushy ones are cuter. Love having them on my desk.

We got a bunch of the little plastic ones sitting behind my desk: https://plus.google.com/u/0/101022900381697718949/posts/1Dbe...

+1 for the cool monitor stands but geez... Invest some $$$ in a real keyboard: scissor switches are the most painful type of keyboard switches ever : (

The Apple wireless keyboard is the best keyboard on the market right now IMO.

OK, I am sold on Go as my next language for web applications. Unfortunately, I can't throw away the existing PHP/MySQL code. Any suggestions on how to integrate Go and PHP into a single web server? Should I be using FastCGI for Apache to call the Go programs?

Out of curiosity, I would be interested to know which version/patches of Ruby you were using. With Ruby I'm used to running out of memory on servers long before running out of CPU. Of course, most of our workloads are IO bound, so that may be the difference.

This was on Ruby 1.9, 2010 era. No patches. Memory was definitely a concern and probably contributed to some of it. Just to note though, after a machine was taken offline by the load balancer, it would generally come back to life.

Memory "probably contributed to it"? You don't even know why the old version was slow but you want to credit golang with making it fast? Hype much? The lack of critical thinking here on Hacker News is astounding.

When running I/O bound workloads you really should be using a multithreading-capable app server. For example, if you're using Unicorn then that is extremely bad for I/O bound workloads (the Unicorn website's Philosophy page documents this under section "Just Worse in Some Cases": http://unicorn.bogomips.org/PHILOSOPHY.html). On the other hand, something like Phusion Passenger Enterprise 4 with multithreading turned on is excellent at handling I/O bound workloads and reduce your memory by a significant factor.

    while true; end

What changed in the architecture? It is rather interesting that by just having a language change you could remove 28 servers from the system.

My guess is that removing 2-5 layers of abstraction can get you pretty far along.

We're writing most of the systems for our company in Go, and of the many reasons, this one ranked highly. Just removing the multiple unnecessary abstractions gets you a hell of a lot of simplicity and performance back.

I work in an area that requires a lot of high performance computing and there is a reason people in this area like code simplicity. Stacking layer upon layer of complex algorithms is a lot more difficult to reason about than having a simple and elegant system that's close to the metal.

There is a recent trend of building more and more complex systems with less coupling and 'good OOP'. That's all good and nice, but simple code is:

  - more maintainable (a complex nest of objects & function calls all loosely coupled isn't)
  - a lot more readable
  - easier to optimize
  - better for your sanity
Some languages have built a culture of extreme complexity and are notable for building huge structures and systems to do simple tasks. Having 50K lines of code dedicated to just inversion of control or O/R mapping makes a problem a lot more complex than it probably needs to be.

"Having 50K lines of code dedicated to just inversion of control or O/R mapping makes a problem a lot more complex than it probably needs to be."

Exactly. That's the terrible thing with this whole "enterprisey" mindset: people are working on medium-sized codebases made of that special kind of hell that Java/C# + ORM ([N]Hibernate) + XML + SQL is, and these apps often run to 200-300 KLOC if not more. Yet what do these applications really do? Actually not very much. Yet these programmers are sure they are working on super-advanced stuff because their codebase is big.

When several companies have reported a 90% drop in LOC by switching to something other than Java, at some point you have to at least consider that maybe most of your Java codebase is hot air.

Abstractions are not necessarily slow. The BSD socket API is an abstraction over sending raw TCP/IP packets to your Ethernet card, but you're not going to get a whole lot more performance out of accessing your Ethernet card directly.

Well, they're always slower than not having them. Having a few layers on top of each other starts to add up. It's very easy to lose track of the bottlenecks of 2 or more layers below the one you're working in. There is a reason HTTP.sys runs in kernel space and BSD sockets don't. And there's a reason both of them are slower than XMLHttpRequests.

PS. The fact that you can point to one (or a few) efficient abstractions doesn't mean they all are.

Just no. Some abstractions are light years faster than not having them. To the point that this is a silly statement. Consider the elementary abstraction of an array. The slightly less elementary one of a map.

More close to heart, the abstraction that source control gives you. Imagine not having git (or whatever tool) and thinking you will compare this weeks code with what was available last week. Now just do the same based on the changes introduced by a single developer. Trivial with the abstraction of source control. Hella tough without.

So, no, just because there are layers of abstractions does not automatically mean things are going to be slow.

All of that said, I can see and ultimately agree with your point. Excessive abstraction can be bad. Unfortunately, I think the jury is out on where the hell that line is. (Likely they are distracted with other pointless questions while others are out solving problems.)

>> Just no. Some abstractions are light years faster than not having them. To the point that this is a silly statement. Consider the elementary abstraction of an array. The slightly less elementary one of a map.

Abstractions can definitely be an improvement in some cases. They just bring extra complexity at a cost. In the 'map' case you pointed out, you now need to take into account the performance characteristics of an array, one or more hash functions and linked lists in their specific configuration.

Even though the lookup of a hashmap is supposed to be O(1), you now have to run a hash function for every lookup. That's bound to be slower than the simple integer addition we were doing before on the flat array.

Besides that, if it's a bad hash function (or if the data is bad), performance will easily go down to O(n).

>> So, no, just because there are layers of abstractions does not automatically mean things are going to be slow.

That's not what I'm saying, is it? I'm making the point that it'll be slow/er/ because of the overhead of the abstraction. I didn't say "less productive" or "every abstraction is bad"; I'm pointing out that each extra layer comes at a cost and that the extra cost gets buried the more layers you add.

I'm not going to go into the source control argument because I'm talking runtime application performance and mental overhead, not developer performance.

But that is my point, if used correctly, the abstraction just makes it easier to do what you want to do quickly. Consider, math is basically abstracting away the tedious nature of some physical process. I mean, if you know it takes 2 eggs to make 4 waffles, and want to make 82 waffles, the abstraction of math makes this much quicker than just doing it to determine how many eggs you need.

Basically, I object to the idea that abstractions slow you down by their nature. I agree that poor or unnecessary ones do so.

I chuckled at "just a language change".

What could be more expensive than rewriting all you code? For many Rails web apps the cost of additional servers is far below the cost to rewrite things in Go.

The cost of running 28 servers is not negligible compared to developer time. If a server goes for, say, $150 per month that's $4200 wasted each month. And contrary to the rewrite, the server cost is perpetual, so the rewrite amortises eventually.

Obviously, the economics of a rewrite depend very much on the size and complexity of the code base, but it didn't sound like they had a huge code base at the time of the rewrite.

Thus the improvements were not a result of using Go (which is a fine language). The improvements came from replacing a bloated system with a leaner one. You can do that with many other languages. There is no magic in that.

It would be interesting to hear some of the reasons you went with Go vs Clojure (and also the other languages you mentioned)

I know I'm late to the party, and no one will read this, but ...

    "... Java derivatives like Scala and Clojure ..."
Someone help me out here: In what way is Clojure a Java derivative?

Perhaps in the sense that they are languages that target the JVM and that rely on interoperability with the Java standard library to bootstrap people making apps with them?

I can't see how it could mean anything else, but I find that, well, bizarre. Oh well, different point of view, I guess.

> with Ruby.

Title should have been: "How we went from 30 servers to 2: Replacing Ruby"

How about BDD, test-driven development and quality assurance? Does Go provide an ecosystem that supports the agile refactorings that are common in lean startups?

You can say anything bad about the performance of Ruby and Rails etc. but rspec, cucumber, capybara, vcr, factorygirl are really important features to start from zero and reach a viable product.

> test-driven development

`go test`.
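For anyone who hasn't seen it: a minimal example (`Add` is just a placeholder function under test). Save as e.g. `add_test.go` and run `go test`; no external framework needed:

```go
package main

import "testing"

// Add is a stand-in for whatever function you're testing.
func Add(a, b int) int { return a + b }

// Any function named TestXxx taking *testing.T is picked up by `go test`.
func TestAdd(t *testing.T) {
	if got := Add(2, 3); got != 5 {
		t.Errorf("Add(2, 3) = %d, want 5", got)
	}
}
```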

> Does Go provide an ecosystem that supports agile refactorings that are common in lean startups?

Go has a compiler and is statically and strongly typed. Which automatically makes it at least an order of magnitude easier to refactor safely.

Not really sure about the other stuff you mentioned. I've never even heard of BDD before.

I've heard this argument before, e.g. "we don't need to test because the compiler catches errors." The compiler catches only the most trivial of error conditions. Unfortunately it doesn't address faulty logic/behavior.

I rather like what Go has set up for tests. They made it as small and functional as they could.

Out of curiosity: how does a language specifically support "agile refactoring"? What features permit (or hinder) stopping every couple of weeks and removing the accumulated WTFs in the codebase?

Want to buy a left handed stapler?

Go's interfaces make it really, really easy to change things with very little code changing. Go's testing tools are superb.
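For example, here's a sketch (with a hypothetical `Notifier` interface) of how interfaces keep call sites untouched when an implementation changes:

```go
package main

import "fmt"

// Notifier is a hypothetical interface; any type with a matching
// Notify method satisfies it implicitly (no "implements" keyword).
type Notifier interface {
	Notify(msg string) error
}

type EmailNotifier struct{}

func (EmailNotifier) Notify(msg string) error {
	fmt.Println("email:", msg)
	return nil
}

type SMSNotifier struct{}

func (SMSNotifier) Notify(msg string) error {
	fmt.Println("sms:", msg)
	return nil
}

// alertOps depends only on the interface, so swapping the concrete
// type behind it doesn't touch this function at all.
func alertOps(n Notifier) error {
	return n.Notify("deploy finished")
}

func main() {
	alertOps(EmailNotifier{})
	alertOps(SMSNotifier{}) // swapped with zero changes to alertOps
}
```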

Can you blog about it in the future? Couldn't find good resources and tools about testing in Go. Especially with concurrency this seems to be very important.

From a testing point of view this looks pretty weak. I would not bet my business on something that doesn't have a testing culture.

What? Go check out any moderately popular library in Go (including packages in the standard library), and I bet you're fairly likely to see a `*_test.go` file somewhere (perhaps more than one).

Unit testing in Go is refreshingly simple. And it comes standard with the Go toolchain.

I wouldn't say Go doesn't have a testing culture. On what basis do you make that assertion?

I write better tests for my Go code than for any code in other languages that I write. But maybe I just think in complementary ways to Go's testing philosophy.

The talk that I gave just before the one linked above talks about testing: http://vimeo.com/53221558

What makes you say it doesn't have a testing culture? I think you're just trying to justify not using Go and sticking to your choice.

I haven't come across any Go libraries without tests.

* as far as I can see very few go projects use CI (e.g. travis)

* gospec seems to be dead/not widely used

* I don't see projects measuring their code coverage

* As pointed out in my first comment, I couldn't find high level testing tools and none of the repliers could name one, too.

If you don't have tests to measure function and performance, any new technology is just playing another round of Roulette and/or "bike-shedding". Most projects and companies — to my knowledge — don't fail because of 30 servers for Rails, but because of the inability to iterate (to find a profitable business segment) without investing/burning bulkloads of money on re-engineering. Therefore any serious business use requires a deep evaluation and comparison with existing methods, especially in the quality of service sector.

The Agile/XP/TDD/BDD movement was not created because of "bad and slow" programming languages. It's because failure happens all the time, whether you like it or not. It's about dealing with the omnipresent risk and changing specs.

Agreed, but something you also have to worry about is your stack becoming unmaintainable. Sometimes it's worth re-engineering your stack. For instance, even the simplest programs have a "time until too complex to safely be maintained".

I suppose you have to make that time far enough away that it won't affect your business. If you go past it, your good developers will leave and you'll likely only get lower-class developers who have to take the job.

As a result your entire stack will suffer. The business side is important, but the entire engineering side can die if you neglect it too much.

Complexity must be managed.

That said, you are correct about:

* Few projects use CI. I think this is due to the fact that most of the libraries being developed are done by very small teams that see it as too much overhead.

* Same with code coverage.

* Most find the testing that Go uses for the language itself to be more than enough.

I think that Go has a much different attitude towards development than you are used to.

I can't quite quantify why and I could be wrong, but I suspect not all of the testing methodologies you talk about are necessary.

Would be great for someone else also experienced with Go and experience with the type of testing environments rmoriz describes to chime in!

Hmm I see a lot of opportunity

Go doesn't have any mocking framework like Mockito/Java or Rspec mock. You will end up writing a ton of code yourself if you want to mock something simple.

Uh what? No, that couldn't possibly be less true. Have you ever heard of "interfaces" in Go? They're kind of an important feature.

I've only dabbled a bit but I haven't seen a DSL that easily creates a mock that implements all the methods of a given interface, verifies that methods were (not) called with expected arguments, and returns or throws canned values. It's not difficult to write mocks from scratch, but it's very repetitive and hard to skim what the test does.
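To illustrate the kind of hand-rolled mock being described (all names here are hypothetical):

```go
package main

import "fmt"

// Mailer is a hypothetical interface a service depends on.
type Mailer interface {
	Send(to, body string) error
}

// mockMailer is a hand-written mock: it records each call and returns
// a canned error. This is exactly the boilerplate being complained
// about; a mocking DSL would generate the recording/verification code.
type mockMailer struct {
	calls []string // recipients of every Send call, in order
	err   error    // canned return value
}

func (m *mockMailer) Send(to, body string) error {
	m.calls = append(m.calls, to)
	return m.err
}

func main() {
	m := &mockMailer{}
	// Code under test would receive m wherever a Mailer is expected.
	_ = m.Send("ops@example.com", "hi")
	// Manual verification of what was (not) called:
	fmt.Println(len(m.calls) == 1 && m.calls[0] == "ops@example.com") // prints true
}
```

Writing one of these per interface per test scenario is where the repetition comes from.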

You nailed it!

That's what the other gophers say too. Java has interfaces too, but it also has Mockito that makes testing much easier. You should look into it. Mocking library will help reduce boiler code. You would know that if you had used one.

Go interfaces are not the same as Java interfaces. Go interfaces are implemented using structural sub-typing.

>You would know that if you had used one.


My bad. :)

Uh I think I love it. Easy to learn language and awesome performance.

I wonder how far they could have gotten with a more bare-bones Sinatra app on JRuby. It's still not nearly as fast as Go obviously, but take away Rails and some of the more dynamic usages of Ruby, run it on a faster VM, and I'd venture to guess you might have been able to go from 30 servers to 10-15?

That's still not 2 mind you...

There's simply no technology that makes you switch from 30 machines to 1 (you say you keep 2 just for resilience).

Obviously there must be other architectural changes in play here (I'm guessing RoRs thread-per-request vs some kind of event loop on Go?). Please be more specific.

Go is good at keeping all the cores on the machine pumping. Maybe they switched to machines with more cores.

Why Go over VisualBasic?

There are a lot of things that can be said about the "average" programmer in such-and-such languages, what they do and don't do, what habits they have, etc. But I'm not interested in working with merely average people.

That's right. People like you and me work with the top 5% of programmers! It's because I'm smrt.

You have an error in Opera

'SyntaxHighlighter.config.bloggerMode = true;'

Undefined variable: SyntaxHighlighterError

Go 1.1 RC is just around the corner in April. A lot of good stuff in Go 1.1.

If you look for a Golang programmer today, it's more likely you'll find a true programmer instead of a me too programmer. So if you're a true programmer, learn Golang. ;)

Now I know that dynamic languages such as Ruby and Python are not known for performance, but from the benchmarks I have seen, Go isn't exactly the fastest language available either. After reading this article I am more apt to think that something is/was very wrong with their Ruby code than to think that Go is that much better. Maybe my lack of experience with either Go or Ruby is showing here, but Ruby can't be that slow, can it?

p.s. come disrupt our business: with nearly no hardware, no costs, and no maintenance burden you too can pass OMG huge amounts of message traffic.

this is one bold as fuck post.


>> Q. How does Dart relate to Go? Dart and Go are both language projects started at Google, but they are independent and have different goals. As a result, they make different choices, and the languages have very different natures, even while we all try to learn from each others' work.

Hmm, don't think I want to bet the farm on that.

Don't want to bet the farm on what? Do you mean Dart or Go sticking around? Go is used extensively throughout Google. It powers a significant DB layer behind YouTube. It serves up all of Google's static downloads, and several employees hint at other significant projects they can't speak about.

I've only dabbled in Dart, I can't speak to how vital it is, currently it only seems viable when translated to JS or when using in a server-side VM, which is certainly an option if you want.

(sorry, I originally had written Google where I meant YouTube, see "Vitess" for more info)

We decided to rewrite the API. This was an easy decision: clearly our Ruby on Rails API wasn't going to scale well, and coming from many years of Java development, having written a bunch of things that handled tons of load with way fewer resources than this Ruby on Rails setup, I knew we could do a lot better.

Was anyone else expecting this to be a boast from EA about how they cut their operating costs for the SimCity launch?!?

man, you just made me worry very much now - we are making a taxi dispatching API using RoR - and we will launch it soon

Why worry? If you can handle the initial load, you can still optimize later on, rewriting some functions or parts of the system with something faster.


don't worry... our Ruby servers kept us churning for a long time.

If perf is that much of a worry, you can look at JRuby, that will get you a bit (or a lot) farther from what I've seen.

I wish Go had named arguments like Python and Scala. Reading and debugging code could be so much simpler.

Why Go over Erlang?

(Note: I'm a big Erlang supporter and 90% of my startup's code is written in Erlang)

Erlang is not memory efficient (compared to Go? I don't have benchmarks to back this up) AND the learning curve for developers not exposed to functional programming (which is a lot) is quite a bit higher than it is in Go.

Hmm, wouldn't it be better to rewrite the Ruby services in Go, the actual workers, and not the entire API? I don't think the API HTTP requests were your main issue here...

Would be great to hear more specifics of the rewrite. What did the API do, what pieces were built in RR, what got replaced in Go, etc.

At least as much as you're willing to share. :-)

The problem was Ruby, not Rails. Did you ever give JRuby a chance?

Rails is not for APIs; why not use EventMachine?

Why was Go chosen over NodeJS ?

Mostly because I can't stand Javascript. It makes me cringe just thinking about it. Go is much nicer to work with.

Hey OP! I appreciate your sharing. Since you came from ruby and we're on the topic of the language itself, I'd appreciate your impression of how well Go supports collections. Is there or could one write something like http://underscorejs.org/? Can you do this kind of thing?

[1, 2, 3, 4, 5].reject {|i| i < 3}.map {|i| i + 9}

I did the go tutorial the other day and I became a little worried that one would not be able to do this kind of thing without a bunch of unwieldy declarations. I think that would be a showstopper for me.

Thanks for any insight you might have!

Go's lack of type parametrisation makes many higher order functions really awkward. I'm no expert on Go, but below is my attempt at writing a generic Map function. As you can see, the code of the Map function itself isn't so bad (though there's a lot of noise in the declaration). Using that function however is really annoying, as you need to convert from the slice you have ([]string, []int, etc.) to an empty interface slice, and in the function you give to Map, you need to dispatch on the element's dynamic type. I think this is why Go people prefer to just use for loops and not do the whole higher-order combinators stuff.

    package main

    import (
        "fmt"
    )

    func Map(fn func(interface{}) interface{}, xs []interface{}) []interface{} {
        res := make([]interface{}, len(xs))
        for i, x := range xs {
            res[i] = fn(x)
        }
        return res
    }

    func main() {
        words := []string{"foo", "bar", "baz"};
        // First we need to create a version of words that has the []interface{} type.
        interface_words := make([]interface{}, len(words))
        for i, w := range words {
            interface_words[i] = interface{}(w)
        }
        fmt.Printf("%v\n", Map(
            func(x interface{}) interface{} {
                switch s := x.(type) {
                case string:
                    return s + "!!"
                }
                return interface{}(0) // Need to please the compiler.
            }, interface_words))
    }

Here's a more idiomatic go version:
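Roughly like so (a reconstructed sketch of the idea: declare `words` as `[]interface{}` up front and skip the conversion loop):

```go
package main

import "fmt"

// Map applies fn to every element of xs, same signature as before.
func Map(fn func(interface{}) interface{}, xs []interface{}) []interface{} {
	res := make([]interface{}, len(xs))
	for i, x := range xs {
		res[i] = fn(x)
	}
	return res
}

func main() {
	// Declaring words as []interface{} directly avoids copying into a
	// second slice just to satisfy Map's parameter type.
	words := []interface{}{"foo", "bar", "baz"}
	fmt.Println(Map(func(x interface{}) interface{} {
		return x.(string) + "!!"
	}, words))
}
```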


what you don't realize (apart from the fact that go doesn't terminate lines with semicolons since 2009) is that you don't need to create a copy that's typed []interface{} because []X already satisfies interface{} and you can use reflection to handle it. if a seasoned Go programmer wanted to write a somewhat generic map function they'd do something similar to this:
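Roughly along these lines (a sketch using `reflect`, not production code):

```go
package main

import (
	"fmt"
	"reflect"
)

// Map accepts any slice type via interface{} and uses reflection to
// iterate it, so callers don't have to convert []string to
// []interface{} first.
func Map(fn func(interface{}) interface{}, slice interface{}) []interface{} {
	v := reflect.ValueOf(slice)
	out := make([]interface{}, v.Len())
	for i := 0; i < v.Len(); i++ {
		out[i] = fn(v.Index(i).Interface())
	}
	return out
}

func main() {
	words := []string{"foo", "bar", "baz"} // a plain []string, no copying needed
	fmt.Println(Map(func(x interface{}) interface{} {
		return x.(string) + "!!"
	}, words))
}
```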


It doesn't look pretty and you're right, most go programmers don't have to write code like this. It is much more common to define a common interface for something that's map-able and expect callers to implement that. examples in sort.Interface.

In your more idiomatic version, you actually went ahead and changed the declaration of words to be []interface{}, but you may not be in control of the type of words (for instance if it's the return value of a function). This is why I did the explicit conversion. As for the reflection, it's pretty ugly.

Also, what about semi-colons?

i understand why you did the explicit conversion, the idea i was trying to relay is why i undid it...

nevermind the semicolons, 'twas a joke :)

Thank you for your answer. If you're right and this is the Go way to do it (and I have no reason to doubt you), it's exactly what I was afraid of. I can see why for loops would just read better. Feels like a step in the wrong direction though -- at least for me.

Go's lack of parametric polymorphism is an abundant source of Internet debate; proponents of Go say that it keeps the language simpler and find that they don't miss it in practice. I won't get into that debate (it's too late anyway), but I'll say that if you are writing an application, that kind of polymorphism might not be as useful since you can know all the types of your application. Why bother making a generic function if you know you are only ever going to use it with strings? When you are writing libraries however, you don't have this sort of freedom.

Go does not have the fascination with one-liners that Ruby has. Which is why I like Go.

In my experience, people who write in Go value code clarity over terseness.

If you're looking for more elegant methods of expressing this in a higher performance runtime, here are two examples.

    Erlang:  [I+9 || I <- lists:seq(1,5), I >= 3]

    Haskell: [i+9 | i <- [1..5], i >= 3]

Like Go, Erlang and Haskell both have strong concurrency stories. For an I/O-bound server in a production environment, Erlang is a very safe choice.

Compared to these list comprehensions, that ruby example looks unwieldy.

Even python looks more readable to me: [i+9 for i in range(1,6) if i>=3]

Go has first-class functions, so you could write a library do the same thing with, yes, a few more declarations around it. In your example, .map { |i| i + 9 } is .map( func(i int) int { return i + 9 }), or .map(myFuncDefinedElsewhere) in most real-world scenarios.

As a commenter upthread pointed out, Go and serverside Javascript are aimed at very different use cases and audiences.

I thought this would be a fun exercise, so I implemented the Go equivalent. Can anyone get it to fewer lines? It abandoned all pretenses of readability long ago.
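It was something like this (a simplified sketch; `Reject` and `MapInts` are per-type helpers, since without generics each element type needs its own versions):

```go
package main

import "fmt"

// Reject returns the elements of xs for which pred is false,
// mirroring Ruby's Enumerable#reject.
func Reject(xs []int, pred func(int) bool) []int {
	var out []int
	for _, x := range xs {
		if !pred(x) {
			out = append(out, x)
		}
	}
	return out
}

// MapInts applies fn to every element, mirroring Enumerable#map.
func MapInts(xs []int, fn func(int) int) []int {
	out := make([]int, len(xs))
	for i, x := range xs {
		out[i] = fn(x)
	}
	return out
}

func main() {
	// The Ruby chain: [1, 2, 3, 4, 5].reject {|i| i < 3}.map {|i| i + 9}
	result := MapInts(
		Reject([]int{1, 2, 3, 4, 5}, func(i int) bool { return i < 3 }),
		func(i int) int { return i + 9 })
	fmt.Println(result) // [12 13 14]
}
```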


Updated to generalise a bit more. If this ever came up in code review, I'd probably facepalm pretty hard.


Ha. Yeah, the map() and reject() function defs are fine but the actual usage is a little eye-bleed inducing :)

So now that we've established Ruby is better at being Ruby than Go is, I wonder if there's an actual use case for this kind of thing, so I can demonstrate the "Go way" of handling it...

Wrong and you know it. Go does not have generics so you have to use void* (interface{}) and casting. It ends up being about an order of magnitude uglier and more verbose. Which is why map, fold, etc don't exist in Go.

You go fanatics are incorrigible.

Seriously? Is it just a coincidence that throwaway accounts named "TakeTwo" and "TresAmiga" are both shit-posting and dismissing all Go users? Jesus.

What he's describing is far more than possible using interfaces. I have plenty of code that does so.

How is that "wrong and he knows it"? You're worse than any Go fan here. Where the hell does someone get off being vitriolic about a language or someone's preference for that language? Pathetic; at least have the balls to post under your real account.

That having been said, as an "incorrigible Go fan", I would love to see proper generics. It's something I'm enjoying in Rust. It's also something that everyone is open to adding in Go 2, so it doesn't necessarily have to be a permanent absence from Go.

There are some idiots who troll the #golang-nuts freenode channel occasionally with a bunch of spam saying 'node is way better!'.

I assume they're not gainfully employed, or else they would understand that the two languages/platforms have very different goals and principles.

`chord`? Yeah, I was in there and was one of the people feeding him/her, I'm ashamed to say. I'd had a few beers and wasn't on my best behavior.

Chord was on about "Rails" is better, couldn't fathom a web world without MVC and failed to understand that Rails was just a framework on Ruby and that one could write similar helpers in Go.

It was the perfect example of Dunning-Kruger. Painful to witness.

I think there is honestly something wrong with that guy in a mild "losethos" sort of way. He started spamming me trollish nonsense in PMs because I dared utter a single line while he was active.

Something about golang just seems to attract a certain flavor of nutters.

He stalked me too. Mocked my parents, my sexuality (lolironic), my job, etc. Classy fella.

I'm starting to feel so left out. Why do crazy Go-haters never harass me? Am I not good enough?

Although I personally like CoffeeScript, I must say that Go fills me with joy for some reason.

It feels "fresh" and lightweight to the beginner, and reminds me of TurboPascal in some ways.

Why would NodeJS be the most obvious alternative to Go? The two seem more dissimilar than similar.

No Love for Lua and LuaJit.

Man, what is it about Go that just brings out the pricks of HN?

There are over three throwaway accounts created exclusively for this thread to shit-post about Go, to steal an old term I have never, ever used on HN before.

I'm not exactly sure, I've noticed the same thing.

Go appeared a few years ago, quickly became pretty well-known, and continues to grow in popularity. Maybe it's seen as "threatening" to older projects like Ruby and Python? The most vitriolic comments always seem to come from Ruby, Python, and Rust partisans. Of course, I'm not sure why they're so threatened by it, they themselves explain in great depth how the lack of generics or exceptions will prevent anyone from writing any real code in it, or even trying to in the first place!

great stuff!!
