How We Went from 30 Servers to 2: Go (iron.io)
688 points by treeder on Mar 12, 2013 | 495 comments



It's not really "Go" that makes the difference; it's how the runtimes and frameworks are used and/or made.

Frameworks on top of frameworks, all over-engineered, with poor understanding of what the system actually does, result in super slow apps that you generally go to AWS to scale.

It's not the first time I've seen people reduce a dozen servers that were "always maxed out" to a couple of servers "that can barely feel the load".

Yeah, throwing hardware at the issue is fine and all, but we've been way over the limit too many times in too many directions.

Go provides a clear start/APIs/framework, and the language enforces good habits. Also, it doesn't have things like global interpreter locks.

The lesson? Stop using cool tech just because there are blogs about it. UNDERSTAND the tech before using it.


I'm very sympathetic to your point about tech-on-tech-on-tech-on-tech hogwashery promulgated by open sourcers & vendors proselytizing their wonderful solutions, and the other acculturation factors that permit developers approaches other than trying & seeing & exploring-

That said, you are wrong.

I'm not a Go user and I don't care for it, but Go is fundamentally different from and better than most runtimes. Go has its own green threading, which allows it to start new tasks using extremely small amounts of memory, and it can switch between tasks without the normal cost of context switching full-on threads.

That is a huge difference. Node has some elements in play here, with its event loop and callbacks, but really the only thing in the field that compares to Go and is in use is Erlang, and there's no frameworking on top of Erlang that wouldn't cause most PHP and nine tenths of Python developers to curl into a weeping ball at the Eldritch horrors they'd been exposed to: even ASM code is not nearly so effective at horrifying, being clearly a low-level, somewhat harmless thing.

Being able to kick off small processes that can sit around for a really long time awaiting data- that's a model of computation we haven't done in decades, never with this mass popularity- there's a TLA for this kind of processing- Communicating Sequential Processes- and Go is it. It's entirely different from the procedural model, from the Python and Perl and PHP scripting, where concurrency is a carefully waded-into thing- Go is a concurrent-first runtime, about ongoing, enduring, concurrent processes, and that's huge, HUGE I say, huge.


> I'm very sympathetic to your point about tech-on-tech-on-tech-on-tech hogwashery promulgated by open sourcers & vendors proselytizing their wonderful solutions, and the other acculturation factors that permit developers approaches other than trying & seeing & exploring-

Are you studying for the SATs, or just trying to make your argument sound more compelling through the use of unnecessarily complex words?


Yet different, and sometimes more complex, words can have entirely distinct connotations that much more effectively describe the message the author is trying to convey. Sometimes simpler words are just that—too simple to properly describe the concept the author would like to capture, and the emotions he would like to encode within it.

That's not to say complicated is always better. Like in programming, the best set of tools is the one suited to the job.

In the end, you consider your audience. Can I add more to this sentence to better capture the feeling I'm trying to convey, without detracting from its ability to be understood? The parent most likely decided that on a forum like HN, the answer was yes.


It's how I feel, I'm so very very sorry it's tripped your acceptable tolerance levels. I'll bite my lip and not add "you prat" to the end. Thanks for your considered moderation, and I see your point.


My only point is that being obtuse is not the best way to get your point across. I assume your goal in expressing your opinion is to convince others, and to do so it helps to be clear, concise, and direct.


That's a very narrow assumption as to why someone would express their opinion. The OP might be sharing their experience (in what was a humorous way) purely to be sociable in this discussion...

Not everyone is out to convince others.


Righto, he could be here to make himself feel good by using words he doesn't usually get to.


Dare I call into question what you are here for? How do you feel your contribution- attempting to heap more shame- plays for the whole audience? Is what you are doing of value?


On a forum like HN, where the audience spends a great deal of time reading and engaging in self-improvement, choosing words that are not simplistic should be fine.


I'm going to go out on a limb and guess that even in a community where "acculturation factors" is something people actually say (crit lit? sociology?) it's something that causes people to play that community's equivalent of buzzword bingo.


Ia, Ia, Erlang Ftagn!

I agree though; I just wish Go had been developed from Erlang as a runtime.

The Erlang virtual machine and systems are beautiful things - I used them for some wire-level work on a project and bitfields are awesome, and it provided a clean interface to a more 'normal' language to display to the user, fail-fast fault tolerance on a public safety project, neat stuff.

It's just erlang-the-programming-language that has all the Prolog warts that scares people, I think.


Per your queueing, I'll toss in the Elixir chip- http://elixir-lang.org/ - it's a rather rote, boring, routine language that we all can be semi-familiar with, with some aberrations carried over- pattern-matched dispatch is still there, for example, making people wonder wtc. The web page highlights its advancements, its "features," which is mildly amusing to me in that the biggest draw is that it's an Erlang that feels familiar to us Algol-derived unwashed/don't-know-anything-but hordes.

Kind of as a foil to that normality, it's similarly remarkable to me that the Prolog asserted-messages/facts & reactive processing cycle of Erlang isn't itself better captured and shown off. It's kind of a middle-stage introductory tutorial, after you've made it through guarded dispatches, yet by itself I think it'd make a great "start with the hard stuff" intro tutorial to Erlang, and I haven't really seen it presented independently, first.


> It's just erlang-the-programming-language that has all the Prolog warts that scares people, I think.

I must be strange but I actually like Erlang's syntax. Its pattern matching is really hard to beat. Also with Erlang, it is not just about the language unless one just wants to learn academic FP or Actor model concepts -- it is about the framework. Debugging, tracing, distribution all those come as part of the package.

Think of Erlang as a tank. It might not be as pretty and slick as a new Mercedes, but when you go into battle, you need to learn to operate a tank; a shiny Mercedes will only take you so far.


Another one here... I like Erlang's syntax as well. Of course, we could have some things better (like record syntax), but it's really not bad at all.

I think most people have curly braces fetish (we call them "tits" at times where I work). If it doesn't have "tits", no thanks, it's ugly and bad.


There has been talk about frames and maps for a while. I noticed there is an upcoming Erlang Factory talk from Ericsson about this feature. I suspect those might end up used in place of records in some cases.


Correct me if I'm wrong, but the Haskell concurrency model is a very efficient one which is also based on lightweight threads. Moreover, if CSP is all you care about, you might want to have a look at CHP.


Go is... fundamentally different from most runtimes? Probably. Better? Maybe. Better than the comparable ones? I think it's losing that war. You don't even need a special runtime; it can help, but I think the amount it helps is overstated in Go's case.

Concurrency isn't that hard in practice. If you're willing to forgo automatic multiplexing Perl will give you Go's concurrency model in a library written ten years ago, and it works as naturally as it does in Go. Python seems to have similarly good support but it probably doesn't look much like Go. I'm not sure what Lua's doing these days, but in the past I've found concurrency dead simple. If you happen to be writing C, time-slicing the work isn't even that hard. If you know enough about Go's concurrency to tell whether a single goroutine will cause its scheduler to deadlock all by itself, you can easily write concurrent programs in all these languages.

I know this will be a controversial claim, but all of these solutions are easier to reason about than Go. Usually not by much, and it has primarily to do with the nature of its multi-threading support, but that's just the concurrency model being weakened by the support for convenient parallelism.

There is a case where Go is best: when you like writing Go more than the others. You can guarantee a little less about concurrency than most, but at least you can usually guarantee more about types than most. If you like writing Go you will either luckily avoid the problems or develop habits that prevent you from producing concurrency bugs, possibly even without realizing it.


Go's standard lib is built around its communications and concurrency primitives. Sure, one can extend other languages with new capabilities to take them into Go's realm, but that implies a movement away from the stock standard core, whereas Go users, at their core, share a common massively concurrent message-passing beast of a runtime and a concurrency-friendly standard library that they are all unified behind. One will never mobilize behind a concurrency library in the kind of way doing it up front will accomplish. Go has a large community of practitioners all writing massively concurrent code. That is huge and has never been done before.

But let's go look at patterning: look at Node: everything is hinged around re-using Node's core patterns, callbacks and, every now and then, slower events. There is promises work about, but it's not used heavily: indeed it'd be hard to, with intermodule work, because A) it's not the standard lib and B) promises aren't standardized. People seem to prefer something I loathe- the horrible Async library- which gives them a bunch of composition tools to make worse the awful, awful crime that is callbacks: because callbacks are what Node is, people pick up tools to do it more, to a further extreme, rather than going to the foundation & reshaping the landscape. (PS: Callbacks are A) awful B) actually & entirely the right decision.)

I was reasonably seriously into Perl, but I don't know what Perl CSP-style library you're alluding to that A) you seem to indicate people used & B) got on par with Go 10 years ago. But I'm interested to hear of it! <3 me some CSP.


> but that implies a movement away from stock standard core [...] never [etc.]

I wouldn't have said anything if I didn't disagree very strongly with these sentiments (see below about growing your own CSP/etc., and later about Coro in particular.) But first, about Node: let's not look at Node (unless you really want to, but as long as you understand it has nothing to do with my comment.) Problems with JS are usually founded on people complaining they don't like callbacks (translation: "Go is better because I like it better"—in which case please dispense with the cognitive dissonance and just be honest and say as much♯) and are inapplicable to most other languages regardless.

♯Edit: That's not necessarily directed specifically at you, but people often vilify every aspect of things they don't like. I'm not convinced that's what you're trying to say about JS.

I don't view concurrency as this mysterious thing; if you know CSP you can write it in any language, usually fairly naturally. In languages like JavaScript, Perl, and Tcl, you can even make it look like it was already part of the language, tucking away calls to the scheduler so the user doesn't need to write them explicitly (heck, you can do that part in C... but... don't), just like in Go (save for pure-computation loops, also just like in Go.) All Go really does over the general case is add implicit calls to the scheduler (not nicely possible in some languages) and treat the scheduler as a second-class primitive feature.

To go on about a unified implementation reaches my ears much like an argument that you'd better use `for` loops and never `map`s (guess what you can't do in Go.)

I wasn't going to bring this up, but since you're familiar with Perl you will be less likely to assume limitations. The lib is Coro. People do use it, but it is controversial because of #dramaintheperlcommunity (#whatelseisnews.) I don't know how popular it is overall, but if popularity is an issue for you let me know because it's a stupid argument to get into and I don't want to have it without realizing it. People rarely pay attention to concurrency anyway, but it's been around forever.

It is not a parallel library, for which I am thankful. (It's not incompatible with parallelism, and importantly: it doesn't make Go's mistakes in that area.) It amounts to a scheduler, facilities for spawning coroutines, wrappers to make calling the scheduler implicit (e.g. at IO points) and do async IO, and every single thing you would need to make Perl automatically do the things Go automatically does. It will look exactly like Perl with Go's concurrency model; in broad strokes you do the same things, in the fine details you use Perl modules and methods and a new keyword or two.


"if you know CSP you can write it in any language"

If you don't know CSP you still have no choice but to write it in Go: that's the core unity I've been trying to describe, something the entire standard library is built around, like callbacks in Node. I don't see that you've added anything to your case for other languages in this recent reply: I think this excerpt captures well how your line of thought remains focused on what is possible, while ignoring my core argument that it's all about the prime interface of the Turing machine before one, not what they set up for themselves on the machine.

"In languages like JavaScript, Perl, Tcl, you can even make it look like it was already part of the language,"

And newcomers may be able to imitate/cargo-cult your styling blindly without straying too far off the gods-sworn path, if they are really lucky!

But rather than risk it, making a language CSP first would, IMO, be a clearer way to keep unity.

I really don't care what can be done with a Turing-complete machine: I care what use of that machine looks like to the important audience: its practitioners, all of them, practiced and experienced, and those with other backgrounds seeking only practical results.

Go has created an entire language all of whose practitioners trade in CSP. I'm happy to de-rate from a language to a massively successful widely known active (let's say +30k active developers) shared and intercompatible practice of doing CSP (libraries or what not). Until I hear that challenged, my argument about Go being fundamentally different for having a runtime and standard lib based around CSP stands untouched, unless you have something specific there to contend.


And newcomers may be able to imitate/cargo-cult your styling blindly without straying too far off the gods-sworn path if they are really lucky!

I guess they should count themselves lucky if they know what an API is at all. C'mon this is a 101-level topic. You can't expect everyone to suddenly forget how to use an API (with or without native syntax) just because it's for concurrency... or do you?

As for Coro, it doesn't matter how many people use it; you can write the exact same things you would write in Go and they will work in the exact same way. It's not cargo-culting, it's writing CSP, which you still have to actually write in Go if that's what you want, and it's perfectly possible not to.

The runtime distinction isn't very important either, in fact when I hear "runtime" I ask "why not library?" Some things are impossible in libraries but CSP is not among them, so I'm still asking "why not a library?" The scheduler, probably the most interesting part, can always be implemented as a library. Can you say anything particular to Go's runtime that makes it so special? I can't think of a single thing. The scheduler itself is about as boring as it gets. Not that there's anything wrong with that—it's not a difficult subject after all.

As far as the popularity contest goes, we might as well be talking about an iteration protocol. There are many different ones but you'd rarely notice it. The most commonly used ones are relatively less-than-great. You can always use a for-loop, like you can always write a coroutine or time slicer in the worst-case scenario. It works, and very well.

The other thing is CSP. Up to now, I wasn't worried about the concurrency model in particular, but now you're saying we can't use actors, agents, etc.—it must be CSP. That's BS. CSP is nice when you have to build it yourself, but it's not all there is and it's no easier to reason about than other good options. In fact I would mostly only choose it when I had to build it myself or support for something else wasn't available, which is understandably common. Really though, this sounds like a deceitful way to rule out Erlang and friends.


    And newcomers may be able to immitate/cargo-cult your styling blindly without
    straying too far off the gods-sworn path if they are really lucky!

  I guess they should count themselves lucky if they know what an API is at all.
  C'mon this is a 101-level topic. You can't expect everyone to suddenly forget how
  to use an API (with or without native syntax) just because it's for concurrency...
  or do you?
Taking Perl as an example, they probably know how Perl does file I/O. Yet if we're switching to a concurrent system, the entire practice they have of doing file I/O needs to be re-grokked. Your proposal, that learning an API is all it takes to master a form of concurrency, is a fucking joke, sir. Concurrency is greater than proceduralism: it shapes systems of code, whereas writing code and hitting APIs is a lower-order operation: shaping code.

  As for Coro, it doesn't matter how many people use it, you can write the exact same
  things you would write in Go and it will work in the exact same way. It's not
  cargo-culting, it's writing CSP, which you still have to actually write CSP in Go
  if that's what you want, and it's perfectly possible not to.
And how many CPAN modules can I download which I won't have to hand wrap in Coro? How many modules will expose coro based concurrency patterns?

I don't know about you, but most of my work is not writing code: it's using code. You miss this essential fact again and again.

What is possible is entirely not at all my interest here. I am interested in the practice of writing code. My sole contention against you is that the availability of libraries is irrelevant, and my basis for this point is that I agree with:

  The runtime distinction isn't very important either,  [[snip]]
Thank you for repeating what I said last time in a dumbed down way. I continue to agree we have Turing complete runtimes: we can do anything. Yes, the runtime distinction is largely unimportant (although performance, natch). Adoption is. Availability of code is. Having other practitioners is important.

  in fact when I hear "runtime" I ask "why not library?"
Indeed, you've done it three times now!

And every time you've gotten a response that agrees with your conjecture that it might be possible!

So why are you still making this same uncontested assertion!

So maybe you need to find something else to do, other than re-describe how possible libraries make it to be Go-like or otherwise concurrent, because- with some niggling performance issues- there's been naught but agreement, and you've avoided clashing every time with my fundamental point: that Go has a practicing community with many thousands of libraries written for it, that the libraries you are talking about don't, that using your libraries will create a schism between you and the other practitioners of the parent language and the modules written for it, and that Go users have no impedance when working with one another w/r/t their concurrent systems.

  Some things are impossible in libraries but CSP is not among them, so I'm still
  asking "why not a library?" 
FFS I ended my last post with a Turing equivalency argument that could not have been more explicit about agreeing with you. It's not about what is possible, unless you are a god and live forever and don't mind doing it all yourself. If that's in fact the case, fuck you, I'll see your ass at the fucking fields of Ragnarok. Now get off my lawn or start saying something you haven't that might be useful.

  The scheduler, probably the most interesting part, can always be implemented as a
  library.
Yes, Turing completeness. Good and acceptably performant? Meh, maybe, often actually, sure. I really couldn't care less: I think you are so very far off in the weeds trying to discuss this, as what can be done is the most irrelevant topic when dealing with Turing machines, aka programming languages.

  Can you say anything particular to Go's runtime that makes it so special? I can't
  think of a single thing. The scheduler itself is about as boring as it gets. Not
  that there's anything wrong with that—it's not a difficult subject after all.
You are a masterful troll, sir. The runtime itself is not novel. The important thing I highlighted from the beginning about Go is that its standard library is written for a distinct communications-oriented set of concurrency primitives, and thus all uses of the language flow from this reference library's standard, and that creates an interesting practicing community of people all using the same communications-oriented concurrency constructs. Thanks for making me spell out the value again; I hope I'm clearer this time.

  As far as the popularity contest goes, we might as well be talking about an iteration
  protocol. There are many different ones but you'd rarely notice it. The most commonly
  used ones are relatively less-than-great. You can always use a for-loop like you can
  always write a coroutine or time slicer in the worst case scenario. It works, and
  very well.
???

  The other thing is CSP. Up to now, I wasn't worried about the concurrency model in
  particular, but now you're saying we can't use actors, agents, etc.—it must be CSP.
  That's BS. CSP is nice when you have to build it yourself, but it's not all there is
  and it's no easier to reason about than other good options. In fact I would mostly
  only choose it when I had to build it myself or support for something else wasn't
  available, which is understandably common. Really though, this sounds like a
  deceitful way to rule out Erlang and friends.
Please cite where I went hardline and demanded CSP as the only acceptable communications/concurrency primitive; I'll amend that such that your argument here is thoroughly unnecessary. I certainly don't see myself as opposing what you are saying, but you didn't actually tell me what the clash was that aroused this contention- all you said was "The other thing is CSP. Up to now,"- and that doesn't actually inform me about what changed your mind & made you think I was hardlining for any specific thing; so, I'm not sure how to clarify and reinforce my agreement with you, and my willingness to embrace any practitioner's use of a good, solid, helpful concurrency construct, be it of the communications variety (which I do myself enjoy, as concurrency in my artisan's mind is modeled as non-locality) or otherwise.


Taking perl as an example, they probably know how perl does file i/o. Yet if we're switching to a concurrent system, the entire practice they have of doing file i/o needs to be re-grokked.

I see you didn't look into Coro or read much of what I said. You do IO in the regular way and the scheduler is automatically invoked.

(If you want to know what needs wrapping, just use your head. If it does IO, make sure you have that kind of IO wrapper. If you screw this up your program will still work.)

You are a masterful troll sir. The runtime itself is not novel.

You yourself stressed the importance of the fundamental difference in the Go runtime, two or three times I believe. It seemed to have something to do with your argument against libraries.

The one argument you claim to have left (userbase) is the one you've been ignoring my comments about since the beginning. I'm not interested in arguing it anymore, but if you're interested in it, feel free to re-read my comments on that matter.


From the Coro intro docs-

Using only ready, cede and schedule to synchronise threads is difficult,

Coro supports a number of primitives to help synchronising threads in easier ways.

Coro adds threading to Perl but falls short of being a concurrency practice- it's tooling for doing concurrency.

Go establishes two fundamental primitives, goroutines and channels, and gets rid of old-worn notions of threads.

This isn't a novel runtime, because it's been done- in Newsqueak, just to name one- but the entire platform built from the ground up to mobilize not around giving you the tools to find your own concurrency solutions, but around a deliberated answer to all concurrency problems, a masthead banner saying "THIS IS THE ANSWER"- that is the differentiation and novelness I'm ascribing to Go.

That approach, over our discussion, I've come to realize is more characteristic of the standard library of Go than it is the runtime. That said, you're holding my feet awfully cruelly to the fire and I take issue with you using this newly gained outlook in such a mean-spirited and cruel fashion to stab at me.

Talking with you is incredibly frustrating, as you toss away my points with a single dismissing wave and do not cite material things; you pick tiny little fragments to quip at as springboards to launch into whatever you wanted to talk about anyway. As I said before, masterful trolling; it's very hard to hold a conversation with you. I'd beg of you to yield, because otherwise we might indeed find ourselves on the field of Ragnarok, at that end that never comes after all infinity.

You also seem very ill informed about Coro- which you champion-

Fortunately, the IO::AIO module on CPAN allows you to move these I/O calls into the background, letting you do useful work in the foreground

So, no, _You do IO in the regular way and the scheduler is automatically invoked_ is a bullshit claim; this has all the weaknesses of Node.js's event loop, and the need to write small things that yield, or here in Coro world, cede, regularly. And if you use a CPAN module that does block, you are toast, making this fundamentally incompatible with everything else ever done in Perl in a very serious way.

_a simple module that implemented a specific form of first class continuations called Coroutines. These basically allow you to capture the current point of execution and jump to another point_ is in no way a parallel to the runtime of Go, because the stdlib of Perl is not meant to operate in a non-blocking way, and Go's is. That, I think, is safe to say.


You also seem very ill informed about Coro

On the contrary, I've used it. And Go. I know what it's like in practice, which you do not, and this makes your claims sound very ignorant to me.

So, no, _You do IO in the regular way and the scheduler is automatically invoked_ is a bullshit claim; this has all the weaknesses of Node.js's event loop, and the need to write small things that yield, or here in Coro world, cede, regularly.

This is proof of your ignorance. It isn't like working with an event loop (but it is compatible with them, ultimately giving you a better option for parallelism than Go.) I was going to mention how rarely `cede` is necessary, but I had honestly forgotten what it was called. You need it exactly as often as you need to write `sync` or whatever it is in Go. Seriously, you can write the exact same program as far as where and when calls to concurrency facilities appear, and you will get the exact same concurrency profile in your program.

I'm not sure what you think the wrappers are for, but they introduce non-blocking behaviour into the library calls you make by, as I've said a few times now, invoking the scheduler—that's how this all works after all. There's no compatibility problem either, that's BS you came up with like just about everything else you've said about it.


Oh, you've used it! Well, I take back all the other points I made! And the documentation from Coro that I cited explaining how it works, how it requires different I/O patterns, and that it's just a GOTO that doesn't do anything about the synchronous long-running-code problems one might have when trying to interface with the rest of Perl! Your message is read loud and clear; I get where you are coming from now!


The fact is everything you said about it is wrong, and you drew the wrong conclusions from the docs. I would forgive a person for making that mistake from a lack of experience, but if you want to play it the other way I could just accuse you of lying.

The best part is when you say it's just a GOTO that doesn't do anything about synchronous long running code problems—how do you think Go's concurrency actually works under the hood? It works the same way—and has the same problem you describe.


_it's just a GOTO that doesn't do anything about synchronous long running code problems_

_how do you think Go's concurrency actually works under the hood? It works the same way—and has the same problem you describe._

I think Go, in most uses, makes use of a thread pool to decouple its green threads from the OS threads, such that a single busy green thread does not seize up the rest of the runtime, and I think Go manages the assignment of green threads to OS threads in a way that is hands-off for the programmer by default.

I also think one gets a runtime in Go where I/O is multiplexed on an event loop, such that most common syscalls a Perl program might make don't hold up execution, and are instead handled in a non-blocking way by default.

Please let me know what else I can clarify so that we can find agreement.


That's an acceptable approximation for now at least, although Go doesn't know which threads will hold up the runtime.

I'm not sure what's left to clarify on your part. I think you misunderstand what working with Coro is like. In practice the Perl code that holds up execution is the same code that holds up execution in Go, things like:

    while(1) { $i++ }
have the same problem in both languages. Throw a `cede` (or `sync`) on it and the problem goes away. Might want to unroll it a bit first.

I'm not sure what there is to agree on. I could perhaps eventually convince you that working in Coro is very much like working in Go, or you could do that on your own time if you really want. But you seem to have an issue with a concurrency implementation that isn't widely popular, and that sounds like a matter of taste to me. To my ears it sounds like an incredibly stupid thing to get hung up on—CSP is so easy—but if that's your bag I'll only ask you not to use it to convince me of anything.


It's like this-

Concurrency is the root of all code construction, at a more meta- level than writing code. Layering it in with libraries is feasible, but you've already imposed some structure from the code, then you're adding on top of that structure an underlying structure, and that's... well... that's something Go avoided doing.


You probably stopped halfway through my comment, because I did write that Go had advantages (over the likes of Ruby & friends, anyway).

But it's the wrong way to think about it; that's why it's in the second half of the comment. Otherwise, people will just use Go because they're told it's the cool kid on the block. WRONG way of thinking.

That is all :)


"Also, it doesnt have things like global interpreter locks." is basically the only thing you said that is anything like what I'm asserting. And it demonstrates knowledge of a thing I contrasted Go against, without demonstrating knowledge of what Go is and why it is different. Further, let me add that I don't think "clear start/APIs/framework" captures the essence at all; there is something deeper than framing and end users here: there is something deep in the bowels that is using the machine unlike how all else do, and that is sorcery that is deeply important.


Alright - note that I generally attempt to make one point per comment, hence the succinct part on Go. Your comment goes deeper, and that is good. I don't believe we have diverging opinions, however, hence my reply, as you did write that I was "wrong". But being wrong is difficult if we (mostly) share the same opinion... in my opinion ;-)

Now then again, I do believe a clean start helps, and it's very possible that I didn't properly "capture the essence", or at least not well enough.



They went from a language with a factor of forty cost compared to C++ to one with a factor of four cost. They saw a factor of ten improvement in CPU utilization. Seems like all Go really brought to the table here was speed. I don't see any evidence that its unusual features were a factor, besides its base-level increased efficiency.


Nice rant, and I mostly agree with it, but there are a few slow things about the ways that some languages are usually implemented that don't really have much to do with over-engineering. Let's take this line of Python code, for example, and look at how the CPython interpreter runs it:

    return x + 42
This turns into the following bytecode instructions:

    LOAD_FAST       0 (x)
    LOAD_CONST      1 (42)
    BINARY_ADD
    RETURN_VALUE
First we get the x variable from the local namespace. This is an array of PyObject pointers; the LOAD_FAST operation simply does an array lookup and puts the result on the stack, incrementing the reference count. Pretty fast. Next is LOAD_CONST, which is even faster; it takes an already-allocated PyObject and puts a pointer to it on the stack, incrementing the reference count. BINARY_ADD removes two numbers from the stack, dereferences the pointers to get the integer values, allocates a new PyObject with the resulting integer inside, bumps up its reference count, decrements the reference count of the two operands, and pushes the result on the stack. Finally, RETURN_VALUE jumps back to the caller of the function.

In Go, the corresponding code would be compiled to two, maybe three machine code instructions. There are good reasons why Python does it this way, but it does suffer some inherent slowdown.


A more interesting comparison would be the PyPy-generated native code for the expression versus the compiled code from Go. Comparing an interpreter with a compiler is generally uninteresting; the compiler almost always wins.

If you look at the code that comes out of various modern JITs you often will find really interesting differences in the native code they produce, even for simple constructs like arithmetic. Type checks, barriers, bailouts, etc - in addition to more mundane differences, like one using SSE to do floating point and the other using x87, and one JIT having a better register allocator.


> dereferences the pointers to get the integer values

Doesn't it find a method of x that implements addition?


Nope! There's a branch in the interpreter that checks if both operands are Python's built-in int objects. If they are, and the result can fit in a C int without overflow, then the interpreter adds the numbers directly. This is by far the most common case.


How can the interpreter know, at bytecode compilation time, that x is bound to an int? Surely it generates two code paths, one for the "common case" and another for when x is a general object with an __add__ method?


The test is in the interpreter, not the compiler to byte-code. Every time the + is evaluated, the interpreter checks for the common case before resorting to finding the first object's method.


Okay, I think I get it now. The check you refer to is done inside the BINARY_ADD implementation. That's why the bytecode is the same for both cases. Am I right?


Yes, as you say, it's not known at compile-time and is checked at interpretation time, each time. http://hg.python.org/cpython/file/b49971a1e70d/Python/ceval....


Thanks. And wow, the CPython source code seems to be very much readable. I've been meaning to dive further into Python internals for a while, maybe this is the time to do so.


Interesting, thanks.


> It's not really "go" that makes the difference. it's how the runtimes and frameworks are used and/or made.

Exactly.

For the type of product they sell, if the processing time isn't pretty much all going to the actual running of the customer supplied job code and network latency, they're doing something very wrong. The language choice shouldn't be much more than noise in a setup like that. The framework choice, on the other hand... Rails is hardly well optimised for a high traffic API that can't readily be cached.

I've done queuing and job processing in Ruby. Spending 90% of the time in kernel space handling network traffic or waiting is not hard to achieve for a pure queuing server. If you add delegating jobs to external code to do the actual work, you should spend less time in userspace in the code actually processing the messages.

If language choice nets you more than 10%-20% in a setup like that, something else has changed beyond language.

And that's fine. But a post focusing on the language change then makes it seem like they didn't understand what caused the performance problem in the first place.


Maybe I'm just a new and terrible programmer, but I feel like if you wait to fully understand whatever tech is out there in terms of what to use, you'll never get started.


Let's be new and terrible programmers together, because I feel like the only way to fully understand whatever tech is out there is to actually use it.


In reality you have to follow the buzz in the tech to understand new technology. If something sounds like you could use it, take a few days out to toy with it. If you still want to use it, try it out for a small project and the rest will come automatically if it is the right choice.


> It's not really "go" that makes the difference

But from what I've seen and read about Go, it does seem to have things in it that do make a difference.

For example, from day one Go was designed with concurrency in mind, and that's a big plus.

To see how this can help this Rob Pike video does a good job of showing off Go's version of concurrency:

http://blog.golang.org/2013/01/concurrency-is-not-parallelis...

Now, you can implement concurrency lots of different ways in many different languages. But I've used a lot of languages, and I've never seen concurrency done as easily as shown in that video.


> But I've used a lot of languages, and I've never seen concurrency done as easily as shown in that video.

So I take it you never used Erlang.


Erlang is big and complicated compared to Go. It's not easy, and besides, it is functional (not easy), which already makes it unusable by 90% of people.


> Erlang is big and complicated compared to Go.

No. OTP is big (arguably), Erlang isn't even remotely. It's also completely irrelevant to OP's remark.

> It's not easy, and besides it is functional (not easy)

That doesn't make sense. What's hard in Erlang? That you don't have a for loop?


OTP is big and complicated... Erlang really isn't.


Go's definitely picking up momentum. I know that Mozilla is shifting much of its services infrastructure to Go.

We're porting over our Arabic sentiment engine, currently in Python/Cython to Go. If you're dealing with simple data structures, going from Python to Go is almost a line-for-line port, but the performance benefits are, of course, massive. Our benchmarks show a 40x speed improvement so far.

Lastly, for anyone thinking about taking the plunge, use Go tip (from their source code repo) - don't use their "stable" releases. They fix bugs so fast you'll always want to be current.


Do you have a citation for Mozilla using Go? I would be a bit surprised given their (Mozilla's) development of Rust.


Rust and Go are not similar or competing. They are both new, but aside from that the differences are huge. So if a project uses one, that doesn't mean the other was a possibility for that project too.


They're not similar at all, but both were designed to replace C++. That sort of means that they're competing for the use-case of "things I would write in C++ if it wasn't a god-awful nightmare to do so".

From there, though, you're right that they take very, very different approaches.


> They're not similar at all, but both were designed to replace C++.

In practice, the main use for Go appears to be like this story - an alternative to RoR, Node, Python/Django, etc. - and not a C++ alternative. Go and Rust may both start from the idea of doing C++ or something similar "right", but end up in very different places.


>They're not similar at all, but both were designed to replace C++.

Well, Rust WAS designed to replace C++, both in the intention behind its design and the way it was done.

Go's designers had this vague intention of "replacing C++", but the way they designed the language, they only really replace Java or some scripting language. Which they admit (they get mostly Python etc. converts rather than C/C++ converts).


The Rust team agrees: https://github.com/mozilla/rust/wiki/Doc-language-FAQ (control-f for 'this Google language')


Indeed they are not similar, but wildly different tools are often used in similar or competing use cases (e.g. the company in the blog post converting from Ruby to Go). I had asked because, reversing the usage: do you think Google is likely using much Rust, since it is developing Go?

Just to clarify, I see a bright future for both languages.


It would be interesting to see Rust used in Android in some of the parts that are now C or C++.


The way I would put it is that Go was designed to incorporate some of the simplicity and speed of C as well as the ease of use of Python, while Rust was designed to combine the speed and flexibility of C++ with the safety of Haskell.


... and the concurrency of Erlang. (and OCaml might be more apt than Haskell).


> Rust and Go are not similar or competing.

Rust is more "kitchen sink" (done as elegantly and cleanly as possible) and Go is more minimalist.


Being "kitchen sink" isn't a design goal of Rust. Rather, Rust is a lower-level, safer language. Go is higher-level.

Rust strives to be as minimalist as possible without sacrificing the goals of low-level control over memory and C++ performance (optional GC), memory safety, race-free concurrency, and type safety (no null pointers).


I don't think he meant kitchen sink, more that there are fundamental aspects from many parts of languages that are bound together well, which makes

> Rust strives to be as minimalist as possible without sacrificing the goals of low-level control over memory and C++ performance (optional GC), memory safety, race-free concurrency, and type safety (no null pointers).

somewhat funny.


Why funny? (coming from someone who's thinking about giving Rust a try)


He said "rust doesn't have the kitchen sink", and then listed the kitchen sink. Rust is awesome, you should use it.


Alright, I'll download the compiler/etc right now :D


Go is from 2009. Ruby is from 1995. Although it took about 10 years for Ruby to really start to be popular in the English speaking world.


Rust, not Ruby.


Sure, check out their github pages (here's a few):

* https://github.com/mozilla-services/heka-mozsvc-plugins

* https://github.com/mozilla-services/heka

My friend works for Mozilla - that's how I knew.


Thanks waterside81. Can anyone explain the difference between Mozilla and Mozilla Services? A cursory glance suggests some of the GitHub repositories overlap.


Mozilla Services is the team within Mozilla that builds/runs much of the backend infrastructure, e.g. the firefox sync servers, marketplace servers etc. The existence of separate "mozilla" and "mozilla-services" github projects is largely a historical accident, since different teams started moving to github organically at different times.

(Source: I work for Mozilla, on the Services team)


Source: I work for Mozilla-Services


Just curious, what were the tradeoffs for profiling the Python code and rewriting slow parts in C vs total rewrite in Go? For high level languages, the usual argument has been to rewrite just the slow parts in C or some other low level language.


And that's what we did during our first pass: identify the slow parts and port them to C. This gave us about 30% better performance. Without getting too much into the nitty gritty of our specific case, this wasn't enough.

We needed some massive speed improvements, I'm talking in the order of 100X faster. The nature of our algorithms was such that they could be done in parallel (i.e map/reduce) - an ideal candidate for Go's goroutines. We actually tried to make it parallelizable at first in Python, using gevent (and even just multiprocessing) and the results were not great.

One other aspect that really guided us towards Go was memory usage. Python was just sucking up so much memory whereas our Go implementation thus far is so much thinner.


One other aspect that really guided us towards Go was memory usage. Python was just sucking up so much memory whereas our Go implementation thus far is so much thinner.

It's funny that you mention that, because in computationally intensive work memory management is an issue with Go. E.g. I wrote a maximum entropy parameter estimator in Go, which was terribly slow until I circumvented the garbage collector by preallocating a huge block of memory and doing my own management. In C malloc() and free()-ing had practically no overhead. After putting the Go garbage collector out of the game, the Go version was approximately within 2x of the C version.

I am interested how Go gave you one or two orders of magnitude speedup, while rewriting hot spots in C didn't...


> I am interested how Go gave you one or two orders of magnitude speedup, while rewriting hot spots in C didn't...

parallelization


One can easily do that in C with OpenMP.


If you can get away with only parallelizing the C code that'll work, but if you're replacing individual Python functions calls it won't so much. You could use a lot of Python processes, but that might be awkward depending on how much they need to coordinate and might lead to too much memory usage.


One thing you don't get in the C/Python combo is total static typing. The other big win of Go is elimination of a lot of runtime errors.

So it comes down to which will be more of a win for you and your product. Go has a big advantage here IMO though because of static typing AND performance increase.


Like any other language with static typing and available compiler implementations.


Is that relevant though? The parent is asking about a comparison between python+c and Go for the OP's software, so doesn't it make sense to answer to that?

Of course, you are correct. Rewriting in any other language with static typing or an available compiler implementation would also provide the benefits of static typing and detection of runtime errors.


Because those benefits are not some special feature of Go.

On the other hand, I think the younger generation of programmers lacks exposure to programming languages, like we used to have in the old days; hence they make quite limited comparisons.


Hi, I'm in the market* for an Arabic sentiment engine for some academic research, can you contact me? My email is in my profile.

*Note that "in the market" doesn't mean I have a lot of money to spend :D


What is an "Arabic sentiment engine" :-D? Like textual analysis as to whether a piece of Arabic text is positive or negative?


I went briefly through the comments; it seems that no one is bothered by the lack of detail in this post. Things are not logical at all, almost as if it was written by a PR agency in charge of promoting Go.

Except it is an established business, so I would assume things are true, but the real motives are hidden. Go is obviously a good language for some specific things, but it's not like Ruby is pure trash. How come you did 'everything' in Ruby, and then didn't know about EventMachine, but just had to rewrite things in Go?

And rewrite happened overnight, right?

A lot is missing from this story. I will definitely look more into Go, mostly because someone compared it in comments with TurboPascal, and I have fond memories of Borland tools, Pascal especially.

Go and Rails are so different that there is almost no point in comparing them.


> but real motives are hidden

Could you elaborate on this please? Are you referring to the OP's motives of publishing the article? Or of switching to Go?

> Go and Rails are so different that there is almost no point in comparing them.

Except when one solves the same problem better than the other.


- They did not say why they did not choose one of the other alternatives.

- They did not say how they used the concurrent features of Go, only mentioned that they were there.

- They did not say how long it took to rewrite.

- They did not say what they changed in the API.

I'm not saying the story isn't true, but for a true story it lacks a lot. You could summarize the article with the title and you won't be missing much.


They didn't say why Ruby was using a lot of CPU either. Some basic investigation into the root cause might've revealed some intractable problems that are tied to Ruby (or at least the MRI interpreter) like long GC pauses (something which can be mitigated by going to a different interpreter like JRuby versus a complete rewrite in a different language)

Instead, it's "OMG we're maxing out CPU, time to completely rewrite our app in a different language"

And how big is this app that it needs 30 servers? We probably serve an order of magnitude more traffic where I work than these guys do (just guessing) and don't need that many.


I was referring to switching to Go. I am not disputing the languages' performance; it's just that the whole article is very strangely written.


I was originally very excited about Go when I first learned about it. But then I got tired and frustrated quickly after having to listen to the other Gophers telling me that I don't need this or that feature because there is a better way to do it in Go.

Like.

I don't need exceptions because Go function can return multiple values.

I don't need a mocking framework like Mockito because Go has interfaces.

I don't need an interactive debugger because I can debug with command line using gdb.

I don't need named arguments because I can instantiate a struct and call my function with it. (Have you seen Ruby or Javascript code? Almost every function takes 'opts' as a single argument. Go is probably going down this path too.)

Then I learned about Scala. I'm not saying that Scala is better than Go. However, it has everything that I need. :)


Comments like this are weird. Go does not like exceptions. They've built a whole idiom around error returns. When you said, "I need exceptions", what did you expect them to say? "Oh, sorry, we forgot that, we'll get right on it"?

I also don't understand what you mean by "interactive debugger". What is gdb if not an interactive debugger? Gdb is my go-to debugger for Ruby as well.


> Gdb is my go-to debugger for Ruby as well.

Unless you are debugging the ruby process (and not the running script), what does gdb buy you over ruby-debug?


Something like RubyMine IDE debugger or Firebug javascript debugger.

If you are a Ruby developer then I highly recommend trying out RubyMine. I promise that you will never debug Ruby code using gdb again!


I got all excited when I saw your comment and went and checked out the latest RubyMine video on YouTube, because having something like Firebug for Ruby would be awesome. But it looks just like what gdb gives you when being driven by a graphical frontend like DDD or Emacs. Actually it looks a lot less powerful than DDD. What am I missing?


> Something like RubyMine IDE debugger or Firebug javascript debugger.

That's not for the go team to develop.

> If you are a Ruby developer then I highly recommend trying out RubyMine. I promise that you will never debug Ruby code using gdb again!

I haven't used RubyMine, but I have used Visual Studio and Eclipse debuggers, and I still debug using gdb, ruby-debug, pdb et al.


It's pretty typical of the Plan9 mentality. See this post on the Acme text editor: http://9fans.net/archive/2008/08/134


Is that a parody?


wow, worse than the irc python channel.


Why would you say something like this? The IRC Python channel is a kind of state machine which goes like:

- How do I do this?

- Why do you want to do this?

- Because XXX

- Then that's not really what you should be doing, it's dangerous/inefficient/etc., do this instead

The post about Acme could have been a one-line answer: "If you want to do any of this, just use a better editor".


There are many times when I said "I need to do X, I know it's ugly, but I've considered all the alternatives and, for reasons too lengthy to go into right now, I just need to do X. How can I do it?" only to get people treating me like an idiot who doesn't know anything, and only giving an answer after I've responded to every single one of the alternatives with my reasons.

Sometimes, people aren't new.


It's a state machine without a direct transition from "question" to "answer" :)


We also checked out both Go and Scala and picked Scala due to better IDE support (IntelliJ) and existing Java lib ecosystem. It also is typically faster than Go (but does use much more memory): http://benchmarksgame.alioth.debian.org/u64q/benchmark.php?t...

Overall very happy with our choice. I will probably learn Go too, but I also don't really like the way it does error handling and the tooling support seems limited vs what I can get for the JVM.


Yup. This is my exact conclusion too.


Scala fan here. But still, I can sympathize with Go developers on getting rid of exceptions.

I don't know if Go would quite feel the same, but Scala's Try, Pattern Matching and Curried functions are stupid nice and make handling errors without exceptions feel very natural.

Took me a couple months to get there, but there you go. "Folding" out different state feels like I'm writing more robust code. Where I'm coding for a range of possibilities instead of just the golden path.

Even when I _am_ focusing on the Golden Path, features like Try[A] force me (well, warn me, at least) to at least stub out the other options, so robustness feels like something composed rather than something I have to hammer in. I can focus on making the ideal X, and handle the possibility of needing a notX as alternative realities, instead of trying to build a cyborgXYZ that can handle every potential state. If that makes any sense.


Same thing happened to me; I even made some small contributions around 2010.

Then I discovered D and Rust are better languages for my purposes.


Just a guess, I'm guessing python isn't your favorite language either? They seem to be rather unopinionated languages that let you have access to lots of tools. I hear they manage to keep it pretty simple despite having a "kitchen sink", so definitely interested in both.


Python is my favorite scripting language, not for application development.


D syntax is nicer than Go's, but its also dead compared to Go. Where are you going to put your chips?


> D syntax is nicer than Go's, but its also dead compared to Go.

Because it is full of Google PR and happens to have some cool names on the team?

> Where are you going to put your chips?

In languages that the designers don't throw away the last decades of improvements in language design.


Yes, because it's full of Google MONEY! It's about the libraries...


Which will get canned like many other Google projects when they get fed up with it...


In light of recent events, I'm agreeing with you...


tl;dr: I tried Go, but it wasn't like Ruby, so now I'm trying Scala.


Ha. Is there a high performance, compiled, and concurrent friendly language that looks like Ruby? Please share. :)


Rust might be your best bet.

I think better advice for you is to just stay away from opinionated languages. I don't think Rust falls into that category, but I'm not sure.


Looks like Ruby? You mean syntactically like "end" instead of "}", I assume.

Sorry, but the C syntax is dominating the native-compiled world. You could try Pascal, they have "end". Or use C with preprocessor magic:

  #define END     ;}
If you mean Ruby-like terse, generic code, then try Rust, Go, and D.


Yes, have a look at Crystal.

https://github.com/manastech/crystal

It is still an ongoing development, though.



I personally am looking very heavily into Rust, but if you haven't seen Elixir, you should.


For large projects, it'd probably make more sense to rewrite the critical code sections in C. RubyInline makes this really easy, and Ruby C extensions aren't that bad either.

One time on a contracting gig I had some code that was interacting with cairo drawing and imagemagick. They didn't have a compatible raw image format at the time (one was RGBA and the other was ABGR, IIRC.) It's trivial to convert between the two in ruby, but it was taking 60 seconds per image. Once I switched that one loop to C, it took 0.02 seconds.

I've got nothing against Go, just suggesting an alternative that might make more sense in some situations.


This definitely does make a lot more sense than throwing out your whole app and rewriting it in many situations, but I'm always a little hinky about moving stuff into C just because it has so many ways to shoot yourself in the foot. You can obviously create bugs in any language (I know I certainly have!), but Go really does hit a nice spot between safety, productivity, and speed, so I can definitely see the attraction.


For web servers there are other things more important than the raw processing speed of a single routine. What you want is to get CPU usage to 100%, with CPU time spent preferably on actual processing rather than on things like garbage collection or blocking for I/O. For web servers in particular this is tricky, since processing requests involves a fair bit of I/O interwoven with CPU work (in the case of apps that aren't just CRUD over a DB).


I've seen a lot of dumb Rails server configs and even dumber usages of Rails without any tuning at all.

With a couple of weeks or a month of work I could shrink the hosting fees or resources consumed by a factor of 5 for most Rails apps sitting at around 30 servers... and probably by a factor of 10. Leaving aside whatever rookie or even intermediate mistakes were made in their Ruby code or their database, this post indicates a lack of understanding of what happened when their servers fell over. Proper tuning of a deployment should not trigger a 100% failure mode like this.

These folks were itching to get off of Ruby for whatever reason... after all, their roots were in Java. If your goal is to do a rewrite, learn a new language, and gain some notoriety, why waste time learning what you did wrong with Ruby or your server config?


Your inability to understand their clear description of a basic cascading failure mode under load speaks poorly of your actual knowledge and experience. Given that, I have to take everything else that you say with a large helping of salt.


Their description of the root problem was very superficial:

> At some threshold above 50%, our Rails servers would spike up to 100% CPU usage and become unresponsive.

Yes, but why? What exactly were those processes doing? Why the sudden change at a particular threshold?

Their lack of detailed investigation into this makes their post useless to me -- I have no way of knowing (1) what specific aspect of Ruby's architecture makes it unfit for their problem?, or (2) is their application doing something stupid that causes the problem in first place?


If you read that sentence in context, your questions are answered. Here are the previous two sentences for context.

The bigger problem was dealing with big traffic spikes. When a big spike in traffic came in, it created a domino effect that would take down our entire cluster.

Given the article to that point, it is clear what is happening. They are maintaining a steady state of X% (where X is above 50), then a big traffic spike comes along. That traffic load is not distributed equally (my guess is because requests are not created equal) and there is a hot spot. The hot spot fails, then increases the load on everything else. After this repeats a few times, there is insufficient capacity.

In other words at some steady state threshold above 50%, you wind up without sufficient capacity to handle traffic spikes. Nothing in this failure scenario is specific to Rails. It is a well-known failure mode for a cluster under load with a push based architecture (which http requests are).

The fact that you did not understand that description speaks poorly of your problem solving skills. Now they may well have been doing something trivial that is fixable to cause load to be less than it was. But you haven't convinced me that you're the person to be trusted to figure that out.


TLDR: I'm butt-hurt that people are dumping Ruby. And because I hate Java I'll blame it on it also.


Not quite. He's saying that people who think Ruby is slow are often people who have no idea how to performance-tune it.


Well, more of the time it's because Ruby actually is, in fact, slow.


I though the implementations were slow or fast, not the languages.


Typically a language is considered slow or fast based on its most popular implementation, sadly.


True, though language design can have implications on how fast things can be implemented.


Agreed, but it is still an implementation issue, because one can eventually discover ways to optimize such cases without changing the language.


Ruby is not, in fact, slow. This myth has been debunked so many times that I'll just leave the googling to you as an exercise.


I'm getting the same experience here by entirely rewriting an old PHP web application [1] in Haskell using the Yesod web framework.

I've been using JMeter to benchmark both versions of the application. On a 10€/month dedicated server [2], the Haskell one was able to generate 220 dynamic pages per second [3], whereas the PHP one tops out at 35 pages per second on an equivalent page.

Moreover, the concurrency capabilities of Haskell are also pretty sweet: while I was benchmarking the web app with 2,000 concurrent connections, the application server was only using around 90MiB of RAM. I was not able to increase the number of concurrent connections, as the client application I was using started to kill my quad-core desktop; I suspect Haskell can manage A LOT MORE concurrent connections, since I didn't see any decrease in the application's throughput as I increased the number of concurrent connections.

[1] http://files.getwebb.org

[2] http://www.kimsufi.com/fr/

[3] Static content has been ignored, as modern servers like Nginx seem to be able to serve static content (CSS, images, ...) at more than 5,000 req/sec on the same machine.


Interesting to hear of Go being used in production. It'd be great to hear some more details on your setup when deploying the go processes - how are you managing failover, what's your load balancer, and how are you handling swapping out processes etc? Are you compiling on the server or local machines before deployment? Most other languages have lots of solutions on the deployment side now but Go is so new there isn't much info out there.

Also, as you are running an API, presumably your Rails app was pretty simple; did you have the impression things would be more complex in Go if you were writing an app with an extensive front end and UI, and using SQL? I'd miss all the view helpers etc. available at present, I think. Going from 30 servers to 2 certainly sounds like a huge improvement, so it was definitely worth it for you. Are you thinking of writing any front-end apps in Go?

I've been playing around with Go recently and it is a fantastic language for someone coming from dynamic languages like Ruby. I particularly liked interfaces as a way to define a contract for implementations to follow, and the simple package system which encourages you to make your code modular.
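The "interfaces as a contract" point is worth a quick illustration. This is a minimal sketch with invented types, not code from the article: any type that happens to have the right method set satisfies the interface implicitly, with no declaration tying them together.

```go
package main

import "fmt"

// Speaker is a contract: any type with a Speak() string method
// satisfies it implicitly; there is no "implements" keyword.
type Speaker interface {
	Speak() string
}

type Dog struct{}

func (Dog) Speak() string { return "woof" }

type Robot struct{ ID int }

func (r Robot) Speak() string { return fmt.Sprintf("beep-%d", r.ID) }

// announce depends only on the contract, not on any concrete type.
func announce(s Speaker) string { return "says " + s.Speak() }

func main() {
	fmt.Println(announce(Dog{}))        // says woof
	fmt.Println(announce(Robot{ID: 7})) // says beep-7
}
```

Because satisfaction is structural, packages can define the interfaces they need without the implementing types ever importing them, which is part of what makes Go code so modular.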


We actually have our own deployment tools for Go (and for Rails, our databases, etc). We built them before all the new hip options that are around today. We build on the target machines, although that's not a requirement since we all run the same architecture on our dev machines too (64 bit linux).

I'm not sure about writing a front end in Go; there's not a lot in terms of UI frameworks, and Iron.io's front end (HUD) is still in Rails, so I can't really say much about it.


Interesting to know that you're building on target machines, I have yet to explore cross compilation, but it'd be great to be able to build before deploy.

Sorry, deployment was probably the wrong word - I was not so worried about getting the files up there (like you I have a simple home-grown solution for that), but more how you were handling process management, swapping out processes etc.

Perhaps with Go this is less of an issue because startup time for new processes is minimal, and you can simply kill one process, start another and not really miss any requests (unlike say Rails with startup times of 10 seconds or so for instances)? Did you find this wasn't a big issue in practice and something very simple works for you?

I'm currently playing with go but if considering it for client use would have to be sure that things like this were not an issue. Are you using any off the shelf tools for process management/load balancing or is it something that you have built yourself? Did you run into any problems?
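For what it's worth, the drain-and-replace idea above can be sketched in a few lines. This is illustrative only (it uses net/http's Shutdown API, and is nobody's actual tooling): since the binary starts near-instantly, you can bring up the new process, then tell the old server to stop accepting connections and finish its in-flight requests.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

// demo starts a server, serves one request, then drains it.
func demo() string {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	})
	srv := &http.Server{Handler: mux}

	ln, err := net.Listen("tcp", "127.0.0.1:0") // any free port
	if err != nil {
		panic(err)
	}
	go srv.Serve(ln)

	resp, err := http.Get("http://" + ln.Addr().String() + "/")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// Shutdown stops accepting new connections and waits for
	// in-flight requests to finish -- no dropped requests.
	srv.Shutdown(context.Background())
	return string(body)
}

func main() {
	fmt.Println(demo()) // ok
}
```

In practice you'd trigger the drain from a signal handler while a load balancer shifts traffic to the replacement process.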


For my (toy) apps, I've been compiling locally and pushing to the server. It's trivial to compile for a target platform and architecture, and you get a single compiled binary. Certainly my response isn't getting all the way to what you're looking for, but it's not difficult to write shell scripts that manage the deploy process from here.


Thanks, I use rsync wrapped in a script usually (because I'm dealing with more files than just a binary), haven't tried cross compiling yet but was going to give that a go. I suppose I was more concerned about what happens when things are on the server and how multiple processes can be managed. As soon as you have more than one and want to swap out server workers seamlessly and load balance etc it gets a little more complicated.


Yes, I think you're right on the mark re: swapping out workers, etc. Those nuances I haven't had to deal with due to these being toy apps.


Ok, this bit has left me confused.

They were Java devs who liked Ruby; they wrote applications in Ruby on Rails, and when the Ruby apps were hitting limits they immediately started looking at other languages.

But they don't mention the most obviously (to my mind) simple option.

JRuby

It is Ruby (they like Ruby). Most Ruby apps can run on JRuby with very few changes (no need for a big rewrite), and it runs on the JVM, with which they are familiar, and it is very fast (with true multi-threading).

Maybe they did look into it but I would have thought that would be #1 on their "We tried this but discounted it" list.


In my experience, rewriting for JRuby is as much work as re-writing in a new language. Last I looked was a year or two ago, so it may have all changed, but it's not as simple as just moving your code over. Many really important gems don't work, or don't work properly.


That used to be the case, but a lot has happened in the last few years. Most of the time you'll be able to move a large Rails application to JRuby without any changes at all, except adding a Warbler config for deployment.

We use JRuby for one of our backend applications in production, but we develop it with plain MRI Ruby. The only real problem we've run into was some badly performing (badly written) parsing code of ours that ran even worse under JRuby.


The only gems that do not work are those that are not pure Ruby, and I think the popular ones have been rewritten. I haven't looked into JRuby recently, but from what I read it has changed quite a bit in the last year.


I don't think that's true. There are plenty of threadsafe gems with C extensions, and plenty of non-threadsafe pure ruby gems that mutate class/class-instance variable willy-nilly.


gems with C extensions are not supported https://github.com/jruby/jruby/wiki/C-Extension-Alternatives

I forgot about thread safety in pure ruby gems but that is not a huge problem - https://github.com/jruby/jruby/wiki/Gems-known-not-to-be-thr...


I'd like to see some real benchmarks with a real app, but I doubt moving to jruby would bring the magnitude of improvement mentioned in the post.


They also did not mention why they didn't try any other of the more performant implementations of Ruby.


They probably needed a rewrite anyway, after learning more about the problem space they weren't happy with the api they'd made thus far.

Go wasn't the only thing that made their new stack better, but you can bet that Go would have made it very difficult for them to achieve performance as bad as ruby gave them.

That said, they are STILL using rails for the frontend stuff and Go for their more computationally intensive stuff IIRC.

They still kept some of their rails code, for things that were easier to keep in rails without hurting performance.


> "We also weren't sure if we would be able hire top talent if we chose Go, but we soon found out that we could get top talent because we chose Go."

I feel[1] that a smart/talented C/C++/anything developer can go from someone who has never seen or heard of golang to a proficient and productive Go developer in a matter of a few weeks, maybe even _days_, if not less.

That's how long it takes to go through the following materials (and fav some for later reference) and play with the language a bit.

http://golang.org/ref/spec

http://www.youtube.com/watch?v=ytEkHepK08c

http://commandcenter.blogspot.com.au/2012/06/less-is-exponen...

http://tour.golang.org/

http://golang.org/doc/install

http://www.youtube.com/watch?v=XCsL89YtqCs

http://golang.org/doc/code.html

http://golang.org/doc/effective_go.html

http://golang.org/pkg/ - use as reference

http://www.youtube.com/watch?v=f6kdp27TYZs

http://talks.golang.org/2012/splash.article

https://gobyexample.com/

http://golang-examples.tumblr.com/

And some more similar things that you can mostly get to from the golang.org site. The beauty of how concise the language and even its website are is that you can literally go through everything there, one thing after another.

[1] This is my personal opinion based on playing with Go over the last few weeks/months. I'd love to verify this theory. It's not yet the primary language I do things in (I use C++11 atm), but it has proved indispensable for all my side tasks[2], and I found it very easy to pick up. I can't wait until I start doing all my work in Go; that will be a true test of its productivity.

[2] https://gist.github.com/shurcooL


It's quite strange to me that people would identify as or look for a "[language] programmer". Sure, I happen to write more C++, Python, and C than anything else, but I've dabbled in just about everything and could reach comfortable proficiency in a matter of weeks. Most of programming and all of computer science is universal.

Any serious programmer should be a polyglot by default.


> Any serious programmer should be a polyglot by default.

It depends if you're going to spend years training someone or if you need an expert right now.

My experience is that it is impossible to maintain expert level skills in more than one or two language + library environments. You can remain familiar with other environments but you don't have the time to be an expert.

While I sometimes switch between C-family languages for different projects, it can take months to get up-to-speed with the changes in a language and its environment since you last programmed in it. I'm talking about situations where I know the language well but the environment has changed. Languages change a little but the libraries they use can change dramatically. And along with the change comes a whole body of implied knowledge about how to safely and effectively use it all and this impacts on how you use the language itself.

If you've literally never programmed in a language before, it can be a few years before you know about all the eccentricities, before you understand why the language follows certain patterns, before you understand the risks with certain behaviors.

If you need an expert now, not in 12 months time, then they need to know both the language and the environment. If you can wait a couple months, then they still need to know the language.


"It depends if you're going to spend years training someone or if you need an expert right now."

I'm not going to claim that there are no legitimate reasons to hire people who are narrow experts in Blub (and only Blub), but I'm having a hard time thinking of any.

At an established company, you've got the luxury of time -- there's rarely a good reason to "need an expert right now" that isn't just a contractor. At a startup, where hiring the wrong person is a disaster, hiring an "expert right now" is like holding a loaded gun to your head. Ideally, you should be hiring "T" people -- lots of breadth, with lots of depth in at least one area.

A truly good programmer will pick up your language/framework in days, and be at full productivity in months, even from scratch. It's hard to do better than that.


Even at an established company, a really smart language newbie could write a bunch of ad-hoc code that does almost the same thing as well-understood libraries that everyone more experienced in the language uses. Even if his code was relatively high-quality, now you have a bunch of extra stuff to maintain, and everyone who interacts with it will have to learn this thing instead of just using the library everyone knows. I've done this in languages I'm unfamiliar with and will probably continue to do so.


Not Invented Here syndrome goes away with general experience. It doesn't come back every time you switch to a new language.

Just because I've never written Erlang doesn't mean that I will automatically try to write a random-number generator (say) the first time I need one in Erlang. I have enough experience to look for a library function first.

Empirically, NIH tends to be more common in single-language developers, not less. People who place a lot of value on their "expertise" in Blub tend to do so because they're over-weighting the importance of their memory of the API details. When they don't automatically remember something, they leap to the conclusion that it doesn't exist. They're also typically a less-experienced cohort than people who have written in lots of different languages.


I wasn't talking about NIH syndrome, I was talking about "I don't know that a common library exists for this standard use-case so I'm going to write my own one-off because I have a job to do". I mean, you can google for libraries but sometimes you just don't find them and then find out a few weeks later what you should have used.


I don't know that a common library exists for this standard use-case so I'm going to write my own one-off because I have a job to do.

That's called "laziness", and is even more likely than NIH syndrome to be mitigated by experience. Certainly, having lots of coding time with a single language doesn't make you less lazy, and having lots of experience with different languages doesn't make you lazier.


Well, I'm less lazy than most devs and have worked in lots of languages and did it just the other week -- maybe I'm just not that bright :)


It's certainly possible that you'll miss some oddball idiomatic way of doing things in a new language (e.g. Python itertools or using C++ STL algorithms or something like that), but this is rarely a real problem. The job gets done correctly -- and in any case, we're all learning new ways of doing things. It's not as if you gain total prescience after N days on the job with Blub.

The point isn't that the generalist programmer will be 100% correct in all the details in new language, it's that they'll be able to quickly (and correctly) implement the important parts in whatever language you're using. Idioms tend to be the low-order bit of a solution anyway.


Could you give a specific example? I am still thinking "random number generator" and want to balance it with your "standard idiom or library that everyone steeped in $lang knows".

Cheers


Things like the Counter in python collections. Actually, the python collections library in general. You don't need that stuff to do the job, but it makes the code a lot clearer.

It probably took a couple of months full time in python before I had enough understanding of the std lib to know where to find everything (I sat down and read through a description of most of the functions in there).

There are still plenty of libraries that would have made my life easier if I knew about them. I'm sure there are plenty more.

Having said all that, that's just for example. When it comes to hiring I'd still pick a good experienced developer over a specific language expert for a full time position. For me it's not so much about the understanding of a specific language as it is the appetite for understanding computers / systems in general that makes truely good developers.

Ultimately the task is going to dictate the best type of hire. Short term contract, get a specific expert. Long term hire, find someone who actually enjoys developing software :)


I dunno how well-known the things I didn't know about are, but an example of me doing it: https://news.ycombinator.com/item?id=5113202


I think there is a fine line between using a library and writing your own to understand the domain better.

It's something to do with how critical the library functions are to you / your system. I would never write my own compression software, but I can see why people would just to learn about the trade offs.


Okay, but if I had known about the alternatives I would have used them, so I'm not sure how that's relevant.


philosophising really. I guess on the "actually helpful advice" level, I'm all out if the googles cant help.


I have nothing interesting to say, I just wanted to see how far the left indentation can go


That's where code review can be such a valuable tool. I was learning Python coming from Ruby about a year ago and, sure, the languages are similar and it wasn't too hard to get started, but having code review sped up how quickly I became proficient by a lot. It's great having a bunch of people around you that can say things like "There's this thing that exists, don't do that." or "It's more idiomatic to do this that way instead of how you're doing it." or "People hate that because, do it like this.". It makes a huge difference.


This is unlikely with go as it has a good standard library and a central place for documentation of all the packages.

http://golang.org/pkg/

Third party libraries are also listed here:

http://go-lang.cat-v.org/pure-go-libs

And the ecosystem is not so broad or mature that you would miss significant libraries because there are too many options and you just didn't find them.


It depends. If you're the only one knowing the language at this company... it's a bad idea, because the bus factor becomes unacceptable.

Otherwise you have a methodology problem. You don't do daily standups, where you'd have the opportunity to say "Yesterday, I started implementing X because I couldn't find a library" and more experienced coworkers would tell you "Use library Y instead".


Being a programmer is not about knowing everything. With experience one should be able to separate library functions that should always exist from those that are unique for each language. Everything else should just be a matter of information searching.


That's just an imbalance in programming virtues: too much hubris, not nearly enough laziness (unknown impatience).


You talk as if we develop code by chiseling it out in stone. We write it in text files, usually with IDEs. No one needs to learn the new libraries the newbie made, they can talk to the newbie, point out what the library already gives (and where to look in future) and write to well known interfaces.

Any project should have time allocated for "tech debt" and that's where this sort of thing gets addressed.


"I'm not going to claim that there are no legitimate reasons to hire people who are narrow experts in Blub (and only Blub), but I'm having a hard time thinking of any."

Blub is a very useful tool but has a bunch of quite esoteric gotchas that an expert will be able to avoid due to long experience.

Expert has used almost all of the Blub framework over the years, including the more popular third-party addons and already knows what's best and fastest in a variety of situations without having to think about it much.

"A truly good programmer will pick up your language/framework in days, and be at full productivity in months, even from scratch. It's hard to do better than that."

Yeah, but then HR won't be able to tick off the 'Blub expert' box and, well, nobody would ever get hired! Or something...


> impossible to maintain expert level skills in more than one or two language + library environments

And the crappier your language and libraries, the more time it takes to be an "expert."


By this definition, Ruby/Rails are pretty crappy (I kind of disagree). As a newcomer, the amount of stuff going on in a Rails app and the stack trace when there's an error are pretty overwhelming. Meanwhile, people expound on how simple and elegant Rails is. Lately I've been thinking that this is because those people started using it 5 years ago when it was small and their knowledge has built incrementally with the environment.

My other theory is that it's a lot harder to track down bugs for a newcomer because like C (and unlike Python in most cases), importing functionality from another file is implicit. That is, when you 'require' a bunch of files, there's no indication which functions are coming from where. For me, this is one of the things Python solves marvelously (it's generally considered bad form to 'import *').


Rails is pretty crappy if you ask me. It's "omakase" which is Japanese for "acts according to what DHH wants despite what the community wants". And there's a lot of magic happening that isn't explained very well. There are better frameworks in Ruby. Ruby itself doesn't take all that long to be an expert at.


>Rails is pretty crappy if you ask me. It's "omakase" which is Japanese for "acts according to what DHH wants despite what the community wants".

This sounds great. I'd hate software made in the way some "community" wants. Community is the other name for committee.


Though I could argue that all the languages pretty much take the same amount of time to become an expert. Just because some languages are supposedly higher-level than others doesn't mean that the complexity they allow you to tackle (and associated challenges as a programmer or "expert") is any less.

Especially since being a good programmer is more about design, choice of interfaces, reactivity to change, etc. which are by definition language agnostic.

I firmly disagree that Ruby requires any less effort than say C, C++, Java or LISP to become an expert.


I didn't mean to say that Ruby was less complex than C; I meant to say that if you take out the magic, it is comprehensible, and one can get good at it quickly enough, just like a straightforward language such as C. I agree that learning design, etc. is the real key in any language, thanks for pointing that out, because that's what everyone should focus on.


Ruby doesn't take long to be productive in. It is an utter pain to become expert in, for any reasonable definition of "expert". Fortunately it's fun enough that I don't mind the slog.


"it's a lot harder to track down bugs for a newcomer"

This is FUD. It's a different way of doing things, not harder. I live inside Pry which makes it rather easy to figure out what's going on.


That's kind of what I'm talking about. I've never heard of Pry. With Rails I have to learn about the 100 most common gems (devise, paperclip, mongomapper or mongoid), Pry (thanks for that), bundler, rvm, and ActionEverything before I can be productive (or understand a simple app) . With Node.js, something newer with less "maturity", I figure out npm and I'm good to go.

I really don't think it's FUD to say that Rails has gotten much bigger in the past 5 years, and it's definitely not FUD to say that as codebases and tooling grows, so does barrier to entry.

The specific thing I guess you're objecting to is that it's harder for a noob to understand implicit imports and where something is coming from if you don't know much about what you're importing. If you use a tool to solve that language deficiency, that doesn't remove the deficiency from the language. By that logic, adding an IDE to Java makes it a very concise language.


Most of what you said above is a strawman. I said one specific thing.

"it's a lot harder to track down bugs for a newcomer"

This statement is FUD. You seem to be defining "language deficiency" as "hard for a noob before they become proficient". I think that is absurd.


However, you do not need to be an expert to be productive. You can write Django web apps perfectly well without knowing about Python's metaclass tricks, for example.

An expert still needs time to become familiar with a large codebase and architecture. I wonder if a competent programmer could simply learn the language in addition during this period.


Until your form isn't coming out quite right, and you need to start digging around in the source for the form class to figure out why. The happy path generally doesn't need expert-level skills, but debugging often results in a quick deep plunge.


Go as of today doesn't have too many libraries available. So there is really nothing to learn besides the language itself.


That's not really true.

Granted Go doesn't have as many libraries as the more established languages, but to say there aren't many and there's "nothing to learn besides the language" is just flat out wrong.

Just for starters there's http://code.google.com/p/go-wiki/wiki/Projects


A seasoned Java developer is expected to have worked with one of the mainstream ORMs, like Hibernate or Ebean. But a Go developer gets away with not having to know an ORM because there is no mainstream ORM in Go. :)


So what, the lack of a mainstream ORM implies "So there is really nothing to learn besides the language itself"? I don't think so.

And there's probably more to a lack of an ORM other than "Go is immature." It's a fairly common opinion among the Go community that ORMs are not worth their complexity. I tend to share that opinion myself, after having worked with a few in a couple different languages.


Not sure why you were downvoted, I tend to agree ORMs are not the best ways to communicate with databases.

As to whether the lack of an ORM alone makes Go immature? Honestly, I don't know. There is a ton of software written in JavaScript; does that make JavaScript mature? Hell no.


>If the language is strictly OO (as Java and C# are) then an ORM is pretty much required.

Really? Because we managed to get just fine without one for decades...


If the language is strictly OO (as Java and C# are) then an ORM is pretty much required. What is the preferred solution in Go? And please don't say writing out raw SQL.


I've moved away from ORMs in Java because they don't really solve the problem at hand. Hibernate could be a really good blanket solution for SQL interaction, except for the performance issues. If one were really using domain objects (not POJOs or dumb DTOs), the typing ability of Hibernate would be great. Unfortunately, the issues presented by Hibernate (like loading it in an EAR) are such that it requires too much maintenance time on complex projects, even for simple CRUD apps.

I know that I've used Hibernate for years. I've used it with DTOs. I know that it does provide a certain benefit for simple reads and writes. Unfortunately, Hibernate was designed to work in a disconnected manner like the Web. As a result you get into complications of session management. This is just one example of Hibernate maintenance costs.

Now I use Spring JDBC. This removes the noise of checked exceptions, connection management, and transactions, just like Hibernate does, but I write simple or complex SQL in one place and map data in and out contentedly.


Well, I can't speak to Java here but NHibernate works fantastic. I've checked the generated SQL on complex joins and it always ended up writing what I would have by hand.

In C# the session management is no big deal: you access everything via the Repository pattern, turn on trait injection and just make sure any repository method that will need a session has a transaction attribute. If you need to do a series of read/writes in one transaction you throw that whole method into the repository class and put a transaction trait on that method.


That's a helluva lot of abstraction for writing to a database.


It's not. I create a DTO and I set up an exact mapping of how it appears in the database. Then I create a type-generic Repository pattern that makes the DB access explicit, as opposed to NHibernate's magic database access. The rest is just one-time wiring I set up so I don't have to worry about actually connecting to the DB in my code.

In any solution you come up with you'll either need a repository that handles sessions and so on, or you'll have to explicitly connect to the db every time you need it. Either case will be more work than what I've done with this solution.


What's wrong with using SQL in your program? As long as your database layer is able to perform parameter substitutions to avoid SQL injection, this is a pretty efficient way to get stuff out of the database (and only the stuff you want). Why would using an ORM be a 'requirement' for OO-oriented languages?


There are quite a few problems with using SQL directly in your program: separation-of-concerns issues, and typing issues (e.g. the compiler can't tell you the USER table isn't appropriate here, because it has no way to see what you're doing).

In a multi-paradigm language you wouldn't be tied to using an ORM, but you should still use something that provides some kind of anti-corruption support. In an OO language you can have nothing but objects, so there is nothing else your database library can return. It may as well return objects that represent the records rather than, say, a list of strings.


Even though it is terribly out of fashion, a set of database interface classes (a database access layer) that wraps and abstracts SQL prepared statements and/or stored procedure calls is almost always much more efficient than an ORM, as long as the SQL is written by someone minimally competent in SQL. An ORM can be much faster to implement, though; ah, classic tradeoffs.


ORMs are broadly considered antipatterns in Go.


Source? I have never heard that claim. And contrary to that, Go has facilities for implementing ORMs, like field annotations in structs for specifying column bindings.
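The field annotations mentioned here are struct tags, readable via reflection. A hypothetical mapper could use them like this (the "db" tag name and the types are invented for illustration, not any real library's convention):

```go
package main

import (
	"fmt"
	"reflect"
)

// User's struct tags bind fields to column names; this is the
// mechanism a Go ORM (or a thin mapping layer) would build on.
type User struct {
	ID    int64  `db:"id"`
	Email string `db:"email"`
}

// columns reads the "db" tag off every field via reflection.
func columns(v interface{}) []string {
	t := reflect.TypeOf(v)
	cols := make([]string, 0, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		cols = append(cols, t.Field(i).Tag.Get("db"))
	}
	return cols
}

func main() {
	fmt.Println(columns(User{})) // [id email]
}
```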


A seasoned java developer stays away from orm anyway ;)


What, you don't know about The Programmer Hierarchy?

https://news.ycombinator.com/item?id=1622553

http://lukewelling.com/wp-content/uploads/2006/08/programmer...

I find a lot of people who use high-level languages are terrified of tediously direct contact with the machine, and a lot of people who use low-level languages are terrified of the performance costs of abstraction. I'm terrified of both.

I think that in general, C++ is a lot harder to get your head around than C, being for all intents and purposes a superset, and an easily five times bigger one at that.


Love this comment, "I'm terrified of both."


That's how you gauge the experience of a programmer: how much of the field she/he's terrified by.

EDIT: I originally meant this as a joke, but seriously, if someone accurately knows where the gotchas are, that's valuable. Also note if they're biased to false positives and/or false negatives, and by how much. Are their heuristics for dealing with unknown territory efficient and likely to converge on good approximate results?


> That's how you gauge the experience of a programmer: how much of the field she/he's terrified by.

I'm terrified by what I don't know, especially by what I don't know that I think I should know, to be "in the know".


Thanks guys, I guess I really have had some rough times. I like how Lisp and Assembler at the top of the hierarchy capture the two extremes. Some hypothetical language in the middle would be great, but maybe the best we can do is to straddle that point, e.g. with C++ and Python.


Rather than straddle the middle, I think I'd go with as high-level a language as I could get, coupled with a simple low-enough-level language to get whatever performance benefits I needed. Some combo like Python/C, Clojure/Java, or maybe some other lisp dialect and C.


Lua is known for pairing really well with C.


> I like how Lisp and Assembler at the top of the hierarchy capture the two extremes.

Not really. I'd place Agda or Coq above Lisp.


That is somewhat orthogonal. Lisp is better at abstraction than Agda or Coq or Isabelle or any of those ML/Haskell theorem provers. To maintain the theme:

C if you are terrified about performance

Lisp if you are terrified about boilerplate

Agda if you are terrified about correctness

If you are terrified about all of these, then welcome to the world of engineering.


If you wrote an S-expression syntax on top of Agda (or Haskell), you would gain all of Lisp's anti-boilerplatitude for free. There's of course also Template Haskell.

As it is, the static typing helps to avoid some boilerplate too, independent of macros. In some sense, static typing removes, among other things, boilerplate tests.


If you are terrified about all these, there is ADA.


Ada gives you neither the performance of C nor the abstraction- and boilerplate-removing power of Lisp nor the provability of Agda. This is why it is not used.


The performance is pretty close, actually (far closer than Go's, for example): http://benchmarksgame.alioth.debian.org/u32/benchmark.php?te...

And yes, it is not used, only in some obscure and low-profile projects: https://www.adacore.com/customers


http://benchmarksgame.alioth.debian.org/u32/benchmark.php?te... might be a better link to show how Ada performance compares to C: ranging from twice as fast to three times as slow, with a median in favor of C. It's true that it's pretty close. I don't know if those numbers are representative of how performance works out in the real world; any insights from your experience?

Golang, it's true, is in the 3×–10× ballpark.

https://github.com/languages/Ada shows Ada as the #52 most popular language on GitHub. The #10 most popular is Objective-C at 3%. Using R:

    summary(lm(log(c(25, 13, 8,8,8, 7, 6, 4, 4, 3)) ~ log(1:10)))
I derive that the Nth most popular language on Github is used in 24% * N -0.81 of projects, with an R² of 0.93. This suggests that Ada should be in use on about 0.98% of projects on Github, which makes me wonder why https://github.com/languages/Ada/updated?page=10 can only find 200 Ada projects that have been updated in the last nine months. (JS, the #1 language, has 200 projects updated in the last 22 minutes.)


That should be 24% × N ^ -0.81. The double asterisk I was using for exponentiation got eaten.


Ada. Not an acronym.


>Not really. I'd place Agda or Coq above Lisp.

Because more than 10 people have heard of and use Agda or Coq?


I'm already having a hard enough time convincing the team at work to use Clojure!


Oh, Clojure is probably more useful in practice. We also stick to a Haskell dialect at work, and don't dabble in Agda for production.


C++ and Python pretty much run everything, everywhere. Standard, open languages are hard to beat when you want to fully control your development stack and not worry about future control issues. I wish Go were an ISO standard like C++; I'd be more interested if it were.


> C++ and Python pretty much run everything, everywhere.

I don't think so. Last I heard, there was still a lot of COBOL out there.


Yes, yes there is.

Most banks, and many other businesses that have been around 30+ years, are still running on COBOL.

It was popular because of its highly readable syntax and its fixed-point packed decimal math for financial calculations.


COBOL gets all the hate, but if they have a big-iron mainframe then they'll most likely be running RPG.


RPG was more oriented towards mini-computers than mainframes, from what I recall. Its niche was the IBM S/36, S/38, AS/400 and iSeries boxes. I'm sure you probably could get RPG for your S/360 or S/390 or whatever, but from what I've seen over the years, it was mostly COBOL, PL/I, and MVS Assembler on the mainframes.


Only if it is an AS/400-based system.


Fantastic! I'm primarily a C# programmer and have first-hand experience of this being true.


I agree with the premise that, as a professional software engineer, it is my responsibility to be a polyglot.

As for myself, when I started writing Python, I mainly wrote C or Java code in Python - in a similar way, perhaps, how early C programs were littered with __asm__() constructs.

It took a long time to learn how to write things in (though I hate the term) a "pythonic" way. That is, to learn the common language idioms that are not taught in any tutorial or codified in PEP 8: the common patterns in Python code, the ins and outs of PYTHONPATH, and so on.

So while I agree that, in a weekend, a reasonably proficient programmer can pick up reasonable proficiency in a given language, being a "Python Programmer" to me means that one has developed an intuition for the common patterns, libraries, pitfalls, platforms, and clever specific features of a given language and its ecosystem.


I agree, but I think the age of the language and community is another factor. Java, Python, C++, these are all old languages with decades of history and habits. Newer things like Node.js and Go have no history, no baggage to learn or avoid. I think starting in something with such a clean slate is somewhat easier because there is less ecosystem to learn.


I once heard a quote along these lines: "programming languages are frozen knowledge about software engineering". (Anybody got a clue who said something like this?)

New languages usually improve upon older languages by making certain errors impossible by design. For example, Go allows pointers, but not pointer arithmetic. With C we learned that pointer arithmetic is often harmful (though sometimes necessary), so Java removed pointers from the language (not completely, though). Go takes a sensible middle way, because having no pointers at all is sometimes ugly. Such a clean slate is good, because with older languages programmers fight their language's deficiencies with habits (e.g. if (5 == x) instead of if (x == 5) to prevent if (x = 5)).

On the other hand, we will find the pitfalls and dark corners of Go over time, but currently we do not know them enough. Go will acquire its own conventions, but we do not know all of them yet. This is the risk of a clean slate.


> On the other hand, we will find the pitfalls and dark corners of Go over time, but currently we do not know them enough. Go will acquire its own conventions, but we do not know all of them yet. This is the risk of a clean slate.

That's bullshit. Go is not a "clean slate", and a number of its design issues have been known from the first release regardless of its designers' refusal to acknowledge them. We know pervasive nullable types are a source of errors, we know shared-memory concurrency is an error-prone default, we know a lack of generics makes userland code painful and that generics are hard to retrofit into an established language (and even Go's designers know it; why do you think they built special-case generic collections into the runtime?), we know allowing errors to be implicitly ignored is a bad idea, and making errors easier to ignore than to handle also is. These are not recent issues; they're well known, and there are a number of possible strategies for handling them.

And Go's worst sin, to me: we know that foisting complexity and repetitiveness upon the user leads to forgetting, and forgetting leads to mistakes. And that's exactly Go's approach to errors, resource management, and shared-structure mutability. Human error is something you can very reliably bet on; human infallibility... not so much.


Funny you should take Go as an example, since it's often bashed for not having taken the last decade of language design into account.


Go does take the last decade of language design by Pike, Thompson, and Griesemer into account. Mostly Pike, as I am not sure the other two even designed a language in that time.

Personally, I consider gofmt the biggest achievement of Go, if it manages to make that mainstream. While there are equivalent tools for C they are not widely used.


> Go does take the last decade of language design by Pike, Thompson, and Griesemer into account.

A decade which ended 20-odd years ago relative to the rest of the world; that is kind of the point.


You mean by designing a language that is basically Alef from Plan9(1992) with a few changes?

Yeah, really actual.


The English term for "actual" is "up-to-date". The English word "actual" means "not imaginary".


thanks for the correction.


Happy to help!


There's a difference between not taking it into account, and not believing it is worth including.


This has been my experience learning Go as well.


This is why Joel Spolsky correctly observed that the replacement of C/C++ and functional languages with Java in university CS curricula is a tragedy. If you don't understand pointers or recursion, you are not a polyglot and you cannot pick up just any language in a matter of weeks.

The "[language] programmer" (where [language] usually = Java or .NET) trend is a misguided attempt by industry to commoditize programmer talent.


My intro programming course (in high school) was in C++, and I think it would've scared me off programming entirely if I hadn't picked up mIRC-script (a very different language) on my own as a hobby. There is so much accidental complexity in C++ that it's a pretty terrible introductory language. We literally spent several weeks of the semester on how to get input and output to work properly through the giant mess that is iostreams.

At the introductory level, I think the biggest issue is teaching "computational thinking" [1] or "procedural literacy" [2]: getting people thinking about the idea that you're writing specifications for a machine to carry out computations. From that perspective, it's best to pick a language that lets you get to algorithmic logic as quickly as possible.

[1] http://www.cs.cmu.edu/afs/cs/usr/wing/www/publications/Wing0...

[2] http://dm.lcc.gatech.edu/~mateas/publications/MateasOTH2005....


Me too (well, OK, my first programming class was in Fortran, taught by an 85-year-old man who spent most of the time telling us about how much harder it was back when you had to use Fortran).

I hated C++. I still think it's a fairly terrible language. But, for fun I took the Harvard CS50 course to refresh my knowledge of C and I found that WAY better than my C++ course. I think C is brilliant for introducing programming because it's very very simple, yet also very very difficult. There's not much to learn, except a lot of concepts (memory usage, data structures, etc).

I also think Objective-C is a really great language though, so I might be crazy. But, you give me a choice between C++ and a language that is basically C with a few additional keywords and garbage collection, and I find that an easy choice...


C++ is terrifying, and teaching an introductory programming course in C++ is an awful idea. Using plain C would make much more sense. C is also a better choice than Java because Java doesn't force you to learn about pointers, or memory in general, really. Countless Java bugs are introduced by programmers who don't understand what operations give you a copy of something, and which ones give you a reference to something (especially if that something is mutable).

I also think Objective-C is pretty good though, especially with the addition of ARC. It's like a better C, with Smalltalk-style objects and a lot more (mutable and immutable) datatype options. Much saner and less painful than C++, but more challenging than Java.


> Java doesn't force you to learn about pointers, or memory in general, really.

> Countless Java bugs are introduced by programmers who don't understand what operations give you a copy of something, and which ones give you a reference to something

These two statements seem to be at odds with each other.


The Objective-C garbage collection is deprecated.


The problem complicates further with the fact that IDE fever grips you very early on while using a language like Java.

When I started with Eclipse + Java, within hours my perception of Java was that it was really more "fill in the blanks" than programming.


This became clear to me when suggesting using a language other than that supported by Visual Studio to one of my previous teams. Their reply of, "Can't do it; no IntelliSense" shocked me completely.


And I think that is a valuable lesson too. As a developer, the end-goal is to produce something of value, working software, not spend hours frowning over pointers and references. That was my early experience with C/C++, in any case. I did end up finishing my assignments (ranging from a fractal generator via a microcontroller operating an RC boat to a simple 3D FPS game), but I did feel a lot of overhead with non-functional things.

Of course, you get the same thing in Java when you first encounter a NullPointerException, ;)


I've seen C/C++ programmers fail to write good, readable code.

C/C++ programmers tend to optimize prematurely out of habit, due to the culture and problem domains of their past experience: device drivers, kernel code, game development.

How important are pointers for learning, say, a high-level programming language that doesn't have them (pretty much everything outside C/C++/Objective-C)?

Sometimes I feel that knowing pointer arithmetic and tricks is some sort of chest-thumping that old programmers tend to do. (My background is systems programming, so I know how pointers work.)

It doesn't matter what you know or learn during university. What matters is whether you continue to hone your skills and hold yourself to a high bar of quality.


I think he was talking about pointers as an idea, not as a language feature.

References in high-level languages are mostly hidden pointers. A good knowledge of how pointers work is really important for a good understanding of how high-level languages work (a lot of Java programmers don't fully understand the distinction between value and reference types, for instance).

Likewise, being at ease with recursion is also very important, as many complex problems are recursive by nature and so are much easier to solve using functional techniques.


Yes. Pointers are always there in programming and you need to understand how they work, even if you are using a language where you don't have to declare them as such explicitly.



I feel like this article on generalization could be a biography of my life based solely on the introductory paragraph, but I don't have enough time to finish reading it to be certain.


"Sometimes I felt that knowing pointer arithmetic and tricks is some sort of chest-thumping that old programmers tend to do. (My background is system programming so I know how pointers work.)"

Or the best, easiest and clearest way to do things... Sure you can write confusing crap with pointers, but they are very useful.


> The "[language] programmer" (where [language] usually = Java or .NET) trend is a misguided attempt by industry to commoditize programmer talent.

From what I see in the multi-site enterprise projects I participate in, the industry is being quite successful at that.


Successful at commoditizing programmer talent? Perhaps.

Successful at producing high-quality software, relative to the resources they are expending on it? Not so much.


> Successful at commoditizing programmer talent? Perhaps.

Yes this is what I mean.

In most enterprise companies nowadays, developers are seen as easy to replace programming cogs.

> Successful at producing high-quality software, relative to the resources they are expending on it? Not so much.

Who cares about quality when the price is right?! :)

Being sarcastic here; I am of the opinion that the software industry should be under the same quality regulations and expectations as other industries.

Usually people return stuff that does not work properly, while with software they just live with whatever bugs it has.


I agree that any serious programmer should be able to be productive in any language, but mastery of a style of programming takes time.

It's easy to move about in the same style of programming, but trying to move someone from enterprise-y OO (Java, C#) to functional (Haskell, Lisp) or even Go is a bit of a leap. Concepts often just don't cross over. A Go channel, for example, makes sense to an Erlang developer, but not a Java developer. Goroutines don't make sense to someone familiar with traditional threads.

There's a certain amount of rewiring that needs to be done. Looking for a "[language] programmer" is strange, but looking for someone proficient in the style of programming used makes sense to me.


Even within languages, styles are not set in stone; e.g. are you the OO-style Scala programmer, or are you the FP-style Scala programmer?

Java has had channels for a while now at the library level, they just lack pretty syntax. But this just reinforces my point.


> It's quite strange to me that people would identify as or look for a "[language] programmer".

I understand people doing this.

First, you can be more sure of what you're getting. It's sad that tests like FizzBuzz are so useful, but it's a fact. If I hire a Java developer for a Java position, I can figure out their Java skills. If I look at a PHP developer, it's more of a crapshoot: they may have a lot of PHP experience, but can they convert that to Java, or have they just been doing cargo-cult stuff? This would be a smaller issue with something like C#, which is closer to Java.

Second, and I suspect more common, is time investment. It might take someone quite a bit of time to switch languages. If they haven't done it before, you might find out it's a weakness for them. In my experience, at least in smaller companies, the fact that you're hiring means you need someone now. That extra time could be a killer, because you waited too long to start hiring.

I would at least expect candidates to look into what we're using. When I changed jobs 2 years ago, one place I applied was a Python shop. I've never used Python professionally, but I've tinkered with it. If you're applying for a Java position, I expect you to have at least looked at Java before the interview. Sadly, I bet that would happen a non-trivial number of times.


How did you convince them to hire you when you had no professional experience? Did you have to take a pay cut to do it?


I didn't get hired for the job, after the first interview I bowed out. I ended up in another mostly Java position (although we've also started doing some Obj-C).

I had 4+ years of working on a reasonably sized and complicated Java application. In addition, a few years before I used Python for personal projects so I could show some experience.

But it turned out the position was largely for front end development, and not some of the more back-end stuff I'm interested in. I think they liked me (don't know how I compared to other candidates) but I thanked them for their time and told them the position wouldn't be a fit for me.


If your code in that language is good enough then you can usually get a job for it. Of course, I'm talking about places where programmer/HR department have a big overlap. I don't much like working for the other places.


If you're looking for a good programmer, it often pays to be looking for one in a niche language. Paul Graham referred to this as the Python paradox: http://www.paulgraham.com/pypar.html

(The inverse of this is if you're looking for a PHP programmer, and posting ads on monster.com ;) )


Try a Lisp. You'll be humbled. In my opinion, anything resembling C is easy stuff. Pointers, threads ... it's all pretty easy, really. Truly breaking out of that C mold is the differentiator though, in my opinion. Anybody can learn C, Python or Javascript.


Lisp is a huge paradigm shift, but once you get that writing code in Lisp is basically just writing parse trees directly, it's not so scary.

If you don't already get recursion, though, you will struggle. Everyone struggles with recursion at first.


I completely agree.

I think that to completely explain this phenomenon, one would have to make reference to people's skill levels, and it's hard to explain in a few words. So I'll just not say anything at all on this subject beyond that.


If only the morons that hire could figure this out. I almost NEVER use Java, but obviously I have and CAN use Java. Yet interview after interview demands that I use Java day to day in my current job to be considered. And this is for data transformation/analysis jobs, in which my use of Python makes me dramatically faster and more productive than my Java using colleagues.


Of course use of the word "moron" might be poisoning the well a bit. "Possessing incomplete information" might be a better way to put it.


Generally I don't understand "C++ programmer" to mean a programmer who is an expert in C++, and C++ only, but rather a programmer who may be comfortable in many languages but has a special expertise in C++. All my team identifies as C++ programmers, but we all speak Python and Java, and some can do Haskell, etc.

It takes a long time to become really good at C++.


"Any serious programmer should be a polyglot by default."

The language itself is not usually a problem for me. I mean, an array is an array in any language. It's the environment. Example: C# (which I think is MS's best work, by the way). C# was easy to pick up; the .NET framework was another matter. That took time. Knowing where the resources are, and even getting comfortable with the documentation and the IDE, took time.

So now, when I hire a programmer, I'm not too interested in his language skills. I assume he can code well enough not to embarrass himself. It's his knowledge of the environment that we run. That's what I look for.


>Any serious programmer should be a polyglot by default.

My own experience is that there are PLENTY of programmers who have used, say, PHP and Python, but who you wouldn't want to touch your C codebase in a pair-programming session.

If you've used C++ and C, then sure, you can probably jump to just about any language. Someone who's only used Python (or worse, Java or PHP) will likely be a danger to themselves and others for the first year or three using C.


Starting out with Python as a newbie is disastrous.

I started out with assembly, then went to C and then all the C-family based languages. Non C-family languages are actually rare enough to neglect for 99% of all programming tasks today.

Always remember going from something difficult to simple is easy, but going from something simple to difficult is not easy.


I disagree. My path went like this:

HTML->CSS->php->javascript->C#->Visual Basic->Java->Ruby->Python->C->Go->Scheme->Lisp->Haskell

I believe the "python way" and "pythonic" code that the community pushes is great for newbies, since it gets them thinking about not just writing code, but writing it idiomatically.

This, in turn, makes most think about why doing it the "pythonic" way works better.


>Always remember going from something difficult to simple is easy, but going from something simple to difficult is not easy.

Great. There are plenty of hard working people to make this "piece of wisdom" completely worthless.

>Starting out with Python as a newbie is disastrous.

... why?


>My own experience is that there are PLENTY of programmers who have used, say, PHP and Python, but who you wouldn't want to touch your C codebase in a pair-programming session.

There are plenty of programmers that wouldn't want to touch C to begin with, regardless of past experience. Anyway, why would you ask somebody with no experience in C to develop your C codebase?

>If you've used C++ and C, then sure, you can probably jump to just about any language.

Nope.


>why would you ask somebody with no experience in C to develop your C codebase?

The message I replied to said:

>It's quite strange to me that people would identify as or look for a "[language] programmer".

So what I was saying was that you would want a C programmer to develop in a C codebase.

>There are plenty of programmers that wouldn't want to touch C to begin with, regardless of past experience.

I wouldn't want to do a lot of original development in C myself. I have used C -- and assembly language -- to do nontrivial things in the past. But I'm ready to be done with C myself.

>>If you've used C++ and C, then sure, you can probably jump to just about any language.

>Nope.

Umm...[citation needed].


I doubt that. Learn a language's syntax and semantics, perhaps, if it's not C++. Learn its standard library so one uses it aptly rather than re-inventing, less likely, depending on the size of that library. Become proficient in the language's idioms, know its gotchas and what are the more efficient of the choices it presents, less likely still.

Code in it, yes. And I agree with the polyglot.


>>Any serious programmer should be a polyglot by default.

The quality of a serious programmer is to get things done. Not tool tricks.

When I go out to hire a carpenter, I just ask him to show me some of his work. If I like it I buy from him or hire his services. I don't go around and ask him to demonstrate how he uses a hammer.


Getting things done means using the best tool for the job. Sometimes that's a language outside of your immediate preference, because that's what the best tool/library/platform is written in.

Today I wrote Java (an LDAP server plugin) and assembly (for unit testing some code that has to minimally interpret x86 assembly). I'll also occasionally need to write some JS/HTML/CSS for the web, work on systems programming for other architectures (ARM, AVR), write kernel code and drivers (C/C++), write/extend a Python script (HTTP test client), write Tcl (to drive simulavr), ObjC (iOS apps), and occasionally write some functional code for fun (I have no business case for it; sometimes it's just nice to work on something clean).

I've done and do all those things (and more) not because I have 'tool tricks', but because in nearly every case, I needed to use the tool most suited to getting things done in that problem domain.

Programmers that refuse to adapt to the given problem domain do their users a disservice. It's like the old joke about which nationalities run heaven and hell -- the worst possible people set to tasks for which they're innately ill-suited. However, programmers can learn more -- if they're willing -- and adapt to better suit themselves to the problem at hand.


He doesn't use a hammer though, he prefers to use his screwdriver for everything. What you didn't see from the pictures of his work is that they have now burnt to the ground.

Burnt to the ground on purpose because there were too many bugs or it became an impossible to maintain tangled mess of meaningless symbols.

By the way, since he only uses his screwdriver and you project requires nails, he'll take 5 times longer than your deadline.

Your end product will work for about 2 months, then you'll pour the gasoline on it and burn it down yourself.


Absolutely. Even if you're a blub grandmaster, you really ought to know a dozen other things fairly well.


>> Any serious programmer should be a polyglot by default.

I can pick up a new language very fast but won't consider myself a polyglot (know a handful of languages). I do still consider myself a serious programmer.


> Any serious programmer should be a polyglot by default.

Fully agree.

At the end of my CS degree, I was able to code in:

- Pascal

- C

- C++

- Prolog

- Smalltalk

- Camllight

- Java

- Assembly (x86 and MIPS)

- PL/SQL


Many programmers are religious about their languages.


> I feel[1] that a smart/talented C/C++/anything developer can go from someone who has never seen or heard of golang to a proficient and productive Go developer in a matter of a few weeks, maybe even _days_, if not less.

Agreed! My career is mostly database development. Very high level. Yet I felt comfortable in Go in a few weeks. In a few months I was handling low-level stuff (to me, anyway) like tweaking Go's web server. Go's official online documentation is excellent, although I wish it had lots more examples. The next best learning resource I have on it is the book Programming in Go (also get the Go Programming Language Phrasebook).


Re [1]: I think that's a reasonable statement. My issue with it, however, is that after just a few weeks (or even a few months), said developer might still not know about various design pitfalls or best practices in the new language.

In short, they might not be writing idiomatic code, and you'll end up maintaining it for a long time. IMHO, becoming basically proficient with a language is very different to being able to create software which will scale well to large teams, while also aging gracefully over time.


In general I agree, but I think it's important to note that while a lot of language skills are interchangeable, going from an object-oriented language to a functional one is a pretty big step.


It took me 1 week each to go from 0 to being productive in javascript & ruby and it took roughly the same time for some other developers in my team. I think it's more about knowing where to start, knowing various paradigms, etc than the language itself.


Thanks for the links man.


Yeah, michaelochurch has talked about that one before.


wow. these are very good... thanks.
