Everyday hassles in Go (crufter.com)
239 points by friendly_chap on Dec 30, 2014 | 291 comments



I couldn't agree more with this article.

I did some Go a year ago and liked it. Then, coming back to it a year later after having done some functional programming in Clojure, it's not just the lack of generics that disrupts my flow but also having to think about all sorts of imperative details (naming and creating variables, placing scopes) in cases where a functional language would make them unnecessary.

I feel like I have been tainted by functional programming, and now Go feels like a joyless programming language, which inevitably affects productivity.

I am looking for a replacement programming language to solve the small problems for which Python/Ruby have traditionally been used: good file system, stream, and networking APIs, but with concurrency as a first-class citizen, garbage collection, and a fast startup time. It doesn't need to be good at long-running programs, but it would be great if it could evolve to play a recurring role in a large system/process without the hassle of having to duplicate every written line of code in tests to prevent small changes from breaking things.

I am considering giving Haskell a try, but wondering if it might be overkill.

Any suggestions?


I think Haskell is a possible alternative to Go/Python/Ruby for these kinds of tasks.

But nothing's perfect. Some possible annoyances you might encounter:

* Much of the functionality required to be a valid "Python alternative" is not in the base packages. You'll have to install extra packages much sooner than with Python, and you'll have to know which packages to install.

* Despite having proper sum types, most I/O libraries in Haskell rely on exceptions for signalling errors. And Haskell has no proper stack traces. If you are not careful, you might find yourself pining for Go's multiple return values.

(Some would add "having to use the IO monad" to the list of annoyances, but I think it actually helps.)


I found OCaml to be a more flexible alternative -- I can sprinkle printf for quick debugging without having to alter types. (I think this is what you mean by "having to use the IO monad". I didn't work with Haskell much, so it's possible there's an easy way around this there.) And stack traces in OCaml are reasonable, though not quite as verbose as Python's.

But your point about needing to install extra packages stands for either language.


You should look into Debug.Trace for quick printf-like debugging. Usage: trace ("error occurs here") (f x)


OCaml is currently my language of choice. I love functional programming, been doing it for a while now, but it's nice to know that if I read an algorithm in a textbook, I can just implement it as-is instead of doing a translation to be purely functional (and sometimes having to fight to get back the asymptotic complexity). The module system is also absolutely great, and should be copied shamelessly by more languages.


Haskell doesn't ban imperative programming, it simply distinguishes it in the types.


True, what I mean is that I can be "messy" in OCaml for the sake of expediency. In principle, I prefer Haskell's clean approach, in practice I'm a bad programmer who sometimes does naughty things.


Well, Haskell does force you to go and update your types to reflect your effects, and to fix the use sites to do proper lifting.

I don't feel this costs much, though, so even in quick&dirty mode, I feel that Haskell is quite productive enough.


Proper sum types aren't enough to replace exceptions.

Imagine every IO action having its own error type in a sum type. How do you compose them?

The different error types wouldn't unify when composed.

The best attempt I've seen in Haskell is the control-monad-exception[1] library, but I'm not sure how well that would interact with everything.

[1] https://hackage.haskell.org/package/control-monad-exception


> Imagine every IO action having its own error type in a sum type. How do you compose them?

Perhaps by converting each error type to a common sum type. Something like "Either (Either error1 error2) result". Typeclasses like "Bifunctor" are quite useful for massaging the errors around:

http://hackage.haskell.org/package/bifunctors-4.2/docs/Data-...

Asynchronous exceptions do pose a problem for this approach though, as they can pop up in any function at any time. And perhaps a fully sum-type-based approach would clutter function signatures too much anyway.


Check out how Rust handles error composition using its Result type.


Elixir sounds like it might be worth a look. Nice for scripting and as good as it gets for concurrent programming. Some of its design is inspired by Clojure, so you might feel right at home. http://elixir-lang.org


> I am looking for a replacement programming language to solve the small problems for which Python/Ruby have traditionally been used: good file system, stream, and networking APIs, but with concurrency as a first-class citizen, garbage collection, and a fast startup time.

You just described Go, mostly. I can't think of a more fitting language given your stated requirements.

To your first point: I don't know what Go has to do with your preference for functional programming over imperative. Go doesn't claim to be a functional language, so I'm just not seeing the connection.


Yes, I did describe what Go is good at; that's because I am trying to find something to put up against what I currently use it for.

Let's say I prefer Haskell's syntax/approach. Coming from Go, what could I be missing in Haskell?


I quite like F# - it's similar to Haskell, but since it's a .NET language, you have an extensive standard library and tons of 3rd-party libraries. It works great with C#, so you can switch between functional and OO programming depending on your domain, while keeping the same standard library and technology stack.

Also, you get the full power of Visual Studio (they recently changed it to be completely free), which can really help with productivity in my experience.


The ability to easily solve the problems Go was made to solve (mainly server programs and OS utilities). Those problems are often inherently easier to express in an imperative style (because they deal a lot with the external, imperative world and all its nasty aspects).

You might also lose the ability to know how much memory and time a given task requires at most, which can be a problem when dealing with servers (it's much harder to know how many requests you will be able to serve per second, or how long a given request will take to be handled, for instance).


Fast compile times, for one.


> solve the small problems for which Python/Ruby have traditionally been used: good file system, stream, and networking APIs, but with concurrency as a first-class citizen, garbage collection, and a fast startup time

I think for the most part, you won't be missing a lot in Haskell. If you are looking for what you describe above, Go should satisfy your needs greatly. Haskell is amazing for functional programming needs - fast prototyping and great at mathematical calculations. It helps you think in another way, but I certainly wouldn't use it for building something like an API or networking code.


While it's kind of off-topic, let me still ask: what are the problems you see in using Haskell for networking? I see that building a library with the C calling convention might be hard, but a web service must be pretty language-agnostic.


I'm not sure why xasos says that. For me Haskell has been great for APIs. Regarding calling C, the Haskell FFI is very easy to use.

Try Nim, OCaml, or Haskell to replace Go, in that order. I've ordered them by probable familiarity.


Calling C is easy. Calling from C might not be.


> I am looking for a replacement programming language to solve the small problems for which Python/Ruby have traditionally been used: good file system, stream, and networking APIs, but with concurrency as a first-class citizen, garbage collection, and a fast startup time.

You may want to have a look at ClojureScript, which has come a long way in the last few years. While the tooling is still not as nice as Clojure-on-the-JVM in my opinion, it continues to get better every day--this week support for a better REPL via NodeJS[1] was introduced, for instance. A lot of smart people are putting good effort into the compiler and ecosystem and I suspect we'll see a continued uptrend in cljs adoption. Anyway, it may be just what you want: an elegant, productive, pragmatic language with first-class support for concurrency (atoms, core.async) and well-suited to short-lived and scripting-domain problems. (Bonus: you can write your frontend and backend in the same language if you want!)

[1] http://swannodette.github.io/2014/12/29/nodejs-of-my-dreams/


The "REPL via NodeJS" is amazing!!! I want to try ClojureScript on NodeJS for a long time. Now I can play with the REPL.



If you don't need Windows support, OCaml with Jane Street's Core is pretty good. The Async module handles the concurrency bits, and the Pipe module handles the streaming parts.


Note that for now OCaml is still single-threaded, so the concurrency is cooperative, similar to how it is in Python or NodeJS. BTW, I'd recommend Lwt instead of Async because it's more popular (kind of sad that there are two competing concurrency libs, though).


Have a go at Hy (hylang.org). You can have your cake and eat it too. :)

Other than that, I'm fiddling with Lisp Flavored Erlang (lfe.io), but I've had previous Erlang exposure.


That looks pretty interesting, but I don't have that much experience with Python, so the benefits are diminished.

I am looking for a runtime which can take full advantage of multiple cores for a single program, something I believe Python would fall short at.


You may try using Scala as a scripting language, like this: "scala myScript.scala". The first startup is slow, and even after that it may be slower than Python/Ruby, but it still may be sufficient for your use case.

Using "sbt ~run" (re-runs your program automatically at every file change) may also be an option.


I don't see a reason why Clojure couldn't achieve the same startup time. But I am trying to avoid the JVM startup time itself, so that leaves out Scala/Clojure.

I'd like to have a fast startup time when, say, running a script in its containerized environment; when it is done the container exits, so I can't leave a VM running.

So I believe that leaves out JVM languages.

Independently from that, I did try Scala before Clojure and found its grammar to be overkill and unintuitive so it wouldn't be my first choice.


This question is not easy. Most languages I consider well designed do not become mainstream - which comes with a host of problems. Nowadays I wonder if having no type system at all is better than having a broken one, so I am considering picking up a mainstream dynamic language (JS being an obvious candidate - but the runtime puts me off).


JS could fill the role, but both the runtime and the language are less than ideal; we end up dealing with a lot of browser baggage.

JS is still my tool of choice for building web apps. As for long-running services, Clojure is a no-brainer for me.

I am pretty open to ML or LISP based syntaxes, not necessarily looking for something Clojure-like.


Go has first-class functions; if it had generics, it'd be trivial to put together libraries for most of the functional-collection goodies, while still having the imperative style available for when you care about performance or when it's just simpler for the problem.
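For instance (a sketch in today's Go; MapInts is a made-up name), even the most basic combinator has to be duplicated per element type:

    // MapInts applies f to every element of xs. Without generics, a
    // MapStrings, MapFloat64s, etc. would each repeat this same body.
    func MapInts(xs []int, f func(int) int) []int {
        out := make([]int, len(xs))
        for i, x := range xs {
            out[i] = f(x)
        }
        return out
    }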


This, and using something like `Either` / `Result` instead of the `result, error := MethodCall()`. (Alas, the authors of Go seem to consider notions like 'monad' or 'Kleisli category', both pretty simple, impractical.)
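To make the trade-off concrete, here's a hedged sketch of what an Either/Result style type has to look like in generics-less Go (Result and AndThen are invented names): the payload decays to interface{}, so callers must type-assert, and the compile-time guarantees that make the pattern attractive elsewhere are lost:

    // Result is a hypothetical Either-style value: a nil Err means success.
    type Result struct {
        Value interface{}
        Err   error
    }

    // AndThen chains computations and short-circuits on the first
    // error, like Either's bind.
    func (r Result) AndThen(f func(interface{}) Result) Result {
        if r.Err != nil {
            return r
        }
        return f(r.Value)
    }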


Words like 'monad' and 'Kleisli category' (which I've never heard of) absolutely REEK of impracticality to me, regardless of the concepts.

There's a rule for academic papers where if the author is writing in clear language, he has something interesting to say, and if he's writing in fancy academese, he's probably saying absolutely nothing.

I don't really see a huge additional value in your proposed method call semantics; can you explain more? Some(Result) can make nil go away, but you've still got the error to deal with. So you're talking about some type system where a single value encapsulates Result/Nil/Error?


These words are merely unfamiliar. Do you think that 'pointer arithmetic', or 'open-closed principle', or 'abstract class', or 'exception propagation' sounds less intimidating to a layman? But in reality none of these is complex after you've put in a small effort to comprehend the notions behind them.

Specifically, try reading these completely un-academic un-papers.

'Either' monad, with nice kid-friendly pictures: http://fsharpforfunandprofit.com/posts/recipe-part2/

Bonus: Martin Fowler, of all men, endorses the above approach, in Java, of all languages: http://martinfowler.com/articles/replaceThrowWithNotificatio...

A Kleisli category, with examples in down-to-earth, compact, un-hairy C++: http://bartoszmilewski.com/2014/12/23/kleisli-categories/

This stuff is not hard and is utterly practical. It can very well be used in current industrial languages.

The worst service you can render to yourself is to start thinking that you already know all that is worth knowing, and to close your mind to new concepts. Guys who thought that Fortran and Cobol were enough to get by for the foreseeable future were technically correct: both are still in some demand! Unfortunately, their market share has dramatically shrunk, deservedly, as have employment opportunities.


I've been trying to get my mind around this kind of stuff for maybe a year. And I have to say that monads don't look like the solution to any of the problems that I actually face or have ever faced, in a thirty-year career.

So, no, it's not just that the words are unfamiliar. It's that the problems that they're trying to solve are so abstract that they're completely disconnected from the work that I actually do.

And before you tell me that I'm really doing this stuff without realizing it: Maybe so. That doesn't mean that my life would be improved by doing it explicitly.

It seems to me that the "everything is category theory" approach has the same over-abstraction problem that the Java "AbstractFactoryFactoryFactory" approach has, and deserves to be as frequently mocked.


When I found out about monads it was after I'd already implemented the same abstraction. It actually comes up quite a lot (async calls, error handling, audit logging, database access). http://m50d.github.io/2013/01/16/generic-contexts.html . Just like when I was told about "the visitor pattern", my reaction was mostly "that? I've been doing that for years. So that's what it's called".

Still, it's worth knowing the name for a concept so that you can talk to people about it, and it's certainly worth knowing the library tools that exist and can save you time. Pretty often I'll write two or three lines of code, look at it, and then realise "oh, that's just traverseM" or some such. Even if it doesn't save me any time writing, replacing three lines with one library call is a huge bonus to maintainability.


Did you ever work with a list? Did you ever do things like

   foo = [bar(x) for x in quux]
Or did you ever flatten a nested list?

Congratulations, you've been using one of the most widespread monads.

Monads are not artificial constructs. Monads, like functions, or loops, or conditional statements, emerge naturally when you try to describe a computation, even a simple one. It's like speaking prose without knowing it.


    foo = [bar(x) for x in quux]
This is effectively a map, so it's strange to call it "using monads". "Using functors" would be more accurate.

You can get "monad" out of list comprehensions though, when you have multiple "for" clauses.

    foo = [y for x in quux for y in bar(x)]
seems equivalent to foo = quux >>= bar


No, actually, I don't do things like that (see my parallel reply to codygman).

The Haskell types often seem to think that the computing they do is the same as the computing that everybody else does, and therefore that the problems that they want to solve are the ones that everybody else wants to solve. That's not a very accurate assumption.


> The Haskell types often seem to think that the computing they do is the same as the computing that everybody else does

Seeing as Haskell is used for general computing (including embedded software), the computing is the same. I know what you are getting at, but I don't think it is based on facts or anything rational.


Just for reference: what are the kinds of problems you solve?

Note that the code I quoted above is Python, and I have lots of stuff like that in my mundane production code (processing files, tracking transactions, counting money, etc).


If you give some examples of the work that you do perhaps we can make some of the abstractions concrete.


Perhaps you could. That's not the same as making them useful for me, though.

I work primarily in embedded systems, where sequence is critical, many things are stateful, and there are multiple threads of control. So, for example, I could take the sequential aspects and re-write them as a monad. Or I could just do nothing, and let them be sequential imperative code.

I know that in a pure functional world, sequence can be written as a monad. But why should I care? In my world, it does nothing for me. I'm not going to try to write imperative code in a monad to build an illusion of non-sequential purity when the essence of what I need to be doing is so sequential.


One neat thing you can do with monads that's useful in your setting is coroutines/generators as a library. You get to write code that looks like regular sequential code, and behind the scenes it gets expanded into the "callback hell" you would have had to write if you had decided to code things in an event-driven style by hand.

Anyway, I do have to say that the monad abstraction is not that useful outside of Haskell, because you really need type classes to get monads "right" and most languages don't have them. What you might end up with is specific monads, like promises in JS, but then the monads are just a design pattern instead of a concrete abstraction like they are in Haskell.

In addition to that, in Haskell monads are the only way to do sequential computations with side effects, because by default the language is lazy. This means that Haskell programmers have to learn this abstraction whether they like it or not, while in other languages you can more easily avoid it :)


The purity is not an illusion though. Writing those sequences as a monad gives you the ability to compose actions, creating a new sequence from code you've already written for one.


But that's not a net win for me, compared to just having the action in a function that is called from several places.

Worse, composing the actions means I have to think through how the composing is going to affect the sequencing of events. Just having a function doesn't give me that problem - the sequencing is explicit, not hidden, and therefore is easier to reason about.


Can you give me some example code? I think our discussion would be a lot more productive and mutually beneficial with some code examples/output.


I can't give actual example code (proprietary, company confidential, blah blah blah). So I'm going to have to come up with some fake example.

[Thinking...]

How about a robotic hand that's supposed to pick up a ball. The (C/C++) code might look like:

  void pickUpBall()
  {
    openHand();
    moveHandToBall();
    closeHand();
  }
You have to have that sequence. If the hand isn't open, moving it to the ball just bumps the ball away. If you close the hand before you move to the ball, you don't pick up the ball, you just close on air.

So, in Haskell (if I had the hardware control libraries), I could put that in a monad to guarantee the sequence, and in the IO monad so that I could do things that have side effects, like affecting the external world. But in fact I get both of those for free, just by not writing it in Haskell. (The pure functional nature of Haskell also doesn't do much for me in this situation. Yes, for the functions that can in fact be purely functional, it can make them easier to reason about and prove correct. But that isn't the difficult part of the code.)

Composability: It's unclear to me how you're going to compose something like this in any way that's a) useful in this environment and b) dramatically different from simply making a higher-level function that calls pickUpBall() and some other functions.


Cool, now we have something to work with. However at such a high level no paradigm's differences (much less pros/cons) would be apparent. Would you mind implementing those individual functions?

Actually it might be easier to use some open source embedded code.

For completeness' sake, I'd write the snippet above almost identically:

    pickUpBall = do
        openHand
        moveHandToBall
        closeHand


OK, now you've got me really confused. Is that Haskell code? I thought that Haskell couldn't do sequence except inside a monad. (Or is "do" a monad?)

But if that isn't a monad, then why did you write it that way? My original claim was that a monad wouldn't do much for me in my world, and if you wrote this without a monad, that kind of seems like you're agreeing with me.

You said that you'd gain composability from writing this as a monad. In this example, how would that work? What would it buy me that I couldn't write (as easily) using what to me is the normal way?


It would most likely be in the IO monad since IO would be happening.

As for whether monads would help your code (and I still think they would), I'll need a larger example. In a small example, abstraction and reuse aren't really apparent.

If you can provide a larger example, we can find out who's right ;)

Alternatively I'll try to find a larger example if you can't or don't want to.


Well, I just made up this example, so feel free to embellish it...

But as you do, remember the question. We're in an embedded system, with lots of situations where sequence matters, and lots of stored state. The question is, what do monads buy me in that environment?


While I understand your gut reaction, the fact that those words are unknown to you does not imply that they cover complicated concepts. It's just a new word, nothing more.

A monad is just a way to establish a (type-verified) set of rules for a sequence of functions. This set of rules can be, for example, about the management of the return values of these functions: executing the next function if we get a result, returning immediately if we have an error.

What is powerful about monads is their genericity. They can be applied to many different contexts and still provide the same type safety.

You're likely already building very limited/specific monads without knowing it.


Why should words you've never heard of reek of anything to you? You've never heard of them; you have no idea what they mean!


You may want to take a look at Pixie.

"Pixie is a lightweight lisp suitable for both general use as well as shell scripting."

https://github.com/pixie-lang/pixie



F# ?


Scala is pretty nice. Concerning the JVM startup time: It's usually in the range of milliseconds for your use-case.

What gave the JVM a bad name regarding startup time are large JavaEE application servers which had dependencies to hundreds of MBs of libraries. In this case, startup is slow, otherwise: not so much.


Although I'm still very unsure about Rob Pike's argument that Go doesn't need generics since it has interfaces, a recent experience:

After having implemented a mini web service in Go and being fed up with its limited type system, I decided to stop coding in Go and start recoding my project in Java using what is often advertised here as the most minimal framework: Dropwizard.

Well, I downloaded the framework, configured Maven, started the hello world tutorial, and quickly found myself dealing with 2 XML files, 1 YAML file, multiple class files everywhere, and couldn't get the whole thing to work immediately.

Then I realized that my Go implementation was already working, and that I completely understood my code and its potential performance characteristics, without needing deep knowledge of JVM internals, things like servlet containers, or how the whole stack would deal with my global variables in a multi-threaded context. All my code was just one simple program.

My temporary conclusion right now is that Go's type system is indeed extremely shite, but that if what you need is to develop simple yet efficient web services, it doesn't matter that much.


Although I am a Ruby programmer and would hate to have to work in Java, I do adore C# (it is a near-perfect language). And I feel I need to defend this particular aspect of statically typed, pre-compiled languages.

1. Dropwizard is not a minimal web service library, it's a minimal web service framework. You could've used just a library, for example this[0] one, and be done in 10ish lines of boilerplate that you fully understand.

2. That would also mean you wouldn't need Maven or any XML configuration. Just a bash script or a Makefile that invokes your compiler.

3. There are three ways to deal with global variables in the world:

* Plainly allow them, and let everything go to shit when two threads access them at the same time. That's how most languages, including Java, do it.

* Allow them, but never allow more than one line of execution at the same time. This is how many scripting languages, like JavaScript and Ruby, do it.

* Don't allow global variables at all, and instead only expose shared state through mechanisms that explicitly deal with concurrency, like pipes/messages/locks. This is how Haskell and other pure languages do it. (There's also STM, but that's a bit more complex.)

Once you know which of the three your language does, you'll know exactly how to deal with it. No need to be uncertain about it.

[0] http://docs.oracle.com/javase/7/docs/jre/api/net/httpserver/...
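Go, for what it's worth, encourages a flavor of the third approach ("share memory by communicating"): keep the variable owned by a single goroutine and expose it only through channels. A minimal sketch (the balance example is made up):

    package main

    import "fmt"

    func main() {
        deposits := make(chan int) // requests to change the state
        balance := make(chan int)  // requests to read the state

        // The only goroutine that ever touches `total`.
        go func() {
            var total int
            for {
                select {
                case amount := <-deposits:
                    total += amount
                case balance <- total:
                }
            }
        }()

        deposits <- 100
        deposits <- 50
        fmt.Println(<-balance) // prints 150
    }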


I don't think Java will get many defenders here. But try that in Scala, with Spray. There are a couple of lines of boilerplate to create the actor system, but that's about all. And you get a system that's flexible enough to let you write route definitions that look like a config file, but everything's typesafe. Your routes are just functions; you can refactor them like ordinary code. So too are the kinds of "cross-cutting concerns", like authentication or async calls, that you'd otherwise have to either build into your framework or add to all the methods where they were needed via some kind of "magic" (AOP, monkeypatching). https://github.com/spray/spray/tree/release/1.1/examples/spr...


> use some kind of "magic"

No, it doesn't use magic, but it does take a language that is burdened with a complex type system and throw it out the window.

You're also glossing over a huge host of complexity around build/deployment. What JVM are you targeting? Is it on the servers you are deploying to? Are you going to make a fat jar? If not, how are you doing dependency resolution? Is your build artifact something that sbt does out of the box or slightly different (lord help you if it is)? Are you going to do Ivy, S3, or Maven resolution?

All of that complexity falls away with Go, and this is a conscious choice of the Go team. I have lots of things I love about Scala and lots of things I hate about Go, but from a "get a simple web service out the door in a scalable way" standpoint, Go wins hands down.


> No it doesn't use magic, but it does take a language that is burdened with a complex type system and throws it out the window.

What are you talking about? Spray works hand-in-glove with the Scala type system; would, in fact, be impossible in a language without it.

> You're also glossing over a huge host of complexity around build/deployment. What JVM are you targeting? Is it on the servers you are deploying to? Are you going to make a fat jar? If not, how are you doing dependency resolution? Is your build artifact something that sbt does out of the box or slightly different (lord help you if it is)? Are you going to do Ivy, S3, or Maven resolution?

If you're just playing around, you push the button in your IDE and it runs. If you're a serious business, you make these decisions once and reuse them in every project. (To answer your questions explicitly: the default is to build for Java 1.6, which has been on servers since before Go even existed; if you're building for a newer version it's because you know what you're doing. My personal choice would be the Maven appassembler plugin, but using the shade plugin to make a fat jar works fine too. I wouldn't touch sbt, it's too complex.)

Yes, you do have to make some choices and put a bit of effort into deployment. But I think that's a necessary cost, because it's the only way to allow tooling to evolve. Imagine if ant had been built into the JVM back in 2000; maven would never have been able to replace it, so we'd be stuck with it forever. Look at the Python standard library: when Python first got big, it was this great selling point, full of really useful tools. Now, it's where modules go to die, because things that are built into the language can't evolve at the same pace as things outside it. I fear exactly the same thing will happen to the Go tooling (though of course, we won't know one way or the other for ten years). If I can think of one language that really emphasised an "easy deployment model", it's PHP. Not only did that approach lead to terrible deployment practices, but now that the landscape has shifted and Apache is no longer universal the way it once was, it's not even particularly easy to deploy PHP any more.


> What are you talking about? Spray works hand-in-glove with the Scala type system; would, in fact, be impossible in a language without it.

In the example you linked to: https://github.com/spray/spray/blob/release/1.1/examples/spr...

The "main" function dispatch is not type safe (nor can it be given Spray's reliance on the current akka type model).

My experience is that anything that relies on Akka ends up throwing out a huge majority of useful type safety, because it is hard to minimize that. A bunch of the advantages you mention around AOP and not monkey patching are due to this.


Ah, my bad. https://github.com/spray/spray/tree/release/1.1/examples/spr... (or even https://github.com/spray/spray/blob/release/1.1/examples/spr... if you want the very simple case) are better examples, that use the lovely type-safe routing DSL.


Java's problem is a problem that Go won't escape from. Java is a language with baggage tacked on. Its standard library is bytecode-compatible with Java 1 classes. With a small wrapper class, the applet that I wrote in my introduction-to-Java class 14 years ago works perfectly fine.

The configuration problems you mention are the result of these libraries' need to be compatible with older versions of Java.

If you write your own data structures in Java, there is nothing preventing you from fully utilizing Java 8 features. Of course, that makes everything unusable in Java 7, which Oracle does not consider an acceptable sacrifice, and won't for decades.

The same is true for C++. When it was designed it was a magnificent advance (and it hasn't fallen for the particular "enterprisy" traps Java has fallen for). But then the language was redesigned, and redesigned. RTTI was the first major clusterfuck that they decided to make partially binary compatible, exceptions the second, and now lambdas the third. You may disagree on what exactly is a clusterfuck and what isn't, but I hope you can agree on the deeper problem: baggage. Various other things matter more to other developers (like pthreads, winsock, ...).

Writing a program from scratch in C++11, not using any of the old libraries ... is almost as pleasant as using rust. But most libraries can't update, because they'd lose too much doing so.

Go is on its first design of its standard library, and everything works reasonably well with today's standards ... this is of course not so much a feature of the language as it is a result of when it was constructed.

When the winds turn again, and they will, Go will look as antiquated as Java and C++. Actually, given how well C++ has handled this in the past, it will probably look more antiquated than C++.


There have been some fairly involved discussions about generics recently on the golang mailing list[0][1]. I think the Go team isn't really opposed to generics at this point, but the high-level ideas about them have basically all been proposed already.

The only way generics (however you choose to define them) will be added to Go is if someone puts together an in-depth proposal about how they work, beyond just syntax: how they will work with the compiler/linker, and how they will work with the runtime and GC.

Due to the amount of time it will take to put together such a proposal without the guarantee that it would be added, no one has gone down that path yet. You also need a deep understanding of many parts of the language to be able to produce such a proposal, which is another barrier.

[0] https://groups.google.com/forum/#!topic/golang-nuts/L-2OTItS...

[1] https://groups.google.com/forum/#!topic/golang-nuts/smT_0BhH...


Not to mention deployment. No JVM to configure. Or application server...

Minimally, just drop the binary and an init.d script. Feel free to use something fancier for manageability etc.; the point is that the simple option is available.


You don't need to configure the JVM as in many cases the defaults are fine. And most of the frameworks in use today do not require an application server.

Go is definitely simple. But I wouldn't characterise modern-day Java development as being that complicated.


Well, you don't need the JVM and any dependencies it pulls. That's also worth something in the era of "cloud" services. Less to install. Less to update. Ideally you only need to update the deployed application. Which in case of Go is that one binary.

Another big thing is that the same app written in Go just takes (significantly) less RAM than when written in Java. Go has arrays (and slices) of structs, not just arrays of struct pointers (in Java-speak: object references). This also dramatically reduces short-lived garbage. Additionally, Go strings consume less memory, as they are UTF-8 in memory by default. In Java, you can start the JVM with the -XX:+UseCompressedStrings option, but that adds branches and still falls back to UTF-16 when a string contains non-ASCII characters.
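To illustrate the struct-array point (a small sketch; Point is a made-up type):

    package main

    import "fmt"

    type Point struct{ X, Y float64 }

    func main() {
        // One allocation: 1000 Points laid out contiguously.
        inline := make([]Point, 1000)

        // The Java-style equivalent: a slice of references plus 1000
        // separately allocated objects for the GC to track.
        boxed := make([]*Point, 1000)
        for i := range boxed {
            boxed[i] = &Point{}
        }

        fmt.Println(len(inline), len(boxed))
    }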


What does memory usage have to do with ease of deployment? Congratulations, Go uses less memory. Is that really a bottleneck these days?

And sure, you only need to replace one binary. Java can do that too, but it also has the flexibility to hot-deploy new code/libraries, so no costly outage.


Yes, computing resources, including computing power, memory usage and storage are limiting factors and they do increase cost. One instance - who cares. 1000 instances and you might care. Something you ship for other people to run? End users might care.

Give me ssh, and I can deploy my app to a default installation, like CentOS 6, in a second. That includes binary transfer, installation, and startup.

Hot deploy is nice. Sadly, Go is one of the only language environments where you just can't do it. Do watch out, though; the issues you'll encounter may be rather unique, especially when applied in production...


Java also has arrays. And on heap/off heap RAM allocation.

We deploy all our services as uberjars, we have a single Amazon AMI with JDK 8 installed. It's pretty frictionless. It goes from a simple Gradle project, to a single jar on S3, and pulled from S3 by our AMI. Automatically launches via an upstart script.

I'll give you less RAM usage for sure. But when I'm launching a whole VM for a service anyways, I don't care that much. I tend to lean toward Go when I want a low-footprint coprocess for another service, or other small apps.


Comparing Java's off-heap allocation to Go's (or the CLR's) structs is pretty misleading, isn't it? I mean, there are all kinds of things you can do off-heap to reduce GC, but the other platforms make it easy.


Java doesn't have arrays of Objects or any Object derivatives. Only elementary types can be stored in arrays, of which the reference type is one. So when you say:

  Integer[] boxedInts = new Integer[5];
You're allocating an array of references with a length of 5, so usually 5 * 4 + 8 bytes. Additionally, each of the referenced Integer object instances takes up 16 bytes. Multiple arrays (or reference fields) can refer to the same object instance.

The example was about Integers, but it's basically the same story for FactoryWorkerPersons.

Java doesn't have arrays of objects. Only arrays of elementary types, up to 2^31-1 entries.


I see you use both Groovy (for Gradle) and Go. Do you have any installation problems with version management using "GVM", the name chosen for the popular "Go Version Manager" and then used again for VMware's "Groovy Version Manager"? The Groovy project managers haven't made it clear whether they're going to fix this problem, or even explained why they used a name already in use. That project seems to have lots of issues over choice of names.


Another big benefit of Go over Java is licensing costs. While it may not matter to a company that is just a website, it does matter if you are shipping hardware with services running on it. With Java you have to pay per device shipped. Go, from top to bottom, is BSD-licensed; there are no costs to ship apps with it.


I have to say that I am tempted by Go precisely because I see a lot of comments like yours; namely, it seems to just "work" in a very simple way. I find myself somewhat overwhelmed by the avalanche of new language opportunities I am supposed to invest time in. Most seem to have this huge kitbag of functionality, and they seem to merge into each other, making it quite confusing which one I should go to. And with technologies such as LLVM, it seems to me we're only at the beginning of the new-language storm, where every CS postgrad wants to create a new language that does everything.

Go seems to have a very clear and unique proposition in this environment: sparsity. This seems to me to be a very good differentiator, and to the extent that the language were to move towards all the other languages and try to add features, I personally would be disappointed. This is not spoken at all as an expert in Go, quite the opposite. I simply love the clean imperative aspect Go seems to have, and it is why I am very tempted to learn it.


Go is the most boring language I have ever coded in, and that is why I love it. Sure, it doesn't have some fancy bells and whistles people seem to want, but when you can sit down and write something functional, portable, and readable fairly quickly, I'd say that is a huge win.


You know you could replace "Go" with "Java 1.0" and your comment would still be accurate. Funny that.


This is old and tired. Go has a ton of features Java 1.0 didn't. Hell, Go has features Java 1.7 doesn't (possibly 1.8 too, but I'm not as familiar with it).


Actually, it isn't the same argument as Go == Java 1.0. It's that Java 1.0 had similar goals and made similar design choices to the ones Go is making (and got similar responses from developers). In many ways this rings true. Java 1.0 and Go do share many of the same goals.

A couple of areas where it really falls down, though, are that Java 1.0 lacked the emphasis on tooling, which is Go's greatest strength, and added a goal of "write once, run anywhere", which at the time seemed very important but now adds a lot of baggage.


Kudos. That is perhaps the best reasoning I have ever seen for this comparison. I agree totally, though I don't think most people who make the comparison are thinking along the same lines. I think they generally just mean "compiled, GC, no generics".


"Functional"? I suppose you meant "procedural" :)


Something that works.


The article author points out, correctly, that Go code has far too much "interface{}" in it. That's an indication of a real need for more power in the type system.

Generics may be overkill. Parameterized types may be enough. The difference is that you have to explicitly instantiate a parameterized type. Generic functions and classes get instantiated implicitly when used, which gets complicated. (See C++'s Boost.)

Go already has built-in parameterized types - "map" and "chan". Those are instantiated with "make", as with

    p := make(map[string]int)
That's an instantiation of a parameterized type in Go. Go already has a full parameterized type mechanism. It's just that users can't define new parameterized types - all you get are "chan" and "map".

For parameterized types, struct definitions would need type parameters. A parameterized type would later be instantiated explicitly in a type declaration, with the parameters filled in and a new name given to the generated type. This is more explicit and less automatic than generics. There's no automatic specialization, as in C++.

This is enough that you can write, say, a binary tree library once and use it for multiple purposes. It's not enough to write C++'s Boost. That seems in keeping with the design of Go.
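For contrast, this is roughly what that binary tree has to look like in today's Go (Tree and Insert are invented names): the element type decays to interface{} and the ordering comes in as a function value, with no compile-time connection between the two:

    // Tree is a sketch of a "reusable" tree without parameterized types.
    type Tree struct {
        Value       interface{}
        Left, Right *Tree
    }

    // Insert places v using a caller-supplied ordering. With a
    // parameterized tree type, less could be checked against the
    // element type at compile time instead.
    func (t *Tree) Insert(v interface{}, less func(a, b interface{}) bool) *Tree {
        if t == nil {
            return &Tree{Value: v}
        }
        if less(v, t.Value) {
            t.Left = t.Left.Insert(v, less)
        } else {
            t.Right = t.Right.Insert(v, less)
        }
        return t
    }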


That's not really the full story though. The parameterized type isn't just `map[K]V`; it's more than that. The type `K` needs to be hashable, and not all types support that, e.g. slices do not: http://play.golang.org/p/IKp_I25NW2 It gets more complicated than that too, because composite types can be used as keys, but only if the type does not contain any non-hashable type.

Similarly, if you're going to build a binary tree data structure, then you probably need a way to compare elements. How do you express that constraint?

Standard ML has a similar limitation, which resulted in the presence of an `eqtype`, which is basically a way of saying, "any type T that can be compared for equality." But of course, ML has its modules...

So I'd argue that type parameterization alone doesn't really buy you much. You might say: well, we need bounded polymorphism then. OK. How do you define bounds? Maybe you figure out a way to abuse interfaces (but a bound isn't a type while an interface is a type), or you add a new way of specifying interfaces to the language.

The road to generics is long, complex, and fraught with trade-offs. Blessing a few key parametric types is a really interesting compromise, and I believe it's one that pre-generics Java and C++ lacked in any similarly practical form.
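The hashability constraint is easy to see in today's Go: map keys must be comparable; structs are comparable only if all their fields are, and slices never are. A quick illustration:

    package main

    import "fmt"

    // Key is comparable because both of its fields are, so it can
    // serve as a map key.
    type Key struct {
        Name string
        ID   int
    }

    func main() {
        m := map[Key]string{}
        m[Key{"a", 1}] = "ok"
        fmt.Println(m)

        // Won't compile: slice types are not comparable.
        // bad := map[[]byte]string{}
    }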


True. A useful question to ask is this: what's the minimal parametric type design that would allow writing "map" and "chan" within Go? (Ignoring, of course, that "map" and "chan" both have special syntax; map has a "[]" operator, and "chan" has "select".)

If we needed to write, say, "thread-safe map", or "network chan", with type parameterization, that should be possible.
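The "thread-safe map" half can at least be approximated today with sync.RWMutex, though without type parameters both keys and values decay to interface{} (SafeMap is an invented name; a sketch only):

    import "sync"

    // SafeMap: the locking composes cleanly, but the key and value
    // types are lost to interface{}, so callers must type-assert.
    type SafeMap struct {
        mu sync.RWMutex
        m  map[interface{}]interface{}
    }

    func (s *SafeMap) Get(k interface{}) (interface{}, bool) {
        s.mu.RLock()
        defer s.mu.RUnlock()
        v, ok := s.m[k]
        return v, ok
    }

    func (s *SafeMap) Set(k, v interface{}) {
        s.mu.Lock()
        defer s.mu.Unlock()
        if s.m == nil {
            s.m = make(map[interface{}]interface{})
        }
        s.m[k] = v
    }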


Excellent idea. I think maybe having strict type interfaces for the implementation, so you could roll your own, would be a good idea too; à la Perl's tie: HASH => TIEHASH, FETCH, STORE, et alia; TIEARRAY, FETCHSIZE, STORESIZE, et alia; etc. Obviously then you can have slices, arrays, and maps (hashes, dicts) that can use whatever backing store/behaviour you want/need. This means the behaviour of individual data collections may not be as clear-cut (skeuomorphically), but the upside is more flexibility.


"Every computer language is an imperfect way of describing a potential solution to a poorly understood problem" -- Me

I find it interesting when people wax poetic about how one language is better than another and how if we just did x, y or z then we'd have this perfect solution.

Still, I appreciate the author's write-up of some challenges with Go. In the end, the reality of any "product" is building what the users need, not what they ask for. That said I do worry that the Generics argument seems to be slowly approaching a religious war that will distract people from the other enjoyable aspects of Go. Worse yet, it may calcify the Go development team in a way that will keep them from addressing the basic issues that generics might help solve. Either way, Go is just another computer language and I'm happy to use it often to solve the problems I'm working on. I use a lot of computer languages every week when I'm working on stuff; none of them are perfect, and my solutions to the underlying problems are fragile and constantly evolving as the problems unfold. This is the nature of our business, to constantly do battle with poorly understood problems using imperfect tools in a world where we are fooled into thinking everything is black and white because at the core of our technology everything is a 0 or a 1.

"One Computer Language to bring them all and in the darkness bind them".. LOL


Languages may not be perfect, but language features can be compared, and some features are strictly more powerful than others.

There is scope for legitimate criticism of language design based on the expressive power^1 of their features.

In this case, generics would make Go strictly more expressive: without them, you must write O(n) more code or perform a global refactoring to simulate them.

1. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.4...


You can compare language features this way, but it won't tell you which language is a better fit for a particular problem. JSON is strictly less powerful than JavaScript and it's easy to come up with examples of data that can be better compressed in JavaScript. But for what we use JSON for, not being Turing complete is a feature.

When programming in an intentionally restricted language, the question is whether the limitation in power is put to good use in some other way.


> That said I do worry that the Generics argument seems to be slowly approaching a religious war that will distract people from the other enjoyable aspects of Go

Because generics are important. We're not talking about crazy C++ templates here but a more rigid feature that would still make the Go language more expressive.

Or why expose this interface {} feature and allow people to use it in a statically typed language?

People who chose Go obviously want type checking (along with the CSP that makes Go so awesome), or they would be using something else.


> Or why expose this interface {} feature and allow people to use it in a statically typed language?

`interface{}` isn't anything special, or something akin to `void*`. Interfaces are a core part of the language, and `interface{}` is just the literal form of an "interface with no methods", which happens to match anything, since all types fulfill an empty method set. It's no different than using this `Something`, for example:

    type Something interface {
    }
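A small illustration that any value satisfies the empty method set (describe is a made-up function):

    package main

    import "fmt"

    // describe accepts any value, because every type fulfills the
    // empty method set.
    func describe(v interface{}) string {
        return fmt.Sprintf("%T: %v", v, v)
    }

    func main() {
        fmt.Println(describe(42))      // int: 42
        fmt.Println(describe("hi"))    // string: hi
        fmt.Println(describe([]int{})) // []int: []
    }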


I'm assuming he meant: why would they expose or promote the use of a construct that bypasses type safety in a statically typed language? Although `interface {}` is not some sort of special feature, it's still circumventing the type system.


You don't lose "type safety", as interfaces retain all type information -- you're not casting `void*` pointers (without the unsafe package), and type conversions are limited and well defined.

It however does pass the type checking to run-time instead of compile-time, which is a trade-off that some feel is worth the language simplicity.


> It however does pass the type checking to run-time instead of compile-time

That's absolutely still losing type safety. Reflection is no substitute for type safety.


No, it's still type safe (http://en.wikipedia.org/wiki/Type_safety), memory safe, and free of undefined behavior. There is no way to use a type contained in an interface as the incorrect type (without the unsafe package, but that wouldn't need an interface anyway), or to inadvertently access or modify arbitrary memory.

I also wouldn't consider Go type assertions to be in the same class as reflection. A type assertion is the way you extract and check the underlying type from an interface, and is quite common even when using more complex interfaces. There's also no way to make use of the underlying type from an interface without first asserting it as that type.
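For reference, a sketch of the checked, two-value form of a type assertion, which is the common way to get the concrete type back out:

    package main

    import "fmt"

    func main() {
        var v interface{} = "hello"

        // Two-value form: never panics; ok reports whether v holds an int.
        n, ok := v.(int)
        fmt.Println(n, ok) // 0 false

        // Single-value form succeeds here because v does hold a string
        // (it would panic if it didn't).
        s := v.(string)
        fmt.Println(s) // hello
    }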


Type assertions happen at run-time, and thus use reflection. There is no way around using reflection.


Are generics really that important? Java didn't have generics for 9 or 10 years, C# only got them in version 2. C still doesn't have generics and I never heard anyone complain.

As a former C# developer I would agree that life got easier with the introduction of generics, but you can develop the exact same software with and without generics.


You can say the same thing about every language feature there is. Of course it is possible to develop software without some particular feature; the question is whether the benefits of the feature outweigh its costs. With generics, most of the world has decided to land on the side of wanting them. Even Go has them, just not for users.


Java would be a lot nicer to work with had it had generics from the beginning.


That's one of the big problems with adding major features later, the standard libraries don't get rewritten to fully use the new feature.


That's not Java's major problem. It's that generics were implemented using type erasure to stay source and binary compatible with older libraries.


Depends on what you mean by "important." You noted yourself that "life got easier" once the language you used introduced generics.

Generics can drastically reduce the time required to implement features because less time is spent on re-implementing (nearly) identical algorithms for different data structures and more on simply using them to write something that works.

The cost paid for this is usually in the form of minor efficiency hits because data-specific efficiencies aren't exploited.

Also, consider me complaining about C's lack of generics. It is in fact the primary reason I prefer C++. If C had robust generics with reasonable syntax I'd strongly consider using it instead of C++.


C11 added the _Generic keyword, so someone must have complained. A common complaint is the desire to have, say, cos(x) work on floats, doubles, and long doubles rather than calling respectively cosf(), cos(), cosl().


Huh? C has had the <tgmath.h> header for 15 years, which lets cos(x) do exactly that, and work on complex types besides.


_Generic makes it easy to implement things like tgmath yourself.


C has macros, which allow you to create something close-enough to generics.

And as somebody who developed in Java and C# before generics, those languages were awful before those features were added.


>Are generics really that important?

For the kind of work we do in 2014, yes.


For some of the kinds of work we do in 2014, yes. For others, not really; for still others, not at all.


I've been working on juju (http://jujucharms.com) for 18 months and have not missed generics. I have 16 years of development experience in C++ and C#, so it's not like I don't know what they're good for.


If you read the article, he puts forth a few examples where generics make code re-use possible, where it was impossible before. This not only means less effort, it means fewer bugs.


> C still doesn't have generics and I never heard anyone complain.

People who would complain already know not to bother using C.


To be more fair, I'd say people using C are not people solving abstract problems that can profit from a generic approach.


The big thing people miss is that while the lack of generics might cost them effort worth X, other Go features benefit them Y. The value of Y is pretty large in the case of Go, when it comes to concurrency and language-level simplicity. It's hard to go wrong in Go.

Whoever designed Go seems to have some experience debugging and maintaining large concurrent systems.

When you design a language, you have to be very careful about the things you add, because otherwise you get C++ (or one of those cute let's-add-all-the-features-we-can-think-of scripting languages), for better and for worse. You can't remove features afterwards, so better not to add a feature you can't get right from the beginning.


Nobody is saying Go should rush to implement generics or any other feature.

The issue is more the dismissive attitude towards them.


Citation needed, please. Ian Lance Taylor, a core member of the Go team, has reviewed and replied to many of the generics debates on go-nuts.


> This is the nature of our business, to constantly do battle with poorly understood problems using imperfect tools in a world where we are fooled into thinking everything is black and white because at the core of our technology everything is a 0 or a 1.

Isn't that extremely sad though? We finally have a perfectly precise tool, yet we keep building gooey piles of uncertainty on top of it.

Sure, we're just imperfect humans, but that is what compilers and tooling are for: to point out our mistakes and help us get it right.

I'm hoping that in the next 5-20 years Rust and Haskell (and later Idris) will change everything.


> I'm hoping that in the next 5-20 years Rust and Haskell (and later Idris) will change everything.

Unfortunately, they won't. These problems are not a 'bug' of current languages, but the inherent nature of the problem. At some point, the perfectly precise world of computers has to meet the messy world of user requirements. As a programmer, my job is to translate one into the other, within the constraints applied by whoever is paying the bill.


Well right now the messy world of user requirements meets the even messier world of legacy languages and poorly behaving libraries. I'm not under any illusions that the first part would change; my hope only pertains to the second part.

For example, the behaviour of null/nil references in most typed languages mentioned in the article has nothing to do with the inherent nature of messy user requirements. The same goes for the lack of sum types: in fact they're perfect for modelling messy requirements, much better than simulating them with structs and forgetting a check somewhere (I'm just parroting the article here).

These issues aren't caused by messy requirements but by messy leftovers from legacy languages. That's fine too; nobody expects us to get it right the first time. But that was 40 years ago, and repeating the same mistakes in languages made in 2009 feels really sad.


I half agree. While new programming languages will make solving today's problems easier, there will be new problems that they won't solve, and so in 20 years people will be moaning about the problems the contemporary languages don't solve.

For example, one of the biggest problems with building websites ten years ago was that you had to slice everything into tiny images because of table-based layouts. Now we've got CSS3 with SASS and Compass and that's no longer a problem. But whereas then, 90% of users were using IE6 on a 15 or 17 inch monitor, now my users are on a multitude of browsers and screen sizes.


If your language is versatile enough, has adaptable syntax, and powerful at DSLs, it can adapt to future requirements quite well.

Haskell is great at concurrency and parallelism not because the language was designed around those at all - but because the language is well designed, and focuses at communicating the programmer's intent, rather than implementation details. This intent is more directly translatable to correct concurrent code than an imperative spec which is inherently lower-level.


> my solutions to the underlying problems are fragile ... This is the nature of our business

I think this is a mistake. A lot of business is run this way, though.


This article has some good points but I'm going straight for the first one: I don't miss generics.

Indeed, they are useful. I use them in Java every time I find a good reason to do it. So, I should say in another way: I don't miss generics in Go.

Given my experience [0] with Go over the last few years, I realized that if you miss generics in Go, your code is trying to cope with too much.

I didn't miss them when I developed Mergo [1], a simple library to merge structs. I think we can all agree that if we were to rewrite this in Java (or another language with a similar type system) we would use generics at some point.

[0] https://github.com/imdario?tab=repositories

[1] https://github.com/imdario/mergo


Did you miss them when you were working on larger codebases? The library you mentioned clocks in at 500 LOC, and while the LOC metric is pretty inaccurate, it takes a lot more time to really miss language features.


No, I didn't. I have around 2K public LOC across several projects (I can't check my private projects right now; I guess I've typed more than that), but I would sooner trust NateDad's experience.

I have two years of sustained Go usage (since end of 2012) and never missed them.


I've been working on Juju for 18 months, which is over 250k LOC and have not missed generics at all.


"I realized that if you miss generics in Go, your code is trying to cope with too much."

Was wondering what you mean by that (not disagreeing just interested)?


I meant that if you miss generics, your code is probably too complex.


Sorry yeah I get that, but in what way?


The first point "no generics, no code reuse" is the best one. Go really needs generics/templates/macros or _something_. It doesn't even have subclassing, although you can kinda extend a type if you only use its public interface. I've been tempted to see if I could reasonably use the "text/template" package and feed that back into the Go compiler to achieve this.
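
(For the curious, a minimal sketch of that temptation; the template text and the list of instantiations below are invented for illustration:)

    // gen.go - a hypothetical "poor man's generics" generator: execute a
    // text/template once per element type and feed the printed Go source
    // back into the build.
    package main

    import (
        "os"
        "text/template"
    )

    const minTmpl = `
    func Min{{.Name}}(xs []{{.T}}) {{.T}} {
        m := xs[0]
        for _, x := range xs[1:] {
            if x < m {
                m = x
            }
        }
        return m
    }
    `

    func main() {
        t := template.Must(template.New("min").Parse(minTmpl))
        for _, inst := range []struct{ Name, T string }{
            {"Int", "int"}, {"Float64", "float64"}, {"String", "string"},
        } {
            if err := t.Execute(os.Stdout, inst); err != nil {
                panic(err)
            }
        }
    }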

The rest is mostly a Haskell fanboy whining that Go isn't Haskell.


I am starting to think they'll be recommending we use `go generate` to do handwritten templates before too long. It saves them the hassle of building generics into the type system and they clearly want it to be part of the build cycle.

http://blog.golang.org/generate
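
If so, the workflow would presumably look something like this (the //go:generate directive is real; `mingen` is a made-up tool name):

    //go:generate mingen -types=int,float64,string -o min_generated.go
    package container

    // Running `go generate ./...` re-runs mingen and rewrites
    // min_generated.go, which then gets checked into the repository.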


> Just keep in mind that it is for package authors, not clients...

Generics by author-template sounds exactly like what the Go authors would want: source code is still shared (rather than binaries), no Make/Include/Macros for the build, and no hassle for the _user_ of a package.

> Also, if the containing package is intended for import by go get, once the file is generated (and tested!) it must be checked into the source code repository to be available to clients.

Moving external processes into the source code is characteristic of how they've developed Go.

A cgo/SWIG-style generator built into `go` would be a great next step. It wouldn't have to be perfect, just deal with includes and defines separately from the Go build. Even if the output needed to be hand-edited, it would be a great starting point.


Go has a very nice standard library. The whole thing is reusable code.


Not every problem can be solved with the standard library. At some point you need a 3rd party library, which may require your modification to work with your data types.


The point is that everyone assumes you need generics to reuse code. And that's simply not true. For some very specific problems that can be true, but in many many cases it's not. My point is not that you never need anything outside the standard library, just that the standard library does a hell of a lot and it is by definition reusable.


Go's standard library uses generics a lot. In fact, Go's built-in datatypes are the only parts of the language that are allowed to use generics.
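
To make that concrete (a sketch, as of Go 1.4): the built-in containers are type-parameterized in a way no user-defined type can be:

    package main

    func main() {
        m := make(map[string]int) // maps are generic over key and value types
        s := make([]float64, 0)   // slices are generic over the element type
        c := make(chan bool, 1)   // channels are generic over the message type

        m["answer"] = 42
        s = append(s, 3.14) // append works on a slice of any element type
        c <- true

        println(m["answer"], len(s), <-c)
    }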


Built ins are not the same as the standard library. The standard library consists of packages written in normal go for things like networking, web servers, JSON, regular expressions, etc.


Best joke in the industry:

"Code reuse!"

Cracks me up every time. ;-)

We fall for those words over and over again. But in reality, sadly, reuse almost never happens. Maybe one day, maybe even in the next project...


>We fall for those words over and over again. But in reality, sadly, reuse almost never happens. Maybe one day, maybe even in the next project...

I don't know what industry you work in. In the IT industry, code reuse very much happens all the time.

It might not be reusing code from your previous project to build the next one (though people do that ALL the TIME), but it's very much using common libs across thousands of projects.

The kind of code reuse he talks about has been working perfectly fine for ages. E.g. a generic sort, filter, etc. function, instead of having to hand-roll the basics every time.


Like C's libc, or C++'s Boost, or Python's 'requests', or...

Lots of code reuse there that depends on generics.


Yes, libraries get used. That's their point. My point was, that very little of application code ever gets reused, although we somehow always think it will.


And most people are asking for generics to write libraries which don't rely on unboxing interfaces.

Even in the process of writing a single program, there are a number of times where you reuse certain bits of logic, and having generics makes it easier to refactor those into a single function, instead of duplicating code across multiple types (or resorting to overly generic types).
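
A deliberately trivial sketch of the duplication in question:

    // The same three lines of logic, restated once per element type; the
    // alternative is interface{} plus type assertions at every call site.
    func sumInts(xs []int) int {
        total := 0
        for _, x := range xs {
            total += x
        }
        return total
    }

    func sumFloat64s(xs []float64) float64 {
        total := 0.0
        for _, x := range xs {
            total += x
        }
        return total
    }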


That's why you write or update libraries before writing an application, so they can be reused and your application-specific code is minimized as much as possible.


Well, I want Go to have refcounting instead of the current GC, so my programs could guarantee latency. A single-threaded runtime, without all the locks, slow channels, races and so on, because there is no point in so much overhead and complexity for the majority of programs. More consistency couldn't hurt, so I wouldn't have to assign an anonymous function to a variable just to return it. Fast regular expressions, compiled by the language compiler into native code, could make matching, parsing and validating more consistent, instead of writing lots of loops everywhere. Compiler warnings instead of errors on unused variables, imports, etc., to allow faster prototyping. A better testing culture, with meaningful test names and line numbers for the tests that actually failed - the table testing suggested by the Golang team doesn't even support that with the standard library alone; you have to use your own runtime.Caller(1) wrapper.
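
The wrapper being alluded to is something like this (a sketch; the helper name is made up):

    package mypkg

    import (
        "fmt"
        "path/filepath"
        "runtime"
        "testing"
    )

    // failHere reports a failure tagged with the caller's file:line, so a
    // table-driven test points at the failing row rather than at the helper.
    func failHere(t *testing.T, format string, args ...interface{}) {
        _, file, line, _ := runtime.Caller(1)
        t.Errorf("%s:%d: %s", filepath.Base(file), line, fmt.Sprintf(format, args...))
    }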

But generics? "Safer" type system? No, that's unnecessary complexity. As others pointed out, we already have Haskell and Rust for all those things.


You don't like locks, but you want refcounting, which requires a large number of atomic ops for acquire/release semantics and thus scales like crap? Ah, you really mean single-threaded, as in no other threads at all? Well... OK. But that'd make Golang a lot less useful.

For compiled regular expressions: just use the PCRE library. I bet there's already some binding that does it for you.

Maybe you should take a look at Lua and especially LuaJIT [1]. You might like it. Lua(JIT) is single-threaded, but provides coroutines [2] as a language construct. Performance is about the same as Golang's, sometimes faster, sometimes a bit slower. Lua has GC, but you can control latency through controlled GC invocations. It can be made pretty predictable; a lot of current AAA games use Lua internally.

LuaJIT's FFI [3] is excellent, calling native C libraries is a breeze, very easy and requires no bridge libraries or code.

[1]: http://luajit.org/

[2]: http://www.lua.org/pil/9.1.html

[3]: http://luajit.org/ext_ffi.html


Mmmm. Threads + mutable data. What a disaster this industry has unleashed upon itself. Blech.

I think that threads are easily up there with null/nil as one of the worst CS mistakes. Communicating via shared variables at the application program level, vs an IPC function / channel, seems like a noose that should be designed away from reach.


Refcounting and a single threaded runtime, instead of GC and a multithreaded one.

By compiled regexps, I meant replacing things like bytes.Equal(foo, []byte("qwe")) with things like foo.m(`^qwe$`) or even foo.m/^qwe$/ that have the same performance. All the loops that scan through slices could benefit from it; golang has a lot of them. Plus more overall matching/scanning consistency, which should lead to fewer mistakes.
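
For comparison, the closest the standard library gets today - compiled once at package init, but into a form the regexp engine interprets rather than into native code (sketch):

    package main

    import "regexp"

    // Compiled once, at package init; matching still runs on the regexp
    // package's matching engine, not as native machine code.
    var qwe = regexp.MustCompile(`^qwe$`)

    func main() {
        println(qwe.Match([]byte("qwe"))) // true
    }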


Well, if you're happy to run them only over byte arrays, then you should be able to use PCRE and satisfy your performance requirements. Your regular expressions will get compiled into native code at runtime.

http://sljit.sourceforge.net/pcre.html


Go would need to be completely rearchitected in order to move to a single-threaded runtime.


As far as I know, calling runtime.GOMAXPROCS(1) should be enough to run single-threaded.


No, the runtime will still create new threads whenever you call a C function or syscall.


I've come to really enjoy the error on unused variable/import.

It's so easy to add/delete imports with GoSublime plugin.

Every time I write Java now, I seem to end up with loads of unused imports as I prototype, and it just feels messy.

Once you get used to it, it's kind of nice.


Unused import cleanup has been a standard part of all the major Java IDEs for quite some time. Further, checkstyle and other static analysis tools can easily replicate the check.


The majority of programs do not really need to guarantee latency.

That being said, it is well known that Golang's present GC performs poorly. It fails to free memory in certain cases and has pretty high overhead. There is work being done on making the GC fully concurrent, but that is probably still 2-3 major versions away.


The concurrent GC is planned for version 1.5 and is only 6 months away. There is also work being done on a compacting GC, but I don't know when that is supposed to land.

Actually, the Go team is making very good progress on all the prerequisites to real-time garbage collection. Who knows if they will manage to get all the way there, but I know there's been some discussion about it on the mailing list.

If you look at how fast the GC has improved over the years, it's kind of ridiculous that anyone bothered to criticize it at all - as if it were not the temporary solution the devs said it was, and was in fact going to stay shitty forever.


Where does it fail to free memory? As of 1.4 the GC is fully precise. Concurrent GC is in the works, and is scheduled to be in 1.5 (August of this year).


For one, it fails to handle memory fragmentation. The effect is that your application seems to grow and grow. It doesn't often cause real problems, but compaction is something JVMs learned a long time ago. There are also, afaik, a few corner cases with stacks; I've seen a few discussions about those on Stack Overflow lately.


I'm not sure if this is sarcasm or not, but I will assume not. If they made the changes you suggest, golang would become useless to me. That isn't to say what you want isn't valid, only that it gets to the heart of all these arguments: every design decision comes with trade-offs, and you can't please everyone.


No, it's not sarcasm, but I didn't mean they need to replace those things; the runtime and GC could be a choice, or it could be a completely different golang compiler. I've been itching to write one myself.


Go 1.5 is going to have real-time guarantees on GC.


"real-time guarantees" is not a particularly useful requirement as you can just set the minimum time very high to accomplish it. Real-time and low latency on the other hand would be a game changer, and I have pretty serious doubts about their ability to accomplish this.



I consider 10ms pauses to fall into the "set the minimum time very high" branch of my statement. That is, if you can handle pauses of that magnitude, there are already lots of GC options for you, and golang is not adding much (that said, 10 ms pause guarantees are much better than the current ones, so more power to them).

Even 1ms pause ceilings drive people to non-GC options, so I think the "game changer" number is much lower than that.


Think of a computer game. 60 FPS. So 16 ms per frame. Assume you're running comfortably within 14 ms per frame, so you get nice and smooth 60 FPS. A 10 ms GC pause here means you'll definitely miss at least one frame, and there's a good chance you'll miss more, because the simulation code (or anything else that needs to run whether or not a frame was on time) will need to catch up.

For interactive desktop applications, the pause should be adjustable or at most a few milliseconds. Say, maximum 5 ms.

For hardware devices... well, you just don't use GC there in the first place. A microsecond is a pretty long time in that area.


If the author reads this, I think he's confusing "quiet" and "quite".


> For those who are bothered about the exponential algorithmic complexity of nub

There is nothing exponential about nub. Given only equality comparisons, detecting and eliminating duplicate elements requires quadratic running time.
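
In Go terms (a sketch): with nothing but ==, dedup is two nested loops, O(n^2); given hashable elements, a map brings it down to O(n):

    // nub keeps the first occurrence of each element. With only equality
    // available, each element is compared against everything already
    // kept: quadratic, not exponential.
    func nub(xs []int) []int {
        var out []int
        for _, x := range xs {
            dup := false
            for _, y := range out {
                if x == y {
                    dup = true
                    break
                }
            }
            if !dup {
                out = append(out, x)
            }
        }
        return out
    }

    // With hashing available, the same job is linear.
    func nubFast(xs []int) []int {
        seen := make(map[int]bool)
        var out []int
        for _, x := range xs {
            if !seen[x] {
                seen[x] = true
                out = append(out, x)
            }
        }
        return out
    }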


You are right! Others noted that mistake as well.


Wanted to say, you can potentially do better than this in go for arbitrary JSON:

    type JSON map[string]interface{}

Go ahead and use the json.RawMessage type instead:

http://golang.org/pkg/encoding/json/#RawMessage

Not even going to wade into the lack of generics war, still very happy without them and going on a full year of Go usage.


Yeah, I almost never use map[string]interface{} when unmarshalling JSON. Most JSON has a static structure (and I'd argue that any JSON that doesn't is misbehaving). If the structure is static, you can simply use a struct that represents it [0].

If it doesn't, then you just use json.RawMessage to defer unmarshalling, but you can still dispatch to a finite number of structs that represent the universe of possible responses.

[0] And no need to write it all out by hand - if you have a single example JSON response, you can generate it automatically: https://github.com/ChimeraCoder/gojson
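
A sketch of that deferred-decoding pattern (the envelope shape and all names here are invented):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Decode the static outer shape first; leave the variable part as raw
    // bytes until the "kind" field says which concrete struct it is.
    type Envelope struct {
        Kind string          `json:"kind"`
        Data json.RawMessage `json:"data"`
    }

    type User struct {
        Name string `json:"name"`
    }

    func main() {
        raw := []byte(`{"kind":"user","data":{"name":"ada"}}`)
        var env Envelope
        if err := json.Unmarshal(raw, &env); err != nil {
            panic(err)
        }
        switch env.Kind {
        case "user":
            var u User
            if err := json.Unmarshal(env.Data, &u); err != nil {
                panic(err)
            }
            fmt.Println(u.Name) // prints "ada"
        }
    }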


Every time I read an article criticizing Go I end up appreciating it even more.

Maybe it's because I've never felt the need to use generics, and in all these articles the examples they give are functions of a few lines that would be quicker to write 2-3 times for different types than to remember generic syntax.

Maybe it's because they exalt one-line functional functions over a nice, simple, easy-to-read for loop, when the former are so much more difficult to read, to figure out what they really do and what their performance cost is, to debug...

Maybe it's because of the bogus examples they give like criticizing

> file, _ = os.Open("file.txt")

> file.Chmod(777)

for not explicitly handling a possible error, when it's just that the example is ill-written and the proper code is

> file, err = os.Open("file.txt")

> if err != nil {

> ... handle error ...

> }

And here I stopped reading the article; it's always the same arguments over and over: more elegant and complex code vs. the un(cool) but, oh my, so much simpler Go code.

What I find really amusing, though, is how smug people are in their writing, pointing out the "obvious errors" (billion-dollar mistakes!) that the Go authors made and their "ignorance" of proven modern programming language constructs they could have implemented in Go.

There are two possibilities here, pick your preferred one.

1) Pike, Thompson & co. made obvious errors in designing Go because of their ignorance of programming languages and/or ineptitude

2) These bloggers claiming obvious errors in Go design don't really fully understand the trade-offs involved in what they ask for and ignore the fact that the Go authors have carefully thought about them and optimized accordingly

I will go back to programming in Go; I'd take writing for loops all the time over writing a one-liner in Haskell and then agonizing over using the lazy or strict version of it :)


I think there is a third option that you are ignoring. Pike, Thompson & Co. are designing for a different problem set than the blog authors.

Golang seems to shine at very simple, concurrent tasks that can be passed from one set of developers to another regardless of sophistication levels of those teams. It seems to fail pretty miserably at making a single sophisticated developer vastly more productive. It's what Java would have been if they'd thrown away the write once/run anywhere goal and never gone down the J2EE box canyon.

There is nothing wrong with either goal btw, they just are in some ways opposites.


> 1) Pike, Thompson & co. made obvious errors in designing Go because of their ignorance of programming languages and/or ineptitude

Go was designed by a team at Google, where there is a policy that the only languages you're allowed to code in are C++, Java, and Python. I think the resulting language reflects that; it has many of the strengths of C++, Java and Python, and might even be a better language than all three. But it is also missing really obvious improvements that could be taken from e.g. the ML language family - which shouldn't be surprising, given that no-one at Google is allowed to use those languages!


> Maybe it's because I've never felt the need to use generics and in all these articles the examples they give are functions of a few lines that would be quicker to write 2-3 times for different types than remembering generic syntax.

Uhmm, what? That seems entirely backwards: ignoring an expressive concept in favour of doing it the hard way. And "remembering the generic syntax" is no different from remembering any other feature's syntax, and I don't imagine you have issues remembering those, do you?


> Maybe it's because they exalt one line functional functions over a nice, simple, easy to read FOR loop when the former are so difficult to read, figure out what they really do, what is the performance cost, debug...

Why do you even need `for` loops? Comparison + labels + jumps are good enough. You want to know the reason? It's easier to understand the intent of the programmer when there is a loop. `for i = 1 to n do ... done` means that the programmer wants the body to be executed n times. `while not (list.is_empty()) do ... done` means the body should be executed as long as the list isn't empty. If those are acceptable, why not also abstract away applying an operation to a collection, or removing elements from a collection, etc.? Basically it boils down to: I like Go, and everything it has needs to be there, and everything it doesn't have is unnecessary complexity.
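
You can write that abstraction in Go for one concrete type, which is exactly where the itch starts (sketch):

    // mapStrings applies f to every element. The intent - "transform the
    // whole slice" - is carried by the name instead of buried in loop
    // mechanics; without generics, though, it must be restated per type.
    func mapStrings(xs []string, f func(string) string) []string {
        out := make([]string, len(xs))
        for i, x := range xs {
            out[i] = f(x)
        }
        return out
    }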


OTOH, I think this is part of the point:

  > file, err = os.Open("file.txt")
  > if err != nil {
  > ... handle error ...
A type system with generics can make an error a different type and produce a compile error if you don't handle that case, without making the code more complex.


One thing that is perhaps counterintuitive is how many for loops you write compared to other languages (compiled/statically typed included). Once you let go of your previous expectations, Go gets a lot nicer to use.

The number of for loops you write also makes it obvious when you start doing O(n^2) operations in your methods. The same can be said of variable declarations and error checks in Go. They are annoying at first, but then, just by reading your code, you understand the places where an error can occur or where you decided to ignore errors (by using _).

It's a different approach and it's refreshing.


> ... error verifications in Go ... you understand by reading your code the places where error can occur or where you decided to ignore errors (By using _)

This compiles:

    package main

    import "errors"

    func main() {
        f()
        println("hello erroneous world")
    }

    func f() error { return errors.New("boo") }


Which you can catch with a linter like https://github.com/kisielk/errcheck. Errors in Go are just values. There's nothing "more wrong" about ignoring a return value that is an error than one that is a file handle, for example. And in cases where a function returns an error and a value (where the error generally indicates the validity of the value), you'll get a compile error if you don't check the error. That's pretty good for basically zero overhead.


> And in cases where a function returns an error and a value (where the error generally indicates the validity of the value), you'll get a compile error if you don't check the error.

This compiles:

    package main
    import "errors"
    func main() {
        x, err := f()
        if err != nil {
            return
        }
        y, err := g()
        h(x, y)
    }
    func f() (x int, err error) { return 1000, nil }
    func g() (y int, err error) { err = errors.New("boo"); return }
    func h(x, y int) { println(x/y) }
(I do write Go most days and rather like it, but apologists for its weak parts are legion)


Pretty sure errcheck will catch that too. The thing that bugs me is the people that make up problems that are not at all problems in real day to day coding.


This looks more like a bug. The second err is a valid redeclaration, and so should be checked for usage on its own. Though the redeclaration through multiple assignment is itself a bit of a wart that the programmer has to be wary of, so it may not be a big deal.

In any case, the go compiler is very helpful in most cases, and better tooling can improve things further.


FYI, the second use is just assignment; that's why it's not caught by the unused variable rule.


Ah right. It only shadows if it's a new scope.


Sure, you can fail to capture any return value (not just errors) in lots of languages. What Go has that, say, C++ or Java doesn't is the "black-hole variable" marked with the underscore _, so it's very much a stylistic choice to insert it in Go where errors would be returned. Applied consistently, you can gain a lot of readability. But it is very much a stylistic choice.


IMO, the number of imperative-style for loops I have to write in Java (pre-8) is one of the most frustrating things in the whole language, especially after exposure to Scala (and later, to Java 8).


It's not really a different approach. You had to write a lot of vanilla loops in C, too.


Yeah, it seems like a lot of the criticisms of go boil down to "I don't like writing simple loops".


Rather, I don't like to copy-paste and slightly modify simple loops over and over again. Abstraction is a basic tool of programming; where has it gone?


They're simple loops. You don't need to copy and paste, you just write them out because it's just ridiculously simple logic. Just like you don't copy and paste if statements.


All loops encode simple logic? I don't think that's true.

Either way, a very good reason for not using loops is to make code more readable. If I see a loop, I have to run it inside my head to figure out what the intent is, and if there are multiple things going on, that can take time. If I see a map I think "right, this is transforming every element in this list in this way"; if I see a filter, I think "right, this is removing elements in this list that don't satisfy this property"; if I see a groupBy, I think "right, this is grouping elements in this list by this property"; etc.

Code isn't only more succinct, it's more human friendly.


Expanding a map to a for loop is very simple. Yes it removes a shortcut available when writing code, but when reading code I can consume that simple chunk of logic in one mental bite just as easily as I can a call to map.


Sure... but what if a loop is effectively doing a map, a filter, and a bunch of other operations all at once? It's a lot quicker to figure out what's going on if it's been written with combinators (once you're familiar with them) than if it's a vanilla loop.
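
For instance (a sketch), one loop quietly doing a filter, a map, and a fold at once:

    // The reader has to simulate the loop to recover the three intents
    // that a filter(even) -> map(square) -> sum chain would state outright.
    func sumOfEvenSquares(xs []int) int {
        total := 0
        for _, x := range xs {
            if x%2 == 0 {
                total += x * x
            }
        }
        return total
    }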


It can be a lot less performant to chain several combinators.


If we assume the operations we're talking about take time linear in the length of the list, you've just gone from a * n time to b * n time, where b > a. Both of these are still O(n) in Big O notation, which drops constants, because constants, unless they're very large or n is extremely large, tend to have relatively little effect on the running time of an algorithm.

Choosing to write more verbose, difficult-to-decipher, difficult-to-maintain code, under the claim that it will perform better, is not a good thing: "premature optimisation is the root of all evil".

In practice, if this becomes an issue (which you find out through benchmarking once you know there is a perf issue), most modern languages offer an easy way to swap your data structure for a lazily evaluated one, which then performs the operations in one pass. Languages like Haskell or Clojure are lazy to begin with, so they do this by default.


My comment was carefully worded in order to denote that it is not true in all cases.

Your Big O analysis is correct. However, in a real-world case the list could be an iterator over a log file on disk. Then you really don't want to use chained combinators, repeatedly returning to the disk and iterating over gigabytes of data on a spinning platter.

And yeah, you could benchmark to figure that out, or you could use the right tool for the job in the first place.


Or you could use a library explicitly designed with these considerations in mind, that offers the same powerful, familiar combinators, with resource safety and speed. e.g.,

Machines in Haskell: https://hackage.haskell.org/package/machines

Scalaz Stream in Scala: https://github.com/scalaz/scalaz-stream

I've no doubt an equivalent exists for Clojure, too, although I'm not familiar enough with the language to point you in the right direction.

One of the most amazing things about writing the IO parts of your program using libraries like these is how easy they become to test. You don't have to do any dependency injection nonsense, as your combinators work the same regardless of whether they're actually connected to a network socket, a file, or whether they're just consuming a data structure you've constructed in your program. So writing unit tests is basically the same as testing any pure function - you just feed a fixture in and test that what comes out is what you expect.

I found this really useful when writing a football goal push notifications service for a newspaper I work for. I realised that what the service was essentially doing was consuming an event stream from one service, converting it into a different kind of stream, and then sinking it into our service that actually sent the notification to APNS & Google. The program and its tests ended up being much more succinct than what I would normally write, and the whole thing was fun, and took hardly any time.

I would say that is the right tool for the job.


In many cases, YAGNI.

This started as a conversation about for loops vs. chained combinators and wound up with specialized libraries for IO. You're not wrong, but that's a lot more work than a for loop to parse a log file efficiently.


Unless your compiler (GHC, for example) uses fusion to turn it into a single loop anyway.


By that logic why bother with for? You can implement any control structure with goto, after all.

The value of using map is precisely that it can't do everything a for loop can. It can only do one simple thing, which makes it easy for the reader to understand what it's doing.


> You can implement any control structure with goto, after all.

Because a for loop is the most appropriate construct in the given language. That's different from comparing hypothetical constructs that a language doesn't have.

Go has higher-order functions, and you can write all the map funcs you want; it just doesn't have the ability to create generic combinators with parametric polymorphism. That's a trade-off I accept with Go to get things done. Like a lot of people, I find Go hits a sweet spot of simplicity, productivity, and performance; simply put, the benefits outweigh the drawbacks. Whatever, it's a programming language; a tool.

It's like complaining that a functional language doesn't have an easy way to write mutating procedural code. You're always working within the confines of some language, and unless you're directly working on that language's implementation where you can change things, I'd rather be working with the language than fighting it.


But we're talking about language design. Of course it's harder to use the feature the language doesn't have than the feature the language does have. The whole point is that the language should have that feature, because if it did have that feature then code using that feature would be better than code using the current features.

> It's like complaining that a functional language doesn't have an easy way to write mutating procedural code.

I do complain about such languages. That's why I use Scala.


Ridiculously simple and wordy. Exactly the stuff the computer should do for me.


I think editors solve that problem pretty well. I usually only have to type something like "fo"<TAB> part of my collection name and another <TAB> or 2 before I get down to business, no matter what language I'm using.


You only write source code once. You read and edit it many times, and your colleagues, users of your library, etc, read it many more times.

The longer the piece of code one has to read, the easier it is to read it incorrectly, miss a bug, or introduce a bug.


This is just not true. Many simple lines are easier to read than fewer more complex lines. It's hard to miss a bug in

    x = x + 1
    foo[x]
But easier to miss a bug in

    foo[x++]


As an industry we have no objective measures for what makes code "better", so in many ways these debates are pointless. That said, I'd add 3 points to this particular thread:

1) You cherry-picked one of the most confusing uses of operators (and sources of bugs) that we have. If it weren't so ingrained in our education/history (and so useful for old-fashioned looping semantics), I think we would all have moved on from the ++ syntax a long time ago.

2) As flawed as it is, it is a well-studied phenomenon that LoC is nearly the only measurable statistic we have that is predictive of error rates (that is, higher-LoC solutions tend to have higher bug counts).

3) The bigger point about looping constructs isn't about LoC in any case. It is about decoupling two different areas of code responsibility: one is the mechanics of iteration, and the other is what to do on each iteration.

At the end of the day most looping constructs end up being sugar around while loops anyway, so the reductionist argument is that all of it is unnecessary complication - just do everything in a while loop.


> As an industry we have no objective measures for what makes code "better", so in many ways these debates are pointless.

I don't know why people keep saying this. We write code to solve problems. The best code solves the most problems in the most correct and efficient ways.

Regardless of your measure of efficiency, if someone cannot quickly tell if your code is correct, does what is intended, and does or does not apply to their problem, then the code isn't good.

That's your measure: Can the people who want to solve the problem you are trying to solve efficiently identify, evaluate, and employ your attempted solution?


Nothing in your measure can actually be "measured". That is, given 2 different solutions to the same problem, how do you determine which is better?

The reason people keep saying that we don't have any measures that do this is that it is a bafflingly tricky problem, one that has been studied for decades with hardly any forward progress and that lies at the heart of most of the big issues in our industry.


What makes you think it can't be measured? Just because we don't have the infrastructure to do it doesn't mean it can't be done.

>The reason people keep saying that we don't have any measures that do this, is that it is a bafflingly tricky problem that has been studied for decades with hardly any forward progress

And what if I had a solution? How does this comment help anything? What do you think the difference is between unsolved and unsolvable, and how does your reaction not foolishly apply to both? How would you recognize progress in the context of my post?

Here's a Haskell factorial function:

    -- Basic factorial function.
    fac :: Integer -> Integer
    fac n = product [1..n]
Here's an attempt at a factorial function in another language:

    .r $15265f38a [] (44) ; dup n + offset;
    _
Which is better code? Do they both compute a factorial? Do they both terminate? How did you determine this without using any 'measurable' data?


>I don't know why people keep saying this.

What I'm getting at is that determining which of 2 pieces of code is better is one of the biggest open questions in the computer science/software industry. Further, it has been widely studied for a long time, across a wide variety of measurements and contexts, yet remains an open question.

If you have a measure that can be systematically reproduced by anyone for any 2 pieces of code that purport to solve the same problem, that would be a huge breakthrough worthy of wide publication and probable riches.

If you can make that machinable (even given caveats around provably impossible things like the halting problem) you could literally transform the entire industry.


This is a very complex problem on a very volatile dataset. It would be like asking a physicist to predict the outcome of a foot race by calculating a complete quantum description of all the runners.

If that is your standard, then it is highly impractical, but not theoretically meaningless. However, "determining which of 2 pieces of code is better" is not such a problem. It is one that can be solved by heuristics and approximation, just the same as predicting the outcome of a race can.

So while there is as yet no perfect algorithm, I would strongly disagree that there is _no_ algorithm or that there is no way to compare different algorithms meaningfully. We have measurements, and just because they are not perfect doesn't mean they cannot be verifiably correct within a useful margin of error.

We can reliably tell which of two pieces of code is better. We cannot do it with extreme precision, and we cannot do it quickly enough to make predictions, but this does not mean that the concept of "better code" is invalid or not understood in a productive sense. It certainly doesn't mean you can't make informal arguments from it.


> They're simple loops.

Unless you need a do/while loop. Then you're SOL.


maps and filters are things that have existed for a long time, and are extremely commonplace. They are also very representative of common operations in programming.

Not having generics means we can't properly build these ourselves, and Go doesn't offer list comprehensions, so instead we're forced to write for loops.


Yes, loops, just like we learned in CS101.


Loops are basically just compares + labels + jumps, yet we abstracted away from those, because we saw that it was common to want to execute a block of code a given number of times (for loop) or as long as a condition held (while and do/while loops). In recent years, more languages (C++, Java) have added a construct for doing an operation on every element of a collection (for-each). Why then can't we have more abstractions of patterns programmers write over and over again? It makes a lot of sense to have an operation that applies a function to every element of a collection (map) or removes elements of a collection based on a predicate function (filter). Saying that we can write those with loops is the same as saying we can write loops with jumps: true, but having a specialized construct makes the intent of the programmer a lot clearer.


But writing a loop takes up at least 3 lines in my editor!


  I used to play with a small plastic hammer when I was in 
  kindergarten thirty years ago!
  
  I will be very irritated if people laugh at me for trying 
  to build my new house with this hammer now.

                                               -- Golangers


Yeah, it seems like a lot of the criticisms of go boil down to "I don't like doing everything with assembler".


While the author has a lot of valid points, he forgets the goal of Go. Go was designed as a simple language that is fast but has power similar to existing dynamic languages such as Python.

The tradeoffs in this design are to forgo tuple types and keep using structs, to forgo algebraic types in favour of structs, and to allow nil while trying to prevent common null errors, etc.

Haskell is, in theory, a much better language. But Golang was designed to be practical and simple rather than mathematically sound and theoretically better.


I dislike it when people say languages are either 'practical' or 'well designed'. As if a language is only useful if it's not built on sound mathematical ideas ...

Generics are not a particularly new idea, and nobody who uses a language that implements them yearns for the days when they weren't around.

See Paul's post on the idea that 'worse is better': http://pchiusano.github.io/2014-10-13/worseisworse.html


The point is that you reach an area where the mathematical ideas don't map perfectly to the underlying functionality you're providing.

You can make your language "unsound" by violating those mathematical principles, or you can make something like the I/O monad. It is indeed "soundness" vs "practicality".

EDIT: Oh, and that link seems to have misunderstood the 'worse is better' philosophy -- C++ isn't the champion of 'worse is better', C is.


Paul explains fairly well why he thinks C++ is a good example of "worse is better", including quotes from the language designer to that effect.

I think when you get to more complicated (or at least alien) abstractions like the I/O Monad, it's not about whether it's practical to build software with it. It's perfectly practical - I know people working at big investment banks writing large scale software in Haskell, and they don't have any issue with it. What it comes down to is whether the people you work with will understand or want to invest the amount of time required to learn those abstractions. If you're working with very talented programmers, that might be yes. With the majority of programmers it is probably no.

But we're not talking about the I/O Monad. We're talking about generics, which are not difficult to understand, and that Go lacks them is a shame, as it means that whole layers of abstraction are not available to the programmer, so they end up having to spend more time writing boilerplate.


So he invested time in building up his straw-man before tearing it down?

The problem with being in such a rush to confirm your own worldview is that you don't learn anything from anyone else's. The 'worse is better' philosophy is about valuing simplicity over completeness, specifically in the context of UNIX/C vs Lisp machines back in the 80s. How do you think most of those people feel about, say, the STL?


Not that it particularly matters whether we're talking about C++, as it's rather tangential, the original essay by Richard P. Gabriel that coined the term "Worse is Better" talks about C++ (http://www.jwz.org/doc/worse-is-better.html):

  The good news is that in 1995 we will have a good operating system and programming language; the bad news is that they will be Unix and C++.
In his later essay on the same topic (http://dreamsongs.com/Files/IsWorseReallyBetter.pdf) he again talks about C++:

   In the computer world, the example of interest to JOOP readers is C++. Many would concede that languages like Smalltalk, Eiffel, and CLOS are vastly "better" in some sense than C++, but, because of its worse-is-better characteristics, the fortunes of object-oriented programming probably lie with C++.
Simplicity is more often simplicity of implementation than interface. Anyone who's worked with the C standard library knows that it's anything but simple. There's a lot of incidental complexity that is due to design inconsistencies. The complex parts of a language like Haskell are the bits where the ideas themselves are complicated, but the power you get for overcoming that initial learning curve is very great.


"Worse is better" doesn't preclude good design. Something like STL is fairly straightforward to implement, so it is not too hard to get it right.

C++ has a problem with bloat and 'tacked on' features, and that is clearly 'worse is better' design. Backwards compatibility is a huge indicator of this kind of thinking. Partial backwards compatibility is an even more egregious example.


> Go was designed as a simple language, that is fast, but has similar power to existing dynamic languages such as python.

I'd like to see a citation for this, as I've never seen this as a stated goal of golang. In fact, if it were the stated goal, that is one of the most damning arguments against golang, as it very clearly fails at this task.

I've been under the assumption that golang was designed to make writing/deploying/managing simple web services easy across a wide variety of machines and teams. While I'm still reserving judgement on whether it accomplishes this goal, I'd say it is much more likely to accomplish that than to have power similar to Python's.


http://golang.org/doc/:

"Go compiles quickly to machine code yet has the convenience of garbage collection and the power of run-time reflection. It's a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language."

That comes reasonably close.


For a Python-like core language with an expressive type system that is simple, clean and immediately absorbable, you're really looking for Nim, not Go.

The following produces a single binary with native code generation via compilation to C, no VM:

    import rdstdin, sequtils, strutils

    let
      time24 = readLineFromStdin("Enter a 24-hour time: ").split(':').map(parseInt)
      hours24 = time24[0]
      minutes24 = time24[1]
      flights: array[8, tuple[since: int,
                              depart: string,
                              arrive: string]] = [(480, "8:00 a.m.", "10:16 a.m."),
                                                  (583, "9:43 a.m.", "11:52 a.m."),
                                                  (679, "11:19 a.m.", "1:31 p.m."),
                                                  (767, "12:47 p.m.", "3:00 p.m."),
                                                  (840, "2:00 p.m.", "4:08 p.m."),
                                                  (945, "3:45 p.m.", "5:55 p.m."),
                                                  (1140, "7:00 p.m.", "9:20 p.m."),
                                                  (1305, "9:45 p.m.", "11:58 p.m.")]

    proc minutesSinceMidnight(hours: int = hours24, minutes: int = minutes24): int =
      hours * 60 + minutes

    proc cmpFlights(m = minutesSinceMidnight()): seq[int] =
      result = newSeq[int](flights.len)
      for i in 0 .. <flights.len:
        result[i] = abs(m - flights[i].since)

    proc getClosest(): int =
      for k,v in cmpFlights():
        if v == cmpFlights().min: return k

    echo "Closest departure time is ", flights[getClosest()].depart,
      ", arriving at ", flights[getClosest()].arrive
Statistics (on an x86_64 Intel Core2Quad Q9300):

    Lang    Time [ms]  Memory [KB]  Compile Time [ms]  Compressed Code [B]
    Nim          1400         1460                893                  486
    C++          1478         2717                774                  728
    D            1518         2388               1614                  669
    Rust         1623         2632               6735                  934
    Java         1874        24428                812                  778
    OCaml        2384         4496                125                  782
    Go           3116         1664                596                  618
    Haskell      3329         5268               3002                 1091
    LuaJit       3857         2368                  -                  519
    Lisp         8219        15876               1043                 1007
    Racket       8503       130284              24793                  741
Not only is this syntax far more approachable than Go's for anyone who likes Python, but Nim is actually suitable for systems and embedded programming. Optional GC and manual memory management make Nim one of the few up-and-coming systems programming languages that venture into C territory while still being safe and pragmatic enough for general usage.

http://goran.krampe.se/2014/10/20/i-missed-nim/

https://github.com/Araq/Nim/wiki/Nim-for-C-programmers


"Why Go is not Haskell"


Author here. I think that is a very valid point. However, a lot of the features I mentioned are present in other imperative languages, like Rust. The Go authors, however, picked other solutions I consider inferior - hence the article.


It seems like Rust is what Go would become if you added all the Haskell-ness to it. Which is fine, but we already have Rust. So... maybe we should be asking what Rust is missing from Go? Or if Rust is good enough, just use that.


Rust is missing garbage collection (which is by design, but IMO manual memory management is the wrong hair shirt to wear), and maturity.

What I want is OCaml without people complaining about concurrency (which I suspect is mostly FUD - how many people who choose go "for performance" actually profile their programs? I get the sense there is this group of people who think any high-level language is inherently as slow as Python). Or Haskell without laziness (maybe Idris will eventually become this?), and perhaps with a lower-friction approach to I/O. Or Scala with native compilation (again I think complaints about this are probably mostly FUD, but if nothing else the JVM startup latency is real). Or any of these three getting the hype that Go gets, tbh.


So, uh, what are you actually after? Just use Scala if that's what you want. If you can't be bothered to manage memory in rust, then it seems hard to believe that native compilation makes a difference for you.

Is it all just about the hype? Write code, ship products.


As I said, the JVM startup latency is a real issue. I'd like to be able to write little command-line utilities in Scala (or a Scala-like language) and not have to wait a second when I run them.

But yeah, I'm writing and shipping in Scala, and life is good, I shouldn't care about people hyping what I don't like.


Well, Rust became production-ready well after Go, and the low-level focus it has doesn't match my problem domain. I have the luxury of not caring about the constraints of a single consumer box - hacking mostly on backend stuff living in the cloud - so all those pointer types, for example, would only get in my way.


What about Elixir? Lack of static typing?


Yes. I used to ignore anything non-static; nowadays I am reconsidering this. For me, untyped languages are all the same (they provide no compile-time type checks), give or take syntactic sugar.


Your article is unbelievably arrogant. You should have been honest, not dishonest. These aren't everyday hassles, these are hassles you have selected to promote Haskell.


I write Go exclusively at my day job - these are indeed chores I encounter daily.

I could and probably will blog about problems with Haskell as well - nothing is perfect.


FWIW I thought the article was fine, even though I love Go. I do think it reads like you wish you could use Rust or something. It is nice to hear about hassles from someone who has actually used the language, for a change. Not every language is for everyone; that's OK. Obviously, most people who love the Haskell style of type classes and pattern matching are going to find much lacking in Go. That's OK.


While I understand your comment, I have to disagree.

Programming languages should be kind of boring and support the average developer's work. That is why Haskell will ultimately fail and fade away - most of the developers on this planet are simply not skilled enough to touch any of that stuff. Attempting to add every ninja concept to a language will just lead to massive failure.

Go does a very good job of forcing even the most average developers to reach surprisingly succinct and elegant solutions. However, the lack of generics really forces you, in some situations, to either use reflection or stop being very DRY or elegant. Go wouldn't have to implement full generics to fix that. Just adding something that does for variables what interfaces do for methods would suffice, without the rest of the moving parts.


So type classes?


First of all, you're making the enterprise Java argument. Which is valid, but drives me to suicide.

Furthermore ... Go ... succinct ... elegant ... using reflection as an alternative to generics ...

In my experience Go is neither succinct (if err { ... } one-statement if err { ... } one-statement if err { ... } one-statement) nor elegant: due to the confused sequence of statements it isn't possible to keep your thoughts on what the method is doing, as there are constant error-handling concerns everywhere (or regrets/TODOs about error handling to be completed in the future...)

Is this

(if this doesn't refer to itself or the sentence it's in, return that information to the caller manually so he can ignore it)

somewhat

(if the above statement is not an adjective you have a problem with your understanding of English; report to the relevant authorities)

unclear

(if un doesn't result in the reversal of the meaning of the word clear, we have a problem and should inform the main sentence interpretation routine that understanding is flawed)

perhaps

(missing error checking)

?

(if the above is not punctuation, report back that your browser is probably not using the correct character set)

Using reflection as an alternative to generics is horrible in lots of ways; just to mention a few: there's no type safety (not even a decent verification that you're passing the correct number of arguments); it's extremely long code, full of edge cases (such as arrays being a value type and slices being a reference type despite differing only by a single character); and it's sloooooooooooooow. As in Ruby slow.
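
That value-vs-reference trap in one screenful (sketch):

    package main

    func main() {
        a := [3]int{1, 2, 3} // [3]int is an array: a value, copied on assignment
        b := a
        b[0] = 99 // a[0] is still 1

        s := []int{1, 2, 3} // []int is a slice: a header sharing a backing array
        t := s
        t[0] = 99 // s[0] is now 99 too

        println(a[0], s[0]) // 1 99
    }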

And you forgot Go crashes. If you think C++ error messages are long, you haven't seen a nil-pointer dereference in Go.


Go's philosophy is to consider errors and failures part of solving problems, not something "exceptional" that should make you fail and refer to a higher authority. Error checking and error reporting are not distractions from the normal flow of your function: they are part of the normal flow of your function.


First, except of course for the corner cases that Go thinks should not be accounted for. These include serious system failures (e.g. the GC fails for some reason), memory allocation failures, and of course anything that calls panic. Other errors are effectively impossible to handle (e.g. the system is thrashing and Go's GC runs - you'll never see your program again). So this can't be solved.

So please don't say Go doesn't have exception handling. It has code that detects exceptional conditions and attempts to handle them by unwinding the stack. Of course that's entirely different from "exceptions", right?

Plus it once again points out the schizophrenic nature of Go. Go uses error codes. Except when it doesn't. Go doesn't have generics. Except when it does. Go doesn't allow polymorphism. Except in ~5 cases. No language on the planet allows return-type polymorphism (the meaning of the function changing depending on what you assign its result to)... except, of course, Go.

This was one of the main, and valid, objections people had to languages like Modula-2, Oberon and the like in the past. And maybe I'm getting old, but my money's on the pendulum swinging back again.

Furthermore, I disagree with your assessment. Go and C insist on the same thing: in-line errors. In C that led to errors being ignored in the vast majority of cases by "average" programmers. In Go... well, here's the top 5 of Go projects. Go and take a look for yourself:

https://github.com/trending?l=go (note: changes every day. Now check if errors are ignored today or not. Today's answer 4 of the top 5 projects ignore almost all errors)

But I don't see how your comment addresses the completely unnatural way of thinking that C and Go promote. Handling errors in-line is not natural. This was one of the big complaints when we moved from C to Java/Python, a move supported by a lot of programmers.

So let's be honest here. Go's philosophy in practice is to make users ignore errors.


About your first point: yes, Go has exceptions, the so-called panics, but they are there for truly exceptional situations, those you can't recover from - mainly programmer bugs (nil dereference, out-of-bounds access, bad dynamic cast, etc.) or, in theory, the system getting into an unstable state (no memory left, hardware failure, etc.). But they really are the exception (sic); that's why I didn't mention them. Let's be honest: almost nobody ever deals with these errors anyway, and that's not the kind of thing people think about when they talk about dealing with errors via an exception mechanism.

I really think (and it looks like we strongly disagree on this point) that it's one of Go's virtues to really distinguish between what's part of the problem you're trying to solve and what can be considered faults of the computer (as a whole, i.e. both software and hardware). You shouldn't have to deal with these very different problems in the same way.

As for your comment about the practice being to ignore errors in Go, I'll have to take your word for it, I guess (maybe I should have a look at these horrors you're talking about). This is clearly not the way I work, and it is a serious fault on developers' part if they do. And, of course, it will lead to very serious bugs, since an ignored error code is much more damaging than an uncaught exception. Sure, empty try-catch blocks are found in Java too, but they tend to disappear.


> About your first point : yes, go has exceptions, the so-called panics, but they are here for truly exceptional situations, those you can't recover for,

1) I recover from such errors in Java all the time. It's easy: execute the finallys ("defers"), then execute the retry policy. (Go, incidentally, makes generic retry policies very fucking hard due to the lack of generics. It is impossible to write a method "TryThreeTimes(method, timeout, params)" in Go.)

2) It is extremely disappointing that after all the effort Go forces on you for error handling, there are still a dozen classes of errors (of the kind you find often) that can kill your program, even with error handling perfectly up to the Golang authors' standards.

3) These errors happen often enough that it's not reasonable to kill the program if they happen, so you absolutely need to recover in daemons and the like. So any reasonably large Go program has to consider exception handling, recovery, and alternate method returns.

4) You also have the C++ problem: libraries should never, ever raise an exception ("panic"). But of course, they do. (OK, granted, the Go standard library is reasonably good about this, but the same cannot be said for github stuff.)

5) Nobody actually does the error handling. You never see anything other than "return err" or "exit program with log message" in error branches anyway.

> I really think (and it looks like we strongly disagree on this point) it's one of Go's virtues to really distinguish between what's part of the problem you're trying to solve

Yeah we disagree here. I feel Go's error handling standard prevents me from focusing on the problem independently of what might go wrong, and this limits my abstract thinking. What makes this even more frustrating is that this is exactly the same problem C had.

> This is clearly not the way I work but that's a serious fault from developers if they do.

With a guarantee of quality developers, I'd develop nearly anything in C++. There is no problem with C++ if you have a really good team, and there's just no beating the C++ language in so many departments (portability, abstraction, compiler quality, tooling, power, control of complexity in huge programs...).

But the great failing of C++ is simple : it doesn't support junior programmers, or otherwise somewhat incompetent teams very well at all.

The question here is : does Go ? I don't know, but to me it's not looking good. I would argue that the damage that can be done by slightly incompetent programmers in Go far exceeds the damage that can be done in most other languages (ok, granted, maybe it's better than C++, but definitely worse than Java).


Ha, it's really a matter of taste, because what I like about error codes is how easy and readable it is to write "I want to try three times then fail" blocks.

    for i := 0; i < 3; i++ {
        res, err = f(arg)
        if err == nil {
            break
        }
    }
Writing this kind of behavior with try-catches is rather annoying, the try-catches leaking all over the code.

About C++, well, I almost agree; it is my favorite programming language (in the sense that it would be the one I'd use if I had to pick only one), but I think it is not the right tool for a lot of tasks (including the ones Go is targeting: servers and sysadmin tools). Java might be better for servers than Go, but for sysadmin tools it clearly sucks (mainly because of startup time).


Exactly. Haven't we heard the same arguments 20 times already?


The Evolution of a Haskell Programmer http://www.willamette.edu/~fruehr/haskell/evolution.html


"When the three of us [Thompson, Rob Pike, and Robert Griesemer] got started, it was pure research. The three of us got together and decided that we hated C++. [laughter] ... [Returning to Go,] we started off with the idea that all three of us had to be talked into every feature in the language, so there was no extraneous garbage put into the language for any reason."


Agree with a lot of this, but I find the lack of any punctuation one of the most confusing things about Haskell code.

    Map String Any
Is that a function call? A type declaration? What's a parameter to what? The Go style

    map[string]interface{}
is, much as I love to hate on all things Go, much clearer.


I disagree. You are merely used to different conventions. In Haskell, identifiers starting with an uppercase letter are types (or constructors). You can think of Map as a type constructor which takes two other types as its arguments. So Map Foo Bar is a map with Foo as keys and Bar as values.
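
For illustration, a minimal sketch using Data.Map from the containers package:

    import qualified Data.Map as Map

    -- Map is applied to two type arguments: the key type and the value type.
    ages :: Map.Map String Int
    ages = Map.fromList [("alice", 30), ("bob", 25)]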

In the Go version, [] is used for both indexing and type definition.


> Is that a function call? A type declaration?

If it's following a "::" then it's part of a type signature, and it's a type constructor application. Otherwise, it's part of an expression, and it's a regular function call.

BTW, capitalization is significant in Haskell. Identifiers starting with lower case are always regular functions. Types and type constructors always start with upper case.

> What's a parameter to what?

Application in Haskell is left associative, so it's

    ((Map String) Any)
Haskell is a functional language and programs are filled with function applications, so using spaces for function application avoids a lot of syntactic noise. The only time I find this really bites me is that, due to automatic currying, passing the wrong number of arguments to a function can lead to hairy type errors. This is not a big deal if you add a healthy amount of type annotations to your code, though.
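
For instance, a small made-up example of how a dropped argument typechecks as a partial application and only errors later:

    add3 :: Int -> Int -> Int -> Int
    add3 x y z = x + y + z

    -- Dropping an argument still typechecks, as a partial application:
    partial :: Int -> Int
    partial = add3 1 2

    -- The mistake only surfaces where an Int was actually expected:
    -- bad :: Int
    -- bad = add3 1 2
    --   error: Couldn't match expected type 'Int'
    --          with actual type 'Int -> Int'
The explicit signatures are what keep the error message close to the mistake.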


GHC type errors are far from ideal.

But if you give too few or too many arguments, you usually do get an error like: "Perhaps `foo` is applied to too [many|few] arguments?"


  Application in Haskell is left associative, so it's

  ((Map String) Any)
You seem to have stumbled on the exact conflict. Anybody who's seen the theory of the lambda calculus will recognize the significance of what you've written, and how it applies to partial application of functions.

Anybody who hasn't will either be thinking of lisp or just be totally confused.

This is where the conflict is. Many developers actually know some programming theory and find Go woefully insufficient for anybody calling himself a programmer. And the ones that just fiddle around with things and make barely working programs without really knowing what they're doing are very happy that they don't have to know this in Go.

What's really going on is that people with real programming background find that Go forces them to think at the lowest possible level of abstraction, with no way up. And people without any programming background say "that's GREAT !".


I'd find it depressing if there was any programmer out there who had trouble understanding the concept of "left associativity". I think I learned it in high school.


He's talking about the concept of currying, not left-associativity.


Original question:

> What's a parameter to what?

Answer:

> Application in Haskell is left associative so its

> ((Map String) Any)

Which looks crystal clear to me. This is also the part of the answer that user waps later quoted.


The reason Map String Any does not contain any special symbols specific to that type is that it is not a built-in special type like Go's map[string]interface{}.

It is just a regular type parametrized over two other types. And that is kind of the point: being expressive enough to describe a map type without explicit language support.
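
For instance, a toy sketch of a map-like type written in plain Haskell, with no special syntax (AssocMap is a made-up name; the real Data.Map is a library type in the same spirit, just with a balanced tree inside):

    -- A user-defined map type, parametrized over key and value types:
    newtype AssocMap k v = AssocMap [(k, v)]

    lookupA :: Eq k => k -> AssocMap k v -> Maybe v
    lookupA key (AssocMap kvs) = lookup key kvs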


There's nothing a priori clear about punctuation like [ ] { }; you're just used to seeing lots of syntactic markers.


Sure. But many languages have a standard way of marking what's a function declaration, what's a function definition, what's a type definition, what's a function parameter, what's a type parameter, what's a function call, what's a type instantiation. It doesn't really matter what the specifics are; what matters is that there are standard indicators that stand out from the code (and I think there is a sense in which []{} are a priori more visible in the middle of a paragraph than letters are).

Haskell doesn't seem to have that; any piece of Haskell code tends to look like a smooth surface of words separated by spaces. Function? Words separated by spaces. Type? Words separated by spaces. Instance? Words separated by spaces. There are no visual handles to latch onto, nothing to help you realise how this stream of words separated by spaces forms a tree structure (AST).


> There are no visual handles to latch onto

Haskell uses syntax and context to distinguish identifiers as follows:

- variables, lower case initial letter

- constructors, upper case initial letter

- type variables, lower case initial letter

- type constructor, upper case initial letter

and similarly for classes and modules. And of course, white space is function application.

So you know if it is a variable or a constructor by looking at the first letter. And you know if it is a value-level variable or a type-level variable (or constructor) by context (types can only appear in certain places, e.g. on the right hand side of `::`).

But, yes, they all look like words. They're not ambiguous though. :)
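
As a small illustration of those conventions (Tree and depth are made-up names):

    -- 'Tree' is a type constructor, 'a' is a type variable,
    -- 'Leaf' and 'Node' are data constructors (all upper case):
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- 'depth', 'l' and 'r' are value-level variables (lower case):
    depth :: Tree a -> Int
    depth Leaf         = 0
    depth (Node l _ r) = 1 + max (depth l) (depth r)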


I understand this feeling, but it goes away after a bit of practice.

In Java/Go/C#, there are lots of statements and small expressions. In Haskell, expressions tend to be longer and are broken down into multiple local definitions (with let or where).
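
For example, one larger expression broken into named local pieces with where (quadraticRoots is a made-up example):

    quadraticRoots :: Double -> Double -> Double -> (Double, Double)
    quadraticRoots a b c = ((-b + d) / (2 * a), (-b - d) / (2 * a))
      where
        -- the discriminant gets a name instead of being repeated inline
        d = sqrt (b * b - 4 * a * c)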

You learn to recognize important words (fmap, forM_, $, etc). You also learn to recognize how those words are associated, and how data is passed around.

Granted, there are people who abuse point-free notation to build complex and hard-to-read expressions. However, I find that a codebase with rich, informative types and a reasonable coding rule set is very pleasant to read.


Point-free style can be used to build something similar to Unix pipelines as well, though, resulting in simple, fast, and readable code.
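
For instance, a small sketch of such a pipeline (normalize is a made-up name):

    import Data.Char (isAlpha, toLower)

    -- Composed right to left, like a Unix pipe read backwards:
    -- keep letters and spaces, lowercase them, split into words.
    normalize :: String -> [String]
    normalize = words . map toLower . filter (\c -> isAlpha c || c == ' ')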


The article mentions many uses of generics but it doesn't go over one of my favorites, which is that parametricity gives you "theorems for free".

For example, I recently wrote a compiler in OCaml for a scripting language. The type I used for the syntax tree has lots of type parameters:

    type ('id, 'str, 'enum) stmtf =
        (* big ADT definition goes here... *)
The 'id type parameter is the type of identifiers (names), 'str is for strings (they get interpolated) and 'enum is for constants.

These type parameters give me some flexibility. When the parser builds the first syntax tree everything is still a string:

    type syntaxtree = (string, string, string) stmtf
and after a pass to bind names the strings get converted to more informative types:

    type boundtree = (nameId, interpolation, string) stmtf
Up to now, we could get this flexibility by being untyped and defining all these fields in stmtf as "void pointer" or "interface". But generics let us make everything type safe. Central to this is the "functor map" over stmtf:

    val map_stmtf:
        id:('i1 -> 'i2) ->
        str:('s1 -> 's2) ->
        enum:('e1 -> 'e2) ->
        ('i1, 's1, 'e1) stmtf ->
        ('i2, 's2, 'e2) stmtf
It takes 3 functions telling what to do to the id, str and enum fields, respectively, and converts the stmtf by applying those functions to each field. The neat thing is that in the implementation of map_stmtf, if we forget to apply one of these callbacks to one of the fields, the code will not typecheck. The code also won't typecheck if we apply a callback to the wrong field - say, apply the "id" callback to a "str" field. This lets us catch at compile time bugs that wouldn't even have been caught at runtime, because in the initial syntaxtree both "id" and "str" fields are represented by "string" and are thus prone to being confused with one another. Both of these turned out to be pretty useful as I evolved my language and changed the structure of my syntax tree.

Liberal use of type parameters also gave many other benefits. For example, it was trivial to modify the tree to add a line number next to all identifiers: all I had to do was update the type definition and then follow the compilation errors to find the parts of my code that had to be updated.

    type syntaxtree = ((position * string), string, string) stmtf
Finally, one extremely neat trick is that you can use type parameters in some places where you would have used recursive types. This pattern is described pretty well in the post I'm linking to, and it's the sort of thing that you can't really do without generics.

http://lambda-the-ultimate.org/node/4170#comment-63836
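
For the curious, a minimal Haskell sketch of that pattern (ExprF and Fix are the conventional names, not taken from the post):

    -- Open recursion: the recursive position is a type parameter...
    data ExprF r = Lit Int | Add r r

    -- ...and the ordinary recursive tree is recovered by tying the knot:
    newtype Fix f = Fix (f (Fix f))
    type Expr = Fix ExprF
The payoff is that passes like annotation or folding can be written once against ExprF instead of once per tree shape.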


OK, we all know the problem, but do you have a solution? Have you considered the research done by others, for example this one: http://research.swtch.com/generic


Yes, and Go apologists would also know at least one solution if they wouldn't keep pretending that they can't read the post down to the very _first_ comment on the same page.


What I had in mind was that Go is an open source project, if C#'s solution is a good one, why not write up a proposal and try an implementation?


Why am I supposed to do that?

A) I'm using a working language already. B) Read the comments on the mailing list regarding third party proposals/implementations from the Go authors.

In short: Complete waste of time.


Looks like someone else discovered that Go isn't as good as Haskell at programming in Haskell. It's a good, very pedagogical article, however.


I really like Go but have always missed generics. Wondering if some kind soul would fork Go, add generics and see how that fares.


How can sort be considered generic? Every type must be sorted by different criteria; the compiler can't possibly know what those criteria are.

I see where the author is coming from, but generics would break what makes Go great.

There have been times when I wanted generics, but it has generally meant that I was looking at the wrong side of the problem.


> How can sort be considered generic? Every type must be sorted by different criteria; the compiler can't possibly know what those criteria are.

What. Have you even used a language with generics? Yes it can, very easily; you can do it in a traditional OO way by having a sortable interface, or, if you want to be really cool, you do it with a typeclass.
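
A small sketch of both flavors (byAge is a made-up name; sort, sortBy and comparing are standard library functions):

    import Data.List (sort, sortBy)
    import Data.Ord (comparing)

    -- The typeclass way: sort works for any type with an Ord instance.
    -- sort :: Ord a => [a] -> [a]
    sorted :: [Int]
    sorted = sort [3, 1, 2]

    -- Or supply the ordering explicitly, per call site:
    byAge :: [(String, Int)] -> [(String, Int)]
    byAge = sortBy (comparing snd)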


Sort is generic when order is generic.


Golang is a great language for devops automating infrastructure tasks, but for application dev, not so much.


Have you used it in a real application? It's actually quite good.


I loved VB6, so I like Go very much...


This really seems like a guy who doesn't want what Go has to offer. At every place he compares Go and Haskell, he wants Go to be like Haskell.

There's already a language like that - Haskell.


Statically typed languages impose unnatural, hardware-oriented constraints on your business logic - they force you to spend extra time and effort to make sure that your logic is in a format which the compiler can understand.

Yes, this can sometimes help you to find silly errors in your code at compile time (and thus occasionally save you a bit of time) but, unfortunately, that occasional benefit doesn't make up for the productivity loss incurred from constantly trying to 'please the compiler'.

Dynamically typed languages offer fluidity in your logic where it is needed. For example, if you add an integer and a decimal (floating-point) number together, there is nothing wrong with this mathematically, but a static compiler will complain! It's a limitation of the compiler, not the programmer!

Statically typed languages force you to implement rigid class/type schemas within your logic (sometimes with complex hierarchies) - These schemas have to be significantly reworked whenever your business requirements change.


I'm not trying to be insulting, but did you read the article? From what you've written, it sounds like you haven't actually used an expressive type system.

For instance, in Haskell I can type 2 + 3.75 and I get 5.75 right back -- no errors. That's because the type signature of + is "Num a => a -> a -> a," meaning I can add two values of any single numeric type (Integer and Float both have Num instances, of course).
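
A quick sketch of where literal polymorphism ends and explicit conversion begins (x and y are made-up names):

    -- Numeric literals are polymorphic, so this typechecks as-is:
    x :: Double
    x = 2 + 3.75

    -- Mixing a value that is already a concrete Int with a Double
    -- needs an explicit (and visible) conversion:
    y :: Double
    y = fromIntegral (2 :: Int) + 3.75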


Actually, I see having to rework the type schema when the requirements change as a plus, not as a con. In a static language the compiler can point out all the bits of the program that need to be updated to conform to the new schema while in a dynamic language this kind of stuff is usually only detected at runtime.


Have you never used a statically-typed language? Most statically typed languages can handle adding an integer and a float together. The only one I can think of that isn't capable of that is OCaml, though this is probably a strength of OCaml rather than a weakness (int-to-float casting can be a common source of bugs if it happens unexpectedly).


Try adding an int and a float32 using 'Try Go' on http://golang.org - I get this error:

prog.go:13: invalid operation: x + y (mismatched types int and float32) [process exited with non-zero status]

I'd have to cast one of the variables to get it to work. Casting is unnatural - maybe it's not so bad in this case, but in many cases it isn't desirable.


In OCaml's case it's less about casting being a source of bugs and more about casting not playing well with Hindley-Milner type inference, which is OCaml's "killer feature" (and OCaml doesn't have type classes to solve this problem, like Haskell does).


I think you should spend some time familiarizing yourself with actual statically typed languages before expressing an extremely ill-informed opinion as fact.


Your comment certainly doesn't apply to languages with expressive type systems.



