Go Rocks - How Can We Avoid Something This Bad In The Future? (acooke.org)
137 points by andrewcooke on July 10, 2011 | 63 comments



No syntactically lightweight way of writing anonymous recursive functions? I can't make myself care about that; I just spent 30 seconds trying, and failed. That's just not a use case worth optimizing for in a practical systems language.

Also, it's true that the Go spec doesn't guarantee tail calls are optimized. If Go were designed to be a functional language, that'd be problematic. However, guaranteeing TCO isn't free in implementation, and in the presence of features like "defer", it becomes non-trivial to understand when it is happening. So, while I like TCO as much as the next guy, I agree with the Go designers' decision not to require it. Just like with a pile of other languages, the lack hasn't stopped people from writing good code, and hasn't even stopped them from writing useful recursive code. I think there's some code in the Go standard library (parsing code, mostly) that uses trampolining to simulate it, and that code would be better off if it could just do a tail call, but on balance that code still works well and isn't convoluted.
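(For the curious, the trampolining trick looks something like this. This is a generic sketch with made-up names, not the actual standard library code: each "tail call" returns a thunk describing the next step, and a driver loop runs thunks until done, so the stack never grows.)

```go
package main

import "fmt"

// A thunk is one suspended step of the computation; it returns either
// the next step, or (nil, result) when finished.
type thunk func() (thunk, int)

// trampoline runs thunks in a loop, so the stack stays flat no matter
// how many "tail calls" the computation makes.
func trampoline(t thunk) int {
	for {
		next, val := t()
		if next == nil {
			return val
		}
		t = next
	}
}

// sumTo is written in accumulator-passing (tail-call) style, but
// instead of calling itself it returns the next step as a thunk.
func sumTo(n, acc int) thunk {
	return func() (thunk, int) {
		if n == 0 {
			return nil, acc
		}
		return sumTo(n-1, acc+n), 0
	}
}

func main() {
	// A million-deep "recursion" with constant stack usage.
	fmt.Println(trampoline(sumTo(1000000, 0))) // 500000500000
}
```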

Anyway, not a bad blog post; I agree with most of it. But, as someone who has written a fair bit of code in functional languages and a fair bit of Go, I find that the more Go I write the less I care about the language features (or lack thereof) that I was horrified by at first, and the more I see most other languages as overcomplicated. Go isn't a very good language in theory, but it's a great language in practice, and practice is all I care about, so I like it quite a bit.


I'm a full time OCaml dev these days and I run into OCaml's 'lack of lightweight anonymous recursive functions' maybe three times a year. It costs 3 lines of code to declare the function (called loop() or whatever) and then call it. I've never realized I was supposed to want to switch languages because of it.

Otherwise, this comment makes me more interested in Go than I have been before. OCaml has a bazillion features that I never use, and since every OCaml program can look completely different by using those features, the libraries outside the standard lib are all but incompatible with my code.


what about pattern matching? that is imo one of the saddest omissions from go - it's not a "huge" feature like continuations, but i find it makes a language significantly more pleasant to work with.


I asked Rob Pike why they didn't add pattern-matching given that it already has multiple returns and some other baby steps in that direction. His response was, "We considered it, but it turned out to be too much work in the compiler."

That was a huge WTF to me. Isn't your job as a language designer to do that work so that I, the language user, don't have to?


The cost of complexity in the compiler is not only to the persons implementing it.

Also I'm pretty sure the amount of work in the compiler was only one of the considerations, and while I don't doubt your account of your conversation with Rob, I doubt it was the main consideration in this design decision.


Channels + select are a pretty good replacement for what one usually uses pattern matching in Erlang.

There are pros and cons for both models, but they are roughly equivalent and I never missed pattern matching in Go.
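To make the comparison concrete, here's a minimal sketch (names are made up): where an Erlang receive block pattern-matches on message shape, Go selects on whichever typed channel is ready.

```go
package main

import "fmt"

// drain reads values until the quit channel fires. Where Erlang would
// dispatch on message shape inside a receive, Go dispatches on which
// typed channel has something ready.
func drain(nums <-chan int, quit <-chan struct{}) []int {
	var got []int
	for {
		select {
		case n := <-nums:
			got = append(got, n)
		case <-quit:
			return got
		}
	}
}

func main() {
	nums := make(chan int)
	quit := make(chan struct{})
	go func() {
		// Unbuffered channels make these sends sequential,
		// so the receive order is deterministic here.
		nums <- 1
		nums <- 2
		quit <- struct{}{}
	}()
	fmt.Println(drain(nums, quit)) // [1 2]
}
```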


Pattern matching isn't just a feature to aid concurrent programming; it's extremely helpful for all sorts of code. However, since Go doesn't have unions (either tagged or unsafe), pattern matching in Go wouldn't be particularly powerful anyway.
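For what it's worth, the closest thing Go does have is the type switch, which dispatches on dynamic type but can't destructure values the way ML-style pattern matching can. A small sketch:

```go
package main

import "fmt"

// describe dispatches on the dynamic type of v. This is Go's nearest
// analogue to pattern matching: you can branch on type, but without
// tagged unions there is nothing to destructure.
func describe(v interface{}) string {
	switch x := v.(type) {
	case int:
		return fmt.Sprintf("int: %d", x)
	case string:
		return fmt.Sprintf("string: %q", x)
	case []int:
		return fmt.Sprintf("slice of %d ints", len(x))
	default:
		return "something else"
	}
}

func main() {
	fmt.Println(describe(7))           // int: 7
	fmt.Println(describe("go"))        // string: "go"
	fmt.Println(describe([]int{1, 2})) // slice of 2 ints
}
```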


I think you and a lot of other people got too caught up in the cognitively-well-trodden ground about how a language doesn't support anonymous functions and TCO very well, but missed some of the other less common arguments that I think are at least as important. The complaint about the compiler and runtime having too much magic access is very valid. I've come to see this as a major design flaw in any language, especially given how easy it is (relatively speaking) to give programs access to the magic interfaces instead of locking them away. Python is the best example of this: there is now almost no statement in the language that there isn't an official way to extend with some double-underscore protocol or other. It always causes problems when the language has some magical layer of access that automatically does something for you, because it eventually turns out to be the wrong automatic thing. It's only a matter of time.

And there isn't anything about that complaint that is the slightest bit "academic". If anything, the academics are far more into the idea that they can come up with the One Correct Behavior and write it straight into the underlying specification of the language than the practical languages are. (One of my personal criticisms of Haskell right now is that while this is slowly being fixed, it's being fixed in a haphazard, one-off-per-syntax-element manner, instead of seeing the underlying problem and addressing it in a unified manner, and I think that this is because, like I said, academics don't really think this way.) In my book, it's simply a mistake.

I think it's a grave mistake to collapse this criticism of Go to "It's not academic enough"; my sense of the problem is actually that it's more on the practical side. There are some clear practical wins that really ought to be well known by anybody doing PL design that seem to have been passed up. Hopefully these are corrected, but it dampens my enthusiasm somewhat that they were missed in the first place. YMWV.


> Go isn't a very good language in theory, but it's a great language in practice, and practice is all I care about, so I like it quite a bit.

just like C was meant to be a practical language for writing Unix tools. thanks for your comment.


I work on Rust, which is in a similar space to Go. IMHO this article is oversimplifying.

(1) Recursive anonymous functions: This is the first time I've heard this criticism. MLs usually don't have this functionality either. It's not too much trouble to give the function a name.

(2) Tail call optimization: TCO isn't free. It requires all callees to pop their arguments, which slightly penalizes every call in the program. Rust is planned to support full TCO, but the current compiler only does sibling call optimization (a subset of TCO).

(3) Continuations (I assume call/cc is meant here): Again, these are expensive, because they interfere with the stack discipline. You end up having to garbage collect stack frames in the worst case. For this reason both Go and Rust have implemented exceptional control flow using stack unwinding and destructors. (For Go panic/defer/recover play the role of destructors, while for Rust we use a more traditional RAII system.)
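The Go side of that, sketched minimally (the function name is made up): panic unwinds the stack running deferred calls, and a recover inside a deferred function stops the unwinding.

```go
package main

import "fmt"

// mightFail shows Go's unwinding-based control flow: panic starts
// unwinding the stack, deferred functions run as frames pop, and a
// recover in one of them converts the panic back into a value.
func mightFail(fail bool) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	if fail {
		panic("something broke")
	}
	return nil
}

func main() {
	fmt.Println(mightFail(false)) // <nil>
	fmt.Println(mightFail(true))  // recovered: something broke
}
```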


(1) MLs allow nesting named functions. Go only allows named global functions, and lacks a feature that Pascal had.

(2) Someone has to do cleanup anyhow, either the caller or the callee. (I'm no expert in calling conventions.) Isn't this practically free anyway, since the stack is in the L1 cache?

(3) To implement lightweight threads (à la Erlang, Go, Stackless) you already can't allocate your activation records on the C stack. So the main cost of call/cc is already paid. Besides, Go lacks any form of non-local jump, so escape continuations alone (setjmp/longjmp) would already be an improvement.


> Go only allows named global functions, and lacks a feature that Pascal had.

    func main() {
    	bar := func() {
    		fmt.Println("Hello, 世界")
    	}
    	bar()
    }
What do I lose by doing that versus having 'func bar()'?


this:

    func global(some_arg_to_close_over Bar) {
    	func walk_tree(node Node) {
            ...
            walk_tree(left(node))
            walk_tree(right(node))
    	}
    	walk_tree(something)
    }


Can you elaborate on how popping arguments penalizes every call? It seems like, for non-variadic calls at least, it should take the same amount of time to add a constant to %esp whether you're doing it in the RET instruction or in the caller; but factoring that code into the callee should improve code density and therefore icache hit rates. What am I missing?


Callers have to readjust the stack after each call when TCO is enabled, but without TCO a function can allocate outgoing argument space for every function invocation up front and reuse that space for every call.


God what an annoying troll.

It's a very new language, and things are being improved + fixed all the time, I would be very surprised indeed if tail-call optimisation + the ever so vital self-recursive lambda issue weren't fixed at a later date. To treat these things as permanently a part of the language is disingenuous at best.

Rob Pike (part of the core team) has said that what puts a lot of PL people off is that it is not theoretically exciting, it's just designed to be very useful [1].

The funny thing about go is that it seems like an uninteresting language when you first begin, but as you write more code you begin to appreciate its simplicity and orthogonality and realise that these are extremely nice qualities.

Language design is as much about saying no to things as it is about saying yes to things.

On a related note, I don't like the implication that there is now a scientific method to language design which obviates the need for practical considerations - that strikes me as class A ivory tower bullshit.

[1]:http://www.youtube.com/watch?v=-i0hat7pdpk&feature=playe...


I wonder what the author means by programming language "academic research". None of the mentioned deficiencies of Go (anonymous lambdas/TCO) is the product of new academic research; these features have been around for a very long time (see Lisp).

Most PL academic research goes into Haskell/ML, and we know how useful they really are. (Monads/type inference, anyone?)


I wrote a bunch of go code this morning. It's definitely got a few really neat ideas in it.

I tend to write erlang or c++ mostly for my day job. I do really wish the designers had a bit more erlang experience. Just a touch. Just enough so that I had a way to propagate the death of a goroutine through a bunch of other goroutines without building all of that manually.

For example, code I wrote today:

http://pastebin.com/Zp6eqqTa

That infinite loop is really infinite. If the goroutine feeding the channel dies (e.g. if the connection is dropped), this thing will keep reading on the channel. In erlango, I'd do something like "go_link client.Feed(ch)" and the right thing would happen. Here, I really need to spin up a second channel to send out the death spurt, select across the channels in my loop and send my death message with defer.

defer is awesome, and I wish every language had it (C++ has something that looks like it if you write enough code and squint hard enough). Still, I've got a lot of code like the above and I imagine it's not an uncommon thing.

Every language has something that just feels unjustifiably wrong with it, though. go certainly isn't one I'd be pained to use if I had to every day.

(Edit: couldn't figure out code formatting, tossed it over to pastebin)


If you modify your for loop to range over ch, you can make the sending side close(ch) when an error occurs, and then the loop will end and the receiver can shut itself down.

http://pastebin.com/FMSLvFCn
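(For readers for whom the paste has since rotted, a minimal reconstruction of the pattern with hypothetical names: the sender closes the channel when its source dies, and the receiver's range loop ends naturally.)

```go
package main

import "fmt"

// feed sends values and closes the channel when its source is done
// (e.g. the connection drops); the close is what lets the receiver
// exit instead of blocking forever.
func feed(ch chan<- string) {
	defer close(ch)
	for _, msg := range []string{"a", "b", "c"} {
		ch <- msg
	}
}

func main() {
	ch := make(chan string)
	go feed(ch)
	// range ends when the sender closes ch, so no second "death"
	// channel and no select are needed.
	for msg := range ch {
		fmt.Println(msg)
	}
	fmt.Println("feeder finished, shutting down")
}
```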


Wow, that's not intuitive. :)

Throwing in a "defer close(ch)" and that range thing did exactly the right thing. I'm not sure my API is right, but I know how to get it right now. Thanks a lot.


I'm surprised you say it's not intuitive. Was it just that you didn't know you could close channels?


The idea of using "range" across what should generally be an infinite channel is not something I would've guessed existed.


> I do really wish the designers had a bit more erlang experience.

I feel the same way. I had this hope that this would be what I would learn instead of Erlang. You know, it has goroutines and channels -- kind of like processes and casts. Well, except that it is not the same. Erlang's processes, which have explicit pids and to which messages are cast by referring to their pid, make a lot more sense to me than a bunch of channels. Maybe it is just me, but I really just prefer the Erlang way. (Is this the "actor" vs. "CSP" distinction?)

Then there's what you mentioned: exit signals. Anyone who wants to promote concurrency as a first-class feature in the language and expects it to be useful in the real world would have to have something like supervision trees & OTP. In other words, when you have lots of little goroutines doing stuff concurrently, it is expected some will just crash, and then you'd want to do something reasonable, which is not always to just crash the whole application (OS process).


Go is something quite different to Erlang, despite some small similarities. If you want Erlang, just use Erlang.


I think Rob Pike discusses some of the differences between the Go and Erlang CSP-like models:

http://confreaks.net/videos/115-elcamp2010-go


I really wanted to learn something from this post, but it's mostly just hyperbole and whining.

"It turns out that, in Go, you can't write a self-recursive anonymous function - there's no way to make it refer to itself. Well, you can do a dirty hack that's the "official work-around" (I'll let you google for the bug report)."

To summarize this argument: 1. It is impossible to have a self-recursive anonymous function in Go. 2. Oh wait.. It is completely possible, but far too ugly to soil this blog post.

I looked up the temporary solution (http://code.google.com/p/go/issues/detail?id=226). It is this:

    var recurse func()
    recurse = func() {
      recurse()
    }
The proposed permanent solution is allowing recursive function definitions, because that is the proper solution instead of adding a magic self keyword like other languages do.
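Made concrete with a base case (a factorial, purely for illustration), the workaround compiles and runs fine:

```go
package main

import "fmt"

func main() {
	// Declare the variable first so the closure body can refer to it;
	// this is the "official work-around" for a self-recursive
	// anonymous function in Go.
	var fact func(int) int
	fact = func(n int) int {
		if n <= 1 {
			return 1
		}
		return n * fact(n-1)
	}
	fmt.Println(fact(5)) // 120
}
```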

(tl;dr The author just doesn't like the fairly-simple work-around proposed while the language is still developing)

Edit: There definitely are far more important things in Go I dislike, and would change, however this blog post is essentially fluff.


This is what they want:

    func outer(a int) {
        func inner_helper(b int) {
            ...
        }
    }
Pascal allowed nested functions 40 years ago...


When saying "nested functions" in 2011, people expect you to mean lexically scoped nested functions.

Pascal's nested functions really just scoped the name of the function differently, but they were equivalent to a function definition on the outer-most level.


I don't think Go's creators are as ignorant of academic ideas as you say. I've seen Rob Pike refer to OCaml a lot for instance. They just don't seem to like many of those ideas.


I cannot recall him ever saying the word "OCaml".


It's possible that I am mistaking Pike for someone else on the Go team.



Go's performance isn't good enough to broadly compete with C++ yet, and for everything higher level I don't see a strong enough advantage over Java or Scala to start all over with a new library ecosystem.

That said, I'm keeping an eye on it. If it takes off I'll learn it but it doesn't make sense to make it a priority yet, IMO.


I still think the major stumbling block for Go is the syntax. I can't put my finger on it, but from the first time I saw it I thought it looked wrong.

Can anyone who shares my opinion chime in on what things they think cause the syntax to suffer aesthetically?


I'm amazed by this particular criticism of Go. Compared to many modern languages (Erlang or Ruby, for example), Go's syntax is barely different to Java or C. But I think it's better than either of those (It's certainly more regular).

Try it for two days and you'll change your mind.

Also, see: http://blog.golang.org/2010/07/gos-declaration-syntax.html


It seems some programmers just will not accept new syntax at all. I think it's a waste of time trying to please these people and, contrary to popular belief, you don't need their blessing to make a successful new language. Coffeescript is an example that supports this.


It's a valid point, but it paints with too broad of a brush. For example, I have enthusiastically moved from JavaScript to CoffeeScript, in part because I love the syntax.

While I know there are programmers who act just like you described, my complaint is with Go's syntax specifically.


I guess it is something like the uncanny valley for languages. Yes, it looks deceptively similar to C++, C & Java, but it is not any of those. When your eyes see the syntax, your brain expects the functionality to be the same. However, Go does introduce fairly radical new paradigms (goroutines, type inference...) that really upset that initial feeling of familiarity.

I (unlike the majority) like Erlang's completely strange and radical syntax. When I see it, it tells me "there is something completely new happening here, don't expect this to be C++ plus processes" or something like that.


I agree except for the whole ";" "," "." line endings issue. When I can't copy and paste to move lines around without "fiddling" with the line endings, I get angry.


I think the solution is to treat those as line beginnings rather than endings: http://news.ycombinator.com/item?id=2389715


The trouble is that it borrows a lot from C and then subtly changes it for no discernible reason. This is especially true with variable declaration.

Go is the only language I've ever tried for a while and been really put off by the syntax. I can't quite articulate why either. It's not...pretty. I write quite a lot of C++ and Python, so theoretically I should be a pretty good fit for Go. But it doesn't click with me.


There are good reasons. See the blog post I linked to.


I think it's kind of like how, as a Portuguese speaker, I can never learn to speak Spanish properly. It's easy enough to understand, but I can never remember the different words because it's too similar to compartmentalize.


It's the mix of old-school brace syntax, with the omitted parens which makes it look off to me.


I personally kinda like that decision. You don't need both parens and braces to remove ambiguity. Since braces are useful for denoting code blocks, parens are basically noise.

I'm not a language expert, so if anyone can correct me on the theory I would appreciate it.
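A small sketch of the point (with a made-up helper): the mandatory braces alone delimit the body, so the condition needs no parentheses to stay unambiguous.

```go
package main

import "fmt"

// classify shows Go's condition syntax: no parentheses around the
// condition, but the braces around the body are mandatory, so the
// grammar stays unambiguous without them.
func classify(x int) string {
	if x > 2 {
		return "big"
	}
	return "small"
}

func main() {
	// for follows the same rule: no parens, required braces.
	for i := 2; i < 4; i++ {
		fmt.Println(i, classify(i))
	}
}
```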


I can't think of any way to get around it once you add static typing.

Dynamic languages have much more bare function/object definitions.

Haskell does make types look sexy, on the other hand. So yes, there could have been some borrowing from there.


I've played very little with Go, and it seems to be a nice language overall.

Anyway, it's designed to allow easy parallelism. It's a systems language for parallel work.

The only thing that I don't like about it is that the built-in types get special treatment (if you want to write a map yourself, you can't use the same syntax). This indeed feels like a design smell.
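The asymmetry being described, sketched with a hypothetical user-written type:

```go
package main

import "fmt"

// MyMap is a hypothetical user-written map. Without generics or
// operator access it is stuck with method calls and a fixed
// key/value type, unlike the built-in map.
type MyMap struct {
	data map[string]int
}

func (m *MyMap) Get(k string) int    { return m.data[k] }
func (m *MyMap) Set(k string, v int) { m.data[k] = v }

func main() {
	// Built-in maps get index syntax and arbitrary key/value types...
	builtin := map[string]int{"a": 1}
	builtin["b"] = 2
	fmt.Println(builtin["b"]) // 2

	// ...while the user-defined version cannot use that syntax.
	mine := &MyMap{data: make(map[string]int)}
	mine.Set("b", 2)
	fmt.Println(mine.Get("b")) // 2
}
```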

Go is still very useful this way, but I do hope that it gets generics/templates sometime in the future.


I think Go is a pretty mediocre language overall, one that would have been a killer early 2000, but this article basically boils down to: "Go sucks because it doesn't support tail call optimization and continuations".

Way to miss the boat.


I don't mind those as much, but what really kills it for me is the lack of separation between the language and the standard library.


Wait, is there supposed to be such a separation? Python's standard library is half the reason people use Python. Do I consider the MatLab matrix multiplication routine the library or the language? Because the abstract syntax and semantics of MatLab could certainly do without it. What about ODE solving? I wouldn't use MatLab without it. Is Lisp's (defun) library or language? In my own pylisp, I put (def) in the standard library. But you'll see little code that doesn't use it.

What about Haskell? Is Prelude library or language? Every Haskell file has it preloaded, so you can't code without it (without effort, anyway); but it can certainly be implemented in Haskell itself. Is Haskell the language just typed lambda calculus with type classes and some basic syntax sugar? Is IO Haskell language or library?

What about C's varargs? Is it language or library? It seems like library, since you include a header file to use it. But it's also impossible to do variable-length argument lists in C without it. What about longjmp? What about sbrk? It seems like library, since different OSes do it differently, and thus it has multiple implementations, in C. But without it, you can't do heap-allocated memory.

Separation between language and standard library? What language does do it?


C is actually one of the few languages that gets the language/library separation mostly right. You can write pretty much the entire standard library in C, and you can easily write C without the standard library. It's very straightforward (if sometimes a bit annoying) to do this in Windows, and I believe neither EPOC/Symbian nor PalmOS supported the ISO C library.

varargs, setjmp/longjmp and sbrk are 100% library. You can do varargs without stdarg.h; if you know the calling convention, you can just walk through the memory occupied by the arguments. setjmp is trickier but if you know the rules you could write it using assembly language, just like the library writers did. As for sbrk, I'm not terribly familiar with that I do admit, but judging by the man page it is not part of the C standard library, kind of proving my point. But you can definitely have a heap, and allocate out of it, and (where plausible) grab more address space, without having it.


I agree with your broad point that you can write a usable C library with a very minimal amount of assembly, but I am going to quibble on some of the details.

> You can write pretty much the entire standard library in C, ... varargs, setjmp/longjmp and sbrk are 100% library.

If you're saying "These four functions can be written in C" then you are mistaken. If you are saying "These four functions can be written in assembly" then you are not saying anything interesting about C, because it is true of any language that it is possible to write its library in assembly if it's possible to write it at all.

In more detail.

Whether you can write va_arg in C depends on the platform's calling convention. On most platforms you can (because the default calling convention has to support varargs, and the easiest way to do that is to push the arguments on the stack) but on x86-64 you can't.

setjmp and longjmp cannot be written in C on any platform I'm familiar with, unless you're writing them on top of something like setcontext, which has a slight superset of their functionality. They are, however, very simple to write in assembly.

You can write sbrk in one line of C, and the glibc implementation is only ten: http://koala.cs.pub.ro/lxr/glibc/misc/sbrk.c#L29 But sbrk is simply a convenience wrapper around brk, which is necessarily a system call. And C doesn't have a "system call" statement.

(Actually, you can write brk in C too if you just want to allocate out of some fixed blob of address space, which is a reasonable choice on, say, a microcontroller.)


Varargs for x86-64 is most definitely baked into the compiler, and impossible to do library-only.


He lost me when he said "to understand that, you need to understand the academics".

There is a pretty sizable disconnect between what's going on in academic computer science today and what professional software engineers are looking for. Go explicitly sets out to solve practical problems, not provide an implementation of this year's trendiest compiler porn.


For some reason I can't scroll this page on the iPad.


Use two fingers. It's still a pain, but you can read it.


I know that this is the wrong mentality, but I want to become an expert in a language that is as good as it can be. Can Go evolve to address some of these issues? Would it even be go anymore?

I tried Googling, but why are Scala and Haskell automatically classified as "academic"? Is there value in learning them purely for that reason? I walked through some Haskell tutorials after reading this, but I'd rather stare at the sun than read Scala.


I wouldn't call either Haskell or Scala academic. However, here's a quick shoot-from-the-hip stereotyping of each.

Scala gives me the impression it wants to be Java's C++. C++ is largely an extension of C. Sure you can use it entirely differently than C, but that C-ish core is still there. And Scala gives me that same feeling in relation to Java.

On the other hand, Haskell suffers from being a "pure" language. I mean "pure" in a wider sense than "functional". I mean "pure" as in being built entirely around a concept. For Haskell that concept is "functional programming". For Lisp, it's the "cons cell". And for Forth, it's the "stack machine". The benefit to being a "pure" language is there are all sorts of synergies you can take advantage of. The down side is when something doesn't quite line up, it gets ugly fast.


I think Haskell accommodates imperative programming so well that it surpasses many imperative languages and beats them in their own game.

I also think the name "purity" is misleading. In the context of Haskell it means:

* Referential Transparency: references to things are indistinguishable from those things. This helps a lot with equational reasoning about code, optimizations and all sorts of mechanical refactorings. I don't think there are significant disadvantages here.

* Typed effects: Haskell types its effects and requires explicit composition of effects and not just of values. This, in turn, turns "side effects" simply into "effects" because they are so explicit. This has some disadvantages, mainly regarding learning curve. If you get a pure function type, you are guaranteed many useful things about its behavior. The flip side of this is that if you want to change a pure function to have effects and break these guarantees, you will have to change its type and somewhat restructure it.

Haskell doesn't "get ugly fast" in any circumstance I've seen. It accommodates an extremely wide array of paradigms as pretty encodings on top of its own functional paradigm.


I completely agree, in fact I've been trying to show people how well Haskell can encode imperative programs.

See https://github.com/DanielWaterworth/Musings/blob/master/why_... and https://github.com/DanielWaterworth/Musings/blob/master/comp...


> I tried Googling, but why are Scala and Haskell automatically classified as "academic"?

It's what people say when asked about languages they don't use or understand.

I.e., damning with faint praise.


> I tried Googling, but why are Scala and Haskell automatically classified as "academic"?

Because they are inspired by 20th century mathematics rather than 18th century mathematics. This quote from wikipedia explains it:

These developments of the last quarter of the 19th century and the first quarter of 20th century were systematically exposed in Bartel van der Waerden's Moderne algebra, the two-volume monograph published in 1930–1931 that forever changed for the mathematical world the meaning of the word algebra from the theory of equations to the theory of algebraic structures. -- http://en.wikipedia.org/wiki/Abstract_algebra

The academic languages are the ones aware of that change in the meaning of the word algebra. Everyone learns Euler in high school. Nobody learns Mac Lane.


It came from inside a bunch of universities, and it's not hard to find papers talking about things like "monads" or "arrows" mentioning things like "monoids", which people mainly associate with Haskell. (But monoids aren't really Haskell-specific things.)



