
Disclaimer: I mean this with love

This post really frustrates me, because the lengthy discussion about identifying problems and implementing solutions is pure BS. Go read the years' worth of tickets asking for monotonic time, and see how notable names in the core team responded. Pick any particular issue people commonly have with golang, and you'll likely find a ticket with the same pattern: overt dismissal, with a heavy moralizing tone that you should feel bad for even asking about the issue. It's infuriating that the same people making those comments are now taking credit for the solution, when they had to be dragged into even admitting the issue was legitimate.




> asking for monotonic time

His anecdote about Google/Amazon using leap smears to solve the problem is telling. I suspect that they were unable to see outside their own bubble to think about how others may be impacted.

> We did what we always do when there's a problem without a clear solution: we waited

The original report described the issue and its potential consequences very well, and the problem didn't change between the initial report and when Cloudflare hit it. It was not until a Serious Industrial User (to borrow a term from Xavier @ OCaml) got bit in a very public way that they actually began thinking about what a clear solution would look like.


> We did what we always do when there's a problem without a clear solution: we waited

And this is exactly why the Go designers don't understand language design, and how that ignorance shows through in every corner of their language.

Language design is about compromises. There is never a perfect solution, only one that satisfies certain parameters while compromising others. You, the designer, are here to make the tough choice and steer the language in a direction that satisfies your users.

Besides, characterizing their position as "We waited" is very disingenuous. First of all, this is more stalling than waiting, and second, waiting implies they at least acknowledge the problems, which the Go team famously never does. Read how various issues are answered and then summarily closed with smug and condescending tones.


You are being deliberately negative here. Choosing to forego generics in favor of simplicity (and its impact along several axes) is a textbook example of a compromise. It is a tough choice that many people will be unhappy with, but there are also many Go programmers that are extremely satisfied with that direction.

As for acknowledging, well, they have always been very clear about their position. It makes no sense to spend a decade answering the same question over and over with a long and elaborate response which the person asking has already seen and dismissed. I can understand them becoming condescending after a decade of facing people who act with a deliberately obtuse and holier-than-thou attitude.

It's not like they have been lazy - every release of Go has had a large amount of improvements that matter. Working on generics would have meant sacrificing a (probably large) amount of them.

(for the record, I dearly miss generics too!)


Regarding the leap second bug, I suspect this is an example of perfect being the enemy of the good.

It appeared to me that the golang devs believed so strongly in the superiority of leap second smearing that waiting for everyone to adopt it was better than compromising their API or the implementation of time.Time.


Well, but that's not waiting, that's stalling.


Given Google's orientation towards server-side web applications, it makes sense. On the other hand, the real-time OSs such as QNX have had monotonic clocks available for decades, and they use them for all delay and time interval measurements. (In real-time, you use the monotonic clock for almost everything that matters, and the day clock for logging and display only. You don't want the clock used for delays and time intervals to "smear"; it might have physical effects if it's being used to measure RPM or something and seconds got slightly longer.)

Go is great for server-side web applications. The libraries for that are all there and well debugged. Anything else, not so much.


This hit a nerve, so I'm going to leave it up, but after thinking about it I also don't like how purely negative it is. I can't edit it anymore but I do want to make a follow up to say some things explicitly:

I want to be clear I'm thankful for the effort everyone has put into golang. Making something people care about enough to love or hate is hard. Stewarding a FOSS project can be a very negative thankless experience as well. While I'm being critical above, there are also lots of examples of golang folks, core or otherwise, making really constructive and productive comments and contributions. I don't mean to trivialize that or ignore it.


Note that at no point in the post whatsoever were any contributions to these specific issues from outside the core maintainers thanked or even acknowledged.


FWIW I think this is a fair criticism. We've had so much help from the Go community, not just for those two issues but also for essentially all the work we've done since the release in 2009. Go simply wouldn't exist without the community, and we're very grateful for it.

I couldn't fit any kind of thank-you into the talk, except for the credit to the overall community in the first few minutes, because I had a lot to say and only 25 minutes. It's possible I should have written an extended blog post that was more than just the talk, but I didn't - the blog post is just the talk, as it says.


Which is quite surprising. With open source projects it's usually the other way around...


Google already had an incident with Guava where they didn't give a shit about the patches: https://news.ycombinator.com/item?id=3691587


Would you mind providing examples of what you are talking about? My experience has been different - I find that the golang maintainers are thoughtful but also very very (very) busy people. So they tend to respond in a terse, sometimes seemingly unfriendly, way, but I haven't seen any of what you mention here.


Same, and to add: Go 2 has been planned for a while (since the beginning?) and I'm pretty sure the core developers tend to defer major new feature ideas (e.g. generics) to Go 2.

If they implemented everything everyone asked for they'd have another screw-up like C++ or Java.


Calling those languages screw ups is one of the funnier things I've seen in a while.


I know that both Java and C++ have a lot more issues than Golang ever will. I've used all three languages. Java has mile long class hierarchies, massive try-catch blocks, indentation as thick as my neck, and runs in a JVM. It's so bad there are already other implementations that people would much rather use. I also really don't like the file/project naming conventions.

C++ is a whole other animal. A Go program is typically very transparent as a whole system. You can trace the code down to the bottom of the stdlib, or even the compiler builtins, in seconds with ease (thanks to the excellent Guru and vim-go). In my experience with C++, I doubt many C++ programmers have ever looked at the implementation of the STL or iostream, which is hard to do ergonomically. Some people call that opaqueness encapsulation and see it as good practice (which I don't agree with). There are lots of cross-compilation and static-linking problems that don't arise with Go. And one C++ code base will look completely different from another, because no one in the C++ world follows conventions the way people in the golang world do.

Compared to them Go is a very well thought out and elegant language.


> I know that both Java and C++ have a lot more issues than Golang ever will.

No you don't. But even if I granted that Golang has a bright future (I don't think it does) by any objective measure C++ and Java are more successful than Golang. LOC, number of devs, performance, install base, etc. Pick a measure other than current hype (not even peak hype) and either Java or C++ trounces Golang.

> I've used all three languages.

I have too. I program golang full time and have for 3 years. I'd switch tomorrow to either of the other languages if I could wave a magic wand (and I'm a fair hater of both).

> and runs in a JVM

I desperately miss the JVM (any of the ones I've used). The amount of sophistication and polish in comparison to the Golang runtime is embarrassing. Debugger support, operational sophistication, IDEs, tooling, basically everything is better on any of the JVMs I've used in comparison to the golang runtime.

>Java has mile long class hierarchies, massive try-catch blocks, indentation as thick as my neck

And go has ridiculous copy/paste libraries, horrendous concurrency edge cases, terrible error handling, worse third party libraries and no story around packages.

> There are lots of cross compilation and static linking problems that don't arise with Go.

Because dependency management is not possible in golang. It is literally the worst story in any language I've used in 20 years. You have two choices in golang: half-baked vendoring, or a monorepo where you have all of your code in one place.

> Compared to them Go is a very well thought out and elegant language

No, compared to them Go is a young language. There aren't any big projects in Golang yet. We shall see if there ever will be. My theory is that either none will ever happen as some other language will be a better choice, or golang will adapt and all the things people claim are benefits (simplicity, default tooling) will go away, crushed under the reality of complex projects being complex.


>Compared to them Go is a very well thought out and elegant language.

But interestingly, not one I would choose with which to embark on a project that I expected to require more than 2000 lines of code (as a ballpark scale measurement).

Go seems to be an excellent glue language. It's very suitable for making a tool that does /this/ and only /this/, putting it in a docker container and running 500 of it at once, linking other parts of my stack together.

I would never use it to write an RDBMS, because (as someone who has admittedly been following the situation only loosely) the language maintainers don't seem interested in solving other-people problems. They seem to be making a language for them, and I can respect and understand that, but it certainly gives me pause to think about using it for anything big. If I run into a problem that Go can't solve, will it be possible to persuade the language developers to give a shit? Situation unclear.


To each their own. I've been having fun watching the progress of https://github.com/cockroachdb/cockroach && https://www.cockroachlabs.com/blog/cockroachdb-beta-passes-j...


> overt dismissal, with a heavy moralizing tone that you should feel bad for even asking about the issue

The attitude of the Go community cannot be separated from the patronizing tone of the Go maintainers. In fact it stems directly from the people @ Google working on Go. All the bullshit "you don't need that with go"™ comes directly from Pike, Cox and co. It's fine to be opinionated, but just admit these are opinions instead of engaging in the bullshit they engaged in for years when dismissing this or that problem as irrelevant.

Pike said "Go design" is done. Except that a language whose design "is done" is a dead one.


I'm still sitting here shocked that a language where "err" (and the keywords that check around it) are used an order of magnitude more frequently than all other syntax in that language, has achieved this much popularity: https://anvaka.github.io/common-words/#?lang=go


That's what kept me from looking at the language for the last five years. I finally broke down and wrote my first Go program. Yeah, the errors (and lack of generics) are annoying, but it gets enough else right – tooling, runtime speed, compilation speed, type inference, parametric polymorphism (on interface types) – that I enjoyed it anyway. Every language has some annoyances; it's nice when there are only a few.


Not only that, but you can just ignore errors during prototyping, and with a little editor magic you can go back and easily generate at least 85% of the "if err != nil { ... }" statements.

As much as people complain about it, explicitly checking errors like this (and checking them all) is almost essential for production quality enterprise code and any code running on mission critical systems (and to most project managers, all systems are mission critical).

And fwiw I still prefer go's style of error checking to languages that implement massive try/catch blocks.


I think the "embrace failure" supervisor model in BEAM is a superior design for mission-critical systems (such as cell networks, where it originated) while resulting in literally a ton less boilerplate. You just code the "happy path" and done. Any errors are logged and the process instantly restarted by the supervisor, you can follow up on them if they present a problem.

If you were to code just this "happy path" in Go, it would actually be the unhappy path, because you'd have silent errors resulting in indeterminate state (read: horrible bugs to troubleshoot). If you believe programming is mostly about managing all possible state (as I do), then Go's strategy is way too explicit. In Erlang/Elixir/BEAM's case, as soon as state goes off the rails (read: encounters something the programmer didn't anticipate), it's logged and restarted by its supervisor process almost instantly. (and there are of course strategies for repeated restarts)


People keep saying this and I struggle to understand how this represents good design. In any language I can always just eat the error and keep going, or eat the error and restart.

That doesn't fix the error, and it doesn't imply the program will work correctly.


> In any language I can always just eat the error and keep going, or eat the error and restart.

Defer to pmarrek on what they meant, but to me it's an issue of practical programming.

In a choice between "I will tell you what to do about errors" vs "I will assume you only crash and restart on any error", I've found the latter to be far more efficient.

The former generally leads to a rat's nest of never ending error specialization as unexpected or rare stuff bubbles up in UAT or down the road in prod.

Which isn't to say there's a right answer. There's always going to be particular situations where of course you should use one or the other.

But on the whole, as a philosophical default, fail-and-recycle-on-all-errors is a helluva lot easier to spec, code, test, and maintain for me.


I think you need to try it out to see what the big deal is. It's pretty liberating.

The thing is, there will always be the "unknown unknown" bugs, the ones you didn't even think to anticipate, and Erlang/Elixir/BEAM will always win out in a behavior contest on those vs. in Go, because it is built to anticipate any possible error, not just the possibilities you're aware of.


so the argument is that there are (broadly) two classes of errors. ones that happen all the time and ones that happen hardly ever

the erlang strategy is to make it possible to continue from a known good state when the latter happens where chances are it won't happen again. the errors that happen all the time you will encounter early/often enough that you can work out how to handle/fix them

the result of this strategy is you pretty much never write code that handles the unexpected. it turns out that's a lot of code to not have to write


Those supervisors don't always handle the errors the way you want and can't always be implemented without diminishing overall performance. It's a trade-off.


There's nothing stopping you from handling the errors, though. It's just that the ones you didn't anticipate won't typically bring down the system.


It's convenient to write this sort of optimistic code that Erlang encourages, but the error messages are often not very helpful. You'll often end up with a pattern matching failure and stacktrace that isn't very clear compared to a hand-written error message that you would get in Go.


And don't forget simplicity as a core design feature of the language. That alone is worth sacrificing generics for ;-)


Who gets to decide what is simple or not though ?

Coroutines seem like something that is more complex to me than generics.


I've described Go programs as often looking like a listing of things that could go wrong.


In my opinion and experience, that is exactly what programs are


Unless you're writing in Erlang, in which case you program the success path and 'let it crash' in a controlled manner.


You're assuming that it faithfully crashes in all error states as opposed to incorrectly continuing on its merry way.


Have you used erlang? Pattern matching is a big part of it, and pattern match failures always result in errors. So, if you have some function foo() that always returns ok or {error, Reason}, then in the "follow the happy path" style, you'd just have a line reading "ok = foo(),", and then if foo actually returns something else, you get a "crash". There's no assuming anything there, it's just the way the language works.


Yes I have. But I see the erlang blindness still runs strong. Just because you have pattern matching doesn't mean you don't have bugs caused by humans. It can be syntactically correct and also logically not match the problem domain.

For example, what if you screw up and forget to encode that it SHOULDN'T return ok?

You know that many other frameworks also have these safeguards, right? Java server frameworks have been doing it forever. You can code dirty if you want and deal with no issues, or you can be error prone. Hell, you can do the same thing in go. You can catch panics. It's not pretty but you can definitely do it.

I forgot one can only point out strengths of erlang not problems with humans coding in general. all hail erlang.


Logical errors are always an issue, regardless of the language.

As for Erlang's problems, it does have a fair number of warts and special cases, but the biggest of them all is that it's still a niche language. We actually had to dump Erlang for new projects in favor of Go. That may sound ridiculous, but filling positions involving Erlang was next to impossible (even with candidates without any prior knowledge of the language).


Why wouldn't it, if you're pattern-matching correctly? And in any event, it seems far easier to get into that situation in Go (which ignores errors unless you explicitly check for them) than in Erlang (which crashes on every error because restarting is cheap)


My point is... "Pattern matching correctly"... assumes no mistakes on the coder's part. Go back and re-read my comment. Or don't. Downvote me because I dare question your skill in the context of the mighty erlang.


I would never downvote someone simply for disagreeing, and certainly not unless I was 100% sure they were wrong or it was way offtopic. (And you would be surprised how often I admit I was wrong or cede a point!)

You're basically saying "well, it's still subject to logic errors." Well, that's a nonargument, because you can say that about 100% of languages in existence. That is not a criticism you can levy just against BEAM langs. And we were never arguing that BEAM magically takes care of your logic bugs. Just that it seems to handle the "set of all bugs that are not logic bugs" better.

I also apologize for erlang/elixir fans being rabid and insufferable. ;) A lot of us have dealt with many other langs (I spent years on Ruby and other OO langs) and we're enjoying the unexpected (to us) benefits of this new/old world, that's all. Joe Armstrong is basically Einstein AFAIC...


You can't downvote people who reply to you on Hacker News, so that scenario isn't possible.


Sure you can. You only need two accounts. Which a lot of people have for work posting or other reasons. I don't but I know folks who do.


Not in Java, Scala, Clojure etc. it isn't.

You can wrap all your code in an unchecked exception and never again have to worry about errors. It's bad practice obviously but there are many situations in which you don't care what error occurred but that an error itself occurred.


Which is exactly what software engineers should spend most of their time doing.


  <>, err = <>
  if err != nil { return nil, err }
  <>, err = <>
  if err != nil { return nil, err }
Is not what software engineers should spend time on, definitely.

Not automating ubiquitous trivial propagations with at least explicit "rie <expr>" (rie for "return if error") is a complete engineering fail under any philosophy.

Oh, I have an idea. If ", err" part is missing from lvalue, insert that mantra automagically under #pragma ARIE=on. This workaround will give designers some stats to embrace.


I consider that explicitly and manually handling every possible error, and making a conscious decision to bubble it up (return) or to interpret it is a very good use of my time as a programmer.


> Is not what software engineers should spend time on, definitely.

But hey! Your editor can easily boilerplate all that... boilerplate!

I think Go will continue to grow until we finally have metrics that say that coding something in Go might get you fast-running code but will be slower to code and hell to maintain.


It really doesn't have to be that way. See http://fsharpforfunandprofit.com/posts/recipe-part2


hah. It's #1 "err", #2 "if", #3 "return", #4 "nil". :)

That must be because a lot of function calls are followed by:

  if err != nil {
      return err
  }
https://stackoverflow.com/questions/18771569/avoid-checking-...


If it weren't for Goroutines I would not find go very useful. I expect C++ will implement coroutines and/or fibers someday that will be very well designed and more carefully/broadly thought out.



Good point. I prefer ISO standards and the built-in library, but your point is valid. Implementations do exist in C++ today.


Still better than try / except: pass from other languages.


No, it's way worse, because you have to write the error handling multiple times even if it is all the same (and it usually is).

For example, in Java, I can make as many method calls I want inside of a try block, and catch an exception from any one of them in the catch block. In Go, I would need the equivalent of one catch block per method call, in the form of "if err != nil { return err }". Why repeat yourself?

If you want to pass on an error in Go, it's easy to do accidentally -- just omit "if err != nil". Both types of exception handling models enable this kind of behavior. If you are going to do it, you might as well not repeat yourself while you are doing it.


> Why repeat yourself?

Because you can put more meaningful error messages due to the granularity of the checking. This has made debugging things a lot easier at times.


You can, but you should not be forced to do so. I could easily do that in Java if I wanted to, but I also have the flexibility to handle them all in the same way, or pass them back to the caller (which is the right thing to do 99% of the time), with a very small amount of code.


To be honest, moaning about Go idioms is like moaning that Java is object oriented or Lisp has too many parentheses. If you don't like the idioms of a particular language then don't use that language. There's plenty of others to choose from. But moaning that language x should be more like language y is just daft, as it misses the point of why language x decided to do something different in the first place.

I'm not saying Go's error handling is better than Java's. But I've written some relatively large and complicated projects in Go (like the Linux $SHELL and REPL scripting language I'm currently coding this very moment) and error handling really is the least of the problems I've been having. Sure, Go's idioms do get in my way from time to time. But on balance Go's idioms save me more time than it wastes. Which is why I keep coming back to Go despite having used more than a dozen other languages in anger. But that's my workflow and my personal preference. Others will find the reverse to be true and I wouldn't be complaining that their language should be more like Go.

Frankly I think people waste too much time comparing languages. Just try something and if it doesn't work, move on. Don't assume your personal preference is a benchmark for how all languages should be designed as there will be a million other developers whose personal preference will exactly oppose you.


Maybe you haven't used exceptions on the JVM before but you can wrap them so it's possible to still have localised error messages if you want them.

But in most cases it is unnecessary since you always have the line number and type of exception printed out as part of the stack trace.


I remember that in Python, different classes / modules would have different fields in the error object, so it was really difficult to do something meaningful with those errors.

There is also the switch you need to write when catching an error - e.g. for a TCP connection, then DNS resolution, and so on - you have to know every kind of sub-error each time.


Frankly, I've never seen that (try / catch / ignore exception) in any commercial codebase I've worked on, running into tens of millions of lines.

The worst I've seen is people logging the error message but not the exception itself (so losing the stack trace), or causing another exception to be thrown (and losing the stack trace or cause of the original).


15 Million to 17 Million is not an order of magnitude.


The top 4 keywords in the list are what's used frequently in go to handle errors.

  if err != nil {
    return err
  }
So it's more like 7.7 million to 17.6 million (the fifth-ranked "the" is from comments, so it doesn't count)


That is not an order of magnitude either. Except in base 2 :)


Python expats needed a place to go.


I don't understand this comment - Python does what I'd argue to be the "right" thing, which is that operations which can fail raise exceptions that unwind the stack. If you don't care to handle failure in a particularly granular or graceful way (e.g., you're a command-line process, or you're a server process with lots of independent requests), you can have one large catch statement around all of your work, or even just use the implicit one around the entire program provided by the runtime.

Meanwhile, well-written C code does this the "wrong" way; every external call has to be done like

    if (read(fd, buf, len) < 0) {
        hopefully remember to free some things;
        return NULL;
    }
Why do you think explicit error handling is a Pythonic practice?


> Pike said "Go design" is done. Except that a language which design "is done" is a dead one.

Guess C is done for.


Except they updated C in 2011 and they're already working on C2X.

http://www.open-std.org/JTC1/SC22/WG14/www/docs/n2086.htm


I think you're confusing stable with "done". The C language has been getting updates about once every 10 years:

Original (~1970), K&R (~1980), ANSI C (~1990), C99 (~2000), C11 (~2010)

I would not be surprised to see a C22.


To what degree are those being used though? Major projects (e.g. Linux, CPython) are still on ANSI C.


C99 took a while to be widely adopted mainly because some compilers were very slow on the uptake (I'm looking at you, Visual C++) but now I would never consider using anything less than that for a new project. It's almost 20 years old!

Many small quality of life improvements: stdint.h, stdbool.h, inline, restrict, allowing variable declaration within the code, snprintf, variadic macros...

Most of those were accessible through compiler extensions but having them in the standard means that you know it'll work everywhere on any compliant implementation.

Linux definitely isn't conservative when it comes to the language, not only does it use modern features of the language but it even relies on GCC extensions and even sometimes on the compiler's optimizer to produce the correct code: https://unix.stackexchange.com/questions/153788/linux-cannot...

I'm not super familiar with CPython but it also doesn't appear to limit itself to C89, see for instance this random file I opened: https://github.com/python/cpython/blob/master/Python/future....

You can see that `const char *feature` is declared in the middle of some code. It also uses "inline" in several places.


> I'm looking at you Visual C++

They decided to ditch clang/c2 integration work, and are planning to actually fully support C99 and not only what is required by ANSI C++.

https://www.reddit.com/r/cpp/comments/6mqd2e/why_is_msvc_usu...


Any code that declares variables in the middle of a function, // comments, or varargs macros is using C99.


Linux is definitely not on ANSI C, and CPython is on C99 starting from 3.6.


CPython switched to C99.


I think you are confusing getting updates with "design".

Go gets frequent updates as well.


The second half of the "Introduction" section of the blog post explains what "done" meant and why.


The discussion is not about identifying problems. The issue is communicating the problem and explaining the impact. The Go team was wrong about monotonic time but the correct response is not to assign blame. Concrete examples and conversations are a better way to communicate a problem. This is what they are trying to communicate now.


And if you _do_ want to assign blame, it's fine to put it on us. I did as much in the talk:

"Just as we at Google did not at first understand the significance of handling leap second time resets correctly, we did not effectively convey to the broader Go community the significance of handling gradual code migration and repair during large-scale changes."

Our fault, both times.



