I want off Mr. Golang’s Wild Ride (2020) (fasterthanli.me)
428 points by Decabytes on April 28, 2022 | 464 comments



My Anecdotal Experience:

Golang is great for spinning up new services and tools with very little overhead; the language is well designed for the backend, and the lack of <insert programming language feature here> avoids "odd" decisions which other engineers will dislike in the future. If your job is building lots of new things using relatively common building blocks, then Golang looks fantastic!

However, on mature services, engineers often need <insert programming language feature here> because they are dealing with something extremely complex. The author of this blog post digs into the scenario of a package which needs to support different non-*nix operating systems, but the same can be true of an app which has to manage many different kinds of objects (Kubernetes, or any service with many DAOs). Alternatively, there are engineers building all kinds of weird software which may do new things in novel, simplified ways. Golang occasionally makes expressing these types of applications more difficult.

Overall I enjoyed my time writing Go, until I tried to write a statistics-based application for a startup idea back in 2016. The project didn't get very far, and that was partly due to me getting bored having to deal with all of the different numeric data types: at the time, the lack of generics meant you either had to use reflection or implement the same function N times over to support the different numeric data types. This tedium killed my love of the language (I'm sure it's a better experience in 2022).
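
For reference, this is roughly what the Go 1.18 version of that looks like - a minimal sketch, with the Number constraint defined inline purely for illustration:

    package stats

    // Number is an illustrative constraint; one generic function now
    // replaces the N near-identical copies per numeric type.
    type Number interface {
        ~int | ~int32 | ~int64 | ~float32 | ~float64
    }

    // Sum works for Sum([]int{1, 2}) and Sum([]float64{1, 2.5}) alike.
    func Sum[T Number](xs []T) T {
        var total T
        for _, x := range xs {
            total += x
        }
        return total
    }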


The Kubernetes codebase is huge, but in my (limited) experience I really felt like it was delivering on Go’s promise: that I can read any given file and understand what’s happening. At least when I needed to debug Kubernetes issues 4 years ago, I could grep around, dive into a file, and command-click to “go to definition” and quickly build a local understanding of the code around my problem. No spooky action. Everything explicit, verbose but plain spoken.

Yes there is a lot of code, but I would be very surprised to be able to dive into a C++, Java, Python, or TypeScript project of the same scope and be anywhere near as productive.

> partly due to me getting bored having to deal with all of the different numeric data types […]. This tedium killed my love of the language (I'm sure it's a better experience in 2022)

That’s the very tradeoff Go is designed to make. It is not expressive. It is tedious. But its objective is not to make you as an individual fast. It is to make large programming projects cheaper over the long run, and to reduce the variance between the best programmer on a project and the worst.


> It is tedious.

> to reduce the variance between the best programmer on a project and the worst.

In my experience, the only way this can be done, as with trying to level anything, is to bring down the level of the best.

I find this awful.


For small companies, variance can be good. For large companies, avoiding surprises is everything. So they cram in a bunch of processes to make everyone a B player. That’s great if you’re naturally a C player, and can be less work if you’re a B+ player who wants to coast. But it’s hell if you’re an A player who is now working with handcuffs, and getting paid the same as the C player.


Can you consider yourself an A player if you think yourself too good to work with B / C players in a language that, as you say, handcuffs you? I mean it frees you up mentally to think about the bigger problems, beyond the bit of code you're working on right now.

In the end, the person writing the code is just a cog in the machine. You can try and pretend you're an Important cog by being Very Clever and using very smart code, but in the end you're not actually important. If you want to be important, you need to let that code go and move up, towards architecture and company level.


Go has null and has modifiable function parameters.

The first thing is a reintroduction of the billion-dollar mistake [1], which is not acceptable in the 21st century. It was acceptable when Hoare did it because no one knew better then. It is not acceptable now.

[1] https://en.wikipedia.org/wiki/Tony_Hoare#Apologies_and_retra...

The second point means that a predicate on locally constructed and held data which is true before calling a function with that data as a parameter may no longer be true after the call returns.

These two "features" of a language make language hard to use. Instead of thinking about bigger things, you think about whether you may have NULL or not in this variable after that call to that function.

In my not so humble opinion, taking OCaml and bolting type constructors onto it (monads for hand-made channels, software transactional memory, etc., and even more than monads) would have been a better choice than reimplementing Simula on Plan 9, poorly. And I do not like OCaml at all.

As per "unimportant cog" argument, I can only say that some companies recognize(d) talents in different areas. AFAIR, IBM used to have parallel ladders for officing-oriented and engineering people, with comparable salaries on comparable levels. I believe it is a good thing.


> In the end, the person writing the code is just a cog in the machine.

I mean, I hate to break it to you, but everyone is a cog in the machine. The person running the company is a cog in the machine. The person running the country is a cog in the machine.

And, more than that, there's no hierarchy, there's no order. A person may go their entire lives working as a cleaner, looking up at the person running the company, and that person may look up at the pension fund that owns their company, which fund in turn exists to invest the pension of the cleaner. The world is a graph, not a tree.

In the context of the dynamical chaos that we call the world, some people like to take pride in their craft. I think that's OK.


It’s not the people who handcuff, it’s the process.

I will use sales as the example, since the metrics are less ambiguous there. In a startup, without a lot of structure, a great salesperson will find deals that others can’t, and will close many more of them. They will have the autonomy to get things done and are rewarded when they crush their quotas.

In large companies, there are all kinds of Marketing and Sales support programs that lift the average salesman. But a great salesman is told “Only talk to companies on your list” and “All your deals must get approved by the deal desk” and “Don’t sell more than X because it will ruffle feathers downstream”. These processes are good in aggregate but drive top performers nuts.


^^ This!


I guess the correct language depends on the ratio of good/bad developers in this case.


I've found Go's compromises help good programmers too over the long run.

It doesn't lower the variance by slowing down the quick; it does it by shrinking the space of possible solutions. Bad programmers and good programmers wind up making similar "good enough" design choices. You don't wind up with over-engineered frameworks for every bit of logic in your backend.


> In my experience the only way this can be done as with trying to level anything is to bring down the level of the best.

> I find this awful.

It mitigates risk; if "the best" is at a level above the other developers, to the point where a lot of things hinge on this individual, it's a huge company risk. You never want to rely on "the best"; see also https://en.wikipedia.org/wiki/Bus_factor

Go is a language that tells people to put their ego aside. The problem you're solving is already complicated enough, no need to add complexity with a complex language. Dumb and verbose code is better than concise but complex and hard to comprehend code.


Errors are introduced at a constant ratio per line of code that is not syntactically checked by the compiler. The ratio is specific to the developer and the language used.

E.g., you can't make a mistake on a line that contains only a curly bracket, but you can make an (off-by-one) mistake on the line "i += 10;".

The more verbose the code, the more lines of code are not syntactically checked, and the more errors you introduce during development.

Why do I talk about syntax and not types? Because the type systems of Go and some other languages like C (and C#, C++ and some others, including OCaml, for that matter) are not rich enough to express long dependencies, as is done in Rust with its borrow checker. Rust has its own failings, of course. In some type systems [1] it is possible to express environments with long dependencies without resorting to hacking the compiler. But most languages do not have that luxury; the only dependency checking they do is balancing brackets and catching missing variable declarations.

[1] http://blog.sigfpe.com/2009/02/beyond-monads.html

The axiom of (not only) software engineering says that the cost to fix a defect is directly proportional to the time between introduction and discovery of said defect. The funnel of defect filtering (personal review, colleague code review, CI, QA, beta, etc.) is not perfect; it still allows some defects to fall through. The more defects enter the wide end of the funnel, the more come out of the narrow end. And for the defects that pass through, the time between introduction and discovery is longer than for all other defects.

The more verbose the code in some language, the more defects a developer will generate using that language, and the more defects will pass through the defect detection funnel. The bigger the cost to fix them, and the bigger the cost for the software's users to, well, use the software.

Google can afford that, because they can afford an extremely long funnel with prolonged code reviews, big QA teams and long-lasting betas. They can afford it because of the money they have.

Practically everyone else cannot afford that.

Ego is not even a problem when you factor in all the other problems you get with simple or good-enough verbose solutions.


It may be awful to the best programmer. But if you're looking at the team as a whole, well, how many of the best do you have on the team? It's really hard to create large teams where the average programmer on the team is better than the average programmer in the country. If you can bring up the lower half of the team, that may be a net win, even if it brings down the best one on the team.

The effect of that may be that the best programmers leave the team, though...


I used Go (didn’t choose it exactly) at an old job of mine 4 or 5 years ago. I was definitely the most senior and very likely the best developer at the company. Organizationally, many of their backend services were Java-based using Spring Boot, although there were a few Python and NodeJS ones as well. My general observation was:

The people who called themselves Java programmers or Python programmers or JavaScript programmers complained a ton about Go. The people who weren’t committed to any particular language didn’t seem to mind at all.

(I am in the whatever-language-let’s-write-code camp myself)

We weren’t in love with it. I don’t use it for spare time projects (woo Common Lisp with SBCL). The only super positive feedback I really remember is “wow! It’s so easy to just walk through from when the program starts to knowing where all the endpoints are and where you can find the source!” which isn’t so much praise of Go as an indictment of Spring Boot.


If the skill difference among your devs is really high, it may be preferable to have the best programmer leave, or at least change roles (architect, team lead, trainer), as this may increase productivity of the median workers. At least the skill difference should be treated as a risk and properly mitigated.


In the role in my sibling comment, I somewhat filled the role of the developer that went away. Hardcore Java/Spring Boot guy who could wrap his head around these amazingly complex code bases he made that the younger/less experienced devs could not manage on their own. I spent a ton of time mentoring in my role, and hopefully helped undo some of the mental damage the previous guy did :D. Things were running much smoother when I left than they did when I first joined, at least, with teams that could have chosen whatever language they wanted choosing Go voluntarily.


Meh, the best developers sound like whiny brats.

I'm not the best developer but I've been around long enough to know that bringing the bottom half of development up to the plate does more for me in my day job than catering to what magic bullshit the "best" developers are pumping.


What I wonder is whether, by designing a language for happy corporations rather than happy programmers, Go is not shooting itself in the foot, as the now less happy programmers (because corp mandated Go) would move to other corps, creating more cost for the corporation than if it had used a more joy-inducing language.

IOW, Go is probably great if you're quite big. But like most Google created dev tools, the questions we need to ask ourselves is "are they a good fit outside Google?"


I have asked that myself, and the answer seems to be yes. I wrote a tool many years ago in Go. It has saved me at least 100 times the hours I put in to code it, and I put in a couple of weeks at most. The point here is that I created a tool (my application) using another tool (Go), and I care quite a bit more about my tool, its features, usability and so on. The programming language provided a few features, like a single binary, so I can install it on VMs and desktops without downloading the whole internet to build that project. It's been working fine for half a decade or so.

So it does its job, and the saved time is spent somewhere else, like reading interesting books, cooking or watching documentaries.

I can understand that people who sweat over perfect error handling, sum types and whatnot will be eternally unhappy with Go. Hopefully they find whatever suits their taste, considering we are in a new PL boom nowadays.


I don't care about past tense.

If you had to write every day in Go, would you still be happy about it? (I sure wouldn't)


I write Go every day, and have done so for the past two years or so (alongside TypeScript, JS and PHP, the latter two for a 'legacy' application), and I'm happy with the language.

If I were to criticise it, there are a few things that come to mind:

- Struct tags don't feel right; they're string-based annotations, and some libraries design whole DSLs inside them (see the sketch after this list).

- The community seems to push towards using hexagonal or clean design, but unless I'm getting it wrong, this leads to a lot of really tedious conversion / adapter code.

- There aren't many good tools out there (yet?) to make things easier, e.g. database abstraction layers. Writing lower-level SQL and some utilities to help with that is still better, but it's tedious and voluminous (like using raw JDBC and iterating over ResultSets). I've used ent (which required manual mapping between every struct field and every column, unless I got it wrong) and gorm (which breaks down when you nest deeper than one level); I currently have about 40 tables.
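
A minimal sketch of the struct tag point (the json tags are from encoding/json; the validate tag mimics a common third-party convention and is only illustrative):

    package model

    // The tags are plain strings: each library defines its own mini-DSL
    // inside them, and typos are only caught at runtime via reflection,
    // if they are caught at all.
    type User struct {
        ID    int64  `json:"id"`
        Email string `json:"email,omitempty" validate:"required,email"`
    }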

But these are issues that show up, I think, in any larger application.


So the purpose of a programming language is to spark joy in developers? I mean, I get it, but it's a very selfish position to take and it's not in the interest of a business, in the end.

I mean, Java makes me feel dead inside, but I appreciate it for being ubiquitous and reliable, and that a business will be able to find half-decent Java developers any day of the week.


Workers who enjoy their job tend to do better at it.


The time to pick up a local problem domain is independent of the language. The time to pick up a language is, well, also largely independent of the language: I saw a former hardware engineer pick up Haskell in a month.


Java is worse than Go in many regards, and many successful businesses seem to continue using it. Probably the marginal cost of one unhappy Scala magician leaving a company is far lower than the cost of writing all the software in Scala to keep the Scala type magicians happy. Substitute "Scala" with any language that has high joy/freedom.


Tell me you don't know modern Java without telling me you don't know modern Java.


Modern Java is still pretty poor. I don't like coding in Go, but it is better than Java. Peer into the abyss: https://docs.oracle.com/en/java/javase/17/docs/api/java.base...


I hadn't thought of it that way. Trading verbosity for clarity was always COBOL's shtick. I had always thought of Java as this gen's COBOL, but maybe the reflection oriented frameworks in common use undercut that use case and Golang would do it better.


If you need to use Vectors and Quaternions then Go isn't really for you either.

The lack of any ability to do operator overloading or write arithmetic types, along with the lack of parametric polymorphism, means you wind up doing stuff like Q.VectorMult(v) instead of just Q * v.

Somewhere there's one of those "Go Koans" about how languages should allow programmers to easily express their intent, which I think the latter covers much better since nobody writes mathematical texts/articles using method notation.

Meanwhile Go has that whack and almost useless complex type built into the language because Rob Pike pulled it out of the ANSI C book and figured that was all the Engineers at Google would ever need to use, or something. But users can't write types like that because they can't be trusted with that much power I guess.
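
To make the readability point concrete, a minimal sketch with a hypothetical Vec3 type: the mathematical "w = 2(u + v)" has to be spelled as nested method calls.

    package main

    import "fmt"

    type Vec3 struct{ X, Y, Z float64 }

    func (a Vec3) Add(b Vec3) Vec3      { return Vec3{a.X + b.X, a.Y + b.Y, a.Z + b.Z} }
    func (a Vec3) Scale(s float64) Vec3 { return Vec3{a.X * s, a.Y * s, a.Z * s} }

    func main() {
        u, v := Vec3{1, 0, 0}, Vec3{0, 1, 0}
        w := u.Add(v).Scale(2) // vs. "w = 2 * (u + v)" with operator overloading
        fmt.Println(w)         // {2 2 0}
    }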


I kind of like making that a method since matrix products are not commutative.


You can determine that by the ordering of Q * v vs. v * Q though just like on paper or in any book. The method syntax isn't doing anything for comprehension.

(Although it could really use a transpose operator as well and proper row and column vectors, but gamedevs are filthy savages)


> or implement the same function N times over to support the different numeric data types

Did you consider using a preprocessor or suchlike?


Then you wouldn't really be programming in Go. You'd be programming in whatever Frankenstein's monster of a language your preprocessor is using.

Preprocessors are a code smell, and indicate that the language you are using lacks the right abstractions for productive programming.


> and indicate that the language you are using lacks the right abstractions for productive programming.

Indeed, Go did not have generics; in fact, that's explicitly stated in the post I was responding to:

"at the time the lack of generics made it so that you either had to use reflection - or implement the same function N times over to support the different numeric data types"

So, you unthinking prat, what do you recommend he should have done, carried on coding much near-duplicate code by hand or get the computer to do the repetitive job infinitely faster and more safely?

HN is riven with plodders who just spew out memes like "Preprocessors are a code smell" or "correlation is not causation" with a relentlessness that would make Sisyphus say, 'well, you know, at least I'm not him'.


> So, you unthinking prat, what do you recommend he should have done, carried on coding much near-duplicate code by hand or get the computer to do the repetitive job infinitely faster and more safely?

No, my recommendation would be to use a different language that doesn't need a preprocessor.

Use any more insults and I'm just going to flag you.


You don't always get to choose your language, and even then there will be stuff that needs automated cut & paste at a level that the language can't provide (even down to the lexeme level). Which is why I use generated code for my C# work. I use Python rather than a preprocessor, but either would do.

Preprocessors and code generation in general are just tools, no more, no less, and not intrinsically good or bad. There is no code smell from using these. The bad smell comes only from misusing them.


Macros and templates are the proper solution to the problem, not unhygienic hacks.


I use preprocessors for Java. I'm not averse to using them when I have to work in a more verbose language.

And you are right that we often don't have a choice of language to work in.

But they are still a sign that the language you are using is deficient. Preprocessors may ameliorate those problems somewhat, but they come with their own costs and complexities.

For anyone who is in the position of making a language choice, the fact that a given language community relies heavily on such tools should be a reason to avoid that language.


Now this is an answer I can work with!

I completely agree they represent a language deficiency, and that should be rectified within the language, but that's not always practical or even possible. Sometimes you have to fiddle with the language at the lexeme level [edit: at the token level, messing about with single keywords for example], which most (all?) templates/generics won't allow you to do. Also, the input to a code generator can be a heck of a lot more compact and readable than the code it outputs, and that's a win.

Heavily modified example based on my actual code (no animals harmed):

    makeClasses = [
        ("Dog", "chases", "Cat"),
        ("Fox", "eats", "Chicken"),
        ...
    ]
each line of which produces a small class.

Also completely agree they come with their own costs, but sometimes the cost is lower than the cost of not using code generation.
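
For the Go-flavoured version of the same idea, a minimal sketch using text/template (the table, type names and output format are all made up; in practice the generator would write a _gen.go file and be wired up with a //go:generate directive):

    package main

    import (
        "os"
        "text/template"
    )

    // Each row produces a small type plus one method.
    var rows = [][3]string{
        {"Dog", "Chases", "Cat"},
        {"Fox", "Eats", "Chicken"},
    }

    var tmpl = template.Must(template.New("cls").Parse(
        "type {{index . 0}} struct{}\n\nfunc ({{index . 0}}) {{index . 1}}(v {{index . 2}}) {}\n\n"))

    func main() {
        for _, r := range rows {
            if err := tmpl.Execute(os.Stdout, r); err != nil {
                panic(err)
            }
        }
    }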


Seems too many devs use Go for something it is not meant for. Tried doing math calculations in Go - not a good idea. But for backend stuff like REST APIs, web servers, networking, sysadmin and devops tasks, it shines.


I mean, a lot of developers have or look for a golden hammer and coerce their favorite language into doing a task. And since every programming language is Turing complete, it CAN work, but whether it's the most elegant is not guaranteed.

I mean for math, similarly, Java and co wouldn't work very well either.


Java is solidly mediocre at math, like everything else, but it does work. At least if by math you mean numeric/arithmetical programming, not an algebra system. I think I'd prefer Julia though.


Golang is great because it was the first to include excellent tooling in addition to the language (strict compiler, linting, non-customizable gofmt, package manager, good html docs, online playground, etc.)

It’s undeniable that it brought a lot of good ideas that languages like Rust borrowed.

Golang is still undefeated in terms of batteries included. Its standard library is top notch and full featured.

On the other hand, it is more high-level, has a GC, and has made a number of mistakes (no sum types). The biggest trade-off it makes, though: it lacks expressiveness. The new generics will probably help, but the trade-off Golang makes runs deep. It makes writing code harder so that reading is easier.


Common Lisp, Smalltalk, Delphi, C++ Builder, Java, .NET languages...

Plenty of examples preceded Go in what good tooling means.


I’ve used all of those except Delphi. I think Go got a lot right and is in a different (better) ballpark, tooling wise.


Only from the pre-historic UNIX CLI point of view.


I don't believe Go was the first language to include "excellent tooling." Ruby on Rails, for instance, has had excellent tooling since the dawn of time, and I'd argue that Ruby on Rails' and Elixir's tooling are superior to Go's.


Elixir is an FL (functional language), so it is in a category of its own. I’ve done some Ruby and I don’t remember the tooling being exceptional like Golang’s.


What Elixir tooling are you using? I absolutely deplore writing Elixir because I just find the tooling atrocious.


Java and Erlang both have phenomenal tooling, especially when it comes to runtime introspection and debugging.


I'd say Java and the JVM are the most overcomplete language and ecosystem out there when it comes to available tooling.


I've had nothing but negative experiences debugging elixir honestly. In a large project, trying to step debug is just way too slow


This is a big part of it for me. Most of the tools you need for average development, triage, perf analysis, etc. are included out of the box.


What is meant by expressiveness in this context?


Generics, sum types, macros, functional programming stuff, basically things that allow you to do more things


I am both bemused by and empathetic toward those who keep having this realization with Go. It is like an abusive relationship: people stay in it too long before realizing it is not going to get any better.

I took a look early on at Go. I tinkered with it and decided it would not work for me.

It lacked a lot of things that any new language should have, given the lessons learned in language design over the last few decades. It was clearly the result of a lot of NIH syndrome. It was built by Plan 9 guys to solve their problems on their preferred platform. For their purpose, I am sure it works just fine and is very comfortable for them. To expect it to be adapted to other kinds of problems and platforms is just wishful thinking.

For years, I have been hearing that Go will get generics and that it will be easy to add and use. I was working with Java before and after generics and saw what a mess it made to not design it in from the beginning. This made me skeptical that Go with generics would be as good as it could be. Now Go finally has generics and many are not happy. This should not be surprising.

This is why I see it as an abusive relationship. That partner you are so fond of is not going to change. Accept that and stay or move on.


I don’t get the “abusive relationship” metaphor. It just seems like the “abuse” metaphor is an attempt to stir up emotions.

Every language has its own weird way of doing things. You decided Go wouldn’t work for you? Ok, I get it. People who stick with Go are in an “abusive relationship”? Seems like an attempt to stir up a flame war more than anything else.

I’ve never encountered a language that I’m 100% happy with. It’s always been about cataloguing the tradeoffs. I’ve even designed my own domain-specific languages a few times—languages that are laser-focused on solving a small set of problems that I have. It seemed impossible to design a language that would even solve my narrow set of problems in an ideal way. I was always making tradeoffs, from the syntax to the semantics. I’d make syntax for one thing slightly simpler, and then I’d realize that I introduced some ambiguity, so I can make the parser way more complicated, or I can make the grammar way more complicated, or maybe I try again. Over and over again, “simple” changes in small languages turned out to be diabolically complicated.

There are plenty of languages that I’ve personally just stopped working with because I can’t deal, personally, with the design decisions. Java is one. Haskell is one. (And the reason I gave up Haskell is all about finalizers, FFI, and the library ecosystem, it’s not about types or the language itself.) Rust is another, but I check in every once in a while to see if Rust has improved. I’d love to give up on C and C++, but realistically, there are too many existing libraries that I want to work with, and it’s often more fuss working with FFI than to just write C++ in the first place.

And just to speak about Go… there’s a small hobby community I’m in. People build their own tools to get things done. There are tools written in Go, and there are people who complain that the tools are written in Go. They think Go is awful, they hate it. But the complainers (in this particular community) don’t seem to be writing any tools themselves. Instead, we keep getting new tools, written in Go.

The history of language design is the history of people making decisions about how to design their language, and then discovering that the decision had unintended consequences, like, five or ten years later.


It sounds like you enjoy Go. That is great. I am referring to TFA/post which sounds similar to many experiences with Go.

Of course all languages have pros and cons. I am referring to the specific situation where people are expecting a modern language and ecosystem heavily influenced by its user community. Go is not such a language. People should have different expectations.

> The history of language design is the history of people making decisions about how to design their language, and then discovering that the decision had unintended consequences, like, five or ten years later.

I agree completely, which is why it is so surprising that people adopted Go when it seemed to learn very little from history.


> I am referring to TFA/post which sounds similar to many experiences with Go.

Lots of people have bad experiences with Java, C++, Rust, Python, or JavaScript. I've had bad experiences with all of those languages, as well as bad experiences with Go. I don't think you've managed to articulate what makes Go different.

I'm not saying you should use Go. I will say this: it sounds like you have opinions about Go, and I can't tell if you're trying to imply that those opinions are somehow universal or common, or reflect some actual design flaw in Go that you can articulate.

"Many are not happy" about Go generics but that's inevitable, IMO. I don't think there's any possible language design decision that won't make many people unhappy. You might as well be asking for pizza toppings that make everyone happy.

> I agree completely, which is why it is so surprising that people adopted Go when it seemed to learn very little from history.

I can understand why you would think that way, but Go seems to me completely different. When I look at Go, I see a ton of small decisions where it seems like (to me) that the Go language designers have taken some fairly sophisticated lessons from history. Things like:

- Error handling,

- Strings,

- Default garbage collector tuning,

- Generics,

- Dependencies,

- Versioning.

Just to pick an example... Java's checked exceptions are a good idea, but have some unfortunate consequences when Java is used for larger projects. One of the key problems is that when you handle an error, you don't just need to know what the error is, but where the error came from. Some errors can be handled at a distance and some errors must be handled near their origin. Errors in this second category can be made into checked exceptions in Java--at least, that's the idea--but in practice, it's a bit of a mess.

If you pull back from dividing errors into checked / unchecked variants, then you lose the ability for the language to help you distinguish between errors that can be handled locally and errors that can be handled at a distance. IMO, Go's decision to use error returns makes the decision that "local error handling is default", which is a pretty nuanced take on things that I really appreciate. I don't think anyone would have come up with Go's error handling approach without the lessons from Java's error handling approach.

Some people hate it. That doesn't mean that it "didn't learn from history".


This is an extreme intellectualisation of Go's error handling, which is literally just "errors are nullable strings returned by functions, and you have to check for them"[0]. I can't see where Go has learned from anything in PL research or practice, and indeed it's famous for not having learned: errors can't be matched; any hacky matching solution is unhelped by the type system; nor does the type system enforce checking for an error before accessing a value; adding context to errors is an absolute clusterfuck (in the absence of anything that could be called a type system) and therefore you end up with a load of "failed: op failed: encountered error: service could not load foo: error: database error: EAGAIN".

I just can't make sense of your argument. You say Go has learned some lesson from Java's checked exceptions, because ... errors are return values. What does that have to do with checked exceptions? Do you mean to contrast it with exceptions themselves? It certainly seems a complete non sequitur in relation to the (type) checking of exceptions. If your point is the "exceptions vs values" argument, then that's certainly one that's been hashed out at length, but I fail to see how picking one approach - of the two only approaches that any language can pick, both of which countless languages have used - is evidence of some kind of deep thought and originality.


> literally just "errors are nullable strings returned by functions, and you have to check for them"

That’s definitely not how errors work in Go, either in theory or practice.

In Go, I would characterize errors as an open union type. That’s how I think about them. They turn into strings when you log them.

There is the type of error you get from errors.New. This is not the same as a string, because it is compared using pointer equality, not string equality. For example,

    n, err := f.Read(buf)
    if err == io.EOF {
        // ...
    }
This is not a string comparison.

> …errors can't be matched…

You use ==, type assertions, type switches, or errors.Is/As/Unwrap to match errors in Go. Occasionally you will see a helper function (e.g. os.IsNotExist), but I think these functions are only there because they predate the errors.Is/As/Unwrap interface.

> nor does the type system enforce checking for an error before accessing a value

This sounds like some kind of dogmatic analysis not founded in PL theory. PL theory does not say “it is better for the type system to enforce checking”, because PL theory is not prescriptive.

> adding context to errors is an absolute clusterfuck […] "failed: op failed: encountered error: service could not load foo: error: database error: EAGAIN".

The nice thing about that is that you can then just do this:

    if errors.Is(err, syscall.EAGAIN) {
        // handle EAGAIN
    }
Admittedly, producing good errors requires the programmer to care, and most programmers simply don’t (an observation). The error you wrote is not what errors in the applications I work with look like. I wrap errors with context when the context is relevant, unwrap context from errors if the context is implicitly understood.
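
A minimal sketch of that kind of wrapping (loadConfig is a made-up helper; the %w verb is what keeps the underlying cause matchable):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    // loadConfig adds only the context relevant at this layer and wraps
    // the underlying error so callers can still match on it.
    func loadConfig(path string) error {
        if _, err := os.ReadFile(path); err != nil {
            return fmt.Errorf("loading config %q: %w", path, err)
        }
        return nil
    }

    func main() {
        err := loadConfig("/no/such/file")
        fmt.Println(err)                            // loading config "/no/such/file": open /no/such/file: ...
        fmt.Println(errors.Is(err, fs.ErrNotExist)) // true: the wrap preserved the cause
    }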

Equally, you will find that programmers don’t care in other languages, like Java, Haskell, or Ruby. When a programmer doesn’t care about errors in Java, you get unreadable stack traces, devoid of context. When a programmer doesn’t care about errors in Haskell, you get partial functions, which throw exceptions (which must be caught in the IO monad) even though the functions are declared to be pure.

Go’s outcomes here seem pretty good to me.

> You say Go has learned some lesson from Java's checked exceptions, because ... errors are return values. What does that have to do with checked exceptions?

The default location to handle exceptions is at a distance. The default location to handle error return values is where they are returned. Checked exceptions are an attempt to bring exception handling closer to the location of the error, but they had ergonomic problems. The ergonomic problems with Java checked exceptions have been discussed elsewhere.

> …I fail to see how picking one approach - of the two only approaches that any language can pick, both of which countless languages have used - is evidence of some kind of deep thought and originality.

“Originality” is what you want to see in research languages like Haskell. That’s the frontier where new ideas are tested. However, speaking as a longtime Haskell programmer, it can often be damn hard to get work done in Haskell. Same thing with Rust—lots of good ideas, can be hard to get work done.

Go is a more conservative approach. “Conservative” does not mean “better” and nor does it mean “worse”, it’s just another niche for programming languages.


> That’s definitely not how errors work in Go, either in theory or practice.

Sorry, I originally had a footnote after that, which I mistakenly removed: something to the effect that "yes, I realise errors are actually nullable structs implementing an interface which can return a string". Not quite as detailed as your comment, and perhaps incorrect in some respects, but in short: yes I understand errors aren't literally strings in Go, but I wanted to focus on the essential aspect, not the distinction between 'is a string' and 'is an interface type whose only common denominator is the string it returns'.

(FWIW, I wrote Go professionally for several (painful, stagnant) years, so you can safely assume I have at least an intermediate familiarity with the language, and that any simplifications are just that.)

And yes, I'm aware you can do type assertions on errors. That's why I specified "any hacky matching solution [i.e. type assertions] is unhelped by the type system": in other words, you can type-assert on some random types if you know that's what the function can return, but the type system neither (a) tells you what errors the function can return (like Java does with checked exceptions, or Rust - among many others - does with the errors-as-values approach) nor (b) enforces that you have handled all possible error types.

In practice, this is a moot point because most Go code (that I've seen) doesn't really use the type system to structure its error handling anyway. If you look through any file at random in the Kubernetes source - probably the most vaunted example of 'well written Go code in a widely used application' – I can't find a single example of that being done, and very few examples of any error handling tout court besides "if err != nil { return [_,] err }".

> The nice thing about that is that you can then just do this[: "if errors.Is(..."]

That only works on errors created using the `errors` package. That also does runtime reflection on nullable pointers.

> The error you wrote is not what errors in the applications I work with look like.

Yeah, I'm used to the "it's ok if you ensure all the code you run is meticulously handwritten" attitude, from when I worked with Go. Unfortunately I have to use libraries, at which point that argument rather falls apart.

That's why I prefer my programming language to enforce or encourage good habits, rather than simply saying "well it's your fault for being rubbish!" if my, or a dependency's, code is problematic for me.

> “Originality” is what you want to see in research languages like Haskell.

I'm not criticising it for being unoriginal. I'm saying that your reverential comment that "the Go language designers have taken some fairly sophisticated lessons from history" is rather over the top.

I couldn't care less whether it's original or not, but you claimed it was, and I responded to that because it seemed a very odd claim (not because it's terribly important if true).

---

Look, I appreciate the arguments for Go. It's a simple language that avoids all that clever, computer-sciencey, type-theory-y argle-bargle, for simple salt-of-the-earth craftsmen who just want to write Good Honest Application Code in a language where no one cares about that garbage collection stuff because we don't write garbage (and anyway what's a few milliseconds here or there!? what's determinism? http go brr!). I tried to convince myself of that for several years. The truth is that there are plenty of simple languages which aren't designed with obstinate inadequacies of the kind amply illustrated by the OP (though I'd throw in the horrific 'zero value' semantics, along with the bug-inducing, heap-smashing 'nullable pointer as option type' monkey patch that's semi-standard among Go programmers).


I would challenge you to rephrase this without the loaded language, not only because loaded language invites a flame war, but also to see if your argument is compelling without assuming outright that the language is bad. I.e., can you make a persuasive argument that the language is bad rather than simply coming up with myriad different ways of implying that you don't like the language (or perhaps even the people who do like the language).

I think this is a good way to debate/discuss in general; nothing particular to Go or programming languages (although it may not yield as many upvotes).


I won't edit my post. Maybe I did use bad wording. I will contemplate that.

I don't think Go is "bad". The important point to me is the expectation that Go will adapt to its user community.

For example, I am not at all surprised that Go has bad abstractions for files under Windows. I would not expect Pike or Thompson to care about Windows. I would not try to make Go work as a cross platform language.


I'm not asking you to edit your post per se, I was giving feedback that your post is (1) flame-y and (2) not very compelling because it leans more on loaded language than it does on reasoned argument (and there are plenty of reasoned arguments to be made against Go, including, for example, the Windows file abstraction argument in your more recent post).


> I was working with Java before and after generics and saw what a mess it made to not design it in from the beginning.

Counterpoints; was the concept of generics a thing before Java implemented it? That is, could they have known?

Second, a ton of the Java spec and compilers and stuff is dedicated to generics; would it have been less complicated and voluminous if Java had it from the start?

With Go, they took their time to think about generics, just to avoid bloating half the compiler with implementations and workarounds and the like just to support generics, or to not end up with Scala's massive architecture. Their focus remains on a simple language, simple code, and a fast compiler; in many ways it's the antithesis of Scala in that regard.

anyway, just some Thoughts I have on the matter.


> was the concept of generics a thing before Java implemented it? That is, could they have known?

You can literally google "X lang feature" and it gives you a date. ML/Ada have had generics since the 70s.



Everything is a question of tradeoffs. I have to deal with cross-platform Go code on a small project at work. The requirements are modest and the Windows file system code is mostly for developer testing. The production system is Linux. Yes, it's sometimes not pretty, and Go was clearly designed for POSIX-style file systems. We chose Go because it is fairly high level, it gets the job done, and we don't have to worry about what's in the runtime on the box (e.g. library or package dependencies). It suits our needs. The other options we have (e.g. statically compiled, native C/C++, Rust, or more weirdos like Ada) just aren't as fit for the job. I also ran into the same problems using Afero for unit testing. Moral of the story - pick the right tool for the job or abstract it out a layer and make it fit better.


Engineering is the art of compromise.

The compromises I make to work in Go are easily paid off in what Go allows me to do and for the amount of effort it requires. Language wars are silly.


"Language wars are silly" doesn't mean that only negative views are silly, and that we should all be blindly positive about every language. Evaluating our tools properly is a good and desirable thing to do.


> It is a minefield of subtle gotchas that have very real implications

Perfectly describes my 10+ years with the Node ecosystem. Not that it's a bad thing necessarily. I've made it my niche and the knowledge I've accumulated has made for a great career. But still, I understand the narrative.


And the most interesting thing to my mind is how little the subtle gotchas matter.

The key is they're subtle. If a subtly-wrong design takes only 20% of the code of the truly-correct design to express and understand and it works for 99% of the cases, then there's actually a benefit to a lot of users of using that architecture. If you end up in the swamp outside the happy path you can get badly burned, but that's the thing... If most people never see the swamp, the language's simpler approach can win.

To take an example from the article: the author correctly notes that Go's filesystem library may not be as portable as one wants it to be across OS's, and that it doesn't handle paths as byte-strings. The former issue is one many people don't see because they get to operate on a Linux monoculture (Go's wheel-house was web servers, which are often running on Linux). And the handling of file paths as byte strings is irrelevant if other tools in the Linux ecosystem can't handle them correctly either, i.e. "If it doesn't `ls`, it's not a real filename" is a reasonable position to take on that problem.

It sounds like the author lands in the swamp a lot or has some significant anxiety about ending up in the swamp, and a developer in that situation may very well benefit from using a language and library ecosystem that is more technically correct more of the time.


You have no idea what the "swamp" is. What if it's some catastrophic data loss, an RCE vulnerability or causes an individual to be permanently locked out of an important account?

Still don't care because 80/20?


You've made an excellent point, and I think it's worth noting that for half the companies that fail, we never even hear the story of why.

If we see a lot of companies succeeding using Go as critical infrastructure, that might be evidence that the swamp is rarely those things. If we don't... The swamp might be those things.


I've only been doing Node for about a year now, and already I've been bit by gotchas.

Obviously all languages have their warts, and I can compensate. Using TypeScript with it, I've actually been quite productive. Its higher level of expressiveness relative to Java is also a breath of fresh air.

But some languages definitely have more warts than others, and Javascript/Node is on the higher end of that distribution. All things being equal I would much rather work in a language with fewer warts.


Out of curiosity (if you're at liberty to say), what has bit you?

I do Node / TypeScript basically all day these days, and I'm probably too close into the forest anymore; I'm wondering if other people are getting bitten by things that I just roll right past because I'm used to the pain.

The extremely weak (and frankly incorrect) Date handling that's built into JavaScript is the first thing that comes to mind.


Fair question, here's a few:

- object methods cannot be passed as arguments to a function or another method like regular functions can because they lose their 'this' pointer.

- array.sort() is broken. It does the sort in-place and converts the elements to strings before sorting.

- const doesn't support separate declaration and assignment. This makes it harder to conditionally decide what to assign to a variable while still making it const.

- number types.

- the fact that you have to use '===' for the normal understanding of equality

- Date handling could be better

- NPM has a number of problems, including default behavior of updating your dependency versions.


That's a good list, and I was right; I've been at this so long I've forgotten that most of those are 'WAT' to the average user.

Regarding `object methods cannot be passed as arguments to a function or another method like regular functions can because they lose their 'this' pointer.`: yeah, the fact that JS objects are actually kinda weird li'l things is frustrating. There are nowadays two ways to handle that:

- referencing the object in a closure binds the object instance. `(x) => foo.bar(x)` will do the right thing

- there is an explicit bind operator, so you can do `myCallbackTaker(instance.myFunction.bind(instance))`

... but I agree that in a better language, you'd be expected to do neither of those things.


Thanks for the tip about binding the object instance.

Just to share an experience on the other end, I worked with Erlang several years back. While it was a harder language to learn, every new thing I learned about the language gave me the impression that the language designers did the right thing.

Don't get me wrong, I'm having fun with my current Typescript/NodeJS stack at work, but if I was building my own stuff it would absolutely be on the Erlang VM.


This is my take as well. From the article I can see why writing a library that works on anything and with anything would be hard in Golang, but I bet that's not what Golang is mostly used for. If I'm writing the millionth custom warehouse inventory app, I really don't care about hexadecimal filenames; I just want to ship it and get paid, and Go is great for that.


I suspect the author deployed Go code to end-user Windows desktop systems. I’m sure that would traumatize me. I deploy Electron/Node to Windows, but barely use the file system or any OS interaction, and for that I am thankful.


Could you share a few of them?


- Most Node code I encounter that uses “streams” gets something wrong. Usually minor, sometimes major stuff like forgetting errors exist altogether, or to reject a promise wrapper when an error occurs.

- Incorrect use of concurrency stuff like process.nextTick. This function probably doesn’t do what you think it does based on the function name - it won’t “yield to the event loop”.

- for years, Buffer and friends were totally busted; these days Buffer has deprecated insecure APIs and uses an internal object pool to avoid the worst of the performance issues.

- until Node 12, async stack traces didn’t work and needed to be copied manually for understandable errors.

- anecdotally, whenever I review a popular library I often find some issue where the library papers over incorrectness in order to expose a simpler, more appealing API. For example, when I reviewed the popular NPM package execa for executing subprocesses, it put a NodeJS stream based on a Unix pipe into the subprocess's stderr with no way out, even when you request that it connect the parent process's stderr directly. This will break with EPIPE if the subprocess writes too much data. But who cares, we get to see stderr in the exception thrown if the command fails, so that’s a nice choice for adoption.

For a taste of how easy and common it is to get this kind of thing wrong, look at this issue in execa from 2019 - stream error handling leads to unresolved promise that hangs forever. Very hard to blame a programmer for using this API wrong. It is so easy to use wrong, and so hard to use correctly. https://github.com/sindresorhus/execa/issues/350


> Most Node code I encounter that uses “streams” gets something wrong

This is really true. I ported a Node project to Go for this reason, and the streams in Go were miles easier to reason about.


Sorry, what are Go's "streams"? Thank you!


Their reader / writer interfaces, basically. It's not exactly equivalent, but for my use case, it was.
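
Concretely, a minimal sketch of those interfaces in use (the gzip layer is only there to show how Readers and Writers compose):

    package main

    import (
        "compress/gzip"
        "io"
        "os"
        "strings"
    )

    func main() {
        src := strings.NewReader("hello, streams")

        zw := gzip.NewWriter(os.Stdout)
        defer zw.Close() // a real program should check this error too

        // io.Copy pulls from any Reader and pushes to any Writer;
        // errors come back as ordinary return values.
        if _, err := io.Copy(zw, src); err != nil {
            panic(err)
        }
    }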


Thank you!


Awesome response, thanks.

You're right about streams; the way their API was designed just doesn't fit my way of thinking, and I'm always heading back to carefully re-read the spec and examples.

It definitely could've been made much more simple.


I see this point everywhere about Rust's union types and it always kind of irks me:

> The point is, this [Result type] makes it impossible for us to access an invalid/uninitialized/null Metadata. With a Go function, if you ignore the returned error, you still get the result - most probably a null pointer.

It's all about framing. You can just as equally say it is "impossible" to access an invalid Go FileInfo, because you'll get a panic for dereferencing a null pointer. Or you can just as equally say it is "possible" to access an invalid Rust Metadata, just by doing .unwrap(). Everyone knows an unchecked .unwrap() is just bad Rust code, but then again dereferencing a pointer without checking the returned error is just bad Go code.
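
A minimal sketch of that framing with os.Stat: ignoring err compiles fine, and the invalid value only blows up at runtime.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        info, err := os.Stat("/no/such/file")
        _ = err                  // ignoring the error compiles without complaint
        fmt.Println(info.Size()) // panics: info is a nil FileInfo
    }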

Anyways the rest of the article seems like just a criticism of Go's file system API, which seems fair but also seems a little niche given how difficult it is to create a good cross-platform file system API. This particular point irks me though:

    > stat "$(printf "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98")"

    > fmt.Printf("      %s\n", e.Name())
> It... silently prints a wrong version of the path.

What did you want it to do? The author even admits Go strings are just byte slices, not UTF-8, and then passes a non-UTF-8 string to a function that expects UTF-8. If there's a chance the file paths your program works with might not be UTF-8, then you should validate them. I think moving the complexity of UTF-8-ness out of the type system was a necessary evil.


The difference is what the language makes easy to do and how it signals to you you're about to do something dangerous.

If you call `.unwrap()`, that's a big yellow flag that you're going to be taking the gloves off and maybe touching something radioactive. Go has the maybe-radioactive thing sitting right there; safely touching it and unsafely touching it look exactly the same.

I generally enjoy using Go, but this is one of the pieces of the language design that I was surprised Go went with; we've known as an industry for decades that including bare null / nil / undefined / whatever we want to call it without type-system assistance is leaving a bare third rail lying around.


Rust's approach is definitely safer, but my point is that the concerns are overblown. The Go compiler raises an error if a variable (error) goes unused, and just ignoring this error by naming it "_" is obviously dangerous. Yes, Rust makes it easier to never ignore an error, but I don't think I've ever accidentally ignored an error that I shouldn't have in Go.


> Go compiler raise an error if a variable (error) goes unused

It doesn't though. It's not a warning or error to not use the return value of a function that only returns an error, for instance (https://go.dev/play/p/se6-zHHVezH).

There are static error checking tools you can use like https://github.com/kisielk/errcheck to work around this, but most people don't use them.

I've run into a lack of Go error checking many times. Many times it's just the trivial case, where the compiler doesn't warn about not checking the result of an error-returning function.

But often it'll be subtler, and the result of Go's API design. One example is its file writing API, which requires you to close the file and check its error to be correct. Many times people will just `defer file.Close()`, but that isn't good enough - you're ignoring the error there.

Worse still is e.g: writing to a file through a bufio.Writer. To be correct, you need to remember to flush the writer, check that error, then close the file and check that error. There's no type-level support to make sure you do that.
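
A minimal sketch of the correct shape of that dance (writeReport is a made-up helper; the point is that nothing in the types forces either the Flush or the Close check):

    package main

    import (
        "bufio"
        "os"
    )

    func writeReport(path string, data []byte) (err error) {
        f, err := os.Create(path)
        if err != nil {
            return err
        }
        defer func() {
            if cerr := f.Close(); err == nil {
                err = cerr // surface the Close error instead of silently dropping it
            }
        }()

        w := bufio.NewWriter(f)
        if _, err := w.Write(data); err != nil {
            return err
        }
        return w.Flush() // easy to forget, and the compiler won't remind you
    }

    func main() {
        if err := writeReport("report.txt", []byte("hello\n")); err != nil {
            panic(err)
        }
    }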


> It doesn't though. It's not a warning or error to not use the return value of a function that only returns an error, for instance (https://go.dev/play/p/se6-zHHVezH).

Another issue, which is as big or bigger, is that Go only tracks variables; it doesn't track reads and writes (unlike... well, Rust, for starters).

So the compiler will also be perfectly happy if you call two erroring functions, check the first's error, but then reassign the second's error to the same variable (say, err, because it's always the variable for the error) and... completely forget to check it: https://go.dev/play/p/GJiovZwvHqj

I think errcheck also checks for it, but as you say it's not part of the language and its use is not ubiquitous.
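
Spelled out inline, the trap looks something like this (strconv.Atoi just stands in for any two fallible calls):

    package main

    import (
        "fmt"
        "strconv"
    )

    func sum(a, b string) (int, error) {
        x, err := strconv.Atoi(a)
        if err != nil {
            return 0, err
        }
        y, err := strconv.Atoi(b) // err is reassigned here...
        return x + y, nil         // ...and never checked; the compiler is satisfied
    }

    func main() {
        fmt.Println(sum("1", "oops")) // 1 <nil>: the failure silently disappears
    }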


Rust will allow you to ignore a Result as well, to be fair, though with a compile time warning. I don't think I know any language that'll throw a compiler error at you if you ignore the return value of a function that indicates success or failure.


Languages with "linear types" would, but there aren't any mainstream ones.

(And, to be extra clear, it only lets you ignore a Result if you don't care about the Ok value, if you want to use it, it does not.)


Golang gives you freedom.

Rust gives you seat belts.

It depends on you what you value more.


I don't think Go gives you freedom, actually. There aren't any good standard library functions for dealing with badly encoded unicode for file names, for example.

Instead, Go provides what the language designers considered solutions to most problems. Windows not having Unix permissions? Just fake a bunch of them. Path not valid Unicode? Probably not a problem. As long as you agree with the way the Go designers think, that'll save you tons of work.

Just don't use Go in situations where those solutions might not work out, like when you're iterating over arbitrary files instead of the files you've created yourself in Go code.


The counterpoint is "your filenames shouldn't be badly-encoded unicode." In other words, "If `ls` can't render it, it's a bad filename. Rename the file."

(This does mean that Go is constraining the set of problems it's easy to solve with it. But that's the nature of programming in general... We decide what problems need to be easier to solve at the expense of putting some problems outside the "sweet spot" and requiring more work to solve them).


If `ls` can't render a filename that is legal under the POSIX specs, there are two possibilities:

- `ls` is wrong and should be fixed.

- The specs are wrong and should not allow filenames to be arbitrary bytesequences.

The third option ("The user is wrong even though they did exactly what was in the spec") is just unsatisfactory because it self-contradicts.


Specs are a three-edged sword: the spec, the intent, and the implementation.

And "The user is wrong even though they did exactly what was in the spec" is pretty much the rule, not the exception.


Yes nothing says "the language gives you freedom" like an unused import being a non-bypassable compilation error.


I would argue Golang is the most restrictive because it doesn't have an escape hatch from its language features.

Rust will get out of your way if you really want it to. It just gives you a seat belt because you are on the highway (writing systems code) but you are free to take it off.


I would highly prefer my car to have seatbelts over "simplicity" and "freedom" if I plan to go over 30 mph with it.


Personally I like my programming languages to have warnings instead of errors for something like an unused import.

To avoid the car analogy, let's use construction: a building under construction should be allowed to keep its scaffolding up while it's being built.

But maybe you should still clean the floors? Uh oh, it's getting away from me already.


> If you call `.unwrap()`, that's a big yellow flag that you're going to be taking the gloves off and maybe touching something radioactive.

Sure. And, in Go, if you write

    v, _ := f(...)
or

    v, err := f(...)
    // use v without checking err
that is an equivalently large yellow (red) flag.


By extension of that logic, all dynamic languages are as strict as statically typed ones, since they will crash on the first abuse.

The benefit is in capturing things in type system so you can see what’s going on and you can’t ignore it. Runtime vs compile time debugging.

Did I get you right?


`Result<T, E>` is just one possible use of enumerations ("union" types as you're calling them). The beauty is that you can make illegal states unrepresentable.


I find it's often useful to analyze the extreme endpoints of something to get a feel for an issue.

Making a language too simple makes using it difficult. Too much of the inherent complexity of real problems ends up needing to be written by the users, and re-written every time (or results in massive dependency trees). For the extremes of simplicity, the Binary Lambda Calculus[1], Brainfuck[2], and similar "Turing tarpits" show that the simplest of languages are extremely difficult to use.

Making a language too complex makes using it difficult. When there are lots of ways to do any given thing, you need to know all the ways to be able to read code "in the wild". You end up with languages like C++ that are really 3 or more nice languages standing on each other's shoulders in a trench coat. Or Perl, where "There Is More Than One Way To Do It" makes it into a write-only language.

Go errs towards simplicity, and IMO goes too far. Rust gets things reasonably towards the middle, but there are unfortunately a few areas where "There Is More Than One Way To Do It" is rearing its ugly head. The edition system helps to keep this in check: while the compiler will always compile older code, and you can link older code to newer, a new edition can deprecate old syntax/functions/etc for files written in that edition.

[1] https://tromp.github.io/cl/Binary_lambda_calculus.html

[2] https://en.wikipedia.org/wiki/Brainfuck


For an angry rant against the language in general, this seem like arbitrary minor critiques, largely involving trade offs where there is no right answer but the author prefers the other one and refuses to acknowledge the trade offs involved.

My opinion of go did not change after reading this.


I miss Perl. It has everything I like in a language. Mystic runes that do really complex things with just 1 extra character of code. Switch between procedural, OO, and functional programming anywhere. Rewrite the language itself if you feel like it. C/ASM extensions for speed. Reusable, inheritable, extendable modules rather than quirkily-named modules that nobody can build on top of. Extremely thorough warnings, errors, debugging, and documentation. Hybrid and duck typing. First-class regex support. One-liner conveniences.

But an entire generation never really learned the language, and subsequently churned out dog vomit that looked like Perl syntax, so the whole language became a pariah. But I have at least 10x the productivity with Perl compared to any other language.


What do you think of Raku?


I've been using Go for a personal project for a few months now, and it didn't take very long for me to run into this:

> It is a minefield of subtle gotchas that have very real implications

The generics problem isn't solved by the introduction of generics either. What they have introduced feels limiting, and it will solve none of the problems I have dealt with through large amounts of code replication to avoid having to use reflection.
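
For readers who haven't seen what did land in 1.18, a minimal sketch of the numeric-duplication case (the Number constraint below is hand-rolled, not a standard library type):

    package main

    import "fmt"

    // Number is a hand-rolled constraint covering a few numeric types.
    type Number interface {
        ~int | ~int64 | ~float64
    }

    // Sum replaces the per-type copies (SumInt, SumFloat64, ...) that
    // pre-1.18 code needed, or the reflection-based alternative.
    func Sum[T Number](xs []T) T {
        var total T
        for _, x := range xs {
            total += x
        }
        return total
    }

    func main() {
        fmt.Println(Sum([]int{1, 2, 3}))           // 6
        fmt.Println(Sum([]float64{1.5, 2.5, 3.0})) // 7
    }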

I like Go because it provides a good web server standard library; besides that, for my next web project I'm going back to using C++ or Erlang.


I've been learning go over the past 3 weeks. This made me kinda sad in a way. I have similar (and much less angry) complaints with Go as a beginner.

My main gripes are the lack of proper error handling (try/catch, maybe) and the lack of default parameters. Anonymous functions without parameters when passed as arguments to other functions would be a nice-to-have too.

There are some instances where I feel the makers of Go don't want to expose the internals because that would be "unnecessary syntax".

Want default parameters? Go doesn't have those. So how does make() take optional arguments? It's a special language construct.
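
A minimal sketch of that asymmetry: make takes an optional capacity argument, while a user-defined function has to fake defaults (the newBuffer helper below is hypothetical):

    package main

    import "fmt"

    // newBuffer fakes a default capacity via a variadic parameter.
    func newBuffer(capacity ...int) []byte {
        c := 64 // assumed default
        if len(capacity) > 0 {
            c = capacity[0]
        }
        return make([]byte, 0, c)
    }

    func main() {
        a := make([]int, 0)      // length only
        b := make([]int, 0, 128) // length plus optional capacity

        fmt.Println(len(a), cap(a))                        // 0 0
        fmt.Println(len(b), cap(b))                        // 0 128
        fmt.Println(cap(newBuffer()), cap(newBuffer(256))) // 64 256
    }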

Obviously I'm not going to stop learning as I think it's the easiest language to make small/standalone binaries in and, I love the syntax, the native tooling, the performance, the easy concurrency, etc. It's a worthy tool.

But I really hope some of obvious language quirks get acknowledged and fixed by the Go community.

And I really hope I don't regret it.

I haven't really learnt it in depth, but it has been easier so far than any other language I've tried learning.


Why is this flagged?

I read the first page or so. I don't think I'm going to read it to the end, but it doesn't seem something that should be removed from the HN frontpage.


It was originally posted without the (2020) disclosure, having been posted at least twice in past years. Doing so looks like "karma-farming".


I can definitely tell the author put a lot of time and energy into this article and is generally more intelligent than I am so I'm absolutely not trying to be difficult here, but I.. how to put it.. well, disagree. Kind of. It's complex.

You are raising valid engineering concerns. You rightly point out that the handling of permissions on Windows is subpar and Go really sucks balls when it comes to handling files named '\275\262=\274 ⌘'. I mean, it's hard to disagree here and I don't.

It's just hard to shake the feeling these things, in practice, just. do. not. matter. These are issues you solve once with an as-ugly-as-needed solution and stow away in a package with a clean interface. I agree, it's not pretty and it would be awesome if the language had handled this better by default, but it just doesn't. Handling of files called '\275\262=\274 ⌘' is not going to convert me to Rust, which, while generally awesome, also has its own, different, set of major issues. Everything is a trade off and Go just sometimes wins, even if it also sometimes sucks. The point is that the stuff it sucks at is usually not important for the domain it's meant to shine in. I'm sorry, handling files on Windows.. I don't know what to say. "I'm sorry" is all I can think of.


The author fundamentally misunderstands language design. He picks an arbitrary design constraint, in this case correctness, and argues that any language that does not provide 100% correctness is bad. He uses Rust for his examples, a language that has correctness as one of its top design goals, and contrasts it with Go, for which correctness is not that important. So of course Rust will come out on top when the only metric you care about is correctness.

As usual with such one-sided rants, the downsides to the supposed clearly right alternative are omitted. How long would it take for me to learn Rust + write a given program vs Go? How long does it take to model problems 100% correctly vs merely well-enough?

> It [Go] constantly lies about how complicated real-world systems are, and optimize for the 90% case, ignoring correctness.

Yes, exactly. Optimizing for the 90% case, not simplicity, is the primary design goal of Go. In other words, pragmatism or the 80/20 rule. Go attempts to provide 80% of the benefit for only 20% of the cost.

This is pervasive throughout the language:

- the author's examples,

- a GC,

- merely good-enough error handling.

The sweet spot Go attempts to strike between speed of coding, correctness, safety, performance, and mental overhead is the reason it's so polarizing. You may like this approach or not, but let's not pretend that Go is all bad or that say Rust is some perfect holy grail that will save us all.


What an extremely convenient template to dismiss any nuanced argument against "worse is better". You even get to question my credentials a couple times! (I apparently pick metrics that are convenient to my argument, and fundamentally misunderstand programming language design).

Even if I accept the premise that "I'm challenging Go on things it doesn't promise to deliver" (which is disingenuous to begin with — correctness underpins /everything/, it's not a hobby), I can't help but notice you carefully use the word "attempt" when referring to the promises Go /does/ make.

One of the things I'm saying is that Go does not deliver on those. It does not deliver on "speed of coding", precisely because everyone, even seasoned Go developers, keep hitting its many design pitfalls.

It does not lower mental overhead, because it prevents you from building abstractions, and pushes complexity out of the language and into your head.

There's a lot to be said about safety and performance, which gets bleaker real quick once CGo enters the picture (hence why the Go team likes to remind folks that CGo is not Go).

But say we disagree there and you truly find it a breeze to write large amounts of Go code: as soon as you deploy something to production and others start to rely on it: you don't get to choose not to care about "correctness". If you don't, you're just pushing the problem onto someone else.

You may be pushing the problem onto ops people, other devs, or your customers, but it does land on someone. And nowadays that someone is often me: my frustration is fueled by years of real world use, I do not, like you seem to imply, enjoy thinking about these things in the abstract, just for the fun of it.

Of course, you can choose to ignore that too, and that's fine! But let's please drop the pretense that this kind of response is anything other than "I refuse to think about this".


>because it prevents you from building abstractions,

Replying to just this part of your comment, but building abstractions can be as much a source of new complexity and cognitive overhead as it can reduce them. I think Go is wise to be on the side of less abstraction, because most of the abstractions it makes hard end up hurting more than they help.

>What an extremely convenient template to dismiss any nuanced argument against "worse is better". You even get to question my credentials a couple times! (I apparently pick metrics that are convenient to my argument, and fundamentally misunderstand programming language design).

I think if you want to make an argument that Go has the wrong values, you should make that argument. But your essay is not that argument. In your essay the claim that the values are wrong is implicit and unexamined, and you spend most of the words on demonstrating that Go has different values than you.

It would be more persuasive if you were to examine why Go has adopted these values and explicitly explain why you think they are mistaken. Instead it comes off as though you simply don't understand Go.


There's nothing stopping you from building bad abstractions in Go, and I find it pretty common. Here's an example:

    Debug(msg string, keyvals ...interface{})

The keyvals interface assumes you are passing in both a key and a value, but if you don't, it doesn't work correctly; in fact, in our code I think it blows up.
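
A minimal sketch of that pitfall, with Debug as a hypothetical stand-in for the logging interface above:

    package main

    import "fmt"

    // Debug is a hypothetical stand-in for the interface quoted above.
    func Debug(msg string, keyvals ...interface{}) {
        if len(keyvals)%2 != 0 {
            // Nothing in the type system prevented this call; the best we
            // can do is patch it up at runtime (or panic, as some libraries do).
            keyvals = append(keyvals, "(MISSING)")
        }
        fmt.Print(msg)
        for i := 0; i < len(keyvals); i += 2 {
            fmt.Printf(" %v=%v", keyvals[i], keyvals[i+1])
        }
        fmt.Println()
    }

    func main() {
        Debug("user logged in", "id", 42) // fine: key/value pair
        Debug("user logged in", "id")     // compiles, but the pairing is broken
    }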


Definitely. Go is not the last word in the conversation on programming language design, and it hasn’t solved the problem of prohibiting harmful abstractions. But I do think the conservativeness around abstraction is an improvement, at least culturally, over the way many language communities view software development.


> because most of the abstractions it makes hard end up hurting more than they help.

You got any proof for that?


No proof, you're free to disagree. Just my experience.


> What an extremely convenient template to dismiss any nuanced argument against "worse is better".

Nuance is exactly what I'm arguing for, there's none in the article.

> You even get to question my credentials a couple times! (I apparently pick metrics that are convenient to my argument, and fundamentally misunderstand programming language design).

You're right, I apologize. I usually try hard to never directly address whoever I'm responding to on HN, but figured it was fine since it's the linked article itself. It's clear to me now that that reasoning makes no sense; I should have taken the time to reword it.

> I can't help but notice you carefully use the word "attempt" when referring to the promises Go /does/ make.

It's an acknowledgment that there's space to discuss Go failing to adhere to its design principles. I'd love to read an article on small changes to Go that would provide immense benefits. However, I don't see much value in a one-sided rant about the less-valued principles not being valued highly.

> correctness underpins /everything/, it's not a hobby

> [...]

> you don't get to choose not to care about "correctness". If you don't, you're just pushing the problem onto someone else.

Correctness isn't binary. It's perfectly valid to trade off correctness for gains elsewhere.

The classic example is companies switching from dynamic languages like Ruby to Go/Java once they hit scaling issues. Does that mean Ruby is a bad language and the company should have used Go/Java from the start? No. Using Ruby gave them the development velocity that let them get to scale in the first place.

If you do need strong correctness guarantees, by all means stay away from Go and use Rust/Ada/etc. Just remember that it's not free, something had to be traded off to achieve it.


> It's perfectly valid to trade off correctness for gains elsewhere.

Agreed, but no-one is disputing that, right? The problem is that very often saying correctness was given up in order to gain velocity and simplicity is simply a lie. Incorrectness can easily lead to slow development and tons of complexity.

It’s only fun when the trade-off is actually a trade-off.


Agreed. I just want to make clear that Rust/Ada/etc. don't give correctness guarantees! Their compilers just enforce more. It's not all black and white.

Rust is promoted for its correctness, but the correctness-related bugs Rust prevents are an extremely low fraction of real-world bugs (compared to managed-memory languages). I mean, how many type-system-related bugs are there in real-world projects in Java, C# or Go?

Rust is promoted with "fearless concurrency". Does it prevent your code from deadlocks, which are among the most tricky bugs?


I mean, just yesterday while I was running the JetBrains Rider install workflow, I hit a NullReferenceException (or whatever it's called in Java). That is a type-system related bug which arises only because Java's type system trivially represents illegal state.


Null pointer bugs are prevented by idiomatic use of Option in Rust, so that’s at least one case where Rust’s focus on correctness prevents memory-safety bugs.


Yet many Rust code bases are littered with .unwrap(), which undoes that benefit.


A great deal of the benefit of Option comes from where it isn't used - you know most things can't be null and simply don't have to think about the possibility.

unwrap() bugs are annoying, yes, but at the end of the day they're just a fancy assert(), helpfully spelled out in the code and confined to bits of it where the value is Option<T>. This is a much better situation than a language where nulls can in principle turn up anywhere.


But it's explicit; you can easily tell which lines of code can panic.


> Just remember that it's not free, something had to be traded off to achieve it.

I don't think that is clear at all in the general case. There's no reason why we should believe that "something had to be traded off to achieve it". In some cases probably that's what happened, maybe even explicitly, but in other cases there are just some designs that are better than others.


>> even seasoned Go developers, keep hitting its many design pitfalls

I've been a professional dev for 14 years. I started with Python, C# (WPF and ASP.NET), Qt, Java (Spring), Kotlin, frontend with React, Angular, Vue, ... even Rust, so I've seen a lot.

Honestly, in the last 2 years using Go for backend services and some CLIs I never hit any of those design pitfalls you're writing about. Every language has its quirks. Even Rust. But Go is not decisively better or worse than the others. For example, Kotlin is easier and more fun to write, but writing code is just a small part of a project. The whole spectrum needs to be considered. That's where Go shines.


> It does not deliver on "speed of coding", precisely because everyone, even seasoned Go developers, keep hitting its many design pitfalls.

Well that’s a subjective statement. Are you basing that on data or just your own feeling?

I love writing Go and I’m way faster in Go than Rust. If I need more acceleration, of course Rust is the place to be, but if I just need to hammer out an API, Go is going to be much easier to do that in. Also my code will be far more consumable than Rust because it’s way less complex and has enforced opinions.

> If you don't, you're just pushing the problem onto someone else.

Really? Maybe that happens occasionally but I’ve never had that happen. My Go services have been incredibly stable and easy to develop on.

With Rust you are pushing onto others code that’s very hard to understand because it’s a kitchen sink like C++. Go believes the community is greater than the individual, so it’s opinionated.

In general, I found this post incredibly inflammatory. Different tools are good at different things. Rust isn’t a panacea, neither is Go, but yes Go hits a sweet spot of lower level with rapid development. Why is that controversial to you?


As someone who doesn't work actively in either Go or Rust, I perceive Rust to be readable, and I feel more comfortable writing in Rust due to how strict it is in terms of correctness.

If I have to write Go, I need to have an IDE set up well. And I had to do that a couple of times to tweak a simple REST API. Rust is surprisingly faster for writing more complex things, down to details such as when I want strings to be obfuscated in the resulting binary.

Reading is also so much easier in Rust, surprisingly, despite the complex syntax. It is easier to swallow complex concepts such as how an Ethernet packet is shaped, or how an entity component system stores data and executes systems, than to read a simple API in Go.

But again, this is just me.

> I love writing Go and I’m way faster in Go than Rust.

The problem is, not every Go developer is you.


> The problem is, not every Go developer is you.

This is a pretty common sentiment: the borrow checker is hard, and you will be slowed down by it.


The generalization of anecdotal experience is a flaw in any argumentation. We see it every day everywhere.

You *assume* that Go sooner or later creates problems everywhere it runs. I have another anecdotal data point: at my workplace the most troublesome services are the ones written in Java. The Go services are much easier to manage and run much more reliably. Actually, I can't remember any issues with Go services.

So, you need to take more data points into account to come to a justified verdict. Your generalization that Go systematically leads to huge problems is far from reality. There's already a huge number of services written in Go. Networking stuff, backend APIs, CLIs, databases, ... do they all fall apart? No, just the opposite: they are some of the most successful projects of recent years.

Your analysis of the (tiny) problems is correct. Your conclusion that people shouldn't use Go is very wrong.


> correctness underpins /everything/

Correctness is a spectrum, not a boolean. Failures of correctness are, equally, a spectrum of risk. And risk is measured primarily by impact on business goals. Consequently an incorrect program that satisfies its business-level responsibilities is definitionally better than a correct program which does not.


I think it's a matter of different goals. What you describe is pains of a very senior developer who has worked on a lot of very subtle bugs and never questioned their desire to deliver the best software possible.

Parent comment, however, talks about a situation where you have to hire dozens (if not hundreds) of $10/hour developers to ship software that is just good enough. I mean folks who may be great people and deserving overall respect, but who can't actually solve FizzBuzz. For such a situation, this:

> You may be pushing the problem onto ops people, other devs, or your customers

is quite alright and much more preferable than paying the market rate for developers who are actually capable of learning Rust.


I'm not fond of this framing, which suggests that go is for bad programmers and rust is for good programmers. If Go makes it easier for bad programmers to write decent code, it also makes it easier for good programmers.

Good programmers aren't good because they're insanely clever and whip up brilliant combinations of abstractions. They're good because they write maintainable, understandable, simple, and effective code.

Or: they write code that understands the problem and solves it, not code that is stylish.

Go helps good devs do this.


> I'm not fond of this framing, which suggests that go is for bad programmers and rust is for good programmers.

Right, go is for inexperienced programmers.

> The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software.

Or so Rob Pike thinks. However, as I get more and more experienced, the shortcomings of go become too much to bear.


> go is for inexperienced programmers.

I am an extraordinarily experienced programmer, and I vastly prefer Go to Rust.


That doesn’t really have anything to do with a statement about Rob Pike’s design intent.


Before Rust started being used in crypto, Rust salaries weren't great, probably targeting people happy to get less money to use their favourite language.


> What an extremely convenient template to dismiss any nuanced argument against "worse is better".

You say that like it's a bad thing.

If there is an easy template to dismiss your argument, it means you did a bad job arguing. Part of making a good argument is anticipating likely objections and addressing them in your argument.


Note that a "dismissal" is not the same as an actual rebuttal. A template can just as easily generate a tricky-to-dissect fallacy or bad-faith argument as actual logic.


I disagree: if it's a template that is commonly used, someone only has to dissect it once.

Regardless, what exactly is tricky to dissect or fallacious about this template? The rebuttal is basically:

* go is trying to optimize for different things

* go makes no secret that it's trying to optimize for different things

* some people like the things that go optimizes for (and some people don't).

* [with an implied] if you intentionally used a tool that made choices you don't like, and it's not a secret they made those choices, is it really the tool's fault or your fault? It's like ordering ice cream and being mad the ice cream is cold.

If the original argument were phrased more as "the types of design choices go makes are bad", it would probably be more palatable (but also less interesting, because whether worse really is better is a flame war that has been going on for decades).


You're too optimistic about defusing tricky BS. And I was deliberately avoiding making a claim about the article, only about the idea that the existence of a template to "dismiss" your argument implies anything about your argument. It does not.


I think there are 2 senses of the word argument here. After all, how good your argument is (i.e. how good a job you do at convincing people of your view) has no bearing on how good your argument is (i.e. how true it is).

When I claim that failing to address a criticism so common it has a template form, and thus should easily have been anticipated, makes for a bad argument, I mean in the first sense, not the second. To be clear, by a template I mean a template response that people believe in good faith, like what was used in this discussion. I don't mean a template for making an ad hominem attack or something bad-faith along those lines. But ultimately, if there is some "tricky bullshit" that is commonly believed in good faith by the audience, then yes, an argument that doesn't defuse it is a bad argument.


Can you see the value in being able to create software more quickly, at the expense of the software's stability?


Yes, but that is definitely not achieved by checking for errors every second line.


I think checking for errors on every line makes my Go programs easier and faster to produce at a higher level of quality.


Some languages achieve error checking by type transforms and pattern matching, which are much more effective and efficient than error checks after every function call.


Sure, but is writing the error check by hand what creates the benefit? Or is it having the error check?


It is thinking about what to do for each operation that can fail. This thought is what separates the applications that recover when the network glitch is over vs. the applications that need to be restarted.


The value here is the few seconds of human attention given to the error at hand. The thought is irreplaceable. Like others mention, 90% of the time it ends up as `if err != nil { return nil, err }`, but thinking about it will, in my experience, make the system more robust to failure. At the very least, for your process you will decide on a goal of carrying on in the face of failures and repairing when things work again vs. just bailing out and letting something else worry about it.
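
A minimal sketch of the kind of decision that per-call moment buys you, with hypothetical fetch/isTemporary helpers standing in for real network code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    var errTemporary = errors.New("temporary network glitch")

    // fetch pretends the first couple of attempts hit a transient glitch.
    func fetch(attempt int) error {
        if attempt < 2 {
            return errTemporary
        }
        return nil
    }

    func isTemporary(err error) bool { return errors.Is(err, errTemporary) }

    func fetchWithRetry() error {
        for attempt := 0; attempt < 5; attempt++ {
            err := fetch(attempt)
            if err == nil {
                return nil
            }
            if !isTemporary(err) {
                return err // bail out and let something else worry about it
            }
            time.Sleep(100 * time.Millisecond) // carry on; the glitch may pass
        }
        return errors.New("gave up after retries")
    }

    func main() {
        fmt.Println(fetchWithRetry()) // <nil>
    }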


What if `return nil, err` was the sane choice at the time of writing, but there are changes downstream that mandate a new error-handling mode there? Also, there are plenty of “writing this code at 3 AM” moments where I would question the validity of such thoughts.


No, the compiler needs to save us from ourselves. What if we get a runtime nil? Oh God, the horror!

I understand correctness; I do not understand why we're making languages a nanny state where, if we're not completely focused on correctness and zero-copy interfaces, a language is not worth using.

I don't want to always care about memory allocation strategies when I want to have some fun on a project. I don't understand why a vocal group tends to dismiss some languages because they're not always doing the PERFECT thing.


I used to prototype in PHP and nowadays I prototype in JS.

I also ship JS in production and it's a painful experience compared to Rust or Haskell.


I think describing this article as a one sided rant isn't really fair. The author shows that Go makes a few design decisions that make it easy to shoot yourself in the foot. This is generally a bad idea. Rust may be the summum of correctness. In my view that makes it an ideal candidate for comparison. It shows the reader what a better solution would look like. And that this can work in practice. Does that mean you shouldn't write code in Go? No of course not. But I think the author has done a great job of showing why they don't want to use Go anymore.


It’s one-sided when it points out the pitfalls without bothering to describe why they’re there and what the upside is. There are sloppy mistakes in Go, just like in anything, but many of the common criticisms of Go are about trade-offs, not about mistakes.


Sure, but the way I read the article, that's what the author shows. They don't like trade-offs that are made.

Maybe the tone of the article feels a little aggressive towards Go, I don't know. To me it felt more like an outcry of frustration with the chosen trade-offs.


The author clearly does understand language design. He probably even agrees with your description of Go above, but simply describes that it really doesn’t work for him.


> He picks an arbitrary design constraint, in this case correctness, and argues that any language that does not provide 100% correctness is bad

I'm sorry, WHAT? This is not about correctness (and IMHO if you have something that makes this tradeoff you should not use it anyway), it's about pretending that correctness does not matter and that unexpected bad behavior is okay... cause you know 80/20.

IMHO Go has its use cases but it's definitely not a panacea. I don't mind people pragmatically picking it and using it. I am bothered by the zealots who think this is the best thing since sliced bread (spoiler alert: it's not). Its authors and the community also seem to have an elitist, better-than-you default attitude when being humble and open would actually serve them better (package management and generics come to mind; what disasters).


Author here: I wrote this in 2020, and have changed jobs twice since. Both jobs involved Go in some capacity, where it's supposed to shine (web services). It has not been a pleasant experience either - I've lost count of the number of incidents directly caused by poor error handling or Go default values.

If folks walk away with only one new thought from this, please let it be that: defaults matter. Go lets you whip something up quickly, but making the result "production-ready" is left as an exercise to the writer. Big companies that have adopted it have developed tons of tooling around it, use all available linters, do code generation, check the disassembly, and regularly pay the engineering cost of just using Go at all.

That's not how most Go code is written though. I'm interested not in what the language lets you do, but what is typical for a language - what is idiomatic, what "everyone ends up doing", because it is encouraged.

Because that's the kind of code I inevitably end up being on-call for, and I'm tired of being woken up because of the same classes of preventable errors, all the time. It doesn't matter that I don't personally write Go anymore: it's inescapable. If it's not internal Go code, it's in a SaaS we pay for: and no matter who writes it, it fails in all the same predictable ways.

Generics will not solve this. It is /neat/ that they found a way to sneak them into the language, but it's not gonna change years of poor design decisions, and it's definitely not gonna change the enormous amount of existing Go code out there, especially as the discourse around them not being the usability+performance win everyone thought they would be keeps unfolding.

As I've mentioned recently on Twitter, what makes everything worse is that you cannot replace Go piecemeal once it has taken hold in a codebase: its FFI story is painful, the only good boundary with Go is a network boundary, and there's often a latency concern there.

Lastly: pointing out that I have been teaching Rust is a lazy and dismissive response to this. For me personally, I have found it to be the least awful option in a bunch of cases. I am yearning for even better languages, ones that tackle the same kind of issues but do it even better. I like to remind everyone that we're not out there cheering for sports teams, just discussing our tools.

If you're looking to reduce the whole discourse to "X vs Y", let it be "serde vs crossing your fingers and hoping user input is well-formed". It is one of the better reductions of the problem: it really is "specifying behavior that should be allowed (and rejecting everything else)" vs "manually checking that everything is fine in a thousand tiny steps", which inevitably results in missed combinations because the human brain is not designed to hold graphs that big.


Don't fall prey to an ad-hominem argument - I don't think your article negatively hints at any kind of 'this is a Rust fanboy-made praise text' and it saddens me that a genuinely legit article like this needs to have the author defend himself like this.

Your points were well explained. Go has several serious warts which, in my own opinion, are showstoppers, and you are comparing it to a language which is somewhat newer and had more time to mature and of course, learn from other designs and their decisions as well.

Generics were intentionally left out, for example, because of the fear-mongering claim that "if you have generics your code is going to automatically become C++'s STL at some point". After many years, the lack of even minimal generics got so bad that they figured they'd start to lose share to rising languages like Nim, Rust and even the old grandpa C#, now portable everywhere with the Core stuff. It is very clear how badly bolted-on and rushed generics were in Go.

Rust has its warts as well and the language spec is already starting to become somewhat... large. We need to be careful not to invent C++2.0 right? ;-)

Anyways, just wanted to say this to you in hopes it'd lighten the mood and validate that you are in a good path with these articles. It was a very insightful reading for me. Thank you.


Tone matters a lot. Just being strictly, technically correct does not get you very far, especially in these times. It's funny that the author puts out a scathing, personalized review of Go but then acts surprised that some Go fans took it personally.

At least in Go's case it is just random Go fans dismissing criticism. The Go authors do not jump in to defend the language at every internet posting. This is absolutely not the case with Rust, where committers pop in everywhere the language/ecosystem is criticised.


> Rust has its warts as well and the language spec is already starting to become somewhat... large

It doesn't have a spec, so we don't know how gigantic it would be.


> It doesn't have a spec, so we don't know how gigantic it would be.

While it's not a formal spec, there is https://doc.rust-lang.org/reference/index.html which can give an idea of the size a Rust spec would have.


>It is very clear how badly bolted and rushed generics were in Go.

They were designed with the help of type system experts like Phil Wadler: https://arxiv.org/abs/2005.1171

Not sure why you think they were 'rushed', given the timeframe involved.


I don't follow. The Arxiv link you provided lacks a single mention of the name you cited... and even without knowing him, I can believe you that Mr. Wadler is probably a prominent and important type-theory researcher. But unless your argument is a reverse ad hominem (citing an important name for the sake of "an important person participated in this, so you cannot criticize the design/execution"), I cannot see the point of this at all.

Explaining further: when I mentioned "rushed", it was clear to most of us that generics appeared quite late in the Golang picture, and the feature only rose in priority when they figured they were losing market share to 15-year-old-plus languages because of this single design flaw. Also, as soon as it became a heated topic, the working group started racing to have it included as soon as possible. As early as Go 1.17. Then, at the last moment, it got pushed to 1.18 because several deep questions remained open. Why the need to race it so much for .17, I ask? They probably felt it was 'too late' so they had to get it out the door as soon as possible, even building hype for it in several .17 changelogs even though it was clearly not ready (and those who were following the working group discussions knew that it would NEVER be done for the .17 release).

Hope I was able to shed some light on the reasoning and word choices I had in my previous answer.


>Then, in the last moment, it got pushed to 1.18 because several deep questions remained open.

Delaying a feature to make sure that it's implemented correctly is the exact opposite of rushing it!

I think you are vastly overestimating the influence of internet message board drama on the actions of the Go core team. In any case, you provide no evidence to support your claims about their motivations.

I'm not sure what happened with that Arxiv link, but here is another link to the paper I was thinking of: https://homepages.inf.ed.ac.uk/wadler/papers/fg/fg.pdf The fact that a leading researcher in type systems was a key participant in the effort suggests that this was a carefully thought through proposal (not necessarily perfect or beyond criticism, but not a rush job). Of course, if you think otherwise, you can point to any specific flaws in the paper.

>reverse ad hominem

The term you're looking for is 'argument from authority'. But argument from authority isn't really a fallacy. It's perfectly sensible to trust authorities, within reasonable limits.


I definitely went into the article thinking you have to be an idiot, but you exposed a lot of reasonable issues that can certainly bubble up depending on the type of application you are working on.

I primarily work on web servers, but even I've noticed a handful of the issues presented. For example, the net/http package is not all that great, in my mind. It has a lot of downsides, and the timeout section really calls this out quite well. The native functions pretend to have sensible defaults, but in any production-ready application those are all discarded and you'll need to create custom clients to handle the different types of requests you are making.
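
For example, a minimal sketch of the kind of custom client I mean; the specific timeout values here are placeholder assumptions, not recommendations:

    package main

    import (
        "fmt"
        "net"
        "net/http"
        "time"
    )

    // newAPIClient builds a client with explicit timeouts; http.DefaultClient
    // has no overall request timeout at all.
    func newAPIClient() *http.Client {
        return &http.Client{
            Timeout: 10 * time.Second, // whole-request deadline
            Transport: &http.Transport{
                DialContext: (&net.Dialer{
                    Timeout: 3 * time.Second, // connection establishment
                }).DialContext,
                TLSHandshakeTimeout:   3 * time.Second,
                ResponseHeaderTimeout: 5 * time.Second,
                IdleConnTimeout:       90 * time.Second,
            },
        }
    }

    func main() {
        resp, err := newAPIClient().Get("https://example.com")
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }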

I will say I'm a fan of Go. I think it has made my life much simpler in that it was probably the first language to expose me to the world away from "magic" - with Java being my first language, frameworks are far too rampant and Go helped de-mystify a lot of that for me.

But as I've progressed to learn other languages, I think this is less a strength of Go and maybe more so just a negative of Java/Ruby/Python oftentimes. I've not played around much with Rust but I've heard great things.

As much as I wanted to think this was just a Rust fanboy attacking Go, I think your article was quite well written - and also helped me question "why do I even care?" I didn't write Go, I just happened to land a job that uses it where I excelled well, and I think in my early career that's largely because Go is extremely beginner friendly. It's a tool disjoint from me - I wouldn't be upset at someone if they told me they don't like hammers made by DeWalt, because it's not an attack on me. Same with Go, or any other tool we use to do our jobs.

Now that I'm a much more senior engineer I find I use Go out of convenience purely due to expertise, but I've been itching to work with something new. I think for my next side project I'll take a stab at Rust.

Anyway - great article! I'm happy I read it (even though in the beginning you tell me not to, hehe).


Thanks for your amazing articles, Amos!

> I'm interested not in what the language lets you do, but what is typical for a language

This is the crux of the problem. To step away from Go/Rust and pick on another language, one could argue that Python lets you annotate every variable for some linting checks, but that doesn't mean they all are annotated. This leads to horrible time-sinks where someone accidentally adds a comma to the end of a line, turning a scalar into a 1-tuple. I know folks to whom this has happened and who burned half a day trying to track it down.

I personally get annoyed by the "Language X gives me the freedom to do Y." I find that I and a lot of my peers often prefer constraints imposed (instead of freedom given) by the language as a way of preventing countless issues at runtime.


> its FFI story is painful

Is the Rust FFI story much better?

Serious question: I have a Go code base with performance-sensitive inner loops. Considering options now for the longer term.


> Is the Rust FFI story much better?

So very much so. All the usual C types are available; you even have as much control as saying that you want a struct laid out as it is in C, and doing something as simple as binding a function is just declaring the function like you would in C. As it uses LLVM, it can even inline across C code and optimize as if everything were written in the same language, if you wish to do that.


I think you can add this to your original post.


Go is interesting because it has a cult like following of advocates who spout the same sound bites all the time, yet seem completely oblivious to the larger world out there.

Go seems to be in this weird space where it is not particularly suited for anything that its proponents say it is. As a systems programming language it gets a lot of flak for being garbage collected. As a web programming language, it is not ergonomic at all. The process of simply formatting a string is ridiculous compared to the string interpolation of Python or Scala. Furthermore, if I were a web developer, why would I want to deal with the cognitive overhead of arrays vs. slices, or pointers?

The type system also takes a lot of heat, mainly for generics but that seems like something that is coming soon to the language. Nevertheless, it is sort of telling that Kubernetes decided to implement its own internal type system instead of leveraging the OO paradigm provided by the language.


This should not be flagged, this is a well written article


Go has a lot of issues. Some of them would be easily fixable if people promoting and developing Go actually admitted the problems. However, the fanbase usually acts as a cult pretending that issues are features. Thing is, just like broken, hackish dependency "management" had to be fixed (introducing tons of complexity for the sake of not destroying backward compatibility), other problems will have to be fixed as well. It will add even more complexity to the language.

For example, Go error handling is shit. People will attempt to fix it with generics now, and that will create a lot of inconsistency between different APIs. Because of those inconsistencies the entire language will lose any possibility of elegant, high-level, pre-packaged solutions for certain things (like automated logging, recovery, etc.). This loss will be permanent and the vast majority of users won't even understand why some things are so hard.


I mean, all programming language communities are partial to their language, but among Go users there seems to be an unusual tolerance for disagreement and discussion of language issues compared to most other programming languages. But yeah, when you come in guns blazing talking about how certain language features are "shit" and there's no possibility of elegance, people are rightly going to think you're not there for any sort of productive conversation.

For example, I regularly have productive conversations with people in the community about error handling and sum types and generics, including my criticism for the way Go does some of those features. A little civility goes a long way, and this isn't particular to the Go community or even programming language communities in general. Note that there definitely are PL communities that generally can't handle any criticism irrespective of civility, but the Go community isn't among them. Indeed, in my experience, Go's critics are very often much more zealous than its proponents.


> Note that there definitely are PL communities that generally can't handle any criticism irrespective of civility, but the Go community isn't among them. Indeed, in my experience, Go's critics are very often much more zealous than its proponents.

This is because Go is a programming language for people who don't care about programming languages. I mean this in the most positive way possible. If you're using Go, it's because you care about the end result of what you're building, usually a backend service or command-line tooling, far more than the code that was used to create it.

Go is not a language where you come up with clever syntax to solve your problem. Go is not a language that makes you feel smart when you write it. Go is not a fun language to program in. Go is a language that gets out of your way, encourages you to solve your problem in the most boring way possible (usually with a lot of for loops) with a predefined level of safety (i.e. static typing, explicit error handling with the `error` type, etc). It's a language for building bridges, not creating masterpieces.

People who are passionate about programming languages would never like Go in the first place, so you don't get too many zealots to sing its praises.


> Go is a language that gets out of your way, encourages you to solve your problem

Maybe I'm just too dumb for Go, but this is not consistent with my experience at all. Go's insistence on pretending that complexity doesn't exist would get in my way all the time. Go's extreme hostility toward FFI calls got in my way several times.


Yeah... I used Go for 5 years. I just can't agree that the simplicity is real. Yes, the language itself is simple, but it offloads complexity into my program, and thus my day-to-day is jumping around huge piles of logic which could be made far easier to reason about and understand with some actual help in managing the complexity.

Your complex programs aren't easier in Go, in my experience. The simplicity of the language doesn't help me much when my day is spent fighting to figure out how to make a problem which is inherently complex easy to maintain, reason about, and keep free of bugs.

I want a language that makes my day simpler. Where at the end of the day, my "net complexity" is less. Go leaves all the complexity to you and offers you very few tools to solve this. Bugs, spread-out logic, and even the runtime costs of the overuse of interface{} (prior to generics at least) left me with a lot of things to solve myself. My days in aggregate were more complex with Go.

Just my experiences.


> runtime costs of the overuse of interface{} (prior to generics at least)

Generics aren't going to improve this situation - at least not the current iteration of generics :(


It depends. Things like sorting a slice will be faster, for example. But yeah, the current iteration is a bummer.


> Go's inane hostility toward FFI calls got in my way several times.

All languages w/ obligate GC are "hostile" to FFI in some way or another. The Go default implementation also uses split stacks or something for its goroutines, which cannot feasibly interop with FFI code. But it's usually easy enough to just isolate Go code to its own process/address space and use IPC or network communication to enable the interop one would usually achieve via FFI.


Inversely, virtually all languages with "easy FFI" end up being even more hostile in that a significant chunk of the ecosystem depends on C build tooling which is almost always fragile: C build systems have implicit dependency management, so you don't know what dependencies you need to have installed on your system or where they need to be installed. This means that something which builds on one machine may fail to build on another machine (in the case of build-time dependencies) or that it may run on one machine but not another (in the case of run-time dependencies). It's also opaque to the host build system, so cross compilation becomes dramatically more difficult. Lastly, C is inherently unsafe and insecure in ways that most host languages are not.

In practice, whether by accident or design, the Go ecosystem is really, really nice because it avoids FFI to a high degree. An overwhelming majority of programs can be cross compiled into a truly static binary (it may not even depend on libc unless--as is the case with Windows and MacOS--the host platform requires it). It also means that there are very few "C-shaped libraries", by which I mean thin bindings around some C library which exposes idiomatic C semantics rather than idiomatic Go semantics. Moreover, your programs aren't running a bunch of inherently unsafe code under the hood, and are consequently more likely to be secure as a result.

It's kind of nice that C FFI is possible such that libraries which are unlikely to be ported to Go (e.g., ffmpeg) or which cannot be ported to Go (e.g., opengl) are still available, but not so easy that people pull in C libraries for every little convenience.


The Rust ecosystem does one better and packages the C libraries and build configuration (including making it portable across platforms) as part of the crate. So you just add the dependency to your Cargo.toml and the C library will build as part of the regular `cargo build` process.


Unless something has changed in the relatively recent past, I think you're overselling a fair bit. Not only does the package author have to understand the C dependency well enough to package it correctly on all platforms (basically by verifying the build in a hermetically sealed environment, and who is doing that?), but also the process for cross compiling is (or at least was) pretty complicated: https://www.modio.se/cross-compiling-rust-binaries-to-armv7..... And even then, I'm not sure this will yield a truly static binary (i.e., no dependency on libc).

In Go, it's just `CGO_ENABLED=0 GOOS=linux GOARCH=arm GOARM=7 go build` for pure Go programs.


> (basically by verifying the build in a hermetically sealed environment, and who is doing that?)

Lots of people run stuff in CI, which isn't exactly that, but is close enough to make it not as big of a pain as it might otherwise be.

It can also help if their docs aren't great; I've looked at CI configs to realize how to install some sort of system dependency before.

> but also the process for cross compiling is (or at least was) pretty complicated

Most of this article is talking about installing and setting up both Docker and a C cross-compiled toolchain. So, you're right, but also not, sorta kinda. That is, this is certainly more hard than Go, but we're not talking about pure Rust at this point, so the fair comparison would be cgo with some C dependencies, which would also involve setting up a C cross-build toolchain, (and maybe docker). But at the same time, it doesn't have to be this way: Zig includes a full C cross toolchain in its compiler, so that you don't have to do this installation. It is, in my opinion, currently best-in-class here, far surpassing both Go and Rust.

It is also worth noting that, IIRC, Go had to switch to dynamically linking libc on many platforms, since the idea of a "fully statically linked binary" is basically only coherent on Linux.


> Lots of people run stuff in CI, which isn't exactly that, but is close enough to make it not as big of a pain as it might otherwise be.

CI has a whole lot of variation. On the extreme end, there are people running Jenkins jobs on the same hosts as other jobs, and everyone just pre-installs whatever they need onto the base image for the host (i.e., not even working with a fresh OS image). Moreover, many people are just going to run their CI on amd64 Debian or RHEL and assume it works for all targets.

> That is, this is certainly more hard than Go, but we're not talking about pure Rust at this point, so the fair comparison would be cgo with some C dependencies

My whole thesis here is that Go leans less on FFI than other ecosystems, so you shouldn't need CGo in most cases where you would have to use FFI in other languages. It's a lot easier to get a pure-Go dependency tree than it is a pure-Rust dependency tree. Of course, that's an emergent property derived from weaknesses of Go's FFI, but it ends up being a really nice property in practice.

> But at the same time, it doesn't have to be this way: Zig includes a full C cross toolchain in its compiler, so that you don't have to do this installation. It is, in my opinion, currently best-in-class here, far surpassing both Go and Rust.

I think this is true if you assume that all ecosystems lean on C equally, but it's better by far to depend on C less because including the C cross toolchain doesn't absolve you from humans packaging C dependencies (in which case it's either easy because you neglect a bunch of packages or you test in a hermetic environment a la Nix and it becomes more bothersome than maintaining a pure-$hostLang version of the same package).


> Moreover, many people are just going to run their CI on amd64 Debian or RHEL and assume it works for all targets.

Rust has a strong concept of "tiered platforms", and so lots of people support at least Mac/Windows/Linux. Nobody uses Jenkins (for open source packages that will become your dependencies, at least), they use GitHub Actions or CircleCI, which make it easy to support many platforms. I personally run Windows, no WSL, and 99.99% of the time, everything Just Works for me.

But yes, that's why I wasn't saying it's purely just as good. For sure. But it does generally work well.

> My whole thesis here is that Go leans less on FFI than other ecosystems,

Gotcha, that's fair. Pure-x for any x often is really, really nice! Full agreement there.


> Gotcha, that's fair. Pure-x for any x often is really, really nice! Full agreement there.

Yeah, and it's really interesting how the relative ease/difficulty of FFI shapes an ecosystem. On one extreme, you have Go, which has a lot of purity, but on the other you have Python, where FFI is so easy that the maintainers can't really change anything, including optimizations, without breaking compatibility, which means pure Python packages are relatively slow, driving more reliance on FFI. It also means the package management tooling has to solve for the universe of C packages to be worthwhile, which drives a whole bunch of other problems. A decade ago, if you had asked me whether easy FFI was a good thing or a bad thing, I would have unequivocally said "a good thing". That might've been the correct answer if the lingua franca had a standard concept of reproducible builds.


While this is true in many cases, it’s worth pointing out that that is the choice of the package author, and is not always super simple to implement. So yeah, much of the time it is nice, but you’ll sometimes also run into these classic sorts of issues, either because the authors do not put in that work or because there’s a bug in the implementation.


And as a bonus, be statically linked with all the benefits that brings.


I recall statically compiling Rust to be a big pain (like, actually statically compiling, no dynamic dependencies on libc in Linux). I assume this is all the more true with arbitrary C dependencies?


If you have no C dependencies, it’s simple: you ask for the musl libc, and you’re done. It is not more onerous than Go.

If you have C dependencies, then it does become a pain, depending on how well those dependencies' -sys packages interface with whatever build system they use.


I'm not talking about an actual static binary, I'm talking about a typical "mostly static" binary with the handful of common dynamic dependencies.


Serializing a request structure, making an IPC/network call, deserializing the request structure, serializing the response structure, sending it back, and deserializing it ... isn't really a solution when the purpose of an FFI call is typically to fix some performance issue.

Lots of garbage-collected languages make FFI not only easy but plenty fast. Go does neither.


I started out thinking that fast and easy FFI was ideal and being disappointed that Go's FFI was neither. I've since changed my opinion as it's really nice that one can usually get away without pulling any C dependencies into their dependency tree. I wrote more in the sibling comment: https://news.ycombinator.com/item?id=31194347


That would be great if Go provided better performance. With its awful FFI, you have no recourse when you hit its limits other than to rewrite the entire codebase in something else.

As with many things, there's nothing stopping you from just sticking with pure Go if you don't like C toolchains. While C build issues are a valid theoretical concern, in practice I've never had any Python package fail to install because of a C dependency problem that wasn't trivially resolved, nor any Rust project fail to compile because of a C dependency problem at all.


> That would be great if Go provided better performance. With its awful FFI, you have no recourse when you hit its limits other than to rewrite the entire codebase in something else.

I wouldn't know. I've never run into an issue where Go's performance was a real bottleneck, and anyway every mainstream language with easy FFI still has significant FFI overhead (so much so that many programs actually run slower with FFI). This isn't really true for Rust (Rust makes it easy to define types which are essentially C structs and thus require little/no marshaling), but performance also isn't the reason you FFI out of Rust.

> As with many things, there's nothing stopping you from just sticking with pure Go if you don't like C toolchains.

Right, that's my point. You viably can stick with pure Go because such a large share of the Go ecosystem is pure because FFI is rarely worth the hassle.

> While C build issues are a valid theoretical concern, in practice I've never had any Python package fail to install because of a C dependency problem that wasn't trivially resolved

Try building a significant Python project on anything except a recent version of RHEL, Debian, MacOS, or Windows. For example, try getting your Python project running on something like a scratch Docker container. Or try packaging a Python package (which depends even transitively on a C library, especially one which isn't already packaged for Nix) with Nix.


> every mainstream language with easy FFI still has significant FFI overhead (so much so that many programs actually run slower with FFI). This isn't really true for Rust

This isn't true for almost any language to the extent it's true for Go, and for many compiled languages it isn't really true at all.

> Try building a significant Python project on anything except a recent version of RHEL, Debian, MacOS, or Windows. For example, try getting your Python project running on something like a scratch Docker container. Or try packaging a Python package (which depends even transitively on a C library, especially one which isn't already packaged for Nix) with Nix.

While these are legitimate theoretical problems, none of them are really problems in practice. Containers don't need to be scratch, and if you're building a Python project, you're already not running a scratch container, so the addition of FFI doesn't change that. Nix is not an environment I've ever seen a requirement to support, let alone had a requirement.

It really seems to me that you like Go and you like Go's design decisions, but "I like this" is not the same as "this is better than that". I'm not particularly interested in rehashing the same conversation over and over again.


> This isn't true for almost any language to the extent it's true for Go

Go's object structure is much closer to C's. In many cases, it's just casting pointers (e.g., you can convert a Go slice to a C array by taking a pointer to its first element and casting it to a pointer of the C element type, provided the element types are binary-compatible). This means fewer allocations than a language like Java or Python where you would have to allocate a new slice and chase pointers around the heap for every field in every element in the array. In my experience, most of Go's overhead comes from function call bookkeeping at the FFI boundary rather than marshaling data.
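For illustration, a minimal cgo sketch of that pointer-casting (the C helper here is made up for the example):

    package main

    /*
    #include <stddef.h>

    // Toy C function used only to illustrate the FFI boundary.
    static int sum_bytes(const unsigned char *p, size_t n) {
        int total = 0;
        for (size_t i = 0; i < n; i++) {
            total += p[i];
        }
        return total;
    }
    */
    import "C"

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        b := []byte{1, 2, 3, 4}
        // No per-element marshaling: pass a pointer to the slice's first
        // element, since []byte's backing array has the same layout as a
        // C array of unsigned char.
        total := C.sum_bytes((*C.uchar)(unsafe.Pointer(&b[0])), C.size_t(len(b)))
        fmt.Println(int(total)) // 10
    }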

> for many compiled languages it isn't really true at all.

I'm pretty sure it's still true for many compiled languages (e.g., Java, Haskell, etc).

> While these are legitimate theoretical problems, none of them are really problems in practice.

They were problems for me in practice.

> Containers don't need to be scratch, and if you're building a Python project, you're already not running a scratch container, so the addition of FFI doesn't change that.

Containers don't need to be scratch, but they often need to be lightweight with low cold-start latencies (including pulling). Not having to pull in a whole distro is advantageous. And a pure-Python project certainly could run in a scratch container (i.e., a container with just the interpreter, the program, and the program's transitive dependencies).

> Nix is not an environment I've ever seen a requirement to support, let alone had a requirement.

"Reproducible builds" is the requirement. Our customers' security teams vetted our dependencies individually, and if a dependency changed we would have to have that dependency re-vetted.

> It really seems to me that you like Go and you like Go's design decisions, but "I like this" is not the same as "this is better than that".

Clearly, no one here is conflating those things. I'm arguing that Go has a particular strength, not that that strength is the only factor.


> I've since changed my opinion as it's really nice that one can usually get away without pulling any C dependencies into their dependency tree.

That trophy is owned entirely by the Java ecosystem. Thanks to that, once Loom arrives, basically the whole ecosystem will automagically become reactive-aware.


> The Go default implementation also uses split stacks or something for its goroutines

This has not been true since Go 1.2, back in late 2013.


The fact remains that you need a separate implementation (cgo) if you want to do FFI. It might be something else goroutine-related that blocks FFI in the default Go implementation, but the issue is still there either way.


What do you mean "a separate implementation"? CGo is part of Go, it's not another implementation of Go.


Separate in the "CGo is not Go" sense.


FFI isn't important. It's a niche feature for a superminority of use cases.


> Go's insistence on pretending that complexity doesn't exist

Probably the most accurate and concise summary of my problems with go also.


I am usually unhappy/worried working in a language or library that pretends the world is simpler than I know it really is. On a good day there is documentation clearly explaining that the maintainers know about the complexity and here's what they've done about that, so at least I know; on a bad day it's just the shrug emoji.

The article mentions the whole filename thing as an example, and that's one of the first places where I felt I was at home with Rust. It's not unnecessarily complicated but it does force me to acknowledge that yeah, the name of a file might be incoherent nonsense. It's probably a String, but it might not be. I can write code that says "I don't care, we're probably fine" and accept that if it's not fine the code will fail at runtime in a defined way - or I can write code that actually cares about this problem, even if just to explicitly ignore such files as if they didn't exist.

In too many languages the second isn't really an option (which is frustrating if I'd like to write reliable software) or worse, the first isn't an option and so I'm stuck writing endless boilerplate even for a toy or one-shot.

The latter is arguably OK if your language is really just for space rockets and medical implants where failure is not an option. But that's never really how things work out.


Go always seems to me to be designed to be simple for the compiler (which, to be fair, has benefits: fast compilation is useful in a compiled language, to keep code-build-test cycles short) more than for the programmer.


Go strives for a balance. It tries to be a fast language without trading off everything else to that end. So it has a GC and really fast builds and it produces machine code that isn't as aggressively optimized as Rust or C++, but it does so much more quickly (as you noted). These are ideal tradeoffs for a huge swath of applications.


I never really understood this reasoning. To me the ideal thing would be a fast debug-compile mode that barely optimizes and a don't-care-how-slow release mode that uses every possible optimization for the end result.

Rust is plenty interactive with its similar mode of working. Incremental builds are fast.


I am a performance engineer and recommend against this. Optimizations don't really work that well on their own; to get performance you want an ongoing conversation between yourself and the compiler, which you don't get if the compile time is super long.

If you do have a really long running superoptimizer discovering things, then you'd want a way to write that back into the code so you don't need to discover it again.

Also, most of your program should be at -Os because it's not hot code and the important thing is to stop it from disturbing the fast parts. (Or because the aggressive optimizations actually make it slower. Totally possible with fancy ones like autovectorization.)


I've done pet projects in Haskell, Ocaml, Racket, Rust... Now I'm learning Zig... I've worked for years with Java, Python, Javascript/Typescript... Use to work with Z3... Tried plenty of different stuff.

After years my conclusion is that if I want to get a job done I'll choose Golang. Hands down the most productive programming language nowadays. GC for memory management and productivity, explicit, easy to read, hard to mess up, good performance and efficiency. Gets the job done, and really well. End.

I love PL theory. Reading an Idris book and implementing some cool recursive patterns. Building a small project in different languages and comparing them... Compilers, type systems and GC papers... But in my experience, the more complexity and "implicitness" a language has to offer, the easier it is for "us" to go the wrong way.


I've done all kinds of stuff too, and agree that Go is pretty good to "get things done", especially networking. But I don't know about best. Maybe ten years ago when a static binary was important, but now that everything is deployed as a container, that's off the table and things like Python or Kotlin are equally deployable, but way easier to use. Nowadays, if you _really_ need a single binary, you probably also need it to be tiny. Cramming an entire GC and runtime into the executable doesn't seem much different than building a container to me.


God no, not Python. It’s easy to use but it’s something I would only put out there as a duct tape or personal use solution. If I know someone is going to have to look at it after I’m gone, I need something heavily opinionated and without too much syntactic sugar that slows down the refactoring/debugging process.


I've been developing Python professionally for 15 years, including almost a decade of deploying to containers. I think Go is much easier to use (especially in a container environment):

1. Static types make it much easier to read and write code for even a single individual, and the benefit scales superlinearly as the contributor count and code base age increase. Go also has a ton of other tooling which just outclasses Python equivalents for both simplicity and performance (e.g., profiling tools, and even things like gofmt vs black where the former is way faster)

2. Because Python is so slow, even medium sized test suites take a long time to run. You end up having to triage your test suite to keep CI times reasonable. This just isn't a problem in Go (unless you're doing something I/O bound).

3. Python dependency management still sucks. If you want reproducibility, it takes ~30 minutes just to resolve dependencies for relatively small-but-not-toy-sized projects. This obviously kills your CI times, and there aren't great workarounds except to forgo dependency management altogether. Go builds are nearly instant in most cases (assuming you have build caching enabled in CI) and still far better than Python builds in the worst cases. Python also depends a ton on C, so cross compilation is basically impossible (whereas it's trivial in Go; see the sketch after this list) and simply building for any non-mainstream platform is going to entail a whole bunch of work (C projects typically make sweeping, undocumented assumptions about their build environments and targets).

4. Being able to make small artifacts is surprisingly important. When your container image is hundreds of megabytes, you feel it in your iteration loop (especially if you're in a "site down" situation and your iteration loop involves rebuilding and deploying containers to production to restore service). It also means your services can't scale up as responsively, and if a container gets bounced (and scheduled onto another node) it implies longer downtime before that container can carry load again. Similarly, rolling back from a bad deploy can be almost instantaneous if your images are small. Go has the advantage here because it can build on scratch images and because it doesn't need to ship the complete source code (native compilation prunes unreachable code, and binary machine code is considerably more compact than unicode source code).

5. If your development environment is Mac or Windows, Docker kind of sucks for Python development because you'll want to mount your source code volumes into the container, but Docker for Mac/Windows runs the container in a Linux VM with a process that marshals filesystem events back and forth over the guest/host boundary consuming virtually all of the CPU allocated to the VM. In Go, you don't mount the volume at all, rather you just build the image from scratch or you rebuild the binary within the image (or outside of the image and copy it in). You can viably use something like `docker-compose build` as part of your iteration loop with Go.

6. Distributing CLIs via container images makes for a crummy end-user experience, and the alternatives for distributing Python without container images aren't great either. Something like shiv mostly works, but there might still be dynamic dependencies that users have to include (iirc, we ran into this with graphviz and a few other libs). Go binaries Just Work.
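(The cross-compilation sketch referenced in point 3; the target names are standard, the output names are made up:)

    # Build the same Go program for several targets from one machine.
    GOOS=linux   GOARCH=amd64 go build -o app-linux-amd64 .
    GOOS=darwin  GOARCH=arm64 go build -o app-darwin-arm64 .
    GOOS=windows GOARCH=amd64 go build -o app-windows-amd64.exe .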

> Cramming an entire GC and runtime into the executable doesn't seem much different than building a container to me.

A Go runtime (which includes the GC) is just a couple of megabytes. Slim Python base images are ~60 MB compressed.


> If you want reproducibility, it takes ~30 minutes just to resolve dependencies for relatively small-but-not-toy-sized projects.

I will be the first to complain about Python packaging, but 30 minutes is far far beyond anything I have experienced.


It seems approximately correct to me. I always do `--no-dependencies` when I have a `pip freeze` output, for exactly this reason.


Idk man. Pipenv. I’ve heard people say similar about poetry, but I’ve also heard people say poetry has improved.


Well put and I agree with most points but

> Go is not a fun language to program in

Not having to think about how something should be done in the most elegant way, and instead focusing on the problem at hand, is a lot of "fun"


Having to write the same code several times with minor changes because of a lack of abstraction is a lot of fun.


I don't know how people can say go "gets out of the way".

Go makes me write dozens of lines of code to do something simple that in any modern language takes a few.

It doesn't get out of the way, it gets in the way constantly. I'm constantly thinking in any modern language I can just do X, but in Go with its myriad missing features I have to sit and think about how I'm going to do it with just loops and if statements.

It's the exact opposite of getting out the way, don't even get me started on the syntactic verbosity.


> Go makes me write dozens of lines of code to do something simple that in an any modern language takes a few.

"Getting out of the way" doesn't mean it takes fewer keystrokes - it just means that you don't have to think about it / there are no surprises. It took me a while to grok what pythonic code is and looks like, and I feel the bar for Go is even lower. Even if you're browsing an unfamiliar codebase, code is exactly where you expect it to be, and you don't have to ponder on where to make your changes. To me, that is how a language moves out of the way; it fades into the background and you mostly concern yourself with the logic.


Most programmers aren't bottlenecked by keyboard proficiency, but rather by dealing with poor tooling or gratuitously complex programs ("terse" doesn't entail "simple", and very often it's the inverse).


That's a strawman. We're not advocating for terseness in character count (otherwise we'd be using languages like APL and Jelly), but for better abstractions. There are other benefits than character count.

* Having a lot of repetitive code makes it easy to make a mistake when you edit one copy and forget about the others.

* A lack of abstraction can obscure intent, making you focus on implementation details.

* Having less code overall makes it easier to keep track of it in your head.


True, fun is certainly different for everyone! I also enjoy being able to just focus on a real world problem, but I also programmed Scala professionally for many years, and I found it a lot more fun purely from the point of view of writing code. Writing a really elegant for comprehension or using currying in clever ways to make your code "elegant" was just enjoyable in and of itself, regardless of what problem you were actually trying to solve. Rust is pretty similar to me in that regard.


Same. I use Go for work because that's what the company uses, but I can't imagine coding in it for "fun". Elm + Scala feel much better to me.


Agreed. This was the only nit I was inclined to pick as well. I have a lot of fun writing Go, because it gets out of my way.


I just can't stand taking three lines to unpack a value from a map or to return if error.

Why can't I just say `return if err := somefunc(); err != nil`

It's mega frustrating on top of the lack of generics and other abstractions.

And now that generics are coming about, I'm sure it will take forever until my current project can use them. My current project is in the k8s ecosystem which due to the lack of generics, implemented its own clever but awful type system.


I can't relate. Newline characters have never been burdensome to me, and they aid in visual structure (the control flow is represented by the visual structure of the program, not only for "good data" paths, but also for error paths). My programming problems are usually not related to localized keystroke boilerplate, but rather larger issues of abstraction and data modeling.

> My current project is in the k8s ecosystem which due to the lack of generics, implemented its own clever but awful type system.

The k8s ecosystem's type system is unrelated to generics. It has a concept of user-defined resource types, which means that users can provide an OpenAPI document describing their resource type which Kubernetes will then use to validate new user-provided resources of a given type. From the perspective of the Go compiler, these types are dynamic types--they can't be known at compile time. They aren't a candidate for generics in the host language.
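To make "dynamic" concrete, a toy sketch (the field names are just examples, not any real API):

    package main

    import "fmt"

    // A custom resource arrives as untyped, decoded JSON/YAML; its schema is
    // only known at runtime, so there's nothing for compile-time generics to
    // latch onto.
    func isReady(obj map[string]interface{}) bool {
        status, ok := obj["status"].(map[string]interface{})
        if !ok {
            return false
        }
        ready, ok := status["ready"].(bool)
        return ok && ready
    }

    func main() {
        resource := map[string]interface{}{
            "kind":   "Example",
            "status": map[string]interface{}{"ready": true},
        }
        fmt.Println(isReady(resource)) // true
    }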

That said, it's often tedious to write a controller for these resource types, but that's because Kubernetes' controller frameworks are really complicated. They remind me of enterprise Java code with gratuitous abstraction. Maybe that abstraction serves some purpose, but it wasn't helping me and I ended up rewriting much of it in more standard Go (I didn't release it because it was prototype code and I didn't want to support it) and it was quite a lot simpler. I don't recall seeing many places where I felt that generics would be a significant improvement, but it's been a while.


Well, if you don't know the structure of a resource ahead of time but know that it has a status.ready, I would think that would be a candidate for a generic? I haven't explored that much yet, but in retrospect I might even be able to convert all objects to a struct that has only status.ready without generics.

I've only been in the ecosystem 6 months, but yeah larger abstractions are difficult too.

I'm not a fan of the lack of sub-classing. I like writing a base class and concrete one, and it's quite difficult in Go unless you want to make everything an interface.


Go's interfaces work fine for this case (see below), and Go's generics wouldn't help you (generic constraints operate on methods, not fields).

    type Resource interface {
        Status() Status
    }

    type Status interface {
        Ready() bool
    }
> I'm not a fan of the lack of sub-classing. I like writing a base class and concrete one, and it's quite difficult in Go unless you want to make everything an interface.

I've written a lot of Python, C++, Java, etc in my life (I cut my teeth on OOP). I'm thoroughly persuaded that inheritance is almost never better than composition, even in those languages where inheritance is idiomatic. Indeed, the trend in most of those languages has been away from inheritance and toward composition. Certainly in Go you'll be fighting an uphill battle by trying to make everything maximally abstract (which is a big part of why the k8s framework is so complicated per my earlier post).


    type Resource interface {
        Status() Status
    }

    type Status interface {
        Ready() bool
    }
This is better expressed as

    type Statuser interface { Status() Status }
    type Readyer  interface { Ready()  bool   }


Go's equivalent of sub-classing is embedding
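A minimal sketch of what that looks like (type names made up):

    package main

    import "fmt"

    // Base carries shared state and behavior.
    type Base struct{ Name string }

    func (b Base) Hello() string { return "hello from " + b.Name }

    // Derived embeds Base; Base's fields and methods are promoted onto Derived.
    type Derived struct {
        Base
        Extra int
    }

    func main() {
        d := Derived{Base: Base{Name: "derived"}, Extra: 1}
        fmt.Println(d.Hello()) // method promoted from the embedded Base
    }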


Yeah, and I kind of did that. But I've found it annoying and not great.

For example in the base struct I had an interface and a ton of methods that use it.

Then when I declared the concrete struct, I have to manually point the concrete type that matches that interface to the base class's interface.

Composition doesn't really allow the same thing as inheritance. Composition typically means you'll have a motor and wheel struct in your car struct and maybe your car struct uses both in some drive method.

Inheritance is more like having a car struct with a rev engine method, but no concrete engine set.

So you can later make a Kia, set the motor to a type, and then call the base struct's rev engine method.

Again, it's not impossible, just really ugly.


Seems pretty straightforward to me:

    type Car struct {
        Motor Motor
        Wheels [4]Wheel
    }

    type Motor interface {
        Rev()
    }

    type KiaMotor struct { ... }

    func (kia *KiaMotor) Rev() {}

    func NewKiaCar() Car {
        return Car{Motor: &KiaMotor{ ... }}
    }


Took me a second to see the difference in what you were doing between what I was doing.

In your case you're making a Kia car by making a regular car with a Kia motor.

In my case (at work) I'm making a type KiaCar struct { Car ... }.

Which is why I have to link a concrete type in the KiaCar to the Car if I want methods with Car receivers and KiaCar receivers to use the same concrete Motor.

I do like what you've written above though. I'll consider if I can use that instead of what I'm doing currently.

I'm just not entirely sure how I'd write methods for the KiaCar that knows what its concrete type is.


Yeah, I think very, very few problems (if any) are better-suited to being modeled with inheritance rather than composition. Go sort of forces you to think about composition, and once you get the hang of it I'd bet you won't go back. When you need reuse, reach for composition. When you need polymorphism, reach for interfaces/callbacks. When you need both (for example, a BookStore that works with both Postgres and SQLite backends) then you reach for both:

    type BookStoreBackend interface {
        GetBook(isbn string) (*Book, error)
        PutBook(*Book) error
        ListBooks() ([]*Book, error)
        DeleteBook(isbn string) error
    }

    // BookStore embeds (i.e., is composed of) a Backend, which is an
    // interface type.
    type BookStore struct {
        // ... other fields

        // Backend supports PostgresBookStoreBackend, SQLiteBookStoreBackend,
        // FileSystemBookStoreBackend, MemoryBookStoreBackend (for testing),
        // etc.
        Backend BookStoreBackend
    }


The problem here is that you think "error handling" is somehow different, and probably less important, than normal logic in your codebase. But Go asserts that the "sad path" is just as important as the "happy path".


But programming languages should get in your way when you're doing things wrong. Go does not. To be fair, most mainstream languages do not: I think Rust is the best at this; other languages often aren't. But Go is by far the worst of all, because of its striving for "simplicity".


> But programming languages should get in your way when you're doing things wrong. Go does not. To be fair, most mainstream languages do not: I think Rust is the best at this; other languages often aren't. But Go is by far the worst of all, because of its striving for "simplicity".

Go typically does get in your way when you're doing things wrong, but yes, I'd like to see Go require return values be dealt with or explicitly ignored. That said, there are linters for this, but in practice it's never been a material problem for me so I haven't bothered to wire one into my project. Over time, I've learned not to be so concerned about issues which are mostly just theoretical--there are enough practical problems to deal with first.


> I'd like to see Go require return values be dealt with or explicitly ignored.

Ever use the return value from fmt.Println?


Not usually, but the correct answer would be to either explicitly ignore the unused return values or use APIs that don't return values you don't care about.


    _, _ = fmt.Println("ok") // 1
    fmt.Println("ok")        // 2
1 is unambiguously worse than 2.


Seems like the opposite to me. I once tried to set up and run a gRPC service (I can't remember which one) but something it depended on changed and so the codebase I was trying to run basically didn't run anymore. It was f-ing weird that (at the time?) there was no way to lock down deps - that, or someone didn't care to? I don't know; the language baffles me completely.


There has always been some way to pin dependencies, but historically you had to opt into it via vendor directories and the like; however, as of the last ~4 years, the standard project format manages this for you (the go.mod file pins dependency versions).
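For anyone unfamiliar, a go.mod looks roughly like this (the module path and dependency below are made up); exact versions are pinned there and checksums live in the accompanying go.sum:

    module example.com/myservice

    go 1.18

    require example.com/somedep v1.2.3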


> Go is a language that gets out of your way, encourages you to solve your problem

Haven't experienced this yet but I'm a Go noob. I think everything looks easy when you mastered it, I don't think Go is so much easier than JS/Python or even C. Might be easier than Java but Java has so much more community support (e.g Stackoverflow answers) it easily evens out.


I started playing with Go in 2012 when I was doing professional C, C#, C++, Java, and Python. I stuck with it because almost everything was surprisingly easy. For example, I didn't have to learn an obscure DSL just to include dependencies! I didn't have to figure out how to wire together a "test target" in that DSL or evaluate a dozen test frameworks to get unit tests running! I could build and deploy a high-performance HTTP server with a single binary (no external apache/uwsgi/etc web server process)! And often without any third party dependencies at all! And idiomatic code ran 100x faster than Python, and on top of that there was headroom for minor optimizations (pulling allocations out of tight loops, basically). After a bit of experience, it as even easier than Python or JS thanks to static types.
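For the unfamiliar, the kind of thing I mean, using nothing but the standard library:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // A complete HTTP server with no external web server process.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from a single static binary")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }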

> Might be easier than Java but Java has so much more community support (e.g Stackoverflow answers) it easily evens out.

This was true in the early days, but now Go is extensively covered in Stack Overflow. Of course, there aren't as many Go posts on SO as there are Java posts, but that's because Go is considerably simpler--there's less information to cover.


This just sounds like the same arguments that have been made for C for ages. The complexity of programming is pushed out from the language into the program, the tools, and the programmer’s head. It’s not often that that’s a worthwhile trade-off


Huh, I had sort of found the opposite, that the Go community I had interacted with was more aggro and prone to offense. I'm forming this opinion from the reddit and the discord though, so if there is another community you favor I'd genuinely love to hear about it.


I concur. I was used to a professional tone, then joined discord-go. Showed some of my online code and received a "shit structure" response from some anime-girl-avatar youngling. When I started explaining how I don't think Docker is the way to go I was met with passive-aggressive behavior and plainly false responses trying to justify its use. But Go's Discord server wasn't the only bad experience. Angular might as well have been called Heil Angular. I worked with it for 2 years and found Vue to be more productive. But they didn't want to hear experience from both worlds. Instead they insisted that I had no clue and after a while of back and forth played the "we're the moderators so we're right" card. Then the big daddy server owner later stepped up and in the end I was banned, because despite the truth that Vue was more productive, only their agenda mattered. I have also met hostility in Discord's Vue server. Insults by an official there for saying that it wasn't a good move to require me or anyone joining to provide a phone number for account verification purposes.

All in all discord seem to have the immature unprofessional crowd. It's a gaming chat system after all.

Reddit Go is not as hostile but not very informed either. Although that's not true for all participants.

Compare Reddit to the quality of go-nuts and there is a difference.

But at least Reddit is a more or less open forum, whereas Discord is hidden and a walled-off property.


You can't base your opinion on anything, particularly computer languages, off of your Discord experience, come on.

The Discord demographic is teenagers and young adults, that's the last place where you'd find professional and mature advice about a programming language. I mean, even Reddit is better, and it still is a cesspool.


I've found the rust discord nice and respectful (although I'm also a young adult). I also don't think [edit: PL] reddit is particularly a cesspool, at least compared to HN.


The Rust Discord is at least (semi?) official; the Go Discord isn't listed anywhere on official sites.


I've had very helpful experiences on Discord (but I don't frequent it). Reddit is Reddit. GitHub (e.g., issue tracker) has been very productive. The mailing list is also productive.


Last I checked there's GoNuts on Libera, there's the go-nuts mailing list, and there's that Slack. I've seen members of all of the above complain and publicly vie for features and fixes.


> However, the fanbase usually acts as a cult pretending that issues are features.

Per Rob Pike (Lang NEXT 2014), golang was created for fairly young programmers that are fresh out of school and don't know many other languages.

So, something I've observed: When somebody doesn't know many things but is building a career, planning their life, on one of the things they know, they're going to take that one thing more personally. This is why it's good for people to be exposed to a diversity of ideas early on. Early exposure to diverse ideas helps engineers reason about tools and systems more objectively, with less input from their ego.


Do you have any data or survey to back your statement? Forgive me if I've misunderstood you but are you saying that Golang is mostly used by young programmers?

In my experience most Golang developers are highly experienced... Same with Rust.


Yes, you just needed to search for Lang NEXT 2014 and Rob Pike.

Enjoy the video, https://youtu.be/YM7QYx-LPSA

"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."

Or if you prefer reading, https://talks.golang.org/2012/splash.article

"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."


> https://youtu.be/YM7QYx-LPSA

That link is to a panel Rob Pike participated in at the same conference. I'm not sure if he makes similar remarks during that panel, but that "fairly young, fresh out of school" quote specifically comes from Rob Pike's presentation at the same conference titled From Parallel to Concurrent, which you can watch here: https://www.youtube.com/watch?v=iTrP_EmGNmw


You're right, thanks for the correction.


There are two parts of my comment. The first part paraphrases what Rob Pike said about the purpose of Golang, in his presentation at Lang NEXT 2014:

> "The key point here is that our programmers are Googlers, they're not researchers. They're typically fairly young, fresh out of school. Probably learned Java, maybe learned C or C++, probably learned Python. They're not capable of understanding a brilliant language. But we want to be able to use them to build good software. And so the language we give them needs to be easy for them to understand and easy to adopt."

The second part is based on my personal observations of human nature. Young and relatively inexperienced engineers often form a sort of personal attachment to whatever technology is enabling their new career. With this personal attachment comes a perception of attack against their person when that technology is criticized. This is a broad phenomenon, not unique to golang by any means, but golang happens to be one of the languages that is popular with and promoted to young engineers. In discussions critical of golang, or javascript, or C, or python, there will often be young or otherwise inexperienced engineers interpreting criticism of the tool as personal attacks.


> the fanbase usually acts as a cult pretending that issues are features

JavaScript kind of went through the same thing a few years ago. While everybody else was complaining about Callback Hell, the JS guys were insisting it wasn't a problem. Then they added promises, and later async/await. And lo and behold, what wasn't a problem eventually got fixed.

For a while you would constantly find folks on forums saying, "yeah JavaScript used to be shit, but with ES* it's now perfect." This went on for years.

I think the cultish "we like it this way" is just basic human programming. We do it with items we purchase, political parties we've joined, cities we live in, programming languages, everything.


“The JS guys” is not a thing. I want to say that grouping everyone that uses JS into a single entity is a bad thing, but it’s not even clear whether you’re referring to JS users or the shadowy JS powers that be.


> For example, Go error handling is shit

What is bad about it?


It requires several additional lines of code just to bubble up an error, for starters, and there's nothing stopping you from ignoring errors and continuing with what could easily be corrupt data.


"there's nothing stopping you from ignoring errors and continuing with what could easily be corrupt data."

In theory, this is a big deal.

In practice, it doesn't seem to be a problem. I've neither hit this very often myself, nor have I seen even newbies have much problem with it.

A lot of error handling procedures are based on reacting to C, which was awful. You could call a function, and then have to call another function, deliberately, to see if it failed. This is a nightmare, absolutely. A "Result" type does indeed solve the problem, but the fact that it solves the problem doesn't mean it is the only solution to the problem. The Go solution seems to be about 99.99% effective. It isn't a 100% solution, no, but by 99.9% or 99.99% or so, it takes it below the level of problem that I care about.

The issue with "bubbling" is a bigger problem in practice, certainly.


> A lot of error handling procedures are based on reacting to C, which was awful

Go's design decisions in general make a lot more sense from this perspective. "X was horrible to deal with in C, how can we make a (reactively) better version?"

The problem is a lot of these choices (willfully?) ignore the decades of language innovation that have happened since C. They are incremental, reactive improvements, where it doesn't feel like the designers necessarily stepped back and looked at the bigger picture


> it doesn't feel like the designers necessarily stepped back and looked at the bigger picture

I get the sense that the designers have a huge amount of ego from making C what it is today.


I've run into bugs multiple times because I ignored an error result, or overwrote the "err" variable and swallowed an error. Errcheck helps a bit.
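A stripped-down sketch of the "overwrote err" case (the load function is made up):

    package main

    import (
        "fmt"
        "io"
        "os"
    )

    // load never checks the error from os.Open: the next := reuses err and
    // overwrites it, so an open failure is silently swallowed and resurfaces
    // later as a less useful read error.
    func load(path string) ([]byte, error) {
        f, err := os.Open(path)
        data, err := io.ReadAll(f)
        if err != nil {
            return nil, err
        }
        return data, nil
    }

    func main() {
        _, err := load("does-not-exist.txt")
        fmt.Println(err) // "invalid argument" instead of the real "no such file" error
    }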

Other languages that have exceptions that bubble up the stack have a few advantages (easier to instrument with monitoring, stack traces and line numbers out of the box), but developers often misuse error handling as flow control.


> I've run into bugs multiple times because I ignored an error result, or overwrote the "err" variable and swallowed an error. Errcheck helps a bit.

Any reasonable code review process would catch these (very obvious) problems.


My golang code is littered with err handling etc.; it’s easy to miss something over the course of dozens of PRs. This is something best caught at compile time or by a linter. Or by tests, which I am often lacking.

I find code review unreliable at best for catching bugs or logic errors, but it depends on the reviewer.


In practice, it's been a pretty common problem at organizations I've worked in.


The latter is a much stronger argument than the former (no idea why people get so worked up about character counts), but even then, "shit" is really strong considering how often one experiences exception traces when using an application written in Python or Java or some other exception-based language. Point being, we should probably evaluate error handling schemes based on results rather than ideology (even though I tend to agree with some of that ideology).


Screen real estate is limited, especially vertical real estate. Compared to languages with saner error handling, I can read approximately 25% as much Go code at once. That's a real cognitive burden when maintaining code or learning your way around a new codebase, which seems especially egregious from a language whose community consistently proselytizes about how the lack of language features is great for maintainability and onboarding.

I'd take exceptions any day of the week over Go's solution. I'd much rather the program crash by default than attempt to continue with corrupt data by default. I'd rather have concise, explicit, compiler-required error handling than exceptions, though.


I don't think "screen real estate" is the right argument here.

The problem is just that every line creates cognitive load and there's a tradeoff between concision and descriptiveness.

A language with piles of syntactic sugar and magic gets it wrong with too much concision and can read like line noise when it gets overused.

Go goes the other way though and makes it way too verbose and just makes it difficult to read the code. When a method needs to have 6 different error handling clauses in it, then it isn't as clear that 5 of those just bubble up the error while one of them has some unique handling. It also increases the chances that some programmer copypastas the boilerplate bubble-up code to all six of those cases and it sails through PR review. You can write a static analysis linter to force programmers to always handle errors and not ignore them, but you can't force them to handle errors correctly. When humans are reading the code, concision helps and verbosity hurts -- up until that crossover point where magic causes readability to suffer.

Go programmers seem to focus on abhorring magic and rejecting the benefits of concision. But when it comes to PR review, your job is to stare at the whole method (or the whole file) and be able to "see" the bug, and more lines of code will make this job more difficult (which is also why some of the recommendations of the "clean code" book are pretty bad, since extracting more tiny little methods can harm overall readability). There's a happy optimum somewhere where cognitive load is minimized. That isn't attained though by just having the simplest language design possible and offloading complexity into more verbose code.


I disagree. IMO, there's much more cognitive load in parsing dense, "minified" code than there is in scanning code whose control flow mirrors its visual structure. Humans are very good at seeing visual structure (which is why we tend to indent, split code across lines, and other syntactically insignificant usage of whitespace). By convention in most mainstream programming languages, this visual structure mirrors code flow, so we can see the control flow at a glance; however, many languages have special hidden control flow (exceptions) or control flow which otherwise isn't part of the visual structure and thus is easily overlooked at a glance (e.g., Rust's `?`). In my opinion, this "hidden" control flow allows more errors to slip past reviewers (though some languages might recoup some quality by other means).


So, the thing specifically about ? in a language with Result is that you can read some code that uses it and not worry about what happens for Error cases if that's not currently your focus - the question marks aren't a "Look at me!" focus the way something like try-catch is.

But if you are wondering about Error cases, they are there to see when you're looking for them, because that ?, while unobtrusive, is something you can look for.

I'm sure in most IDEs you could have it highlight ? in a "Looking for error handling" mode if that's what you want.

Note that Rust does not consider control flow to be something the core language owns exclusively, you can return core::ops::ControlFlow to say actually I also have an opinion about whether you should keep going, this can make sense for a closure or function intended to be called inside an iterator or other loop context. Some of the ergonomics for this aren't finished, but what is there is already useful where a Result would work but is ugly because your early exit scenario isn't in fact an error at all.


> not worry about what happens for Error cases if that's not currently your focus - the question marks aren't a "Look at me!" focus the way something like try-catch is

Error handling is no less important than the happy-path.


I mean I spent quite a few words talking about how there's a happy optimum where beyond that you start to get too much magic and code gets too terse and unreadable.

You just did prove my point though which is that this is the only argument that Go programmers consider, and they blindly reject that adding more lines of code can harm readability.


> You just did prove my point though which is that this is the only argument that Go programmers consider, and they blindly reject that adding more lines of code can harm readability.

Can we lower the rhetorical temperature a notch? Just because someone disagrees with you doesn't mean they're "blindly rejecting" your reasoning. In particular, I'm not just a Go programmer--I've used Java, C#, Python, JS, C++, and C in various professional settings over the course of my career and I've also played around with dozens of other languages and I have more experience with several of those languages than I have with Go. My opinions are shaped by those other languages at least as much as they're shaped by Go, and indeed I didn't start out having these "pro-Go" opinions--rather, I adopted them over time after allowing my preconceptions to be challenged. Note also that some of my preconceptions haven't changed--I still think sum types and enforced handling of return values are a good idea, for example.


I was never arguing for "minified" code, which is ridiculous. Lower your own rhetorical temperature.


I don't know how you interpreted "minified" in any disparaging way, but that was never my intent. I apologize for any emotion that stirred up.


I don't think not handling errors after every single method call makes the code dense; it's just way easier to read. 99% of the time you're just going to wrap the error in your own error and return, so why not just have a single place that does that?


Cognitive load is unrelated to SLoC.

This expression

    let a = x.iter().filter(...).apply(...).map(...);
is equally or even potentially _more_ cognitively complex than this expression

    for _, v := range x {
        if !filter(v) {
            continue
        }

        vv := apply(v, ...)
        vm := map(vv, ...)
        ...
    }


Focusing on a pedantic detail that I clearly didn't intend and which doesn't change my point.

Consider it from a blocks-of-code metric, or some better, slightly more abstract metric that isn't affected by simple things like whitespace transformations, and try assuming that we all understand that we should write code that isn't monstrous to begin with.


"Monstrous" is an opinion, not a metric.

I would personally much rather maintain the code in the second example than in the first.


> Screen real estate is limited, especially vertical real estate. Compared to languages with saner error handling, I can read approximately 25% as much Go code at once.

In my experience, people can't actually read everything on the screen at one time anyway, and the more dense/terse things are the harder it is to read (otherwise we would minify everything).

> I'd much rather the program crash by default than attempt to continue with corrupt data by default.

It's not likely that it will continue with corrupt data because you can't use the return value without explicitly ignoring the error. It's not perfect, because there are cases where you want to crash when there is an error but no return value, and Go doesn't help you here. I would like to see this improve, but it's relatively low on my list of qualms with Go (I would rather have sum types, for example). It certainly isn't worth changing languages over especially since, in practice, Go seems to have fewer error handling bugs than exception-based languages.


> In my experience, people can't actually read everything on the screen at one time anyway, and the more dense/terse things are the harder it is to read (otherwise we would minify everything).

Whether or not you can read everything on the screen at one time is missing the point entirely. The point is that context matters, and the more frequently you have to scroll to find it is more cognitive burden.

> It's not likely that it will continue with corrupt data because you can't use the return value without explicitly ignoring the error.

It is far too easy to accidentally do the wrong thing with an error in Go. In Rust, for example, no matter what you want to do with the result of a fallible call, you have to do it explicitly. If you want to crash on error, you `.unwrap()`; if you want to bubble it up, you `?`; if you want to continue with a default value, you `.unwrap_or()` or one of its variants.

> in practice, Go seems to have fewer error handling bugs than exception-based languages

This is based on?


> Whether or not you can read everything on the screen at one time is missing the point entirely. The point is that context matters, and the more frequently you have to scroll to find it is more cognitive burden.

And I disagree. Scrolling IMO is a lot easier than squinting to parse dense code. We have visual structure (indentation blocks and so on) for a reason. The visual structure aids in readability, and indentation blocks help the eye scan quickly over a document. The visual structure in most languages resembles control flow, except some languages make an exception (no pun intended) to this rule for error handling paths which are not easy to see at a glance.

> This is based on?

My experience.


> Scrolling IMO is a lot easier than squinting to parse dense code.

This is a false dichotomy - there's a third option, which is not squinting (because, presumably, you're doing so because you decreased your font size), and being able to see more on the screen at the same time.

Moreover, scrolling is bad for cognition. It's pretty well-known that the human brain likes to use spatial maps - that's the reason why memory palaces are so effective. Scrolling decreases the ability of the brain to make spatial maps compared to, well, not scrolling.

> The point is that context matters, and the more frequently you have to scroll to find it is more cognitive burden.

This is not something you can "disagree" on - divorcing information from context always leads to more cognitive burden.


> This is a false dichotomy - there's a third option, which is not squinting (because, presumably, you're doing so because you decreased your font size), and being able to see more on the screen at the same time.

It's not a false dichotomy. Visual structure (via whitespace) comes at the expense of strict information density (assuming a fixed font size). If this is not true, then we would never have any (syntactically insignificant) whitespace.

> This is not something you can "disagree" on - divorcing information from context always leads to more cognitive burden.

Agreed, but this supports my point. It's a lot easier to scroll and scan visual structure than it is to reparse dense code. Density divorces us from context a lot more than physical distance on a screen.


> Density divorces us from context a lot more than physical distance on a screen.

You mean "unreadable code divorces us from context". "Density" doesn't have anything to do with it until you get to the point where your code is so dense as to become unreadable.

Moreover, "physical distance on a screen" is a strawman. The options aren't density and distance, they're density and not being able to see the code on the screen at all - between which, density is objectively better.

Seeing context is always better than not seeing context, assuming equal readability. Go's verbosity is both less readable and less dense than that of other, better-designed languages.


> You mean "unreadable code divorces us from context". "Density" doesn't have anything to do with it until you get to the point where your code is so dense as to become unreadable.

As density increases, the difficulty of parsing also increases. At a certain, relatively early point, that difficulty rapidly exceeds the costs of scroll-and-scanning.

> Moreover, "physical distance on a screen" is a strawman. The options aren't density and distance, they're density and not being able to see the code on the screen at all - between which, density is objectively better.

Well, we know density is not "objectively better" because scrolling exists (granted, if you have a hard requirement on a code editor that doesn't allow for scrolling, then you should definitely stick with the densest language you can find), and a little scroll-and-scanning is better than parsing dense code.


> At a certain, relatively early point, that difficulty rapidly exceeds the costs of scroll-and-scanning.

"Relatively early" is an unquantifiable statement, but regardless, that point is very far away from Go's design & generally accepted style, so this statement isn't really relevant to the conversation.

Regardless, scrolling exists by necessity, because some things simply can't fit on a single screen. It's still clearly always better to not scroll than scroll, assuming you aren't packing things in super tightly - I shouldn't have to provide evidence for this, but the fact that people don't just randomly clip text so they can add scrollboxes everywhere should be sufficient.

This is all a distraction from my last statement in my previous comment:

> Seeing context is always better than not seeing context, assuming equal readability. Go's verbosity is both less readable and less dense than that of other, better-designed languages.


A long time ago I would shrink my code (C at the time) with a very small font and then just look at the shape of the important files. It was illuminating. The C and hence to some extent Go philosophy is that well written code has a narrative structure. Each file tells a coherent story about one character of the system.


How is that different from an early return? Exceptions basically reuse the existing stack-oriented structure of programs - an exception will do the same thing as if you had returned from that point, unless you use a try-catch block, which again guides the eyes very well. Compared to that, repeating the same pattern will just introduce useless noise that makes the actually important, larger pattern (e.g. manual bubbling up) harder to see.


> > in practice, Go seems to have fewer error handling bugs than exception-based languages

>

> This is based on?

By explicitly annotating functions as fallible the language hints to the programmer that errors need to be accounted for.

With exceptions, the hints only appear at runtime - when your program crashes. There's nothing that nudges you towards handling errors at the point of writing code, so you end up with brittle software.


Checked exceptions are a thing. Java’s implementation is unfortunately not perfect, but exceptions themselves are basically analogous to Rust’s Result type, except with built-in language-level support which packs the stack trace into the error case and auto-bubbles up if not handled.

I believe a language where instead of subtypes you would get algebraic data types and could optionally mark whether a given exception is checked or not would be the ideal solution.


> auto-bubbles up if not handled

I'm guessing you already know this, but for anybody else reading - this isn't entirely accurate; Result::Err doesn't auto bubble up like an exception, you have to manually bubble it up. The "special sauce" comes from (A) the compiler forcing you to notice this and do something about it, and (B) the `?` syntactic sugar to make that super easy.

It does occur to me as I'm typing this that you might be talking about panics, though, in which case yeah that's entirely accurate.


Additionally, the more repetitive code there is, the more opportunities there are for some subtle difference to be lurking in one particular chunk. And with pervasive boilerplate it becomes easier to eyeglaze past that subtle difference. Whether that difference is a bug or intentional, it's important to have code that highlights it by default.


> Screen real estate is limited, especially vertical real estate.

Meh, my IDE squashes short `if err != nil` clauses (GoLand, but I've seen other editors/IDEs/golang plugins do this as well), and I also run a vertical monitor. It's just not enough of an issue to care about. I've seen similar features in editors for other languages that have features or patterns that also create a lot of 'extra bullshit that takes up screen real estate'.


> no idea why people get so worked up about character counts

Think of reading code as mining ore. If the ore is rich, you don't have to mine and process nearly as much of it to get the material you need. If the ore is poor, you have to invest extra effort to mine more ore to get the same amount of refined material.

You might think Go is easy to read because lines are individually very easy to read, but Go code is so information-poor (partly because of error handling boilerplate) that you have to read a lot more lines of it to understand what a system does compared to other languages. Quantity has a quality all its own, and Go does bog you down with its sheer line count. I'm not one who often appeals to this argument, by the way; the only other language I've done significant work in that I would apply it to would be C. It's very common to write bloated, information-poor Java code, but that is still a choice, even if it is the most popular one.

My reaction looking at Go initially was that it was exciting to have a fast, simple language designed for writing services. My reaction to reading and writing code of real applications has been that Go is badly suited for writing nontrivial application logic.

> how often one experiences exception traces when using an application written in Python or Java

Python and Java aren't particularly ambitious standards for a 21st-century language.


> Think of reading code as mining ore. If the ore is rich, you don't have to mine and process nearly as much of it to get the material you need. If the ore is poor, you have to invest extra effort to mine more ore to get the same amount of refined material.

Reading code and mining have nothing in common. In particular, mining technology works best on dense ore, whereas human visual perception requires whitespace to operate efficiently.

> You might think Go is easy to read because lines are individually very easy to read, but Go code is so information-poor (partly because of error handling boilerplate) that you have to read a lot more lines of it to understand what a system does compared to other languages.

I think Go is easy to read because (1) it ranks at the top in my experiences with other languages and (2) because humans are very good at scanning visual structure and less good at parsing arbitrary syntax. Most languages tacitly acknowledge (2) by way of indentation and other syntactically irrelevant whitespace, but they don't apply the same rigor to error handling.

> Python and Java aren't particularly ambitious standards for a 21st-century language.

I was remarking specifically about exception handling. Has there been much innovation in exceptions among 21st-century languages?


I don't think the difference between Go and more expressive languages is about whitespace. If the only way a language achieved fewer lines of code was by cramming more characters into a line, I wouldn't give it any credit for that.

> Has there been much innovation in exceptions among 21st-century languages?

Yes, there has, mostly in the ability to use exceptions less than previously. With Java (at least old-school Java, not sure where it is now) exceptions are the only type safe language-supported way for a function to terminate with multiple types. If a function has multiple possible return types that don't have an inheritance relationship, you can choose between 1) returning Object and dynamically checking for the specific types, 2) defining a return class with a field for each possibility, or 3) picking one type to be the "expected" outcome and defining all other outcomes as exceptional. With 1) you lose many of the benefits of static type-checking; with 2) you get code bloat; with 3) you get the hazards of using a nonlocal control mechanism when you don't want that power.

If your language has sum types, you have a better option for those situations.


Cognitive complexity is not related to character count.

The expression

    let x = f.a()?.b()?;
is exactly as "easy" to parse as the code block of

    y, err := f.a()
    if err != nil {
        return fmt.Errorf("a: %w", err)
    }

    x, err := y.b()
    if err != nil {
        return fmt.Errorf("b: %w", err)
    }
They are effectively equivalent in terms of cognitive load.


I disagree. These two bits of code cater to different ways of reading. The first caters to a happy-path reading, where the reader has the choice to yadda-yadda the error handling or mentally expand it. The second foregrounds the error handling on an equal footing with the other logic.

I like your example, because this is exactly what happened in the application code I had to work with. In an application with complicated business logic, it isn't just one line of code turning into ten like you have here. It's ten lines of business logic turning into forty or fifty, where each operation is separated from the next by multiple lines of error-handling boilerplate.

The trade-off is that in Go code you can see every error path. This is a good trade-off for systems where absolute reliability and rigorous error handling are critical.

In an exception-oriented language the error paths are often invisible. This is a good trade-off for complicated business logic where error handling usually means aborting with an appropriate exception. Think about processing a record or a request where you have to validate the request, look up a few related objects in a datastore, check some business rules, do an authorization check for the requesting user, calculate the result of the request, store the result in a datastore, and produce a response. Each step can be written in a couple of lines of code that are hopefully pretty understandable if you have good names, like this:

    validate_request(request)
    user = fetch_user(request.for_user_id)
    authorize_user(user, Privileges.CanFoo)
    dingles = fetch_dingles_by_dongle_group(request.dongle_group_id)
    unfooable_dingles = dingles.filter(not_fooable)
    if (!unfooable_dingles.is_empty()) {
        throw BadRequestException("This dongle group contains unfooable dingles")
    }
    fooment = calculate_total_fooment(dingles)
    fooment_store.save(fooment)
    return DongleGroupFooed(request.dongle_group_id)
From one point of view, this code is nice. You can read these lines of code quickly and see what the basic request handling logic is. It reads like a story.

From another point of view, this code is terrible. A lot of different things can go wrong here, and only one of them is visible. What happens if the user can't be found? What happens if the user isn't authorized? What happens if the dongle group id can't be found? If the wrong exception is thrown, the wrong result will be reported for this request. You have to navigate to other functions to check that. If that makes it bad code for you, then you'd probably rather be writing Go. In Go, these eleven lines would turn into thirty to fifty lines of code. The handling of each error would be visible, at the cost of the happy path being harder to follow.
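
To make that concrete, here's a rough sketch of just the first three of those steps in Go. The helpers (validateRequest, fetchUser, authorizeUser) are hypothetical stand-ins for the pseudocode above, not anyone's real API:

    if err := validateRequest(req); err != nil {
        return nil, fmt.Errorf("validate request: %w", err)
    }

    user, err := fetchUser(req.ForUserID)
    if err != nil {
        return nil, fmt.Errorf("fetch user: %w", err)
    }

    if err := authorizeUser(user, PrivilegeCanFoo); err != nil {
        return nil, fmt.Errorf("authorize user: %w", err)
    }
Every failure is visible and wrapped, but three business steps are already a dozen lines before you even reach the dingles.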


You're absolutely spot-on with this analysis. And it is a core assertion of Go that non-explicit error handling produces less reliable programs. It's a value judgment, and a subjective one.


The solution space here isn't "go vs exceptions," it's "errors as values vs exceptions (vs "let it crash" vs...)," and Go isn't the only implementation of errors as values.


Of course. I wasn't trying to imply anything to the contrary. But given the prevalence of exceptions, if Go's error handling is performing on par or better, then it seems pretty ridiculous to characterize Go's error handling as "shit". Don't worry Steve, I think Rust's error handling is pretty cool!


I do agree that exceptions feel like the worst of the various bits of the problem space, to me, but just to be extra clear about it, I have never written a significant amount of Go, and therefore don't really have a very strong opinion about its error handling.

And error handling is such a huge and interesting problem space! I've long wondered about why I didn't like checked exceptions in Java but do like errors as values, for example.


I'll take checked exceptions over Go's multiple return values error convention any day.


Agreed! For something as pedestrian as error handling, it's always surprising to me how much it seems there remains to explore.


Character count does matter, but I still don't see how it makes code more readable if all the significant operations are supposed to happen in the condition block of an if statement.


> It requires several additional lines of code just to bubble up an error

I mean people hate exceptions for a reason (other than in Java world).


And what is that reason?


Errors as return values is only acceptable for code that is so performance-sensitive that you aren't allowed to do dynamic memory allocations. For everything else, conditions+restarts are the correct answer, because errors-as-values restricts you to a single error-handling strategy and couples high-level code to low-level code as a result.


> Errors as return values is only acceptable for code that is so performance-sensitive that you aren't allowed to do dynamic memory allocations.

Exception-based code can be zero-cost when no errors occur, at an increased error-case cost. Using error values pessimises this and increases branch prediction load (as every call site is now a branch).

So in the common case where errors are extremely rare, exceptions-based error handling can be quite a bit faster than return-value error handling.

Which doesn't mean it's preferable, but beware thinking that return values are faster.


This is an extremely good point - thank you for your correction!


Fun history fact: Rust had conditions, a very very very long time ago, but folks didn't use them and found them vaguely confusing, so they were removed.


This doesn't mean very much. Most people find the Rust borrow checker "vaguely confusing" (if not very confusing), and most people also wouldn't use it were it not strongly suggested by both the compiler and the community ("suggested" as unsafe Rust exists, but you of all people are aware of that).

Conversely, I understand condition systems, and I'm not a very good programmer (I've tried and failed to learn Rust once already). That's a pretty low upper bound on how hard they are, especially relative to advanced features of languages like Haskell.

We're very fortunate that programming language design doesn't advance solely by giving people more of what they already use.


I don't mean to say anything about conditions generally, just to mention that they have existed in non-Lisp languages on occasion.

> We're very fortunate that programming language design doesn't advance solely by giving people more of what they already use.

Agreed!


Doesn't adding conditions/restarts require your language to have stackful continuations? Those are quite complicated unless you don't bother to make them safe (like C doesn't bother to with setjmp/longjmp). I would be nervous about them for the same reason I am about exceptions.

Also, the restart seems to imply that you should "handle" the error, but I think this is really overemphasized because what are you actually going to do about it? There's nothing to do about a lot of errors except die, but this encourages programmers to just make something up they think might help.


> Errors as return values is only acceptable for code that is so performance-sensitive that you aren't allowed to do dynamic memory allocations

Not really. It is acceptable for code whose maintainers value readability and simplicity over everything else. I totally agree that readability and simplicity are quite subjective and this is up to the maintainers.

I don't really know what "conditions+restarts" is, but a few articles landed me in LISP, which I find totally unreadable. So, can you point me to some "conditions+restarts" code that I can understand/appreciate easily? Any language is fine, I just want to understand the concept better since I am more a C/C++/JS programmer. (FWIW, never written Go, but it's easy to read and understand).


It's probably sufficient for this conversation to just understand it as try-catch. A function is invoked; if it "signals" (throws) then control moves to a handler that matches the signal (exception); the handler runs and resolves the situation. Of course, Lisp being Lisp, the system is extended to announcer voice FULL. GENERALITY. but in its simplest form it's basically equivalent to exception throwing.


This is correct (in that the most simplistic case is try-catch).

However, the difference between a try-catch and conditions / restarts is that when one signals a condition (exception), the restart (catch) has a continuation from the condition. This allows you to inject an expression into the location where an exception occurred and "restart" your code from that point.

Whether you do such a thing or not depends on the code, on the type of condition raised, and on what expressions are valid. So you get a lot more flexibility in how errors are handled across the system. But likewise: more complexity in having to make that choice in the first place.

Going farther than this, conditions and restarts are really just a fancy way of packaging delimited continuations. I don't personally know any non-Lisp language that has attempted to package these concepts (maybe Dylan, which is a Lisp-like in its own way but without the syntax?). Going back to the original thought regarding error handling - I think Result<T, Err> type handling is fine and that most languages would be better served by that than having different types of exceptions. Conditions and restarts are powerful but your language has to be very expression-focused (i.e. does not use a lot of statements) and it's not really clear that there's been a lot of work on making restarts nice to use. Exceptions in all languages that have them have their own set of associated problems, for what it's worth, and it's not as easy to move Lisp features into a non-Lisp as one might believe...


> Going farther than this, conditions and restarts are really just a fancy way of packaging delimited continuations. I don't personally know any non-Lisp language that has attempted to package these concepts (maybe Dylan, which is a Lisp-like in its own way but without the syntax?).

Dylan does have a condition system, but it’s basically a Lisp without the parens, so probably doesn’t count. On the other hand, algebraic effects are another fancy way of packaging delimited continuations, so arguably the research languages Eff[1] and Koka[2] tried. (I don’t think either one explored the connection with condition systems, but I’m not sure.)

> I think Result<T, Err> type handling is fine and that most languages would be better served by that than having different types of exceptions. Conditions and restarts are powerful but your language has to be very expression focused (i.e. does not use a lot of statements) [...]

Huh? I don’t know why you’d say that, if anything I think it’s the Either err t / Result<T, Err> style that is more expression-focused (I mean, it even originates in Haskell :). I wouldn’t even call Common Lisp particularly expression-oriented, honestly, not unless we’re comparing with plain old C and not Rust.

[1] https://www.eff-lang.org/

[2] https://koka-lang.github.io/


> Huh? I don’t know why you’d say that, if anything I think it’s the Either err t / Result<T, Err> style that is more expression-focused (I mean, it even originates in Haskell :). I wouldn’t even call Common Lisp particularly expression-oriented, honestly, not unless we’re comparing with plain old C and not Rust.

I think that's exactly what I mean. The vast majority of languages (including golang, in TFA) use statements for dealing with exceptions. Rust also had try-catch, but has long since removed that syntax.

Anyways, the reason I said it is because it is not clear what to do when one wants to restart a statement. There are plenty of non-expressions that can throw, and usually it's not thought about deeply, but from a language semantics point of view one does need to have an idea of how to engage with it. For example, if you wrote:

    with open('somefile') as f:
        for line in f: 
            # ...
in Python, and had to deal with a restart during `open`, how do you manage this? The naive answer is to just return the continuation at `open`, but the "with" statement may have contextual setup. For example, `open` might be fine during `__init__`, but may have failed in `__enter__`. If you "restart" in `__enter__`, you need to deal with the partial state. Expression-based languages don't really have this issue because the call stack is usually clear (there's no magic under the hood). Similar analogues would be the `using` keyword in C#, or perhaps even lambda-expressions in C++. The abstraction in the code is separated from the execution of the restart, so it gets kind of gross as a language implementer in terms of not having to have very specific places where restarts can and cannot be.

This is a good reason why Rust / Haskell don't package these and just use Either / Result instead. If you have a bunch of types that you didn't write, injecting a restart into any failing code now brings a question of: "Can you safely inject types into a restart for code that you do not have access to?" and the answer is often no. The visibility rules in Rust make this a non-starter, and in Haskell you have a problem of mutability as well. A condition may be triggered at a point where IO could be injected, and so many of the language semantics would be in question. I suspect the type definitions for a restart in any arbitrary location in the code would be pretty hard to write, so maybe this is an open research area in Haskell already, but I doubt it'd be as ergonomic.


> It is acceptable for code whose maintainers value readability and simplicity over everything else.

Errors-as-return values are less readable than conditions, not more - there's literally more visual noise on the screen.

And if you want "simplicity", don't use a computer. Computers are intrinsically complex devices, users desire features with complex implementations, and our job as programmers is to manage complexity, not pretend that it doesn't exist. One of the article's main points is that Go does the latter in lieu of the former, and that's also what errors-as-return-values does.

----------------------------------

The formal name for a condition+restart system appears to be "algebraic effects"[1].

Conditions and restarts are similar to exceptions, with the following changes:

First, conditions are conceptually used for non-error conditions in some cases, like what Python does.

Second, throwing a condition doesn't cause the stack to unwind up to the handler, unlike exceptions.

Third, in addition to throwing conditions, you, uh, wrap ("establish" is the jargon used) code in what are called "restarts", similar to wrapping things in try/catch blocks (but distinct, because with conditions you still have condition handling blocks). Restarts can have names and are non-mutually-exclusive. Conceptually, restarts represent error-recovery strategies, while conditions represent the errors themselves.

Fourth, when a condition is thrown, it propagates upward until it hits either the toplevel (in which case the interactive debugger is launched), or it hits a condition handler - without unwinding the stack. Then, either the human looking at the debugger can pick which restart they want to use, or the logic at the condition handler can do so.

Why is this better than any alternative error-handling mechanism? Because every other error-handling mechanism (1) unwinds the stack (destroying all contextually useful information that isn't explicitly saved by the programmer, and preventing you from restarting a computation in the middle) (2) forces you into a single error-recovery strategy and (3) couples low-level code to high-level code as a result.

In general, low-level code has details about the specific kind of error, context around it, and access to data and control flow that would allow the error to be recovered from (e.g. for a log-processing program, reasonable restarts while parsing a log entry would be (1) skip it (2) retry (3) use an alternative parser and (4) return an empty entry), while high-level code has the application context about why the low-level operation is being performed in the first place and which error-recovery option should be picked.

Conversely, high-level code doesn't have details about what the low-level code was doing at the time of the error, and low-level code doesn't have the high-level context necessary to determine which error recovery strategy is appropriate in this use of the low-level code.

[1] https://en.wikipedia.org/wiki/Effect_system

https://en.wikipedia.org/wiki/Exception_handling#Condition_s...


> Errors-as-return values are less readable than conditions, not more - there's literally more visual noise on the screen.

No.

When you make a function call, and that call can fail, then the happy-path and the sad-path are both things that you need to manage as a caller. Happy-path and sad-path are two equivalent states that both need to be accommodated by the program logic.

Error handling code is not "noise". It is equally important to success-path code.


> No.

Yes. There is literally more visual noise on the screen. This is not up for debate - more pixels are lit on the monitor you are looking at.

> When you make a function call, and that call can fail, then the happy-path and the sad-path are both things that you need to manage as a caller.

False - the direct caller is not responsible for error-handling, in general - some transitive super-caller will be. Errors as return values needlessly generate this visual noise for every caller, when not needed, in addition to introducing aforementioned coupling.

> Error handling code is not "noise". It is equally important to success-path code.

You're misunderstanding my point. I never said that error-handling code is noise - it isn't. What is noise is forcing every single function call between the appropriate error-handling point and the error location to have extra useless junk. When there's an error, you should see exactly two things in your codebase: some stuff at the point where the error is thrown, and some stuff at the point where the error is handled - and, given that the place where the error should be handled is rarely the direct caller, you should see nothing in between.


Error handling is not visual noise. It is equally important to non-error-handling code paths.

The direct caller is _absolutely_ responsible for error handling.

> What is noise is forcing every single function call between the appropriate error-handling point and the error location to have extra useless junk.

No. Fallible operations must be managed by the thing which calls them. Anything else is shadow control flow, which subverts understanding and negatively impacts reliability.


> Error handling is not visual noise.

You clearly did not actually read my previous comment before replying to it. Let me quote it:

"I never said that error-handling code is noise - it isn't. What is noise is forcing every single function call between the appropriate error-handling point and the error location to have extra useless junk. When there's an error, you should see exactly two things in your codebase: some stuff at the point where the error is thrown, and some stuff at the point where the error is handled - and, given that the place where the error should be handled is rarely the direct caller, you should see nothing in between."

Please read this carefully and respond to it.

> The direct caller is _absolutely_ responsible for error handling.

This is objectively false, both on an empirical level, and on a theoretical one.

On the empirical level, it's trivial to find dozens of instances of code on the internet where it's crystal clear that the direct caller of an erroring function is not responsible for error-handling.

Here's one: on line 1471 of emacsclient.c[1], a call to connect() may fail - yet the caller, set_local_socket(), is clearly not responsible for e.g. quitting the application, because only its caller, set_socket()[2], has the contextual information necessary to know that quitting should not happen unless the attempts to open local UNIX domain and network sockets to the Emacs server also fail.

That's it - counter-evidence to your claim. It's straight-up false.

But, let's go and find a few more examples.

Here[3] is a random screenshot of a Python error trace that I found on the internet. You see that bottom frame, listen()? It's calling the erroring function sock.bind(addr). Yet, it's pretty clear that listen() isn't the right place to handle the error - it's in the user's application, "ryu", because again, only that code has the contextual information necessary to determine the correct way to handle the error.

Here[4] is another Python error trace - again, it's pretty clear that the place to handle the erroring getattr() call is not in its direct caller bind() in socket.py, but in the user application in siriServer.py.

Finally, here's some Lisp code. The Hunchentoot web server has a ENSURE-PARSE-INTEGER function[5], which can fail if it receives a non-integer to parse. But, it simply doesn't have the contextual information necessary to handle the error, because it's called by URL-DECODE[6], which is called by FORM-URL-ENCODED-LIST-TO-ALIST[7], which is called by MAYBE-READ-POST-PARAMETERS[8], and that is where the error handling can, should, and must occur.

It's crystal clear - errors are not required to be (or always capable of being) handled at the call site of the erroring function, and the reason for this is simply because context gets lost as you travel down the call stack, so that the point at which an error occurs often simply doesn't have the necessary context to recover from it correctly.

> Falliable operations must be managed by the thing which calls them.

Also false. Look at every one of the code examples I've linked. Go and look at code in general, actually.

> Anything else is shadow control flow, which subverts understanding and negatively impacts reliability.

It sounds like you don't understand exceptions very well. Go and read some code with exceptions - you'll see that the idea is extremely straightforward. Exceptions are very simple - they bubble up through the stack until handled, and that's it. They're far easier to understand than first-class functions, coroutines, monads, or any of another dozen different software engineering concepts that are also being put to extremely good use.

[1] https://github.com/emacs-mirror/emacs/blob/3af9e84ff59811734...

[2] https://github.com/emacs-mirror/emacs/blob/3af9e84ff59811734...

[3] https://i.ytimg.com/vi/CryQPaz8UO0/maxresdefault.jpg

[4] https://serverfault.com/questions/476715/python-socket-error...

[5] https://github.com/edicl/hunchentoot/blob/0023dd3927e5840f1f...

[6] https://github.com/edicl/hunchentoot/blob/0023dd3927e5840f1f...

[7] https://github.com/edicl/hunchentoot/blob/0023dd3927e5840f1f...

[8] https://github.com/edicl/hunchentoot/blob/18d76801150330a579...


> on line 1471 of emacsclient.c[1], a call to connect() may fail - yet the caller, set_local_socket(), is clearly not responsible for e.g. quitting the application, because only its caller, set_socket()[2], has the contextual information necessary to know that quitting should not happen unless the attempts to open local UNIX domain and network sockets to the Emacs server also fail.

Line 1471 describes a failure condition, which is returned to the caller of the encapsulating function, which in this case is line 1374, set_local_socket. The caller which invokes the function set_local_socket absolutely is responsible for handling that failure condition. The caller is not set_local_socket, the caller is the code which invokes set_local_socket. And if set_local_socket fails, the code which invoked set_local_socket is absolutely responsible for determining what to do. Quitting the application is a decision that only `func main` can choose to make! All other points in the call stack can only bubble the error up to their caller. That's the only rational course of action.

> Exceptions are very simple - they bubble up through the stack until handled . . .

I agree that this is "simple" in one sense. The problem is that this "simplicity" means that there are two mechanisms of call stack control flow. One is the code as it exists "on the page" -- function calls and return statements -- and another is the exception control flow -- everything expressed as throw/catch statements. This is two control flow paths: one visible in the code, and another invisible, or implicit, to the code as written. It should not be controversial to say that removing the concept of exceptions makes control flow easier to understand, to model, and to predict, and therefore makes the behavior of programs in general easier to reason about.


Maybe this is the fundamental argument here. There are plenty of cases where I have written code that does something over a big set of things, e.g. check a file for hard-coded paths, or send a message to a lot of people; it isn't weird for those things to fail. Maybe I couldn't open the file. Maybe the file lacked hard-coded paths. Maybe the sender lacked rights to send to that receiver, or maybe the receiver is currently offline. But if most of your code is some complex calculation, say a weather simulation, maybe there is by default just one path.


> I don't really know what "conditions+restarts" is but a few articles landed me into LISP which I find totally unreadable. So, can you point me to some "conditions+restarts" code that I can understand/appreciate easily?

You’ll have to read Lisp, I’m afraid; the best description I know is in the book Practical Common Lisp[1].

(Come on, Lisp syntax is quirky, but it’s not unreadable, and unlike APL or Forth or even Haskell it doesn’t require you to memorize a bunch of semi-meaningless punctuation before you can understand what is going on—it’s pretty wordy usually. I’m not saying you must bring yourself to love writing (f x y) instead of f(x, y), only that adjusting from one to the other should not be particularly hard.)

I mean, I have done a toy Forth implementation, but that is hardly more readable with no experience with the language.

One system that is almost conditions and restarts is 32-bit(!) Win32 SEH, but it is not particularly well-documented and the language bindings usually try rather hard to hide that (though, if you think about it, On Error Resume Next from classic VB is unimplementable on top of bare try/catch).

...

OK, you nerd-sniped me :) Here’s a toy (no subtyping! no introspection! no condition firewall[2]! no tracebacks! no support for native errors! etc.) condition system in Lua (sorry, nested functions in Python are painful):

  -- save as cond.lua
  
  local M = {}
  
  local error, unpack = error, unpack or table.unpack
  local running = coroutine.running
  local stderr = io.stderr
  local exit = os.exit
  local insert, remove = table.insert, table.remove
  
  -- conditions
  
  local handlers = setmetatable({}, {
      __mode = 'k', -- do not retain dead coroutines
      __index = function (self, key) -- no handlers by default
          self[key] = {}; return self[key]
      end,
  })
  
  local function removing(xs, x, ok, ...)
      assert(remove(xs) == x)
      if ok then return ... else error(...) end
  end
  
  -- establish a handler during call
  function M.hcall(h, f, ...)
      local hs = handlers[running()]
      insert(hs, h)
      return removing(hs, h, pcall(f, ...))
  end
  
  -- signal the given condition to currently active handlers
  function M.signal(...)
      local hs = handlers[running()]
      for i = #hs, 1, -1 do hs[i](...) end
  end
  local signal = M.signal
  
  function M.error(...)
      signal(...)
      stderr:write("error: " .. tostring(...) .. "\n")
      exit(1)
  end
  
  function M.warn(...)
      signal(...)
      stderr:write("warning: " .. tostring(...) .. "\n")
  end
  
  -- restarts
  
  -- invoke the given restart
  function M.restart(r, ...)
      local n = select('#', ...); r.n = n
      for i = 1, n do r[i] = select(i, ...) end
      error(r)
  end
  
  local function continue(r, ok, ...)
      if ok then return ok, ... end
      if ... == r then return false, unpack(r, 1, r.n) end
      error(...)
  end
  
  -- establish a restart during call
  function M.rcall(f, ...)
      local r = {}
      return continue(r, pcall(f, r, ...))
  end
  
  return M
Example: DOS-style abort-retry-ignore prompt implemented in the shell with some support in the (mock) I/O system and no support in the application:

  local cond = require 'cond'
  
  -- common condition types (XXX should use proper dynamic variables instead)
  
  local retry, use = nil, nil
  
  -- I/O library
  
  local function _gets()
      if math.random() < 0.5 then cond.error 'lossage' end
      return 'user input'
  end
  
  local function gets()
      local ok, value = cond.rcall(function (_use)
          use = _use
          local ok, value
          repeat ok, value = cond.rcall(function (_retry)
              retry = _retry
              return _gets()
          end) until ok
          return value
      end)
      -- ok or not, we got a value either way
      return value
  end
  
  -- application (knows nothing about errors)
  
  local function app()
      for i = 1, 5 do print(string.format("got: %q", gets())) end
      return "success"
  end
  
  -- shell
  
  local ok, value = cond.rcall(function (abort)
      return cond.hcall(function (err)
          io.stderr:write("I/O error: " .. err .. "\n")
          while true do
              io.stderr:write("[a]bort, [r]etry, [u]se value? ")
              local answer = io.read('*l')
              if answer == 'a' then cond.restart(abort, "aborted") end
              if answer == 'r' then cond.restart(retry) end
              if answer == 'u' then
                  io.stderr:write("value? ")
                  cond.restart(use, io.read('*l'))
              end
          end
      end, app)
  end)
  print(ok, value)
This is not a perfectly accurate semantic model for real condition system, but it should be enough to give a general idea of how these things work and what the advantage over bare unwinding mechanisms like try / throw or Lua’s pcall / error is.

[1] https://gigamonkeys.com/book/beyond-exception-handling-condi...

[2] https://www.nhplace.com/kent/Papers/Condition-Handling-2001....


Error handling is basically orthogonal to performance.

If a function call can fail, it should return an error, and that error should be managed by its caller. Any other approach means callstacks are unpredictable, which makes a program way way harder to model.


> that error should be managed by its caller

Objectively false. Correct answer: the error should be managed by the code that makes sense to handle the error.

> Any other approach means callstacks are unpredictable

Also false. I use conditions regularly, and my callstacks are very predictable - errors bubble upward through the call tree until they're handled. There's nothing simpler.

> which makes a program way way harder to model

Also false. I have no problem at all modeling and understanding my code rife with conditions.


You're wrong on every point. I don't know how to speak to you given the strength of your conviction, so shrug


I think the fact that you didn't actually refute any of my points speaks volumes about which of us is correct.


What do you mean by "conditions"?

> errors bubble upward through the call tree until they're handled. There's nothing simpler

Between

    output_a := f1(input_a)  // f1 throws exceptions on error
    output_b := f2(output_a) // f2 throws exceptions on error
and

    output_a, err := f1(input_a)
    // inspect and deal with err
    output_b, err := f2(output_a)
    // inspect and deal with err
it seems uncontroversial to me that the latter is less complicated than the former, and makes control flow paths easier to model.

Exceptions make sense in a lot of contexts! Just not services, servers, or long-lived processes.


The problem is that a system of "conditions + restarts" approaches the generality of fully async code. Go can of course do async well enough via its goroutines.


For starters, there is no way to consume multiple return values of a function or method inline. This makes chaining extremely verbose and often results in having three lines of error handling per one line of "normal" logic.

Secondly, Go creates a silly dichotomy by introducing two completely different mechanisms for error processing: error values and panics. (Soon to be three, because people will start using generics.)

Thirdly, "errors are values" approach is extremely counterproductive when you have to create generic error handling (with logging, default behaviors on failur,e etc). Something as simple as printing why a web page panicked becomes an exercise in cleverness.

Go enthusiasts will probably say none of this matters if you follow some set of "good practices". However, even core language libraries often fail to handle errors consistently. (E.g. text/template.)
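
To make the third point concrete: generic "log why this page panicked" handling ends up as a defer/recover wrapper around every handler. A minimal net/http sketch (the name Recover is made up, not from any framework):

    package middleware

    import (
        "log"
        "net/http"
        "runtime/debug"
    )

    // Recover wraps a handler so that a panic anywhere downstream is logged
    // with a stack trace and converted into a plain 500 response.
    func Recover(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if rec := recover(); rec != nil {
                    log.Printf("panic serving %s: %v\n%s", r.URL.Path, rec, debug.Stack())
                    http.Error(w, "internal server error", http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }
It works, but note that it lives entirely outside the error-values system: panics and returned errors have to be caught by two different mechanisms.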


f(g()) does work if g returns multiple values and f takes that many args, but f(g(), h()) always requires that g and h each return a single value.
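
A minimal illustration of that rule, with made-up functions:

    package main

    import "fmt"

    func g() (int, error)       { return 1, nil }
    func sum(a, b int) int      { return a + b }
    func show(n int, err error) { fmt.Println(n, err) }

    func main() {
        show(g()) // OK: g's two results map exactly onto show's two parameters

        // sum(g(), g()) // does not compile: a multi-valued call must be the
        //               // only argument, so each result has to be unpacked first
        a, _ := g()
        b, _ := g()
        fmt.Println(sum(a, b))
    }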


> Go error handling is shit.

See, if you actually think this, then there can be no common ground here. Just use other languages and stop complaining.


I don't dislike Go myself but it taught me something important purely due to timing. It's the first language I wasn't late to the party for and got to watch grow up.

But I also watched people skilfully build the same towers of excrement that exist on all other platforms and languages, yet again, in exactly the same way with very little innovation or thought. The end game is that Go is just another loop in the endless cycle of technology replacement which we fail to improve on.

Ergo the language, tooling, wild ride is almost irrelevant and we should look at what we're building with it.


What would be a good alternative to Go, with

- large and well maintained standard lib
- great runtime characteristics, esp. reasonable memory usage
- developer ergonomics
- maturity
- long term stability
- preferably managed memory

I really have used a lot of languages. They all have some downsides. I like Kotlin a lot but the JVM is just too cumbersome and resource heavy. Gradle is way too complex. Rust is way too cumbersome to write and lacks the standard lib. The only one that comes to my mind is actually .NET Core (C#). But I don't know how mature it is on Linux for your standard backend.


I'm not up to date on Core but .NET/C# on Windows with Visual Studio is by far the best development experience I've ever had. It's so mature and well integrated. If Core offers 10% of that I'd highly recommend looking into it.


I really like C#, and F# is one of my favorite languages. I just really hate having to manage csproj files, and I miss Go’s compilation speed and compact binaries. C# is pretty decent on Linux, though, so I’d at least kick the tires a bit.



I've been tempted to consider kotlin + graalvm as an option as well. Graalvm is getting pretty cool, but it does add some complexity, and it's too tied to Oracle in my opinion...

C# might be cool again as well though, haven't really considered that since before .NET core.

(And I think Go is ok, but I find coding in kotlin more enjoyable)


How is the JVM cumbersome? I do understand your concern about it using more memory (though I feel it is often overblown. Sure, it’s not a hand-optimized C executable, but the performance is top-notch and the memory usage can be very well controlled)


The binaries you get from GraalVM are bigger than Go's and if you are doing something like hosting a bunch of little services on a cheap VPS, Go definitely is nicer there.


I think Go is a great tool, if you use for the problems it was intended to solve. Replacing something you otherwise would have built with C, C++ or maybe Java.

The problem is when people try to use it for *everything*, like web applications or just the backend of web applications, where you need a TON of "webby" stuff such as sessions, authentication, ORMs, validations, translations, etc. For these use cases, boring and more dynamic languages such as Python, Ruby, PHP or JavaScript are much better options.

If you use Go because otherwise you'd had to use C or C++ and you're writing a database, a DNS server, a command line application, or some hardcore infrastructure service, it's an awesome tool and a big improvement, definitely go for it, will make everything easier.

If you use Go out of hate for other languages, because you think it is better than everything else for every purpose and only dinosaurs use scripting languages, etc., etc. (my experience with most people around me using Go), then you're using the wrong tool.


I liked the article. I think it was really well written and these are great points. So if not Go, then what's the alternative? I too am starting to feel a bit burnt out by some of Go's deficiencies, but one of the things I really like about Go is its concurrency model. What other languages have great concurrency models? Please keep in mind that I want to keep things simple... having a single binary to deploy is incredibly efficient.

I've been wanting to use Crystal. Does anyone have experience running Crystal in production? Without getting into the specifics, the Go parts of my personal project are a websocket server that upgrades when a user is authenticated and authorized, and their actions spawn jobs in a queue. I have processors (written in Golang) that take these tasks and perform somewhat complex async actions. Would Crystal fit the bill here? I'd really like to avoid getting into Elixir/Erlang world or any kind of interpreted language that will complicate the deployment process (like Deno or Ruby).


I’ve kicked the tires with a lot of languages, and always end up coming back to Go. It’s a pretty subpar language, but it hits a sweet spot that nothing else does: fast builds, decent tooling, relatively small single binary deployment, pretty good performance out of the box, decent stdlib. I could go on.

I want to use Ocaml or a Lisp or a number of other languages, but Go really does the entire package better than anything I’ve looked at.


Nobody will suggest Java, but honestly Java. Java has pretty great flexibility when it comes down to concurrency. It sets the standard which all the other languages have to compete with.

Apart from that, I like languages and frameworks that scale out of their box and into the world of distributed systems. Elixir is very cool, because the concept of distribution with message passing is built into the core of the language. Alternatively, Scala's Akka Framework operates on a similar concept of passing messages to Actors. My bias here is that I don't like to deal with very large super-computers, and would rather have multiple smaller machines that are allowed to blackout occasionally. This is especially relevant for websockets because the stateful connection to a client can be made on any single machine in a cluster, but the state itself is (often) global.


I'm going to sound very elitist, but there is 0 chance I'm going to use Java. Aside from my own bad experiences with Java in a professional setting, .NET Core is now a thing, and I'm actually very experienced with C#, so I'd rather just use C# at that point rather than become proficient with Java and the JVM.


Rust is the obvious language to consider. Its concurrency model is arguably better than Go’s, as it actually statically prevents you from misusing types that aren’t thread-safe.


To be frank I really don't want to deal with Rust's complexity.


> I'd really like to avoid getting into Elixir/Erlang world or any kind of interpreted language that will complicate the deployment process

For what it's worth, the deployment story for Elixir is great. Erlang has a built-in concept of "releases", which Elixir supports natively.

In essence, it compiles all dependencies and outputs a directory containing the VM, and all the compiled bytecode. After that, you just copy the directory to /app on your runtime container (or whatever other kind of deployment method you're using), and it runs fine from there.


That's pretty interesting. I haven't worked with Elixir in many years, I'll take a look.


I like Nim or Zig so far as my home project alternatives to Go...


Golang has generics now and I'm already using them. They're great! But that's an obvious point.

I use Golang professionally every day, and these aren't the sorts of things that I run into. Then again, we run everything on Linux.

One thing I wish Golang had was the ability to easily deep clone things. Especially if you have a struct with nested struct _pointers_.

Options now are manually doing it field-by-field (error prone, if a new field is added you might miss it); using reflection (yuck!); or marshalling to JSON and back again.
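
For the JSON round trip, a rough generic sketch (my own helper name; unexported fields are silently dropped, and anything JSON can't express, like channels, funcs or cyclic pointers, won't survive):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // DeepCloneJSON copies a value by marshalling it to JSON and back.
    func DeepCloneJSON[T any](in T) (T, error) {
        var out T
        b, err := json.Marshal(in)
        if err != nil {
            return out, err
        }
        err = json.Unmarshal(b, &out)
        return out, err
    }

    type Inner struct{ N int }
    type Outer struct{ Child *Inner }

    func main() {
        orig := Outer{Child: &Inner{N: 1}}
        clone, _ := DeepCloneJSON(orig)
        clone.Child.N = 2
        fmt.Println(orig.Child.N, clone.Child.N) // 1 2: the nested pointer was copied too
    }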

Another thing is when reading to/from JSON, being able to differentiate between a field which doesn't exist ("undefined" in Javascript), or is null. Unfortunately some third-party APIs I interact with treat "null" as a command to remove the field. But if you use `omitempty`, you can't have null at all.
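
One decoding-side workaround is a small wrapper type: encoding/json only calls UnmarshalJSON for keys that are actually present, so presence and null can be recorded separately. A rough sketch, not from any library:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Nullable records whether a key appeared at all and, if so, whether it was null.
    type Nullable[T any] struct {
        Set   bool // key was present in the JSON
        Valid bool // key was present and not null
        Value T
    }

    func (n *Nullable[T]) UnmarshalJSON(data []byte) error {
        n.Set = true
        if string(data) == "null" {
            return nil
        }
        if err := json.Unmarshal(data, &n.Value); err != nil {
            return err
        }
        n.Valid = true
        return nil
    }

    type Patch struct {
        Name Nullable[string] `json:"name"`
    }

    func main() {
        var missing, nulled, set Patch
        json.Unmarshal([]byte(`{}`), &missing)
        json.Unmarshal([]byte(`{"name":null}`), &nulled)
        json.Unmarshal([]byte(`{"name":"x"}`), &set)
        fmt.Println(missing.Name.Set, nulled.Name.Set, nulled.Name.Valid, set.Name.Value)
        // Output: false true false x
    }
The encoding direction is still awkward, though: omitempty never treats a struct as empty, so choosing between "omit the key" and "send null" still needs custom marshalling.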

Other than that though, it's a treat to work with!


Go's main selling point is that it is extremely easy to deploy and it has good standard libraries for most routine programming problems. It works really well for providing web services running on Linux, but I can see why you wouldn't like it if you were running on Windows.


My main takeaway from the blog post was really how much Windows sucks, not that Go is bad haha


A lot of the criticism is fair, but not the part about runtime OS detection. It is vastly preferable to conditional compilation since it is much easier to test. With conditional compilation you introduce code paths that are only traversed on, say, Windows. If most developers use, say, Linux, you will introduce difficult-to-debug system-dependent bugs. This goes for similar things like SIMD extensions too. If possible, use runtime checking over conditional compilation.
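
A minimal illustration of the two styles (configDir is a made-up example, not a real API):

    package main

    import (
        "fmt"
        "runtime"
    )

    // Runtime detection: both branches are compiled and type-checked on every
    // platform, so Linux-only CI still at least compiles the Windows branch.
    func configDir() string {
        if runtime.GOOS == "windows" {
            return `C:\ProgramData\myapp`
        }
        return "/etc/myapp"
    }

    // The conditional-compilation alternative is a //go:build windows tag at
    // the top of a separate file; that file is simply invisible to builds,
    // tests and tooling on every other platform.

    func main() {
        fmt.Println(configDir())
    }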


What? Even with runtime os detection, you won't be running linux code paths on windows. However you do it, you can't test linux apis on windows or windows apis on linux natively.


Yes, you can accomplish that using proxies and/or using the Wine libs.


Platform specific code is typically implemented in Go using build tags, which are a less flexible form of conditional compilation.


I'm baffled that the Go designers for a long time basically considered generics to be harmful - but had no problems adding reflection to the language.


Reflection is essential for things like code generation and introspection in go. It isn't something you can classify as wholly harmful. It is of course possible to write bad code using reflection, but not having reflection isn't the solution. https://go.dev/blog/laws-of-reflection
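
For instance, the kind of introspection encoding/json relies on boils down to walking struct fields and tags at runtime; a minimal sketch with a made-up User type:

    package main

    import (
        "fmt"
        "reflect"
    )

    type User struct {
        Name string `json:"name"`
        Age  int    `json:"age,omitempty"`
    }

    func main() {
        t := reflect.TypeOf(User{})
        for i := 0; i < t.NumField(); i++ {
            f := t.Field(i)
            // Field names and struct tags are only reachable at runtime,
            // which is exactly what encoding/json depends on.
            fmt.Printf("%s -> %q\n", f.Name, f.Tag.Get("json"))
        }
    }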


I'm a Java guy mostly, so I might be biased, but in that language the vast majority of overengineered, arcane and plain "black magic" code involved reflection - whereas I almost never had a problem with generics.

So I'm surprised that Go considers generics more harmful than reflection.


"Cross platform handling is bad."

That's all I've had the energy to extract. Isn't cross-platform software always a nasty compromise between not being able to do anything useful and being too specific to some OS or another? IMO go write a library if it really annoys you.

There are much more unpleasant problems that I've had with the language but I've had more with C++ so ... it's a step up for me.


Read to the end. It eventually talks about monotonic clock bugs that affect every major platform, including Linux.


If you try to use a language for absolutely everything it's going to struggle somewhere. You have to use a language for what it's good at, and have structure in place to make using a better language for a job practical, whether that's by hiring/training or by building in some kind of ABI layer from the ground up.

Go is really, really good at some things. It's alright at others. It's awful at the rest.


Why was this post flagged?


I was asking the same question until I scrolled a little further down and it appears that there are some hurt feelings down there.


Probably needs a (2020) in the title.


Not a "Go guy" but I try to keep up on the news. They've since added generics and cleaned up some of this since 2020 yeah?


This submission came up because the article author is on twitter today complaining about golang and posted the submission link.


So they wanted out in 2020 and they’re still on in 2022. So either the language isn’t really that bad or they have an attention deficit?


Eh, plenty of people get stuck using technologies they hate.

Perhaps the OP is forced to use it for work, or already put the time and effort into building something and cannot afford to rebuild it from scratch.


In https://news.ycombinator.com/item?id=31193260 the author clarified that he’s changed jobs twice yet “It doesn't matter that I don't personally write Go anymore: it's unescapable.” I’m also worried about this because of trends at my day job despite having joined a sharp, experienced Scala/Java team.


I'm someone who wants to write Go 100% of the time at work, and I haven't really found a lot of companies that are doing this. All the cool kids have moved on to Rust. All the legacy code is Python/Ruby/Typescript. There is the niche in the k8s ecosystem, but I've already written all the code I need to integrate with the k8s ecosystem ;) All that's left to do are write 2000 copy-pasted operators for running super legacy apps on OpenShift, it feels like.

But even with that in mind, it's not that hard to find. Whatever tech stack you like, there is a company in every industry hiring software engineers to work with that set of tools. There is just so much software in the world, it's strictly a numbers game. And if there's some industry you like that doesn't have the tech stack you want, why not start your own company? You're reading "startup news" after all ;)


At the end of TFA there's an update from April 2022 that explains the author's current thoughts.


Link?


Here it is: https://twitter.com/fasterthanlime/status/151945555551713690... (source: am fasterthanlime).

No idea what HN's policy is on posting links to Twitter, but since you asked. Also if the consensus here is that that 2020 rant is "unfair and toxic", that twitter thread is not going to go over well either!


I agree with you about Go. I hate it, but we started it using it at work because it's "easy".

It's also easy to walk across a busy road without checking but it's still a fucking stupid thing to do.


I quite like your 2020 rant. I also quite like Go. I think you make your points quite well even if I don't feel compelled to feel the same way you do after reading them. A detailed, finicky, nitpicky critique like this makes for good, insightful reading.


I see Go as a language for some use cases on some Unix platforms. It was clearly designed early in the current iteration of "modern" programming languages and I firmly believe that Rust, for example, learned a lot from the mistakes Go made.

My impression is that Go as a language tries to make things simple, but things aren't always simple. Time and date operations are hard, and simplification leads to incorrectness.

If you just want to show a nicely formatted number, you don't want to care about wall time, you want to show a number that's good enough. Go gives you something that's probably good enough. It's not correct, but who cares? For the purposes of the people who made Go you don't need correctness.

Same with the permissions. Permissions are hard, especially on flexible systems like Windows. Unix pretends these problems don't exist, and Go as a language tries to pretend Windows is just a weird flavour of Unix. For most use cases, this is fine; read-only is normally the only flag you care about as a developer, and read-only is something the API will give you. If you want something correct, go get a library or something.

Then the path issue. Paths are hard. Path separator APIs are hard, especially if your standard library doesn't like using the operating system's standard methods for dealing with paths. Most paths are UTF8-compatible strings. Sure, Windows is UTF-16 and plenty of real-world file systems don't even have UTF support at all, but most file systems used by most users are compatible enough. If your file system shows you weird bytes, that's either a mistake or you're using some kind of complex, incompatible encoding (some Asian languages have these still in use). Go is simple, you're either the common use case or you're wrong.

This all makes Go quite simple to work with for many use cases. It simplifies your computer and makes some of the decisions for you. That's not even a bad thing if you're using it as a replacement for hacky shell scripts and messy Python tools, because people usually ignore the real life complexities of computers in there too.

For a system that's supposed to be correct, I wouldn't even think about using Go, because it chooses simplicity over correctness. For something that I kind of, sort of, probably want to just work most of the time, the language works fine. Sure, the code looks like drunk Python combined with endless checks to see if err is nil, but it works. It's very easy to get productive with Go if you can get over the language itself. Just go in with the right assumptions.

Similarly, don't go into C or Rust if you want a simple programming language. Rust is trying to be correct, or even pedantic, more than it's trying to be simple. Writing correct code is verbose and annoying and dense languages like Rust will easily allow you to write an unreadable mess. C, on the other hand, will let you do your own thing: whenever there's any kind of complexity, the language shrugs and says "you probably should check this but if you don't, well, let's just call it undefined behaviour and move on". It could be correct, or it could not be and you'll probably never know for sure.

There are tons of simple programming languages, even ones that handle correctness better than Go. Take a look at C# and Java (or, if you want to feel like you're writing modern code, maybe Kotlin), with platforms made to work on Windows and Linux with all their crazy quirks. Go isn't a universal solution because no universal solution exists.


> Take a look at C# and Java (or if you want to feel like you're writing modern code, maybe Kotlin)

Every time one of these Go/Node/Rust/etc rants comes up, my brain is screaming "but why not C#?". I would not be doing what I do today if I had not discovered this realm of goldilocks experience - Approximately "just works", runs about everywhere, tooling with decades of heritage & features, fast, etc. The only 2 excuses I've ever heard were: "Microsoft bad", and "it has too many features" (?).

I'd be posting daily about how much ass C# (or even latest JVM/kotlin) kicks compared to all the other training wheel garbage if it weren't heavily against the rules of this community. I'd not have even typed the prior sentence if we weren't already mid-brawl about another language being good/bad.


For years, C# was quite a bad development experience for anything other than Windows Server + IIS. Many people who would now probably love the language abandoned it long ago and never came back, the bad 2010s-era image still in their minds.

The thing about C# and Java is that they're not exactly exciting languages. They do what you want and not much more. They don't tend to break massively between releases. Their syntax was clearly designed decades ago. Their extensive package repositories are overwhelmingly filled with boring packages that solve boring business problems. I personally much prefer C# over Java, but even Java with its flaws is one of the most important programming languages for business in the world.

If you want to learn new things, you don't turn to these languages. Up-and-coming languages come with interesting ideas and paradigms; even something as polarizing as Rust is now getting good market penetration. WASM is all the rage, and every time you blink someone has a fancy new idea that can be hacked into JavaScript.

Honestly, though, the boring languages work well for business. Yes, Go and Rust are very fast and you CAN run them as front-end frameworks if you wish to, but most applications are boring old CRUD apps that are much better served with something like C# or the JVM and its many many years of library maturity, guides, Q&As online, and books. It's not fun or exciting, but it gets the job done well.


Yes, the cross-platform stuff sucks on Windows. I’ve seen languages with a few different approaches, I’d like a moment to compare them here. I’m going to talk about some specific aspects of cross-platform compatibility but not attempt to compare one single aspect across many platforms.

With C++, you can use the preprocessor to give you a string type which is UTF-8 (nominally) when compiled on Unix and UTF-16 (nominally) on Windows. This is… okay-ish, workable, but I’m unaware of any good libraries that do this for you. Instead, you’re basically on your own. It’s not horrible, it’s not great, you spend some time writing interfaces to work on Windows (if you are doing anything specific). There’s no C++ “mkdir” call, for example. (Maybe there is in C++75 or whatever. They keep adding things. There was no “mkdir” in C++ for, like, 30 years.)

As a side note, there is something in C++ called “wchar_t” and “std::wstring” which is basically complete garbage and only ends up being used because somebody, in the past, wrote code that used it and now you are stuck with it. There’s a whole story here.

With C# and .NET, the APIs seem to be designed with Windows as the norm, and Unix/Unix-like systems are an afterthought with some shim for portability. You can take a look at System.Diagnostics.Process for one of the worst offenders. Anything involving pipes is just not doable with the .NET process spawning interface, as far as I can tell, because it wouldn’t match the Windows semantics. You also can’t even roll your own process spawner without great difficulty, because you can’t safely call fork() from C# (not a surprise).

With Go, the APIs are designed with Unix as the norm, as discussed in the article.
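
(A quick illustration of what that means in practice; this is my own sketch, not from the article. os.Stat hands back Unix-style permission bits everywhere, and on Windows those bits are synthesized, roughly 0666 for a writable file and 0444 for a read-only one, since nothing in NTFS ACLs maps onto them.)

    // Hedged sketch: Mode() reports Unix-style permission bits even on
    // Windows, where Go fakes them from the read-only attribute.
    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        info, err := os.Stat("example.txt") // hypothetical file
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(info.Mode()) // e.g. -rw-rw-rw- for a writable file on Windows
    }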

With Rust, the APIs are carefully designed to give you a subset of functionality that is present on both Windows and Unix. As a result, the available functionality is (IMO) garbage, kind of like wstring in C++. In short, with OsString / OsStr in Rust, you get a string that is hard to manipulate. In the effort to make it safe and cross-platform, it’s been made into something like an opaque box for strings, and it’s missing 90% of the typical API that you’d find on a useful string class. In an effort to make sure that you don’t pay any unnecessary cost for conversion to/from Rust strings, the subset of OsString that are valid Unicode strings can be typecast at zero cost to Rust String / &str types—but this means that the encoding of these strings doesn’t match the Windows encoding at all.

F'ing bizarre.

Honestly I do not think that there is any easy out here, at all. If you care about running on Windows, and you care about providing a good experience on both Unix-like and Windows systems, then you should embrace the fact that any cross-platform API that abstracts the differences away will be imperfect, and just ask yourself what tradeoff you think is appropriate to get a better experience on Windows and Unix-like systems.

I’ll also add that while macOS provides a fully functional, fully usable POSIX API, that doesn’t mean that this API is the best option for performing low-level tasks on macOS. In particular, you should probably be going through the NSFileManager API when possible if you really care about macOS, because this provides higher-level functionality that is tricky to implement yourself—the “correct” way to do certain operations is different depending on the semantics of the underlying filesystem.

I’m just going to finish with the note that if you want to handle the pathological edge cases when using strings in a cross-platform application, you are in for quite the wild ride. Even something as simple as “I want to list all files in a directory and send the result to a client in JSON format” is just, well, a nightmare, if you are committed to handling all the edge cases.


The unfortunate story of Windows is that it tried to do UTF before UTF8 was invented. Java has the same issue. UTF16 existed before UTF8 and now UTF8 has become the norm. That's why you have to deal with wchar_t and other such nonsense in low level languages.

As for C++, the standard library has create_directory (https://en.cppreference.com/w/cpp/filesystem/create_director...) in the std::filesystem namespace, taking a std::filesystem::path.

C++17 may not be as widespread as its older companions, but if you're using the language in a modern context there's a perfectly usable set of standard APIs for this stuff that works with a whole bunch of char types (or the std::u8string if you're on modern C++ and want to forego the whole UTF issue as much as possible).

The problem is that you have to deal with edge cases. Filesystem corruption happens, and iterating over a directory to build that JSON can surface random byte sequences at any point in time. You can choose to ignore it, or fail on error, or, in the case of Go, let the language decide what to do, but you always need to be mindful of the failures that can happen. Ignoring the problem won't make it go away, and eventually it'll bite you in the ass if you keep doing it.
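
To make that concrete with the "list a directory and send the result as JSON" example from upthread, here's a hedged Go sketch. If a filename on a Unix filesystem happens not to be valid UTF-8, encoding/json (per its docs) coerces it to valid UTF-8 with replacement runes, so the name the client receives no longer identifies the file on disk; whether you ignore that, fail, or let the language decide is exactly the choice being discussed.

    // Sketch only: list the current directory and ship the names as JSON.
    // Invalid-UTF-8 names survive the []string fine (Go strings are just
    // bytes), but json.Marshal silently rewrites the bad bytes as U+FFFD.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os"
    )

    func main() {
        entries, err := os.ReadDir(".")
        if err != nil {
            log.Fatal(err)
        }
        names := make([]string, 0, len(entries))
        for _, e := range entries {
            names = append(names, e.Name())
        }
        out, err := json.Marshal(names)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }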


Rust's front-facing interface may only give you the subset of filesystem functionality that all common filesystems should in theory support, but you can access platform-specific functionality by asking for the extension interfaces, which you put inside a platform check block. The full functionality is there.


> The unfortunate story of Windows is that it tried to do UTF before UTF8 was invented. Java has the same issue. UTF16 existed before UTF8 and now UTF8 has become the norm. That's why you have to deal with wchar_t and other such nonsense in low level languages.

UTF-8 was first implemented in 1992, while UTF-16 was specified in 1996. Windows and Java use UTF-16 because they retconned their UCS-2 APIs to expect UTF-16 rather than add UTF-8 support. For Windows this is especially damning, as they already had an API for encodings with 8-bit code units which they didn't bother adding UTF-8 support to until very recently.


> You can take a look at System.Diagnostics.Process for one of the worst offenders.

Yeah, this is one of my least favourite APIs in all of .NET. My understanding is that the .NET team is planning to redo it in the next few years, but if you want something better right now I highly recommend the excellent CliWrap library: https://github.com/Tyrrrz/CliWrap


> You also can’t even roll your own process spawner without great difficulty, because you can’t safely call fork() from C# (not a surprise).

Do you get posix_spawn()? That's a far superior API to the frankly horrifying and slow mandatory use of fork().


WTF is going on in this thread? How did we become so sensitive? Haskell has taken a lot of criticism here at HN and I never saw any flags/downvotes. People argued vehemently but without any animosity.

Maybe something to be learned here https://youtu.be/iSmkqocn0oQ


it's probably the fourth time this article has been posted and people are tired of it


Blaming a language such as Go for poor software design choices is like yelling at a garden spade, in ASL.

I've seen some well-written library code that is kept up to date, and I've seen some crufty libs that, frankly, would barely pass muster in an internal code review. A good litmus test is to check the linting configuration for a given library repository, as well as how frequently the library is updated. And none of these are things an experienced developer wouldn't do anyway, regardless of the code.

To the original poster: sorry your life has been beset by having to clean up the messes of others, though you will find this happening again and again. Grow up a little: some engineers might see this as an opportunity to make their corner of the world a better, more functional place, whereas a younger, more entitled engineer might expect everything to be perfect on day one (HINT: it isn't).


If your idea of a "grown up engineer" is one who fails to notice systemic problems and diligently, repeatedly hits themselves in the head with a rake because that's the way it's always been done, we have very different ideas.


Systemic problems may be organizational (i.e., a company and/or culture of sloppy code) and may have little to do with the language used.



Seems like he has more of a bone to pick with the Windows filesystem than he does with Go. Sadly he gets close to the answer but doesn't quite cross the finish line of realizing that his issues are a result of the limitations of Windows-isms.


The issue isn't that the Windows filesystem doesn't support all of the UNIX features. The issue is that Go pretends nothing's wrong and just silently does the wrong thing when you try to use said features anyway.


ZZzzZzzz. I'm so bored of this article, and I'm so bored of this author picking out extremely specific arguments about how things aren't exactly perfect for exactly his need. But more than anything, I'm so bored of this culture of takedown posts being taken super seriously just because they're takedown posts, of articles that are thirty pages long when they could be three or four, and of the back-and-forth blog posts with cartoon characters.

> I've been suffering Go's idiosyncracies in relative silence for too long, there's a few things I really need to get off my chest.

This notion that, because someone designed a tool in a way that is not exactly the way you wanted it to be, you are somehow suffering is ridiculous. You're not suffering; you are at best mildly inconvenienced.

> Most of Go's APIs (much like NodeJS's APIs) are designed for Unix-like operating systems.

This whole angle of attack is frankly absurd. Go's APIs (and the APIs of countless other ecosystems) are Unix-like because the Unix-like ecosystem is structured around interoperability. The reason that Go's APIs for dealing with Windows aren't as good as for other systems is that Microsoft has at every turn made their environments subtly different from other things often for no good reason at all.

This whole thing with making a file path out of arbitrary byte strings that aren't representable in UTF-8 is frankly embarrassing. He's spinning a yarn about filesystem APIs and then goes on a tangent about string handling and how great it is that Rust displays one thing and Go displays another, but here's the rub: he used the `%s` formatting verb, whose entire function is to print the uninterpreted bytes; if you want to print something quoted in a string-safe manner, you use `%q`. That entire section can be summarized in three lines of code:

    s := "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98"
    fmt.Printf("Hello, %s\n", s)
    fmt.Printf("Hello, %q\n", s)
https://go.dev/play/p/NVbLMhb7UkV
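
(If I'm reading the fmt and strconv docs right, the `%q` line should print something like `Hello, "\xbd\xb2=\xbc ⌘"`: invalid UTF-8 bytes get \x-escaped, printable runes are kept, while the `%s` line pushes the raw bytes straight to the terminal.)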

His examples are always like this: long, convoluted, and so complicated that people who know what's going on don't bother to interject because it's an impossibly tedious waste of time.


Why is this toxic rant re-posted again and again and again?

This post is far from a fair evaluation of the strengths and weaknesses of Go. It's just a long nitpicking of a few of Go's peculiarities, forgetting the many strengths that Go brings to the table when building and running real-world projects.

This is just another unfortunate passive aggressive post from a Rust evangelist bullying other languages.


I don’t think this rant is toxic: there are no personal attacks or rude words of any sort, and the tone is lighthearted. But it is all criticism; maybe that’s enough to qualify it as toxic.

However, the author has real Go bonafides, he wrote many of these libraries: https://github.com/itchio?q=&type=all&language=go&sort=

I presume working on the above software burned some deep scars. Their switch from Go to Rust is motivated by this kind of issue.

(To be honest, I also stopped writing Go for fun because these kinds of paper cuts build up. I enjoy Go when writing stuff in its sweet spot - dealing mostly with network bytes in a server environment of my choosing. For me, venturing too far out of that domain is joyless, but not exactly painful.)


I don't think this article intends to give a full appraisal of Go. It highlights the downsides of a particular philosophical stance Go takes.

That stance is very much real, even if it's best described by gesturing at half a dozen examples. And it's nothing new, really. It's just Worse is Better all over again. I think it's instructive to see how it plays out in Go.

This rant is one-sided, but it doesn't pretend to be otherwise.


> toxic

The author presents things about the language that they don't like. How on earth is that "toxic?"

Have we reached the point where "toxicity" is just "things I don't like hearing?"


Yes. Can we go back now?


I disagree. I've used Golang just once and I got frustrated with a lot of the things mentioned in this post, and I don't even write Rust.


I am curious: please do share that fair evaluation, or the mistakes in this post if there are any.


Can you please elaborate on why you think it's toxic?


That's not what toxic means.

The article isn't in the slightest bit toxic.

Just because you disagree with it doesn't make it toxic.


The utterly poor decisions made by the Go team over decades tell the story that stupid stubbornness is more important than practical use of the language. How could one not address the obvious problem with the lack of generics directly? Why is the stupid AND ERROR-PRONE ERROR HANDLING NOT FIXED?

Along with this, the Go team makes breaking changes for performance. Amateurs, basically.



This is a long blog post which is a huge rant, and honestly I find the whole tone extremely unproductive.

# Point 1: Judging a tool (programming language) out of context (project)

For some reason programmers really like to talk about the merits of programming languages without specifying anything about the context you're using it in. This article talks a lot about how Go handles its Windows support, yet the conclusion is not "I'm done using Go for projects that require Windows support". No, the conclusion is "Go is bad".

I've seen this "argumentation" so many times:

- "I used programming language X on project Y."

- "Programming language Z works much better for project Y."

- "Therefore Z is better than X."

We should talk more about when it makes sense to use a certain programming language instead, and less about trying to declare that a language is "stupid" or "worse than another".

# Point 2: Simplicity is about assumptions

The article has a nice catchy headline called "Simple is a lie". Ironically it immediately points out that the statement "Simple is a lie" is also a lie: "Or rather, it's a half-truth that conveniently covers up the fact that, when you make something simple, you move complexity elsewhere."

Go models the world from a Unix perspective, and it provides a compatibility layer for Windows. This means that …

… if you run it on Unix, everything will behave as expected.

… if you use the basic functionality on Windows, everything will behave as expected.

… if you use functionality which does not make sense on Windows, it will try to fake it and it will kinda make sense.

… you can't idiomatically take advantage of Windows-only features.

This is a simple model. And there's nothing about this simplification which is a "lie" or a "half-truth": It's simple for a programmer to program against this model.

This is of course not a universal model. It's a model which works great if you're only developing against modern Linux and MacOS. It's a model which works if you want to support basic functionality on Windows as well. It's a model which will cause your program to behave weirdly on Windows in many edge cases. And those "edge cases" might be more common in the real world than you think.
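
One concrete example of the "fake it" bucket above, paraphrasing the os package docs rather than the article: os.Chmod compiles and "succeeds" on Windows, but only the owner-writable bit does anything (it toggles the read-only attribute); the rest of the mode is silently dropped.

    // Sketch: on Unix this really sets 0400; on Windows it just sets the
    // read-only attribute, and the group/other bits are silently ignored.
    package main

    import (
        "log"
        "os"
    )

    func main() {
        if err := os.Chmod("secrets.txt", 0400); err != nil { // hypothetical file
            log.Fatal(err)
        }
    }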

The article spends a lot of time showing how Go's model breaks apart on Windows. And yes, that is indeed the cost of a simpler model: when you don't follow the assumptions, unexpected things will happen. At this point the conclusion could be "don't use Go if you need Windows support", but for some reason the article instead brings in another language (Rust) and starts showing how Rust solves this "better" than Go. (Spoiler alert: they introduce their own abstraction instead.) And there's still no context: Are we only talking about programs that need to run on Windows? Are we talking about all programs that you can write in Go? Are they trying to demonstrate that Rust is a "better" language even when I'm concerned about a Unix command-line tool?

# Point 3: The monotonic clock

Turns out it's really hard for a programming language to guarantee a monotonic clock (see Rust). Go takes a simpler approach: if you need a monotonic clock, it's up to the OS to take care of it. Is it a good solution or a bad solution? I dunno. It seems pretty reasonable to expect that the OS can take care of it. I can't really come up with any project in Go where this has impacted me in any possible way.
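
For what it's worth, whatever the merits of the design, the user-facing side is tiny: since Go 1.9, time.Now carries a monotonic reading alongside the wall clock and time.Since uses it, so a sketch like this isn't thrown off if the wall clock jumps:

    // Minimal sketch: time.Since subtracts using the monotonic reading
    // embedded in start, not the wall clock.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(50 * time.Millisecond) // stand-in for real work
        fmt.Println(time.Since(start))
    }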


For point 2, forget the Windows problem: in that section we are reminded that Go's strings are just slices of bytes. The simplicity of these "strings" ends up moving the complexity to the programmer: now we need to keep track of which "strings" are human-readable UTF-8 text, which are filenames, and how those correspond in our program.
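
A hedged sketch of that bookkeeping (the Latin-1-style filename is made up for illustration): nothing in the type system distinguishes text from bytes that came off a filesystem, so the check ends up being yours to remember.

    // Both values are plain Go strings; only one is valid UTF-8 text.
    package main

    import (
        "fmt"
        "unicode/utf8"
    )

    func main() {
        fromDisk := "caf\xe9.txt" // hypothetical filename in a Latin-1 encoding
        typed := "café.txt"
        fmt.Println(utf8.ValidString(fromDisk)) // false
        fmt.Println(utf8.ValidString(typed))    // true
    }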


Is the writer a former Java developer?


Could this post be unflagged please? I don't think it satisfies any requirement for flagging.


Off-topic, but HN needs a review of flagging abuse. Maybe assign flagging privileges only to those with X reputation, and revoke them if, on review, the privilege has been abused?

It seems that flagging is as much "I don't like this" as "this is not appropriate".

Likewise downvoting. Maybe instead display both downvotes and upvotes, so people can see if a comment is found controversial?


Lately I've had comments swing wildly dozens of points in each direction and also get flagged. Brigading has come to HN and is getting worse over time. There seems to be a growing segment of internet users who wish to cleanse perceived wrongthink; it is disturbing and not what flags/downvotes are for. It also feels like there are more bots.


Go look at HCQ/Ivermectin/Cryptocurrency for examples of such brigading.


The trouble is, you can't -- only the commenter can see their score swings, and then only if they pay attention.

Publishing upvotes and downvotes would allow others to see these effects, and would allow them to register a comment as "controversial" rather than "not worthy of notice".

Data on who made what vote could also be useful.


There are two things I've always wanted out of a comment system that allows up and down votes:

1. I want to know who upvoted/downvoted/flagged a comment so I can identify attempts at censorship, revenge voting/flagging, etc.

2. I want the ability to exclude individuals from the upvote/downvote tally I see.

For example, if I see someone downvoting every post that is favorable to Elmer Fudd, or upvoting every post that is favorable to Bugs Bunny, I want to block that person's emotional bias from the rankings I see in order for my view of the comment section to be as objective as possible.


Some old, old discussions on this:

https://news.ycombinator.com/item?id=2403716

https://news.ycombinator.com/item?id=2445039

That is, in the old days the total score was shown to everyone, but it was deliberately changed.


Well yeah, there are a lot of those "controversial" subjects that attract far-left far-right flamewars and this has normalized using downvotes/flags as a weapon. Even about innocuous things. Which is... weird and sad.


This is how the system already works...


The FAQ does mention a karma threshold to flag links, so sorry about that. Perhaps that threshold should be raised.

It doesn't mention revocation of flagging permission for abuse, and I think that would be helpful.


You can try emailing hn@ycombinator.com


@dang


You usually need to send an email to the site administrators to get their attention. This information is available in the FAQ:

https://news.ycombinator.com/newsfaq.html


TL;DR use C++


A two year old rant that everyone saw the first time about a language which has subsequently changed significantly is not in any sense "hacker news." It is "old hacker opinions."


Which will be why the linked post has a section "April 2022 Update".

> I wrote this in 2020, and have changed jobs twice since. Both jobs involved Go in some capacity, where it's supposed to shine (web services). It has not been a pleasant experience either - I've lost count of the amount of incidents directly caused by poor error handling, or Go default values.


Before clicking the link I knew "Mr. Rust's Wild Ride" would be waiting for me in that article. Welcome to the hype train, 3rd edition: Ruby -> Node -> Rust. Everything else is just, well, inferior, and I'm going to bash its brains out. Needs some "Rust" in the title.


To say Go's simplicity is a lie, then to go and say this:

> This function signatures tells us a lot already. It returns a Result, which means, not only do we know this can fail, we have to handle it. Either by panicking on error, with .unwrap() or .expect(), or by matching it against Result::Ok / Result::Err, or by bubbling it up with the ? operator.

What the hell? I'm not a Rust dev. I have no idea what half of this means. "Bubbling up"? What the fuck even is that?


What's your argument? The point the article is making is that Go says it's simple, but isn't, while Rust doesn't say that it's simple, and uses its complexity to solve the complex problem.

Bubbling up means returning the error to the caller, and is general error handling jargon.


And the reason Go keeps a toe-hold in the ecosystem is that file-handling is simple for most people, most of the time.

Rust is expressing the full complexity of a twisty maze of corner cases built up from decades of filesystem features being slapped on with no care for each other's existence. Go just says "Sure, but you won't need most of that most of the time; here's a subset that usually works." So it solves most people's problem most of the time with far fewer words (with the trade-off that if you get burned, you get burned).
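
And for the common case the subset really is short; a sketch of my own (not from the article) of the file handling most Go programs actually do:

    // The happy path: read a whole file, handle the one error you get back.
    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("config.json") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("read %d bytes\n", len(data))
    }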


Just because Go doesn't have a consistent API for file stuff does not make it 'not simple'; it makes it inconsistent. It is still simpler, and easier for me to understand ALL of the Go code on screen, than it was for me to understand just that Rust function signature and all the stuff in the paragraph I posted.


I hear you. People keep saying Spanish is an easier language than English, but yesterday I pulled up an article written in Spanish and I couldn't even read the first word.


"Bubbling up" is common parlance for "exit this function early and return the error we got from our callee" (or "wrap the error we got from our callee and return that")

In Go, this looks like

   thing, err := call()
   if err != nil {
       return nil, err
   }
In Rust, this looks like

   let thing = call()?;


People keep telling me this is common nomenclature but I have never heard of it until today.



>"bubbling up"? What the fuck even is that?

When you throw an error and don't catch it, you just let the error immediately propagate up to the calling function. Think "return err"

>> Either by panicking on error, with .unwrap() or .expect()

This means fatally terminating the program immediately when an error is encountered in this spot.

>> or by matching it against Result::Ok / Result::Err

Result is an enum that can be either Ok(T) or Err(E). Pattern matching is one way of finding out which variant it is and running code accordingly. It's just a fancy and convenient if statement, if that helps.


Thanks. The point of my comment was not to actually ask someone to explain it to me, but again, thank you for explaining it, I appreciate it.


"Bubbling up" is the default in C++, when the code you call throws an exception and you don't have a try/catch for it.


Do C++ people actually call it "bubbling up"? I don't think I've used a language where this isn't the default behaviour, so I'm not sure I've ever heard it given an actual name; it's just what happens when you don't catch the exception. It keeps going until someone does catch it, or it hits the runtime and demolishes your program.


I've heard "propagating" the exception used as well.


Now that you mention it, I think this is the term I've heard for it when it has been mentioned. The exception propagates upwards.


Yes, but not just in C++. Bubbling up errors is a common phrase in Java and Python.


I often use and hear "bubbling up" in non-tech discussions, too. It's not at all uncommon, IMO.


I've heard this term more often in JS contexts than in C++ contexts, to be honest. I don't think C++ programmers use the term all that often, though the reference to error propagation will probably be understood anyway.


Bubbling is what my guts do when they're unhappy. Not a great programming term.



