I don’t think it’s fair to say they ignored advances in modern programming languages. There are opinionated reasons for the omission of generics, and they do make sense in the overall architecture of the language. At the end of the day it’s about trade-offs, and ultimately my opinion is that, having written a lot of both C# and Go, I can see the pros and cons of both feature sets. It’s about choosing the right tool for the solution in mind. I’m happy with that.
Absolutely! The "advances" were design decisions that Go chose not to use. The people who designed golang are not newbies. They didn't ignore anything, nor were they ignorant of other languages. They had good reasons for choosing to go another way that makes sense for their language.
The end result is, in my humble opinion, a very stable language that is very high performance and highly maintainable. It's not perfect. It's not great for every use. Java does some things better. C does some things better, etc. It just fills a niche that was unfilled, in an opinionated manner.
> The people who designed golang are not newbies. They didn't ignore anything, nor were they ignorant of other languages.
Chalking everything missing from Go up to "the designers know what they are doing--every omission was on purpose" ignores the way humans design things. We are really good at post hoc rationalization.
Bob: "Why did you draw the woman in tall grass?"
Alice: "To allow the viewer to engage with the piece by forcing them to use their imagination to visualize the hidden area."
Bob: "Are you sure it's not because you don't like drawing hands and feet?"
At the same time critiquing without understanding the problem the engineer was trying to solve is equally easy.
Go is far from perfect, I just don't agree that lumping things in a good/bad pile is very insightful.
Engineering is all about making compromises. It's unsurprising that it won't meet the use cases it wasn't designed for. The article would have provided additional value if it had explored alternatives to the bad parts, and the disadvantages that come with them.
They are talking about things like generics. Considering the years it took for a team of people to build Go, I think it’s fair to assume they didn’t just forget about generics and that it was an intentional decision.
I think one thing that people get caught up in with Go as well is... well it's stupid easy to learn and work with.
I work for a giant enterprise company and the range of programming talent that we have is incredible. I know some people who couldn't code their way out of FizzBuzz. Unfortunately, getting these folks on board to learn the functional abstractions doesn't always sell (especially to the biz folks).
One of Go's biggest strengths is that anyone can pick up the language and be somewhat productive in it after a week.
My 2 cents in the bottomless well of programming language opinions.
> One of Go's biggest strengths is that anyone can pick up the language and be somewhat productive in it after a week.
And then what? They find themselves duplicating the same code again and again because Go doesn’t provide good abstractions to reuse code. Or that they write a ton of code to do what could be achieved in a few lines in a reasonable language.
You learn the language in a week because there is not a lot to learn. Shell scripting is simple to learn as well, but nobody is building anything beyond short scripts using it.
An ideal language should allow the user to be productive after a short time, but should have enough power for advanced users as well.
One of the things I hate the most about Go is that source code generation is considered an acceptable solution to many problems. Perhaps I should go back to C — the preprocessor can be used to generate code as well as anything. In fact, the predecessor to C++ was a preprocessor used to generate C code that simulated C++-like behaviour.
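The duplication complained about upthread is easy to sketch. As Go stood at the time of this thread (no generics), even a trivial helper had to be written once per type; the function names below are made up for illustration:

```go
package main

import "fmt"

// Without generics, a simple "max" must be duplicated per type,
// or fall back to interface{} plus runtime type assertions.

func MaxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func MaxFloat64(a, b float64) float64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(MaxInt(2, 3))         // 3
	fmt.Println(MaxFloat64(1.5, 0.5)) // 1.5
}
```

The common workaround was to generate these variants with `go generate`, which is exactly the code-generation culture being criticized here.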
>They find themselves duplicating the same code again and again because Go doesn’t provide good abstractions to reuse code. Or that they write a ton of code to do what could be achieved in a few lines in a reasonable language.
This.
>You learn the language in a week because there is not a lot to learn.
Exactly. For example, Brainfuck is even simpler and easier than Go. It also produces even worse, unmaintainable, horrible code.
> One of Go's biggest strengths is that anyone can pick up the language and be somewhat productive in it after a week.
This ranks just about at the bottom for any language I care to work with.
"Learning a language" is more than just about being able to write syntactically-correct code. It's about understanding effective design patterns, idioms, the standard library, common pitfalls, writing maintainable code, and so on.
Learning the syntax comprises less than 5% of that effort, and optimizing for that step—particularly at the expense of the other steps—is ill-advised. And in my opinion, Go has done precisely this by having tons of sharp edges: nil interface values, race conditions with channels, supposedly "meaningful" zero values, implicit interface implementation, etc. are all sharp edges I've personally run into that have caused bugs in production.
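The nil-interface sharp edge mentioned above fits in a few lines (a minimal sketch; the error type is invented). An interface value holds both a type and a pointer, so a nil `*MyError` stored in an `error` compares as non-nil:

```go
package main

import "fmt"

type MyError struct{ msg string }

func (e *MyError) Error() string { return e.msg }

func mayFail(fail bool) error {
	var err *MyError // nil pointer
	if fail {
		err = &MyError{"boom"}
	}
	// Even when err is nil, this wraps (*MyError)(nil) in a non-nil
	// interface value, because the interface carries the concrete type.
	return err
}

func main() {
	err := mayFail(false)
	fmt.Println(err == nil) // prints "false", to many newcomers' surprise
}
```

A caller writing the idiomatic `if err != nil { ... }` will take the error branch here even though nothing failed, which is exactly the kind of production bug being described.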
> One of Go's biggest strengths is that anyone can pick up the language and be somewhat productive in it after a week.
IME, that's basically the case with any language with suitable libraries in the application domain that doesn't have a radically unfamiliar syntax or programming paradigm, so this rings as praise for Go being neither especially novel nor deficient in library support for its target uses.
> The people who designed golang are not newbies. They didn't ignore anything, nor were they ignorant of other languages. They had good reasons for choosing to go another way that makes sense for their language.
Yes, for using monkey coders more effectively.
>and is highly maintainable
No. The lack of exception handling and generics really hurts maintainability a lot. Not to mention that package management is still full of problems.
1. It complicates the compiler.
2. It slows down the compilation.
3. We didn’t think about it in the beginning, and can’t add it now without making non-backwards-compatible changes.
4. It makes the language harder to learn.
5. We don’t use this feature a lot.
I didn’t invent these reasons — search for “why doesn’t golang have <insert-feature>” and you will eventually come across one or the other as a reason mentioned by one of the core members.
I get that you're just the messenger, but almost every one of these arguments makes my blood boil.
1. So what? The whole point of writing a high-level language is to pay the cost of low-level burdens once so that application developers don't have to pay them repeatedly.
3. This makes me treasure even more greatly the Rust core team's forethought when releasing Rust 1.0, in ensuring they hadn't walled themselves into any easily-avoidable corners before stabilizing the language. Such an approach has repeatedly paid dividends.
4. Optimizing for a person's first week of using a language at the expense of their next ten years is borderline indefensible.
5. This last one pretty much boils down the entire problem for me. Go is designed to solve problems Google has that most of us don't. It's also designed to solve them in a way that makes the most sense to Google (e.g., dependency management is irrelevant if you have a monorepo).
I totally agree, in case it wasn’t clear. Go, like many others, was designed by people who thought the users of the language were not as smart as themselves. They seem to have forgotten that after writing a few thousand lines, even a novice can become a power user and start demanding more of the language.
Unfortunately this meme will never die. The good news is I’ll be getting shit done while people who go on about this crap will just blog about Rust or monads or something.
Improving your tools != making the same stupid argument people have been making for a decade as if it hadn’t happened before. I actually agree that there are improvements to be made to Go; generics and dependency management being 2 that look like they’ll probably be resolved soon. That’s not the same as this crap about “it ignores advances in programming!” or “it’s a failure because it’s not a systems language like they said it would be when they announced it.” or “Google made it because they think their programmers are too stupid to write Haskell.”
Improving tools is complicated and involves a lot of trade offs. It’s not just “omg it doesn’t even have generics what a shit language.”
Opinions on programming languages are like arseholes in that everyone has one. So I'm far less interested in hearing people intellectualise over what they perceive to be good or bad qualities of a particular language (otherwise known as arguing personal opinion as irrefutable fact) and far more interested in seeing what cool projects people can make. After all, many of us managed to write some pretty awesome stuff in BASIC back in the 70s and 80s, so there's really no excuse these days given the tooling and hardware we now have available.
I am not interested in what people CAN make in a language. Anything that can be made can hypothetically be made in any (Turing-complete) language. That is, IMO, a silly criterion. People made awesome things (for the time) flipping switches and entering opcodes directly at one point.
I am interested in a language that makes things easy, clear and elegant. I agree with the original article in that I do not find Go to be any of these for many things I would use it for. Hence, I do not use it.
I think the author’s comment that Go ignored a lot of modern programming language theory especially hits it on the head for me.
> I am interested in a language that makes things easy, clear and elegant.
Isn't the easiest way to establish that by writing code yourself rather than reading an opinion piece written by someone else who likely has a different coding style to yourself?
We built a heavily asynchronous back-end communications service in Go a year or two ago. It was a great learning experience. I learned that I don't like Go and its limitations and design choices first hand, wrote a bunch of notes about what I did and didn't like (mostly didn't like), and moved on.
We had chosen Go to use for the reasons others here have commented about (especially the part about enabling relatively inexperienced software engineers to be productive quickly), but in the end I felt it sacrificed too much on that altar and I would rather spend more time mentoring and training our software engineers to use tools I felt were better.
It's mostly a matter of opinion, like I said - with sufficient dedication you can pretty much do anything with any language. But I'm not investing any more time into Go when I have other tools out there (like Clojure, Rust, Haskell, Common Lisp, C#, various BEAM languages, Ruby/Python, etc.) that I find to be uniformly more pleasant to use.
We didn't re-implement it. If we do reimplement it, however, we are likely to lean toward Clojure, as the service deals with a lot of routing of unstructured data. Clojure handles data, transformations of data, and asynchronous processes very well, including having an analog of Go's goroutines and channels, so there would not necessarily be much redesign needed.
A BEAM language would also be a reasonable choice, whether it is Erlang, Elixir, LFE or whatever.
>So I'm far less interested in hearing people intellectualise over what they perceive to be good or bad qualities of a particular language (otherwise known as arguing personal opinion as irrefutable fact) and far more interested in seeing what cool projects people can make.
People can MacGyver (and have for decades MacGyvered) "cool projects" with all kinds of primitive and shitty languages and tools.
Progress doesn't come from projects, it comes from better tools.
As a caveman, you could build all kinds of heating projects, but you wouldn't get far if your only invention is the fire...
Some programming languages can be, and are, better than other programming languages. They've all been evolving along a single historical timeline, and many mistakes were made in the past that we acknowledge and wouldn't make again. To take a well-known example: eminently abusable Turing complete templates in C++ versus the more modern generics as seen in C#, Java, Rust, etc.
We should all have a shared goal of building of tooling that maximizes our leverage by providing powerful primitives, and minimizes the number of mistakes we make by providing dependable safety rails. Some more recent programming languages are really doing an exceptional job of this.
This critique of Go is quite correct. It's a pretty good language with a lot of pretty good ideas, but it's also not without a healthy dose of very sizable flaws (which the author has enumerated nicely), and they're made more concerning because most of the Go team refuses to acknowledge many of them.
Without even going near the third rails of package management or generics, other niceties like real enums, ways to build typed non-slice non-map data structures, safety measures to protect against Goroutine leaks, or explicit interfaces would all be big improvements. I personally find that once it comes to large projects, Go feels obviously less brittle than Ruby/Python/JS, but much closer to their end of the spectrum compared to languages that offer stronger guarantees (e.g. Rust, Haskell, but also even languages like C#). I don't think that's what its creators were going for.
Interesting example of PL progress. I would have said that free-style templates were part of the "C++ way", in fact fairly close to its essence.
One feature which seems to lie along the axis of forward progress is lambdas (even Java got them) and one that appears to lie in its past is unstructured control flow (IMO this is somewhat regrettable). One thing I wish languages would all evolve is an expression-language syntax but there was and is a surprising amount of kvetching about Rust's "implicit returns" so perhaps I'll be waiting a while longer for that.
OTOH, the idea of forward evolution is also deceptive. The imperative world had tagged unions many decades ago, but they were abandoned in the OOP craze in favor of forms of subtyping (the Modula-3 designers at least gave this justification as a form of pure progress) and it's only fairly recently that they have percolated back over from the functional world and OOP-style subtyping has fallen from grace.
Progress doesn't come from endless bitching about existing tools either. Progress comes from writing better tools. Hence my point about how everyone loves to have an opinion about tools but the vast majority of those opinion pieces don't serve as much more than cannon fodder for language flamewars.
If you look at the people who have genuinely progressed technology, they spend more time inventing and writing code than soapboxing their opinions. They lead by example rather than simply talking about what they believe we should be doing. Those are far more valuable to the community than egotistical flame bait blog posts.
So the fact that Go provides no solid dependency management solution is "personal opinion"?
If everyone followed this approach, we would never have ended up with amazing languages like Rust. Why try to improve C++ if you can build cool stuff with it, right?
> So the fact that Go provides no solid dependency management solution is "personal opinion"?
The fact this is argued as a bad thing is personal opinion. He might have a valid point on that specific argument, but it's still an opinion piece, as there have been others who have voiced how they like vendoring in Go.
> If everyone followed this approach, we would never have ended up with amazing languages like Rust. Why try to improve C++ if you can build cool stuff with it, right?
The guy isn't designing a new language or trying to improve upon an existing language. He's just airing his opinion.
I see language bashing really as more of a tribal ritual than a productive contribution. Like the KDE vs GNOME wars, or the vi vs emacs debates before that.
For what it's worth, when I haven't liked a particular language which I've depended on, I've written my own parsers to add features. I'm not suggesting everyone should do this but it's a great deal more useful than starting flamewars.
> So how do you think new language development starts?
By people writing code rather than pontificating
> Why do we have languages like Rust, D, Nim, and Crystal?
Because people wrote them rather than wasting hours on end arguing about why other popular solutions are garbage. As I said in my previous post:
"For what it's worth, when I haven't liked a particular language which I've depended on, I've written my own parsers to add features. I'm not suggesting everyone should do this but it's a great deal more useful than starting flamewars."
Or to put things another way, you don't see the developer of Nim endlessly slagging off other languages. He just quietly gets on and creates something awesome.
Sure, if the author has something new to add to the debate. However, there comes a point where the internet is already oversaturated with opinion pieces, and any newer ones only serve as flamebait by endlessly reiterating the same arguments. Submissions like this one happen weekly - and that's just one programming language. In fact, this isn't even the first article on Go written by this author alone. So while I'm in favour of discussion, I question whether this article adds enough to the debate to be considered important. I'm inclined to say "no", since the author literally just rehashes the same points every other argument against Go has already made.
Vendoring solves sharing dependencies with your team; it does not solve the addition of new dependencies. You still need to get your dependencies somehow, and they need to be able to specify what dependencies they work with. And your libraries can't vendor their own libraries in Go, because then they'll have a different import path in the vendor directory and in the main project, and you won't be able to share types.
Many of the problems with dependency management have to do with over-coupling and adding lots of random dependencies for things you could just inline into your project. That's also why the stdlib is so strong.
2. You can pick specific versions and follow basic semver constraints if you want by checking out the source for the dependency locally, and then copying it to vendor. Automatic resolution would be nice, but it's not like you can't do it.
3. Better automatic dependency management is coming. But honestly I prefer it to be arduous to add an external dependency. You should only do it if you absolutely need to.
The bit about import paths being wrong I've generally not found to be the case; the Go compiler will treat deps the same whether they're in vendor or in GOPATH.
Some of us have some fatigue about this topic as it has been retrodden literally hundreds of times on here, an army of people stomping their feet and screaming "OMG WHY ARE PEOPLE STILL USING GO!?"
If everyone followed this approach, we would never have ended up with amazing languages like Rust.
This comment reminds me of the top comment in a prior anti-Go spiel (which of course did well on here).
It is interesting how little of consequence is written in Rust, or in that example Haskell, and how much is written in Go, C, and even derelict dregs like PHP. The formal perfection of a language seems to bear little relation to the likelihood that it serves as a basis for important work.
> It is interesting how little of consequence is written in Rust, or in that example Haskell, and how much is written in Go, C, and even derelict dregs like PHP. The formal perfection of a language seems to bear little relation to the likelihood that it serves as a basis for important work.
That is pretty simple to explain. Well-designed languages have a learning curve. They require an initial investment in terms of learning — something that a lot of self-styled hackers are not willing to put in. I am sure a ton of stuff is written as shell scripts.
And there is also the matter of how long the language has existed.
I agree with your point, but I'm not sure how it ties in to the comment I was replying to.
Edit: You seem to be overlooking the less tangible contributions languages like Rust or Haskell have made to PLs in general. One example that comes to mind is how Scala has affected the development of Java, even though, in the real world, Scala use is relatively non-existent.
> Is it interesting how little of consequence is written in Rust, or in that example Haskell, and how much is written in Go, C, and even derelict dregs like PHP.
Exactly this. It always amazes me when people respond with "You can't judge a language by the things that are built with it because you can build anything in any Turing-complete language". They're pretty much arguing that Haskell/Rust etc. are better than Go because all languages are equal.
> If everyone followed this approach, we would never have ended up with amazing languages like Rust.
What approach would that be?
> Why try to improve C++ if you can build cool stuff with it, right?
Go improves on C++ in at least one regard, compile time. And there they succeeded spectacularly. While I like Rust, imagine how great it would be with Go's compile speed..
> Disregarding all critique of a language as "personal opinion" as long as you can "build cool stuff" with it.
I can't tell if you're arguing that all Turing complete languages are equally useful, or that "utility" is a bad metric for evaluating programming languages. In the first case, of course this is false or Docker would be written in BrainFuck. In the second case, you've given no alternative (maybe you rank languages by the coolness of the type system?) nor given any justification for why 'utility' is a bad metric.
> I can't tell if you're arguing that all Turing complete languages are equally useful, or that "utility" is a bad metric for evaluating programming languages.
I'm not sure why you think my argument is constrained to these two choices?
I never said utility is bad. My argument is that critique of a language can be constructive even if the language has utility. In other words, I am against dismissing critique of a language simply because the language has utility.
Well, no one was dismissing critique of a language _simply because the language has utility_, so I was giving you the benefit of the doubt that you weren't strawmanning or otherwise introducing an entirely unrelated topic.
The OP said he didn't weight intellectual criticisms more highly than observations, which is a pretty reasonable position. I'll go a bit further and say that criticisms based on theories aren't worth very much when the underlying theories do a bad job of predicting the observations. For example, if the theory is "Powerful type systems predict language utility" and the observation is "Very little software is built with languages with powerful type systems, especially very little _important_ software", then criticisms based on that theory aren't worth very much.
> otherwise introducing an entirely unrelated topic.
Are you even reading my comments?
> I'll go a bit further and say that criticisms based on theories aren't worth very much when the underlying theories do a bad job of predicting the observations.
True, but only if the observations you make include both direct and indirect impacts.
Following your example, it is true that very little software uses type system-oriented languages, but you are ignoring the impact such languages have made on other languages, tools, frameworks, and libraries.
For instance, Haskell is not a widely used language, but some of its ideas have impacted other languages and domains in various ways.
Of course. I'm referencing them. Perhaps you mean something other than what you're saying?
> True, but only if the observations you make include both direct and indirect impacts. Following your example, it is true that very little software uses type system-oriented languages, but you are ignoring the impact such languages have made on other languages, tools, frameworks, and libraries. For instance, Haskell is not a widely used language, but some of its ideas have impacted other languages and domains in various ways.
No, you've confused yourself by adding in Haskell. The theory in your example is "type system-oriented languages predict utility", it has no indirect effects, it has direct effects on Haskell and the languages that Haskell passed this trait onto (and other languages that aren't inspired by Haskell).
> Disregarding all critique of a language as "personal opinion" as long as you can "build cool stuff" with it.
GP said it's more valuable to evaluate a programming language by the projects people make with it than by reading the 417th piece on its perceived strengths and weaknesses. I think GP has a point.
> Go is a GCed language, so you can't really compare it to C++.
> GP said it's more valuable to evaluate a programming language by the projects people make with it than by reading the 417th piece on its perceived strengths and weaknesses. I think GP has a point.
And I don't, so let's just agree to disagree on that.
> That's a very strange thing to say.
Does this (or any other) advantage of Go help people who are writing software that cannot have a GC in the loop? No, it does not, so it's really not a valid comparison imo.
Could Rust and C++ have better compile times? Maybe. I'll defer judgement to the actual compiler experts though.
> Does this (or any other) advantage of Go help people who are writing software that cannot have a GC in the loop? No, it does not, so it's really not a valid comparison imo.
This is also a very strange rationale, for several reasons.
1. You can program in Go without a GC
2. There are no applications that prohibit GCs; there are _some_ which prohibit long and/or nondeterministic pause times
3. Even if you were right about 1 and 2, these still wouldn't be reasons for not comparing Go and C++. In particular, your argument is in the form: "You can't compare Go and C++ because {comparison between Go and C++}".
But Go was designed to be used with a GC, no? I see the same argument used with D, even though a lot of the stdlib relies on a GC :P
> 2. There are no applications that prohibit GCs
You are making a very strong statement here. Since we're on HN, I'll give you the benefit of the doubt.
I am not a GC expert, so I have a follow-up question: does a deterministic GC exist? If so, what kind of pause time bounds (error/variance) are we talking about? Depending on the answer to this, I could probably list several areas where the bounds are unacceptable.
> 3. Even if you were right about 1 and 2, these still wouldn't be reasons for not comparing Go and C++.
Of course you can compare the two languages, but my argument is that it's essentially a useless comparison.
I'm not sure this question makes sense. It was designed for easy memory management for the default case by way of GC, but it was also designed to allow users to opt into their own memory management schemes for niche cases. In any case, this seems even less relevant than your original claim.
> You are making a very strong statement here. Since we're on HN, I'll give you the benefit of the doubt.
It's a strong statement in that it uses absolutes, but it's pretty obvious. "No GC" is not a requirement, it's an implementation decision that assumes GCs are incompatible with the actual requirements (usually something like, 'there may not be unexpected, long pauses' for some values of 'long' and 'unexpected'). In many such cases (for example, game development), Go's low-latency GC is perfectly suitable. In other cases, you can just mind your allocations.
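"Minding your allocations" usually means reusing memory on the hot path so the collector has little to do. A minimal sketch using sync.Pool (the function and variable names are invented for illustration):

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// A pool of reusable buffers: Get returns a recycled buffer when one is
// available, so the hot path produces almost no garbage.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()       // clear contents before returning to the pool
		bufPool.Put(buf)  // make the buffer available for reuse
	}()
	buf.WriteString("hello, ")
	buf.WriteString(name)
	return buf.String()
}

func main() {
	fmt.Println(render("gopher")) // hello, gopher
}
```

This kind of reuse is routine in latency-sensitive Go services and is what keeps pause pressure low even under heavy load.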
> Of course you can compare the two languages, but my argument is that it's essentially a useless comparison.
I disagree. To pick a language to use for a new project, you're best served by comparing the candidate languages (as opposed to picking one at random).
Strictly speaking, it's one option; no worse than C. That said, I'd probably just manage my own arenas and avoid dynamic allocation.
> Two can play at that game. You are completely wrong. If your best example of an environment where a GC might not be a good fit is game development, then you are in no position to use such absolute statements
No need to get combative. I was quite clear--there are many systems that are frequently described as "not-GC friendly" (like games) for which Go's GC is suitable. These are soft real time systems. There are other hard real time systems for which Go's GC is not suitable, like the automotive systems I worked on in my last job. Go can serve hard realtime systems too (just use it like you would use C), but it's often not the best tool for that job.
> Personally, I tend to add "I think" or "I believe" whenever I am making statements that are outside of my realm of expertise.
I was an embedded engineer for a critical hard real time system in a previous life. This _is_ my area of expertise.
> This prevents me from making a fool of myself
If you say so. ;)
> Good luck getting Go to run efficiently on a microcontroller without a ton of weird hacks.
Granted, but this has nothing to do with GC.
> ?
Picking at random is an alternative to choosing a language by comparing candidate languages.
Go's compile speed might be impressive compared to C++'s, but it is hardly so when compared against many compilers from the 90s, before C and C++'s massive adoption.
When C++ modules finally get integrated in C++20, that advantage will fade away.
I’d rather have my compiler and tools spend the time doing their job well instead of me spending ages because my language doesn’t provide the right abstractions and forces me to build everything from scratch.
What do you mean by "no solid dependency management"? Do you mean out of the box? This used to be true, but with glide and dep (https://github.com/golang/dep) the situation is improving. It is not npm or bundler yet - but it's progressing.
I assume NPM has gotten a lot better in the last few years if it's being used as an example of something to strive towards.
Every time I have to mess with the Node ecosystem, it really makes me question the direction I'm heading with the research or project I'm working on, just because of the haphazard way modules seem to be used and dealt with, both at the dependency level and at the repository/packager level.
The last time was a couple of months back with ReasonML. Reason seemed (and still seems) really cool, but ultimately I wasn't willing to put up with the umpteen layers of indirection just to compile to JS.[1]
1: I know there's an OCaml compiler as well, but it didn't easily support some of the use cases I was looking at.
They're pretty inseparable. If npm is great, and the community and default registry are terrible, I'm still not going to want to use the tool if I don't absolutely have to.
Yes, Go decided to wait and let the various competing systems duke it out, in much the same way that node.js left it out for a while.
I use git as my dependency management. I pull a specific version of a golang dependency and put it into my private git repository. Then I don't have to worry about someone's GitHub getting hacked, repositories getting deleted, etc.
Those problems have all happened with npm and github in the node ecosystem. Go solves these problems by allowing you to decide on how to manage your dependencies.
Do you also fork all their dependencies and subdependencies and fix the paths in all of them? Otherwise you are still at the mercy of account/repo renames or deletions.
I was using one of the points raised in the linked article to refute OP's point about "personal opinions" in the context of critiquing programming languages in general.
I've been using `glide` for a few years now, no issues at all. It's the same as cargo (Rust), bundler (Ruby), npm (JS). If anything Go's situation is probably better, since deploying dependencies in those languages is really painful.
And not mentioning the security issues associated with having a central repository for packages.
Ruby and Rust both let you have your own repository for packages, so that doesn't make much sense, and AFAICT Rust literally needs two lines of code to pull in a new package, so I can't exactly agree that it's painful.
I don't know much about Glide but if it uses Git for packages then that's pretty much centralized thanks to GitHub, too. At least the properly centralized solutions help assure you that, unless you're running a server yourself, the package is guaranteed to exist online, and even the developer can't remove it. On GitHub there's far too many things that could go wrong, and I have much less confidence that repos will continue to exist.
I try to read one of these on the major paths I'm not on. It's answering the question, "should I build my next project in what I know, or checking something new out?"
Seeing these for Go, Rust, Elixir, Elm, and others let me narrow it down before spending the 4-8 hours to figure out if it works for my brain and use case. (For example, seeing this tells me that Go is bad for the things I want to build because my brain works in types and good error checking.)
If you don't like articles on this topic, then don't read them. Why bother to tell us all about how you don't like this kind of article and you would much rather read about X instead? Plenty of people do, so leave them in peace to get on with it.
> The standard JSON encoder/decoder doesn't allow providing a naming strategy to automate the conversion
This is dangerous. You should be tagging all your fields even if the name matches exactly because, especially if inconsistently applied, someone might not realize that they are changing public schema with a refactor. If tags are completely missing in your codebase, you have to research every single type to determine if it gets serialized to JSON somewhere (good luck).
Speaking of magic, json tags are weird stringy magic only used at runtime via reflection.
If I have a struct with the tag `josn:"foo"` instead of "json", I won't get an error, it'll just silently blow up.
If I add a new field, I have to manually add the tag too.
You know what's both less magic and less fragile? The rust equivalent.
In rust, I write '#[serde(rename_all = "camelCase")]' above a struct. If I typo, it won't compile. If I add new members to the struct, they'll work without me having to remember some boilerplate.
If I have one that needs an exception, I can put '#[serde(rename = "foo")]' above just that one. It also won't compile if I typo.
This is the alternative to what go has; compile-time safety, language-features that let a library provide such naming strategies, etc. It's less magic since it doesn't rely on these weird conventionally named tag things that are only sorta part of the language.
I suspect what the author meant by naming strategy was something like that. Your argument that the author is averse to being explicit is a strawman; the author appears to be asking for a struct-wide way to be explicit.
> Speaking of magic, json tags are weird stringy magic only used at runtime via reflection.
Annotations are used at run time via reflection. JSON marshaling is a common use for annotations.
> If I have a struct with the tag `josn:"foo"` instead of "json", I won't get an error, it'll just silently blow up.
Well, it won't blow up; it just won't function as intended, much like a logic error. A linter, like the one that ships with VSCode, will catch this for you.
Annotations don't need to conform to any format and are not compile time errors. I am unsure how this could change while maintaining the Go1 guarantee. Perhaps this could be an area of improvement going forward - having typed annotations.
> If I add a new field, I have to manually add the tag too.
It will work in Go too: a new field marshals under its own name by default. If you want it to deviate from the name of the struct field, you need to describe the mapping. It's like complaining that when you add a field to your database, you need to add it to your code too. Sure it's not as powerful as Rust, but that's the point.
> In rust, I write '#[serde(rename_all = "camelCase")]' above a struct. If I typo, it won't compile. If I add new members to the struct, they'll work without me having to remember some boilerplate.
That's handy, but incompatible with Go 1. It is worth noting that Go and Rust are different. Rust is more focused on power, while Go has more of a focus on simplicity.
> But specifically on simplicity of implementation
I'd suggest a simpler language allows for a simpler implementation.
> It wasn't enough to just add features to existing programming languages, because sometimes you can get more in the long run by taking things away. They wanted to start from scratch and rethink everything. ... [But they did not want] to deviate too much from what developers already knew because they wanted to avoid alienating Go's target audience.[1]
Sometimes for simple problems a simple tool is the best fit.
> Annotations are used at run-time via reflections
They're called tags, not annotations, per the spec (https://golang.org/ref/spec). I don't know what you're trying to say there, but it's using wrong terminology.
> well it won't blow up, it won't function as intended - much like a logical error
Aka "silently blow up", I could have used a better phrase, but I do indeed know what happens and that's what I meant.
> A linter, like the one that comes with VSCode will catch this error for you.
How is that possible? As you say on the next line, they don't need to conform to any format. How can my linter know that I don't have a package that parses "josn" tags, and that I typoed?
Please link me to the lint rule if it exists.
> That's handy, but incompatible with Go 1
So? It is, but the point of the article is that go is badly designed; the fact that they can't change it makes that design all the worse since we must live with it.
Pointing out that go decided to freeze their language is totally irrelevant.
> Rust is more focused on power, while Go has more of a focus on simplicity.
Simplicity and power aren't always opposites, and apply to many different conflicting pieces of a language. This is a massive over-simplification.
Go optimized for simplicity of the language spec and simplicity of the compiler at the expense of the simplicity of writing correct code.
> They're called tags, not annotations, per the spec (https://golang.org/ref/spec). I don't know what you're trying to say there, but it's using wrong terminology.
My mistake
> Speaking of magic, json tags are weird stringy magic only used at runtime via reflection.
Yeah, that is pretty awful. I'm a big fan of strongly-typed attributes (having come from C#).
> '#[serde(rename_all = "camelCase")]'
That's 99% perfect because the locality of the schema contract is near the field.
I mostly have contention with using a global settings singleton (or similar) as a configuration store for this stuff as a good example. With examples like yours, my comment wouldn't have been posted.
Buffering channels doesn't avoid deadlocks. As I understand it: buffering is a performance feature; if your code isn't correct without buffered channels, it isn't correct.
Not true. Buffered channels are able to express semantics impossible to express with unbuffered channels. E.g. a counting semaphore. The most common buffered channel buffer size is one, and the code would not be correct with either zero, or more than one. Using buffered channels in Go for their semantics instead of their performance is a very common idiom in Go. Much more common than adding buffering to channels as an optimization.
Fair enough. But it is important for people to realize that buffered channels aren't "async" channels; they're async until they fill up, at which point they resume blocking until there is a slot available. If you have some sort of channel network setup that is deadlocking with unbuffered channels and you "fix" it by adding some buffering in, unless you've got a very solid analysis as to why that buffering is correct, you haven't fixed it, you've just delayed the problem until load is higher.
It's in many ways academic anyhow. Most of the tricks I've seen for buffered channels are better done some other way, and in practice I almost always use unbuffered channels. Most of the time that someone thinks they want a buffered channel because of performance issues, they're actually exactly wrong and backwards... if there is some sort of mismatch between the speed of the consumers and producers you generally want the coupling introduced by unbuffered channels, even if it is counterintuitive.
Yes, buffered channels should almost never be used for performance reasons.
Buffered channels should be used for their semantics, but in general unbuffered channels are preferred to buffered channels if possible because buffered channels cause combinatorial explosion of state, and are hard to reason about accurately.
> Buffered channels are able to express semantics impossible to express with unbuffered channels.
You can build a buffered channel from a pair of unbuffered channels with an actor/goroutine containing a buffer connected to the output of the one channel and the input of the other. Therefore, anything that can be expressed in terms of buffered channels and actors/goroutines can be expressed in terms of unbuffered channels and actors/goroutines.
I'll grant that some things are very tedious to express, and it's a bit of a Turing-tarpit argument.
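A sketch of that construction (function and variable names are mine): a goroutine holding a slice sits between two unbuffered channels and behaves like `make(chan int, capacity)`, including blocking senders when full. It uses the nil-channel trick to disable select cases.

```go
package main

import "fmt"

// buffer emulates a buffered channel of the given capacity (must be >= 1)
// using two unbuffered channels and an internal slice.
func buffer(in <-chan int, out chan<- int, capacity int) {
	var buf []int
	for {
		var outCh chan<- int // nil channel: its select case is disabled
		var next int
		if len(buf) > 0 {
			outCh, next = out, buf[0]
		}
		var inCh <-chan int
		if len(buf) < capacity {
			inCh = in // stop receiving when full, like a full buffered channel
		}
		select {
		case v, ok := <-inCh:
			if !ok { // input closed: drain the buffer, then close the output
				for _, w := range buf {
					out <- w
				}
				close(out)
				return
			}
			buf = append(buf, v)
		case outCh <- next:
			buf = buf[1:]
		}
	}
}

func main() {
	in, out := make(chan int), make(chan int)
	go buffer(in, out, 2)
	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in)
	}()
	for v := range out {
		fmt.Println(v) // prints 1 through 5 in order
	}
}
```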
> Up to recently there wasn't really an alternative in the space that Go occupies, which is developing efficient native executables without incurring the pain of C or C++.
That's just plain not true. There are so many languages that compile to machine code. So, so many.
Additionally, OCaml, SML, Free Pascal, Delphi, Oberon, Oberon-2, Oberon-07, Active Oberon, Component Pascal, Basic, Common Lisp, Scheme, Java (yes all commercial JDKs do support it and on Android 5+), Clipper, Modula-2, Modula-2+, Modula-3, Dart 2.0, Swift, MLton.
It just happened that with widespread adoption of C and C++, many devs lost sight of alternatives.
You are ignoring Ada. In fact most problems with the Go language are solved in Ada (tasks vs goroutines and channels, generics vs interface{}, synchronized data structures, variant types vs runtime plumbing to create unions...).
D, Objective C, C# come to mind. Further away, Eiffel and Ada. More esoterically, Fortress. And I don't even really pay attention to the world of programming languages.
One of the changes no one seems to be talking about is in leadership. Rob, Robert, and Ken would agree on a feature going into Go before it did. Now Russ Cox is in sole control of the leadership. This is happening at a point when work such as Go 2 and vgo are happening.
I think one of the reasons Go is so good at being simple (relatively) is that a feature had to have no existing alternative in the language in order to make it in. I forget the article I read this in, so I may have the semantics slightly off.
With just rsc we lose a little of this. There are many other incredibly talented engineers involved, such as Ian Lance Taylor. I'm curious whether this philosophy will still be important in the future.
Where is the discussion of Go 2.0 happening? Are they still shutting down discussion of features they don't want to hear about like they did through all the Go 1.* years?
"barely know" isn't the case here. The guy seems to have a solid understanding. I certainly don't agree with his opinion on golang, and golang is still a fantastic choice for many of my uses.
Yep. I don’t agree with his opinion, but he probably doesn’t agree with mine about Rust: I’d rather gouge my eyes out than use that shit for anything other than a C/C++ replacement.