And yet it still feels like a language out of the 60s or 70s. To me, Go feels like the programming language equivalent of insisting that everyone communicate just by grunting because it's less complex to teach people just grunting than it is to teach them more effective forms of communication. And in theory, they still can communicate all the same things with more grunting! Some people refer to this as simplicity, and in a way it is, but it's simplicity in the language constructs at the expense of more complexity in the usage of the language to form more complex thoughts. There must be a golden balance here, but I don't feel like Go is it for all the fanfare it has been getting lately.
> And yet it still feels like a language out of the 60s or 70s.
I started programming in the 80s, and Go feels modern even compared to the mainstream languages of that era (mainly BASIC, Pascal and C).
It also feels modern compared to the first versions of Java in the 90s.
You don't like Go and I respect that, but Go definitely reflects its time, the 2000s, with its syntax, automatic type deduction, GC, gofmt, interfaces, goroutines, closures, built-in HTTP, built-in testing framework, etc.
C++'s template metaprogramming was discovered by accident, and is a very bad example of metaprogramming as a whole. (Techniques like SFINAE and CRTP feel more like hacks, and leave a bad taste in your mouth when you use them.)
What we should talk about instead: hygienic macros in Scheme/Rust, AST-level modification in Nim, dependent types, etc. I think metaprogramming is an aspect of programming that should be explored more, and is definitely useful for reducing boilerplate.
C++ template metaprogramming may be overused and very difficult to use, but its prevalence despite such difficulty precisely shows how much such a type level metaprogramming tool is desired (compared to syntax level ones like lisp macros or text level ones like C preprocessor and various code generators). New programming languages should try to improve upon C++ template metaprogramming and make more powerful and effective metaprogramming language constructs, instead of completely abandoning the front.
The source of complexity is in the problems C++ is designed to solve. I use C++ metaprogramming for problems that require it.
Sure, Boost is full of metaprogramming insanity, but with today's C++ standard library it's not necessary to use Boost.
10 years ago template heavy code in C++ did suck, because we had slow compilers, dumb build systems, 7200 rpm HDDs and a need to use Boost to augment the bare bones standard library.
I'll be the first to admit that C++ has decades of cruft and design errors. But the metaprogramming works pretty well now. C++'s current problems are (IMO) weird undefined behavior, e.g. the distinction between POD and non-POD types; having both pointers and references, the ancient exception system, and in general a lack of tools for enforcing memory safety.
There's a balance with both expressiveness and type systems. Too little (PHP, JavaScript) is suboptimal, and too much (Scala, Haskell, Rust) is also suboptimal. Please note it depends very much on what you're building, as to where that tradeoff lies. For some things I use Rust. For others I use JavaScript. I don't touch PHP, Java, Scala, or Haskell for anything in the world - you can't practically pay me enough to want to work with them. For me, for web applications and server side software, Go is in the sweet spot more than any other language I've tried, and I've tried almost 30 by now.
Go is not always elegant, it's not always the right tool, and sometimes frustrating - but it's incredibly good at enabling better productivity than the alternatives in this niche. Sample size of one, this is my opinion, and other disclaimers apply.
> Too little (PHP, JavaScript) is suboptimal, and too much (Scala, Haskell, Rust) is also suboptimal.
I find myself thinking the exact opposite. When I want to do something quick and dirty, JavaScript or Ruby is my go-to. When I want to do something right, I do it in Haskell or Rust.
The middle of the road options I find are worse for anything.
Unless you want to hire big teams. Java, C#, Go are great for that.
I agree about quick and dirty, Python has been my goto there for over a decade.
But I find with more complex type systems and more expressive languages (which applies to both Haskell and Rust) I spend too much time thinking of which way I'm going to code something, what abstractions I'm going to use. Then I spend too long trying to make the compiler happy for decreasing marginal returns in reducing bugs. Then the compiler takes too long every time I want to run it. On top of all that, the tools are subpar. I like Rust, I find it so well thought out and elegant, but I still reach for Go to get things done.
All of that makes me considerably less productive. In Go I just use loops, slices, structs, and interfaces. There is usually only one obvious way to do it. It compiles right away, and I get on with my life.
It's not as pretty to look at, probably more lines of code, but it takes so much less time.
On larger teams having simple, consistently styled code is an understated advantage for code review and understanding the system (which together are probably 3/4 of the job.)
I think it indeed heavily depends on your marginal return in reducing bugs. I agree Rust is unsuitable for most software, because rightly or wrongly, bugs don't matter in most software.
Some of the bugs that Rust prevents do matter in most software but only occur in languages without GC.
In my view, Rust has a good chance of gradually replacing C++, because C++ devs don't shy away from using a large, complex language, and they can appreciate what the borrow checker does for them.
Yeah, that's where I use Rust now, where I previously would have used C or C++ (and if I previously needed C I always used C++ with extern "C".) These are places where I can't use Go because I need a language without a heavy runtime.
I use TypeScript for the browser and React native, because that is much better done in JavaScript land.
I have found with typescript that having a gradual, structural type system actually helps with quick and dirty prototyping, as it allows me to notate data structures. It's hard to find a reason to use JavaScript.
Note that type errors in typescript are actually just warnings and you can ignore them. I never do, but you can.
I've had similar success with the gradual typing in Python. NamedTuples are far superior to ad hoc dictionaries, for both documenting data structures, but also as a prompt to consider the software structure.
I wasn't aware that typescript type errors were only warnings. Every webpack/typescript project I've used must have had the typecheck set to error, leading me to believe typescript was some draconian type checker akin to Rust or Haskell.
I am getting old and I am also finding I have just settled to languages I feel hit that balance for me.
Racket for general programming (it really is the best for people learning to program, and also great for making fun programs that are super easy to install on multiple computers).
R for statistics especially since I found the functional backbone of the language.
+ By declaring a field/variable []Thing vs []*Thing you get different for loop semantics. Way too easy to think you're mutating the array item when you're only mutating a local copy, or vice versa. If you change the field/variable you need to audit all your code to make sure you haven't broken things.
+ gofmt feels way out of date. These days clang-format (c++), prettier (typescript), black (python), scalafmt (scala) take care of wrapping/unwrapping lines such as function definitions or function calls. They basically cover all formatting needs so you never have to manually format anything.
+ Scope of element in for-range loop isn't right, so capturing that scope in a lambda does the wrong thing with no warning.
+ Encourages use of indexes; which is error prone, most modern languages allow writing most code without needing indexes using map/filter/reduce or comprehensions.
+ No help from type-system for use of pointer without nil check.
+ Very easy to get nils in places one would hope to be able to prohibit them in. E.g. using a pointer as a poor man's unique_ptr<> means that I also get optional<> semantics (without the type checking) when I don't want or expect them. It also allows aliasing when I don't want or expect it.
+ Difference between '=' and ':=' is silly, especially since ':=' can be used to reassign values. Even more frustrating that ':=' creates shadowing in nested scopes, so doesn't always do what one would expect it would do, such as accidentally creating a shadowed 'err' that doesn't get checked.
+ if/switch should be allowed to be expressions, allowing much safer single-expression initialization of variables, rather than requiring default initialization and mutation, which is much easier to get wrong.
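Since if and switch are statements in Go, the usual workaround is default initialization followed by mutation; a minimal sketch (the function and values are made up):

```go
package main

import "fmt"

// timeoutFor shows the statement-oriented pattern: declare with a
// zero value first, then mutate in each branch. A forgotten branch
// silently leaves the default value in place.
func timeoutFor(mode string) int {
	var timeout int
	switch mode {
	case "dev":
		timeout = 5
	case "prod":
		timeout = 30
	default:
		timeout = 10
	}
	return timeout
}

func main() {
	fmt.Println(timeoutFor("dev")) // 5
}
```

An expression-form switch could assign the result directly, making the "forgotten branch" state unrepresentable.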
> By declaring a field/variable []Thing vs []*Thing you get different for loop semantics. Way too easy to think you're mutating the array item, but only mutating a local copy or vice versa.
The loop semantics are always the same: the element variable is a copy of the value at the current index. Yes, that means that if the value is a pointer, the copy of that pointer can be used to modify the pointed-to data. This is something a good Go programmer should understand well, because this behavior goes way beyond for loops. For example functions - func (t Thing) vs func (t *Thing) - with the pointer version, the body of the function can modify the pointed-to data. Side effects! Just like the for loop.
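A minimal sketch of that copy-vs-pointer distinction (the type and field names are illustrative):

```go
package main

import "fmt"

type Thing struct{ Name string }

// renameByValue ranges over a []Thing; each t is a copy of the
// element, so the slice contents are never modified.
func renameByValue(things []Thing) {
	for _, t := range things {
		t.Name = "changed"
	}
}

// renameByPointer ranges over a []*Thing; t is a copy of the
// pointer, so the pointed-to data really is modified.
func renameByPointer(things []*Thing) {
	for _, t := range things {
		t.Name = "changed"
	}
}

func main() {
	byValue := []Thing{{"a"}, {"b"}}
	renameByValue(byValue)
	fmt.Println(byValue[0].Name) // a

	byPointer := []*Thing{{Name: "a"}, {Name: "b"}}
	renameByPointer(byPointer)
	fmt.Println(byPointer[0].Name) // changed
}
```

Same loop, same copy semantics; only what gets copied (value vs pointer) differs.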
> gofmt feels way out of date.
I like to think of it as stable. The biggest strength of gofmt is how universal it is. Everyone uses it and all code out there looks the same. There's a growing collection of Go code out there. If gofmt kept changing its style, then all the already published code would no longer match the standard. Thus, I think there needs to be a really strong reason to modify anything about it.
I actually like Go's simplicity a lot; it was very easy for me to learn and very easy to program in because of that. But I agree strongly with some of your points. = vs := specifically feels like a plain mistake, and even goes against Go's philosophy of having just one way to do things.
I think any new language should be designed around options instead of nils. F#, Rust, Zig show different ways to do this, and often any performance penalty can be compiled away.
if/switch being expressions is a simple and helpful idea, languages should allow this.
using map/filter/reduce as the idiomatic way to do things I am less sure about. This can come in handy but also would add a lot of complexity to Go, and in most languages these have a performance penalty.
It's important to remember that not all programmers are interested in languages; they just want to get their project done. So being able to hop into a code base with low cognitive overhead (because there are no mysterious features to learn), quick compile times, and explicit semantics can be really helpful. That can sometimes save you more time than typing less thanks to generics and metaprogramming.
'and in most languages these have a performance penalty' - pretty sure this is due to either bad implementation or because of additional guarantees they provide. Because fundamentally these constructs can be rewritten to be loops by the compiler, except where you're wanting to violate the guarantees they enforce (i.e., maybe you want to mutate every item in the array, rather than treating it as immutable; these won't do that). For those few situations you want to violate those guarantees, you wouldn't reach for these higher order functions. There's not really any reason not to include them except for language design ethos.
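In Go's terms, that desugaring is just an explicit loop; a sketch of what a hypothetical filter call would compile down to (the function names are made up):

```go
package main

import "fmt"

// keepEven is what a hypothetical filter(isEven, nums) would reduce
// to: a plain loop appending matching elements to a fresh slice,
// leaving the input untouched.
func keepEven(nums []int) []int {
	out := make([]int, 0, len(nums))
	for _, n := range nums {
		if n%2 == 0 {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	fmt.Println(keepEven([]int{1, 2, 3, 4})) // [2 4]
}
```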
> Encourages use of indexes; which is error prone, most modern languages allow writing most code without needing indexes using map/filter/reduce or comprehensions.
Sometimes (most of the time, actually) I want exactly that. Please, take a look at cryptography libraries and try to implement them without using indexes. Horrifying. Try to create a 3-dimensional array in Erlang, for example, and try to work with it![1] No thank you. I do like my arrays and indexes.
[1] I do use and like Erlang for stuff where I do not have to use arrays though.
This doesn't contradict the post you're replying to. Most times (as evident by the wide prevalence of map/filter/reduce functions) there is no need to access indexes. Other times, practically all languages that offer these functions allow you to write a plain for loop.
> + gofmt feels way out of date. These days clang-format (c++), prettier (typescript), black (python), scalafmt (scala) take care of wrapping/unwrapping lines such as function definitions or function calls. They basically cover all formatting needs so you never have to manually format anything.
gofmt is the best formatter out there because it is opinionated. The fact that people constantly tune their formatter (if there is one) in other languages makes it a nightmare to read different codebases. As someone who used to read code for a living, Golang is a pure joy to read, you always feel like you're in the same codebase.
> + No help from type-system for use of pointer without nil check.
I do wish they had an Option type
> + Difference between '=' and ':=' is silly, especially since ':=' can be used to reassign values. Even more frustrating that ':=' creates shadowing in nested scopes, so doesn't always do what one would expect it would do, such as accidentally creating a shadowed 'err' that doesn't get checked.
I too am not a fan of shadowing via :=
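A minimal sketch of the accidental-shadowing trap (the function names are made up):

```go
package main

import (
	"errors"
	"fmt"
)

func step() error { return errors.New("step failed") }

// shadowed demonstrates the trap: inside the nested scope, ':='
// declares a NEW err that shadows the outer one, so the failure
// from step() is silently dropped.
func shadowed() error {
	var err error
	if true { // any nested scope: if, for, switch...
		err := step() // shadows the outer err
		_ = err
	}
	return err // still nil: the outer err was never assigned
}

func main() {
	fmt.Println(shadowed()) // <nil>
}
```

Using plain `=` inside the block (with any new variables declared separately) assigns to the outer err instead.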
> + if/switch should be allowed to be expressions, allowing much safer single-expression initialization of variables, rather then requiring default initialization and mutation, which is much easier to get wrong.
These complaints may be valid, though for me, the upsides of Go makes it worth it, depending on what you want to do of course.
Arrays/slices/maps/pointers: Go abstracts over the inherent safety limitations in ways that do not always make much sense on their own; they may make sense from a blend of safety and performance perspectives.
Index usage: Go allows looping over elements instead of indexes and also provides the correct range of indexes in for loops. More functional expressions would mystify execution, while Go is one of the more WYSIWYG languages.
Shadowing: Yes, it's bad, but you also need many layers of deeply nested scopes for it to become problematic. The best-practice handling of the error variable has its problems.
Initialization in if/switch: Go being a niche lower level ("system") language, it's closer to the actual physical layer. A good idea not to do too much in the same expression/line, making it easier to read and making correct assumptions.
> Index usage: Go allows to loop over elements instead of indexes and also provides the correct range of indexes in for-loops. More functional expressions would mystify execution, while Go is more WYSIWYG of languages.
Indexes are the default though, you have to explicitly ignore them if you want to use the values directly.
More importantly, there are several simple operations that simply require indexes: most error prone is trying to create a pointer to an element in a slice. The natural, high level way of doing that would be
    pointers := []*element{}
    for _, value := range elements {
        pointers = append(pointers, &value)
    }

Which looks very nice, but does the completely wrong thing: every pointer ends up aliasing the single loop variable. You absolutely must use the index version if you want to do this. The same would be true if you were to capture the value in a closure.
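For contrast, a sketch of the index-based version that does what the naive loop appears to do (the type name is illustrative; note that Go 1.22 later made loop variables per-iteration, but under the semantics discussed here the index form is the safe one):

```go
package main

import "fmt"

type element struct{ n int }

// pointersTo takes the address of each slice element by index, so
// every pointer refers to distinct backing storage rather than to
// one shared loop variable.
func pointersTo(elements []element) []*element {
	pointers := make([]*element, 0, len(elements))
	for i := range elements {
		pointers = append(pointers, &elements[i])
	}
	return pointers
}

func main() {
	elems := []element{{1}, {2}, {3}}
	ps := pointersTo(elems)
	fmt.Println(ps[0].n, ps[1].n, ps[2].n) // 1 2 3
}
```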
Most of my complaints, I feel, are pretty well documented: no ability to do generics, no operator overloading, no way to make your own for-each-style loop objects, very manual and repetitive error handling. These are all simplifications to the language spec at the expense of more complicated usage. For the most part, I've been trying to understand how to express why I get a pretty bad taste in my mouth when using or looking at Go code, and why it feels so much less productive to me than the higher-level languages, which are what it has been making dents in more than anything else. I especially wanted to understand this since I do see myself as someone who values simplicity. In the end, it's because I value simplicity in usage, rather than having the simplicity be in the language constructs. I'd rather learn a slightly more complicated language and be more productive with every task I write using it. I don't see Go users as cavemen. That was admittedly an exaggeration to express my point clearly.
> In the end, it's because I value simplicity in usage, rather than having the simplicity be in the language constructs.
That's interesting, because to me, Go accomplishes this fairly well. Generics, operator overloading, and custom 'for' loops make it easier to write "clever" code with surprising behavior. Go's design is not elegant like Lisp or Haskell; there are some warts and special cases that exist for pragmatic reasons. But it manages to be simple and effective, which is sadly a rare thing in the modern language landscape, where the mentality seems to be "more features = better language." 90% of Go's value is in what it takes away, rather than what it adds.
Except they didn't. Go has generic maps and slices. It was never their goal to create a language without generics. What happened is they failed to come up with a satisfactory design for user defined generics in time to make Go version 1.
Well, I don't subscribe to the deprecating aspect of the grunting stuff, but many of these discussions read very much like transcripts of Plato's cave.
I had the same experience as a Java dev back when Java 8 came out. To me, it felt great. To people using the functional languages this update ported features from, it looked poor.
It is hard to understand criticism of Go if you have not experienced languages with modern constructs such as, for instance, union types and pattern matching.
But if you have, you really feel like the language lacks something.
For me, it's not even things like that. It's boring things like push, pop, erase, index_of, etc. that probably everyone has seen but that can't be written in Go.
> And yet it still feels like a language out of the 60s or 70s.
Lisp is from that era and is far more powerful.
> To me, Go feels like the programming language equivalent of insisting that everyone communicate just by grunting
This is fair. I mean, Go is outright _primitive_. The most notorious things about the language itself are goroutines, channels, and interfaces. This is why there is so much discussion around, for instance, generics. No one really "wants" generics. Instead, they just need to not repeat themselves. There are very few ways to work around issues in the language itself.
Another issue is that, if a library implements something in a way that doesn't suit you, or you need to add extra behavior, you are out of luck. Say you want to add distributed tracing to code that's not owned by you (even if written in your company). You'll have to track down people; you cannot "annotate" your way in as other languages can.
That said, the fact that it is mind-numbingly dumb can be a feature. Everything is damn obvious. Error handlers? Staring at you in the face like the sore thumb they are supposed to be. You can't even get too fancy with 'design patterns', just go write the function you need to solve your problem and stop reading that Gang of Four book.
They are fixing the Go modules mess, which is one of the last pieces of the ecosystem. No "coding standards" document – it is whatever go fmt and the go compiler says it is. Lots of trivial things are errors (like unused variables), which helps keep the codebase sane. The ecosystem and toolset are great (I feel like Rust learned from Go and improved on it).
Go has to be compared with alternatives. Rust? Awesome language, but slightly different niche with a far higher learning curve. Java? Please. C or C++? They are the opposite of simple (C is a larger language, but with far more baggage, don't get me started on C++). I can't include Python and ilk as their purpose is completely different, although for things like backends both could be used.
What else could we use to write Docker or Kubernetes, that would be more expressive, and still learnable in a weekend?
> There must be a golden balance here
I would argue that Go has not nailed it completely, but it is in the ballpark.
> I can't include Python and ilk as their purpose is completely different, although for things like backends both could be used.
Yet many real-world sysadmin programs formerly written in Python are now written in Go. Go's compilation to a single statically linked binary and tool support for easy cross-compilation make it popular for these use cases. And when many people say "Go is blazingly fast!", they are comparing it to Python.
> What else could we use to write Docker or Kubernetes
> Lots of trivial things are errors (like unused variables)
My learning experience with Go ended when I happened upon this. Unused variables aren't even worth a warning, and yet it is absolutely impossible to get around them in Go.
Maybe it's just my style, but I like to start out code with methods and variables in rough outline, and then build it from there. If a language makes that impossible well...there are other languages. Also, I didn't want to learn what other doozies Go had in store if they made that kind of stupid nanny issue a categorical imperative.
And that's a fine style to have - as long as they're not left behind when you commit them. Go is a bit painful in this regard, but it's targeted at maintainability by large teams over a long period of time, sacrificing local developer ergonomics in the process.
I mean I wouldn't object to it being a bit more forgiving in some instances, as long as it's as anal as can be when you commit some work and / or share it with others.
Of course, it's an open source project so in theory the unused variable check can be disabled.
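Short of patching the compiler, the usual per-variable escape hatch while sketching is the blank identifier (the function here is a made-up stand-in):

```go
package main

import "fmt"

// computeSomething stands in for whatever half-finished code
// you're sketching out.
func computeSomething() int { return 42 }

func main() {
	draft := computeSomething()
	_ = draft // silences "declared and not used" while prototyping
	fmt.Println("ok")
}
```

It keeps the outline compiling, and grepping for `_ =` before committing catches the leftovers.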
> but I don't feel like Go is it for all the fanfare it has been getting lately.
What happens if the 'fanfare' doesn't stop anytime soon?
Will you just keep watching from afar that ever growing crowd of grunting and stinky subhumans?
Most "corporate" languages have that character (as opposed to hacker languages that give you expressiveness and power). Go looks pretty good for a corporate language.
I haven't looked at Go because...my corporation doesn't really use Go. :shrugs:. I don't understand why anyone would get personally excited about it unless it gives them a breath of fresh air from the worse languages they have to deal with, like as an officially-sanctioned second language where they work, or they just don't look at languages very much.
Google hand-crafted a language with the intention to make the production of code for their most important use cases cheaper. It was never their intention to build the general-purpose PL of the future. But then, you cannot put it this way in your marketing brochure.
It wasn't "Google", but a handful of employees there. Second, they designed it with their perceptions of some of Google's use cases, which may or may not have aligned with the actual use cases. We can see this in how much golang is actually being used at Google for critical components, where C++ and Java still dominate.
This is the most remarkable characterization of Go I have ever read. "Grunting" here probably means: short, simple syntactical structures, which as someone who has coded in BASIC, C++, Java, Python, JavaScript for over a decade, I feel is a positive. The syntax of the language gets out of the way so that programmer brainpower can focus on the logic. The best tool is one that feels like it's not even there. The author here is clearly offended by Go, but the real reasons remain unclear...
If you compare Go to C++, Go has gotten more organized but still retains understandability, and in my opinion, C++ has become unrecognizable. I did 10+ years of C++ until around 2012, and since then the changes have been so quick and so drastic I would not call myself an experienced C++ programmer anymore. It's a shame because I love C/C++ but not what I see today, because they have lost what makes Go great: an opinion. They are trying very hard to be everything and it has changed the language too quickly to keep up.
Without a doubt, C has "lost" nothing, because it barely changed. And most of the modern code written in it still has mostly the same style as in the nineties.
I guess for C++ you are comparing the pre-C++11 era to the post-C++11 era (because C++14 and C++17 were more minor, and for the most part a kind of finalization of the C++11 spirit). And the idea that C++ has lost an "opinion" is merely _your_ opinion (how meta :p), which I don't share at all: C++ was always kind of multiparadigm, and like it or not, people attempted to do all the modern stuff even with the pre-C++11 core language. It sometimes did not end well (the first smart pointers...), it was annoying to miss some features like variadic templates, and there were missed optimizations (now permitted with rvalue references and move semantics). SFINAE was abused and let us write great things at compile time at the cost of truly insane syntax, but now there is work underway to let us write that kind of thing less insanely. The core language basically evolved to make what people were already trying to do actually doable, with somewhat sound results and/or less boilerplate and fewer hacks (not statically nor dynamically checked in tons of cases, though, but that is another story).
Very old C++ was also very often OOP-heavy, in the Java/GoF sense/spirit. Some people detest that (myself kind of included), so I'm glad there is something else to do in modern C++ than creating SingletonFactory classes and that kind of insanity.
I'm not following. Can you give an example of such changes which make C less-usable? Also, are you referring to changes in the C standard or just the implementations?
I can see where you're coming from and I'm sure it must feel that way, but I'm not sure it's actually true. Most of what's happened from C++11 on can be summed up into a few priorities:
We want to make type deduction simple.
We want to make writing templates easy.
We want to make doing things at compile time easy.
We want you to be able to write local closures easily.
You can contextualize the vast majority of the language changes from C++11 on this way. I just don't think having the same priorities for 10 years counts as constant drastic change.
It's not that weird if you consider the history. With C++, you can drop down and do lower level "C" stuff when you need to. C++ syntax is, with a few minor exceptions, compatible with C. Many, many 90's and early 2000's "C++" apps are written in a simple "C with classes" style.
I'm a newcomer to C++ and by no means intend to sound like an expert but I think the key to working with it is to treat it not like one language in which you should try to learn most of the features (like say Python) but instead treat it as a smaller language (e.g. C) with a bunch of optional extensions that you can choose to use if you wish (like classes, templates, smart pointers etc.).
If you pick a subset of C++ and keep some discipline (e.g. follow one of the existing style guides from LLVM, Google, Mozilla etc.), I've found it to be a very nice, pleasant language to work with.
Although I share similar experiences with what you're saying per se, I don't understand why people compare Go to C++. Comparing Go to C makes sense; to Rust or C++ does not.
I'm comparing it as a language that is evolving. Not as a point-to-point comparison. I feel like Go has evolved properly and C++ is mutating too quickly for its own good.
Nah, C++ is hardly mutating, in the sense of being backwards-incompatible. Rather, it is expanding incrementally (these days in big chunks). Anyway... Would you compare Python and C++, then? The goals of the languages are clearly different.
C++ was not intended to be "opinionated" in the way that you believe; consider taking a look at Bjarne Stroustrup's "Design and Evolution of C++", or just his presentations at CppCon conferences over the past several years which focus on the history of the language.
Now, it's true that C++ has seen significant changes, but:
1. C++ is, indeed, a multi-paradigmatic language; and in recent years it is becoming easier to write code using other paradigms - particularly functional. But your old code, and programming style/paradigm, still work just fine. C++ changes by addition, not by removal (with tiny tiny exceptions to this rule). So you can keep your opinion.
2. It was "always" known that C++ is a work-in-progress language, and that a lot of its idioms are temporary stubs until something better is formulated, implemented, tried and standardized. In a sense, we don't even have C++ yet; we're just gradually getting parts of it.
3. The most significant ones so far were in 2011, while you were still writing. (C++2020 is arguably as much of an update as C++2011.)
4. While the changes make the language larger, they also very often simplify a lot of coding tasks. A prominent example: With C++2017 you can avoid a lot of "template metaprogramming" voodoo in favor of compile-time-evaluated code.
IMO, I wouldn't say C is trying to be "everything for everyone". In my mind C has still mostly retained its simplicity, which for me is its most defining feature. There are still not that many concepts you need to learn to code in C. You still need to be very organised to write safe and correct code in it, but that has not changed.
But I might be wrong. Someone with better experience with C throughout its entire life is free to correct me :)
When I last used it the error handling drove me a bit mad. I get the reasoning, but the amount of identical error checking code for (almost) every single call didn't make me feel terribly happy compared to my easy life in the erlang/elixir 'let it crash' world :}
All points are like stale mantras by now, but still:
* Errors happen. All the time. And how you handle them matters. The err != nil path is not unlikely.
* If your error processing code is repetitive, you're doing it wrong. Wrap your errors. Add context.
* Errors are values. Make them useful. Again, add context.
Honestly, while a lot of people look at Go code and think “this is 50% error checking”, every time I read an under-handled piece of C or under-caught piece of JavaScript, I think to myself “where is the error checking?”
In my case though I've been changed by elixir and erlang, have you spent any time with them?
Once you've gotten around the whole 'let it crash' concept and supervision, and _NOT_ handling errors, it's very difficult to go back.
I wish I had a "systems-level" language where I could do this stuff, but I guess I'll continue to wait..
FWIW my C ends up having probably equal amounts of error checking as my go does, somehow it feels less repetitive though (perhaps because I use a macro or two for such things in C)
I use Go professionally and like it on balance, but error handling is my least favorite part of it.
* In practice there are not that many ways of combining fallible operations. "if err != nil { return nil, err }" is the right thing 99% of the time. Occasionally you might want to perform a set of independent operations (probably in parallel) and then handle their errors as a group. It's almost never the right thing to write complex, arbitrary logic about combinations of error values, or to ignore them. Go makes the wrong things too easy and requires the right thing to be spelled out explicitly each time. Somewhat mitigated by lint rules and editor macros, respectively, but still.
* Wrapping your errors produces great messages but defeats type switches for structured error handling. This pattern makes it easy to not actually have the error handling logic that appears to be there.
* You need to be very careful about adding context, because you need your error monitoring system (Sentry, etc) to understand what are instances of the same failure mode vs. different failure modes. I find I can't really put any more information in the Error() string than would be in a stacktrace, or the cardinality of "unique" errors blows up. You can attach information to custom error structs and report it in dedicated log fields, but then upper layers need to know how to unroll the error stack (e.g. errors.Cause()) and access those fields.
> Wrapping your errors produces great messages but defeats type switches for structured error handling. This pattern makes it easy to not actually have the error handling logic that appears to be there.
Not true. I've been successfully using error wrapping with type switches for years. That's what unwrapping is for. Go 1.13 has the functions errors.Is(err, target error) bool, errors.As(err error, target interface{}) bool, and errors.Unwrap(err error) error. Package github.com/pkg/errors, which I use, only has the function errors.Cause(error) error, but writing your own inspection helpers is trivial.
> You need to be very careful about adding context, because you need your error monitoring system (Sentry, etc) to understand what are instances of the same failure mode vs. different failure modes. (…)
Not sure about Sentry, but Airbrake has no issues with that.
I think that the difference between the Go way and exceptions is that exceptions are “automatic” and thus invisible. In a language with exceptions what you get up-stack is probably something like:
FooException at /path/to/useless/file_1.x:42
/path/to/useless/file_1.x:69
/path/to/useless/file_2.x:108
…
/path/to/you/get/the/idea.x:100500
While in properly-wrapped Go code you might get something like:
performing request "abcd-1234" on host "pickles-1":
requesting "storage-4":
querying database "primary":
profile:
not found
And yes, you can attach code location information as well. See, for example, how Rob Pike et al. do error handling in their Upspin project[1] or the popular github.com/pkg/errors module[2].
Can you do that with exceptions? Of course! The problem is that suddenly your code will start to look a lot like "if err != nil".
> I think that the difference between the Go way and exceptions is that exceptions are “automatic” and thus invisible
That exceptions automatically take care of propagating context is a big plus.
> See, for example, how Rob Pike et al. do error handling in their Upspin project[1]
The fact that there needs to be an entire blog post just to explain how to use errors in golang properly is quite telling about how poor a job it does. Not to mention all the hoops they have to jump through to propagate useful information, all of which we already have in languages with exceptions.
Java is probably the worst language for error handling. It has checked exceptions AND unchecked.
I can't say I've ever seen a Java codebase that handles InterruptedException properly. It's so bad that most Java devs I've talked to don't even know what the right thing to do is.
And don't get me started about catching Throwable. Why make it so easy for application code to catch VirtualMachineError? I've not once seen Throwable or Error caught in a way that makes any sense.
Java seems to encourage a culture of not handling errors properly because it looks "messy". Then, these same acculturated devs try to bring that misguided sensibility about how code should look to Go and start complaining that they actually have to handle errors.
Yeah, Java botched generics by not adding exception sum types. stream.map(f).collect(…) should throw whatever f can throw, but instead f has to wrap any checked exception, which makes them nearly unusable in practice.
But the drudgery of explicitly passing every error up the stack over and over is not a good use of human beings' time. We can generate that if we must, but not actually read it.
Sure, it works fine until you start multithreading, where handling an exception doesn't end with unwinding the stack, because there are multiple stacks. So then you have to pass it as a value to some other thread's stack, and remember to catch it in every thread, or else your thread will silently die. But then you're right back to where Go is: treating errors as values.
I’m well aware of how to deal with the problems created by unchecked, stack-unwinding exceptions. The thing is, when I’m reading Go code, I can look at a call site and easily know if an error can occur and what happens to the error (including if it’s sent to another goroutine), using very simple and consistent mechanisms. The tradeoff is verbosity, but it is worthwhile verbosity IMO because it’s simple, consistent, explicit, and does not create the unnecessary coupling of checked exceptions. I appreciate this low cognitive overhead when I go back to a codebase 6 months later, or when onboarding a new hire.
At least you can't mistakenly ignore errors, unlike in a lot of golang code I've seen, where errors are either ignored or, worse, silently overwritten.
If you mean the code that never wraps anything and just does “return err”, then it could actually be better off using panics. In fact, it's documented in Effective Go[1]. If all you return in your API is a simple opaque “shit's broken, yo” error, then it is a perfectly valid approach to use panics. As long as they are in your package's insides, and as long as you return an error on the external border of your API instead of making users handle your panics.
an exception is the same thing as what Go does, with the difference that the error is just immediately returned by default, while in Go you decide what to do.
This can be helpful, in C# there isn't even a good way to find "all functions that might throw Exception foo". In Java this is slightly better since functions must declare what they can throw.
So Go is more typing than C#-style exceptions, but clearer as well. There are some middle grounds, such as the Java exception approach or the ? operator in Rust; Go is considering adding something similar to the latter.
> an exception is the same thing as what Go does, with the difference that the error is just immediately returned by default, while in Go you decide what to do.
Exceptions also give you a stack trace, where golang errors don't (you have to jump through verbose hoops to get something barely similar). Secondly, it's much easier to ignore errors in golang, or worse, silently overwrite them (I've seen both in golang codebases). Whereas with exceptions, the exception gets bubbled up until it gets handled, or it terminates the entire program. This approach is much safer than golang's, where it's possible to end up in a corrupt state due to its subpar error handling implementation.
Ahh, errors. A programmer's best friend or enemy or both? I may be a rookie, but I'll say this:
- A few years ago I chose to learn Python "The Hard Way"[1] because I already had experience with scripting and a little bit of C++/Java/PHP/Js. While I fell in love with the flexibility of the language — how idiosyncratic it could be made, how quickly fluency came — error handling remained a mysterious dark space to me, a "TODO" for later. [which I didn't because I didn't program much after this first real training]
- I later tried to become "Eloquent" in Javascript[2], and while I can never praise that book enough, the language's paradigm wasn't what made error handling 'click' for me. Now I see, but it took a trip far, far away. [Js isn't my problem space, and when it is I'd rather use TypeScript probably]
- And finally Go. While I haven't finished once "The Go Programming Language"[3] yet — just followed a few great online books/courses so far (notably by Jon Calhoun[4], Xoogler), and only built a few basic things — I can say this: now I get it. I mean errors. I really do.
I think these few points mattered:
- Go has error checking 'in your face' indeed and makes it useful, because passing a value makes sense, and error type is no longer abstracted as in more OOP paradigms, it doesn't feel "otherworldly" to the present code, it's right there in the imperative flow.
- Testing is well-integrated (`_test.go` files; I know, at a basic level, but that's what I expect from a 'standard' tooling). Because it's a core feature and quite easy, you tend to use it; and testing helps thinking (conceptualizing) of a 'meta' framework around error handling. As a Go programmer you quickly understand that it's clearly, totally up to you to define this "otherworldly" space where your code exits its normal 'world'.
I find it hard to phrase for now, but I'm positive Go's implementation of errors (essentially forcing you to go through it) made me level up big time. You clearly see the "normal execution space" and the "erroneous space", and how design helps you keep your code either well within the former, or gracefully through the latter. It feels incredibly safe, especially as a newbie.
Also, emmet snippets. Makes writing fast, but seeing the `if err != nil { return ... }` blocks is extremely beneficial to reading code, which is what we do most, by far.
Also, people don't mention these much but things like `panic()` and `recover()`[5] help you think of and learn about your own problem in elementary terms, first principles — do we need to halt execution or can we recover from this? It's all part of a sane and safe executive thread, or so it feels.
I don't know if I'll be programming Go at my job 5 years from now, I really don't; but I'll always have Go to thank for making me understand much in programming first principles.
I find it interesting that in many C++, Java and C# code bases I have read and worked on, someone cooks up an error type that they use to supplement exception handling. Exceptions are used to handle “truly exceptional” things like running out of memory, and statuses are used for run-of-the-mill errors like an invalid request being made. You choose the mechanism based on which you think suits the scenario better. I think it strikes a nice compromise between constant error checking and having one big error handler higher up in the call stack.
Yes, though it doesn't bother me. I also think that with the correct design you can eliminate a lot of those checks. (e.g, avoiding pointers that can cause nil / error returns)
About 8 years ago, I was writing some app in C#, and the recognised way of doing things was EntityFramework. I duly wrote a bunch of code around EF, only to notice that sometimes it wasn't populating with the data correctly. Digging into the documentation, I found that EF didn't actually do lazy loading properly, and that if you really wanted to be sure that the data was populated, you should call a method to check that the property you wanted had been populated, and if not, call another method to populate it.
I threw Windows in the bin after that. I uninstalled Windows, installed Linux, and started looking around for another language to code in. I found Go. Its total lack of magic, utter transparency and attitude of simplicity made complete sense to me.
Nearly 10 years later I'm still happily coding in Go. Thanks guys, this is awesome!
Today I have to write the SQL, and pull the results of the query into a struct, which implements relevant interfaces. It's more code to write, but it's simple, and straightforward, and it does exactly what it says it does, and no more. It's easier to write and understand, and trust. Most of all, trust.
Entity Framework is just an ORM. Every ORM has its quirks. It's unclear why you'd ditch a whole language, and even the OS, just because you didn't enjoy using an ORM.
If you want to write pure SQL and pull the results into a struct with interfaces, you could do that in C# also.
It became a symbol for the entire C# coding attitude/worldview/paradigm at the time. I lost it with EF because it just didn't do what it said it did, and all the documentation lied about what it was actually doing. It became the straw that broke the camel's back, the poster child for everything that was wrong with .Net and C# (at the time - I gather things have got better since).
Go became the antidote, the "way things should be", and moving from Win/C# to Linux/Go was eye-opening and very healing.
YMMV. I'm totally happy I found Go right then, when I needed it.
That's not my view of the market for Go devs. A few decent Go projects in your GitHub profile perhaps? Sounds like you're being filtered by HR. Just mention a few extra buzzwords.
"This pattern repeated for almost every language I looked at: it seems to take roughly a decade of quiet, steady improvement and dissemination before a new language really takes off."
It does help when the company who created the language is Google and they start using it in significant products like Kubernetes. :-)
I really like go, it's a wonderful language in many ways.
But I really wish we got official answers on things like the code duplication. Either give us something like generics or code generation, or just establish that we're not going to get generics and put an end to the discussion.
Nice, much like people would use Pro*C to generate C to talk to Oracle. I think converting a DSL to boilerplate (which should rarely be read and never edited) is key to living with a language that isn't good at reusable abstractions.
I don't believe that to be true. There are almost too many ORMs for Golang.
Golang has had an issue with fractured ecosystems from the start. Part of this had to do with the lack of a package manager. Part of it had to do with companies and developers wanting to move fast and not get bogged down trying to get changes through other people's processes and into their repos. Or perhaps development stalled out as the ORM met the requirements of the developers/project, or for other reasons.
If you evaluate each of the available ORMs carefully, it becomes apparent that a lot of them have different shortcomings and trade-offs, and were created around some theme or feature set missing from the others.
The actual reason is that the language lacks the meta-programming facilities required for creating effective ORMs and certain frameworks. (Whether this is a good thing or a bad thing is a subjective matter.)
Wait, is that true? I thought that the Java packages like Hibernate were just using method attributes and reflection...doesn't Go have the ability to add attributes to structs and a working reflection library?
They use generics pretty heavily too so it's not only reflection. Generics allow them to do a typesafe interface for the developer to use as well as making less work for the reflection code to do.
No, Go tags are like metadata which can be accessed through reflection at runtime. Java annotations allow you to actually generate code during the build phase.
I've encountered a bunch of code generation in client-go - I'm not very good at Go, so not sure what the term for the code generation directives in comments is, if there is one.
And annotations and classloaders. Haven't looked at Hibernate in years ("10 years" to be exact /g) but it is possible that it also requires Java and JVM's instrumentation facilities. So certain kind of software magic can't be done with Go.
Options in Go for effective ORM require source level processing.
I think in the past 10 years there's been a real move away from ORMs, so I wouldn't expect it in the Go ecosystem. SQL is the best way to interface with an RDBMS.
I haven't seen that at all. Every popular open source project that comes to mind uses an ORM in some form or fashion. I'm sure they exist but I can't think of a single one that is just doing raw SQL or Query building, across any programming language for that matter. Heck, I think GOGS/GITEA are the reason xorm even exists. There are a number of ORMs for golang, they just all have their pros and cons.
As a query DSL I don't find a problem with that. It's like saying that query strings in urls are untyped so don't match well. In reality the gap between untyped->typed is super narrow. You almost always immediately scan the results of your query into typed struct fields.
It ensures consistency can always be reached (the most important feature for a migration tool IMO, yet most tools do not guarantee this) and uses standard SQL syntax in migration files.
I was using Squirrel and goqu extensively. Both are nice, not as easy and flexible as knex. That is basically a language constraint, JavaScript’s dynamic typing allows for all kinds of shortcuts Go cannot offer.
It's really interesting to have seen a language launch, develop and evolve over the last 10 years. Back in 2009 I'd just started my career, and while I've been mostly doing Java development professionally, I do a lot of Go stuff in my side projects.
Happy birthday to a wonderfully refreshing language.
> Go has also found adoption well beyond its original cloud target, with uses ranging from controlling tiny embedded systems with GoBot and TinyGo to detecting cancer with massive big data analysis and machine learning at GRAIL, and everything in between.
I've often thought of this, but I'm not qualified enough to answer my own questions.
While a very expressive and extendable language like Python seems well suited to researching the desired abstractions when designing neural nets and ML paradigms, it seems to me that a production phase would benefit from translating it to a 'sleek' language like Go — which forces you to think closer to the machine, and opens boulevards to safe performance.
I wonder what people fluent in Go and other languages, experienced with ML, think of this. (it's a minor concern overall, but the future of Go in ML is of interest to me)
> especially the community working together.
I have to say this is one of the nicest things for a Go programmer, a 'Gopher' as they say. I've been online since the late 1990's (teenager back then), in all sorts of communities, and being quite civil and empathic myself I tend to have strong opinions about the 'mindset' or 'culture' on a very down-to-earth human level.
The Go community is among the very best. Period. One telling sign is how welcoming, respectful and elevating they are with newcomers, to Go or programming in general. It's a trainee's paradise, even if material is still relatively scarce compared to longer-established big names.
> In fact, people often tell us that learning Go helped them get their first jobs in the tech industry.
This should not go unnoticed. Few languages are able to move that needle — remember that for now, we increasingly need more developers than we can produce at a worldwide scale. It's both a matter of a language and its paradigm being in demand by employers and client projects; it's also a matter of said paradigm and implementation/spec being conducive to the making of programmers.
We know the names. C. Java. PHP. Python. COBOL before that (and I hear, today still). It's the big ones that consistently do it forever regardless of 'project / tech buzz flavor of the year or even decade'. I have every belief that Go is destined to join this select club of industry norms. Lots of reasons why but among the most fundamental are ticking all the right boxes at a low level — ease of programming, stellar readability and portability of others' code, really cross-platform target, self-documenting, etc. etc.
> While a very expressive and extendable language like Python seems well suited to researching the desired abstractions when designing neural nets and ML paradigms, it seems to me that a production phase would benefit from translating it to a 'sleek' language like Go — which forces you to think closer to the machine, and opens boulevards to safe performance.
This is a common perspective from outside of the ML field that arises usually from missing knowledge about the ML technology stack. Underneath the layer of expressive and exploratory Python lies a bedrock of optimized and domain-aware numerical libraries written in a combination of Fortran, C, C++, handwritten assembly and some HDLs. HPC resources are not cheap, so there is no software stack that provides a meaningful performance advantage over the one currently in use.
OK, I understand better. I was indeed thinking of whatever piece of code hits the metal last before kernel (including kernel actually), I see now that was badly worded in GP.
This does strike me as an interesting perspective though:
> a bedrock of optimized and domain-aware numerical libraries written in a combination of Fortran, C, C++, handwritten assembly and some HDLs
So, we're talking lower level, from the kernel and above but below TensorFlow and Keras etc. I take it?
If yes, we're into systems by that point; it just so happens that this is Go's domain. Think of shrinking a 1000-brains 10-year cycle to a 100-brains 5-year or less (you can actually DevOps/CI-CD and sprint that stuff with Go like you would with most high-level languages like Python or JS; a general "no-can-do" with low-level concurrent C++).
I may be wrong or totally out of my depth here, but I'm speaking of the library layer where e.g. in compute you'd see a CUDA-based industry versus whatever else (OpenCL...) take about a decade to unfold (and as much to move back); same in graphics, you'd see DirectX / OpenGL/Vulkan market penetration play over the entire lifetime of a GPU architecture (not 'gen', the core design like e.g. GCN).
I'm lacking the experience with ML workflows for sure; however the general hardware cycles and industrial market profiling seem to hold there as well (from GPU / TPU / FPGA / whatever fabrication, and the economics thereof; up to high-level software libraries and 'engineering muscle' that proponents of tech XYZ can push beyond marketing).
Where Go fits in that landscape is not so much versus a data scientist's Python but rather against infrastructure / Ops people's C++. Like Go is currently a strong candidate to refactor (parts of) the COBOL space, because development in it is basically 10x faster than in C++ or Java.
I'm thinking out loud. Don't quote me on any of this!
> So, we're talking lower level, from the kernel and above but below TensorFlow and Keras etc. I take it?
Tensorflow/Keras are frameworks built on:
> a bedrock of optimized and domain-aware numerical libraries written in a combination of...
You are not making any calculations in Python (if you do you are using native numerical libraries via FFI mostly), you only setup your workflow, prepare data etc. Heavy lifting is done in low level libraries used by ML frameworks.
That's why Go will not make it drastically more performant/better. Additionally, most people doing ML know Python; almost none know Go. 99% of tutorials, ML libraries, etc. are in Python. Not enough benefits to leave such a rich ecosystem in favor of Go. I write a lot of Go, but even I would prefer Python over Go for any ML work.
> 99% of tutorials, ML libraries etc are in Python. Not enough benefits to leave such an rich ecosystem in favor of Go. I write a lot of Go but even I would prefer Python over Go for any ML work
I can give you one reason: Most of the ML tutorials online work on toy problems. Full stop. When it comes time to deploy the models, good luck, you need to move heaven and earth with devops stuff. Dockerize all your programs, add more heft.
Not so when using Go directly. There's a reason why data engineers are more in demand now than data scientists. With Scikit learn, keras etc it's easy to build models. It's not easy to deploy models to production. Half the tutorials don't teach the important bits: that your model needs to live in production.
Now, if you write your programs using Gonum or Gorgonia, you need to think a lot deeper about what your model is doing, about memory about things that software engineers think about. It's not easier, but it's the only sustainable way forwards.
> I can give you one reason: Most of the ML tutorials online work on toy problems. Full stop.
And most libraries and implementations are also in Python because they work on toy problems?
> There's a reason why data engineers are more in demand now than data scientists. With Scikit learn, keras etc it's easy to build models. It's not easy to deploy models to production. Half the tutorials don't teach the important bits: that your model needs to live in production.
There is also a reason why you employ full stack web developer instead of frontend developer. I can tell you what is the reason - it's cheaper than employing frontend developer and backend developer. And for toy problems you can hire fullstack developer.
> Now, if you write your programs using Gonum or Gorgonia, you need to think a lot deeper about what your model is doing, about memory about things that software engineers think about. It's not easier, but it's the only sustainable way forwards.
So you are implying that whole industry, researchers and ML practitioners got that wrong and they should use Go now?
I know a lot of people working on ML related problems and none of them use Go for their ML work. Some of them have Go in their stacks, sure, but it's not used for ML directly. And they solve practical business problems.
I've also worked before with two ML researchers respected in the industry and I can assure you, they are not working on toy problems and they do not know Go. And this is coming from Go user and enthusiast. Programming languages are just tools, not a religion.
Oh wow, CUDA compute support is Premier League. Way to go!
> The primary goal is to make calling the CUDA API as comfortable as calling Go functions or methods. Additional convenience functions and methods are also created in this package in the pursuit of that goal.
Great angle. Very idiomatic to Go, and ballpark what most developers wish. Polishing the convenience aspect (until some 'promise of stability' by 1.0) is truly what makes some projects popular imho. UX (well DX, for Developer eXperience) is a clear determining factor especially in the age of open-source and generalized `git` remote pushes to 'the cloud'.
Might make things a bit more clear on the reasons why certain things are done in a certain way. It's by no means THE only way, but I hope I have listed my reasons clearly.
Python is great anyway, a language's intrinsic quality has nothing to do with whatever else the programming scene does. In my view it takes many languages to make a better software world, and while we don't want 250 (...), a couple dozen on a normal distribution seems quite adequate.
> While a very expressive and extendable language like Python seems well suited to researching the desired abstractions when designing neural nets and ML paradigms, it seems to me that a production phase would benefit from translating it to a 'sleek' language like Go — which forces you to think closer to the machine, and opens boulevards to safe performance.
>
> I wonder what people fluent in Go and other languages, experienced with ML, think of this. (it's a minor concern overall, but the future of Go in ML is of interest to me)
So, to answer your question, yes you are ABSOLUTELY correct in that intuition. I speak several languages - in the data world: R, Python, Julia; In the logic world: Haskell and Prolog (Datalog); In the software engineering world: C, Go, Rust. Go sits right in the middle of all this. From my point of view, it's the right balance and is Thanos' preferred language too.
On that note, Gorgonia has seemingly taken off recently thanks to the heroic efforts of the community; this year there were talks (not by me) on deep learning in Go using Gorgonia, and that kinda made me quite happy.
I'm definitely going to review your work. You/it seem(s) pleasingly enthusiastic and just brilliant!
The names you mention, from Haskell to Rust by way of Julia... if Go indeed "is the right balance"... well, it really makes sense based on what Pike, Ken, Russ and others said/say, and what I've picked up in my 20+ years of general computer nerding.
(currently listening to your YouTube talk and loving it)
One thing I fully believe led to Go's adoption is its feature-rich standard library. SSL connections, JSON parsing, HTTP and the like are all things that would be a major impediment if it was up to the developer to find some generally well accepted 3rd party module on Github and hope it stays maintained and stable.
Go's stdlib has spoiled me. It was the first language that I used in any serious capacity, and I just assumed that all languages had stdlibs that were more-or-less equivalent in terms of breadth and design. Sadly that is not the case. I find it hard to get excited about any new languages these days if it doesn't have a good stdlib. I think Go has really shifted people's expectations in that regard.
>While a very expressive and extendable language like Python seems well suited to researching the desired abstractions when designing neural nets and ML paradigms, it seems to me that a production phase would benefit from translating it to a 'sleek' language like Go
Without generics (which Go will sorta get) and perhaps operator overloading (which it probably won't), slim chances...
But even more so, Python is not the performance bottleneck, as the libs are all optimized C, Fortran, etc. Not to mention Go has historically been even slower than Python's (C libs) in things like strings, regex, JSON etc.
Agree with you on all of your points. I picked up Go in my senior year of college because it seemed interesting. None of my peers had heard of it. This led to my first job out of college where the company's backend was written in Go. Haven't looked back.
I am also interested in using a language like Go for ML. The reason is not really about performance though. I just find statically typed languages are much easier to debug when the size of projects are getting bigger.
10 years old and still very immature and a pain to work with... it's all so backwards, and I'm not sure how many unnecessary lines of error-handling code I had to write. It's horrible to read. My takeaway: pick either Java for business problems or Rust if you wanna do native stuff. Python is also much better compared to Go. I feel like I became less skilled at coding over the last year and a half... Switching jobs soon, moving to a Go-free world. :)
I was recently introduced to golang. I really like the simplicity, expressiveness and standard library. I'm also a fan of how developers need not deal with every nitty-gritty detail of the language, OS, etc.
Go is such a productive language. Well deserved cred to its authors and community. Please, please hold firm against the inherent pressure from language theoreticians to add more features that increases complexity.
I think the tension is more between people writing executables and people writing libraries.
If you want your language to be suitable for any purpose by using an intricate web of libraries depending on each other, you need those language features and deep theory. Think haskell, lisp, rust, etc.
If you kind of know what people will use your language for, and you build in the most important functionality, you aren't nearly so dependent on libraries. You just make the language accessible and as productive as possible. Think Go, erlang, PHP, SQL, javascript, VB.
I think you're on point, writing system stuff in Golang can be a bit of a pain due to the lack of expressiveness, but writing anything else is a freaking joy.
Reading Golang on the other hand is always pure joy.
A standard library for a language with a purpose (like Go) should include a lot of stuff related to that purpose. That avoids the need for an intricate web of dependencies and specialized third-party code, but ties the language to its purpose a bit more.
A standard library for a use-for-anything-and-everything language (like rust) might be smaller because it relies more on third-party libraries for the specific purposes you have in mind.
It's not (just) about language theoreticians pushing their favorite features.
It's mostly about regular programmers who have seen other languages and don't want to write the 30th new slice copy function, or the 10th get-slice-of-keys-from-map function. They might also not want to keep reading `if err != nil {return err}` over and over between pieces of logic in a function.
I'm a regular programmer who has seen other languages, and I don't mind writing the 30th new slice copy function, or the 10th get-slice-of-keys-from-map function. And I like `if err != nil {return err}` for its sheer simplicity.
There's value in doing things in the most boring obvious verbose way. Quick onboarding is one, from experience. Homogeneous codebase is another. Having less ways to do the same thing. And so on.
Go isn't the type of language for ego massaging. It's meant to be a productive language for large projects while adding resiliency to teams against dev churn.
By productive you mean having developers repeatedly manually create loops that are the poor and verbose equivalents of map(), filter(), and reduce()? Out of go, scala, c++, java, python, typescript; go is the least productive language I've used in the last decade.
I tried my hand at writing generic map/filter/reduce once, and it turned out to be more nuanced than I thought. Do you want to modify the array in-place, or return new memory? If your reduce operation is associative, do you want to do it parallel, and if so, with what granularity? If you're setting up a map->filter->reduce pipeline on a single array, the compiler needs to use some kind of stream fusion to avoid unnecessary allocations; how can the programmer be sure that it was optimized correctly? And so on. If you want to write code that's "closer to the metal," these things become increasingly important, and it's probably impossible to create a generic API that satisfies everyone. That said, I wouldn't mind having a stdlib package with map/filter/reduce implementations that are "good enough" for non-performance-critical code.
Indeed, and they are even worse than that, because Streams in Java can even be parallel streams and be processed by a thread pool. So it’s not enough to know that it’s a Stream, you have to know what kind of Stream, and if it’s a parallel Stream, what threadpool is it using? How big is it? What else uses it? What’s the CPU and memory overhead of the pool? What happens when a worker thread throws an exception? Etc. These are all hidden by the abstraction but are usually things we always care about as consumers of the abstraction.
I've seen more bugs due to the cognitive overhead of reduce than writing a for loop. And then you don't have to wonder "Hmm is this a lazy stream? Concurrent?" You just look at the code, and know.
(And I almost did a spittake thinking about C++ being more productive than Go.)
> I've seen more bugs due to the cognitive overhead of reduce than writing a for loop. And then you don't have to wonder "Hmm is this a lazy stream? Concurrent?" You just look at the code, and know.
Out of curiosity, in which language(s) were those written?
I’ve seen confusion over reduce happen in Java, Ruby, Groovy, JavaScript, Scala, and Python.
Reduce is quite elegant for certain kinds of problems. But in most practical settings, it's hard to know when to reach for reduce vs something else, and everyone has different opinions on it. Personally, I like to sidestep those kinds of decisions/discussions whenever I can, because they just get in the way of actually delivering stuff.
For loops and their map/filter/reduce functional equivalents to me are just that: equivalent constructs, two styles/paradigms of doing the same thing. Can you elaborate on why one is poorer and the other much more productive in absolute terms?
Not OP. I work with TypeScript and Go, and switching back and forth, I find it much easier to express my thoughts in TypeScript and just write them out without getting bogged down in the tiniest details every single time I want to map, filter or reduce something. Go is verbose in a way that makes me lose my train of thought because of all the extra typing, and it makes it harder for me to get the gist of code because I can't gloss over the Go loops; I have to actually read them to make sure they do what I think they do.
TypeScript/JavaScript:
const itemIDs = items.map((item) => item.ID)
Go:
    itemIDs := make([]uint64, 0, len(items))
    for _, item := range items {
        itemIDs = append(itemIDs, item.ID)
    }
I find it telling that the criticisms of this article are the exact same ones made about articles when Go was released ten years ago. It's like spending a decade yelling at a bee that it's not aerodynamic enough to fly.
This makes me feel even older than the XKCDs that point out it's longer from now back to some not-that-long-ago-seeming X from your youth than it is from X back to some very-ancient-seeming Y from your father's days.
I'll still take that over having no idea what errors could be thrown at which point in my code. Treating errors as values is very verbose, but it's the easiest error handling I've ever tried to wrap my head around.
True enough, one could name any year between 1979 and 1985 as the "invention" year of C++, and the claim would have some merit. http://www.cplusplus.com/info/history/