Go is such a productive language. Well-deserved cred to its authors and community. Please, please hold firm against the inherent pressure from language theoreticians to add more features that increase complexity.

I think the tension is more between people writing executables and people writing libraries.

If you want your language to be suitable for any purpose by using an intricate web of libraries depending on each other, you need those language features and deep theory. Think Haskell, Lisp, Rust, etc.

If you kind of know what people will use your language for, and you build in the most important functionality, you aren't nearly so dependent on libraries. You just make the language accessible and as productive as possible. Think Go, Erlang, PHP, SQL, JavaScript, VB.


I honestly think that any language that enables too much cleverness will inevitably produce a mess given enough developers involved.

I tend to think libraries work fine with Go. It's "frameworks" that are difficult.

I think you're on point: writing system stuff in Golang can be a bit of a pain due to the lack of expressiveness, but writing anything else is a freaking joy.

Reading Golang, on the other hand, is always pure joy.


Yet the Go standard library is one of the best around ...

It seems that you are agreeing with me.

A standard library for a language with a purpose (like Go) should include a lot of stuff related to that purpose. That avoids the need for an intricate web of dependencies and specialized third-party code, but ties the language to its purpose a bit more.

A standard library for a use-for-anything-and-everything language (like Rust) might be smaller because it relies more on third-party libraries for the specific purposes you have in mind.


Any general-purpose programming language that wants to be more than a toy needs an "intricate web of libraries".

The alternative is domain-specific languages, like SQL, which people simply do not and will not try to use in arbitrary new settings.


Well said!

It's not (just) about language theoreticians pushing their favorite features.

It's mostly about regular programmers who have seen other languages and don't want to write the 30th new slice copy function, or the 10th get-slice-of-keys-from-map function. They might also not want to keep reading `if err != nil {return err}` over and over between pieces of logic in a function.
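
For concreteness, this is the sort of boilerplate being described; a minimal sketch of pre-generics Go (the helper names here are made up, not a real API):

    package main

    import "fmt"

    // copySlice: the Nth hand-rolled slice copy, one per element type.
    func copySlice(s []int) []int {
        out := make([]int, len(s))
        copy(out, s)
        return out
    }

    // keysOf: the Nth get-slice-of-keys-from-map helper, one per map type.
    func keysOf(m map[string]int) []string {
        keys := make([]string, 0, len(m))
        for k := range m {
            keys = append(keys, k)
        }
        return keys
    }

    func main() {
        fmt.Println(copySlice([]int{1, 2, 3}))
        fmt.Println(keysOf(map[string]int{"a": 1, "b": 2})) // key order is random
    }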


I'm a regular programmer who has seen other languages, and I don't mind writing the 30th new slice copy function, or the 10th get-slice-of-keys-from-map function. And I like `if err != nil {return err}` for its sheer simplicity.

There's value in doing things in the most boring, obvious, verbose way. Quick onboarding is one benefit, from experience. A homogeneous codebase is another. Fewer ways to do the same thing. And so on.

Go isn't the type of language for ego massaging. It's meant to be a productive language for large projects while making teams resilient to dev churn.


By productive you mean having developers repeatedly hand-write loops that are poor, verbose equivalents of map(), filter(), and reduce()? Out of Go, Scala, C++, Java, Python, and TypeScript, Go is the least productive language I've used in the last decade.

I tried my hand at writing generic map/filter/reduce once, and it turned out to be more nuanced than I thought. Do you want to modify the array in place, or return new memory? If your reduce operation is associative, do you want to run it in parallel, and if so, with what granularity? If you're setting up a map->filter->reduce pipeline on a single array, the compiler needs to use some kind of stream fusion to avoid unnecessary allocations; how can the programmer be sure that it was optimized correctly? And so on. If you want to write code that's "closer to the metal," these things become increasingly important, and it's probably impossible to create a generic API that satisfies everyone.

That said, I wouldn't mind having a stdlib package with map/filter/reduce implementations that are "good enough" for non-performance-critical code.
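
To make the in-place vs. new-memory question concrete, here is a sketch of the two signatures a generic map forces you to choose between, using Go 1.18+ generics (Map and MapInPlace are illustrative names, not a stdlib API):

    package main

    import "fmt"

    // Map returns new memory and leaves s untouched; it can change
    // the element type, but it always allocates.
    func Map[T, U any](s []T, f func(T) U) []U {
        out := make([]U, len(s))
        for i, v := range s {
            out[i] = f(v)
        }
        return out
    }

    // MapInPlace reuses s's memory and avoids the allocation,
    // but is restricted to T -> T.
    func MapInPlace[T any](s []T, f func(T) T) {
        for i, v := range s {
            s[i] = f(v)
        }
    }

    func main() {
        lengths := Map([]string{"go", "rust"}, func(s string) int { return len(s) })
        fmt.Println(lengths) // [2 4]

        xs := []int{1, 2, 3}
        MapInPlace(xs, func(x int) int { return x * x })
        fmt.Println(xs) // [1 4 9]
    }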

> I tried my hand at writing generic map/filter/reduce once, and it turned out to be more nuanced than I thought.

Rust and Java both have good APIs for this kind of work. Take a look at their approaches.


A lot of the clever helpers like that in Java unfortunately trigger extra allocations, making them slower, as the post above implied.

Indeed, and it’s even worse than that, because a Stream in Java can be a parallel stream processed by a thread pool. So it’s not enough to know that it’s a Stream; you have to know what kind of Stream, and if it’s a parallel Stream, what threadpool is it using? How big is it? What else uses it? What’s the CPU and memory overhead of the pool? What happens when a worker thread throws an exception? Etc. These are all hidden by the abstraction, but they are usually things we care about as consumers of the abstraction.

The introduction of value types in the JVM should hopefully alleviate this.

I've seen more bugs due to the cognitive overhead of reduce than writing a for loop. And then you don't have to wonder "Hmm is this a lazy stream? Concurrent?" You just look at the code, and know.
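
For instance, the boring Go version of a sum-reduce reads top to bottom with no hidden machinery (a trivial sketch):

    package main

    import "fmt"

    func main() {
        amounts := []int{5, 10, 25}

        // The for-loop equivalent of reduce(+, amounts): not lazy,
        // not concurrent, nothing hidden behind an abstraction.
        total := 0
        for _, a := range amounts {
            total += a
        }
        fmt.Println(total) // 40
    }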

(And I almost did a spit-take thinking about C++ being more productive than Go.)


> I've seen more bugs due to the cognitive overhead of reduce than writing a for loop. And then you don't have to wonder "Hmm is this a lazy stream? Concurrent?" You just look at the code, and know.

Out of curiosity, in which language(s) were those written?


I’ve seen confusion over reduce happen in Java, Ruby, Groovy, JavaScript, Scala, and Python.

Reduce is quite elegant for certain kinds of problems. But in most practical settings, knowing when to reach for reduce vs. something else is hard, and everyone has different opinions on it. Personally, I like to sidestep those kinds of decisions/discussions whenever I can, because they just get in the way of actually delivering stuff.


For loops and their functional map/filter/reduce equivalents are, to me, just that: equivalent constructs, two styles/paradigms of doing the same thing. Can you elaborate on why one is poorer and the other much more productive in absolute terms?

Not OP. I work with TypeScript and Go, and switching back and forth, I find it much easier to express my thoughts in TypeScript and just write them out without getting bogged down in the tiniest details every single time I want to map, filter, or reduce something. Go is verbose in a way that makes me lose my train of thought because of a lot more typing, and it makes it harder for me to get the gist of code, because I have to actually read the Go loops, not gloss over them, to make sure they do what I think they do.

TypeScript/JavaScript:

    const itemIDs = items.map((item) => item.ID)
Go:

    // pre-size the slice so append never has to re-allocate
    itemIDs := make([]uint64, 0, len(items))
    for _, item := range items {
        itemIDs = append(itemIDs, item.ID)
    }

100% Yes.

I find it telling that the criticisms of this article are the exact same ones made on articles about Go ten years ago, when it was released. It's like spending a decade yelling at a bee to tell it that it's not aerodynamic enough to fly.


