Hacker News

So, Go is designed to be an engineering language and not an academic toy. Unlike other languages, Go programmers "deliver" and have a pragmatic view of the real development world, not just their own commits. Go programmers need a deeper understanding of computer science, because programmers in other languages are lazy, get everything handed to them for free, and probably don't know how any of it works.

A whole page discussing the virtues of Go by insulting people.



Go doesn't make real world programming easier. It makes you work hard for pointless things. Most of its problems are from a lack of generics.


It may be harder to write some things, but it definitely is easier to read Go code.

Besides, while the language itself may be more verbose than it could be, the standard library is extremely pragmatic and terse. It's like the opposite of the standard C++ library. E.g. to see if a string starts with another string in C++:

    std::mismatch(prefix.begin(), prefix.end(), toCheck.begin()).first == prefix.end()
In Go:

    strings.HasPrefix(toCheck, prefix)
The Go standard library is full of things that do exactly what you want them to, whereas in other languages you have to manually do it yourself.


In your C++ example you're giving `std::mismatch`, which is a generic algorithm. If you're so inclined you could provide a wrapper that has the same interface as the Go example, but you're comparing apples to oranges. I'd argue that `std::mismatch` is much _more_ pragmatic than the Go example, in that I can use it to check any lists of user defined types.

In reality, these two methods do completely different things. `std::mismatch` is a completely generic algorithm that 'returns the first mismatching pair of elements from two ranges', which can be used for much more than `strings.HasPrefix`.


Did you read the article? The fact that there is 1 way to do it in Go, and 100 different ways to do it in C++ (or some other older language) is a feature, not a bug.


Where are the 99 other ways? I see one way to do this for any set of data types. Golang is the one with 100 implementations.


> The Go standard library is full of things that do exactly what you want them to, whereas in other languages you have to manually do it yourself.

Sorting a slice is a pretty obvious counterexample.


Or checking for the presence of an item in a slice. Because Go doesn't supply sets, or allow you to write them yourself, this is something i find myself needing to do a lot, and every time, i'm writing that idiotic function from scratch.


Dang, just like PHP (no built-in set)


And yet there is a generic in_array() function.


...which is O(n). Unless the set is trivially small, better to coerce your data to key types and use array_key_exists().


I never said it was good, but it exists. Most of my PHP code utilizes kv arrays everywhere possible.


You can use map[X]bool as a set. Not ideal I agree, but it works ok.


> The Go standard library is full of things that do exactly what you want them to, whereas in other languages you have to manually do it yourself.

No.

Python: toCheck.startswith(prefix)

JavaScript: toCheck.startsWith(prefix)

Haskell: prefix `isPrefixOf` toCheck

What other languages that Go competes with lack a "string starts with" function in their standard library?

Since I feel I've proven my point about all the other languages Go competes with having this function, what other helpful functions do you think Go has that its competitors do not?


  to see if a string starts with another string in C++
As others have pointed out, while a person could use

  std::mismatch
C++'s std::basic_string type has a compare method. So, the Go example you presented:

  strings.HasPrefix(toCheck, prefix)
Could be expressed in Standard C++ as:

  toCheck.compare(0, prefix.length(), prefix) == 0


Oh wow, string prefix checking is the example you want to run with? In C#:

  toCheck.StartsWith(prefix)
Meanwhile in Golandia, this is still an issue:

https://github.com/golang/go/issues/16721#issuecomment-24015...

...and I haven't felt this kind of pinch when using C#, ever. But what do I know? I'm just a .NET wage-slave pleb who's too mentally handicapped to see Go's glory.


It was just a simple example. Of course many languages have that. The point is Go nearly always has what you want.

Don't pretend that there aren't things in C# that are better in Go.

And I agree, the min/max thing is stupid. I never said Go was perfect.


Not being a Gopher myself, and seeing what's missing from Go compared to C#, I don't see why I'd ever bother using it (unless it's a work requirement).

I like generics and the things that come with it, like LINQ and manipulating abstract collections in a type-safe manner, thank you very much :)


Yep. Add those and a proper type system and you have yourself a decent language. But when the creator of the language doesn't see the value in abstractions [0], then it's probably never going to happen.

[0] https://github.com/robpike/filter


His formulation of reduce() is strikingly clumsy, both in signature and implementation. I daresay I wouldn't have much use for such a function either!

Most languages which provide a reduce() permit programmers to provide an initial "carry-in" value. This is a neater and more useful way to handle the cases of a zero- or one-element list. Moreover, it lets you do more interesting things with the reduction. Consider the following, using ES6-style JavaScript to collect a set of the unique values of a list via reduce():

    function unique(list) {
    	return list.reduce(function (sofar, item) {
    		if (!sofar.includes(item)) { sofar.push(item); }
    		return sofar;
    	}, []);
    }


To be fair, even with a well-designed interface, it is difficult to see the advantage of your example over using a simple for loop:

     func unique(list []int) (r []int) {
          for _, v := range list {
               if !includes(r, v) { // includes: a hand-written linear search over r
                    r = append(r, v)
               }
          }
          return
     }


It was just the first example that came to mind to illustrate the general pattern. We could also make this particular example shorter by using your naming conventions and writing it in a more functional manner instead of mutating the list:

    function unique(list) {
        return list.reduce((r, i) => r.includes(i) ? r : r.concat(i), []);
    }
Clearer? I dunno; probably depends on the reader's background and preferences.


I still don't see what you are gaining here. Outside of code golf, the goal should not be to write code that is as short as possible, regardless of the language you are using. But this just seems to validate Pike's assertion that a for loop is more suitable to the problem.

Perhaps an example of where map/reduce is a significant improvement to the expressiveness would be appropriate for the discussion?


I wrote an aimbot for a FPS game.

The basic idea:

Take all the players, and all the buildings.

Filter out those that aren't enemies.

Filter out those that are currently invincible.

Transform that into a list of actual points in worldspace- the hitboxes for players, the AABB centers for buildings. (Not all player models have the same hitbox count.) Unless we are holding a weapon that does splash damage- then go for the feet on players (their origin). And if we hold a projectile weapon, do prediction based on the player's velocity.

Now transform each point into a 2-tuple (position, score), based on some heuristics implemented in another function.

Do a raycast to each point. Filter out those that can't be hit.

Take the point with the highest score, if there is one, and set our viewangles to aim at it. Otherwise leave them alone.

The actual implementation of this looked something like this:

  let target = get_players().chain(get_buildings())
               .filter(|e| are_enemies(me, e))
               .filter(is_vulnerable)
               .flat_map(entity_to_scored_aimpoints)
               .filter(|(score, point)| trace(me_eyes_predicted, point).fraction > 0.999)
               .max_by(|(score, point)| score);
  if let Some(target) = target {
     aimray = target - me_eyes_predicted;
     viewangles = vector_to_angles(aimray);
  }
(Note that max_by is just a special case of reduce/fold; in my experience you rarely want to use reduce directly, since there's usually a more ergonomic wrapper. Sometimes you do, though.)

To me, that's pretty readable (stuff specific to the game aside, like the trace.fraction ugliness- fraction is "how far" the trace got before hitting something, 1.0 meaning there's nothing in the way. the comparison is to handle some floating-point inaccuracy there), and handles some really annoying cases properly.


I agree wholeheartedly with the notion that you should rarely use reduce directly. It is much less useful than map or filter.

Suppose that you have a bunch of things implemented using map or filter. When someone writes parallelized versions of map and filter, all of the existing code gets the benefits.

Now suppose you have a bunch of basic functions implemented using reduce (sum, product, min, max, reverse, ...). Can these be parallelized? Yes - by throwing away the 'reduce' implementation, and starting from scratch.

The problem with reduce, compared to its more useful cousins map and filter, is that it is too powerful. Map and filter are more limited than reduce, but if you can express your computation in terms of maps and filters, you get something valuable in return. If you can express it in terms of reduce, you save a few keystrokes, and that's about it.

For anyone interested in this kind of stuff, I recommend Guy Steele's talk "Organizing Functional Code for Parallel Execution; or, foldl and foldr Considered Slightly Harmful": https://vimeo.com/6624203


Reduce can be parallelized when the reducing function is associative. You can split the input sequence into chunks that are reduced in parallel, and merge the results with another sequential reduce.

https://lparallel.org/preduce/


I like that the function in your link is called preduce and not just reduce. Reduce has a standard definition, which doesn't require associativity. To eliminate confusion, a function that does require associativity deserves a different name, just like here.

And using these names, I would say that preduce seems much more useful to me than reduce.


Picture map, filter, and fold as for loops with annotations that restrict what they can do. This helps both the source code reader (you and me) and the compiler more easily understand what's going on.

For the compiler this makes optimization easier. For the reader it makes reading easier after learning what these recursion primitives do.

It is also less code, meaning less room for bugs.


Well, ES6 has Set: const set = new Set(list);


Holy shit. Those are the longest implementations of `map` and `reduce` I've ever seen.


Well, no need to be so aggressive, I guess. Many of the points make some sense. But ultimately I get the feeling that although managers might be happy with Go, I definitely wouldn't want to be programming in it day in, day out.



