So, Go is designed to be an engineering language, not an academic toy. Unlike programmers in other languages, Go programmers "deliver" and take a pragmatic view of real-world development, not just their own commits. And Go programmers need a deeper understanding of computer science, because programmers in other languages are lazy, get everything handed to them for free, and probably don't need to know how any of it works.
A whole page discussing the virtues of Go by insulting people.
It may be harder to write some things, but it definitely is easier to read Go code.
Besides, while the language itself may be more verbose than it could be, the standard library is extremely pragmatic and terse. It's like the opposite of the standard C++ library. E.g. to see if a string starts with another string in C++:
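Roughly, using std::mismatch (has_prefix here is just a wrapper name for illustration):

    #include <algorithm>
    #include <string>

    bool has_prefix(const std::string& s, const std::string& prefix) {
        // std::mismatch walks both ranges until they first diverge;
        // the prefix matches if it is exhausted before any divergence.
        return prefix.size() <= s.size() &&
               std::mismatch(prefix.begin(), prefix.end(), s.begin()).first == prefix.end();
    }

versus Go's one-liner, strings.HasPrefix(toCheck, prefix).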
In your C++ example you're giving `std::mismatch`, which is a generic algorithm. If you're so inclined you could provide a wrapper that has the same interface as the Go example, but you're comparing apples to oranges. I'd argue that `std::mismatch` is much _more_ pragmatic than the Go example, in that I can use it to check any lists of user defined types.
In reality, these two methods do completely different things. `std::mismatch` is a completely generic algorithm that 'returns the first mismatching pair of elements from two ranges', which can be used for much more than `strings.HasPrefix`.
Did you read the article? The fact that there is 1 way to do it in Go, and 100 different ways to do it in C++ (or some other older language) is a feature, not a bug.
Or checking for the presence of an item in a slice. Because Go doesn't supply sets, or let you write a generic one yourself, this is something I find myself needing to do a lot, and every time I'm writing that idiotic function from scratch.
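For concreteness, that function is the usual linear scan, rewritten per element type since it can't be written generically (contains is just my name for it):

    func contains(xs []int, x int) bool {
        for _, v := range xs {
            if v == x {
                return true
            }
        }
        return false
    }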
> The Go standard library is full of things that do exactly what you want them to, whereas in other languages you have to manually do it yourself.
No.
Python: toCheck.startswith(prefix)
JavaScript: toCheck.startsWith(prefix)
Haskell: prefix `isPrefixOf` toCheck
What other languages does Go compete with that don't have a "string starts with" in their standard library?
Since I feel I've proven my point about all other languages Go competes with having this function, what other helpful functions do you think Go has that its competitors do not?
...and I haven't felt this kind of pinch when using C#, ever. But what do I know? I'm just a .NET wage-slave pleb who's too mentally handicapped to see Go's glory.
Yep. Add those and a proper type system and you have yourself a decent language. But when the creator of the language doesn't see the value in abstractions [0], then it's probably never going to happen.
His formulation of reduce() is strikingly clumsy, both in signature and implementation. I daresay I wouldn't have much use for such a function either!
Most languages which provide a reduce() permit programmers to provide an initial "carry-in" value. This is a neater and more useful way to handle the cases of a zero- or one-element list. Moreover, it lets you do more interesting things with the reduction. Consider the following, using ES6-style JavaScript to collect a set of the unique values of a list via reduce():
    function unique(list) {
        return list.reduce(function (sofar, item) {
            if (!sofar.includes(item)) { sofar.push(item); }
            return sofar;
        }, []);
    }
It was just the first example that came to mind to illustrate the general pattern. We could also make this particular example shorter by using your naming conventions and writing it in a more functional manner instead of mutating the list:
    function unique(list) {
        return list.reduce((r, i) => r.includes(i) ? r : r.concat(i), []);
    }
Clearer? I dunno; probably depends on the reader's background and preferences.
I still don't see what you are gaining here. Outside of code golf, the goal should not be to write code as short as possible, regardless of the language you are using. But this just seems to validate Pike's assertion that a for loop is more suitable to the problem.
Perhaps an example where map/reduce is a significant improvement in expressiveness would be appropriate for the discussion?
Start with the list of candidate entities: players and buildings. Transform that into a list of actual points in worldspace: the hitboxes for players, the AABB centers for buildings. (Not all player models have the same hitbox count.) Unless we are holding a weapon that does splash damage, in which case go for the feet on players (their origin). And if we hold a projectile weapon, do prediction based on the player's velocity.
Now transform each point into a 2-tuple (position, score), based on some heuristics implemented in another function.
Do a raycast to each point. Filter out those that can't be hit.
Take the point with the highest score, if there is one, and set our viewangles to aim at it. Otherwise leave them alone.
The actual implementation of this looked something like this:
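(A sketch in the ES6 style used upthread; the game-SDK names, visibleEntities, trace, setViewAngles and so on, are stand-ins, and maxBy is written out as the reduce special case mentioned in the note below.)

    // maxBy as a reduce step: keep whichever of (best, item) scores higher
    const maxBy = key => (best, item) =>
        best === null || key(item) > key(best) ? item : best;

    const target = visibleEntities
        .map(targetPoints)                   // each entity -> its worldspace points
        .reduce((a, b) => a.concat(b), [])   // flatten (one entity yields many points)
        .map(p => [p, score(p)])             // 2-tuples of (position, score)
        .filter(([p]) => trace(eye, p).fraction >= 0.99)  // raycast; drop blocked points
        .reduce(maxBy(([, s]) => s), null);  // highest-scoring point, or null

    if (target !== null) {
        setViewAngles(aimAt(target[0]));     // otherwise leave viewangles alone
    }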
(Note that max_by is just a special case of reduce/fold; in my experience, you rarely want to use reduce directly; there's probably a more ergonomic wrapper. Sometimes you do, though.)
To me, that's pretty readable (game-specific stuff aside, like the trace.fraction ugliness: fraction is "how far" the trace got before hitting something, 1.0 meaning there's nothing in the way; the comparison is to handle some floating-point inaccuracy there), and it handles some really annoying cases properly.
I agree wholeheartedly with the notion that you should rarely use reduce directly. It is much less useful than map or filter.
Suppose that you have a bunch of things implemented using map or filter. When someone writes parallelized versions of map and filter, all of the existing code gets the benefits.
Now suppose you have a bunch of basic functions implemented using reduce (sum, product, min, max, reverse, ...). Can these be parallelized? Yes - by throwing away the 'reduce' implementation, and starting from scratch.
The problem with reduce, compared to its more useful cousins map and filter, is that it is too powerful. Map and filter are more limited than reduce, but if you can express your computation in terms of maps and filters, you get something valuable in return. If you can express it in terms of reduce, you save a few keystrokes, and that's about it.
For anyone interested in this kind of stuff, I recommend Guy Steele's talk "Organizing Functional Code for Parallel Execution; or, foldl and foldr Considered Slightly Harmful": https://vimeo.com/6624203
Reduce can be parallelized when the reducing function is associative. You can split the input sequence into chunks that are reduced in parallel, then merge the results with another sequential reduce.
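Here's a sketch of that scheme in the ES6 style used upthread. A real preduce would hand each chunk to a worker or thread; the sequential loop here just shows the shape:

    function preduce(list, f, identity, chunkSize) {
        const partials = [];
        for (let i = 0; i < list.length; i += chunkSize) {
            // Each chunk reduction depends only on its own chunk,
            // so these could all run in parallel.
            partials.push(list.slice(i, i + chunkSize).reduce(f, identity));
        }
        // Merging the partial results is itself a small sequential reduce;
        // correctness relies on f being associative (and identity neutral).
        return partials.reduce(f, identity);
    }

    // e.g. preduce([1, 2, 3, 4, 5], (a, b) => a + b, 0, 2) === 15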
I like that the function in your link is called preduce and not just reduce. Reduce has a standard definition, which doesn't require associativity. To eliminate confusion, a function that does require associativity deserves a different name, just like here.
And using these names, I would say that preduce seems much more useful to me than reduce.
Picture map, filter, and fold as for loops with annotations that restrict what they can do. This helps both the source code reader (you and me) and the compiler more easily understand what's going on.
For the compiler this makes optimization easier. For the reader it makes reading easier after learning what these recursion primitives do.
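Concretely, in the same ES6 style: the loop below may do anything on each iteration, while the filter/map version promises one output per kept input and nothing else.

    const xs = [1, 2, 3, 4];

    // Unrestricted loop: the body could mutate anything, break early, etc.
    const evensSquared = [];
    for (const x of xs) {
        if (x % 2 === 0) evensSquared.push(x * x);
    }

    // Annotated equivalent: the shape of the computation is visible
    // in the primitives themselves.
    const evensSquared2 = xs.filter(x => x % 2 === 0).map(x => x * x);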
Well, no need to be so aggressive, I guess. Many of the points make some sense. But ultimately I get the feeling that although managers might be happy with Go, I definitely wouldn't want to be that kind of programmer day in, day out.