> One example of Go's lack of expressiveness is that loops are not abstracted away, e.g. you can't use higher level constructs like map and filter on collections.
This is (or was) deliberate, with the rationale being that it makes O(n) blocks of code trivial to identify in review.
Whether or not you buy that argument is a separate thing, of course.
Can you explain why you think it makes O(n) blocks trivial to identify?
How is such identification made easier by manually writing out a `for` loop that applies a function `foo` to each element, rather than by writing `map foo myCollection`?
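(To make the comparison concrete, here is roughly what the two versions look like in Go. The generic `Map` helper is hypothetical; Go's standard library deliberately does not ship one.)

```go
package main

import "fmt"

// Map is a hypothetical generic helper; Go's standard library
// deliberately does not provide one.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func foo(x int) int { return x * 2 }

func main() {
	xs := []int{1, 2, 3}

	// HOF style: the O(n) loop lives inside Map, off the page.
	ys := Map(xs, foo)

	// Imperative style: the O(n) loop is right there in review.
	zs := make([]int, 0, len(xs))
	for _, x := range xs {
		zs = append(zs, foo(x))
	}

	fmt.Println(ys, zs)
}
```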
In every mainstream language I can think of `map foo myCollection` creates an intermediary map.
Memory allocation is so expensive that making that copy is often more expensive than calling `foo` on each element.
Sometimes making a copy is exactly what you need and there's no way around that cost (but hold that thought).
But I've also seen `sum map foo myCollection` so many times (especially in JavaScript).
Here you have a short, neat, but extremely wasteful way of doing things. I see it so frequently that I assume many people are either unaware of this cost or disregard performance concerns entirely.
If you were to write this imperatively, it would be obvious that you're making a copy, and maybe you would stop and rethink your approach.
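In Go terms, the contrast being drawn looks something like this (a sketch reusing the hypothetical `Map` and `foo` from above):

```go
// sumMapped mirrors `sum map foo myCollection`: it materializes an
// entire intermediate slice only to discard it after one pass.
// (Map and foo are the hypothetical helpers sketched earlier.)
func sumMapped(xs []int) int {
	mapped := Map(xs, foo) // O(n) allocation, used exactly once
	total := 0
	for _, v := range mapped {
		total += v
	}
	return total
}

// sumFused is the imperative equivalent: one pass, no intermediate
// slice at all.
func sumFused(xs []int) int {
	total := 0
	for _, x := range xs {
		total += foo(x)
	}
	return total
}
```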
But there's more.
If you're paying attention to performance, an easy way to speed up `map foo myCollection` in e.g. Go is to pre-allocate the resulting slice.
For large collections, this avoids re-allocating the underlying memory over and over as the result grows; pre-allocating is also the best a generic `map` implementation could do.
In imperative code those costs are more visible. When you have to type out the code that does the memory allocation yourself, you suddenly realize that the equivalent of `map` is extremely wasteful.
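A sketch of the pre-allocation point (function names here are illustrative):

```go
// mapGrow appends to a nil slice, so the backing array is
// re-allocated and copied repeatedly as the result grows.
func mapGrow(xs []int, f func(int) int) []int {
	var out []int
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

// mapPrealloc allocates once up front. This is also the best a
// generic map could do internally, since len(xs) is known; the one
// allocation itself can't be avoided if you want a new slice.
func mapPrealloc(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}
```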
I don't want to discuss this forever but I have a couple of comments:
Your points about efficiency are an entirely separate topic from the original claim that manually writing out an imperative solution makes it easier to see the algorithmic complexity. That claim surprised me because, in my experience, if I understand what some HOF is doing, reading the code is even easier: there is less of it to wade through (and mental exhaustion doesn't make one style easier to read than the other).
> In every mainstream language I can think of `map foo myCollection` creates an intermediary map
You need to build up the final result, sure. Not an intermediary map, but whatever structure (or functor) you're mapping over. That is the whole point of immutable data structures. Also, when you're using persistent data structures, which all modern FP languages do, the cost of constructing the result can be far less than you'd expect, especially if the result is lazy and you only need some of the results. There is a cost to immutability, and if it's unbearable in some situation you can fall back to in-place mutation, but the semantics of the two approaches are definitely not the same.
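(Go has no persistent data structures, but the laziness half of this point can at least be sketched with Go 1.23 iterators: a lazy map only pays for the results the consumer actually pulls.)

```go
package main

import (
	"fmt"
	"iter"
)

// LazyMap applies f on demand: if the consumer stops after k
// elements, only k calls to f happen and no result slice is ever
// allocated.
func LazyMap[T, U any](xs []T, f func(T) U) iter.Seq[U] {
	return func(yield func(U) bool) {
		for _, x := range xs {
			if !yield(f(x)) {
				return // consumer broke out early; the rest is never computed
			}
		}
	}
}

func main() {
	for v := range LazyMap([]int{1, 2, 3, 4}, func(x int) int { return x * x }) {
		fmt.Println(v)
		if v >= 4 {
			break // 3*3 and 4*4 are never computed
		}
	}
}
```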
> But I've also seen `sum map foo myCollection` so many times
Yeah... that should be a fold (reduce, whatever). :)
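For comparison, a hypothetical generic fold in Go; it threads an accumulator through the collection in a single pass, so nothing intermediate is built:

```go
// Fold reduces xs to a single value; `sum map foo xs` becomes
// Fold(xs, 0, func(acc, x int) int { return acc + foo(x) }).
func Fold[T, A any](xs []T, init A, step func(A, T) A) A {
	acc := init
	for _, x := range xs {
		acc = step(acc, x)
	}
	return acc
}
```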
... Hacker News cut off the “sum” part, and so I didn’t see it. On my phone it happened to clip at exactly the length where what remained was still valid syntax, wow. Anyway, sorry, my bad!
That's quite an interesting, if frightening, take on it. If a developer knows what `map` does, I wouldn't have much confidence in their reading comprehension if they somehow mentally skipped over the main higher-order function being called on a three-word line of code.
Would anyone expect such a developer to read and parse several more lines of code more reliably, in order to understand the algorithmic complexity? Seems unwarranted to me...
Humans make mistakes. People review code after exhausting days or late at night. Or sometimes they skim more than they intended to. You want to make problems in the code obvious everywhere you can.
In programming, if you miss an important detail the repercussions can be high. In a million-line codebase, every idiom that's slightly more complex than it needs to be will produce dozens of additional bugs, purely because of the increased surface area for making mistakes with that idiom.
I agree with everything you've said here, but being mentally exhausted doesn't make you more likely to read even more lines of code reliably.
I think the key to your thinking, for me, is in your description of the `map` HOF as "slightly more complex". Having years of experience with both paradigms, I've found that grokking a call to one of these fundamental building blocks (map, filter, reduce, fold, etc.) is nearly instantaneous. We've all read prose where the author was excessively verbose when the same point could have been made succinctly. Reading for loops feels the same way once you get over the learning curve of these very basic functional constructs: you have to repeat that boilerplate endlessly, and it's tedious to keep writing and reading it.
Functions (and methods) are the only way to "hide" computation and allocation in Go. That rubric makes reading Go code to build mechanical sympathy relatively straightforward.
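Roughly what that rubric means in practice (an illustrative snippet, not from the thread; `xs` and `f` are assumed inputs):

```go
// visibleCosts shows the rubric: every allocation and every O(n)
// loop is on the page; the only place cost can hide is inside f.
func visibleCosts(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs)) // allocation: visible
	for _, x := range xs {         // O(len(xs)) loop: visible
		out = append(out, f(x))    // hidden work, if any, lives in f
	}
	return out
}
```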