I tried my hand at writing generic map/filter/reduce once, and it turned out to be more nuanced than I thought. Do you want to modify the array in place, or return new memory? If your reduce operation is associative, do you want to run it in parallel, and if so, at what granularity? If you're setting up a map->filter->reduce pipeline on a single array, the compiler needs to use some kind of stream fusion to avoid unnecessary allocations; how can the programmer be sure that it was optimized correctly? And so on. If you want to write code that's "closer to the metal," these things become increasingly important, and it's probably impossible to create a generic API that satisfies everyone. That said, I wouldn't mind having a stdlib package with map/filter/reduce implementations that are "good enough" for non-performance-critical code.
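
To make the first trade-off concrete, here's a minimal Java sketch of the two allocation strategies (the names and signatures are mine, not from any standard library): a map that returns new memory versus one that mutates in place.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Function;
    import java.util.function.UnaryOperator;

    final class MapVariants {
        // Returns new memory: the input is untouched and the element type may change.
        static <T, R> List<R> mapCopy(List<T> in, Function<? super T, ? extends R> f) {
            List<R> out = new ArrayList<>(in.size());
            for (T t : in) {
                out.add(f.apply(t));
            }
            return out;
        }

        // Mutates in place: no extra allocation, but the element type cannot change.
        static <T> void mapInPlace(List<T> list, UnaryOperator<T> f) {
            for (int i = 0; i < list.size(); i++) {
                list.set(i, f.apply(list.get(i)));
            }
        }
    }

Neither signature subsumes the other, which is part of why a single generic API is hard to settle on.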

> I tried my hand at writing generic map/filter/reduce once, and it turned out to be more nuanced than I thought.

Rust and Java both have good APIs for this kind of work. Take a look at their approaches.
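
For example, Java's Stream API composes map/filter/reduce into a single lazy pass over the data, with nothing materialized between the stages (a small sketch, not tied to any particular use case):

    import java.util.List;

    class PipelineExample {
        public static void main(String[] args) {
            List<Integer> xs = List.of(1, 2, 3, 4, 5);

            // map -> filter -> reduce as one lazy pipeline; no intermediate
            // collection is built between the stages.
            int sumOfEvenSquares = xs.stream()
                    .map(x -> x * x)
                    .filter(x -> x % 2 == 0)
                    .reduce(0, Integer::sum);

            System.out.println(sumOfEvenSquares); // 4 + 16 = 20
        }
    }

Rust's iterator adapters work much the same way: the chain is lazy and typically compiles down to a single loop.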


A lot of the clever helpers like that in Java unfortunately trigger extra allocations (boxing, intermediate objects), making them slower, as the post above implied.
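
For instance, a Stream<Integer> boxes every element, while the primitive IntStream specialization avoids those allocations. An illustrative comparison, not a benchmark:

    import java.util.stream.IntStream;
    import java.util.stream.Stream;

    class BoxingExample {
        public static void main(String[] args) {
            // Boxed: values flow through the pipeline as Integer objects.
            int boxedSum = Stream.iterate(1, x -> x + 1)
                    .limit(1_000)
                    .map(x -> x * 2)
                    .reduce(0, Integer::sum);

            // Primitive specialization: the pipeline works on ints directly.
            int primitiveSum = IntStream.rangeClosed(1, 1_000)
                    .map(x -> x * 2)
                    .sum();

            System.out.println(boxedSum == primitiveSum); // true
        }
    }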

Indeed, and it's even worse than that, because a Stream in Java can be a parallel stream, processed by a thread pool. So it's not enough to know that it's a Stream; you have to know what kind of Stream, and if it's a parallel Stream: what thread pool is it using? How big is it? What else uses it? What's the CPU and memory overhead of the pool? What happens when a worker thread throws an exception? Etc. These are all hidden by the abstraction, yet they are exactly the things we care about as consumers of it.
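
To illustrate how much is hidden: by default a parallel stream runs on the process-wide ForkJoinPool.commonPool(), shared with everything else in the JVM. The usual workaround for pinning it to your own pool relies on an implementation detail rather than a documented guarantee, so treat this sketch accordingly:

    import java.util.List;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ForkJoinPool;

    class ParallelStreamPool {
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            List<Integer> xs = List.of(1, 2, 3, 4, 5, 6, 7, 8);

            // Default: runs on ForkJoinPool.commonPool(). Another library's
            // blocking work can starve this pipeline, and vice versa.
            int sum = xs.parallelStream().mapToInt(Integer::intValue).sum();

            // Widely used workaround (implementation detail, not a spec guarantee):
            // invoking the terminal operation from inside a custom pool makes the
            // stream's tasks run on that pool's workers.
            ForkJoinPool pool = new ForkJoinPool(2); // bounded parallelism
            try {
                int sumInPool = pool.submit(
                        () -> xs.parallelStream().mapToInt(Integer::intValue).sum()
                ).get();
                System.out.println(sum == sumInPool); // true
            } finally {
                pool.shutdown();
            }
        }
    }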

The introduction of value types in the JVM (Project Valhalla) will hopefully alleviate this.


