
Most functional languages have immutable persistent data structures that don't require a linear copy and are fairly efficient.

For example, in Clojure, vectors are implemented as 32-ary tries, so updating any particular index (or adding/removing at the end) only costs around nodeSize * depth = 32 * log_32(n) work. The trade-off is that indexing is now O(log_32 n), but that's usually a fair price to pay.
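Roughly, the path-copying idea behind those tries looks like this (a TypeScript sketch of my own, not Clojure's actual implementation): updating one element copies only the ~32-slot nodes along the root-to-leaf path, and everything else is shared between the old and new versions.

    // Illustrative sketch of path copying in a 32-ary trie (names are mine).
    const BITS = 5;              // 2^5 = 32-way branching
    const WIDTH = 1 << BITS;     // 32
    const MASK = WIDTH - 1;      // 0b11111

    type Node<T> = { children: (Node<T> | T)[] };

    // Return a new root with index `i` set to `value`; `depth` is the height
    // of the (complete) trie. Only depth+1 nodes are copied.
    function assoc<T>(node: Node<T>, depth: number, i: number, value: T): Node<T> {
      const copy: Node<T> = { children: node.children.slice() }; // one 32-slot copy
      if (depth === 0) {
        copy.children[i & MASK] = value;                         // leaf level
      } else {
        const slot = (i >> (depth * BITS)) & MASK;
        copy.children[slot] = assoc(copy.children[slot] as Node<T>, depth - 1, i, value);
      }
      return copy;                                               // ~32 * depth words copied in total
    }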

In JavaScript, Immutable.js implements these same ideas (at least according to talks like this one, where they discuss indexed tries and hash array mapped tries: https://youtu.be/I7IdS-PbEgI?t=456)
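Typical usage looks like this (the structural sharing is an internal detail, but the API is real):

    import { List } from 'immutable';

    const a = List([1, 2, 3, 4]);
    const b = a.set(2, 99);   // returns a new List; untouched trie nodes are shared with `a`

    a.get(2); // 3  -- the original version is still intact
    b.get(2); // 99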

If you want native support, JavaScript engines are actually free to swap out their implementation of arrays! I know they already do, depending on whether they think your array is sparse ("holey") or not: https://github.com/v8/v8/blob/8.1.81/src/builtins/builtins-a.... I also vaguely recall someone saying that unshift (push to front) is automatically optimized by switching to a deque/circular buffer, but I might be remembering wrong. Anyway, they could make these kinds of optimizations in the future, but I don't think they do today.
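For context on the sparse/"holey" distinction, here's a toy example (using V8's own "packed"/"holey" terminology; the internals are of course engine-specific):

    const packed = [1, 2, 3];          // stored as a dense ("packed") array internally
    const holey: number[] = [1, 2, 3];
    holey[10] = 4;                     // indices 3..9 are now holes, so the engine
                                       // switches this array to a "holey" representation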




A great resource for understanding how persistent data structures are implemented is Okasaki’s Purely Functional Data Structures: https://www.amazon.com/Purely-Functional-Data-Structures-Oka...

Also available as a free PDF: https://www.cs.cmu.edu/~rwh/theses/okasaki.pdf


> Most functional languages have immutable persistent data structures that don't require a linear copy and are fairly efficient.

This is true, but unfortunately those libraries/languages don't make use of that tree structure for "partial memoization" of derived data. They very well could, but they don't -- even when you use Immutable.js in a Redux app, and you use Reselect, your selectors will take linear time when you mutate one element in an object/array instead of taking constant- or log-time.
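Concretely, with the real Reselect API (the point being exactly what's described above: the memoization is all-or-nothing on the input reference):

    import { createSelector } from 'reselect';

    interface Todo { text: string; done: boolean }
    interface State { todos: Todo[] }

    const selectTodos = (s: State) => s.todos;

    // Memoized only on the identity of `todos`: an immutable update to a single
    // element produces a new array reference, so the filter re-runs over all n
    // elements -- linear time, even though only one entry changed.
    const selectDone = createSelector([selectTodos], todos =>
      todos.filter(t => t.done)
    );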


Yea that's a good point. This matters when your derived data takes linear time to recompute. There are still techniques to make this work (but not automatically afaik). For example it's easy to make them efficient again if your computation is just an associative fold or reversible fold: https://sci-hub.tw/https://link.springer.com/chapter/10.1007...
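One way to picture the associative-fold trick (a toy TypeScript sketch of my own, not the paper's construction): cache the fold at every node of a balanced tree, and an update only has to recombine the O(log n) nodes on the updated path.

    // Cache an associative fold (here, +) at each internal node.
    type Tree =
      | { kind: 'leaf'; value: number }
      | { kind: 'node'; sum: number; size: number; left: Tree; right: Tree };

    const sumOf  = (t: Tree) => t.kind === 'leaf' ? t.value : t.sum;
    const sizeOf = (t: Tree) => t.kind === 'leaf' ? 1 : t.size;

    // Persistently set leaf `i` to `value`; only O(log n) nodes are rebuilt,
    // and each rebuilt node recomputes its cached sum from its children.
    function update(t: Tree, i: number, value: number): Tree {
      if (t.kind === 'leaf') return { kind: 'leaf', value };
      const leftSize = sizeOf(t.left);
      const left  = i < leftSize ? update(t.left, i, value) : t.left;
      const right = i < leftSize ? t.right : update(t.right, i - leftSize, value);
      return { kind: 'node', sum: sumOf(left) + sumOf(right), size: t.size, left, right };
    }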


That article talks about map and filter and fold, but I think there are also (trickier) ways to make joins and sorts work as well.

I think it could be a great technique for a lot of things -- compilers and database views, in addition to front-end state.


immutableJS gives you a false sense of immutability - never forget that you have intrinsic reflection in JavaScript, and the only thing that forces immutability is Object.freeze(), which makes things unusably slow.
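For reference, this is what Object.freeze actually gives you (and why you'd have to apply it recursively, and pay for it, to really lock an object graph down):

    const state = { count: 0, nested: { flag: true } };
    Object.freeze(state);

    // state.count = 1;        // TypeError in strict mode, silently ignored otherwise
    state.nested.flag = false; // still allowed: freeze is shallow, so nested objects
                               // stay mutable unless you freeze every level too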


Or you can enforce it with readonly modifiers in TypeScript. It's not enforced by the engine, of course, so there's always the danger that some third-party component could get its fingers in your objects, but the devs on your own project will be prevented from modifying those objects directly.

TS and Immutable.js work together surprisingly well. One enforces immutability via static analysis, and the other creates (relatively) efficient data structures predicated on the assumption of immutability.
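A small sketch of the combination (hypothetical Todo type, real Immutable.js API):

    import { List } from 'immutable';

    interface Todo { readonly text: string; readonly done: boolean }

    const todos = List<Todo>([{ text: 'ship it', done: false }]);

    // Direct mutation is rejected by the compiler:
    // todos.get(0)!.done = true;   // error: 'done' is a read-only property

    // The update goes through Immutable.js instead, producing a new persistent List:
    const next = todos.set(0, { text: 'ship it', done: true });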


TS works well. But ImmutableJS is slow, really slow. https://jsperf.com/immutable-js-iteration


I wouldn't go by benchmarks run against a 100-length array of numbers. That's so small it's practically a tuple. Where immutable.js improves your perf is if you need to have very large (1M+) datasets, and/or need to keep references to those data over time for things like undo, diffing, saving those diffs, etc. If you used a plain 1M-element array, that'd be over 64MB per array, so at just 10 levels of undo you could already exhaust your heap. Immutable.js would be barely over 64MB for hundreds of levels of undo, since it can rely on the last state being immutable and only has to store the diffs.
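A sketch of what that undo pattern looks like with Immutable.js (the edit/undo helpers are hypothetical):

    import { List } from 'immutable';

    // Hypothetical document state: one large persistent vector.
    let doc = List(Array.from({ length: 1_000_000 }, (_, i) => i));
    const undoStack: List<number>[] = [];

    function edit(index: number, value: number) {
      undoStack.push(doc);          // snapshotting is just keeping a reference: O(1)
      doc = doc.set(index, value);  // the new version shares all but ~log_32(n) trie nodes
    }

    function undo() {
      const prev = undoStack.pop();
      if (prev !== undefined) doc = prev;
    }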

Immutable.js was a bad solution when it was used purely to enforce immutability. Unfortunately that's what its reputation became, but really it should be thought of as a lib that takes advantage of immutable data, not a lib that checks for immutability.



