In my spare time I have been building various versions of the nbody benchmark to try out different fundamental data representations: arrays, objects, immutable or mutable, and so on. Every alternative version is, of course, slower than the original mutating-object version. In principle, however, many of these forms should be optimisable by a 'sufficiently smart JIT' to be at least as fast. For example, given:
let x = [1,5,9];
let q = x.map(a=>a+1)
If the engine can prove that:
- x held the only reference to the array,
- x.map is the last use of x,
- the array in x was not modified between creation and that last use, and
- a=>a+1 returns the same type as it is passed,
then it could write the result of the map into the same memory as the original array and assign that memory to q.
Of course, in this ultra-simple example an optimizer could just figure out that q is [2,6,10] and is never used anyway, so throw the whole thing away. But in real-world code you frequently have arrays like this that are only ever used as a source for a map, especially in chains like q.map(something).map(somethingelse).map(anotherthing).
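A rough sketch of what such an optimization amounts to, written out by hand. `mapChainInPlace` is a made-up illustrative helper, not a real engine API: it fuses a chain of maps into one pass and reuses the original backing store, which is legal exactly when the conditions listed above hold.

```javascript
// Hypothetical helper: fuse a chain of maps and write the results back
// into the source array, avoiding the intermediate allocations that
// x.map(f).map(g) would otherwise make.
function mapChainInPlace(arr, ...fns) {
  for (let i = 0; i < arr.length; i++) {
    let v = arr[i];
    for (const f of fns) v = f(v); // fused chain: one pass, no temporaries
    arr[i] = v;                    // reuse the original memory
  }
  return arr;
}

// Behaves like x.map(a => a + 1).map(a => a * 2), with zero allocations:
const x = [1, 5, 9];
const q = mapChainInPlace(x, a => a + 1, a => a * 2);
// q is the same array object as x; its contents are now [4, 12, 20]
```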
Apparently, List is ~3x faster than native array.map, with the difference increasing with the number of elements.
I see that random access and iterator are two operations where native array is fastest.
Appending to a "native array" is amortized constant time. Second, almost all operations on a "native array" will be faster because there is no indexing overhead, there is cache locality, etc. Requiring immutability will almost always reduce performance (yes, I'm aware there are a couple of operations, like concatenation, that can potentially run in O(lg n) rather than O(n) when using a tree representation for arrays).
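To make the amortized-constant-time claim concrete, here is a toy sketch of the standard capacity-doubling scheme (the class and its fields are illustrative, not how any particular engine implements arrays). Doubling on overflow means n appends cause only O(log n) reallocations and at most ~2n element copies in total, so the average cost per append is constant.

```javascript
// Toy growable array demonstrating amortized O(1) append via doubling.
class GrowableArray {
  constructor() {
    this.buf = new Array(4); // arbitrary initial capacity
    this.length = 0;
    this.copies = 0;         // total element copies caused by reallocation
  }
  push(v) {
    if (this.length === this.buf.length) {
      const bigger = new Array(this.buf.length * 2); // double the capacity
      for (let i = 0; i < this.length; i++) bigger[i] = this.buf[i];
      this.copies += this.length;
      this.buf = bigger;
    }
    this.buf[this.length++] = v;
  }
}

const a = new GrowableArray();
for (let i = 0; i < 1024; i++) a.push(i);
// Copies total 4 + 8 + ... + 512 = 1020, i.e. under 2 per element pushed.
console.log(a.copies / a.length);
```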
What about sequential access? Surely array gets a benefit from data locality, where CPU caches pull in surrounding values automatically?
Even though Boost has a customizable deque block size, its performance is still well behind Clang's libc++.
It does, in semi-contiguous 4 KB blocks. And at least one MSVC version was that smart when I used it.
Here's the original paper: https://infoscience.epfl.ch/record/213452/files/rrbvector.pd...
A rust implementation: https://docs.rs/im/13.0.0/im/
C++ implementation: https://github.com/arximboldi/immer
The rust implementation does a cool trick where the data is only copied when it is shared. If you are the sole owner of the data structure, it will simply mutate it in place, so you don't lose any performance. But once you share it, it becomes an immutable functional data structure. See https://docs.rs/im/13.0.0/im/#in-place-mutation
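The idea behind that trick can be sketched in JS (the im crate does this in Rust via reference counts; `CowVec` and its fields here are invented for illustration): a handle tracks whether its buffer is shared, and a write copies the buffer only when it is.

```javascript
// Copy-on-write sketch: mutate in place when sole owner, copy when shared.
class CowVec {
  constructor(buf = [], shared = { count: 1 }) {
    this.buf = buf;
    this.shared = shared; // crude stand-in for a reference count
  }
  clone() {               // sharing is O(1): just bump the refcount
    this.shared.count++;
    return new CowVec(this.buf, this.shared);
  }
  set(i, v) {
    if (this.shared.count > 1) {   // buffer is visible to another handle:
      this.shared.count--;
      this.buf = this.buf.slice(); // copy before writing
      this.shared = { count: 1 };
    }
    this.buf[i] = v;               // sole owner: mutate in place
  }
  get(i) { return this.buf[i]; }
}

const a = new CowVec([1, 2, 3]);
a.set(0, 99);        // sole owner: no copy
const b = a.clone(); // now shared
b.set(1, 42);        // triggers exactly one copy; `a` is unaffected
// a.get(1) is still 2, b.get(1) is 42
```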
> Even though List is very fast across the board no data-structure can be the fastest at everything.
This feels like a case in which everything is sufficiently well-implemented that the No Free Lunch Theorem starts to play a role.
TL;DR: linked lists' theoretical performance advantages are often negated by years of hardware optimization for dealing with arrays.