
Immutable Data Structures and JavaScript - jlongster
http://jlongster.com/Using-Immutable-Data-Structures-in-JavaScript
======
davedx
The main issue I've had using immutablejs with Redux is debugging. Whereas
previously I could simply mouse-over a data structure when I hit a breakpoint
(or crash), I now have to do a REPL dance to see what's in any of my data
structures.

I also find myself often having to do trial and error stuff to fix my code
(also while in a paused state in the console). I mean it's pretty nice that
you can actually do this, don't get me wrong. But overall, I am slower and
less productive with immutablejs than I am with vanilla JS / JSON objects.

It's a trade off people really should keep in mind before pulling the trigger
on immutable data structures. Sure, you get that performance boost, but do you
actually need it? Are your React views really so slow (or are you running on
slow embedded hardware like I am)? Or are you just drinking the kool aid
because immutable data is hot right now?

You get runtime perf improvements in certain cases, at the cost of opaqueness,
productivity and complexity. Make sure it is worth it.

~~~
gcanti
Hi, shameless plug: another (__typed__) immutable data structure
library: [https://github.com/gcanti/tcomb](https://github.com/gcanti/tcomb).
Main features:

- works with regular objects and arrays
- runtime type checks
- easy debugging with Chrome DevTools
- immutability and immutability helpers
- runtime type introspection
- easy JSON serialization / deserialization
- pattern matching

Also [https://github.com/gcanti/redux-tcomb](https://github.com/gcanti/redux-
tcomb) (Immutable and type-checked state and actions for Redux)

~~~
jlongster
I wouldn't call it an immutable data structure library. As far as I see, it
does not implement any data structures.

This looks like a nice alternative to seamless-immutable though.

------
arohner
Clojure's motivating reasons for immutability are around correctness, not
speed. In typical Clojure, immutable data structures are somewhat slower
(25%-100%) than standard java, but make up for it because they can be
trivially parallelized, don't require locking, and are significantly easier to
reason about.

From that perspective, immutable structures should be the default, with
mutable structures being reserved for when they're truly needed.

~~~
jsprogrammer
Mutable structures should be optimizations of immutable implementations.

~~~
arohner
They are! [http://clojure.org/transients](http://clojure.org/transients)
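For readers coming from JavaScript, the same pattern can be sketched with plain objects: mutate locally while building a value, then freeze it before sharing. This is only an analogy to Clojure's transients, not the actual mechanism:

```javascript
// Build a result efficiently with local mutation, then expose an
// immutable (frozen) value -- a rough analogue of Clojure's
// transient -> persistent! workflow.
function buildSquares(n) {
  const out = [];             // mutable only inside this scope
  for (let i = 0; i < n; i++) out.push(i * i);
  return Object.freeze(out);  // callers see an immutable value
}

const squares = buildSquares(5); // → [0, 1, 4, 9, 16]
// squares.push(25) would throw a TypeError: the array is frozen
```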

------
skybrian
People often say that immutable data structures are fast because pointer
comparison is fast. But as soon as you do any transformation, even something
as simple as map(), the pointers all change. So it seems like the speed of
pointer checks is only relevant in cases where you're not doing anything
interesting with the data.

In particular, if you need to apply any transformation to your model before
displaying it in a view, it's not going to be a fast pointer check - you need
to do dirty checks on all the elements just like you would with mutable data
structures.
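For concreteness, here is that effect in plain JavaScript: a `map()` that builds new objects loses reference equality for every element, even when the values are unchanged:

```javascript
const items = [{ x: 1 }, { x: 2 }];

// A transform that returns new objects breaks reference equality
// for every element, even when the values are identical:
const copied = items.map(o => ({ x: o.x }));
console.log(items[0] === copied[0]); // false

// Only a transform that passes elements through untouched keeps it:
const same = items.map(o => o);
console.log(items[0] === same[0]); // true
```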

~~~
spion
Sure, you would need to go one level deeper if the top level is different,
but you can still use pointer equality at the lower levels:

    
    
      function immutableEq(a1, a2) {
        // all() and mapPair() are assumed helpers: mapPair zips the
        // two collections pairwise and all() checks every result
        return (a1 === a2) || all(mapPair(a1, a2, immutableEq))
      }
    

Now if you update the value of a single element in the array a2, you would not
have to check all the elements in depth (only their references):

    
    
      a1 = Immutable.List.of(el1, el2, el3, el4, el5)
      a2 = a1.set(2, Immutable.Map({hello: "world"}));
      immutableEq(a1, a2) // all but el3 compared by reference equality

~~~
skybrian
In your example you're just changing one element and leaving the rest the same
- a pretty trivial transformation. If you're applying map() to a list then all
of the elements change.

If you have a pipeline then it's going to result in everything downstream re-
running, unless you compare the _output_ of the map() transformation to the
previous value, like some build systems do.
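That output-comparison idea can be sketched in plain JavaScript with a hypothetical `memoMap` helper: reuse the previous result for any input element that is reference-equal to last time, so unchanged elements keep reference-equal outputs downstream:

```javascript
// Memoize a map() so elements that are reference-equal to the
// previous call's inputs yield the previous (reference-equal)
// outputs. Illustrative helper, not a library API.
function memoMap(fn) {
  let prevIn = [];
  let prevOut = [];
  return function (arr) {
    const out = arr.map((el, i) =>
      el === prevIn[i] ? prevOut[i] : fn(el));
    prevIn = arr;
    prevOut = out;
    return out;
  };
}

const render = memoMap(o => ({ label: String(o.id) }));
const a = [{ id: 1 }, { id: 2 }];
const r1 = render(a);
const r2 = render([a[0], { id: 3 }]); // only the 2nd input changed
console.log(r1[0] === r2[0]); // true: unchanged element reused
console.log(r1[1] === r2[1]); // false: changed element recomputed
```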

~~~
spion
Or alternatively, run all the major data transformations necessary to get from
data to VDOM from the render function, passing as much as possible of the
original data to the bottom-most components.

------
JDDunn9
So Immutability is Facebook's solution to excessive virtual DOM checks, which
is Facebook's solution to the render everywhere problem, which is Facebook's
solution to state change, which is a problem created by abstracting away DOM
manipulation, which was never a real problem in the first place...

~~~
jasim
The chain of reasoning is wonderfully put, but DOM manipulation was definitely
a problem. When elements were inserted or removed from lists, you'd have to do
a dance of many jQuery() steps. There was no concept of state; it was all
tangled up in DOM values, and the most elegant solution was to use
$.serialize() if it worked in that context.

This mess was why people hated front-end programming. It has only been a year
or two since Angular/React became mainstream and allowed us to think of state
as plain JS objects. Lest we forget, it was truly bad times before.

------
inglor
"And strict mode, which is the default in ES6", this is not true, I think you
probably meant "in ES2015 modules". "Loose" mode still very much exists by
default in ES2015 (ES6).

~~~
jlongster
Thanks, I changed that

------
iamleppert
I'm a big fan of immutable data structures, but I will caution people against
blindly adopting them just because they've heard they are "fast" and because
they've been popularized by companies like Facebook.

The vast majority of apps, when properly designed (and using things such as
paginated datasets, and properly managing memory/cleaning up after
themselves), do not need immutability.

You should first ask yourself, is this a problem that can be solved with
immutable data structures, or why do I need to have huge data structures in
the browser for my frontend? Unless it's a very special case, you can do more
for frontend performance with lazy loading, and some good old fashioned
performance analysis (take inventory of event listeners, look at a heap dump
from Chrome dev tools, etc) than you can from adding the significant developer
and cognitive overhead of immutable.js or its ilk.

It may seem nice that `obj === obj2` can tell you whether anything deeply
nested has changed with a single pointer comparison, but as others have
mentioned this is not the primary use case for many UIs, which are constantly
recomposing and creating different kinds of data structures from other data
structures (think map, creating a hash from an array, etc).

There is no silver bullet and premature optimization with a highly specific
design pattern is the root of all evil. Don't worry about performance until
there is a legitimate performance problem that needs to be worried about.

~~~
Scramblejams
I don't use immutable structures for speed (they're often slower), I use them
because they can eliminate entire classes of bugs.

~~~
matwood
This. By making everything immutable by default, a program becomes much easier
to reason about. If I can quickly see that a variable is set here and, because
it is immutable, will never change, I can reason about the rest of the
function much faster.

------
Sonata
I really hope immutable collections get added to the standard library
eventually (probably ES2017 at this rate). Proxies should allow them to be
used with libraries which expect mutable objects and arrays, as long as they
don't mutate them.

Having them built in to the language would open up some interesting new
possibilities too. It should be possible to send immutable data between Worker
threads without the overhead of serialization and deserialization, which is
currently one of the main barriers to doing heavy computation in a web worker
rather than on the UI thread. Of course, there would be some nasty internal
implementation details to sort out, as it would require sharing heaps, but it
should be possible.

------
jasim
Where does React's Immutability Helpers fall in this spectrum? I was mulling
over switching my entire codebase to ImmutableJS when a friend asked to try
them first. It worked out well, but I wonder whether anyone here has
experience with it in a large long-running project. So far it has allowed me
to use JavaScript primitives everywhere; all I need to do is avoid direct
mutation and use `update`, which gives a new object whose properties share the
same references as the original, except along the path where the leaf node
that needs to change sits.
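The path-copying behavior described here can be sketched in a few lines of plain JavaScript. This is an illustrative `setIn` helper, not the actual React addon: only the objects along the updated path are copied, and every other branch keeps its original reference:

```javascript
// Minimal sketch of path-copying: produce a new object where only
// the nodes along `path` are fresh copies; untouched branches are
// shared by reference with the original.
function setIn(obj, path, value) {
  if (path.length === 0) return value;
  const [key, ...rest] = path;
  return { ...obj, [key]: setIn(obj[key], rest, value) };
}

const state = { user: { name: "a" }, posts: [{ id: 1 }] };
const next = setIn(state, ["user", "name"], "b");
console.log(next.user === state.user);   // false: on the changed path
console.log(next.posts === state.posts); // true: untouched branch shared
```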

~~~
jlongster
Those helpers are basically the equivalent of seamless-immutable, which uses
native JS objects but updates them by copying. You make the exact same
tradeoffs as if you chose it over Immutable.js. Comparing those two libs
represents the choice of either persistent data structures or copying JS ones.

(Note that seamless-immutable also enforces immutability by using
`Object.freeze` in development)

~~~
baddox
It's worth mentioning that the ES6 array spread operator provides similar
functionality:

[https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Refe...](https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Reference/Operators/Spread_operator#A_more_powerful_array_literal)

The object spread operator (proposed in ES7) is essentially syntactic sugar
for Object.assign:

[https://github.com/sebmarkbage/ecmascript-rest-
spread](https://github.com/sebmarkbage/ecmascript-rest-spread)

[https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Refe...](https://developer.mozilla.org/en-
US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign)

I've even used jQuery's extend in legacy projects:

[https://api.jquery.com/jquery.extend/](https://api.jquery.com/jquery.extend/)
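To make the comparison concrete, both the array spread operator and `Object.assign` give copy-on-write updates with no library at all:

```javascript
// Copy-on-write updates with plain ES6, no immutability library:
const arr = [1, 2, 3];
const arr2 = [...arr, 4];                      // new array; arr untouched
console.log(arr2); // [1, 2, 3, 4]

const obj = { a: 1, b: 2 };
const obj2 = Object.assign({}, obj, { b: 3 }); // new object; obj untouched
console.log(obj2); // { a: 1, b: 3 }
```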

------
jameslk
A big issue that these immutable libraries and Redux seem to neglect is
JavaScript's OO nature. I'm talking about instantiated function
declarations, something that's more readily available with ES6's class
notation. Most of these libraries (sans Redux) take advantage of this aspect
of JavaScript themselves, yet leave you to figure things out if you do the
same. This blog post kind of touches upon this regarding Map and Set.

Now, functional programming is nice and lends itself some useful benefits, but
unfortunately JavaScript just wasn't built in the value-oriented sense. And
given the native features of JavaScript, such as the ability to create
"classes," I'd like to take advantage of these features and reap all their
benefits, such as encapsulation (which of course, some in the FP community
don't seem to care about[1]).

I'd like to see a middle ground somewhere between having e.g. immutable
objects yet respecting the way JavaScript is built. Until then, I'd have to
ditch common JavaScript idioms, which are only being advanced in ES6. Not
something I'm very keen on doing.

1\.
[http://programmers.stackexchange.com/a/216358](http://programmers.stackexchange.com/a/216358)

~~~
k__
Flux seems like the FP equivalent of OOP's MVC to me. So there is no need
for what you describe.

~~~
jameslk
I don't follow the point you're trying to make. What I'm referring to is how
JavaScript's object orientedness, which relies heavily on references, doesn't
play very well with immutable values.

The only way to really eschew mutability is to ditch the concept of objects
representing a state and a set of methods that mutate its state (which
maintain encapsulation and enforce separation of concerns). Unless you decide
to make all your prototype methods return new instances of the constructor
upon mutation, which would be both inefficient and prone to error.
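For illustration, the pattern described above looks roughly like this: every "mutating" method returns a fresh frozen instance. This is a sketch of the idea, not a recommendation:

```javascript
// An immutable "class": state is frozen at construction and
// methods return new instances instead of mutating `this`.
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    Object.freeze(this); // any later assignment to fields fails
  }
  moveX(dx) {
    return new Point(this.x + dx, this.y); // new instance per update
  }
}

const p1 = new Point(0, 0);
const p2 = p1.moveX(5);
console.log(p1.x); // 0 -- original unchanged
console.log(p2.x); // 5
```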

------
Osiris
The problem with Immutable.js is that you have to remember to convert them to
regular objects to use with third-party modules that expect plain JavaScript
objects. One way to address that is to use Object.freeze instead. There's a
nice library called icepick[1] that facilitates using Object.freeze to create
immutable plain objects.

[1] [https://github.com/aearly/icepick](https://github.com/aearly/icepick)

*EDIT: Clarification
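One thing worth knowing about the `Object.freeze` approach: the freeze is shallow, which is why libraries like icepick recurse. A minimal sketch of a recursive freeze (the real library offers richer helpers on top of this idea):

```javascript
// Object.freeze only freezes the top level; nested objects stay
// mutable. A minimal recursive freeze fixes that:
function deepFreeze(obj) {
  for (const key of Object.keys(obj)) {
    const val = obj[key];
    if (val && typeof val === "object") deepFreeze(val);
  }
  return Object.freeze(obj);
}

const state = deepFreeze({ user: { name: "a" } });
console.log(Object.isFrozen(state));      // true
console.log(Object.isFrozen(state.user)); // true -- frozen all the way down
```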

~~~
jlongster
Sounds like you just skimmed the top of the blog post.

~~~
Osiris
I read it. I was just commenting on an alternative that uses native objects
rather than custom objects.

------
webXL
Nice article, but I think some of the ideas need some clarification.

> For example, keys of a Map object are value equality checked... This has a
> lot of really nice implications.

I'm not sure how immutability helps here. And it's not possible to use
reference equality, since with queries, you're converting user input into a
data structure.

Edit: never mind. Precisely because you're dealing with input, you have to use
value-equality.

------
jhgg
One issue with the sample code: in the runQuery snippet, you call .set(...)
on queryCache. But since it is an instance of Immutable.Map, without
reassigning the result to queryCache it would really have no effect.

    
    
        queryCache.set(query, results);
    

vs

    
    
        queryCache = queryCache.set(query, results);

~~~
Touche
Mixing immutability with mutation (using let) seems weird to me. If you want
to go immutable to make your app easier to understand, then great, but use
const throughout.

~~~
elros
Why? `const` isn't any more immutable than `let` is. For any reference type
that's bound to a variable with `const`, you can mutate it without getting
errors. This was very confusing to me at first, even though I understand the
mechanics behind it.

    
    
        const a = {}
        // That's OK:
        a.a = 1
        // That's not OK:
        a = 1
    

While I don't disagree with you on your point, one could easily create the
counter-point that by using `const` you're implying something that's not true.

~~~
kansface
The translation to C would be `T * const` (a constant pointer), not
`const T *` (a pointer to constant data). Const is strictly better than let
because it prevents a certain class of bugs - use it everywhere possible
instead of let/var.

------
amelius
One problem with the Immutable.js library is that you still cannot run a fast
"diff" of two immutable values. This makes it difficult to write fast-updating
code.

~~~
lojack
Correct me if I am wrong, but the big advantage is that in most cases you no
longer need to run a diff on the two values. You can do a naive comparison,
and if their references are different then you can assume the values have
changed.

If you truly do need to run a diff, you're going to run into the exact same
optimization problems with both immutable values and mutable values.

~~~
idibidiart
Since assignment of an object is by reference, if you say: var stateB =
stateA, then both stateA and stateB now point to the same JS object. If you
change stateB then stateA will be changed too since they both point to the
same object

So you would have to do: var stateB = JSON.parse(JSON.stringify(stateA))

to get a copy and then you can change stateB and it will not be equal to
stateA.
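The aliasing described above, shown directly. As noted, the `JSON.parse(JSON.stringify(...))` trick does produce an independent deep copy, but it is an expensive operation for large states:

```javascript
const stateA = { items: [1, 2] };
const stateB = stateA;      // alias: both names point at one object
stateB.items.push(3);
console.log(stateA.items.length); // 3 -- the "other" state changed too

const stateC = JSON.parse(JSON.stringify(stateA)); // independent deep copy
stateC.items.push(4);
console.log(stateA.items.length); // still 3 -- the copy is decoupled
console.log(stateC.items.length); // 4
```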

The problem is without use of actual "persistent data structures" (not
"immutables" based on Object.freeze etc) the copy/clone operation is
expensive. With persistent data structures as in ImmutableJS you get faster
(O(1)?) copy. Therefore, building your state store on ImmutableJS and doing an
explicit comparison (if you're using React) with shouldComponentUpdate by
comparing 'prevState != curState' is the only thing you need, and I have seen
benchmarks that show a 35% increase in performance over not using ImmutableJS.

Update:

in response to the question about having a list of items and React invoking
shouldComponentUpdate, that should not be the case if React stops comparing
when the parent, i.e. the list itself, says that an update should not happen.
Does React in that case descend down the tree to compare the state of the
list items? I suppose it would, but there must be a way around it. I heard
about the batchedUpdates add-on in this context. Researching now!

~~~
amelius
But let's say the UI consists of a list of N text elements. That list is
encoded as an immutable vector. When one of the elements changes, react will
run through the complete list and call shouldComponentUpdate for each item (!)

In contrast, with a fast "diff" operation, we could simply determine which of
the elements in the state have changed, and this would be fast.

~~~
epmatsw
How would you implement a fast diff that didn't involve looking at each text
value anyways? Would there be a separate store in the structure that tracks
changes, or something along those lines?

~~~
amelius
You can do this by skipping over shared substructures while comparing, for
example.

~~~
epmatsw
I don't see how that is related to immutability. Immutability gives you the
ability to verify that it's unchanged in O(1), but it doesn't change your
ability to do optimizations like that...

~~~
amelius
Well, immutable data-structures use "structural sharing", internally. Search
for it. For example, this image, [1], shows two data-structures (the green
nodes), that internally share many nodes. Now, comparing those two structures
can be done efficiently, because the shared parts don't need comparing.

[1] [http://eclipsesource.com/blogs/wp-
content/uploads/2009/12/cl...](http://eclipsesource.com/blogs/wp-
content/uploads/2009/12/clojure-trees.png) (a random image I found on the
internet that illustrates my point)
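Such a diff can be sketched in plain JavaScript with a hypothetical `changedPaths` helper: whenever two nodes are the same reference, the entire shared subtree is skipped without being walked:

```javascript
// Diff that exploits structural sharing: a reference-equal node
// means the whole subtree is identical, so it is skipped entirely.
function changedPaths(a, b, path = []) {
  if (a === b) return [];               // shared node: skip subtree
  if (typeof a !== "object" || typeof b !== "object" || !a || !b)
    return [path];                      // leaf-level difference
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  let out = [];
  for (const k of keys)
    out = out.concat(changedPaths(a[k], b[k], path.concat(k)));
  return out;
}

const v1 = { left: { x: 1 }, right: { y: 2 } };
const v2 = { left: v1.left, right: { y: 3 } }; // shares the `left` subtree
console.log(changedPaths(v1, v2)); // [ [ 'right', 'y' ] ] -- `left` never walked
```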

~~~
epmatsw
Wouldn't that be an argument for using an immutability library then? If you
can implement a shallow equality that takes shared structures into account,
what about non-immutable structures makes them a more efficient option for
such a thing?

------
_ducky
Regarding JavaScript interop and Immutable.js, it seems that the authors of
Immutable.js are trying to keep their List/Map/Set objects as close to the
ES2015 ones as possible [1].

[1]: [https://github.com/facebook/immutable-js#javascript-first-
ap...](https://github.com/facebook/immutable-js#javascript-first-api)

