

Pros and Cons of using immutability with React.js - voter101
http://reactkungfu.com/2015/08/pros-and-cons-of-using-immutability-with-react-js/

======
dustingetz
> _I’d recommend the immutable-js library for it. It has nice API and it comes
> from Facebook itself. Another option is the baobab library - but it works
> better when more ‘reactish’ ideas are present in your codebase, like global
> app state._

Another option is a library I wrote, react-cursor[1], which is basically sugar
over the React immutability helpers[2] that the article mentioned. react-cursor
has a couple of advantages over immutable-js and baobab:

1/ simpler types: it uses regular React state with plain old JS data structures
2/ simpler implementation: about 100 lines of code and a tiny API
3/ super easy to integrate with an existing large codebase that already uses React state

[1] [https://github.com/dustingetz/react-cursor](https://github.com/dustingetz/react-cursor)

[2] [http://facebook.github.io/react/docs/update.html](http://facebook.github.io/react/docs/update.html)
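A rough sketch of the cursor idea in plain JS (the names and API here are illustrative, not react-cursor's actual surface):

```javascript
// A cursor pairs a path into an immutable state tree with operations on it.
// "Setting" through the cursor returns a new root in which only the nodes
// along the path are copied; untouched siblings are shared by reference.
function makeCursor(root, path = []) {
  return {
    value: path.reduce((node, key) => node[key], root),
    refine: (key) => makeCursor(root, path.concat(key)),
    set: (newValue) => setIn(root, path, newValue),
  };
}

// Copy-on-write update along a key path.
function setIn(node, path, value) {
  if (path.length === 0) return value;
  const [key, ...rest] = path;
  return Object.assign({}, node, { [key]: setIn(node[key], rest, value) });
}
```

Refining a cursor narrows its focus; setting through it produces a fresh root while leaving the old one intact, which is what makes cheap `===` checks in `shouldComponentUpdate` possible.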

------
rattray
For those looking for great Flux libraries with immutability at their core,
I've been loving NuclearJS[0], which is built on top of ImmutableJS and
untangles your stores by giving you a great kind of "functional lens" called
Getters.

One problem I have is that ImmutableJS[1] doesn't list the complexity of any
of the operations in their documentation. So it can be hard to intuit the
efficiency of given operations without having read/grokked the 5k sloc of
source.

[0] [https://optimizely.github.io/nuclear-js/](https://optimizely.github.io/nuclear-js/)

[1] [https://facebook.github.io/immutable-js/docs/](https://facebook.github.io/immutable-js/docs/)

~~~
chenglou
Listing complexity doesn't really help here. There's a huge difference between
practical complexity and the theoretical kind. Saying that "insert" has log(n)
complexity is misleading when you realize the branching factor is 32.
Likewise, "compare" is log(n), or mostly constant, in most real-life settings,
where you're comparing against a value that was derived from the one you're
comparing against (so it shares lots of subtrees by reference). You're not,
e.g., comparing against a fresh new copy of the data sent back from the
server. Immutable-js _could_ say "it's linear in theory but most of the time
it's really almost constant time", but that doesn't help much either.

Is it blasphemous to say that looking at runtime complexity is gradually
becoming more of a premature optimization (thanks to better hardware)? You can
argue all day long that your JS object has constant insertion time, but the
underlying implementation makes it an order of magnitude slower than an array
for a limited number of fields. And if you accidentally trigger the
hidden-class deopt that turns it into a hash map, that's another order of
magnitude slower. No amount of ordinary complexity analysis will help you
here. Vice versa, when Babel gradually starts supporting constant lifting for
collections (somehow), you can look at a piece of code in your editor and
reason that a comparison is linear, but the transpiler lifts it out (`const
liftedA = []; function foo() {return liftedA;}` instead of `function foo()
{return [];}`) without you realizing that the comparison is actually constant
time (a reference comparison). And then, if you write some overly clever
optimization for that piece of code yourself, you might ironically get worse
perf because the transpiler can't lift the collection anymore.

That being said, Immutable-js uses the same concept as Clojure's persistent
data structures (exposed as mori for JS users). Here's a nice article on it:
[http://hypirion.com/musings/understanding-persistent-vector-pt-1](http://hypirion.com/musings/understanding-persistent-vector-pt-1)
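The structural-sharing point shows up even with plain objects: when `next` is derived from `prev`, every untouched subtree is the same object, so a change detector can bail out with `===` instead of walking the tree. A minimal sketch (not Immutable-js code):

```javascript
// Find changed top-level keys by reference comparison alone. When 'next'
// was derived from 'prev', unchanged subtrees are shared objects, so this
// never has to descend into them.
function changedKeys(prev, next) {
  return Object.keys(next).filter((key) => prev[key] !== next[key]);
}

const prev = { todos: { 1: 'a', 2: 'b' }, user: { name: 'x' } };
// Derive a new state by copying only the nodes on the changed path.
const next = Object.assign({}, prev, {
  todos: Object.assign({}, prev.todos, { 1: 'a!' }),
});

changedKeys(prev, next); // ['todos'] -- 'user' was skipped by reference
```

This is the "mostly constant" compare in practice: the deep walk only happens down the one path that actually changed.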

~~~
davexunit
>There's a huge difference between practical complexity and the theoretical kind.

This resonates with me. I write a lot of Scheme, and often enough someone
comes along saying that association lists (simple lists of pairs) are terrible
because lookup time is linear, and that I should be using hash tables. What
they don't realize is that hash tables are only faster when the mapping is
very large, and they come at the cost of no longer having a persistent data
structure.
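The same trade-off is easy to see in JS: an array of `[key, value]` pairs (roughly an alist) is trivial to keep persistent, and the linear scan is perfectly fast for a handful of entries. An illustrative sketch:

```javascript
// Association-list style lookup: linear scan over [key, value] pairs.
function alistGet(alist, key) {
  const pair = alist.find(([k]) => k === key);
  return pair ? pair[1] : undefined;
}

// "Updating" returns a new list, leaving the old one intact (persistent).
function alistSet(alist, key, value) {
  return [[key, value], ...alist.filter(([k]) => k !== key)];
}
```

A hash map destroys old versions on update unless you copy the whole table, which is exactly the persistence penalty being described.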

------
munro
Immutability for stateful things seems unintuitive at first, but reframed as
the lack of mutability, it makes more sense: we can always add state back in
ourselves.

    
    
        yourCar === neighboursCar; // false
        yourCarRepainted === yourCar; // false :(
    

In the article's examples, how does the program know which object is
different? It's woven into the language design that every object has an
address. That means if we want to write code without mutability, the
interpreter can't help us: mutability is always on, so nothing protects
intentionally stateless code from being accidentally modified.

Flip side: if immutability is the default, we can easily add state back, since
an immutable value just lacks an address. We put the address into the data
structure ourselves, giving the coder more power, since identity is no longer
locked outside of the code. Think SQL! Or Haskell!

    
    
        var yourCar = {id: 'my_car', color: 'red'},
            neighboursCar = {id: 'neighbours_car', color: 'red'};
        
        function valueEqual(a, b) { return a === b; }
        function referenceEqual(a, b) { return valueEqual(a.id, b.id); }
        
        referenceEqual(yourCar, neighboursCar); // false :D
        var yourCarRepainted = Object.assign({}, yourCar, {color: 'blue'});
        referenceEqual(yourCar, yourCarRepainted); // true :D
    

Notice we've now flipped the address into our data, putting even the (===)
operator into our own hands. In JS this will run slowly, since the engine
can't infer our immutability, but constants are coming!

------
k__
This week I learned to love the spread operator.

    
    
        const newState = {...state, ...objectWithNewValues}
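Worth noting that the merge is shallow: nested objects in `state` carry over by reference, so a nested update needs its own spread at each level. For example:

```javascript
// Object spread copies only the top level; nested objects are shared.
const state = { filter: 'all', form: { name: 'a', age: 1 } };
const next = { ...state, filter: 'done' };
next.form === state.form; // true: shared reference, not a copy

// To change something nested, spread each level along the path:
const renamed = { ...state, form: { ...state.form, name: 'b' } };
```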

~~~
andrewstuart2
Whoa, that works on objects/properties? I hadn't seen that before. That's
nice.

~~~
lowboy
Just keep in mind that object spread operators are a TC39 stage 1 proposal at
this point.

------
skybrian
Okay as far as it goes, but it downplays some difficulties.

A con when updating tree structures is the need to replace all nodes along the
path from the root, which is why functional languages sometimes use fancy data
structures like lenses.

Graphs are a bit awkwardly represented (can't use regular pointers due to
cycles).

Reference comparisons are fast for small nodes, but for something like a large
list, it's often not enough to know it was touched. You need to compute the
diff to make updates efficient, which often requires a linear scan or worse.
Making this efficient for arbitrary list mutations is a fairly difficult
problem.
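For concreteness, here's what that path copying looks like on a plain nested object: updating one leaf means allocating a new node for every level from the root down to it, while subtrees off the path are reused (a sketch, not any particular library):

```javascript
// Updating tree.a.b.c replaces c's node, b's node, a's node, and the
// root -- one new allocation per level of depth -- while every subtree
// off that path is reused by reference.
const tree = { a: { b: { c: 1 }, other: { big: 'subtree' } } };

const updated = {
  ...tree,
  a: { ...tree.a, b: { ...tree.a.b, c: 2 } },
};

updated.a.other === tree.a.other; // true: shared, not copied
```

Zippers and similar structures exist largely to make this root-to-leaf rebuilding less tedious and to keep a "current position" in the tree.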

~~~
klibertp
> fancy data structures like lenses

I think you mean zippers. I'm not sure lenses can be called data structures;
they are closer in nature to Functors than to, say, linked lists (but I may be
wrong here).

------
mcphage
The author's primary motivating example is the high cost of deep equality
checks. I'm not sure how immutability helps with that; yes, if two variables
are === then they haven't changed, but two objects can be !== and still value
equal. So if you want to know whether two objects are (value) equal, you'll
still need to check.
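Right: `===` only gives a one-way guarantee. The cheap direction is the positive one, where a shared reference proves value equality; only the !==-but-equal case forces a deep walk:

```javascript
const a = { x: 1 };
const b = { x: 1 }; // value-equal, but a different object
const c = a;        // same reference

a === b; // false, even though the objects are deep-equal
a === c; // true: same reference, so no deep check is needed
```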

~~~
baddox
The primary use case for checking equality in React is when you change one
part of your application state and it triggers a rerender of a large React
component tree. Since you probably only changed one small bit of the state
(e.g. you created a new todo list item), you don't actually need to rerender
unrelated components (e.g. the already-existing todo list items). Immutable
data allows you to do cheap (constant time) reference equality checks, because
the data objects representing the existing todo list items will be the exact
same objects that they were previously.

In this fundamental React render flow, you probably won't need to worry about
checking value equality between two different objects if you are using
immutable data. Now, your application might have specific UI requirements that
need to do value equality. As a random example, you might have a complex form,
and you might want to know if, after the user changed several things, the
form's data is different than it was at the beginning. In this case, you'll
have to do a (more expensive) value equality check.
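In React terms this is a `shouldComponentUpdate` that just compares references; with immutable data, an unchanged reference really does mean unchanged data, so a check this shallow is safe. A sketch of the predicate (standalone for illustration, outside any component class):

```javascript
// Safe only when props are never mutated in place: then an unchanged
// reference implies unchanged data, and no deep comparison is needed.
function shallowEqual(prev, next) {
  const keys = Object.keys(next);
  return keys.length === Object.keys(prev).length &&
    keys.every((key) => prev[key] === next[key]);
}

// In a component: shouldComponentUpdate(nextProps) {
//   return !shallowEqual(this.props, nextProps);
// }
```

The unchanged todo items keep their old references across renders, so the whole subtree bails out in constant time per prop.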

------
rymndhng
Immutability doesn't have to be a discipline. It is the default when working
with ClojureScript!

------
straws
There's a benefit to teasing apart two ideas here:

\- writing functions that expect immutable data (you get referential
transparency and value equality → a system that's easier to reason about)

\- using persistent data structures (which make it cheap and efficient to
create new versions of your data, compared to messy Object.assign helpers)

Javascript doesn't promote applications written in that style though, so
you're definitely going to want to use a library like Immutable.js everywhere
for those kinds of guarantees.

------
wereHamster
Object.assign does not work well when you have classes. For one of my projects
I came up with a small helper to clone class instances:
[https://caurea.org/2015/07/19/generic-immutable-objects-in-javascript.html](https://caurea.org/2015/07/19/generic-immutable-objects-in-javascript.html).
In addition to cloning, it freezes the new object so accidental attempts to
mutate it will throw an exception (in strict code).
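The general shape of such a helper (a sketch along the lines described, not the code from the linked post): clone with the same prototype so class instances stay instances, merge the changes, then freeze.

```javascript
'use strict';

// Copy an instance with the same prototype, apply changes, and freeze
// the result so later mutation attempts throw in strict mode.
function update(obj, changes) {
  const clone = Object.assign(
    Object.create(Object.getPrototypeOf(obj)), obj, changes);
  return Object.freeze(clone);
}

class Car {
  constructor(color) { this.color = color; }
}

const red = new Car('red');
const blue = update(red, { color: 'blue' });
// blue is still a Car; red is untouched; assigning to blue.color
// would throw a TypeError in strict code.
```

`Object.assign` alone would produce a plain object and silently drop the prototype, which is exactly the class problem being described.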

------
couchand
This is a good overview article with lots of practical examples. One nit.
These lines are written many times:

    
    
        x == y; // true
        x === y; // true
    

without a single instance where the results differ. This would be stronger if
the first line of every pair were removed, since this isn't an article about
JS equality operators.
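For contrast, a case where the two operators do differ (which the article's pairs never hit, since both sides are always the same type):

```javascript
1 == '1';   // true: loose equality coerces the string to a number
1 === '1';  // false: strict equality checks type first

null == undefined;  // true: the one special loose-equality pairing
null === undefined; // false: different types
```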

------
stevebmark
There is a lot of good information here, but being a stickler for detail, I
found the frequent grammar errors distracting. Given all the self-promotion
and signup forms, it seems like it would be worth having an editor, or a
native speaker of the language it's written in, go over it.

------
JDDunn9
Seems like the biggest benefit of immutability is that it reduces the time
spent dirty-checking objects. So if you use getters/setters instead of dirty
checking, immutability doesn't provide much benefit, right?

------
gcb0
so let's use tons more memory and slow copies by value everywhere, not to
mention backward coding, just because our generic UI framework is too generic?

------
wesbos
Very well written - great job!

