Pros and Cons of using immutability with React.js (reactkungfu.com)
90 points by voter101 on Aug 14, 2015 | 29 comments



> I’d recommend the immutable-js library for it. It has nice API and it comes from Facebook itself. Another option is the baobab library - but it works better when more ‘reactish’ ideas are present in your codebase, like global app state.

Another option is a library I wrote, react-cursor[1], which is basically sugar over the React immutability helpers[2] which the article mentioned. react-cursor has a couple advantages over immutable-js and baobab:

1/ simpler types: use regular React state with plain old JS data structures

2/ simpler implementation: about 100 lines of code and a tiny API

3/ super easy to integrate with your existing large codebase already using React state

[1] https://github.com/dustingetz/react-cursor [2] http://facebook.github.io/react/docs/update.html
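For context, the core idea can be sketched in a few lines of plain JS. This is a toy illustration of the cursor concept, not react-cursor's actual API (names like `refine` and `set` here are illustrative):

```javascript
// Toy sketch of a cursor: a path into a plain JS state object plus an
// update that rebuilds only the nodes along that path. Untouched
// siblings keep their old references, so === still detects changes.
function cursor(state, path = []) {
  return {
    value: path.reduce((obj, key) => obj[key], state),
    set(newValue) {
      // Rebuild the spine from the leaf up, copying each node on the path.
      function setIn(obj, [key, ...rest], v) {
        if (key === undefined) return v;
        return Object.assign({}, obj, { [key]: setIn(obj[key], rest, v) });
      }
      return setIn(state, path, newValue);
    },
    refine(key) {
      return cursor(state, path.concat(key));
    }
  };
}

const state = { user: { name: 'Ada' }, todos: ['x'] };
const nameCursor = cursor(state).refine('user').refine('name');
const next = nameCursor.set('Grace');
// next.user is a new object, but next.todos is the same reference as before
```

The same shape is what the React immutability helpers' `update()` spec produces: a new root sharing every subtree you didn't touch.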


For those looking for great Flux libraries with immutability at their core, I've been loving NuclearJS[0], which is built on top of ImmutableJS and untangles your stores by giving you a great kind of "functional lens" called Getters.

One problem I have is that ImmutableJS[1] doesn't list the complexity of any of the operations in their documentation. So it can be hard to intuit the efficiency of given operations without having read/grokked the 5k sloc of source.

[0] https://optimizely.github.io/nuclear-js/

[1] https://facebook.github.io/immutable-js/docs/


Listing complexity doesn't really help here. There's a huge difference between practical complexity and theoretical complexity. Saying that "insert" is O(log n) is misleading when you realize the branching factor is 32. Likewise, "compare" is O(log n), or mostly constant, in most real-life settings, because you're usually comparing against a value that was derived from the one you're comparing against (so it shares lots of subtrees by reference). You're not, e.g., sending back a fresh new copy of the data from the server to compare against. Immutable-js _could_ say "it's linear in theory, but most of the time it's really almost constant time", but that doesn't help much either.
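Concretely, the sharing works like this (a plain-JS sketch with made-up data, using `Object.assign` rather than a persistent library):

```javascript
// An immutable update rebuilds only the path to the change, so most
// subtrees are shared by reference, and an equality walk can skip each
// shared branch with a single === check.
const before = {
  profile: { name: 'Ada' },
  todos: { monday: ['write'], tuesday: ['review'] }
};

// Update one leaf; sibling subtrees are reused, not copied.
const after = Object.assign({}, before, {
  profile: Object.assign({}, before.profile, { name: 'Grace' })
});

after.todos === before.todos;      // true: whole subtree shared
after.profile === before.profile;  // false: only this path was rebuilt
```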

Is it blasphemous to say that looking at runtime complexity is gradually becoming more of a premature optimization (thanks to better hardware)? You can argue all day long that your JS object has constant insertion time, but the underlying implementation makes it an order of magnitude slower than an array for a limited number of fields. And if you accidentally trigger the hidden-class deopt that turns it into a hash map, that's another order of magnitude slower. No amount of ordinary complexity analysis will help you here.

Vice versa, when Babel gradually starts supporting constant lifting for collections (somehow), you can look at a piece of code in your editor and reason that a comparison is linear, but the transpiler lifts the collection out (`const liftedA = []; function foo() {return liftedA;}` instead of `function foo() {return [];}`), so you don't realize that the comparison is actually constant time (a reference comparison). And then, if you write some overly clever optimization for that piece of code yourself, you might ironically get worse perf because the transpiler can't lift the collection anymore.

That being said, Immutable-js uses the same concepts as Clojure's persistent data structures (exposed as mori for JS users). Here's a nice article on them: http://hypirion.com/musings/understanding-persistent-vector-...


>There's a huge difference between practical complexity and theoretical complexity.

This resonates with me. I write a lot of Scheme, and often enough someone comes along saying that association lists (simple lists of pairs) are terrible because lookup time is linear and that I should be using hash tables. However, they don't realize that hash tables are only faster when the mapping is very large and come with a penalty of no longer having a persistent data structure.
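Translated into JS terms (an illustrative sketch, not Scheme): an association list is just an array of key/value pairs with linear lookup, and "updating" it means prepending a new pair, which keeps every old version alive.

```javascript
// Association list: an array of [key, value] pairs with linear lookup.
// "Updating" conses a new pair onto the front and leaves the old list
// intact -- the persistence that a mutable hash map gives up.
function assoc(alist, key) {
  // Linear scan, but for small n this is often competitive with hashing.
  const pair = alist.find(([k]) => k === key);
  return pair ? pair[1] : undefined;
}

const prices = [['apple', 1], ['pear', 2]];
const updated = [['apple', 3], ...prices];  // shadow the old binding

assoc(updated, 'apple');  // 3
assoc(prices, 'apple');   // 1 -- the old version still exists
```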


I found this an extremely helpful comment! Thanks very much, chenglou!

EDIT: Though, I personally think it actually would be pretty helpful if ImmutableJS put something like this in their docs: "it's linear theoretically but most of the time it's really almost constant time"


We've started using Redux [1] lately, because we found some flaws in the original Flux architecture [2]. We also started using ImmutableJS together with Redux. Would you care to list any advantages of NuclearJS over Redux?

[1] http://gaearon.github.io/redux

[2] biggest flaw: if an action updates two different stores, and a component depends on both stores, the component will get rendered twice. Even worse, the first time it will get rendered with one store updated and the other one not updated, so possibly in an inconsistent state.


The biggest gap I was previously aware of in Redux was the lack of getters, but as bryanlarsen points out in a sibling to your comment, that appears to be available in reselect: https://github.com/faassen/reselect (last time[0] I talked about Nuclear vs Redux here, reselect didn't have a readme). So I'll need to take some time to evaluate the options...

[0] https://news.ycombinator.com/item?id=9833058


Thanks for the reply. The philosophy of Redux seems to be to keep the core as agnostic as possible, and that's why getters are outside of the core. In part I like this, in part I don't; for sure it makes it difficult to evaluate, as there are so many possibilities... I think I wouldn't mind some more convention over configuration.


If you want the getters functionality in redux, use reselect: https://github.com/faassen/reselect

At this point I really get the impression that redux has "won" now that flummox recommends using redux instead of flummox.

Flux libraries are small and simple enough that it probably won't hurt to use a non-mainstream flux library, but it's probably still best to use the same one everybody else is using.


Hmm, reading docs:

Lists are immutable and fully persistent with O(log32 N) gets and sets, and O(1) push and pop.

Immutable Map is an unordered KeyedIterable of (key, value) pairs with O(log32 N) gets and O(log32 N) persistent sets.

A Collection of unique values with O(log32 N) adds and has.

etc.


Immutability for stateful things seems unintuitive at first, but reframed as the lack of mutability, it makes more sense. We can easily add it back.

    yourCar === neighboursCar; // false
    yourCarRepainted === yourCar; // false :(
In the article's examples, how does the program know which object is different? It's woven into the language design that every object has an address. That means if we want to write code without mutability, the interpreter can't help us, since mutability is always on, and it can't protect intentionally stateless code from accidental modification.

Flip side, if immutability is the default, we can easily add state, since it's just a lack of an address. The address becomes part of the data structure, giving the coder more power, since it's not locked outside of the code. Think SQL! Or Haskell!

    var yourCar = {id: 'my_car', color: 'red'},
        neighboursCar = {id: 'neighbours_car', color: 'red'};
    
    function referenceEqual(a, b) { return a.id === b.id; }
    
    referenceEqual(yourCar, neighboursCar); // false :D
    var yourCarRepainted = Object.assign({}, yourCar, {color: 'blue'});
    referenceEqual(yourCar, yourCarRepainted); // true :D
Notice we've now flipped the address into our data, putting even the (===) operator in our hands. With JS this will run slowly, since it can't infer our immutability, but constants are coming!


This week I learned to love the spread operator.

    const newState = {...state, ...objectWithNewValues}


That's really cool. Worth noting that it's an ES7 proposal rather than part of ES6, so you get it with Babel, but not in native ES6 or, e.g., TypeScript.


Whoa, that works on objects/properties? I hadn't seen that before. That's nice.


Just keep in mind that object spread operators are a TC39 stage 1 proposal at this point.
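Until it ships, the same merge can be written with ES6's Object.assign, which is essentially what Babel compiles the object spread down to:

```javascript
// ES6 equivalent of {...state, ...objectWithNewValues}: a fresh object,
// later sources winning on key collisions, the original left untouched.
const state = { loading: true, items: [1, 2] };
const objectWithNewValues = { loading: false };

const newState = Object.assign({}, state, objectWithNewValues);

newState.loading;  // false
state.loading;     // true -- the old state object is not mutated
```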


Yes, here are some simple examples:

    (state, action) => ({...state, loading: true})

    (state, {error}) => ({...state, error, loading: false})

    (state, {result:{data}}) => ({...state, ...data, loading: false})


Okay as far as it goes, but it downplays some difficulties.

A con when updating tree structures is the need to replace all nodes along the path from the root, which is why functional languages sometimes use fancy data structures like lenses.

Graphs are a bit awkward to represent (you can't use regular pointers, due to cycles).

Reference comparisons are fast for small nodes, but for something like a large list, it's often not enough to know it was touched. You need to compute the diff to make updates efficient, which often requires a linear scan or worse. Making this efficient for arbitrary list mutations is a fairly difficult problem.
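To make the last point concrete, here is a sketch of the cheap fallback: an element-wise reference scan. Each comparison is O(1), but finding the changes is still linear, and it only catches in-place replacements, not inserts or removals.

```javascript
// Shallow diff of two versions of a list of immutable items. Each
// element comparison is a constant-time reference check, but locating
// the changes is a linear scan, and handling arbitrary inserts and
// removals would need a real diff algorithm on top.
function changedIndices(oldList, newList) {
  const changed = [];
  for (let i = 0; i < newList.length; i++) {
    if (newList[i] !== oldList[i]) changed.push(i);
  }
  return changed;
}

const a = [{ id: 1 }, { id: 2 }, { id: 3 }];
const b = a.slice();
b[1] = { id: 2, done: true };  // replace one element

changedIndices(a, b);  // [1]
```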


> fancy data structures like lenses

I think you mean zippers. I'm not sure lenses can be called data structures; they are closer to Functors than, say, Linked Lists in nature (but I may be wrong here).
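For what it's worth, a lens can be sketched as a pair of functions rather than a structure. A toy version in plain JS, not any particular library's API:

```javascript
// A toy lens: a getter/setter pair focused on one part of a structure.
// It's a first-class accessor (more like a function than a container),
// whereas a zipper is a concrete representation of a position inside one.
const colorLens = {
  get: car => car.color,
  set: (car, color) => Object.assign({}, car, { color })
};

const car = { id: 'my_car', color: 'red' };
const repainted = colorLens.set(car, 'blue');

colorLens.get(repainted);  // 'blue'
car.color;                 // 'red' -- original untouched
```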


The author's primary motivating example is the high cost of deep equality checks. I'm not sure how immutability helps with that; yes, if two variables are === then they haven't changed, but two objects can be !== and still value-equal. So if you want to know whether two objects are (value) equal, you'll still need to check.


The primary use case for checking equality in React is when you change one part of your application state and it triggers a rerender of a large React component tree. Since you probably only changed one small bit of the state (e.g. you created a new todo list item), you don't actually need to rerender unrelated components (e.g. the already-existing todo list items). Immutable data allows you to do cheap (constant time) reference equality checks, because the data objects representing the existing todo list items will be the exact same objects that they were previously.

In this fundamental React render flow, you probably won't need to worry about checking value equality between two different objects if you are using immutable data. Now, your application might have specific UI requirements that need to do value equality. As a random example, you might have a complex form, and you might want to know if, after the user changed several things, the form's data is different than it was at the beginning. In this case, you'll have to do a (more expensive) value equality check.
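The skip itself boils down to a shallow reference comparison of props. A standalone sketch of that check, with a hypothetical `shallowEqual` helper (the same idea as React's PureRenderMixin), not React's internals:

```javascript
// The check behind "skip this subtree": compare each prop by reference.
// With immutable data, unchanged props keep their old references, so
// this costs O(number of props), not O(size of the data).
function shallowEqual(a, b) {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(key => a[key] === b[key]);
}

const items = [{ text: 'buy milk' }];
const prevProps = { items, title: 'Todos' };
const nextProps = { items, title: 'Todos' };  // same references

shallowEqual(prevProps, nextProps);  // true -> safe to skip the rerender
```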


With React, the point is that by simply comparing references to immutable objects you can skip most unnecessary re-renders almost for free. Of course you'll still have some false positives (i.e., the reference is different but the content is the same, so re-rendering wasn't necessary), but that's not a big penalty.


Immutability doesn't have to be a discipline. It is the default when working with ClojureScript!


There's a benefit to teasing apart two ideas here:

- writing functions that expect immutable data (you get referential transparency and value equality → a system that's easier to reason about)

- using persistent data structures (they make it cheap and efficient to create new versions of your data, compared to messy Object.assign helpers)

Javascript doesn't promote applications written in that style though, so you're definitely going to want to use a library like Immutable.js everywhere for those kinds of guarantees.


Object.assign does not work well when you have classes. For one of my projects I came up with a small helper to clone class instances: https://caurea.org/2015/07/19/generic-immutable-objects-in-j.... In addition to cloning, it freezes the new object so accidental attempts to mutate it will throw an exception (in strict code).
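Roughly the shape such a helper can take (a sketch of the idea, not the linked implementation; `withChanges` is a made-up name):

```javascript
// Immutable "set" for class instances: clone with the same prototype,
// overlay the changes, then freeze so accidental mutation throws in
// strict-mode code (and silently no-ops otherwise).
function withChanges(instance, changes) {
  const clone = Object.create(Object.getPrototypeOf(instance));
  Object.assign(clone, instance, changes);
  return Object.freeze(clone);
}

class Car {
  constructor(color) { this.color = color; }
  describe() { return `a ${this.color} car`; }
}

const car = new Car('red');
const blue = withChanges(car, { color: 'blue' });

blue.describe();          // 'a blue car' -- methods still work
// blue.color = 'green';  // TypeError in strict code: object is frozen
```

A plain `Object.assign({}, instance, changes)` would lose the prototype, which is why the clone goes through `Object.create` first.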


This is a good overview article with lots of practical examples. One nit. These lines are written many times:

    x == y; // true
    x === y; // true
without a single instance where the results differ. This would be stronger if the first line of every pair were removed, since this isn't an article about JS equality operators.
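For the record, the pairs only diverge when coercion kicks in, which the article's object comparisons never trigger:

```javascript
// == coerces before comparing; === does not. For two objects the
// operators agree (both compare references), which is why the article's
// paired lines are redundant.
1 == '1';    // true  -- string coerced to number
1 === '1';   // false -- different types, no coercion

const a = {};
const b = {};
a == b;      // false -- different references, coercion doesn't apply
a === b;     // false
```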


There is a lot of good information here, but being a stickler for detail, I found the frequent grammar errors distracting. It seems like, with all the self-advertisement and signup conversion forms, it would be helpful to have an editor, or a native speaker of the language it's written in, go over it.


Seems like the biggest benefit of immutability is that it reduces the time spent dirty-checking objects. So if you use getters/setters instead of dirty checking, immutability doesn't provide much benefit, right?


so let's use tons more memory and slow copies by value everywhere, not to mention backward coding, just because our generic UI framework is too generic?


Very well written - great job!



