Hacker News
Immutable.js – Immutable Data Collections (facebook.github.io)
182 points by swah on Oct 31, 2014 | 85 comments



> The difference for the immutable collections is that methods which would mutate the collection, like 'push', 'set', 'unshift' or 'splice' instead return a new immutable collection.

I think this is an unfortunate design decision which should be reconsidered. Functional operations should have different names than side-effecting operations. In general, I think that while side-effecting operations are commonly verbs, functional operations should be nouns or prepositions.

Particularly in a language without static types, you want to be able to look at an unfamiliar piece of code and see pretty quickly what types it is using. The semantics of mutable and functional collections are similar enough that using the same names is going to be very confusing, particularly in code that uses both kinds of collection -- and such code will definitely exist.

It's important that the names convey the semantics of the operation. Java's 'BigInteger' is a good example of this being done wrong -- the addition operation is called 'add', for example, and I have read that some newbies call it without assigning the return value to something, expecting it to be a side-effecting operation. I think that if it were called 'plus', such an error would be much less likely. We're used to thinking of "a + b" as an expression that returns a value, rather than an object with state.
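JavaScript's own arrays already illustrate the hazard described above: a non-mutating method is easy to misuse as if it mutated. A minimal plain-JS sketch (not Immutable.js code):

```javascript
// Array.prototype.concat is non-mutating, much like BigInteger.add:
// calling it without capturing the return value is exactly the newbie
// error described above.
var a = [1, 2];
a.concat([3]);           // return value discarded; `a` is unchanged
var b = a.concat([3]);   // correct: capture the new array
console.log(a); // [ 1, 2 ]
console.log(b); // [ 1, 2, 3 ]
```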

I understand that introducing new names has a cost: people have to learn them. But keeping the old names is going to drive users nuts. If you won't change the names altogether, at least append an "F" to them or something.

EDITED to add: if you want some ideas, check out FSet: http://www.ergy.com/FSet.html


This is great feedback and a decision that I didn't take lightly.

I ultimately decided that the mental cost of remembering a new API would outweigh the potential for accidental return-value mismanagement.

It's hard to make a decision like this sans data, so I had to make a gut call. I'm really interested to hear feedback about issues encountered in practice due to this. Of course, if I'm wrong about this (and there's always a reasonable chance I am!) then I would seriously consider changing the method names in a future major version.


But it's not the same API. There is a clear and plain specification on what each of those operations do, and the methods on these objects do not do them. I get what you're thinking about re-using what the programmer already knows but this only poisons the well by adding confusion to their existing knowledge. "Push adds a value to a list, oh wait, except it depends what type of list".

It is much easier to remember that "foo always does 'a'" and "bar always does 'b'" rather than "foo does 'a' for some things but does 'b' for others", it's why we create functions and objects with different behaviors instead of nesting lots of if statements.

New objects with new behaviors should use new language.


I think there are two opposing forces: the first, as you say, pushes us away from re-using the same names, because that can cause confusion; the second, however, pushes us to re-use existing knowledge through metaphor.

This second force is ubiquitous in natural language, but common also in programming languages, where operators like "+" and "[]" are re-used in different contexts without practical ambiguity, and with the benefit of transfer of knowledge through metaphor.

So I think your conclusion -- "New objects with new behaviors should use new language" -- is a bit too strong. On the other hand, in this particular case, the context is very close (array behavior) and the only difference is the immutability, so confusion is a valid practical concern.


I agree with you, context is important here. I think you can "borrow" some of the metaphor with different but suggestive names or patterns. But you shouldn't reuse a well-known name if your implementation is not faithful to the original associations. I really think specific language/word choice is under-appreciated in programming.


You've clearly put a massive amount of work into this API, which I applaud.

One trick I use in FSet that you might want to copy is default values for maps. This is particularly handy when the range type of the map is another collection: you can make the default value be the appropriate kind of empty collection, making it unnecessary for code that accesses the map to check for a null value. In Java, for example:

  FMap<Foo, FSet<Bar>> m = FHashMap.withDefault(FHashSet.emptySet());
  // now I can do:
  for (Bar b : m.get(x)) ...
  // without worrying about whether 'm' contains an entry for 'x'.


Great feature.

Immutable.js supports something similar but at the access site:

    var m: Im.Map<string, Im.Set<string>> = Im.Map();
    console.log(m.get('foo')); // maybe undefined
    console.log(m.get('foo', Im.Set())); // never undefined

TypeScript (and soon, Flow) checks that the second arg to .get() is of the value type. Flow has the concept of non-nullable types, which will eventually let us type the return value of `get(key: K): V?` differently from `get(key: K, otherwise: V): V`.


How about just adding aliases to existing methods, e.g. plus() and add()?


I empathize with your argument that operations with significantly different behavior ought to have different names, although I think the convenience of using familiar method names might outweigh the confusion of new readers (depending on the size/scope/structure of your software project).

I'm less a fan of your argument that method names like "push" or "add" inherently imply that they mutate their caller. I see no reason for that to be the case, other than in the context of your first argument. I think that an "add" method that returns a new integer without mutating its caller accurately conveys the semantics of the operation just as much as an "add" method that mutates its caller.


That's silly. All of the methods are side-effect-free. It'd be useless and confusing to suffix some methods and not others.


Not to mention that if you see the assignment, you know that it's returning something, so you wouldn't (shouldn't) assume that the method is necessarily mutating the object. Plus I like the idea of moving towards immutable by default and calling methods that by name are explicit about mutation.


I wholeheartedly agree. Methods associated with mutation should raise exceptions. If errors aren't eagerly generated, assumptions about behaviour might be made. Not to mention this will make testing harder.

Readability is impacted as well. Building a mutable copy from an immutable should be explicit. These calls should have a very prominent call signature that is easily read, not skimmed over.

Very cool library, but I can sense lots of confusion and debugging resulting from accidental misuse.

Guava's immutables are pretty solid and serve as a great reference.


Good point. One solution to this is loud warnings in the IDE about an "unused return value". But since this is JavaScript, it could in many cases be hard for the IDE to know what type you are calling the method on. (And technically all functions in JavaScript return something, undefined, which some people might consider a feature.)

Depending on other API designs, these warnings could also drown in false positives, because some functions BOTH mutate the object and return it, and you very rarely store the return value anywhere; it's just there "for convenience" when chaining calls, such as foobar.add(123).multiply(456).subtract(789). Worse offenders are those that just randomly return something for the sake of it. Take memcpy/memmove/etc. in C, for example: they accept the destination as a parameter and also return that very same destination for no good reason. I have never found a use for that return value.

An API should be designed so that an unused return value can, in the majority of cases, be considered an error unless explicitly ignored (by casting to (void) or something). Most unused return values you find in the wild are just people too lazy to check return codes, which is about as smart as wrapping every statement in try/catch with an empty catch block.
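JavaScript itself has the "both mutates AND returns" pattern the parent warns about; for example (a plain-JS illustration):

```javascript
// Array.prototype.sort both mutates the array in place AND returns it,
// so a blanket "unused return value" warning would misfire here.
var arr = [3, 1, 2];
var sorted = arr.sort();
console.log(sorted === arr); // true: same object, mutated in place
console.log(arr);            // [ 1, 2, 3 ]
```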


Not sure if I agree - it's more natural to have only immutable collections if taking a functional approach. E.g. see Scala or Haskell. Does strong typing change that? It's very clear if your whole code base has no side effects what is happening when and where.


> It's very clear if your whole code base has no side effects what is happening when and where.

And how likely is that? It's great if you start with a purely functional project, but in most projects you're starting with an existing codebase, and you don't always have the time to go back and change everything when you figure out a better way to do it.

So I don't think it would be that unusual for someone to want to use immutable objects alongside "normal" mutable objects. But it would be harder to distinguish the two in code when the methods are all named the same, particularly when you don't have type information at hand.


I suppose Ruby is the example where you have map! and map. I don't know - the Java stream API is clear enough IMO. The Java 8 approach was to allow mutating functions on collections and then have a lazy, non-mutating stream API. As a Java 8 dev, I don't think it's so hard to understand that map on a Play promise is non-mutating, though - I'm not dropping requests for Typesafe to change the API.


Agreed. Related: "Methods which return new arrays like slice or concat instead return new immutable collections" seems odd to me, at least for slice. Isn't one of the advantages of immutable arrays that you can operate on slices without copying because of the guarantee that the underlying representation won't change?


Slice is done in O(log N) with very little copying. This works via structural sharing with the original List.
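For intuition, structural sharing can be sketched with a toy cons list in plain JS (Immutable.js actually uses a more sophisticated trie, so this is only an illustration):

```javascript
// A "new" list reuses the old one instead of copying it.
function cons(head, tail) {
  return Object.freeze({ head: head, tail: tail });
}
var a = cons(1, cons(2, null));
var b = cons(0, a); // "prepend": b shares all of a's structure
console.log(b.tail === a); // true: no copying happened
```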


It's great to see more implementations of persistent data structures. I think over time people will begin to see that they allow us to radically rethink how we organize very complex programs that we have long believed are better constructed on stateful objects - user interfaces. It's been thrilling to see people discard many long held assumptions - the ClojureScript community has been running with UIs constructed on persistent data structure via Om (a layer over React) to startling effect - I highly recommend watching this recent demo from CircleCI https://m.youtube.com/watch?v=5yHFTN-_mOo.


Cool video. But I don't get it really :S I just skimmed through the documentation and it says:

> In return for this effort, we get powerful debugging tools

Is this the primary purpose of this tool? Or is it the undo feature?


Om is an amazing and inspiring project! thanks for writing it.


Random observation -- what's up with almost 20 of the 24 'contributors' to this project mostly having made single-edit changes to the README? Is this some kind of pervasive GitHub resume-padding scheme that I'm just now picking up on? (It will show the repo in the "Repositories contributed to" section of your profile, even for just those one-line README edits.)

https://github.com/facebook/immutable-js/graphs/contributors


As the supposed #2 contributor to the project (with 3 commits), I'd say it's far more likely that the README is simply the most visible piece of the project (or was, before the website was released today) and people contribute back typo fixes as they encounter errors.


It's possible, but it's also standard practice on github to submit even minor typos or spelling errors via a pull request, and that automatically makes you a contributor if accepted. So there's not necessarily anything nefarious going on.


Not nefarious, but definitely something to keep in mind if you recruit via github committer lists :)

I accept any reasonable pull request, even if it's a spelling fix. There have been some great bug fix pulls as well.


`git summary` from git-extras is great.

https://github.com/tj/git-extras


Here's an explanation of one of the underlying datastructures that make this library reasonably efficient: http://hypirion.com/musings/understanding-persistent-vector-...


There's also this library by David Nolen which seems similar https://github.com/swannodette/mori . Haven't used either library; someone with more experience might want to chime in on the differences/similarities between them.


I've been using this library for a project, and here are my thoughts on it:

1. Immutable data structures are a huge win. I can't count the number of times I've been bitten by a bug caused by some subtle mutation that happened in a part of my code that I wasn't expecting.

2. Using this library for data structures everywhere, as a replacement for native JS arrays and objects, requires you to have discipline in naming your variables so that you can infer their types. Things can get pretty frustrating when you try to map over a Set, thinking it's a Map, for example.

3. The most annoying thing might be the documentation, which consists of a type definitions file and liberal comments. It's ok, but hardly a great interface for exploring the API.

Overall, liking the library so far. I think with good, searchable docs (with examples of API usage) this could be something really great.


Apparently they support TypeScript, which would give you types without the pain of a naming scheme....


So, how does this compare to Clojurescript's Mori?


Performance: comparable.

Data-structure techniques: nearly the same.

API: do you like point-free functions (mori) or methods (immutable.js)?

mori is a direct compile of clojurescript's excellent data structures and functional tools (written in clojurescript, of course) to javascript. It favors a clojure-style API.

immutable.js is written entirely in JavaScript and favors an idiomatic JS-style API.


An important factor may also be: it's smaller in download size (15kB vs. 38kB for .min.gz).


I have started using this with our React.js project and have been able to squeeze out efficient DOM updates comparable to Om.

For fun I also made a library that overloads Immutable.js and JSON operations (and can do the same for mori): https://github.com/zbyte64/dunderdash


Doesn't React already do efficient DOM updates for you with their virtual DOM and its diffing mechanism? Or are you referring to something else?


React does minimise DOM mutation, but you can get even better performance by skipping the diff step completely (via the shouldComponentUpdate hook). With an immutable data structure this method can be implemented as a simple reference comparison, previousData !== nextData, so you get the absolute best perf with ease.
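The reference-equality shortcut can be sketched as a plain function (not a real React component; names are illustrative):

```javascript
// With immutable data, "did anything change?" is a single reference check.
function shouldComponentUpdate(prevData, nextData) {
  return prevData !== nextData; // O(1), valid only if data is never mutated
}

var data = Object.freeze({ todos: [] });
console.log(shouldComponentUpdate(data, data)); // false: skip re-render
console.log(shouldComponentUpdate(data, Object.freeze({ todos: [] }))); // true: new reference
```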


I think he is talking about implementing .shouldComponentUpdate(), which is more efficient with immutable data, since you can compare identities (===). David Nolen made this point with Om's TodoMVC.


What are the implications on memory usage in a long running app with something like immutable.js?

For example, let's say we have a mercury app [0], and the state of the app is based on a persistent immutable data type. It would seem that, as long as the state of the app doesn't change much, everything would work just fine, because the total number of diffs is minimal. However, what happens in the case of an app where there are event sources that produce lots of data (mouse movements, for example) and therefore can result in lots of diffs? Wouldn't this be a source of memory usage that just keeps creeping up, and for which memory is never released?

[0] https://github.com/Raynos/mercury


There should not be linearly increasing memory usage. If you find memory leaks like this, please file bugs on github issues. AFAIK there currently aren't any.

Of course, if you do something like keep an undo stack around (check out swannodette's blog for a great example of this) then you do have linearly increasing memory usage. One great property of persistent data structures is that the memory usage will be much smaller than if you kept your undo stack as copied JS values, because of structural sharing (see one of swannodette's recent talks for a great explanation of this).

In practice, if you're implementing an undo stack, you should remain conscious of memory usage. You may want to only keep a fixed maximum number of previous states around so you don't have linearly increasing memory use.
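A bounded undo stack along those lines might look like this (a hypothetical sketch; each snapshot is just a reference, cheap to retain thanks to structural sharing):

```javascript
var MAX_UNDO = 100; // cap chosen arbitrarily for illustration
var undoStack = [];

function pushState(state) {
  undoStack.push(state); // states are references, not copies
  if (undoStack.length > MAX_UNDO) {
    undoStack.shift();   // drop the oldest snapshot so memory stays bounded
  }
}
```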


Do you know of a way to limit your undo stack not in terms of undos (snapshots) but in terms of memory usage? i.e. limit the undo stack to N megabytes of memory?

Furthermore, do you know of any ways to perform automated testing for memory leaks?


In production, no. In a development environment, yes. AFAIK there is no exposed API in the browser to describe the size of the heap. However, you can expose hooks to get this information in a V8 or Node instance: snapshot the heap size, run tests, run GC, snapshot the heap again. Compare before and after in your test runner.


Hi! We would dearly love to use this in our code (a MEGA webclient subproject) but it's 51KB! Do you have ways to build only parts of this so that I can include (e.g.) only Immutable.Set? At the moment I'm using a custom immutable wrapper around a ES6 Set polyfill.


Yes, please open a github issue about this. There are some non trivial changes needed to make this possible, but this is something I would love to enable in a future version.

Also, if you're minifying and gzipping your static resources (standard practice these days), then Immutable.js is a relatively fit 15KB. For comparison, lodash is 8KB and jQuery is 30KB.

I definitely understand the desire to not include code that will never run (we take pains to optimize this at FB), but at least it won't break the proverbial bandwidth bank.


What do you think about vanilla `Object.freeze`? Any drawbacks? Here are my considerations at the moment:

Pros:

- you can use native data structures instead of continually wrapping / unwrapping

- you can use native methods on them (map, filter, ...)

- hence (almost) no interoperability problems with other libraries

- overall your coding style doesn't change so much

- (possible perf gains since the js engine knows the object is frozen? - I really don't know)

- EDIT: 0KB dependency

Cons:

- you must find / write a way to update efficiently
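For reference, the freeze-plus-copy-on-write update pattern looks roughly like this (plain JS, names illustrative):

```javascript
"use strict";
var state = Object.freeze({ a: 1, b: 2 });
// state.a = 5; // would throw in strict mode; silently ignored otherwise

// "Update" by building a frozen shallow copy with the change applied:
var next = Object.freeze(Object.assign({}, state, { a: 5 }));
console.log(state.a, next.a); // 1 5
```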


Lack of update is a show stopper. In general I would expect Immutable.js to trounce anything built on Object.freeze with respect to performance. Real hash maps and sets are useful for organizing programs; the semantics of Object.freeze on ES6 Map and Set are unclear and still suffer from the problems above. Again, Immutable.js has a big advantage.

In the context of building real applications with immutable data structures and a library like React that makes them easy to integrate - many of your concerns don't apply at all.


> Lack of update is a show stopper

Well, it's quite easy to write something like Facebook Immutability Helpers [1]. As you correctly stated, it really depends on your own use case and performance issues.

I'll try some perf experiment with [1] or something similar to see what comes out.

[1]: http://facebook.github.io/react/docs/update.html


Last time I checked, Object.freeze actually made things slower, a lot slower. It sounds counterintuitive and it might eventually change, but that's the current state in most popular browsers.


This is pretty unfortunate.

At Facebook we use Object.freeze in a number of places to express POJSO immutability. A technique we've used to deal with the performance hit is to make most of these Object.freeze calls a no-op in a production environment.

Most FB employees (incl non-devs) load the site in development mode, so we still get the stricter error messages.
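A sketch of that technique (hypothetical names; not the actual FB implementation):

```javascript
// Freeze only in development so production code skips the perf hit.
var isProduction = process.env.NODE_ENV === 'production';

function devFreeze(obj) {
  return isProduction ? obj : Object.freeze(obj); // no-op in production
}
```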


"Subscribing to data events throughout your application [..] creates a huge overhead"

So could I do without event listeners when using Immutable data? Say I have a view listening to data changes with event listeners... could I notify it about changes in data without listeners if I use immutable data?


Is it possible to use this as a drop-in replacement for Angular POJOs that hold $scope data? We're hitting a whole class of bugs which are basically due to mutability of native JS objects, and we could use something like this for the most complex ng-controllers.



I've never tried this myself, but anything is possible.

If you get this working, and want to write a blog post about it, let me know!


I think I'm getting the example wrong. Wouldn't this:

    var map = Immutable.Map({a:1, b:2, c:3});
    map = map.set('b', 20);
    map.get('b'); // 20
give 2 instead of 20, since b is immutable?


Only the map variable points to an immutable object. The second line reassigns that variable to a new (also immutable) Map instance, which is an exact copy of the original except that b has been changed.

Does that make sense?


To put it further: Rich Hickey had a talk about this, though I don't remember which one. Most languages (unfortunately those we usually start with) conflate _identity_ and _value_, even though they're completely different notions. That was OK with mutable structures, but immutability brings the problem to the front.

A map is a value; it never changes. When you add something to a map, the old map doesn't change, only the new one does (in that it is created). We have as many values as there are transformations. Of course libraries and GCs will make sure that we use physical resources as efficiently as possible.

An identity, on the other hand, links a _label_ to a _value_ at a certain point in _time_:

- the label 'mymap' is first associated with the value '{"a":true}', then with the value '{"a":false}'

- the label 'Tim's Birthday' is associated with the value '1970/01/13' forever (people's birthdays rarely change)

- the label 'President of the US' is associated with the value 'Barack Obama' for the moment, it was different a few years ago, and will be different in a few years.

Going back to the example, we have a single identity, with label 'map', that takes 2 different values over time.


To put what rtfeldman said in code:

    var map = Immutable.Map({a:1, b:2, c:3});
    var map2 = map.set('b', 20);
    map.get('b'); // 2
    map2.get('b'); // 20


> Immutable always returns itself when a mutation results in an identical collection, allowing for using === equality to determine if something has changed.

But any smart equality-testing function would already include this.


Was this really hosted on Facebook initially? I remember this library before May, which is the date of the initial commit.


This was always Facebook's library, but it didn't always have a pretty homepage. You might be thinking of mori? https://github.com/swannodette/mori


Ah, I was thinking about "immutable.ts":

https://github.com/srenault/immutable.ts


What is the benefit of having immutable variables that are just fake-built at runtime? At compile time, knowing some variables are constant and will not change would give the interpreter/compiler a lot of optimization chances but does this apply the same at runtime as well?

The website says "efficiency" because immutable variables wouldn't need to be deep copied, but would deep copy operations really be that frequent in JS?


Say you have this code:

    var x = {a: 1, b: 2};
    y = foo(x);
    // What is the value of x.b here?
With mutable data structures, `x.b` might have changed. If you make `x` an immutable data structure, then you can be sure that `x.b` has not changed.

This is a trivial example, but for larger programs it's going to be quite valuable to know that these kinds of changes have not happened. Maybe `foo` needs to "modify" the data structure in order to compute the final result.

With mutable data structures, what `foo` has to do is create a copy of the data structure `x`:

    function foo(x) {
        x = deep_copy(x);
        // The following assignment changes the copy,
        // no effect on the original parameter.
        x.b = 5;
    }
I just made up that "deep_copy" name -- I think you get what I mean. I don't know how it would be written in JS.

I think you can see that "deep_copy" is going to be an expensive operation if `x` is a large data structure.


I'm afraid I don't get your point.

So deep copy is expensive, and if the variable is immutable and has to be changed, it would have to be deep copied. So immutable is bad?

Or are you trying to say that even if it's expensive, the original variable not being corrupted is more important? I know there would be some variables that shouldn't be changed in some programs, but for that need I don't think immutable variables are necessary.

Shouldn't we focus more on the aspect of performance? Thanks for the input! Please enlighten me.


So, there are two cases. Situation 1: when you want to modify the original structure and use it with only its new value. Situation 2: when you want to share the original with a variation, but continue to use both the new and old structures simultaneously without them affecting each other.

So from your comment, you seem to be more familiar with situation 1 and you don't seem to be considering situation 2. However, situation 2 does arise quite often in the building of highly concurrent applications.

If you don't understand the benefits/need of situation 2, I can elaborate on that but right now I'm just going to explain the pros/cons of Immutability in both situations.

Situation 1: You're correct, immutable data structures are more expensive in this context. They are sometimes preferred anyway (I, for example, prefer them), but that can and should be debated and I don't want to get into it. However, they are not as expensive as you are making them seem; there almost never needs to be a full deep copy with immutable data structures (keep reading).

Situation 2: let x = { y: y, z: 1 }; let x2 = copy(x); x2.z = 0;

Now when x.y.a changes, you don't want x2.y.a to change. To make this guarantee (that x and x2 can change state independently of each other), "copy" needs to be an expensive deep copy, i.e. x2 = deep_copy(x).

However, if you were guaranteed that y and its nested children were all NEVER going to change (i.e. immutable), you can now optimize: x2 = shallow_copy(x). Why? Because you still have the same guarantee: x.y.a will never change, thus never changing x2.y.a. x and x2 are then always independent, even with a simple, inexpensive shallow copy.

Really quickly, to go back to Situation 1: I hope you see here as well that the majority of the time you don't require deep copies. For example, ImmutableMap.put(k,v) just does a shallow copy at depth=0, since you are only mutating the HashMap itself. I am not an expert and so I'm not sure, but I think good ImmutableMaps have been optimized even further.
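The Situation 2 optimization above, sketched in runnable form with Object.freeze standing in for a guaranteed-immutable value (identifiers are illustrative):

```javascript
var y = Object.freeze({ a: true });      // y and its contents never change
var x = { y: y, z: 1 };
var x2 = Object.assign({}, x, { z: 0 }); // shallow copy only, y is shared

// x and x2 are independent even though they share y:
console.log(x2.y === x.y); // true: shared, and safe because y is immutable
console.log(x.z, x2.z);    // 1 0
```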

Anyways, I hope that helps.


One benefit is that you can do simple O(1) equality comparisons in UI libraries like React to determine whether data has changed (and thus whether UI needs to be refreshed).


This doesn't necessarily work. Consider scenarios where you do one edit and then effectively undo it before the redraw. Or, more realistically, two rapidly successive edits effectively undo each other.

Also, one could just as easily keep a dirty bit oneself.


Of course there's no way to know in O(1) time if you've done two edits that just so happen to sum to a no-op, regardless of whether you're using Immutable.js or not.

However, depending on how you implement undo, you might be in good shape with Immutable.js. For example: you might keep a stack of the last few changed states around, and an undo could just pop off the stack, in which case you can know if oldData === newData in O(1).

---

Keeping dirty bits around to determine when you need to operate on your data again is totally viable, there's nothing wrong with that approach, especially for smaller applications. Some frameworks designed for large applications even employ this technique.

For larger applications, in my experience, the dirty bits tend to add up and create a lot of state management overhead, and soon you find a majority of your code cautiously stepping around mutable state instead of just making your application do what it's supposed to do.

The primary thesis of Immutable.js (and persistent data structures in general) is to illuminate the option of having data which promises to never change and thus making memoization trivial. If you have an application which can take advantage of memoization for real performance improvements, then using these kinds of structures can be a big win.


If you are doing a stack, that trick works fairly easily with a mutable stack, as well.

Regardless, I was not trying to toss out immutable structures with the bathwater. They are both incredibly cool and useful. They are usually fairly memory intensive, though that is less of a big deal today than it was in days past. Also, fairly cache unfriendly. For many applications, this is not a main concern. (32-way branching vectors, I'm looking at you.)

And yes, if you have a plethora of dirty bits, that could be difficult. If you have a plethora of immutable collections, that will lead to the same trouble. Consider: the optimization at stake here is essentially adding a single "isDirty" flag to existing collections that is only set true on modifications and has a "clear" method. (There are other ways this can be done; I'm just going for the easy way.) Far from difficult to encapsulate, and you get the O(1) dirty checking for cheap.

And I agree that applications with many dirty bits to check are difficult. You don't exactly dodge this with immutable collections. That is, in my experience, larger applications that have a lot of immutable collections tend to create a lot of management over old/new collections. Not shockingly, the trouble seems to be when you expand the scope of what you are doing wider and wider. Not necessarily how you are keeping track of modified collections.


Yeah, that's true for O(1) === comparison. The library does also provide Immutable.is(foo, bar) which does nested value checking, but that's obviously not O(1).


I don't follow. What's wrong with effectively undoing an edit before a redraw? Why do I care about the intermediate state?


I think taeric is saying you "effectively undo" an edit, meaning that you could modify the state to a new state that happens to be the same as the old state (so it'll be a different object).


That was exactly what I was saying. And apologies for my tone. I did not intend this to be a reason to toss immutable collections. I just don't think "easy dirty checking" is a large bonus for them.


The big thing is Object.freeze() which allows efficient object handling.

The trouble is that arrays must look and act like objects. Objects must look and act like hashes, and hashes are very inefficient. The solution is to implement arrays as arrays or structs and objects as structs or static classes. This sounds good, but what happens when that array changes size or a property is added to that object?

The answer is that the jit must constantly check for this behavior. If it sees this happen, it has to throw out all the optimized code it has generated and start over again (reducing performance in the meantime and using more power on mobile as well).

If the object cannot be changed, then the compiler can make more guarantees, so it can optimize sooner, more reliably, and is less likely to need to fall back to unoptimized byte code.


Oh, I failed to remember there was Object.freeze().

This is something the interpreter (just-in-time compiler, to be more exact) would utilize as hints. But does this mean Immutable.js does Object.freeze()? And if so, why do we need a new library that does the same thing?


I think there are two concepts being confused here - this is not the same as `var` vs. `const`/`let`.

Immutable.js implements persistent data structures, which are entirely runtime concepts.

---

Deep copying isn't very common because it's usually a pretty obvious flag that performance might suffer; however, it's the only other way to implement this same concept, and that pattern is definitely found in applications which desire persistent data. You might hear this referred to as "defensive copying".
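A minimal sketch of defensive copying, using a hypothetical `getItems` accessor (the names are illustrative, not from any library):

```javascript
// Defensive copying: hand out a clone so callers can't mutate internal state.
function getItems(state) {
  return state.items.slice(); // shallow copy of the array
}

const state = { items: [1, 2, 3] };
const items = getItems(state);
items.push(4);                   // mutates only the copy
console.log(items.length);       // 4
console.log(state.items.length); // 3: internal state unchanged
```

Note that every accessor call pays the O(n) copy, which is precisely the cost persistent data structures avoid through structural sharing.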


I've been using this (v2, though) rather extensively the last few months so I figured I'd share some experiences.

First off, it works great. The TypeScript definition file also works fine as API documentation of sorts, although it would be nice if a web site with a clickable TOC could be generated from it.

I was a little confused at the beginning how the "Sequence" base class (changed name to Iterable in v3) was an arbitrary key=>value mapping (with no apparent relation between the keys and the ordering of the key/value pairs) that I had expected would be called "Map". The name change made this clearer for me: it's basically an abstract interface, not a concrete datastructure.

The TypeScript support, to my experience, does not work as advertised on the tin, especially when you nest data structures. For example, code like this compiles fine:

    var q = Immutable.Map<string, Immutable.List<number>>();
    q.set("hello", Immutable.List([1,2,3])); // typechecks until here
    q.updateIn(["hello", "wrong"], false); // shouldn't be possible, but is.
Similarly, if you use Immutable.Record to build immutable objects, then you can still call set() with any key because the key is a string instead of something TypeScript could typecheck. Meanwhile, using direct property access on Immutable.Records, like this, will fail:

    var Banana = Immutable.Record({length: 0, colour: "yellow"});
    var someBanana = new Banana();
    someBanana.curve = 5.0;         // should fail, but doesn't
    console.log(someBanana.length); // fails, but shouldn't
The last line will fail to compile in TypeScript even though it works in Immutable.js. I managed to write a wrapper for Immutable.Record() that "solves" this, but at the cost of highly verbose repetition for each record class.

I tried very hard to "fix" this, and I have become aware that this is nearly entirely due to limitations of TypeScript's very poor generics support. It really couldn't be any better than it is. But still, the front page claims great TypeScript support and I'd say it's half-assed at best.

I chose Immutable.js over Mori because I think the API is slightly less verbose and more JavaScript-ish. I think this is mostly a matter of taste.

I found that I kept forgetting the names of methods, or being unsure about their exact signatures. Also, I often use plain JS data structures in "little" parts of the code (stuff that doesn't persist across function calls); I use LoDash and ES5 built-ins to manipulate those, and these functions all have subtly different names and semantics again. I suspect that very few codebases won't do this in practice: mixing Immutable.js data structures with native JS data structures all over the place, with a need to manipulate both here and there.

I think transducers may be an opportunity there. Our very own jlongster has a nice writeup [1] of how they enable you to use the exact same data manipulation syntax for both native JS data structures and Immutable.js collections at no performance cost. I think that maybe Immutable.js should embrace this and release an alternative build of the library that has absolutely no manipulation functions except the hooks that transducers need to consume and produce Immutable.js collections.
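For intuition, here is a hand-rolled toy version of the transducer idea in plain JS (a sketch of the concept, not the transducers.js API):

```javascript
// A transducer transforms a reducing "step" function, so the same composed
// transform works for arrays, Immutable.js collections, streams, etc.
const map = f => step => (acc, x) => step(acc, f(x));
const filter = p => step => (acc, x) => (p(x) ? step(acc, x) : acc);

// Compose: double each value, then keep only those greater than 2.
const xform = step => map(x => x * 2)(filter(x => x > 2)(step));

// One possible "step": build an array. Another could build an Immutable.List.
const intoArray = (acc, x) => { acc.push(x); return acc; };

const result = [1, 2, 3].reduce(xform(intoArray), []);
console.log(result); // [ 4, 6 ]
```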

Finally, I've been doubting whether Immutable.Record is worth the hassle. Maybe a single function that can easily clone a JS object but change one field is more than good enough. For example, with lodash/underscore this function would be a one-liner like:

    var newRecord = _.merge({}, oldRecord, {someKey: newValue});
(there might be a "shorter" way, I find the underscore API horribly obtuse and difficult to figure out, but that's not the point here)

The biggest disadvantage of using vanilla JS objects over Immutable.Record is that you can't be sure they won't mutate if you forget to Object.freeze them. I doubt many record-type objects have so many fields that cloning them has a larger performance overhead than Immutable.js's internals. I'm curious what other people's ideas and experiences are here.
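For comparison, a minimal plain-JS "record update" helper along the lines suggested above (the `update` name is made up for illustration):

```javascript
// Shallow-clone a record, override the given fields, and freeze the result.
function update(record, changes) {
  return Object.freeze(Object.assign({}, record, changes));
}

const oldRecord = Object.freeze({ name: 'Ada', score: 1 });
const newRecord = update(oldRecord, { score: 2 });
console.log(newRecord.score); // 2
console.log(oldRecord.score); // 1: the original is untouched
```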

[1] http://jlongster.com/Transducers.js--A-JavaScript-Library-fo...


I'm not happy with the Records impl yet. This is great feedback and resonates with some things I've already been thinking about. I'm especially not happy about how Records play with Typescript at the moment.

Also, please keep the feedback coming about TypeScript. I'm sorry you feel it's half-assed; I'm trying to make the best of what TypeScript gives me. You might notice that the .d.ts file is full of comments where a more expressive type system would allow more accurate type information.

If you have concrete suggestions for improving the TypeScript definition file, please please hop over to github.com/facebook/immutable-js/issues and write them up.


Hi! Just to be absolutely clear, I think that it's completely TypeScript's fault that Immutable.js's TypeScript support is half-assed - you could rather say that TypeScript's support for immutable, well-typed data structures is half-assed. My only complaint about your work in that entire story is that the site somewhat implies that it works great in TypeScript, whereas my personal experience is that it doesn't. I spent weeks trying to use it well with TypeScript, then dropped it and went for vanilla JavaScript instead. I haven't looked back :-)

I'll see if I can dig up any of the stuff I did to make it work better with TypeScript, but I don't think I could find many substantial improvements to the .d.ts only. What I ended up making was wrappers for some stuff that had more cumbersome syntax, but as a result did allow for more type checking.

About the records, I believe that I might have a couple of ideas. I'll start an issue on github to discuss it.


Thanks for chiming in on github!

By the way, I had the same reaction to TypeScript. The internals of Immutable.js were originally implemented in TypeScript, but being forced to work with a subset of JavaScript under an imperfect type system was limiting, and I eventually ended up with the codebase you see today: a .d.ts to describe public API type information and a vanilla JS implementation.

Good feedback: the story of how it works with TypeScript is a little over-sold right now. The reality is that it does work with TypeScript, and, one step better, the API source of truth is defined in TypeScript (and soon human-readable HTML, I promise!).

Facebook's Flow typechecker is nearing a level of completion where we can open source it, and has a similar mechanism of .d.flow files. At that point I'll start maintaining both Flow and TypeScript definition files. Flow is currently more expressive than TypeScript but still doesn't solve all the problems you've outlined.

Both projects are really new, and one of Immutable.js's goals is to challenge the limits of the JS infrastructure we build with, including but not limited to the type-checkers.


I generated docs for Immutable.js using TypeDoc. It can be found here:

http://brentburg.github.io/immutable-docs/api/


I used these docs when I was exploring the Immutable.js API! Very helpful, but Immutable just updated to v3 (your docs are v2), which introduces some breaking changes (for example, Vector is now called List). Would you mind updating your docs?


I just updated it to v3.0.2.


???


Nice!!!


How



