So, the slowdown depends on the operation.
At the moment, comparing `immer::vector` to `std::vector` for reading, for example: using internal iteration (e.g. `for_each`) you get 1.5X, using iterators something like 2.5X, using indexing something like 6X when traversing sequentially, and something like 1.1X in random order.
Updating a single `int` element is something like 100X. (This factor decreases with the size of the updated element, though!) But then, for example, `push_back` is only 1.1X that of `std::list`, and it does not rely on amortization, making it better for latency. Considering that you get access times similar to `std::vector` plus persistence, it is a good deal. Also, with `flex_vector` you get logarithmic slicing and concatenation, making these operations faster than for `std::vector` (for big-enough vectors). I am also working now on _batch updates_ via transients: if you update 10 elements in a row, you don't pay the 100X ten times, you basically pay it once, which is very much OK for most use cases. Think of it like a 1-microsecond tax per mouse click ;)
There are lots of trade-offs one can make (in general, one can make updating a bit faster by making access a bit slower). It also depends a lot on the memory management strategy, which can be customized as well.
I'm on Win7 x64 Chrome.
With an immutable collection the sender can send the original, no copy required. The immutable collection can be shared (lock free) by any number of users.
In this case you have both reduced time and memory use.
So while immutable collections are obviously "slower" for general (mutable) scenarios, the win comes when you need less locking, less copying, and so on.
That is, nothing guarantees that foreign code doesn't modify your "immutable" data. You can start building checks into the system to make sure it wasn't modified, but you will eventually get to the point where you are basically locking on your data. Or just sending a copy of just enough of the data for the other end to work with. (Which, if you are at all distributed, can make some sense anyway.)
I get it, in that if you are sticking to the contract, then immutability leads to this. So, in a very real sense, it helps an individual (or small team) stick to the convention of "initialize and then use" for data structures.
None of this is to say that immutable data cannot be really useful. It can be. Awesomely so. I just get a little worried when the benefits are touted as absolute.
To me, the much more interesting aspects of immutable structures are the ones that let you run something in an append-only way, or in a way where you can effectively recreate a local history of an object from the data. (Neither of these things is "new", btw. In a simplistic sense, this is rediscovering that assoc lists can be useful.)
However, this can be customized in the library. There is a brief description here: https://sinusoid.es/immer/memory.html
For example, one may choose between thread-safe and thread-unsafe reference counting (the latter is much faster!). One may also plug in a conservative garbage collector, like `libgc` (https://github.com/ivmai/bdwgc). While reference counting is often considered bad for immutable data structures, I have found that this is not the case: it has very interesting interactions with move semantics and _transients_ (a feature that is still in the design phase, though). I should write more about this somewhere, maybe in a blog or even as a paper.
Don't get me wrong, the GPL isn't the "worst thing ever" or a "virus", but I feel it's best suited to full applications rather than libraries.
However, many of the data structures in that book do not translate well outside the world of Haskell and ML. This library is mostly based on the work of Phil Bagwell, Rich Hickey, Tiark Rompf and others. Phil worked over the last decade on building immutable and/or concurrent data structures that are "pragmatic", i.e. cache-efficient and supporting mutable facades. I recommend reading his work on Hash Array Mapped Tries and Radix Balanced Trees.
Are you planning to use some C++17 features, or strictly stick with C++14?
Anyways, the _implementation_ of the data structures themselves does not really benefit that much from C++17 (well, there might be some interesting stuff that I could do with hazard pointers...). However, people using the new concurrency stuff from the standard might really want to look into using immutable data structures to pass data between tasks.
One way to help me choose one of the most liberal licenses is to get your company to join the sponsorship program ;)
Anyways, I and many others use plenty of GPLv3 software every day, let's agree to disagree on the "useless" part...
According to https://adtmag.com/Blogs/WatersWorks/2014/01/Worldwide-Devel..., in 2014 there were 18 million coders worldwide (so few!), of which 11 million were professionals and 7.5 million hobbyists. Hobbyist does not have to mean open source, but professional does not have to mean proprietary either. At the same time, half of all open source software was written by professionals (http://www.techrepublic.com/article/for-50-percent-of-develo...). I am not aware of any complete stats on how many open source developers exist. And one should not forget that a professional C++ programmer can also use a library like this for a FOSS project.
We are getting OT. It is okay to ask about the license, but it is not okay to act like a GPL project is worthless.
For what it's worth, I urge you to keep GPLv3 :)
What is a postmodern data structure?
The half-serious part is that there has been a trend in the last few years in the "modern" C++ community against Copy-On-Write, heap allocation, shared pointers, etc. Immutability kind of brings new life to these concepts...
(and yeah postmodernism and post-structuralism are related, but like everything in this area, It's Complicated)