Immutability is not enough (2016) (recurse.com)
148 points by jt2190 10 months ago | 60 comments



While making the state immutable made it more visible what code was affecting the state, the end result is still multiple pieces of code directly affecting what is essentially a global state. Sure, a copy is passed from one function to the next, but there is still one 'current' state and everything is messing with it directly.

The real problem here is not the mutability of the state, it is the ownership of it. Who is responsible for keeping the state internally consistent? In this code the answer is: no one.

To solve this problem, there needs to be a clear owner of the state, and that code should be the only code directly affecting the state, and be responsible for keeping the state internally consistent.

Whether this 'owner' is a collection of functions that operate on a global state in a language like C, or on a state passed in and returned, or an object in an OO language, or whatever, doesn't really matter.

For example: moving the character and collision detection should not be two separate functions that each affect the state but can be called separately (or in the wrong order) and leave the system in an incorrect state. Only the code responsible for modifying the state should do so, and it should guarantee to leave it in a correct state on returning. Moving without collision detection can leave the system in an incorrect state and thus should not even be a function that exists.
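A minimal sketch of what that could look like (Python; the names and the wall representation are hypothetical, not from the article):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class State:
    player_x: int
    player_y: int
    walls: frozenset  # cells that block movement

def move_player(state: State, dx: int, dy: int) -> State:
    """The only function allowed to change the player's position:
    it returns either a new valid state or the unchanged one."""
    nx, ny = state.player_x + dx, state.player_y + dy
    if (nx, ny) in state.walls:
        return state  # move rejected; state stays valid
    return replace(state, player_x=nx, player_y=ny)
```

There is no separate "move" and "check collision" pair to call in the wrong order; the invalid intermediate state never escapes the owner.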

When designing a system this is always something I keep in the back of my head: who is responsible for what? Once you have that clear, things become much easier.


There are two (good) ways I know how to wrangle ownership of information: where, or when. But in all of the sane systems I know, at the end of the day it’s really all “when”.

If there is no "where" for state alterations and they can happen any time, then you are in full global shared state anarchy mode, which some people seem to be perfectly fine with.

If the system of record is the source of authority then “when” is at write time, regardless of who does the write.

If you know when the data was last altered, you can reason about every interaction that happens “after” because what you see is what you get.

The smartest thing about Angular was that there was a layer of the code - the services - that was expected to do all state transformations on data from the server. Anything in your app was “after” so you could trace the interactions by reading the code.

Plus, it was easier to convince the REST endpoint to do the transforms for you because you had a contiguous block of working code that explained the difference between what you got and what you wanted. A few sniffs at the data to determine if the modifications had already been made was all the migration strategy you needed. If the transform was cacheable upstream, or found its way into the database, all the better.

The upshot is that if you don’t know a priori what information a unit of work requires, then you don’t have an information architecture. And if you don’t solve that problem then you’re going to fall into a concurrency tarpit that often gets called Cache Invalidation Hell, but that’s just the dominant symptom.


> But in all of the sane systems I know, at the end of the day it’s really all “when”.

Aren't databases basically "where"?


Only if your logic is all stored procedures. It doesn’t matter what the database says if you mangle the read operation.


What if the database itself contained all business logic? (but no plumbing)

Every table has a function that is the only function that can write/insert into that table. The functions themselves are just records in a specific table, which the database schedules to run based on global access activity (it prefers to prioritize functions that consume more records than they produce, to keep space tight).


But what language constructs are the most universally efficient at expressing the distribution of that ownership responsibility in a way that everyone can both understand and agree upon?

Objects with getters and setters?

Constant self-reflection?

Serialization and schema?

Complex query engines in smart databases?

A political process of trust management?

Or maybe just the soothing chaos of an evolving bio-electro-mechanical planetary supersystem of expanding consciousness.


This is what state machines/charts are for. They prevent you from entering invalid states, and take responsibility for all changes in state.
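A minimal sketch of the idea (Python; the states, events, and transition table are made up for illustration):

```python
# Every legal transition is declared up front; anything else is rejected,
# so the system can never wander into an invalid state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state: str, event: str) -> str:
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        raise ValueError(f"illegal transition: {event!r} in state {state!r}")
    return nxt
```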


> Who is responsible for keeping the state internally consistent?

What does this even mean?

> The real problem here is not the mutability of the state, it is the ownership of it.

1. Start with an initial state

2. Pass a copy of that initial state into a function, let it return a slightly modified version of that initial state

3. Pass that slightly modified version into another function, wash rinse repeat

The only ownership that is happening that needs to be worried about is the parent function passing a copy of state to a child function for the duration of that call, right?


> What does this even mean?

It means that you need to concentrate the actual code that manipulates the state and perform all state changes through this code to ensure that the state is always correct. OO solves this through the concept of encapsulation, but that is just one way of doing it.

If you have code all over the place that can manipulate the state, then it becomes extremely hard to ensure that the state remains valid.

I'm not talking about the ownership of the particular instance, I'm talking about what code is allowed to, and responsible for, making alterations to state instances in general.

If I ask you what code can make changes to state, you should be able to point to a small-ish part of your codebase and say: only these functions can make alterations. Each of those functions should guarantee that the state they return is a valid state, that is: they are responsible owners.

All functions that operate on the state have the main responsibility to keep the state valid (e.g. no player character inside an object). Each specific function has additional responsibilities (e.g. move the character if possible).

In the example, the move function can take an existing valid state, and turn it into an invalid state, the player can be moved inside an object. So you when you think about what that function's responsibility should be: it is to attempt to move the main character to a new, valid position. If you think of it like this, you quickly realise that the move and collision detection functions should be combined.

> The only ownership that is happening that needs to be worried about is the parent function passing a copy of state to a child function for the duration of that call, right?

If you're passing a copy of state from one function to the next, and every function can just modify it in whatever way it wants to, then you basically have globals but with more copying.


I think the parent post is suggesting a lack of cohesion, e.g. the things changing state are sprinkled around in too many different places, which makes it hard to reason about.

The solution really depends on the design pattern, so the message tends to be fairly vague. From an OOP perspective, maybe more state modifiers should be instance methods or at least belong in the same namespace.


I feel like what is confusing you is that the parent is talking in broader/more general terms, rather than about specifics, i.e. the state as an object which you can pass into functions. His point, I believe, is that what causes problems/bugs is often that partially invalid state is shared, not so much specific implementation details like "are the modifications to the state visible because of mutation or because a copy of the state is passed around explicitly".


> What does this even mean?

He's referring to the Single-responsibility principle [1]

[1] https://en.wikipedia.org/wiki/Single-responsibility_principl...


I am super bad with terminology so I'll apologize beforehand.

I've found that a good way to avoid this ownership conflict in OO is to categorically prohibit any public accessors to _inherited_ variables, be it at construction phase or later, be it passively (via setters) or actively (via observers).

And there should be only ONE provider of said value. Also, I've found it is sometimes better to have a hot spot where all nodes converge and use it as a nursing node, and JUST THEN fork this nursing node into every, let's say, "logic gate requirement" node (with a cached state each).

This is a good approach IMO as long as these smaller nodes are required by more than 2 observers; if not, then a simple specialized observer is the way to go.


Can this be summarized as „Composition > Inheritance“?


> Moving the character and collision detection

This sort of thing is so domain specific and idiosyncratic, the right answer is "follow tradition."


In statically typed languages you declare x an unsigned 8-bit integer. Everybody may write to x, and still you'd never anxiously expect x to be anything but an unsigned 8-bit integer.

There is nothing wrong with "multiple pieces of code" "directly affecting global state" if your business rules are encoded in such a way.


That works because an unsigned 8-bit integer is one piece of data.

But the problem we're kind of discussing is a group of objects and/or characters moving in a shared environment. You can't represent that as one 8-bit integer. Instead, it's a bunch of pieces of data: x- and y-locations (and maybe z, as well), extents, x-and y- (and maybe z-) velocities. You can easily get that into an inconsistent state - two objects occupying the same space. This is the point of "not many places write the data": To keep the data in a consistent state, you just have to get a little bit of code working right. To debug it when the data is in an inconsistent state, the problem can only be in a few places.

If you're in a multithreading situation, it gets even worse. Yes, it still works for your unsigned 8-bit situation, because it can be written in one assembly instruction. But if your data takes more than one assembly instruction to write, you have to worry about threading. If there are only a few places that write the data, you only have to protect a few places to keep the data from being corrupted by threading issues. (You might also have to protect the readers, so that they can't read it halfway through a series of writes...)


> That works because an unsigned 8-bit integer is one piece of data.

No, the same holds true for any struct as well. If you declare x to be {a: uint8, b: uint8} you cannot magically turn it into {x: string, y: string, z: string}.

The real problem is that in most languages the expressiveness of this system is severely limited. You cannot do it in Javascript at all, but Typescript gets you very far.


> Instead, it's a bunch of pieces of data: x- and y-locations (and maybe z, as well), extents, x-and y- (and maybe z-) velocities. You can easily get that into an inconsistent state - two objects occupying the same space.

I think that's precisely the OPs point. You model the data (with types) in a way that this invalid state becomes unrepresentable:

    Map[(x,y) -> object]
Now you simply can't have two objects at the same coordinate in your game. Of course you can now come up with more constraints and you will then have to refine your types (and create new ones) to match these constraints.

Certain constraints might be hard to express as types in many languages (especially most mainstream languages lack here), but that's not a general problem of the approach but rather a problem of specific languages - for which you then have to find alternatives.
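A minimal sketch of the Map idea in Python (names hypothetical): since the coordinate is the key, "two objects in one cell" simply has no representation.

```python
def place(world: dict, pos: tuple, obj: str) -> dict:
    """world maps a coordinate to the object occupying it."""
    if pos in world:
        raise ValueError(f"cell {pos} already occupied by {world[pos]!r}")
    return {**world, pos: obj}  # new dict; the old state is untouched
```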

> If you're in a multithreading situation, it gets even worse. Yes, it still works for your unsigned 8-bit situation, because it can be written in one assembly instruction. But if your data takes more than one assembly instruction to write, you have to worry about threading.

Not with immutable data and functional programming (and I mean that's what the article is all about). This style forces you to make state changes explicit by only making copies and, if copies are made concurrently, explicitly specifying how to merge these copies.

Compare:

    var map = Map(...)
    fill_randomly1(map)
    fill_randomly2(map)
    do_something_cool(map)
Now if the fill_randomly functions work in parallel/concurrently, you have to somehow ensure that they are called in the right way/order and that do_something_cool is called after they have finished. Or worse: if they are supposed to run after each other, then you have to be super careful about how your whole code is executed to ensure sequential execution of the parts of your code that call these functions. Which brings us to your valid point that the places that have to be checked to understand the execution should be as limited as possible.

But compare it to the pure functional style where mutation is not possible:

    immutable var map = Map(...)
    immutable var map1 = fill_randomly1(map)
    immutable var map2 = fill_randomly2(map)
    do_something_cool(???) // What do we put inside?
The compiler here will force you to give it a map - but which one? It explicitly makes you aware that you have to somehow merge the results. So let's do that:

    immutable var map = Map(...)
    immutable var map1 = fill_randomly1(map)
    immutable var map2 = fill_randomly2(map)
    immutable var map3 = merge(map1, map2)
    do_something_cool(map3)
The important thing here is that it does not matter if fill_randomly1 and fill_randomly2 are run one after each other, in parallel or somehow concurrently - it is explicitly specified how the results are merged and the result will always be the same. Also, do_something_cool is guaranteed to run after the other functions, simply because it refers to a variable that relies on the output of the previous functions, so you simply cannot run it "at the wrong time".
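For what it's worth, here is a runnable version of that last sketch in Python (fill_randomly and merge are illustrative stand-ins, with seeded randomness so the example is deterministic):

```python
import random

def fill_randomly(grid: dict, cells, seed: int) -> dict:
    rng = random.Random(seed)  # seeded so the example is reproducible
    return {**grid, **{c: rng.randint(0, 9) for c in cells}}

def merge(a: dict, b: dict) -> dict:
    return {**a, **b}  # explicit policy: entries from b win on conflict

grid = {}
g1 = fill_randomly(grid, [(0, 0), (0, 1)], seed=1)
g2 = fill_randomly(grid, [(1, 0), (1, 1)], seed=2)
g3 = merge(g1, g2)  # same result however g1 and g2 were scheduled
```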

I believe that was the suggestion of OP here.


Right, but then you can't multi-thread merge().

More to the point for the kind of software I write: What if map3 isn't immutable? What if it's something like the state of a router for a TV station, something that keeps changing? Then it can't be immutable, and you need a way to keep other threads from reading it while it's changing (or, worse, writing it while it's changing).

Or else, you say that it's immutable, and when some thread changes the state, it produces a new map3, and the old one doesn't change. But then you have the problem of getting all the other threads updated to see the new version of map3.

(And, by the way, I've worked on that router for TV stations. Modeling that matrix in a way that invalid states are unrepresentable is, um, extremely non-trivial...)


> Right, but then you can't multi-thread merge()

Not sure if I understand what you mean. It would be possible both to call "merge" in parallel and to implement merge to do the work in parallel as well.

> Or else, you say that it's immutable, and when some thread changes the state, it produces a new map3, and the old one doesn't change. But then you have the problem of getting all the other threads updated to see the new version of map3.

Yes, but this "problem" is exactly the beauty of this style of programming. It forces you to make your data-flows explicit and hence easy to discover, understand and manipulate/change by other developers.

> And, by the way, I've worked on that router for TV stations. Modeling that matrix in a way that invalid states are unrepresentable is, um, extremely non-trivial...

I'm not saying it is trivial. But it is possible and depending on the programming language it is actually surprisingly easy. Not only that, with a good typesystem, you not only prevent invalid state from occurring, you even get a lot of support from your IDE due to the extra information it has.

Here's an example of how that can look for matrices and functions to manipulate them (like transpose): https://youtu.be/DRq2NgeFcO0?t=1137

The syntax might be alien to you, but I hope it still shows what's possible at compile-time before even running the program with modern languages today.


This is an odd complaint. Immutability doesn't make order of operations go away. (x+1)/2 is not the same operation as (x/2)+1.

I find the tangent into effect systems at the end to be somewhat ironic, given what it follows. Effect systems also don't make order of operations not matter. The order in which effect handlers are run can change the semantics of code using them.

Pairs of operations don't commute in general. There's no way around knowing which order things need to apply in. (x/2)+1 and (x+1)/2 are just different operations. Nothing saves you from needing to choose which one you mean.
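In runnable form (Python, with integer division for simplicity; halve/incr are just illustrative names):

```python
def halve(x): return x // 2   # x/2
def incr(x): return x + 1     # x+1

# the two operations don't commute, immutable data or not
assert halve(incr(10)) != incr(halve(10))  # 5 vs 6
```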


I have to agree here. The author confuses what immutability is about by implicitly extending to control issues which are orthogonal to immutability. The author describes the problem very well but misuses the term "state" to describe "control".

The solution to control issues is not having your code depend on order. Logic programming, SQL, schema, type declarations, regex... Many of us use these all the time but logic/declarative programming in general is not the norm in $dayJobLang.


Exactly this ^

While effect systems are cool, they don't solve the problem described here, which is just that you need to be clear about the order of operations that are order dependent.

The question would be, what programming style lets you most simply describe the operations and their needed order?

Functional programming is often also talked to as a type of dataflow programming, and the two share a lot in common. A dataflow approach in my opinion is best suited here, and functional programming can easily be used in a dataflow style.

The author's first example is actually pretty great, it defines a pipeline of operations, the pipeline is a very clear way to declare the order of operations.

Another popular approach is instead of explicitly declaring the order using dataflow constructs like a pipeline or a DAG, that you define the dependencies on prior operations and data, and the construct infers the order from those declared dependencies.
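As a sketch of that second approach, Python's stdlib graphlib can infer a valid order from declared dependencies (the step names here are made up):

```python
from graphlib import TopologicalSorter

# each step declares what it depends on; the order is inferred, not written
deps = {
    "load": set(),
    "fill_a": {"load"},
    "fill_b": {"load"},
    "merge": {"fill_a", "fill_b"},
    "render": {"merge"},
}
order = list(TopologicalSorter(deps).static_order())
```

fill_a and fill_b are free to run in either order (or in parallel), but merge is guaranteed to come after both.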


Thanks for voicing this - in the system I work with (Swift Composable Architecture), concurrently dispatched effects are processed in the order in which they return the result of their work. The scheduler processes effect results on a single thread, so no real concurrency can happen there. If two supposedly concurrent effects mutate the same substate, then the outcome is (in theory) not even deterministic - the effect returning from their work first will „win“.


The post, while interesting, puts forward a bit of a false argument. And the flaw is seen in the title: "Immutability is not enough"

Enough for what?

If the author had finished the title, it'd be pretty clear that what they're arguing is kinda obvious.

"Immutability is not enough to solve every ordering / dependency problem in programming"

I'm trying to come up with the most charitable completion of the title, but written out they all sound pretty patronizing. (maybe someone else can do better than me?)

Now, all of that said: it is a great example of an article for someone to see the benefits of transitioning from inline imperative updates to better-factored code, immutable data, and explicit passing of dependencies.

And then finally to introduce the motivation and benefits of an effect system.

For that I applaud the author.


I don't understand how the writer didn't realise somewhere during writing this that they're conflating immutable state concerns and higher-level state validation in logic loops.

Collision detection validation and the game loop, to use the example, has absolutely nothing to do with what kind of data structure you're using, and is absolutely not what functional programming was 'supposed to help us avoid'.


I don't know if the author meant to, but he describes a problem that people run into with concurrent systems, and which creates a lot of real-life bugs if people don't anticipate it and design for it. In the author's case, the solution is trivial because it can be solved by specifying the order of updates, but in concurrent systems that isn't always an option.

Let's say instead of Manuel the Carpenter's position on the screen, we're updating the status of Monte the Money Launderer's application for a line of credit. The CreditCheck system receives an external credit report and updates Monte's application state to indicate he has passed the credit check. The Approval system sees the credit check result, sees that all other conditions have been met, and updates the application state to Approved. At the same time, a loan officer who has received a call from the FBI about Monte uses the Control system to put a manual Hold on the loan.

If the concurrent updates from the Approval and Control systems are combined naively, the application can end up in the state of Approved and Hold simultaneously. The loan officer, seeing that the Hold has been successfully placed on the application, believes that Monte has been blocked, but in fact Monte is able to access the Approved line of credit.

Like the problem described by the author, this is a pretty basic problem that most HN readers would design for from the start, but the title "Immutability is not enough" should (theoretically) select for readers who are a little bit surprised by it.
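A tiny sketch of that hazard (Python; the field names and merge policy are hypothetical):

```python
base = {"status": "pending", "hold": False}
approval_update = {"status": "approved"}  # from the Approval system
control_update = {"hold": True}           # from the Control system

# naive field-wise merge: both updates "win", violating the invariant
naive = {**base, **approval_update, **control_update}
# naive is now approved AND held at the same time

def guarded_merge(state: dict, update: dict) -> dict:
    merged = {**state, **update}
    if merged["hold"] and merged["status"] == "approved":
        merged["status"] = "pending"  # invariant: a held application is never approved
    return merged
```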


I am a little bit confused. Why do you expect immutability to solve all your problems?

Immutability is a tool. Like every tool, it has its limits. Its benefits are finite. Expecting it to solve all problems is silly -- there does not exist a single technique that can solve all problems in every circumstances.

Rather than looking for a silver bullet it is better to study various techniques, their pros, cons and applicability and build varied repertoire of solutions you know well enough to be able to predict the results and achieve high chance of success.

Don't discount techniques just because they are old and have problems. If something was popular in the past it is likely it has some merit to it -- try to understand it rather than dismiss it because it is not new and shiny.


> Why do you expect immutability to solve all your problems?

Immutability (and functional programming more generally) is often sold as solving issues related to state management. It's not unreasonable for someone to write about where the limits to that are. Especially when many of those limits overlap closely with the traditional types of problems we see in imperative / mutable state systems.


> there does not exist a single technique that can solve all problems in every circumstances.

The Universal Algorithm. Wouldn't that be something.


Reposting this, as it was mentioned by belter [1] in another discussion on Immutability changes everything (2019) [2] and I thought it was interesting enough for another visit.

Previous discussion five years ago: https://news.ycombinator.com/item?id=11388143

[1] https://news.ycombinator.com/item?id=27640700

[2] https://news.ycombinator.com/item?id=27640308


> Reposting this, as it was mentioned by belter [1] in another discussion on Immutability changes everything (2019) [2] and I thought it was interesting enough for another visit.

"Immutability changes everything" (https://queue.acm.org/detail.cfm?id=2884038) also seems to be from 2016.


I just want to point out that this is really funny to see at the same time on hn.

This article, and one from 7 hours ago:

Immutability Changes Everything (2016) (acm.org): https://queue.acm.org/detail.cfm?id=2884038

Posted here: https://news.ycombinator.com/item?id=27640308


Yes, but it's not a coincidence. Someone read https://news.ycombinator.com/item?id=27640700 and thought it was worth posting.


How is immutability supposed to help mitigate these bugs? This seems like the programmer just not encoding the allowed state changes accurately. In the real world(tm) you should always update the state either through functions that make sure you are not reaching undesirable states, or encode your state at the type level. This is completely orthogonal to immutability though.


For a second I thought this was the title of a new James Bond.


No Mr Bond, I expect you to die();


Licence to kill -9


You mean kill -9 007 :-)


Why'd he kill himself though? Unless


This was a very interesting presentation, but I think the author is too wedded to immutability to just come out and state the obvious: it's a solution in search of a problem, and there's not much good in using it as is. Yes, IF your problem can be decomposed into pure functional bits, that often makes a program easier to read. But if not, it's wishful thinking to try bolting immutability onto the problem and hope that it helps. Returning new state objects is a sign that this problem doesn't decompose the proper way to benefit from this approach!

In game dev, there’s a popular pattern called “Entity Component System” which deals with these problems quite well. The components are just blobs of pure data, like the state in the article, but instead of freezing them for no real reason, you compose systems to make them interact in predictable ways. It’s much easier to reason about, which makes it the right tool for this particular job.


The premise of immutability is that this causes bugs:

    doA()
    doB()
    doC()
and so the solution is instead of doing:

    State state
    doA(&state)
    doB(&state)
    doC(&state)
you do

    State state
    state2 = doA(state)
    state3 = doB(state2)
    state4 = doC(state3)
But obviously, the only difference between 2 and 3 is that 3 is a pain in the ass to type out. All of the benefit is there in 2 because you're explicitly declaring what it is that the functions are modifying, instead of having them modify unseen globals. There's no advantage to adding an ugly calling convention on top of that.


Here's a little immutable + effects (Re-frame) game with some physics I'm working on: https://github.com/celwell/wordsmith

See the 'pipeline' of transformations applied here: https://github.com/celwell/wordsmith/blob/0dff5446278b22a5b0...


"After experiencing the kinds of bugs I described above, it became clear to me that – despite their merits – immutable data structures are not a silver bullet."

There ARE NO silver bullets. So many developers want them, but they don't exist. It's a gigantic form of laziness that causes so many problems in software teams, and people drowning in it can't see it, or refuse to.

Good developers that see tools in a toolbox instead of constantly searching for silver bullets are rare and should be retained at great cost.


I don't understand the negativity in some of the comments. I thought this was an excellent article.

Two main takeaways for me:

- While immutability can prevent some state-related bugs, it will not prevent them if the state changes are not commutative

- I especially liked this sentence:

> This is actually very similar to the problem with imperative languages – since everything runs sequentially, it’s hard to see what parts of the code have a true sequential dependency and which parts do not.


One thing I like about this article is that it tries to evaluate how pure functional designs handle state. I've found a lot of posts give examples of programs with no state and show how elegant a functional implementation is. Most real-world programs do have a lot of state within them, so for a fair comparison, we should see how a functional design handles transitioning that state, compared to an imperative approach.


I think immutability (or immutable state) adds one big advantage over mutable state: you have access to previous state(s) and the current state.

Immutability is very convenient for checking post-conditions that compare previous states and the current state for certain properties that should hold (temporal properties for example).

In turn, post-conditions guard your states to be sane.
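A small sketch of that in Python (the "score never decreases" invariant is just an example):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GameState:
    score: int

def add_points(state: GameState, points: int) -> GameState:
    new_state = replace(state, score=state.score + points)
    # temporal post-condition: the previous state still exists, so we can
    # compare it directly with the current one
    assert new_state.score >= state.score, "score must never decrease"
    return new_state
```

With in-place mutation, the previous state would already be gone by the time you wanted to compare against it.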


Today, on episode 42 of /Putting Things In-Band Doesn't Make Them Go Away/ ...


No, you also need persistent data structures.


This is a common misconception: functional code is order-independent in its evaluation model, but if you are modeling order-dependent operations, it still lets you specify an order for them, and if you specify the wrong order for what you're doing, it'll do the wrong thing.

So you can think of it as it solves accidental complexity, but leaves essential complexity still for you to solve.

A very simple example is:

    30 - 2 * 10
This is order dependent, but it's part of the essential complexity of the problem at hand.

Now where accidental complexity would creep in is with something like:

    position - 2 * position + 10
Now first we have the essential complexity, what order is the behavior we want? Let's say here we want:

    (position - 2) * (position + 10)
In imperative we might do:

    position = 10
    position.minus(2)
    position.plus(10)
    position.times(position.value)
And hopefully you're already seeing the accidental complexity created by the imperative approach. This doesn't work, and makes no sense. It isn't just that the order of the operations we're modeling is defined wrongly; in fact you could say it's defined correctly: we want to subtract, add and then multiply. But in this case it's the mutable state itself which makes things more complex than they need to be.

So in imperative you'd have to do:

    position = 10
    firstPosition = position.value
    secondPosition = position.value
    firstPosition.minus(2)
    secondPosition.plus(10)
    position = firstPosition.times(secondPosition)
    position.value
That's a lot of added complexity due to us having to manage the fact that the state is mutable. We need three memory locations, we need to make sure that we copy the state values at the right times, and we need to make sure we are mutating the right memory locations and then combining them back together, all ourselves.

In functional you'd just do:

    position = 10
    times(minus(position, 2),
          plus(position, 10))
All you need to model is just the essential ordering inherent to the behavior you want.

Not surprisingly, most modern programming languages actually use pure functions for their math operations. That's why in Java:

    int position = 10;
    (position - 2) * (position + 10)
will work, because minus, plus and multiply are implemented as pure functions, even in Java. That said, if you use the imperative ones instead it won't work:

    int position = 10;
    (position -= 2) * (position += 10)
or if you went all imperative you'd have what I showed before:

    int position = 10;
    position -= 2;
    position += 10;
    position *= position;
Now using math operations I think shows very clearly the accidental complexity that imperative has over functional in this case. What happens is all your other operations that are not math, but related to your business logic or program logic that you also model using the imperative style suffers from this added complexity which is removed if you move to the functional style.
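For what it's worth, the arithmetic example can be run both ways in Python (we want (position - 2) * (position + 10), which is 160 for position = 10):

```python
position = 10
functional = (position - 2) * (position + 10)  # operands read once: 8 * 20

# fully imperative: every step mutates the single variable
p = position
p -= 2      # p == 8
p += 10     # p == 18
p *= p      # p == 324, not the 160 we wanted
imperative = p
```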

Now back to the article, they're expecting functional programming to somehow know the behavior they want, and magically figure out the essential complexity of it. It won't give you that, but it will get rid of a large amount of accidental complexity, and that was the point of the "No Silver Bullet" paper, that since the amount of essential complexity is fixed and inherent to your problem, only accidental complexity can be simplified when implementing said problem. Functional programming argues to dramatically simplify your accidental complexity, letting you focus all your attention on the essential parts.

Having said that, there are things that try to tame essential complexity as well, and functional programming tends to mix with them simply and easily. For example, the ideas around declarative programming, metaprogramming, interactive programming, static code analysis, and testable code all touch on the essential-complexity parts of a problem.

If you have more independently reusable pieces that you can simply declare compositions of, or rules around, it allows you to tackle some aspects of essential complexity. Metaprogramming lets you hide things behind code generation so you can more quickly reuse the parts of each essential feature that are the same. Interactive programming (like a REPL in Lisp or Smalltalk systems) gives you a quicker feedback loop to evaluate the effects of your essential problem and validate whether it's correct, letting you implement the essential complexity faster. Static code analysis can validate assertions about your essential complexity and quickly tell you if you might have made a mistake in it; it can also give you a clearer mental map of how the essential complexity is modeled, which helps you understand it better. Finally, testable code, which functional code often is by virtue of its style, similarly lets you assess the validity of your essential behavior, so you can iterate over your essential complexity more quickly.


Throw in scale, dot, angle, etc. and the functional style would be hard to reason about for me. I think your imperative example is more explicit, and you can sprinkle a bunch of console.logs between the lines.

I think pure functional programming works best when there is no state, so move the state out of the program. Only use local buffers if you need them for performance...

The order of things is very important, so make your coding life easier by making sure there is no concurrent state: use TCP for networking and single-threaded business logic.

For example, in a chat app, buffer text in a textarea, but when a user hits "send", don't render the message before it has been received by the server. That way everyone connected to the server will see the messages in the same order.
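A minimal sketch of that idea (all names hypothetical): the server is the single-threaded owner of message ordering, and clients render only what the server has acknowledged, so every client sees the same sequence.

```python
class ChatServer:
    """Single owner of message ordering: one thread, one log."""

    def __init__(self):
        self.log = []           # the authoritative, append-only message log

    def send(self, user, text):
        seq = len(self.log)     # the server, not the client, picks the position
        self.log.append((seq, user, text))
        return seq              # acked back; only now does the client render

server = ChatServer()
server.send("alice", "hi")
server.send("bob", "hello")
# Every client rendering server.log sees the same order: alice, then bob.
```

Because ordering is decided in exactly one place, there is no concurrent state for clients to disagree about.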

The more possible states a program can have, the more you will benefit from testing.


You find this:

    position = 10
    firstPosition = position.value
    secondPosition = position.value
    firstPosition.minus(2)
    secondPosition.plus(10)
    position = firstPosition.times(secondPosition)
    position.value
easier to reason about than this:

    position = 10
    times(minus(position, 2),
          plus(position, 10))
?

Are you sure? I think maybe you're just conflating syntax and semantics. Syntax can be made in different ways either functional or imperative, the semantics are what matters here.

Like would this be better for you (still functional, but different syntax):

    position = 10
    firstPosition = position.minus(2)
    secondPosition = position.plus(10)
    result = firstPosition.times(secondPosition)


TL;DR: a buggy imperative program can be mechanically translated into an equally buggy functional program.


That's now the second immutability thread (also from 2016) on the front page.

What are the chances?

Is immutability a new religion?


It's almost as if the author is telling us that "There is no silver bullet". Now where have I heard that already? ...

http://worrydream.com/refs/Brooks-NoSilverBullet.pdf


It's rather a non sequitur, as no one ever said immutability was enough.

You also need design, rigor, error handling, common sense, some caffeine, testing, and fast builds. All together... that's enough!


> fast builds

Interestingly, immutability and fast builds are often closely related, because immutability (in build systems) can be used to ensure referential transparency, which makes it trivial to implement caching of intermediate build artifacts.
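As a toy illustration of that connection (not modeled on any real build tool): if a build step is a pure function of immutable inputs, then a content hash of the inputs is a valid cache key for its artifact, and identical inputs never need to be rebuilt.

```python
import hashlib

cache = {}  # content hash -> cached build artifact

def build(source: str) -> str:
    # Because the input is immutable and the step is pure, the hash of the
    # source fully identifies the output.
    key = hashlib.sha256(source.encode()).hexdigest()
    if key in cache:              # cache hit: skip the expensive step entirely
        return cache[key]
    artifact = source.upper()     # stand-in for a real compile step
    cache[key] = artifact
    return artifact

build("fn main() {}")             # compiles
build("fn main() {}")             # identical input -> served from cache
```

Real build systems with this property (content-addressed, hermetic ones) get incremental and distributed caching almost for free, which is where the fast builds come from.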


IMHO this is an atrocious waste of computer resources, especially if the state gets big and must survive for the duration of the application. To me it looks like a crusade for the sake of an idea, everything else be damned.

There are multiple better ways to deal with state. For example, a saner approach would involve a single owner of the main state, which can serve/update particular slices of that state and broadcast state-change events to subscribers.
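A minimal sketch of that approach (names hypothetical): one object owns the state, serves reads of individual slices, and broadcasts change events to subscribers, so no other code ever mutates the state directly.

```python
class StateOwner:
    """Single owner of the main state: all mutation goes through update()."""

    def __init__(self):
        self._state = {"position": 0, "health": 100}
        self._subscribers = []

    def slice(self, key):
        # Serve a read of one slice; callers never touch _state directly.
        return self._state[key]

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def update(self, key, value):
        self._state[key] = value            # the only place mutation happens
        for cb in self._subscribers:        # broadcast the change event
            cb(key, value)

owner = StateOwner()
events = []
owner.subscribe(lambda k, v: events.append((k, v)))
owner.update("position", 5)
```

Since every change funnels through one method, the owner is the natural place to enforce invariants (e.g. run collision detection as part of any position update) before notifying subscribers.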


Modern developers have the luxury of computing power, but you're right, it can only take you so far.

I've heard far too many people say things like "[hard drives/memory/cpu] are fast, who cares about how efficient this is?", "we'll just scale up some more instances to deal with the inefficiency.", "pfft, it's 2015, the internet is fast, who cares if network calls are inefficient, microservices are the future!"

Lol. Kills me.



