Hacker News

Anyone else dislike the immutability fad?

Mutability has its place, and simply hiding it behind abstractions tacked on to the language (vars, refs, agents, and atoms in Clojure's case) isn't a productive way to deal with it. It has a lot of benefits, but the downsides are significant. Sometimes I just want to create a map and mutate its structure, and the language saying "no" is constraining me in an unpleasant way.

It's the old debate of whether the programmer should be constrained by the language, or the language should serve the programmer. Maybe it's a matter of opinion.

Also, typed function parameters are painful. Declaring a parameter as `f: () -> int` instead of just `f` requires that you think about what the signature needs to be. What if you don't know? What if you want to change it? The cost goes up significantly, and I'm not sure the purported ability to sidestep a few bugs is worth it. If safety is an issue, good test coverage can be a sufficient substitute for types.

(The article is excellent, by the way.)




> Anyone else dislike the immutability fad? [...] Sometimes I just want to create a map and mutate its structure, and the language saying "no" is constraining me in an unpleasant way.

Immutability does not prevent you from incrementally creating new versions of a map! At this point, you are conflating no less than three different and orthogonal concepts:

A) A convenient way to write code that incrementally creates new versions of a value. This is trivial in most languages using immutable values, but it often requires unfamiliar idioms, giving people the false impression it cannot be done.

B) A performance optimisation that performs such local incremental updates in place. This is a real concern. Any immutable-by-default language should support a safe (!!) way for the programmer to ensure updates happen in place.

C) A mechanism whereby new versions of values can be distributed to other parts of a program. This one is tricky as all hell to do safely, and most good solutions work just as well with immutable values.
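
To make concept A concrete, here's a rough Python sketch (the `assoc` helper is hypothetical; real persistent structures in Clojure or Haskell use structural sharing instead of full copies):

```python
# Sketch of concept A: "updating" an immutable map by building a new
# version, never touching the old one. Python dicts are mutable, so we
# simulate immutability with copy-on-write; persistent data structures
# achieve the same semantics with structural sharing, not full copies.

def assoc(m, key, value):
    """Return a new map with key set to value; the original is untouched."""
    new = dict(m)          # copy-on-write; O(n) here, O(log n) with sharing
    new[key] = value
    return new

v1 = {"a": 1}
v2 = assoc(v1, "b", 2)    # incremental update: a new version
v3 = assoc(v2, "a", 99)   # another new version

# All versions remain valid simultaneously:
# v1 == {"a": 1}, v2 == {"a": 1, "b": 2}, v3 == {"a": 99, "b": 2}
```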

"But kqr, why did you take something so simple and make it so complicated?"

It was complicated from the start. Traditional languages have just ignored this complexity and solved it in an "undefined behaviour" sort of way: "let's just do what the metal happens to do, however unsafe!" You'll note that such languages have no way to do safe in-place updates, and value propagation is never synchronised or otherwise guaranteed not to cause partial views. It's a low-level hack that has far outstayed its welcome in the age of huge systems with all sorts of interaction effects.


> It was complicated from the start.

No it wasn't. It's only "complicated" from a certain very specific and constrained POV, which you seem to have adopted wholesale and now assume as the only valid reality.

Mutability is not just how "the metal" works, it's also how the rest of the world works. If I eat a banana, the contents of my stomach (and by extension I) change. You don't get a new me that's no longer hungry + the old me that hasn't changed. If I put gas in my car's tank, the car changes. I don't get a new car. Etc.

You are also conflating immutability with atomic updates, as far as I can tell.


> Mutability is not just how "the metal" works, it's also how the rest of the world works.

This is actually false. A bit hand-wavy here, but the world does not truly consist of discrete objects changing state. That's how we choose to model it, and it's useful, but it's not the "truth" of the world / universe.

A few simple examples: viewed at the quantum level, protons, neutrons, and electrons aren't physical objects that "exist" in a unique place at a point in time. Now I'm not saying an imperative model is wrong because of quantum mechanics; I'm merely illustrating that we chose to model the world as objects with changing states because it's useful, not because it's accurate or the truth.

The ship of Theseus is another example. There really is no paradox here, as there was never a "ship" to begin with. We just chose to model that particular assembly of particles as an entity/object known as a ship, and gave it "state" that persists over time. But the only true answer is that it isn't a paradox: the universe/physics never defined a "ship" to be there. We did. And the only "paradox" is that someone found a flaw in our modeling algorithm. But again, there was never a single object there changing state.

So the long story is: judge a method of modeling only on how effective it is, because no model we use in everyday life is actually the underlying "truth". https://en.wikipedia.org/wiki/Ship_of_Theseus


> This is actually false.

Nope. Or: that is far too strong a statement.

> but the world does not truly consist of discrete objects changing state.

Really? So elementary particles don't change position, energy, velocity? Everything is static and immutable? And gets destroyed/recreated for everything that looks like a state-change to us? While I could imagine modeling the world that way, it is a description of the world that gets shredded to tiny little pieces by Occam's razor.

On a more basic level, just the fact that you can imagine a different interpretation doesn't mean that an interpretation I give is false. There is a difference between "false" and "there are other possible interpretations".

And the fact that you don't have a simple materialistic/reductive definition of what a "river" is, or any other macroscopic object, does not mean that rivers don't exist. If you can't see it, more power to those of us who can, like me, because we are going to have a much better time navigating the "real" world than you are, including taking bridges to cross those rivers rather than drowning, because after all it's just a few drops of water, right?

Or you are just pretending that rivers don't exist, and actually implicitly use this knowledge all the time to safely navigate the real world.

Which brings us back to immutability in programming: it's largely a pretense. It can be a useful pretense. I certainly love to take advantage of it where appropriate; for example, I wrote event-posters long before that became fashionable [1]. But in the same system I also used/came up with in-process REST [2], which is very much based on mutability.

So immutability is useful where applicable, but it is also very limited, as it fits neither most of the world above nor the hardware below, and trying to force this modeling onto either quickly becomes extremely expensive. Of course, many technical people love that sort of challenge, mapping two things onto each other that don't fit naturally at all. [3]

[1] https://www.martinfowler.com/bliki/EventPoster.html

[2] https://link.springer.com/chapter/10.1007/978-1-4614-9299-3_...

[3] https://en.wikipedia.org/wiki/Metaphysical_poets


With an object-oriented model of reality, then yes, you can say there is a single banana object instance whose coordinates mutate to remain inside your body, and gradually your body decrements the banana nutrient counters while incrementing its own.

With a value-based model of reality, it doesn't happen that way. In that model, both you and the banana are modeled as (time, place, nutrients) triples of immutable values. Those triples never "change" or "become old" or "get duplicated". They just are.

At 4 o'clock, my nutrient counters were low and the banana had coordinates similar to mine but not quite. At 5 o'clock, my nutrient counters were higher and the banana had coordinates equal to me. Both of these statements are valid at the same time with no conflict.

Similarly, I don't get where this "if I open the boot of my car do I now have two cars" is coming from. <Chevrolet, 8:41 AM, Boot closed> and <Chevrolet, 8:42 AM, Boot opened> coexist in peace. They're not two different car objects, they're two different immutable, factual descriptions of states. As long as they were correct in the first place, they will never become incorrect or change in any way.
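
For what it's worth, that value-based model writes down directly in code; here's a small sketch (field names invented for illustration):

```python
# Sketch of the value-based model: states are immutable facts that
# coexist; nothing is overwritten. Field names here are illustrative.
from typing import NamedTuple

class CarState(NamedTuple):
    car: str
    time: str
    boot: str

s1 = CarState("Chevrolet", "8:41 AM", "closed")
s2 = CarState("Chevrolet", "8:42 AM", "opened")

# Both facts remain true at once; "opening the boot" added a new fact,
# it did not destroy or mutate the earlier one.
history = [s1, s2]
```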


You are perceiving yourself as having mutated after eating the banana, merely because you are in temporal lockstep with your stomach.

Most of what you've just stated as "fact" and "how the world works" is a matter of perception. Time, after all, is (we believe) simply another dimension of spacetime.


"Reality is that which doesn't go away when you stop believing in it"

I gave very specific examples. When I open the trunk of my car, are there now two cars, one with the trunk not opened, and one with the trunk opened?

Now there may be two universes (many worlds interpretation), but so far there is no evidence that this interpretation has validity, and we have no way of accessing these two universes. As such, it is not a very useful description of the way the world works, as it doesn't actually visibly work that way, and very specifically visibly works differently.

> Time, after all, is (we believe) simply another dimension of spacetime

And objects move through space-time, thus changing (mutating) in their attributes. Objects aren't immutable in space-time and don't have to be destroyed/recreated continuously.


Not sure what reality you're inhabiting, but in mine, state is coterminous with identity, and mutation of state therefore creates a new entity.

That is, we are all value objects, all matter is information, all information is functional, and perception is therefore the lazy evaluation of the universe.

Note: this is especially important when you are the banana.


> Also, typed function parameters are painful. Declaring a parameter as a `f: () -> int` instead of `f` requires that you think about what the signature needs to be. What if you don't know? What if you want to change it?

This is actually my favorite thing about languages with strong type systems where type inference is less emphasized -- if you're looking at a function definition, even if you're not terribly familiar with the language, you know what the thing is supposed to do. Sure, it's nice to let the compiler infer your types at compile time or runtime, but it's really nice to just be able to read the code and see what a function is at least meant to do, based on what it returns.


...and in fact this documentation effect is (so far) the only positive effect of static typing that has had at least some scientific validation.


?

This sounds really fishy to me. Do you have more details?

Source: have programmed professionally in major static and non-static languages and as the years pass I appreciate static languages more and more for each time I waste time debugging what I have learned to think of as stupid, completely avoidable errors.


> Also, typed function parameters are painful. Declaring a parameter as a `f: () -> int` instead of `f` requires that you think about what the signature needs to be. What if you don't know?

Why are you writing the code if you don't know yet what your input will be? A function transforms input into output, how can it do that correctly when you don't know what its input is?

If you don't know the input type yet, you should first be writing the things that will deliver the input, or leave the function as an unimplemented placeholder and define only the output type.

> What if you want to change it

This is one of the best reasons to use types. Directly after the change you'll be told by the compiler which other code needs changes. Compare this to untyped languages where you'll need a lot of tests to ensure the same, or risk runtime errors.


Immutability is useful as a default, but languages should absolutely provide mutable alternatives for situations the programmer deems appropriate.

The reason immutability has become popular so recently, and the reason I say it's the best default assumption, is the rise of multi-threaded programming.

There are a growing number of strategies for dealing with mutability in a multi-threaded environment in a way that is both safe and intuitive to reason about; the two that spring to mind are the rediscovery of actors (e.g. Akka) and Go's goroutines.

Mutability certainly has its place, but so does immutability. Previously, immutability has been largely ignored in favour of the convenience of mutability. I wouldn't say immutability is being heavily favoured now; more that it's back where it belongs, as part of a binary choice.


Even in single threaded programs immutability can make things easier to reason about. I've had to look at some old Java projects and the easiest time I've had reasoning about them was a project that used mostly immutable objects. Ever since then I've used immutable as the default for my Java objects and it's been better for me. Your mileage may vary obviously


But even actors benefit from immutability. Imagine if the message you received and matched on was not the message you processed (because it was mutated in between). You either want immutability at least for message handling, or messages to be full copies, and there are tradeoffs to either choice.


Messages, of course, should be immutable for exactly those reasons.

Actors encapsulate mutable state; they essentially serialize access to that state such that access is single-threaded.


Actors don't necessarily encapsulate mutable state; they may in fact encapsulate immutable state. An actor can be treated (as in Erlang) as essentially just a(n optionally) recursive function with a mailbox, where mutation can only occur across the function boundary (i.e., when you recurse, you pass in the new value).

At this point the only benefit to allowing mutability is to allow you convenience things like for loops, and for performance reasons. From the perspective outside of the actor, the state behaves the same way whether it's mutable or immutable; you do not gain new behaviors by making it mutable (i.e., for sharing data or similar), you just make a more complex coding model (in that you're mixing mutable and immutable and have to know which is which).
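
A minimal sketch of that shape (Python with a queue standing in for the mailbox, not Erlang; the state is an immutable integer that gets rebound each loop, never mutated in place):

```python
# Sketch of an Erlang-style actor: a function looping over a mailbox,
# where "mutation" is just binding the next (immutable) state for the
# next iteration -- the loop plays the role of the recursive call.
import queue
import threading

def counter_actor(mailbox, replies):
    state = 0                       # immutable int; rebound, never mutated
    while True:
        msg = mailbox.get()
        if msg == "inc":
            state = state + 1       # "recurse" with the new value
        elif msg == "get":
            replies.put(state)
        elif msg == "stop":
            return

mailbox, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=counter_actor, args=(mailbox, replies))
t.start()
for m in ["inc", "inc", "get", "stop"]:
    mailbox.put(m)
t.join()
```

From outside the actor, you can't tell whether `state` was a mutable cell or a rebound immutable value; all you see is the replies, which is the point being made above.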


That's really more of a detail of how Erlang implements actors. In Erlang actors are used to represent mutable state.


Of course it is; my point was that if your communication mechanism between actors is immutable, there's hardly any way to differentiate mutable vs immutable within the actor...and, honestly, it doesn't really affect much, so why mix the two (since that then creates weirdnesses in how your data can interoperate and be handled; some pieces can be used as new messages, others can't, etc).


Gotcha, I misunderstood.


Immutability is absolutely fantastic as a default (especially for typical business logic). I'm not sure I want to go back to mutability except in constrained environments or time-sensitive software. Mutability should be harder to reach for because it forces you to think hard about the implications to other parts of your program. I can think of a number of times we've had to parallelize a routine or share resources between workers, and it has been almost trivial _because_ of immutability. In contrast to that, I inherited a hairball of C code using pthreads, mutexes, global variables, etc which had to be completely gutted and rewritten as it was impossible to understand the dependencies in the software (we were witnessing segfaults, deadlock, etc).


Declaring a parameter as a `f: () -> int` instead of `f` requires that you think about what the signature needs to be. What if you don't know?

(Java programmer here.) Then return a sufficiently "broad" type up to, if necessary, Object.

That said I almost never fall that far back, I'd see that as a sign that I should stop and think.

Oh, and the advantage of a typed language like Java: you can have extremely good tool support. Netbeans (or IntelliJ or Eclipse) will happily help you change types if it later turns out you were wrong.


A few points on immutability and Clojure.

The abstraction that hides mutable data structures is transient[1], not vars, refs, agents and atoms.

Those abstractions are all to do with mutable references, not mutable data. They're there to make sure that you have safe ways to coordinate state in concurrent environments.

I agree that immutability can be annoying in languages that weren't designed with it in mind, but the languages that were almost always make it easier, safer and more performant to create copies rather than mutate data.

If you're writing Clojure or Haskell and you think you're going to save yourself time by "just creating a map and mutating its structure" then you're misunderstanding the purpose of the language. These constraints are what enable the guarantees and contracts those languages make.

[1]: https://clojure.org/reference/transients


But it does save time. JS is proof of that. There are no immutable data structures and the world rolls on. People will hate on JS, but it's effective.

You can learn to get into the "immutability mindset" if you train yourself to, but are you certain it's worth the time investment? It seems like there's at least a chance that it's not.


Sure, no arguments there. It does save time in JavaScript and a large part of that is because the language has been designed around mutability.

Part of that trade-off is that JavaScript can't make the same guarantees about what happens when you pass an object into a function. It's harder to be confident that a given program is correct.

Immutability is just a part of the "simple made easy"[1] ethos of Clojure and I think most Clojure programmers will argue that taking the time to understand that philosophy _is_ worth the investment.

[1] https://www.youtube.com/watch?v=rI8tNMsozo0


Ah, so you see why there are not many clojure programmers


Any bad implementation of something is bad. Sounds like immutability in JS is just badly designed and implemented, in a way that makes it difficult and slow to code with.

Don't generalise your experience of a thing if you've only tried its bad implementations. Like, don't judge monads until you try them in Haskell. Don't judge immutability or DSLs until you try them in Clojure, etc.


Agreed that immutability seems overrated right now. The pendulum will swing back in full force, eventually.

But of course, the art is in putting mutability and immutability in the right places. There are no hard and fast rules, but in general mutability of local variables is harmless, and you probably need some mutability at the top level as well (as per: "functional core, imperative shell"). Mutability in other places can still be useful though.

I don't agree or really follow your comment on function parameters. It seems an argument against typing in general.


Re: function parameters, here's a concrete example.

Clojure's `reduce` takes a function `f` as an argument: https://clojuredocs.org/clojure.core/reduce

When you pass an empty list, `f` is called with no arguments. That way, `(reduce + [])` can return 0, whereas `(reduce * [])` can return 1.

It's very easy to write such a function in Clojure, thanks to a lack of types. What would that look like in Kotlin? A function that can either take zero arguments or takes two arguments of types specified by the input list?
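
For reference, Clojure's no-seed behaviour is easy to sketch; here's a hypothetical `clj_reduce` in Python (Python's own `functools.reduce` instead raises on an empty input without an initializer):

```python
# Sketch of Clojure's no-seed reduce semantics: with an empty
# collection, f is called with no arguments, so reducing [] with +
# gives 0 and with * gives 1, because each returns its identity when
# called with nothing. With one element, f is never called at all.

def clj_reduce(f, coll):
    it = iter(coll)
    try:
        acc = next(it)
    except StopIteration:
        return f()              # empty input: call f with no args
    for x in it:
        acc = f(acc, x)
    return acc

def add(*args):                 # variadic +, identity 0
    total = 0
    for a in args:
        total += a
    return total

def mul(*args):                 # variadic *, identity 1
    total = 1
    for a in args:
        total *= a
    return total

print(clj_reduce(add, []))          # 0
print(clj_reduce(mul, []))          # 1
print(clj_reduce(add, [1, 2, 3]))   # 6
```

Note that `f` is used at three different arities (0, 1-element bypass, 2), which is exactly what makes a precise static signature awkward.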


The usual implementation of fold/reduce takes a separate seed argument (as does clojure, optionally!), which IMO is far more sensible than having the same function do two completely separate things depending on the number of arguments.


Sure, let's roll with that.

How would you write a function parameter that takes two arguments, whose type is determined by the seed argument?

Also, with reduce, the return value of the function can determine what the type of the input arguments should be. For example if you pass in a `(fn (x y) (list x y))`, you end up with:

  > (reduce (fn (x y)
              (list x y))
            (list "a" "b" "c" "d"))
  ("a" ("b" ("c" "d")))
Let's throw in a print statement to print out the `y` parameter:

  > (reduce (fn (x y)
              (print (str y))
              (list x y))
            (list "a" "b" "c" "d"))
  "d"
  ("c" "d")
  ("b" ("c" "d"))
  ("a" ("b" ("c" "d")))
So `y` starts out as a string, then a list of strings, then a list whose first element is a string and the second element is a list of strings, and so on.

I'm not sure what the type of that function would even look like. And if there's no way to express something as simple as `reduce` without resorting to `f: (x: Any, y: Any) -> Any`, are we certain it's good design?


The type of the reduction function would just be something like

    (X, Y) -> (X Y)
     args     pair
The type of a reduce that has an initial seed is something like

     X   -> ((E, X) -> X) -> [E]  ->   X
    seed       reducer       list    result
The result is going to need to be some recursively defined type, which I'll call List,

    type List E = E | (E (List E))
Apply these together trivially

    (List E) -> ((E, (List E)) -> (List E))
             -> [E]
             -> (List E)


And this is the other thing I like about the haskell/ML collection of languages. The description you gave above is extremely concise and direct. If you're familiar with the language being used it's a very efficient form of communication.

I've noticed that working in languages of that family gives you a vocabulary to talk about things that previously would have been very wordy to discuss.

Languages with these kinds of type features provide easy abstractions that illuminate structure and patterns, letting you discuss new ideas that previously would've been considered one-off code.


FWIW, this can be inferred fully automatically. Type

    let pair = fun x -> fun y -> { _0 = x; _1 = y }

    let rec reduce = fun init -> fun combine -> fun xs ->
        match xs with
            [] -> init
          | x :: rest -> reduce (combine x init) combine rest

    let test = reduce 0 pair [1; 2; 3; 4]
into https://www.cl.cam.ac.uk/~sd601/mlsub/ to see a demonstration.


> How would you write a function parameter that takes two arguments, whose type is determined by the seed argument?

I feel like I'm missing something here. I would think it would just be

    reduce<I, O>(fn:(O, I|O)=>O, seed:I): O


How would you call it? I'm interested in how you'd specify `I` and `O`.

`I` is obviously a string, but it seems like `O` needs to be at least three different types:

  > (reduce (fn (x y)
              (list x y))
            (list "a"))
  "a"
  > (reduce (fn (x y)
              (list x y))
            (list "a" "b"))
  ("a" "b")
  > (reduce (fn (x y)
              (list x y))
            (list "a" "b" "c"))
  ("a" ("b" "c"))


`O` would be whatever the type of the reducer is, so `list` in the lisp you're using (is it Clojure? The function params look wrong) has a type signature `(A, B?, ...Z?) -> A | List(A, B, ...Z)`.

So the transducer in this case would have the type `List(A, B?, ...Z?) -> A | List(A, B | List(B, ...etc))`

It's not three different types, but it is necessarily recursive, which seems tricky.


I think type systems derived from linear/affine/uniqueness typing will help a lot here. Mutability is fine -- it's sharing that's the problem.


> It's the old debate of whether the programmer should be constrained by the language, or the language should serve the programmer. Maybe it's a matter of opinion.

I like to think of it as the language saving the programmer from herself


Whether your data is immutable or not is significant for program analysis and correctness, so I would say the distinction makes absolute sense.

> Sometimes I just want to create a map and mutate its structure, and the language saying "no" is constraining me in an unpleasant way.

Languages that provide immutable data structures often have two variants, an immutable and a mutable one. Why can't you just take the mutable one in that case?

I think it's more about making immutability or mutability explicit.
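
As a sketch of that two-variants idea (using Python's `MappingProxyType` as the read-only variant; note it's a live view, so mutations through the mutable handle remain visible):

```python
# Sketch of "two variants" of the same data: a mutable dict and a
# read-only view of it. The immutable variant says "no" to writes,
# making mutation explicit at the type level rather than forbidden.
from types import MappingProxyType

mutable = {"a": 1}
frozen = MappingProxyType(mutable)   # read-only view of the same data

mutable["b"] = 2          # fine: explicit mutation via the mutable handle
blocked = False
try:
    frozen["c"] = 3       # the immutable variant refuses item assignment
except TypeError:
    blocked = True
```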


If you have sufficiently good test coverage to replace a type system, you have to change tests whenever you'd have to change the type. No costs saved. Thinking more before coding is beneficial in my experience.


You pay for not specifying those parameters and so on in other ways later on.

By leaving those things out you are broadening the problem model and denying the compiler information that it needs from you as the programmer so that it can do a good job of solving the problem.

You have to remember that the computer can't synthesise knowledge about the problem domain that you deny it in the first place.

So sure tightening things down by specifying them may be tedious but it is the reality of the problem that you are solving.

The convenience of leaving it out will translate to instability such as run-time exceptions, not to mention massively reduced tooling support (like IDE auto-complete).


RE: Typed functions:

Kotlin lives on top of a statically typed architecture, so I think you do have to make that function typed. Some languages get around this requirement by inferring types, but type inference itself can then become Turing complete...

So it's not like there's a "healthy option". It's more that you're choosing the poison you prefer.


> Mutability has its place, and simply hiding it behind abstractions tacked on to the language (vars, refs, agents, and atoms in Clojure's case) isn't a productive way to deal with it.

In Clojure if you want mutability, you don't use a PersistentHashMap i.e. `{}` inside an atom `(atom {})`, you import and instantiate a mutable Java class such as `java.util.concurrent.ConcurrentHashMap` or `java.util.HashMap` `(let [m (doto (java.util.HashMap.) (.put "foo" "bar") (.put "spam" "eggs"))] .... )` and bang on it just like you would in Java. Clojure doesn't attempt to solve mutability because the host platform has already done a good job at that. Clojure provides semantics around state management of persistent data if you need that sort of thing, but that's not necessarily a good replacement for mutability if you actually need a mutable thing.


You can have both mutability and immutability in the same language. Example from Ruby:

    $ irb
    2.4.1 :001 > a = [1, 2, 3]
     => [1, 2, 3] 
    2.4.1 :002 > a.map {|x| x + 1}
     => [2, 3, 4] 
    2.4.1 :003 > a
     => [1, 2, 3] 
    2.4.1 :004 > a.map! {|x| x + 1}
     => [2, 3, 4] 
    2.4.1 :005 > a
     => [2, 3, 4] 
    2.4.1 :006 > a.freeze # make a immutable
     => [2, 3, 4] 
    2.4.1 :007 > a.map! {|x| x + 1}
    RuntimeError: can't modify frozen Array

Ruby is fundamentally a language based on mutability, but it shows that it should be possible to have it both ways. Questions:

1) Is there really a language which is fully agnostic about mutability and immutability?

2) If not, it's maybe because designers have strong opinions about this matter? Actually IMHO designing a fully agnostic language would demonstrate strong opinions too.


I'd argue most languages are agnostic about (im)mutability. It's libraries that aren't.


Most are, but some aren't. This fails in Erlang

    X = X + 1.
because even variables are immutable.

This is ok in Elixir even if it kind of fakes it [1]

    x = x + 1
and it's perfectly ok in Ruby and most other languages.

[1] http://blog.plataformatec.com.br/2016/01/comparing-elixir-an...


> What if you want to change it?

If you need to change the argument/return types of a function, you have a LOT more work to do than just changing a one-line signature.


> What if you want to change it?

Then you change it and let the compiler/IDE help you find all the other bits of code you need to change. I don't see the problem.



