This concept is a big part of the "big deal" around monads: using monads to model effectful code puts the information "this should be shell code" into the type, in a way that ensures code calling it also gets annotated as "shell" code. Monads are of course a much more broadly applicable abstraction, but their application to effectful code, enforcing this design, is usually the first and most visible place people run into them in the ML family of languages.
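A minimal Haskell sketch of what that looks like in practice (the names readConfig, timeoutOf and getTimeout are made up for illustration):

    -- The effect is recorded in the type, and any caller inherits it.
    readConfig :: FilePath -> IO String          -- "shell": effectful, so it lives in IO
    readConfig = readFile

    timeoutOf :: String -> Int                   -- "core": pure, no IO in the type
    timeoutOf contents = length (lines contents) -- stand-in for real parsing

    getTimeout :: FilePath -> IO Int             -- the caller is forced into IO as well
    getTimeout path = timeoutOf <$> readConfig path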


I disagree, although it's possible I only disagree with how you've phrased it.

Monadic interfaces in the context of non-deterministic effects are a consolation prize. They represent a way to combine effectful code, but ideally your code would have almost no effects at all.

As far as I can tell, the idealized version of this talk is a batch interface: one effect to grab all the data you need, transform the data, and then one effect to "flush" the transformed data (where flush could mean to persist the data in a database, send it out as commands to control a robot, etc).
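Something like this minimal sketch, say (the file names and the summarize function are invented for illustration):

    -- One effect to gather the data, a pure transformation, one effect to flush it.
    main :: IO ()
    main = do
      input <- readFile "orders.csv"       -- effect 1: grab everything up front
      let report = summarize input         -- pure core: plain data in, plain data out
      writeFile "report.txt" report        -- effect 2: "flush" the result

    summarize :: String -> String
    summarize = unlines . map (takeWhile (/= ',')) . lines   -- stand-in transformation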

Tracking side effects in your types (maybe what you were going for?) is helpful for measuring to what degree your code fails to adhere to this idealized model. If most of your code has an effect type, that's probably a sign to refactor. It also keeps you honest as to the infectious nature of effectful code by propagating the type as necessary.


I don't think we disagree in spirit - I didn't mean to imply that it prevents you from, e.g., writing all of your code in the IO monad, just the points you made in your last paragraph. So, more that they're a useful tool to help you realize these goals, not something that gets you there on its own. It does let you broaden/specify your definition of "effectful" a bit - modelling event streams with monads gives you FRP (as in your robot example), and I vaguely remember reading in some paper somewhere the suggestion of using monads to separate out unconstrained recursion/non-totality/co-data from total code.


The big problem with monads is that they are still imperative calculations, even if the individual effects are nicely typed. If functional code uses them, it effectively becomes imperative. To keep the benefits of the functional style, one should mostly avoid monads. The whole idea of the article is to use imperative patterns only at the very top, to glue things together.


What's imperative about monads? Or rather, what is not functional? Why should they be avoided?


Look at any do block in Haskell, PureScript, Idris, etc. It is imperative code. The individual effects are typed and separated, but it is still code that depends on implicit state, with all of its drawbacks.
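For example, a made-up snippet like this reads as a sequence of statements executed in order:

    main :: IO ()
    main = do
      putStrLn "Name?"
      name <- getLine                      -- the result depends on the outside world
      let greeting = "Hello, " ++ name
      putStrLn greeting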

Then look at Elm code. Elm does not have any imperative escape hatches. The monad that runs everything is at the very top level (the "shell", as the article calls it) and hidden.

As such, Elm code is forced to use functional decomposition, resulting in designs that are very easy to follow, refactor and maintain.


It's still quite different from classic imperative code.

If you're working with a free monad, or if you don't specify IO (just some generic IO-like typeclasses, like, say, MonadError), you can still choose your own interpreter for the monad and "program the semicolon". That means you get back all the benefits of testability etc.
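A rough sketch of what that can look like with an mtl-style typeclass (the MonadStore class and both instances are invented for illustration, and it assumes the mtl library for the pure State interpreter):

    {-# LANGUAGE FlexibleInstances #-}
    import Control.Monad.State (State, modify, execState)

    class Monad m => MonadStore m where
      putKey :: String -> String -> m ()

    instance MonadStore IO where                          -- production interpreter: real effects
      putKey k v = appendFile "store.log" (k ++ "=" ++ v ++ "\n")

    instance MonadStore (State [(String, String)]) where  -- test interpreter: pure, records writes
      putKey k v = modify ((k, v) :)

    saveUser :: MonadStore m => String -> m ()            -- written once, against the abstraction
    saveUser name = do
      putKey "user" name
      putKey "status" "active"

In a test you can run it purely, e.g. execState (saveUser "ada") [] gives [("status","active"),("user","ada")], while in production the same saveUser runs in IO.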

To get a similar effect in an imperative language, you would use e.g. coroutines and `yield` every side effect to the execution engine. The engine takes the action "specs" (probably a data structure describing the action to perform, e.g. setting some value in memory) and decides what to do with them, and you can swap the real engine for a test/mock engine in your tests.


Programming the semicolon is not that different from mocking interfaces in imperative code. One still has to write the mock, and the tests still do not exercise the real interfaces. Surely the situation is improved compared with imperative code, but it is not as good as with monad-free code.

It is a pity that modern conveniences like polymorphic record types with nice syntax for record updates were not invented earlier. With those, even in complex code, monads can be kept at the very top level, where the sugar of do blocks is not even necessary.


Do-notation in Haskell is purely syntactic sugar over function calls. You can remove do-notation from Haskell and still write the exact same programs (monads and all). Also, monads are not about state any more than classes in Java are about toasters.
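A small illustration of the desugaring:

    withDo :: IO ()
    withDo = do
      line <- getLine
      putStrLn line
      putStrLn "done"

    -- The same program with the sugar removed: just (>>=) and (>>).
    withoutDo :: IO ()
    withoutDo =
      getLine >>= \line ->
      putStrLn line >>
      putStrLn "done"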


Surely a do-block is just sugar for functional code. But that code can be used to model any imperative effect, and as such it inevitably models all the trouble that imperative code can cause.

If one looks at the desugared version, one can see where the trouble comes from. Functional code in monadic style depends on the state of the monad interpreter, which can be arbitrarily complex and spread over many closures with many interdependencies. It can be rather hard to uncover what exactly is going on, in precisely the same way as with the imperative code it models.


Monads are taking it too far. Mutation is a reality; the correct approach is disciplined mutation. Shoving mutation into convenient boxes and convincing yourself never to look inside them does not mean mutation does not exist. The best approach is the one taken by Scheme, and more specifically Clojure: disciplined, practical mutation. Mathematical purity of programs is a myth propagated by type theorists; don't buy into it.


> Monads are taking it too far. Mutation is a reality; the correct approach is disciplined mutation. Shoving mutation into convenient boxes and convincing yourself never to look inside them does not mean mutation does not exist.

Monads exist exactly because mutation is a reality. Monads do not defy the "mutation reality", nor do they encourage programmers to never look inside them. They are a means of dealing with the "mutation reality" by encouraging you to separate pure and impure parts properly while still making functional composition possible. The image you create of monads is a straw man. Monads ARE a kind of "disciplined mutation", as you put it.

You don't have to like them nor prefer them. But they are clearly a great and established abstraction loved and used by many. You may prefer Clojure, I get it, but I see no reason to talk shit about monads in this way. Have you ever used monads and similar abstractions extensively?

> Monads are taking it too far.

> Mathematical purity of programs is a myth propagated by type theorists; don't buy into it.

Those are big words. Are you some kind of authority? You could have at least prepended "I think" to those phrases.


You are repeating what I wrote; by writing this large comment you haven't increased anyone's knowledge, neither mine nor yours. Monads exist because Haskell people want to pretend that there is an ideal mathematical world where things don't change; some go as far as saying strong types remove the need for writing tests.

> Those are big words. Are you some kind of authority?

Years of writing programs have taught me that programming functions are not equivalent to mathematical functions. There is no such equivalence; stop pretending that there is.


> Monads exist because Haskell people want to pretend that there is an ideal mathematical world where things don't change

Monads exist independently of Haskell and are not about "things that change".


Once again, monads are a form of "disciplined mutation." They didn't repeat what you wrote; they contradicted your entire premise.

You didn't respond to anything they said, and you doubled down with your nonsense about "Monads exist because Haskell people want to pretend that there is an ideal mathematical world where things don't change."

That monads ignore the "mutation reality" isn't a very strong point when monads are a concession to the "mutation reality." Unless you want to repeat yourself a third time, the ball is in your court to bring concrete supporting arguments, since you're making the extreme and somewhat self-aggrandizing claim that these other people don't really see the mutation reality of the world like you do, and thus they are using inferior tools.

I'd say that anyone specifically trying to corral/isolate their I/O code (monads or not) is so "enlightened" about the mutation reality of the world that they use specific abstractions to address it.

If you want to see code that tries to paper over I/O, look at a program where you can't even tell when and where the I/O is performed because it just looks like any other function call. Active Record in Ruby on Rails might be a good candidate in its effort to make DB access opaque to the programmer.


OK, this has become a big feud; apologies for choosing the wrong words and being rude. Let me put it another way: by "disciplined mutation" all I meant was localised mutation. A lexically scoped local variable is enough to contain the spread of mutation; I don't see the need for a specific datatype to handle mutations exclusively. Monads make mutations explicit and global. I hope that makes sense.


I think you misunderstand monads, or the role type theory has to play in modern programming.

Large projects inevitably benefit from static guarantees enforced automatically by your environment. That can be a third-party static analysis tool or the compiler. Even just a linter will improve code quality and thus developer happiness and productivity.[*] Having your compiler enforce the functional core/imperative shell, and exposing your business logic only through functional components, is what makes a strongly typed language of the ML family stand out over, say, Clojure.

Mutating state is no problem in a strongly typed functional language. In Haskell, just put your computation in the ST monad. You can even expose a pure signature that doesn't leak the ST monad if your algorithm is faster with mutation.
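A small sketch of that pattern (sumList is a toy example):

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, modifySTRef', readSTRef)

    -- Mutation inside, pure signature outside: runST guarantees the STRef cannot escape.
    sumList :: [Int] -> Int
    sumList xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef' acc (+ x)) xs
      readSTRef acc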

[*] Overall. Some people will probably be unhappier, because they have to follow "arbitrary" rules now, but those would usually have been the worst offenders.


> Mutating state is no problem in a strongly typed functional language. In Haskell, just put your computation in the ST monad. You can even expose a pure signature that doesn't leak the ST monad if your algorithm is faster with mutation.

That works reasonably well in some situations, but not all.

We often work with local, temporary state, meaning something mutable that is only referenced within one function and only needs to be maintained through a single execution/evaluation of that function. (Naturally this extends to any children of that function, if the parent passes the state down.)

If that function happens to be at a high level in our design, this can feel like global state, but fundamentally it’s still local and temporary. I/O with external resources like database connections and files typically works the same way.

We can also have this with functions at a lower level in the design. An example would be using some local mutable storage for efficiency within a particular algorithm.

However, not all useful state is local and temporary in this sense. We can also have state that is only needed locally in some low-level function but must persist across calls to that function. A common example is caching the results of relatively expensive computations on custom data types that recur throughout a program. A related scenario is logging or other instrumentation, where the state may be shared by several functions but still only needed at low levels in our design.

Now we have a contradiction, because the persistence implies a longer lifetime for that state, which in turn naturally raises questions of initialisation and clean-up. We can always deal with this by elevating the state to some common ancestor function at a higher level, but now we have to pass the state down, which means it infects not just the ancestor but every intermediate function as well. While theoretically sound in a purely functional world, in practice this is a very ugly solution that undermines modularity and composability, increases connectedness and reduces cohesion. And weren’t those exactly the kinds of benefits we hoped to achieve from a functional style of programming?
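To make that concrete, here is a hedged sketch (all names invented) of how a cache that only the leaf function really uses ends up threaded through every signature above it:

    import qualified Data.Map as M

    type Cache = M.Map Int Integer

    expensive :: Cache -> Int -> (Integer, Cache)     -- the only function that uses the cache
    expensive cache n = case M.lookup n cache of
      Just v  -> (v, cache)
      Nothing -> let v = product [1 .. toInteger n]   -- stand-in for the real computation
                 in (v, M.insert n v cache)

    middle :: Cache -> Int -> (Integer, Cache)        -- merely passes the cache through
    middle cache n = expensive cache (n + 1)

    top :: Cache -> [Int] -> ([Integer], Cache)       -- and so does its caller, and so on upward
    top cache []       = ([], cache)
    top cache (n : ns) =
      let (v, cache')   = middle cache n
          (vs, cache'') = top cache' ns
      in (v : vs, cache'')

(State monads and ReaderT-style environments can hide the plumbing syntactically, but the dependency is still visible in the types.)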

If anyone would like to read more about this, we had an interesting discussion about these issues and how people are working around them in practice over on /r/haskell a couple of years ago:

https://www.reddit.com/r/haskell/comments/4srjcc/architectur...

Spoiler: We didn’t find any easy answers, and everyone is compromising somewhere.


There have been many articles on this topic. There isn't any evidence to suggest that static guarantees make your code better. Of course, what does make your code better is immutability. But complete immutability isn't practical, and even Haskell people understand that, yet they continue to pretend that programs are about mathematical purity. And if that isn't enough, claiming that static typing removes the need for testing is complete bunk.


Have you written programs in Haskell/F#/OCaml? Static guarantees, especially those of expressive type systems, absolutely make your code better, and their benefits compound as your system gets bigger. The type checker acts as a guardian of the soundness of your whole domain. And yes, expressive static typing removes the need for a whole class of tests, namely the ones you'd have to write in other languages if you are disciplined enough to care about the soundness of your domain model. I personally loathe writing that type of test, but I do it when I can't use the power of, say, Haskell, because I care.

Immutability also makes your code better, but it's an orthogonal concern, and utilising both is a smart move.


If that were true, we would all be writing in C++ and there would never be a stack overflow or null pointer exception. But that isn't the case.


Any language that has null, including C++, does not fall into the category of languages with expressive type systems. As soon as you have proper sum types, the null issue goes away, and the whole big world of working with ADTs opens up.
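A tiny sketch of what that looks like with Maybe (the lookupAge and describe functions are invented for illustration):

    import Data.Map (Map)
    import qualified Data.Map as M

    lookupAge :: String -> Map String Int -> Maybe Int
    lookupAge = M.lookup

    describe :: String -> Map String Int -> String
    describe name people = case lookupAge name people of
      Nothing  -> name ++ ": age unknown"    -- the "absent" case cannot be silently ignored
      Just age -> name ++ " is " ++ show age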


What makes you so sure that mutation is a reality? You can model reality just as accurately using an immutable value with time as an additional dimension as you can using a mutable value with time left implicit. Both are just models of reality, not reality itself.



