
I'm so happy powerful type systems are more popular now. TypeScript bringing a great type system to a language as popular as JS is fantastic. Rust is also a great way to get a great type system while staying in a systems programming and procedural environment. Hell, even Python type annotations support union types.

I never knew the depth of type systems until the last year when I took a type theory course and a compiler course taught by a man who loved his types. So much power - I can't wait to see the future of type systems.




The grass does tend to appear greener in the other paradigm.

Barely typed languages like C made rigorously typed languages like C++ and Java seem appealing. The boilerplatiness of those languages made duck typing seem appealing. Writing anything nontrivial with duck typing made more elaborate type systems seem appealing.

Needing a PhD in category theory to produce a side effect will no doubt make some other paradigm seem appealing in the future.


> Barely typed languages like C made rigorously typed languages like C++ and Java seem appealing. The boilerplatiness of those languages made duck typing seem appealing.

Eh, I consider Java to be barely typed too. If you have a variable of type Foo, the type system doesn't even guarantee that you have a Foo in there (it might be null). The whole point of a type system, in my mind, is to guarantee that I have that Foo!

> Writing anything nontrivial with duck typing made more elaborate type systems seem appealing.

In my mind, this makes type inference seem appealing, not duck typing (which is not well-defined, but most people associate it with dynamic typing).

> Needing a PhD in category theory to produce a side effect will no doubt make some other paradigm seem appealing in the future.

This oft-repeated exaggeration needs to stop. Using monads does not require a PhD in category theory. If you can understand Promises in JavaScript, then you can grasp how IO works in Haskell.


I know dozens of people who tried understanding Promises and all of them succeeded.

I know dozens of people who tried to understand monads (including myself) and maybe 3 of them succeeded (I do not consider myself one of them).


A monad is just a monoid in the category of endofunctors, what's the problem?


HN may not be the place for this kind of comment, but I will never not upvote this specific comment.


I think they work well as long as they're reasonably specific; reposting the famous HN Dropbox comment also works pretty well on HN. Other redditisms don't fare so well, and for that I'm thankful.


Likewise


Not arguing for or against anything; this is just something I wanted to say.

Framing matters here: the API of Promises in JS maps almost directly onto the common monad API of bind (then), join (done implicitly by the runtime whenever possible), and return (Promise.resolve). Learning any single monad is unlikely to be harder than this, as these operations are indeed quite intuitive.
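
To make the mapping concrete, here is a minimal sketch in TypeScript (the `pure` name is mine for illustration, not part of any Promise API):

    // return :: a -> m a, which for Promises is just Promise.resolve
    const pure = <A>(a: A): Promise<A> => Promise.resolve(a);

    // bind :: m a -> (a -> m b) -> m b, which for Promises is .then
    const bound: Promise<number> = pure(20).then(n => pure(n * 2 + 2));

    // join (flattening m (m a) into m a) is done by the runtime:
    // .then never hands you a Promise<Promise<number>>.
    bound.then(n => console.log(n)); // 42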

What is often meant by "learning monads" is instead Learning All the Monads, sometimes adding a touch of Learning All the Monad Transformers and the interactions of different monads. This intrinsically needs to be done in a generic/parametric way and is far harder.

To my knowledge Haskell is the most mainstream[1] typed language that allows expressing the categorical definition; most other type systems are significantly more limited in how much maths/category theory they can express, and often focus on specialized functionality with practical implications, like Rust's ownership system or TypeScript's Capitalize<StringType>, which only exists to allow nicer typing of some common API designs[2].

[1] most mainstream typed language, at least: you can implement a Monad in JS/TS, but the language cannot express the concept itself, the same way C cannot express generics even though you can manually implement them in it.

[2] https://www.typescriptlang.org/docs/handbook/utility-types.h...


Otherwise agree with you, just noting that Scala is likely even more mainstream than Haskell and has proper Monads.


Monads and Promises are not comparable. The IO Monad fulfills a similar role to Promises and people learning Haskell grasp them immediately. Haskell also has an equivalent of the async/await syntax called the do notation which makes things nice and easy to read.

You don't really need to understand monads to understand the IO monad or Promises. If you want to understand monads, look at their signature, mainly the bind operator, and try to implement something with it: a logger, a list, the IO monad itself. It's not that hard; it's just that nobody bothers playing with it, and everyone tries to learn the theory without experimenting with monads in code. After a few tests you'll build an intuition for it, and you'll get why they call them programmable semicolons.
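
To show what that exercise might look like, here is one possible Writer-style logger sketched in TypeScript; every name in it is made up for illustration:

    // A toy "logger" monad: a value plus accumulated log lines.
    type Logged<A> = { value: A; log: string[] };

    const unit = <A>(value: A): Logged<A> => ({ value, log: [] });

    const bind = <A, B>(m: Logged<A>, f: (a: A) => Logged<B>): Logged<B> => {
        const next = f(m.value);
        return { value: next.value, log: [...m.log, ...next.log] };
    };

    const double = (n: number): Logged<number> =>
        ({ value: n * 2, log: [`doubled ${n}`] });

    const result = bind(bind(unit(3), double), double);
    console.log(result); // { value: 12, log: ['doubled 3', 'doubled 6'] }

Each bind threads the value through while concatenating the logs behind the scenes, which is exactly the "programmable semicolon" part.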

Monads are just one of the abstractions that can be used to implement IO in Haskell, btw; it was the authors flexing their category theory that got us into this situation.

At the same time, one of the reasons I learned Haskell is that it had a reputation for being hard and, boy, am I grateful for it.


Basically, think of a monad as a Promise. Rather, a Promise is more-or-less an example of a monad. (Yes, I know that due to some technicalities it isn't, but it behaves like one for the purposes of this comment)

Mapping a monad is equivalent to Promise#then - if there's a value in the monad, then it calls the function you passed to map and returns the result wrapped in a monad. If there isn't a value in the monad, then it returns itself.

For example, with the Maybe monad, if you have x = Maybe.Some(y), then x.map(f) = Maybe.Some(f(y)). If you have x = Maybe.None, then x.map(f) = Maybe.None. With Promises, if you have x = Promise.resolve(1234), then x.then(x => x * 2) = Promise.resolve(1234 * 2). If you have x = Promise.reject(new Error('abcd')), then x.then(x => x * 2) = Promise.reject(new Error('abcd')).
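
Here is a sketch of that Maybe half in TypeScript (this class is illustrative, not a standard library):

    // Illustrative Maybe with map, mirroring the description above.
    class Maybe<A> {
        private constructor(private readonly value: A | undefined,
                            private readonly hasValue: boolean) {}
        static Some<A>(value: A): Maybe<A> { return new Maybe<A>(value, true); }
        static None<A>(): Maybe<A> { return new Maybe<A>(undefined, false); }
        map<B>(f: (a: A) => B): Maybe<B> {
            return this.hasValue ? Maybe.Some(f(this.value as A)) : Maybe.None<B>();
        }
    }

    Maybe.Some(1234).map(x => x * 2);     // Some(2468)
    Maybe.None<number>().map(x => x * 2); // still None; f is never called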

The IO monad is extremely similar to Promises - you call an IO function and get an IO monad as a result, then map that monad in much the same way you'd then a promise.


While we are here: Futures from Fantasy Land (in JS) are better Promises, because they don't execute immediately and need to be run manually.


The problem with monads is that they are only really practical in a language with built-in syntactic sugar to cover the boilerplate. In any other language you will surely go "what's the point?", because using monads will invariably turn simple code into a convoluted mess for no benefit. Promises, on the other hand, will seem immediately useful if you have tried writing async code in an ad-hoc manner. Promises solve a problem.

Mathematicians and Haskellites tend to explain things by giving their definition. (Imagine a Haskellite explaining how to write "hello world" in C: First you need a "main" function. A function is a process or a relation that associates each element x of a set X, the domain of the function, to a single element y of another set Y (possibly the same set), the codomain of the function. Etc., etc.)

But most other programmers prefer to understand things by understanding the problem they solve. It is quite obvious what problems Promises solve, but in the context of JavaScript, monads do not solve any real-world problem. That makes them hard to grasp for a programmer, even though the concept is simple.

Monads are a particular pattern for method chaining or function composition.

Here is an example of some JavaScript code which uses regular method chaining:

    [1,2,3].map(a => a + 1).filter(b => b != 3)
This code results in the array [2,4].

Similar code following the monad pattern would look like this:

    [1,2,3].flatMap(a => [a + 1]).flatMap(b => b != 3 ? [b] : [])
And the result is the same, but obviously the monadic version is more convoluted and harder to read. If there were some syntactic sugar to cover the boilerplate, though, the monadic version might be bearable!

The "power" of the monadic pattern is that the operations can be chained or nested in a more flexible way. For example here the operations are nested, but the result is the same:

    [1,2,3].flatMap(a => [a + 1].flatMap(b => b != 3 ? [b] : []))
This does have some nice properties, since operations can be chained or nested together into composites that have the same type as a single operation. The question is whether the benefit outweighs the cost in code complexity.

The monad pattern is purely concerned with how operations are chained together structurally; it is not about what they do or what types are involved. In this example the type is Array<T>, but it could be any parameterized type.

Monads can be used anywhere a sequence of operations is strung together. (But that doesn't mean you would want to.)


Monads imho are deliberately badly explained to preserve the mystique and the smugness of the cognoscenti. I found this book invaluable for translating the field into something that makes sense: https://alvinalexander.com/scala/functional-programming-simp...


I would go further. Using monads, or any functional programming concept that you'll encounter in the wild, requires zero category theory. I would actively warn anyone against learning category theory for day-to-day use of FP. Learn it if you're interested in it for its own sake sure, it's a great maths subject, but it's basically irrelevant for most programmers imho and a distraction if you're trying to get a simple practical understanding.


I'd love to see an explanation of monads that accurately captures their capabilities in terms no more complex than those required to do the same for promises.


If you understand promises you pretty much understand monads already since promises are more or less a type of monad.

Monads represent computation contexts. In the case of Promises, the computation is performed in the context of a value that will be available some time in the future (or not at all in some cases).

    my_promise.then(compute_result)
Another context could be a list, where the computation is performed on each value in the list

    my_list.then(increment)
Or the context could be that the value is maybe null

    maybe_string.then(uppercase)
How the computation is actually performed depends entirely on the monad. Usually .then is called .bind or .flat_map because it will automatically unwrap nested monads.
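
In real JavaScript the list version of that pseudocode is Array#flatMap; a quick sketch:

    // The "list context": the computation runs on each value,
    // and nested lists are flattened automatically (flatMap = bind).
    const increment = (n: number): number[] => [n + 1];
    [1, 2, 3].flatMap(increment); // [2, 3, 4]

    // Compare plain map, which would leave the nesting in place:
    [1, 2, 3].map(increment);     // [[2], [3], [4]]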


Monads are chainable containers for a computation. The container represents some sort of "effect" implicit to the computation, i.e. asynchrony, optionality, etc. This is not technically 'correct' (monad explanations are always a bit cursed), but I find it a pretty useful intuition to work with day to day.


To understand Monads, we first have to understand the Functor, Monoid and Applicative type classes (type classes are more or less the same as interfaces in Java, C++ and the like).

A Functor is something that implements a `map` function. You can think of a Functor as anything that can encapsulate/surround something else. The map function applies a function to the encapsulated element without modifying the outer structure. For example, a List is a Functor that has this map function implemented in most languages: it indeed surrounds a given type (zero or more items of that type, to be more correct), and it indeed applies the same function to each element of the list. Several languages have a “Result/Maybe/Optional” type that can either contain one instance of a type, or Nothing. Some languages allow you to modify the inner element when it exists; that is, it is also a Functor that encapsulates an element and has a map function to change it.
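
TypeScript can't express Functor in general (no higher-kinded types, as noted elsewhere in the thread), but the shape for one concrete container looks roughly like this sketch (names illustrative):

    // The Functor shape for one concrete container:
    interface Box<A> {
        map<B>(f: (a: A) => B): Box<B>;
    }

    // Arrays already satisfy it: the outer structure (length, order)
    // is preserved, only the elements change.
    [1, 2, 3].map(n => n.toString()); // ['1', '2', '3']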

Let’s dissect this Monoid word next (which is not a Monad!). Before the definition, let’s look at an example: summing numbers. Say we have a list of numbers and we want to calculate its sum. We can start from the beginning and go through the list one by one, or sum the first half and the second half separately and then add them together. These are possible because addition is a Monoid over the numbers. What that means as an interface (type class) is that it implements an `empty` function, a so-called neutral element (0 for addition), and a `concat` one (which is + itself).

These names make us think of Lists, and indeed, Lists are Monoids as well, not only Functors. They have an empty method returning [], and they have a concat function that concatenates two lists together. Do note that concatenating an empty list to a list doesn’t change it.
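
Both examples, sketched as plain TypeScript objects (the record shape is illustrative):

    // Numbers under addition: empty = 0, concat = (+).
    const sumMonoid = { empty: 0, concat: (a: number, b: number) => a + b };

    // Lists: empty = [], concat = Array#concat.
    const listMonoid = {
        empty: [] as number[],
        concat: (a: number[], b: number[]) => a.concat(b),
    };

    // Associativity is what lets you sum the two halves independently:
    // concat(concat(1, 2), 3) === concat(1, concat(2, 3))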

Is a Result type a Monoid? We could make the case for empty being Nothing, where concatenating Nothing changes nothing, but what about concatenating two Results that both contain an integer? Should we sum them, take their product, or write them next to each other?

It turns out that (in Haskell at least) you can specify something like: this is only a Monoid if the embedded type is itself a Monoid. That way, for example, concat(Just [1,2], Just [3,4]) will return the concatenated list inside a Result type.


Continue

We are almost there! Let’s tackle Applicative now. What can we do if not only our data is encapsulated in some structure, but even the function we want to apply?

Can we apply a list of functions over a list of values? Or an Optional/Result function over an Optional/Result value? Does it even make sense?

Let’s define Applicative as being a Functor that has a `pure` function similar to our Monoid’s, as well as a `sequentialApplication` one that we will shorten to the cryptic <*>. Since it is a Functor, remember that it also has a map function available.

The pure function simply encapsulates a value inside the type. For a list, that is the singleton constructor: pure(2) creates the list [2]. The <*> is a bit more complex: its first parameter is a function that is encapsulated in this very type and operates on type A. Its second parameter is an encapsulated object containing type A. And the important thing comes here: it will apply the encapsulated function to the encapsulated data, without unwrapping first.

Let’s say, I have a text field where the user Maybe entered his/her name (sorry) and a field where we await an age. We’ll store these inside a Result<String> and a Result<Age> type.

Let’s say we have a User constructor that awaits a name and an age. We could handle each parameter separately here, but what if we have 20? So we instead write something like pure(createUser) <*> nameResult <*> ageResult. This magic line will apply the Just createUser function (that is, wrapped into a Result with an existing value due to pure) to a possibly missing name. Let’s stop for a moment here and talk about currying. In many FP languages, calling a function with fewer parameters than it takes is not a compile-time error: it gives back a new function that awaits the remaining parameters. E.g. (+ 3) is a lambda that got its first parameter as 3 and will “execute” when we give it the second parameter.

Knowing this, our evaluation so far will be something like Just (createUser applied to nameResult’s value) if it had one, or Nothing. Now we apply this, still-wrapped, function to the last parameter, so the resulting type is a Result<User>, which will contain a user when both values were sent, and will be Nothing if either of them was absent. Cool, isn’t it?
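
Spelled out for the two-field case in TypeScript, with undefined standing in for Nothing (all names here are illustrative):

    // Result<A> with undefined as Nothing; pure is a no-op in this encoding.
    type Result<A> = A | undefined;

    // ap (<*>): apply a possibly-missing function to a possibly-missing value.
    const ap = <A, B>(f: Result<(a: A) => B>, a: Result<A>): Result<B> =>
        f !== undefined && a !== undefined ? f(a) : undefined;

    // Curried constructor: each argument peels off one parameter.
    const createUser = (name: string) => (age: number) => ({ name, age });

    const nameResult: Result<string> = "Ada";
    const ageResult: Result<number> = 36;

    // pure(createUser) <*> nameResult <*> ageResult
    const user = ap(ap(createUser, nameResult), ageResult);
    // { name: 'Ada', age: 36 }, or undefined if either field was missing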

But I know, we are here for Monads!

Well, Monads are just monoids in the category of endofunctors. Just kidding. They are Applicatives that also have a `bind` method.

So you’ve seen how we can “sequentially apply” functions. But the “problem” with Applicatives is that we can’t depend on the output of a previous function: in the previous example, ageResult’s evaluation could not change depending on what was returned previously. Let’s see another Applicative, the often misunderstood IO.

putStrLn has the following type: String -> IO ().

That is, it takes a string and gives back an IO structure that returns void (actually it is called unit). If we were to somehow execute it, it would print that string, ending with a newline. If we write

pure(x => y => y) <*> putStrLn(“hello”) <*> putStrLn(“world”), it will output (if we know how to execute it) hello and world on two lines. The wrapped function here just drops the first result and returns the second; the two effects run left to right. (Haskell does have a shorthand for this: the *> operator!)

But how can I act upon the result of a previous computation in a “sequential application”, e.g. read in a line and print hello $name? By `bind`! It does the following: it needs a monad at hand with encapsulated type A (e.g. IO String for the getLine we will use), and a function that takes that type A (String) and returns this monad with any type (we will print out the string, so we will use putStrLn).

So, bind(getLine, name => putStrLn(“hello $name”)) will do what we want, and you have just used your first proper Monad. (Haskell of course provides a nice syntactic sugar over this, called do notation.)

With these abstract structures, IO can be dynamically constructed and side effects will only happen where we expect them.
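
For the curious, the whole IO story above can be sketched in TypeScript by treating IO as a thunk that only performs its effect when run (every name here is illustrative, and getLine is stubbed):

    // IO<A> as a thunk: building the value does nothing; running it does.
    type IO<A> = () => A;

    const putStrLn = (s: string): IO<void> => () => console.log(s);
    const getLine: IO<string> = () => "world"; // stubbed input for the sketch

    // bind: run m, feed its result to f, then run the IO that f returns.
    const bind = <A, B>(m: IO<A>, f: (a: A) => IO<B>): IO<B> =>
        () => f(m())();

    const program = bind(getLine, name => putStrLn(`hello ${name}`));
    program(); // only here does "hello world" actually get printed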


Not an explanation, but the key thing to know is that a monad is not a thing but a pattern. It's a pattern for composing things, like the FP equivalent of the Unix shell pipe character. Just as the Unix shell provides composability for processes that happen to read from stdin and write to stdout, most FP languages provide a framework for composing monads (of whatever kind) neatly, e.g. Scala's "for comprehensions" and Haskell's do notation.


Monad: In functional programming, a monad is an abstraction that allows structuring programs generically. Supporting languages may use monads to abstract away boilerplate code needed by the program logic.

Promise: A Promise is an object representing the eventual completion or failure of an asynchronous operation. Essentially, a promise is a returned object to which you attach callbacks, instead of passing callbacks into a function.



Great write up!


Monads are a pattern for function or method chaining.


> Eh, I consider Java to be barely typed too. If you have a variable of type Foo, the type system doesn't even guarantee that you have a Foo in there (it might be null). The whole point of a type system, in my mind, is to guarantee that I have that Foo!

That seems a rather arbitrary limitation. When Java says Foo, it would translate to what you would consider Maybe<Foo>. How you represent types, and what that representation implies is a matter of semantics.

That said, Java's type system is pretty dang weird, especially generics.

> This oft-repeated exaggeration needs to stop. Using monads does not require a PhD in category theory. If you can understand Promises in JavaScript, then you can grasp how IO works in Haskell.

The concepts themselves aren't particularly hard to grok, but the more academic side of functional programming (i.e. Haskell) is comically inaccessible, mostly due to jargon (monads aren't even that bad compared to something like Kleisli arrows).


"Maybe<Foo>" doesn't have the same problem as Java nulls though, because with an optional type the type system would force you to always handle the possibility of the value not being present. With Java nulls you just get a runtime error if you try to do certain things with something that turns out to be null.

Incidentally, Java does have Optional, and codebases which use it consistently are vastly better to work with. https://docs.oracle.com/javase/8/docs/api/java/util/Optional...
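
TypeScript with strictNullChecks gives the same guarantee; a minimal sketch:

    // With strictNullChecks on, the compiler refuses to ignore the null case.
    function greet(name: string | null): string {
        // return "Hello " + name.toUpperCase(); // error: 'name' is possibly 'null'
        if (name === null) return "Hello, stranger";
        return "Hello, " + name.toUpperCase();   // narrowed to string here
    }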


You don't need a PhD, you just need the first few chapters of [Category Theory for Programmers](https://github.com/hmemcpy/milewski-ctfp-pdf) :)


I think the major shift was that originally types were used mostly to talk about representation. Then we wound up in a situation where caring that much about representation didn't make sense as often (at this point it's at system boundaries and when we care an unusual amount about performance), and so it made sense for some languages (covering a growing portion of programming) to stop talking about representation. But it turns out there are other useful things that we can use similar technology to "talk about" as we learn to build better tools and to better apply them.


> Needing a PhD in category theory to produce a side effect will no doubt make some other paradigm seem appealing in the future.

I think it’s only really Haskell (and perhaps languages like Idris) that’s super strict on side effects.

In Rust it’s a simple mut annotation, and perhaps a mutex (and you’ll want that in C too of course) if you’re working across threads.


Does Rust have mut annotations on functions?

I mean, when you look at it, one problem that monads solve with types is that every function carries an "annotation" of what it uses (IO or mutable state). Similar to how async in JS allows await, a State annotation would allow put/get, or an IO annotation any of the IO capabilities.

Of course monads are much more than that, but mut does not look like it pollutes everything, including function definitions and results, the way a monad does.


No, it does not.


Effect systems in Scala


TS has the nice property of being optional. I've programmed C++ (rigid, nominal), Ruby (duck typing), and Haskell, and TS is the best compromise so far, as it works like tested documentation, which is most practical for many purposes (where the program must be iterated on). If you know what you want to build beforehand, then sound typing might be better.


C is not "barely typed". It has quite a lot of type checking.

An expression like "obj.memb" in C requires obj to be declared to have a type which has a member "memb".

C catches it if you call a function with the wrong number of parameters, or wrongly typed parameters, such as passing a "struct foo *" pointer where a "struct bar *" argument is required.

C has "holes" in the static safety net in areas like memory safety: object boundaries and lifetimes. It allows some unsafe conversions, like any object pointer to a void * and back. But not only those: C has unsafe numeric conversion and operations.

Still, there is a type system there, and C programs greatly profit from it; it's the big reason why we have so many lines of C code in our computing infrastructure, yet the proverbial sky isn't falling. (Just the odd lightning or hail here and there.)

C compilers also help; modern compilers have a lot more diagnostic power than compilers thirty years ago. In C it is critically important to diagnose more than the bare minimum that ISO C requires. For instance, whereas a function that has not been declared can be called in any manner whatsoever (with any number of arguments), it's a bad idea to do that without issuing a diagnostic about an undeclared function being used; if such a diagnostic isn't enabled by default, it's a bad idea not to enable it. C programmers have to understand the diagnostic power of their toolchain.

Recently, GCC 11 found a problem in some code of mine. I had converted malloc/free code for a trivial amount of memory to use alloca. But somehow I left in a free call. That was not diagnosed before, but now it was diagnosed.

Another obscure bug that a newer compiler with newer diagnostics caught for me in the last few years was a piece of code where a comparison like this was being made:

    d <= UINT_PTR_MAX
where d is a double. The idea was to check whether d is in the range of a certain integer type before converting it. Trouble is that the above expression moves the goalposts: when UINT_PTR_MAX is 64 bits, its value is not necessarily representable in the double type. What happens is that UINT_PTR_MAX is converted to double, and in that process it goes to a nearby double value which happens to be greater than UINT_PTR_MAX! And so the range check becomes wrong: it admits d values in that extended range, which are beyond the range of the integer type, causing undefined behavior in the conversion.
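
The same rounding is easy to see from TypeScript, since JS numbers are IEEE doubles (UINT64_MAX here is my stand-in for the 64-bit case described above):

    // 2^64 - 1 is exact as a BigInt but not representable as a double:
    const UINT64_MAX = 2n ** 64n - 1n;
    const asDouble = Number(UINT64_MAX);        // rounds up to 2^64
    console.log(asDouble === 2 ** 64);          // true
    console.log(BigInt(asDouble) > UINT64_MAX); // true: the bound overshoots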


In the field of formal type systems there are two common approaches to defining types; in practice they are quite similar, but in my opinion they differ a lot in framing.

One side can be represented by Haskell, Hindley–Milner type systems, or even Coq; here every value has its own "best" type that is intrinsically associated with it, that is, values and types are defined and constructed together.

On the other side you have a sort of formal definition of duck typing: you have values and properties that are satisfied by some set of values. Here you have your values (all numbers, all strings, all memory addresses) and you express, in usual logic terms, any property you want (e.g. this memory address must be either Null or point to a string of even length).

All this to say that C has a nice type system from the first point of view (function pointers allow you to have higher-order functions!) but a very weak one from the second point of view, in that it is very hard to decide whether an operation will have a valid result just from the types of the values you feed into it (let's not talk about UB for now).

In recent decades there has been a movement toward caring more about type systems that follow the second approach. In my opinion it is one of the reasons for the success of TypeScript: its objective wasn't to have a nice type system full of good properties, but to model how JavaScript was being written.


C is a statically but weakly typed language, with very few compile-time checks and many non-intuitive automatic casts.


The wheel of typing.


Could you point to any good resources on this topic?


The standard reference, if there is one, is Benjamin Pierce's "Types and Programming Languages" book.


What course?


It was at my university unfortunately, but the content is likely covered by any course labeled as "programming language theory" or something similar.



