b) I wish HN had more of these sorts of articles as opposed to the usual startup/productivity/lifestyle crap
In Haskell, a type represents the results of performing a computation. The "results" include the "output" but are not limited to it: they also include things like state changes, possible exceptions, and I/O. That's why Haskell is so big on monads: they're the standard way to embed those extra results into a return value.
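A minimal sketch of that idea (function names are made up for illustration): each return type below describes the full "result" of the computation, not just its output.

```haskell
safeDiv :: Int -> Int -> Maybe Int       -- an Int, plus the possibility of failure
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

parseAge :: String -> Either String Int  -- an Int, plus a descriptive error on failure
parseAge s = case reads s of
  [(n, "")] | n >= (0 :: Int) -> Right n
  _                           -> Left ("bad age: " ++ s)

greet :: String -> IO ()                 -- no interesting output, but real-world I/O
greet name = putStrLn ("hello, " ++ name)
```

A caller can see from the signature alone which effects are possible, and the compiler forces them to be handled.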
In a systems language like C or Rust, a type represents the size and meaning of an area of memory: is it an int, a float, a pointer to a float that stays constant, or a pointer to a float that someone else can change?
In languages like C# & Java, a type represents something like the allowable operations on a value, usually expressed as the list of methods on a class.
What this article is basically doing is taking a type system of the third category (C#, allowable operations), and embedding into it a type system of the first category (result of computation). While there are advantages to Haskell's approach, I'm not convinced the metaphor shear is worth the effort here. Your types end up expressing things that no other part of the language cares about or can make use of.
In the case of C#, Java, et al., I think limiting the type system to “expressing allowable operations” is a matter of habit and custom rather than anything intrinsic to the language.
In my mind, the purpose of a computation is to create new/more structure so that, on the surface, your program ends up being a series of simple queries rather than a bunch of messy computations using complex algorithms. Classical OOP, in which the constructor does the computation and the methods query the new encapsulated structure, seems to be a good fit for that. This is subtly different from treating methods as allowable operations, because we're not concerned with transforming the structure, just asking easy questions about it. So in this way, OOP à la C# is very similar to FP à la Haskell.
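The "constructor computes, methods query" pattern can be sketched even in Haskell terms (all names here are hypothetical): a smart constructor does the messy computation once, and afterwards the record accessors are simple, cheap queries on the encapsulated structure.

```haskell
data WordStats = WordStats
  { wordCount   :: Int
  , longestWord :: String
  } deriving (Eq, Show)

-- The "constructor" performs the computation once, up front...
mkWordStats :: String -> WordStats
mkWordStats text = WordStats (length ws) (pickLongest ws)
  where
    ws = words text
    pickLongest = foldr (\w acc -> if length w > length acc then w else acc) ""

-- ...and the "methods" (wordCount, longestWord) just read the result back.
```

After construction, callers only ever ask easy questions; nothing transforms the structure.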
I've been thinking recently about why functional languages and features have been gaining traction in the last half decade. Concurrency & parallelism have often been given as rationales, but I suspect the real reason is the need for programs (and libraries) to behave as intended and the desire to express complicated processes more simply. And the driver for that may be that a 'commodity programmer' mindset for businesses is faltering as the rate of change and complexity of technology increases.
In my hiring experience, readily available programmers are well-versed in the technology of several years ago. This makes sense: they've spent their time maintaining and extending systems designed and built years ago. But as requirements grow and become more exotic (due to competitive business pressure), such systems get used beyond their original design and become more difficult to work with. I believe a 'testing culture' developed to mitigate this risk (along with the rise of dynamic languages to address the need for development speed). The hope was that the cost of testing (which can be pretty heavy for a business) would be outweighed by the savings in maintenance. But the overhead incurred may go to trivialities (i.e., tests that are easy to write) or to struggling with more complex (and probably more valuable) tests, which are themselves prone to trouble. Wouldn't it be nice if the parts of the program simply did what was intended, and testing were used more selectively to ensure the integrity of the system? I think correctness is one of the things functional programming languages encourage.
Types are an important part of many (most?) functional languages and are arguably an important part of the puzzle of building correct program components. I think we'll see a lot more articles musing on what they mean for engineering.
Edit: Also, phantom types may be of interest. Here's an example in F# (the 'kind parameter appears only on the left-hand side, so distinct instantiations can't be mixed):

type FullName<'kind> = FullName of string
    with member x.Value = match x with FullName v -> v
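The same trick in Haskell terms (a hypothetical sketch): the phantom parameter appears on the left but not in any constructor argument, so it exists only at compile time and costs nothing at runtime.

```haskell
newtype Name kind = Name String deriving (Eq, Show)

data First  -- empty declarations, used only as type-level labels
data Last

firstName :: Name First
firstName = Name "Ada"

lastName :: Name Last
lastName = Name "Lovelace"

-- firstName == lastName   -- rejected: Name First vs. Name Last
```

Both values are just strings at runtime, but the compiler refuses to confuse one kind of name with the other.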
But from reading it, it seems like we're trying to graft a Haskell-like type system onto a couple of languages with little support for it in the first place. The impulse is noble, but there's a truckload of boilerplate that introduces another layer of complexity. It seems brittle to me.
Maybe invest the time you'd otherwise spend maintaining this kind of scaffolding into more/better tests?
Also, this article seemed like a long-winded way of saying 'make up types for your own data'. Don't just pass around void*! Well, thanks for the tip.