If anyone is confused by how the author uses "proof" in this article: they are referring to the Curry-Howard isomorphism, under which types represent propositions in a constructive logic and well-typed terms inhabiting those types can be read as proofs. `a :~: b` is the type witnessing that `a` and `b` are "equal", i.e. that they are the same type, and the functions `convert1` and `convert2` witness the isomorphism between `a` and `b`.
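To make that concrete, here's a small standalone sketch (my own, not the article's code; `convert` mirrors what the article's `convert1`/`convert2` presumably do) using `(:~:)` from `Data.Type.Equality`:

```haskell
{-# LANGUAGE GADTs, TypeOperators #-}
import Data.Type.Equality ((:~:) (Refl))

-- Refl is the only inhabitant of a :~: a, so constructing it is
-- "proving" that the two types are equal.
intIsInt :: Int :~: Int
intIsInt = Refl

-- Pattern matching on Refl brings the equality a ~ b into scope,
-- letting us coerce between the (now provably identical) types.
convert :: a :~: b -> a -> b
convert Refl x = x

main :: IO ()
main = print (convert intIsInt (42 :: Int))
```

(`Data.Type.Equality` already ships a `castWith` with the same shape as `convert`.)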
The core reason why GHC didn't allow the naïve recursive definition in the first place is that the definition failed the occurs check[0], so using type families and explicitly labeling the types circumvents the problem.
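For anyone who wants to see the trick in miniature, here's a sketch (names are mine, not necessarily the article's): a recursive type synonym is rejected outright, but the same equation as a nullary closed type family is accepted, because GHC only reduces family applications on demand.

```haskell
{-# LANGUAGE TypeFamilies, UndecidableInstances #-}
import Data.Kind (Type)

-- The naive synonym
--     type T = Maybe T
-- is rejected ("Cycle in type synonym declarations"). The closed
-- type family version compiles, since T is only unfolded lazily:
type family T :: Type where
  T = Maybe T

zero, one, two :: T
zero = Nothing
one  = Just zero
two  = Just one

-- T reduces to Maybe T on demand, so we can pattern match as usual:
depth :: T -> Int
depth Nothing  = 0
depth (Just t) = 1 + depth t

main :: IO ()
main = print (depth two)
```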
Thank you for the article, it is a great read. I'm trying to follow Haskell-oriented blogs; some of them are cool to read, while the deeper ones are harder to follow (probably it is about me, not about the author!).
Anyway, I often struggle to understand what problem the author is solving exactly, in the sense of "what is the benefit if we create such a recursive type (or whatever else the theme is)?" There are a lot of articles performing some magic and showing "how this can be done in Haskell", but at the end it is hard to understand what the benefit is, which real-life problem I could solve with this technique, and how to relate it to the real world.
Please do not get me wrong, this is not only about this post; it comes up very often in Haskell-related posts. Or is it only me?
As Simon Peyton Jones put it[0], Haskell is a language where people work as well as play. Sometimes there's an obvious practical use, sometimes it's not so obvious or may not exist at all. In this case, I as a Haskeller don't see a good use for "forbidden types"; however, type families have many practical uses, see [1].
In the same lecture, SPJ elaborates on how the usefulness of typeclasses wasn't apparent at the beginning, only to be discovered later, in part through people playing.
This is very true, but I think there’s a good reason for it - using Haskell forces you to first address many fundamental issues that most languages just let you paper over. The issues still exist in other languages, but you can ignore them, and then years later wonder why so much software is ad-hoc and full of complicated, but preventable, bugs.
E.g. Haskell's laziness forced early Haskellers to think about denotational ways of managing effects, which ended up forcing them to invent modern monadic effect systems. Now there are a lot of people who actually prefer explicit effect management over freely mixing effects and logic, and not just because they have Stockholm syndrome.
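A tiny illustration of that point (my own sketch): in Haskell, the type itself distinguishes effectful code from pure code, so a caller can tell at a glance what may happen.

```haskell
-- A pure function: its type guarantees it performs no effects.
double :: Int -> Int
double x = x * 2

-- An effectful action: the IO in the type advertises the side effect.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)

main :: IO ()
main = do
  greet "world"
  print (double 21)
```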
LOL, I've been programming Haskell full-time professionally for 10 years and think that Haskell is very effective for practical tasks. But in this case I completely agree with you. That quote fits this thread perfectly.
As to the practical implications of a Conat type: you can use it pretty much anywhere you use natural numbers, except it gives you fewer guarantees, since it can be infinite.
To be honest, looking at the code again, I cannot be sure whether it's coinductive or inductive. That is, I'm unsure if you can construct:

    something : T
    something = just something
which would be an infinite stream of `just`s.
If that's not possible, it'd be isomorphic to natural numbers we know.
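For what it's worth, here's a quick sketch (my own, using an ordinary `data` declaration rather than the article's `Maybe`-based encoding) of why Haskell leans coinductive: laziness lets such a self-referential definition typecheck and run, as long as you only force finitely much of it.

```haskell
-- A plain Peano-style type; thanks to laziness it behaves like Conat.
data Conat = Z | S Conat

infinity :: Conat
infinity = S infinity  -- an infinite tower of S's, perfectly legal

-- Probe a conat with a fuel bound, so even infinity is safe to inspect.
atLeast :: Int -> Conat -> Bool
atLeast n _ | n <= 0 = True
atLeast _ Z          = False
atLeast n (S m)      = atLeast (n - 1) m

main :: IO ()
main = do
  print (atLeast 3 (S (S Z)))   -- only two S's available
  print (atLeast 1000 infinity) -- laziness peels S's one at a time
```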
Cool! Yeah I've been lazy as I don't have Haskell setup on my work computer.
So, in short, it's just a coinductive natural number type. I'd say it's very far from useless; I use Conat in Agda pretty often, especially together with other coinductive types.
> Cool! Yeah I've been lazy as I don't have Haskell setup on my work computer.
That's why I find `repl.it` so handy. (No affiliation, and not an ad; it's just one service that I've found useful.) If I want to know something like this from any computer, I can just pop it over to their online REPL. I wasn't sure if it supported GHC pragmas, but it seems by this example that it does.
> So, in short, it's just a coinductive natural number type. I'd say it's very far from useless; I use Conat in Agda pretty often, especially together with other coinductive types.
Maybe I misunderstand the terminology. If it's true that one can't solve `something = just something`, then doesn't that mean that the type is inductive, not coinductive?
No, that's my fault, not yours. It originally did say it compiled, but that turned out to be some kind of error with repl.it, which appeared to be down as part of this outage (https://news.ycombinator.com/item?id=23875692). When I tried again, it didn't compile, and I thought I could sneak edit since you hadn't responded. Sorry!
I have the same question... if a type is used by the type-checker to verify the consistency of the data it’s annotating, then I don’t see how a recursive definition could provide anything new, other than potentially making type checking run forever...
Although maybe there’s something conceptually satisfying about defining bottom as the only type that refers to itself?
It's not you. There's a lot of navel gazing and making people feel like they can't produce working software unless they have higher-kinded types preventing side-effectful reordering of their hylomorphisms. And yet, life finds a way.
[0] https://en.wikipedia.org/wiki/Occurs_check