The book Type-Driven Development with Idris is great too. Its approach is the principal way I write code these days: write the types up front, then fill out the logic. In some languages the logic itself can even be produced automatically by filling type holes. In other words, based solely on the types you provide, the compiler can guess the implementation of a function, somewhat like Copilot but guaranteed to be correct at the type level, even if not necessarily at the business-logic level.
I had never heard of this book before, now I definitely want to give it a try.
As an aside, isn't type driven development how everyone codes naturally in statically typed languages? Whether it's OCaml or Java, I feel like I try to get the types defined, then put on music and turn my brain off and just line the types up (much more so in OCaml than in Java).
In my experience it's not at all how everyone codes, or even almost anyone, in statically typed languages. Here's an example to elucidate a (potential) difference between the common approach and a real commitment to types:
You're coding a card game, say poker. Games should only start with a shuffled deck.
1. A reasonable, common solution would be to have a type like Deck, and a function shuffle, taking a Deck and returning a Deck (where "taking" can mean it's an instance's member function, and "returning" can mean in-place mutation)
2. A type-driven approach would be to have two distinct types, Deck and ShuffledDeck, and a function that takes a Deck and returns a ShuffledDeck; all procedures henceforth in the game's logic require a ShuffledDeck - _not_ a Deck.
In the type driven approach the possibility of playing with an unshuffled deck has been removed solely through the usage of types, and successful compilation is a proof no cheaters can sneak in a bad deck.
But it's not just that. There's exactly one function that takes a Deck and returns a ShuffledDeck, so a language like Idris knows it can fill a hole in the one place that starts with a Deck and then needs a ShuffledDeck. So in some sense part of the code has been written automatically.
I rarely see the second approach used, though I see analogies of this example all the time.
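To make the second approach concrete, here's a hedged Rust sketch (all names hypothetical, and the "shuffle" is a stand-in to keep the example dependency-free):

```rust
// Hypothetical sketch of the "two distinct types" approach.
struct Deck(Vec<u8>); // cards as plain numbers to keep the sketch small

struct ShuffledDeck(Vec<u8>);

// The ONLY way to obtain a ShuffledDeck is to go through shuffle().
fn shuffle(deck: Deck) -> ShuffledDeck {
    let mut cards = deck.0;
    cards.reverse(); // stand-in for a real shuffle, to avoid a rand dependency
    ShuffledDeck(cards)
}

// Game logic demands a ShuffledDeck; an unshuffled Deck won't compile here.
fn deal(deck: &ShuffledDeck) -> u8 {
    deck.0[0]
}

fn main() {
    let deck = Deck((1..=52).collect());
    // deal(&deck); // <- type error: expected ShuffledDeck, found Deck
    let shuffled = shuffle(deck);
    println!("first card: {}", deal(&shuffled));
}
```

The commented-out line is the whole point: playing with an unshuffled deck is a compile error, not a runtime bug.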
good example. i thought i used type-driven design, but your example made me realize i'm clearly not.
In your example, how would you qualify using an enum "State" with the cases "shuffled" and "ordered"?
I feel like this would help me check all cases in places where it's needed, sometimes with the help of the compiler. But does that qualify as "type-driven"?
In particular, do languages like Idris require encoding all possible states of a state machine as a unique type to leverage the full power of the checking mechanisms? This would seem a bit unmanageable in practice...
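To illustrate the question, a hedged Rust sketch of the enum-field version (hypothetical names); note the invariant is only checked at runtime here:

```rust
// Hypothetical sketch of the enum approach: state is a field, not a type.
#[derive(PartialEq)]
enum State {
    Ordered,
    Shuffled,
}

struct Deck {
    cards: Vec<u8>,
    state: State,
}

fn deal(deck: &Deck) -> Result<u8, &'static str> {
    // The invariant is checked when the program runs, so a violation
    // surfaces as an Err (or a panic), not as a compile error.
    if deck.state == State::Shuffled {
        Ok(deck.cards[0])
    } else {
        Err("deck must be shuffled first")
    }
}

fn main() {
    let deck = Deck { cards: (1..=52).collect(), state: State::Ordered };
    println!("{:?}", deal(&deck)); // a runtime Err; it compiled fine
}
```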
There is at least one language that does this as a form of its core mechanism of execution- Mercury Language, a declarative/logic programming language.
Its type system is fairly complex. To accomplish the goal you describe, functions must be declared as deterministic, non-deterministic, or multi-deterministic: they always have a solution, sometimes have a solution, or have multiple solutions for the same input.
Armed with that information, the compiler will throw an error if you haven't thought of all possibilities in your logic flow. The fact that the type system requires you to be very descriptive in many different ways compounds the number of bugs the compiler catches before your program ever runs... I think someone said "if you can get your code past the compiler, it will work on the first try."
I wish Mercury were more popular, it's an awesome project!
Another example: let's say you're parsing a subscriber name for an email list, and you have a few checks, such as whether it's too long, has whitespace, or contains forbidden characters:
    // Inside `impl SubscriberName` in the book; `graphemes` comes from
    // the unicode_segmentation crate (UnicodeSegmentation trait).
    pub fn parse(s: String) -> SubscriberName {
        let is_empty_or_whitespace = s.trim().is_empty();
        let is_too_long = s.graphemes(true).count() > 256;
        let forbidden_characters = ['/', '(', ')', '"', '<', '>', '\\', '{', '}'];
        let contains_forbidden_characters = s
            .chars()
            .any(|g| forbidden_characters.contains(&g));
        if is_empty_or_whitespace || is_too_long || contains_forbidden_characters {
            panic!("{} is not a valid subscriber name.", s)
        } else {
            Self(s)
        }
    }
And let's say your code to add a new subscriber to the email list only accepts a SubscriberName, not a String:
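(The original snippet for this part isn't shown, so here's a minimal hedged sketch of the idea, with hypothetical names and a condensed parse:)

```rust
// Hypothetical sketch: the newtype is the gatekeeper.
struct SubscriberName(String);

impl SubscriberName {
    // Condensed version of the book's parse (panics, as in the snippet above).
    fn parse(s: String) -> SubscriberName {
        if s.trim().is_empty() || s.chars().count() > 256 {
            panic!("{} is not a valid subscriber name.", s)
        }
        SubscriberName(s)
    }
}

// Accepts only SubscriberName, never a raw String, so every name that
// reaches this function has already passed the checks in parse.
fn add_subscriber(name: SubscriberName) {
    println!("added {}", name.0);
}

fn main() {
    // add_subscriber("  ".to_string()); // <- type error: expected SubscriberName
    add_subscriber(SubscriberName::parse("Ursula".to_string()));
}
```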
Because `parse` gives back a SubscriberName if and only if all of the checks pass, you simply cannot add a new subscriber without every check passing; the function fails otherwise.
This example comes from the excellent book Zero To Production In Rust, which has a chapter on type-driven development [0].
I'm quite sure nobody would do something like that "in production"…
The problem starts with the signature of this function: There is—of course—no function which could take an arbitrary String and return a SubscriberName.
You need to return at least an Option, or better a Result.
But at this point the fun starts. You need to thread this wrapped value throughout your whole program.
Crashing your program (by a panic in the above function, or by an unwrap somewhere later) because someone sent you an invalid String is not an option in real life.
That's a simplified example that I grabbed in the middle of the book before the author refactors it, of course you shouldn't `panic` if you don't need to.
I'm not an expert, and I don't know how the community would treat such enums. I think the bottom line is: if you can't fail to shuffle the cards, whether through an enum-and-matching mechanism or through some other part of the type system, then you're good.
I've never used Idris, but as far as I understand it, encoding state machines in the type system is exactly the kind of thing dependently typed languages in general, and Idris in particular, shine at. This is the prototypical example given, and it's the one Mathew Farwell himself gives in a podcast interview [1].
It's a bit more than that in Haskell (and I guess Idris, but never tried). In Haskell you can write 'undefined' in any definition. 'undefined' is of any type, so the code will type-check. Then you can write a high-level sketch of your program, including functions, purely based on types and 'undefined'. When you are satisfied playing the types puzzle, you proceed to write implementations. This is referred to as "hole-driven programming" or "type hole-driven programming".
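Rust has a rough analogue of this workflow via `todo!()`, whose `!` type unifies with any return type, so a types-first skeleton still compiles (a sketch of the workflow, not the Haskell mechanism itself; all names hypothetical):

```rust
// Sketch of "write the types first, fill bodies later" using todo!().
// Each todo!() type-checks against any return type, like Haskell's undefined.
fn fetch_scores() -> Vec<u32> {
    todo!("load from disk")
}

fn average(scores: &[u32]) -> f64 {
    // This one we've actually filled in.
    scores.iter().sum::<u32>() as f64 / scores.len() as f64
}

fn report(avg: f64) -> String {
    todo!("format a report for {avg}")
}

fn main() {
    // The whole pipeline type-checks even though two bodies are stubs:
    // let avg = average(&fetch_scores());
    // println!("{}", report(avg));
    println!("average of [1,2,3]: {}", average(&[1, 2, 3]));
}
```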
Also, like the other guy mentioned, the types lend themselves very well to auto-completion. It's no coincidence that in Hoogle, you can often find a function by its type. E.g., what could "[a] -> [b] -> [(a,b)]" possibly do?
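(It's `zip`. A Rust rendering of that signature does the same thing:)

```rust
// [a] -> [b] -> [(a,b)] is zip; parametricity leaves it little else to do.
// The result stops at the shorter of the two inputs.
fn main() {
    let pairs: Vec<(i32, char)> =
        vec![1, 2, 3].into_iter().zip(vec!['a', 'b']).collect();
    println!("{:?}", pairs);
}
```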
Indeed, and this is precisely why I hate writing in dynamically typed languages like Ruby or Python; they hinder me from thinking about the shape of the program clearly.
this is putting the cart before the horse given that in any real-world facing application it's not the language that determines the shape of the program but the data and rest of the world that interfaces with your program.
Dynamic languages are in many ways simply a recognition of that fact, namely that data is sparse, incremental, changes and accretes and so on. There's a reason languages like Idris, Haskell and so forth are much more popular in solipsistic domains like theorem-proving than they are in a business application.
> There's a reason languages like Idris, Haskell and so forth are much more popular in solipsistic domains like theorem-proving than they are in a business application.
Scala developers would disagree.
Especially in business applications strong static types are extremely helpful to prevent bugs and model the domains of interest in a proper way.
I agree; I would expect to see such languages feature much more prominently there, especially in the subtle and convoluted business/legal complex.
One may argue that the proof-strength formality can be a hindrance: in such domains, inconsistencies and contradictions abound, and some of them cannot be predicted in advance. But I believe that languages like Idris can accommodate this. Maybe I am wrong, or maybe they can, but not more efficiently than more free-form languages.
...but ok, I am failing to see what I can do with COBOL that I cannot do with Idris, business-logic-wise. lol
I still use python type hints (and TypeScript) - and while they have benefits, I find that they fall short of something like Rust, Haskell, etc (because they are just that - hints). I don't have the surety that the types as annotated are the same as the types at runtime.
Just to make it clear: if you use typeguard's @typechecked decorator and the type hint does not match reality at runtime, it will blow up with an error telling you so.
The same, but at "lint time" (not runtime) will happen with mypy or pyright, although I am sure there are rough edges.
As an extensive user and a fan of these Python type checking tools, I do not remember this happening for some years now. Do you have a particular example where it should be improved?
Such a major failure was my first (and last) experience with TS about two years ago. I had all the "strict" options set and was quite happy finally using a "type safe JS", and boom, it exploded at runtime with some nasty bug which wasn't discovered before the code ran in production. Then I learned that TS's "type system" isn't sound, and you're required to write tests even for the parts that should actually be covered by the "types". That's not helpful at all. That's not what I expect from type safety.
Was this blow up due to a dependency with an 'any' type somewhere, and your code fed the dependency library some garbage? That's what bit me, even with the strictest ts settings, this can't be picked up by tsc.
I don't know anymore what it was concretely. That was two years ago on some project that wasn't even part of my usual duties. We lost some team members and I had to look after some extra projects. As I have quite some experience with JS I was given a TS task.
At this point I was even quite eager to update my JS skills with TS knowledge. All in all it was fun to write this project. But my error was to trust the types like I would when programming on my main project in Scala. Big error…
Since then I think TS is much oversold.
You can do fun things with the types (even some things that I miss in Scala, and there is not much to miss in Scala in general!), but you just can't trust the TS types. So this makes them superfluous in some sense. (Yes, some types are better than no types, especially when it comes to the IDE experience; but as long as any lib call (due to trivial bugs in "typing" files) or some "impossible" input can break things, this is not as helpful as one would like, especially when coming from really strongly typed languages.)
Same. As long as you sanitize dynamic serialized input like json while parsing this is not a problem.
Later versions are very strict even about dynamic types, treating them as Union[] rather than Any, forcing you to check the type before using it. Likewise with strict non-nullable checks.
Don't forget that "type derived code" also happens with any language that has implicit arguments, e.g. type classes in Haskell and implicits in Scala. For example, in Haskell, if I compare two lists-of-maps-of-tuples for equality, there are many nested equality functions involved, but the type class machinery generates this code, at compile time for me. In a language without this, e.g. OCaml, many generic operations like equality, to_string, serialisation, etc become very painful.
Even though "implicits" (especially the explicit ones called `given`s in Scala) are extremely useful, I don't think you need them for type-class derivation. Rust does not use "implicits" for that, I think.
If I had to guess, I would say it's done through Rust's macros.
But I would need to look this up myself; I'm not 100% sure. (I've never heard of any implicits implemented in Rust; the feature has been discussed here and there, but it's not part of the language or the compiler, afaik.)
OT: I've never seen you on the Heise forums ever since. But I guess there isn't much to miss anyway by now…
I prefer not to link accounts, but we've talked not only once. :-)
I've recognized your handle as I remember you as being one of the few people there who actually know what they're talking about when it comes to software engineering.
But I've also moved on. The overall level has dropped to a bottomless pit by now. It's no fun anymore.
> the logic itself can be produced automatically via the filling of type holes
Does that work in anything but boilerplate code? Like, hole filling won't be able to decide between then and else clauses in a conditional, for instance, because both clauses return the same type. And conditionals are one of the most basic data flow structures.
For Boolean conditionals, you're right. But there are richer conditionals that make certain variables available in only one clause of them, e.g., matching on whether an Either in Haskell is Left or Right, or whether a Result in Rust is an Err or an Ok.
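A small Rust sketch of that point: each arm of a match on Result has different bindings in scope, so the two "clauses" are not interchangeable the way then/else branches of a Boolean if are:

```rust
// In a match on Result, the Ok arm has an i32 in scope and the Err arm a
// String, so a hole in one arm cannot be filled the same way as the other.
fn describe(r: Result<i32, String>) -> String {
    match r {
        Ok(n) => format!("got number {}", n),   // only n: i32 available here
        Err(msg) => format!("failed: {}", msg), // only msg: String available here
    }
}

fn main() {
    println!("{}", describe(Ok(7)));
    println!("{}", describe(Err("boom".to_string())));
}
```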
Today I learned that there's a name for the kind of development I do naturally. Modeling my input and output types first is so much more effective at helping me to develop my mental model of the program. I often catch "bugs" before I've started writing code just by thinking about how the data flow will work.
The book is out of date though. It covers Idris 1 and isn't, I believe, being updated. Edwin intimated to me a while ago he'd rather have his bollocks cut off than write another book, and I assume he included updates in that. Idris 2 has significant differences from 1.
Given a signature like foo : A -> A: since we know absolutely NOTHING about A, all we can really do is give it back to you, so foo turns out to be identity, or id.
another might be
bar: [A] -> Nat
we still don't know anything about A, but we know a little about lists, so a good suggestion might be length.
All of the environments are pretty interactive, and you pick and choose. But from time to time there is exactly one possible implementation that meets all the constraints, so the compiler can fill that in.
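The same parametricity argument can be sketched with ordinary Rust generics (ignoring degenerate implementations such as panicking or looping forever):

```rust
// foo : A -> A. With nothing known about A, returning the argument is
// essentially the only total implementation.
fn foo<A>(a: A) -> A {
    a
}

// bar : [A] -> Nat. We know nothing about A but something about slices,
// so length is a natural candidate (though not the only one; see below).
fn bar<A>(xs: &[A]) -> usize {
    xs.len()
}

fn main() {
    println!("{} {}", foo(41), bar(&["x", "y", "z"]));
}
```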
Also "Wingman" for Haskell does some type-driven code generation. (It might be part of the haskell language server now, I get it when I use the vscode plugin.)
When the solution is ambiguous, Idris (and I believe wingman) will supply suggestions that you can pick from.
In your `bar` case, `bar xs = 42` is also a solution. Idris suggests 0 1 2 3 ... I think, it's going off of the constructors for the return type. I expect this technology to improve over time (try to use arguments, etc.)
Idris and Agda will also do stuff short of "fill in the answer" like "case split an argument for me" or "insert the appropriate constructor or lambda and leave a hole inside". This is quite helpful in cases where it's not gonna find the answer, but can type out the next step or two for you. (Wingman might have this, I haven't used it much. I don't know Coq at all.)
On the Agda side, the first couple of sections of "Programming Language Foundations in Agda" will walk you through using that functionality. Idris has a couple of videos that others have pointed out. It's fun stuff.
and yes programming with holes is almost magical. I really like "run as much as you can" then complain when you get stuck. kind of a mind blowing change for me.
While the concept is neat, I struggle a bit to see how it can be useful in practice. Do you have an example for the sufficiently constrained case? That sounds like the most interesting case by far.
I bought a basic b&w two-sided laser printer in 2011 for this purpose, and it's served me extremely well over the years. You can then take the stack of pages to a store like Staples and they will bind it for you for a few dollars.
I got the Brother HL-2270DW and I think they still make it. They even publish official Linux drivers for CUPS! Just make sure to get the extended toner cartridges (TN450), the regular ones are a ripoff.
Staples and others will also print and bind PDFs you bring in. I've done this with purchased books, sometimes they get a bit picky but usually they'll just do it.
I've also installed koreader on mine (dual boot context switched kind of thing), I use that for comics and stuff. Downside is that the notetaking parts don't work on that, but I can just swap back for notes.
Hey, author here! Thank you for recommending my book. I am always happy when someone else finds my work useful :)
> Anyone know if there's a decent way to get a printed copy?
I decided to re-publish the book with Apress, and it should be ready for print by March this year.
> How is it better than tdd in Idris book?
I would not say it is better, or worse. I read TDD and it's a great book, but it was mostly focused on practical stuff (i.e. programming in Idris), and I found it lacking the theoretical explanations (for example, what proofs are and how to do a mathematical proof, or what is a type-checker and how to implement one) which I hoped to cover in my book.
Adding onto the author's own review, I liked the second part on learning Idris, but for my purposes the real gem was that it took me from zero prerequisites to a good intuition for dependent type theory in a very clear way that prepared me to go even deeper into topics like Homotopy Type Theory. The works I treasure are those that can build understandings of really complex topics without the crutch of a lot of prior assumed knowledge. It takes incredible clarity to do that well. This was one of my favorite book finds of recent years.
Type Driven Development with Idris. You can find the link on the first answer of this thread. Among other things, the book shows an example of a state machine that cannot get stuck (by construction) to manage an ATM.
I can’t say. I’m just a hobbyist dilettante here. Maybe that other book is great too but I don’t know it. One fun thing about a language so deep down the rabbit hole as Idris is the mean caliber of people involved is higher than average.
>"In Idris, types are first-class constructs in the language.
This means types can be passed as arguments to functions, and returned from functions just like any other value, such as numbers, strings, or lists.
This is a small but powerful idea, enabling:
o Relationships to be expressed between values; for example, that two lists have the same length.
o Assumptions to be made explicit and checked by the compiler. For example, if you assume that a list is non-empty, Idris can ensure this assumption always holds before the program is run.
o If desired, properties of program behaviour to be formally stated and proven."
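(As a rough mainstream comparison for the "two lists have the same length" item: Rust's const generics can encode a weaker, compile-time-constant version of that relationship, sketched here.)

```rust
// A weaker cousin of the dependent-types example: both arrays must have
// the same statically known length N, checked at compile time.
fn zip_same_len<const N: usize>(a: [i32; N], b: [i32; N]) -> Vec<(i32, i32)> {
    a.into_iter().zip(b).collect()
}

fn main() {
    let pairs = zip_same_len([1, 2, 3], [4, 5, 6]);
    // zip_same_len([1, 2], [4, 5, 6]); // <- compile error: lengths differ
    println!("{:?}", pairs);
}
```

Unlike Idris, this only works for lengths known at compile time; the dependent-types version handles lengths computed at runtime too.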
PDS: I'd be curious to know about any/all languages where types can be passed to functions, returned from functions -- and even generated at runtime by functions... that is, where the language regards types as first-class constructs...
I tried ATS after finding Rust difficult. Zig is becoming my C replacement, and ATS is like a C++ replacement, although I am using SPARK2014 now, and I am looking into ATS' contracts. There is no lack of PLs as tools for whatever you may need to do nowadays. It's refreshing compared to the 1970s and 1980s, for me.
Zig is probably the closest you'll get to this in an imperative language. You can write functions that take types as arguments and return types, and that's how e.g. generic data structures are implemented.
Terra is a language that can also do that, and uses Lua as the metaprogramming language. Types are just Lua values.
But unfortunately, there's a lot of work left kind of half-baked so using the language is a pain... if someone invested a lot of time to make Terra work properly and added some tooling around it, wrote proper docs and so on, it would be a really interesting language.
> Terra is a language that can also do that, and uses Lua as the metaprogramming language. Types are just Lua values.
But then Terra doesn’t do that. Lua does. But not to itself, so it’s not really about passing around types as first-class values any more than metaprogramming Java in Python would elevate Java types to being first-class values.
Terra programs are kind of embedded in Lua, you can't separate the two. So to be very precise: Terra has metaprogramming done via Lua at compile-time, with Terra types being Lua values you can pass around, transform, return etc. in the metaprogramming "layer" of the Terra program.
In Zig, you do the metaprogramming in Zig itself, but you're also limited somewhat as you can only use the comptime "layer" of the program. In my mind, it's very similar.
Zig is just doing "compile time reflection". That's not even remotely close to dependent types.
The whole point of DT is that computed types can be used for type checking, and not only to compute values from them.
DT enables reasoning about programs as mathematical proofs. That's not possible in Zig, afaik.
If merely passing around values called "types" were similar to what Idris does, every language with runtime reflection, and also all dynamic(!) languages, would have DT. That's of course not true.
That's why dependent types are so special (and complicated): You can use them as first class constructs without them being runtime values ever. Everything happens in the type checker.
At runtime there are no types in proper static languages; things like Java / C# are actually quite dynamic in contrast; the dynamic nature was even part of the marketing of those languages back then. (Runtime) reflection is just dynamic typing…
> "Runtime value" is an implementation detail. You could write an Idris interpreter.
Sure. And such an interpreter would still not interpret types at runtime, as types are not (runtime) values.
Only at type-checking time (which is of course a phase before the program actually runs / gets interpreted) you would do anything with the types.
> It'd be useful too. Why should your whole project fail to run just because one file that's not going to get used has a type error?
Whether that would be anyhow "useful" depends on the definition of "file usage".
If there's any reference from the "used" parts of the program to the "unused file" you have a problem:
A type error (in a language where "programs as proofs" is a thing) means that you "proved" false. But from a "proof" of false you can "prove" anything! So you actually lost all proving power. Your program may now explode at runtime with arbitrary type errors anywhere. At that point your "static type system" becomes completely useless.
(Of course, if nothing in your "unused file" is referenced anywhere in the "used" files, nothing happens. But in this case that "unused file" isn't part of your program, so it's completely irrelevant what's inside anyway.)
> Relationships to be expressed between values; for example, that two lists have the same length.
This can be also expressed in any language, even in unityped (a.k.a. "dynamic") ones. You don't need types for that at all…
> Assumptions to be made explicit and checked by the compiler.
This can also be expressed in any language, even in unityped (a.k.a. "dynamic") ones. In theory you don't need types for that at all. (Even though types are the current standard method to achieve that.)
> For example, if you assume that a list is non-empty, Idris can ensure this assumption always holds before the program is run.
You can construct a NEL (NonEmptyList) in any language. That's trivial. You just need to enforce the usage of the right factory method for your list.
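For instance, a minimal sketch of such a NonEmptyList with a smart-constructor factory (hypothetical names, in Rust since that's what's used elsewhere in the thread):

```rust
// Non-empty list: the head is stored separately, so emptiness is
// unrepresentable; the only constructor maintains the invariant.
struct NonEmptyList<T> {
    head: T,
    tail: Vec<T>,
}

impl<T> NonEmptyList<T> {
    // The factory method: fails up front instead of at every use site.
    fn from_vec(mut v: Vec<T>) -> Option<NonEmptyList<T>> {
        if v.is_empty() {
            None
        } else {
            let head = v.remove(0);
            Some(NonEmptyList { head, tail: v })
        }
    }

    // first() is total: no Option needed, unlike Vec::first.
    fn first(&self) -> &T {
        &self.head
    }

    fn len(&self) -> usize {
        1 + self.tail.len()
    }
}

fn main() {
    let nel = NonEmptyList::from_vec(vec![10, 20, 30]).unwrap();
    println!("first={} len={}", nel.first(), nel.len());
}
```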
> If desired, properties of program behaviour to be formally stated and proven.
This has nothing to do with dependent types (directly). A language based on simply typed lambda calculus is "enough" to arrive at this property.
Exactly, you can ship software in unityped languages. Still, how do you express even such a simple thing as a generic list type in simply typed lambda calculus?
It's just that dependent types enable you to express any invariants statically, without actually running the code.
> You can construct a NEL (NonEmptyList) in any language. That's trivial.
That's true, with the caveat that this invariant only exists in your head and (hopefully) the docs. When it's encoded in a type, your compiler becomes aware of the invariant as well. Then it's able to do things that were impossible before; it can catch bugs that you would not (e.g. due to the sheer amount of source code).
Even though there are other approaches to software verification, types are the only one that got mainstream adoption so far.
>I'd be curious to know about any/all languages where types can be passed to functions, returned from functions -- and even generated at runtime by functions... that is, where the language regards types as first-class constructs...
I'm going to guess not very expressive. Zig refuses to have operator overloading because it might be used to call a function that's more expensive than you'd guess so the idea that they'd let types enforce all that much seems unlikely.
But in both cases those aren't types in the sense of type theory. In both cases they are just regular values (of type "Type"). In both cases you will be able to find those values with the help of a debugger. (The difference between Zig and C# being that in Zig those values are constructed during compilation and then mostly thrown away again, whereas the "type" values are just regular runtime data on the CLR.)
The types created in compile-time code are just as "real" as the types you declare in source code.
All types become values at compile time. When you write a compiler, the parser creates ordinary values, perhaps instances of a Type type. The type-checking code operates on these values to find errors according to the rules of type theory. Compile-time functions are a way of calling user-defined code in the compiler, much like a macro but saner, and they can create types that can be used in declarations, verified by the type checker, and used as input to the code generator.
(Reflective use of runtime types in languages like C#, Java, or Go is different.)
Even though you almost didn't say anything wrong here, there is a key misunderstanding:
The term "type theory", which I've used and to which I referred (and which you've picked up), is a technical term from math.
Languages like C# or Zig are not based on type theory.
What is called "types" in those languages has (mostly) no relation to type theory. Those "types" are (mostly) regular values when looked at through the type theory lens.
This is presumably not what you're thinking of, but of course JavaScript uses objects as prototype "types". As is, they won't provide what you want, but if there were some kind of "compile-time" processing of prototypes, such that type assemblies, dependencies, constraints and other relationships could be manipulated or asserted before being applied in runtime code, that might provide some of the "first-class type" functionality mentioned.
Ed: actually, Typescript might be a better host for this.
> PDS: I'd be curious to know about any/all languages where types can be passed to functions, returned from functions -- and even generated at runtime by functions... that is, where the language regards types as first-class constructs...
I was not replying to the article, I was replying to the question from “peter_d_sherman” asking if any other programming language has types as first-class constructs. And Python does that.
typedfalse wrote "unityped", not "untyped" (note the "i").
When type system people talk about types they're implicitly talking about static types, rather than what Python might call a type. Robert Harper lays out the argument [0] that Python is basically a statically typed language with many "classes" but a single type - that's where I was first exposed to this idea.
> The characteristics of a dynamic language are [that] values are classified [...] into a variety of forms that can be distinguished at run-time. A collection of values can be of a variety of classes, and we can sort out at run-time which is which and how to handle each form of value. Since every value in a dynamic language is classified in this manner, what we are doing is agglomerating all of the values of the language into a single, gigantic (perhaps even extensible) type. To borrow an apt description from Dana Scott, the so-called untyped (that is “dynamically typed”) languages are, in fact, unityped. Rather than have a variety of types from which to choose, there is but one!
> And this is precisely what is wrong with dynamically typed languages: rather than affording the freedom to ignore types, they instead impose the bondage of restricting attention to a single type! Every single value has to be a value of that type, you have no choice! Even if in a particular situation we are absolutely certain that a particular value is, say, an integer, we have no choice but to regard it as a value of the “one true type” that is classified, not typed, as an integer. Conceptually, this is just rubbish, but it has serious, tangible penalties. For one, you are depriving yourself of the ability to state and enforce the invariant that the value at a particular program point must be an integer. For another, you are imposing a serious bit of run-time overhead to represent the class itself (a tag of some sort) and to check and remove and apply the class tag on the value each time it is used.
That’s not the normal nomenclature. If that language was used to describe, for instance, Java, we could say that Java only has a very small number of types: the unboxed types, and “object”. But this is not how we usually describe Java, and also not how we normally describe dynamically typed languages.
No, that's not the case. The term "class" here isn't being used to describe OOP classes. You can read the article for more details, but "class" here is being used to describe something like a dynamically distinguished subset of a type: a branch of a sum type in a language like Haskell or SML or Idris.
If I write a well-typed function in one of these languages that consumes a value of type Foo, it's statically guaranteed that it can operate on any value of type Foo, regardless of its class, but it needs to dynamically ascertain - using a pattern match - which class of Foo-typed values it is to do anything meaningful with it.
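A Rust sketch of that type-vs-class split, since Rust enums are sum types (names hypothetical):

```rust
// "Shape" is one type; Circle and Square are its (type-theoretic) classes.
enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
}

// Statically, area accepts any Shape; dynamically, it must pattern-match
// to find out which class of Shape it was actually handed.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Square { side } => side * side,
    }
}

fn main() {
    println!("{}", area(&Shape::Square { side: 3.0 }));
}
```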
Java operates under the same principle, in its way. In Java, barring escape hatches from the type system (casts), if I write a method that consumes an object of type Foo, likewise, it's statically enforced that it can operate on any object of type Foo, and notably it can't operate on any object of type Object. But in order to invoke a method on my Foo-typed object, it needs to do a dynamic dispatch to ascertain which class' methods to invoke. So what Java calls classes do type-theoretic double-duty - in a static dispatch context, Foo is a type; but in a dynamic dispatch context, Foo's subclasses constitute its open set of (type-theoretic) classes.
This is totally normal nomenclature in PLT circles (Harper, after all, is one of the biggest names in PLT circles, and he's borrowed this idea from Dana Scott who's another big name). The comments section of an article on a programming language, especially one deeply rooted in PLT, is probably a reasonable place to be open to it.
> When type system people talk about types they're implicitly talking about static types, rather than what Python might call a type. Robert Harper lays out the argument [0] that Python is basically a statically typed language with many "classes" but a single type - that's where I was first exposed to this idea.
According to this, there are no such things as "dynamically typed" languages, only “unityped” ones. I stand by my claim that the former is a normal, accepted term, while the latter is niche at best.
It doesn’t actually matter what that thing said. Most everybody uses the term “dynamically typed” when referring to languages like Python, Lisp, etc., and this is the first time I’ve heard the term “unityped” - it is not a term in common use.
And (bringing this back to its origin), using terms which actually are in common use, Python does allow storing types as values in variables.
That page is decidedly worth reading and re-reading many times in the future...
I think it boils down to the following:
>"Curry–Howard correspondence [...] is the direct relationship between computer programs and mathematical proofs."
And:
>"If one abstracts on the peculiarities of either formalism, the following generalization arises: a proof is a program, and the formula it proves is the type for the program."
In fact, I'm going to go for "full crackpot" here...
If all computer programs are algorithms, and all mathematical proofs are algorithms, and all types are algorithms -- then a "grand unifying theory" between Computer Programs and Mathematics -- looks like this:
It's all Algorithms.
Algorithms on top of algorithms.
You know, like turtles on top of turtles...
This makes sense too, if we think about it...
Algorithms are just series of steps, with given inputs and given outputs.
That is no different than Computer Programs.
And that is no different than Math...
You can call this series of steps an Algorithm, you can call it a Function, you can call it y=f(x), you can call it a Type, you can call it a Computer Program, you can call it a Math Equation, you can call it a logical proposition, and you can use whatever notation and nomenclature you like -- but in the end, in the end, it all boils down to an Algorithm...
A series of steps...
Now, perhaps that series of steps -- uses other series of steps -- perhaps that Algorithm uses other Algorithms, perhaps that function uses other functions, perhaps that Computer Program uses other computer programs, perhaps that Math Equation uses other math equations, etc., etc. --
But in the end, in the end...
It's all a series of rigorously defined steps...
It's all an Algorithm...
Or Algorithm consisting of other Algorithms... (recursively...)
Patterns of steps -- inside of patterns of other steps (again, recursively...)
Anyway, great page, definitely worth reading and re-reading!
Officially you're supposed to download the book and then edit the exercise solutions into it with Emacs (or VSCode, I think), as you can then run the exercises to see if they type-check (i.e., if they're correct!). However, there's also a not-necessarily-up-to-date interactive in-browser version:
I haven't used Idris, so I'd say it's quite possible that working through the Idris book is also just as fun and relevant for understanding the implications of the Curry-Howard correspondence.
They're not. Only constructive proofs have corresponding programs (namely, the program that actually carries out the relevant construction). You can embed nonconstructive proofs indirectly via double-negation translation (computational counterpart: continuation-passing style), but equivalence of classical formulas isn't preserved.
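The asymmetry this reply describes can be made concrete in Lean (the theorem names below are mine, not from the thread):

```lean
-- The constructive direction always has a program: any proof of A
-- can be wrapped into a proof of ¬¬A. Computationally this is the
-- CPS move — "feed the value a to whatever continuation k arrives".
theorem double_negation_intro (A : Prop) : A → ¬¬A :=
  fun a k => k a

-- The converse, ¬¬A → A, is exactly classical reasoning: it has no
-- corresponding program in the pure constructive fragment.
```

This is why nonconstructive proofs only embed *indirectly*: the translation wraps everything in double negations rather than producing a direct witness.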
> all types are algorithms
Definitely not the case. Types correspond to propositions, not their proofs.
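The distinction — types as propositions, terms as their proofs — can be shown in one line of Lean:

```lean
-- The *type* (A → B) → A → B is the proposition "if A implies B,
-- and A holds, then B holds". The *term* below is its proof — and,
-- read as a program, it is just function application.
theorem modus_ponens (A B : Prop) : (A → B) → A → B :=
  fun f a => f a
```

The proposition itself does nothing step-by-step; only the proof term has computational content.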
All mathematical proofs consist of a series of steps.
>all types are algorithms
All types can be expressed as a series of steps -- with a given input -- and an output of True or False following those steps.
True if the given input is a member of that Type.
False if a given input is not a member of that Type.
If the definition of an 'Algorithm' -- is 'a series of steps', then both mathematical proofs and types -- must be Algorithms...
If we have any debate -- then we are debating the semantics of "what constitutes a step" -- what rigorously defines it -- and what steps may be permitted when...
It may very well be that the Lambda Calculus is the best definition of what constitutes these steps -- but it may very well be that there is a different/better paradigm for looking at them (I don't know myself -- I am trying to determine this)...
Here, you might like the following video (graciously submitted by rbonvall!) for an overview of some of the different possible paradigms that these steps -- might be considered in:
> All types can be expressed as a series of steps -- with a given input -- and an output of True or False following those steps.
No, this is a fundamental misunderstanding of what types are. They're not, in general, subsets of some larger universe of values, and you can't have terms without types attached. (Of course there are "gradually typed" programming languages, but these are really languages with a top type a la `object`, subtyping, and generally a healthy dose of unsoundness).
>They're not, in general, subsets of some larger universe of values
You have a Computer.
The Computer has N bytes of memory.
You want to declare a Boolean type variable.
You tell your compiler this by coding it in the language of your choice.
The compiler will typically allocate 1 to 8 bytes of memory to store that Boolean value -- depending on such things as how many bits your CPU is, what compiler options are, if variables should be byte or dword or qword aligned in memory, etc.
>They're not, in general, subsets of some larger universe of values
But the thing is, that memory allocated for the Boolean variable is now constrained -- to be a subset of all of the previously permissible bit patterns for that memory.
It is constrained to be either 0 (representing False) or 1 (representing True).
If that Boolean value is 8 bits long (let's say) -- then the only two permissible possibilities that exist for it -- are either 00000000 (0 - False) or 00000001 (1 - True).
>They're not, in general, subsets of some larger universe of values
That type, the Boolean -- is very much a subset -- of a larger universe of values...
For a given Byte it can also be determined if it belongs to the Boolean Type
-- via a simple function (aka "series of steps", aka algorithm).
Basically, just compare that Byte's value to 0 or 1. If it's either a 0 or a 1, then it's a member -- and if it isn't one of these values then it isn't.
That small series of steps -- is a function.
A function which determines membership of given data -- an input (in this case, a byte) -- to a type (in this case, a Boolean type).
To validate or repudiate type membership (or lack thereof) of something more complex -- a more complex function/algorithm/series of steps -- may be needed -- but the point is, that's how it's done...
>They're not, in general, subsets of some larger universe of values
So they are all-inclusive sets of some larger universe of values?
?
If that's the case -- then why do neither computer programs nor mathematical constructs that use types -- use a single solitary type that represents the larger universe of all possible values everywhere that a type is used?
> The Computer has N bytes of memory. You want to declare a Boolean type variable. You tell your compiler this by coding it in the language of your choice. The compiler will typically allocate 1 to 8 bytes of memory to store that Boolean value -- depending on such things as how many bits your CPU is, what compiler options are, if variables should be byte or dword or qword aligned in memory, etc.
You're confusing several things here.
- a term is not its runtime representation
- a type is not the set of all terms that inhabit it (though you can get away with pretending it is in simple cases)
- and more generally, operational semantics are not denotational semantics
> So they are all-inclusive sets of some larger universe of values?
No, they're simply not sets. Types are primitive notions in type theories, the same way that sets are the primitive notions of set theory. No reduction of one to the other is necessary or, typically, desirable.
And even if you try to build type theory on set-theoretic foundations - which is 100% the wrong choice if you want to apply it to computational problems down the line - you're still going to run into problems once things start getting recursive. Consider, for example, the type of "hyperfunctions" given by
`type Hyper a b = Hyper b a -> b`
This is too big to be a set, for the usual self-containment reasons. But it's a perfectly legitimate Haskell type, modulo syntax. I've used it in real code.
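("Modulo syntax" because Haskell's `type` synonyms can't be recursive — it needs a `newtype` wrapper there.) A rough TypeScript rendering of the same shape, as a sketch — TS permits a type alias to refer to itself when the recursion sits under a function type:

```typescript
// A "hyperfunction": a function whose argument is a hyperfunction
// with the type parameters swapped.
type Hyper<A, B> = (k: Hyper<B, A>) => B;

// A trivial inhabitant: ignore the continuation, return a constant.
const constHyper: Hyper<number, string> = (_k) => "done";

console.log(constHyper((_k) => 0)); // "done"
```

The point stands either way: the type is perfectly well-formed in these type systems, even though no set-theoretic reading of it as "a set of values" survives the self-reference.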
> use a single solitary type that represents the larger universe of all possible values everywhere that a type is used?
That's what a dynamically typed programming language is.
I think the problem is that the parent tries to look at quite advanced topics without understanding anything about the foundations.
I'm therefore not sure repeating already-stated facts will change much.
He was already given quite good sources to learn more. Now it's on him to understand those things.
(Of course things would be simpler if he would ask questions instead of insisting on his misunderstandings.)
> > use a single solitary type that represents the larger universe of all possible values everywhere that a type is used?
LOL, the almighty uni-type! :-D
But I see more people here who confuse mere runtime values with types…
I fear too much exposure to "dynamic" languages (or maybe also static languages with "type values") causes some severe damage to future understanding of PLT concepts and confuses people.
I think some PLT / functional programming needs to be taught early in school as part of the math education. This would likely prevent some of the damage from being exposed to imperative programming and/or dynamic languages later on.
In computer programming languages, consider Integers and consider Floats (Floating point numbers, i.e., numbers with decimal points, i.e., Float, Double, Long Double, etc).
OK, so let's suppose we define an Integer variable (in whatever language)...
int MyInt = 100;
And now, let's suppose that we want to divide it (using non-integer division!) by 3...
MyInt = MyInt / 3;
Well, now we have a bit of a problem!
You see, even though the answer should be 33.33333 (repeating) -- MyInt can only hold an Integer value!
It can only hold 33!
Some languages will permit this operation -- and the result of the operation will be 33 -- which is the wrong answer.
Some languages will prohibit this operation.
But the point is, is that true division, not explicitly integer division -- is an operation.
Some operations/operators -- make sense to perform on data that is of a specific type -- and some do not!
It's perfectly OK to perform true division on a floating point type (well, ignoring division by zero, which creates problems no matter what!) and put that value back into that floating point type -- but it doesn't make sense to perform true division on an integer -- and put that value (now wrong!) back into the integer!
At least not without an explicit typecast -- which tells the compiler "I am OK performing this non-standard operation on this type -- I am OK with the side-effects..."
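The pitfall can be recreated with TypeScript's `bigint` type, whose division is integer division by definition — a sketch, since TS's plain `number` (a float) can't show it:

```typescript
// bigint arithmetic is integer-only: division truncates toward zero,
// silently discarding the fractional part.
const myInt: bigint = 100n;
const result = myInt / 3n; // 33n, not 33.333...

// And mixing the two numeric types is rejected before the program
// ever runs:
// const bad = myInt / 3; // compile error: can't mix bigint and number

console.log(result); // 33n
```

So the type both constrains the representable values *and* decides which version of the `/` operation you get.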
So that's one example.
Another example is adding an integer value -- to a string.
Another example is concatenating, or performing another string operation -- to an integer...
Basic understanding is this -- types prevent operations on data -- where it doesn't make sense to perform the operation on the given data type!
A type determines a subset -- of the set of all possible operations (which are basically functions!) that "make sense" to be applied to it!
So types are in fact subsets!
Subsets of possibilities, subsets of various amounts of bits and bytes, subsets of operations/functions which make sense to be permitted on those types!
"Programs as proofs" is only a thing in the context of mathematically pure languages.
Almost all programming languages aren't pure.
That's, on the other hand, why proof assistants are very unusable as programming languages; usually you can't do anything with them besides proving stuff. Actually running "useful" code is mostly not possible. Things like Haskell or Idris try to bridge both worlds, but this isn't straightforward. How to actually do anything in a pure programming language is still an open question. Monads are a kludge, but not very practicable for most people…
So to summarize: "Normal" programs don't correspond to proofs in any way!
"Evaluation corresponds exactly to simplification of proofs..."
(The rest of the video, both before and after this statement, contain more context...)
That being said, I agree with you that languages which are used primarily for theorem proving (AKA, "proof assistants") -- are usually not as applicable to as broad a range of programming paradigms and problems as most general purpose computer programming languages are...
You might wish to re-watch the linked talk as you just restated your confusion (which was already the cause of a lot of down-votes in this thread).
Wadler says there:
"Evaluation [of simply typed lambda calculus!] corresponds exactly to simplification of proofs…"
If you didn't get that context you actually missed the whole talk.
You really need to understand: "programs as proofs" is only a thing in languages with strongly normalizing type systems. This implies that the language is pure and does not contain general recursion.
In a language with mutation (which is obviously not pure) you can destroy any "proof" by just writing to a variable, e.g. switching a single bit in the case of a boolean value.
Terms can obviously only be proofs if it's not possible to change a term ("proof") from true to false (or the other way around) at will! Once some term is determined as having some type it may not change any more. This obviously rules out any language with mutation or I/O.
That's why you can't write applications in Agda, or mathematical proofs in Haskell. (In the general case; you could do both with "tricks"; but those are indeed tricks.)
>"Programs as proofs" is only a thing in the context of mathematically pure languages.
>Almost all programming languages aren't pure.
Yes, but any Turing-Complete language -- is Turing-Complete...
Challenge: Show me a Mathematical Algorithm -- that can be expressed in Math, that cannot be expressed using symbols and symbol manipulation on a Turing Machine -- that is, on any plain, regular computer...
Hint: Every computer that Mathematica runs on -- is a Turing Machine...
Extra Hint: Mathematica can express, manipulate (and typically solve!) -- any expression in Mathematics...
Extra Extra Hint: Programming Languages need not be "pure" -- to be Turing-Complete.
Your question is trivial anyway: An algorithm is something that can be performed by a (Turing-complete) machine.
Therefore there exists no algorithm that can't be computed (on a Turing machine). That's by definition!
But this has absolutely nothing to do with which kinds of languages can be used to prove anything in math.
To prove something you need algorithms that are guaranteed to produce results. Your machine must halt to spit out a result!
But as everybody knows there is no such guarantee for arbitrary algorithms. The question whether some arbitrary algorithm halts is undecidable.
Therefore Turing-complete languages are "too powerful" to be used to prove things. Because you can't know whether a "proof" in such a language can be computed at all.
Or to formulate it differently: A computer can't compute uncomputable numbers, or decide undecidable problems.
But math can—of course—express uncomputable numbers. (I hope you're able to google some definitions of such numbers on your own…)
And just a reminder: This site is not the right place to learn basics.
Also your tone is getting unacceptable.
That said, please keep in mind that the internet does not forget… Your childish behavior will be remembered until the end of time. (In case you've forgotten, you're posting here under your RL name, boy.)
>An algorithm is something that can be performed by a (Turing-complete) machine.
We agree so far...
>Therefore there exists no algorithm that can't be computed (on a Turing machine). That's by definition!
We agree so far...
>But this has absolutely nothing to do with which kinds of languages can be used to prove anything in math.
Here we disagree!
It has everything to do with which kinds of languages can prove anything in math!
If a language is Turing-complete, it can run any algorithm.
If a language can run any algorithm, it can be programmed to perform symbolic manipulations of Math equations that are expressed in symbolic form.
Basically, it can perform Mathematics.
This is what Mathematica is and does.
Mathematica could be programmed -- in any Turing-complete programming language.
If Mathematica could be programmed in any Turing-complete programming language, and Mathematica can be used to solve any Mathematical problem, then any Turing-complete programming language could be used to program what Mathematica does -- which is Mathematics, basically.
Which includes Mathematical proofs, incidentally.
>Your machine must halt to spit out a result!
This is a contradiction. Functions (and Programs) -- do not need to halt to spit out a result.
>The question whether some arbitrary algorithm halts is undecidable.
Algorithms (and programs and functions!) -- can be tested for halting by actually running them!
If they halt, they halt (99.9999% of them do not -- unless they are coded wrong!)...
>Or to formulate it differently: A computer can't compute uncomputable numbers
>or decide undecidable problems.
Yes -- but this reframes all mathematical proofs as being uncomputable and/or undecidable.
Challenge: Show me a mathematical proof which is either uncomputable and/or undecidable.
>But math can—of course—express uncomputable numbers. (I hope you're able to google some definitions of such number on your own…)
Computers can express uncomputable numbers (and any other concept in Mathematics) symbolically.
Computers can express proofs (and other operations in Mathematics) via symbol manipulation.
This is what Mathematica does.
Variables, after all, are symbols.
They can be symbols of things in the real world and/or they can be symbols of ideas...
But any symbol if defined in a Turing-machine, by whatever method -- can be symbolically manipulated in that Turing-machine.
Any language which is Turing-complete -- has the ability to manipulate symbols in this way -- if properly programmed -- like Mathematica does...
Conclusion: All Turing-complete programming languages -- have the ability to express Mathematical proofs, like Mathematica does, if properly programmed to do so...
>Also your tone is getting unacceptable.
To who, exactly?
?
Perhaps pure logic is interpreted as "tone" -- but the error of that particular type of interpretation -- is not on my side of the fence! <g>
>That said, please keep in mind that the internet does not forget… Your childish behavior will be remembered until the end of time.
I hope it does! <g>
The Internet will remember me (for a long time!) -- for my dedication to self-evident truths, first principles, logic, reason, clear thinking and simple explanations...<g>
The Internet, on the other hand, tends to forget people who endlessly confuse, distract, complain, propagandize, derail, make mountains out of molehills (and molehills out of mountains!), speak with "forked tongues" and engage in Selective Abstraction, Arbitrary Inference, Equivocation, Prevarication, Duplicity, Straw Man arguments, Dichotomous Reasoning -- or one/some/all of the above!
I'm not saying that that's you...
I'm just saying that the Internet tends to forget such people... <g>
You know, I guess it's their "right to be forgotten" -- for one or more such activities! <g>
>(In case you've forgot, you're posting here under your RL name, boy.)
<g>
Well, we know for a fact that I am neither:
a) A GPT-3 or other bot...
b) A paid disinformant and/or Troll...
c) Someone with such a large degree of narcissism and/or agenda -- that they feel the necessity to continuously railroad other posters to their point of view (remember, you engaged me in conversation first -- I did not engage you!)
d) One/Some/All of the above...
>And just a reminder: This site is not the right place to learn basics.
No site on the Internet is the right place to make illogical arguments to logical people...<g>
A huge overlap. Idris is aimed at being a general purpose language with proof assistant characteristics while some of its peers are more theorem provers built with general purpose language characteristics.
LEAN's syntax looks really great. I wish Scala could take some inspiration.
Scala's syntax is quite similar already anyway, but it would need proper Unicode support and some other "tuning", like the `:=` and the `let monadic ← effect` syntax, imho.
One difference I see is Lean uses type classes while Idris uses implicits. Can anyone comment on the pros and cons of that decision? (Or pros and cons between the two languages in general?)
It also seems that Idris was created as a general-purpose language first, whereas Lean started as a proof assistant and only recently added general-purpose features. Is that evident when using the languages? A quick look at Lean's general-purpose side seems reasonably similar to the Idris experience AFAICT.
Imo the biggest difference is the "heritage", Lean is Coq inspired, Idris Agda inspired. This manifests in how people prove things. Idris has dependent pattern matching, so proofs are written as normal terms. Lean has tactics and proofs are small tactics scripts (which then generate the proof term behind the scene).
Other than that, the difference is mainly in infrastructure/libraries and community.
Well, Idris (v2) is very special since it has modalities for compile-time erasure and linearity. This sets it apart from all the other dependently-typed languages. It makes it possible in theory (and this is currently being experimented with) to implement efficient-by-construction code.
Linearity is exclusive to Idris 2, compile time only values/run-time erasure are supported by all major proof assistants. Coq and Lean have the Prop type (compile time only) while Agda allows annotation of variables as run-time irrelevant.
And how does it compare to Arend? Does anybody know?
Are there any other HoTT-based languages out there? (Agda seems to also have some support for the "cubical version"; whatever that means, I'm actually clueless.)
Cubical type theory is a way to get to a purely constructive foundation for Homotopy Type Theory. Super cool stuff from a foundations of math perspective and has some surprising applications to topological quantum computing but if it isn’t your rabbit hole you can probably not pay too much attention to it.
The whole "foundations of math" part is extremely exciting!
But I'm not an mathematician so I don't understand much (if anything). More in the programming languages "department" of interests.
But if something is "good enough" to describe the whole of math, it could be useful for getting strong guarantees about programs, I guess. So I would be quite interested in how this HoTT stuff could be useful in practice. What could type systems based on it enable?
My limited understanding is that you additionally get "paths" between types and type constructors which denote equivalence between them.
But I have no clue what one could do with that (besides doing math, of course).
I know I'm missing something, but this seems like the functional version of object classes.
I've mostly favored procedural programming and only use objects when I have to because the abstraction can make the program harder to reason about, especially when there's a lot of inheritance.
But I've heard some hype about types in Rust, and now this. The free chapter in the manning book even introduces types in terms of real world objects.
So, for someone who doesn't love objects, why should I love types?
IMO types are fundamentally about restriction. Limiting the shape of your data and what you can do with it, so that this information becomes explicit - making it easier to reason about the program, both for yourself and the compiler. I think this is illuminated very well by sum types. If you have a variable of X | Y, you know that it's either exactly X or Y. You can pattern match to handle the case that it's an X, and when you then call a function on it, you know exactly which function it is.
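The sum-type point can be sketched with a TypeScript discriminated union (a hypothetical `Shape` example):

```typescript
// A sum type: a Shape is *either* a circle or a square - nothing else.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "square"; side: number };

// Pattern matching on the tag: the compiler narrows the type in each
// branch, so s.radius is only accessible where s is known to be a
// circle, and an unhandled case is a compile error.
function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius ** 2;
    case "square": return s.side ** 2;
  }
}

console.log(area({ kind: "square", side: 3 })); // 9
```

Contrast with inheritance: here the full set of cases is closed and visible at the definition site, so nothing can "look like an X but behave differently".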
Inheritance is so nightmarish to me because people tend to use it in such a way that I often don't have the slightest clue what I'm actually dealing with, buried under layers of indirection that seem to serve absolutely no purpose. If I have an object of class X, is that actually an X, or something that just looks like it and actually behaves differently because someone decided to override a method in a subclass?
Interfaces tend to be much less monstrous IME, so I don't think the issue is with abstractions themselves. Especially when people use interfaces to abstract over common behavior between data types, rather than trying to future proof for something that never occurs.
...C. I thought the context here made it obvious, the person I replied to was talking about programming in a procedural manner and eschewing abstractions.
I never even mentioned anything that even relates to runtime types, I don't know why you'd bring them up?
And saying that classes, inheritance and overriding are orthogonal is bizarre. They're intricately linked and at the heart of OOP. It doesn't even make sense to define overriding without inheritance.
>In the end OO and FP are anyway the same thing at the core. You can mechanically translate one into the other:
Coinductive types have destructors so they look a bit OO-ish, but that's it. The key part of OO isn't having fields, but inheritance.
> I never even mentioned anything that even relates to runtime types, I don't know why you'd bring them up?
You've complained about dynamic overrides. You're talking about dynamic overrides because in the case of "static overrides" (which in "normal" OOP languages are usually overloads) there can't be any confusion about which method gets called. The compiler, and therefore your IDE, knows that. Dynamic overrides are only a thing when dynamic dispatch is involved. Dynamic dispatch is directly related to runtime types.
> And saying that classes, inheritance and overriding are orthogonal is bizarre. They're intricately linked and at the heart of OOP.
That's only the case for languages like C++ / Java and clones.
You can of course have inheritance and overriding without classes. See Self or, more prominently, JavaScript. (No, JS does not have classes. It by now has some syntax sugar called "class", but that's just a simple source translation to JS's prototype system under the hood.)
> It doesn't even make sense to define overriding without inheritance.
Of course you can have overriding without inheritance.
You can create type-class hierarchies where functions on the leaves override functions above them in the hierarchy.
This is possible without having any inheritance relation between the objects involved. I could show you Scala code that does exactly this.
In a system with prototypes there is also no need for any inheritance relationship to override something. You can just grab the prototype of an object and change ("override") methods on it. You can do that from anywhere in your program in a language like JS…
And in theory you could have inheritance without overriding. Also of course no classes are needed for that. (Even I don't know any language that does something funny like that as it would be quite limiting).
> Coinductive types have destructors so they look a bit OO-ish, but that's it.
No, that's not it.
You can create an algorithm that can translate code from the FP form to the OO form (and back).
I didn't invent this. Someone actually created such a "converter". (I would need to dig a little to find the relevant paper and YouTube talks again; maybe you're faster with googling :-D. If not, feel free to ask again, then I'll go digging in my PDF chaos.)
> The key part of OO isn't having fields, but inheritance.
Well, the inventor of OO himself would strongly disagree… ;-)
OO is about message passing.
The C++ / Java "OOP interpretation", OTOH, is quite an ill abnormality.
> I've mostly favored procedural programming and only use objects when I have to because the abstraction can make the program harder to reason about, especially when there's a lot of inheritance.
In functional programming, you generally abstract in different directions than you do in object-oriented programming. IMO, functional abstractions make things easier to reason about, since they generally constrain what any given block of code might do.
> So, for someone who doesn't love objects, why should I love types?
Because powerful type systems mean that a lot of mistakes will be compiler errors instead of bugs.
Where objects can hide state or I/O in a program, types and functional type-driven development make each function very clear as to its inputs and outputs, and part of that is not hiding state anywhere it doesn’t make sense. You might still have a global logger because that makes sense, but generally you know the things you have access to because they’re in the function signature.
It can break down if you aren’t careful, and there’s some nuance where you might still store a configuration data as an object property, but it’s a clearly typed property that’s assigned at initialization rather than hidden state that magically appears due to some hidden program flow.
Objects are just "things" in the memory of a computer. Anything in a program is an object. That's why, for example, the C guys are constantly talking about objects, even though C does not have any special facilities for handling more complex kinds of objects.
Types, on the other hand, are an abstract concept. You could think of them like sets of objects (even if this is a little misleading, as types are not really sets but types, which is a different, but in some ways quite similar, mathematical concept).
What do look quite similar are types and classes. In OOP languages the two are bound to each other: a class describes a set of objects (as a class is a kind of template for objects) and at the same time introduces a corresponding type (which is, as mentioned, also "a set" of objects).
The interesting thing about types is that, being an abstract concept, they exist independently of running your program. (Objects, OTOH, don't manifest until they get constructed at runtime in the memory of a computer.) With the help of a sound type system it's possible to prove things about the runtime behavior of a program. That's where the value of types comes from. They allow you to construct more correct programs by ruling out errors when type-checking the program, which in the case of a static type system can be done at compile time.
Stronger type-systems allow to constrain the possible runtime behaviors of a program better and more precisely. They help to make illegal program states unrepresentable.
It also becomes possible to derive correct-by-construction implementations of some program behaviors solely from the types. That's what type-driven development is about.
So, if you don't like runtime bugs and do want correct programs, you should like strong static typing.
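The "make illegal states unrepresentable" idea is the Deck/ShuffledDeck example from upthread; a sketch in TypeScript using a branded type (all names here are hypothetical):

```typescript
type Card = string;
type Deck = { cards: Card[] };
// The brand exists only at the type level; only shuffle() can mint it.
type ShuffledDeck = Deck & { readonly __brand: "shuffled" };

function shuffle(deck: Deck): ShuffledDeck {
  const cards = [...deck.cards];
  // Fisher-Yates shuffle
  for (let i = cards.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [cards[i], cards[j]] = [cards[j], cards[i]];
  }
  return { cards } as ShuffledDeck;
}

// Every later step in the game demands a ShuffledDeck, not a Deck:
function deal(deck: ShuffledDeck): Card {
  return deck.cards[0];
}

const fresh: Deck = { cards: ["A♠", "K♥", "2♦"] };
// deal(fresh);       // compile-time error: Deck is not ShuffledDeck
deal(shuffle(fresh)); // OK
```

A successful compile is then the "proof" that no code path plays with an unshuffled deck - the class of bug has been moved from runtime to the type checker.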
Anyone else finding this documentation is pretty hard to navigate? Makes me think this isn't a serious language, but rather an academic toy or resume builder.
The main repository is in the top 1% of stargazed repositories on GitHub, and the project has a mailing list, Discord server, IRC, AND Slack.
It has been forked over 300 times, and has over 100 contributors.
What in the world counts as "serious" to some of you people?
Sometimes I read HN and ask myself if people here know at all how hard it is to make something people want, and how many people know what the real-world thresholds are for recognizing when you have made something that people want.
I guess if everyone hasn't heard of it, people here don't think it's serious.
I find Idris incredibly interesting. Specially combined with the output it can generate in other languages. The book of it is also a masterpiece, even if you never program in it. Already lots of "serious" hacked together programming languages.
The only downside I see is the lack of a package manager.
>What in the world counts as "serious" to some of you people?
For a programming language? Would be good to know what projects have used it. Most languages people develop are done as either a hobby or to serve some kind of research/academic purpose. Is Idris used for any production grade software?
I mean you tell me, which would be bullying to you? You work for years on something and someone tells you it's not "serious" because they haven't learned to browse a source tree, or the guy telling him to read a little more?
TBH I actually think the documentation for Idris is quite good, but your overall point is correct: it's definitely somewhere between "academic toy" and "serious language". It's not really usable for anything real world at the moment, and - like I comment every time it pops up here - just as it was building up momentum the author scrapped the whole thing and started over (Idris 2), and even though I've been making that comment for years it still hasn't recovered. The book everyone recommends to read on it is based around a dead version of the language; the tooling it has you use (which is a huge part of working with a dependently-typed language) depends on Atom, which is now dead, and Idris 2 still doesn't have comparable IDE support.
I find it really frustrating because I like the language a lot, and I want to be using dependent types in my real life ASAP. Right now I'd be looking more towards Lean, though, which seems to be more modern and developing rapidly.
The Atom implementation has been replicated in VSCode and works alright. It was good enough for me to learn how Idris worked and go through the book's exercises.
> Anyone else finding this documentation is pretty hard to navigate?
I can't speak to these docs in particular, but I think documentation regarding type systems in general can be hard to digest. And it probably varies by reader.
E.g., some people find "The Little Typer" [0] to be a wonderful intro to its topic. Personally, I find the writing style impenetrable.
> Makes me think this isn't a serious language, but rather an academic toy or resume builder.
If the documentation turns out to be useless to the majority of readers, then you might be right that it's a sign of under-investment. I'm not sure if that's happening here.
There was a good interview on Corecursive [1] about Idris, so IMHO that's some evidence that some people are serious about the language.
I think it's a chicken-and-egg issue. It's not the tool's fault per se, but the culture of how the tool gets used has made things worse.
For example, most Read the Docs documentation I've seen is nothing better than a wiki page: a bunch of links to click through, arranged in a somewhat messy way.
So I guess your issue with this one is that you're missing some structured presentation of the knowledge on this page.
https://www.manning.com/books/type-driven-development-with-i...