Why is Idris 2 so much faster than Idris 1? (type-driven.org.uk)
282 points by culturedsystems 42 days ago | 81 comments



I bought the book "Type-Driven Development with Idris" and got several chapters in before getting distracted by something else. It was enough that I'm now annoyed relatively often, in other languages, that I can't pass types as parameters to be altered during execution. I really think they're onto something brilliant. I need to pick it back up and finish the book.


It's worth noting that the book is written by the creator of Idris.


Hmm. My interest is piqued but this book is from 2017 and I am wondering if it is not horribly out of date at this point. WDYT?


There are some minor changes, but they are documented in detail [1]. The Idris2 repository maintains tests to detect any further divergence from the book [2].

I highly recommend the book. Typing in examples and getting experience collaborating with the compiler on edits is worthwhile. Even if you never use Idris(2) again after reading it, the demos on creating state machine DSLs with contextual invariants will leave a lasting impression.

[1] https://idris2.readthedocs.io/en/latest/typedd/typedd.html

[2] https://github.com/idris-lang/Idris2/tree/master/tests/typed...


Well, "minor" includes the switch to so-called "quantitative type theory", where you can include usage counts for variables that inform the compiler how many times they may be used (and hence how long they need to stick around). Also, the compiler's switch to Chez Scheme yields a pretty big performance boost.
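A minimal sketch of what those quantity annotations look like in Idris 2 (the names here are just for illustration): 0 means the argument exists only for type checking and is erased from compiled code, 1 means it must be used exactly once, and an unannotated argument is unrestricted.

    import Data.Vect
    
    -- `n` has quantity 0: it exists only at compile time, so the length is
    -- recomputed by walking the vector rather than read off the erased index.
    vlen : {0 n : Nat} -> Vect n a -> Nat
    vlen []        = Z
    vlen (_ :: xs) = S (vlen xs)
    
    -- `x` has quantity 1: the body must consume it exactly once.
    once : (1 x : a) -> a
    once x = x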


The Idris 2 compiler is built with Chez Scheme? That's pretty cool in my opinion; I didn't know that.


It compiles to Chez Scheme. It's written in Idris 2 (self-hosted).


It is written in Idris2, as the article mentions.

It produces/translates to Chez Scheme code.


You're right, there are several notable improvements in Idris2. I was mainly referring to the amount of code from the book that must be adapted. The QTT features, as you say, are opt-in; by default no new constraints are imposed.


Not really, Idris2 is very similar to Idris1 and the book is more on developing concepts/ideas rather than just being a cookbook of things to do in Idris1.


That book pretty much teaches just the basics. The basics have not changed.


> pass types as parameters to be altered during execution

Wouldn't that be at odds with static typing?


Dependent types let you do this. The type system ensures that you have all the types known at every point, and everything is sewn up before runtime - even though you don't know which type you will actually get ahead of time.

Types are values, and types can depend on values - so if your protocol tells you you should be getting a string, a string shows up in the type signature.
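A minimal sketch of that idea (the protocol and names here are made up for illustration): the kind of field being decoded is an ordinary value, and it determines the type of the decoded result.

    data FieldKind = Text | Number
    
    PayloadType : FieldKind -> Type
    PayloadType Text   = String
    PayloadType Number = Int
    
    decode : (k : FieldKind) -> PayloadType k
    decode Text   = "hello"
    decode Number = 42

Everything is still checked statically: whoever calls `decode` must supply the kind, and the type of the result follows from it.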


Types are not "altered". Types can be parameters. You can have a function that takes a type and returns something else (another type, or even another value). But nothing is changing.


> you don't know which type you will actually get ahead of time

Sounds like it's statically kinded, and dynamically typed. Or is it a place between kinds and types at which Idris operates?


As the workhorse example, consider the length-indexed Vect:

    data Vect : Nat -> Type -> Type
Each Vect has its length in its type. You get a type error if you try the following:

    f : Vect 3 Int -> Int
    f [1, 2, 3, 4] -- error: wanted a Vect 3 Int, found a Vect 4 Int
and with that, consider the following program:

    import Data.Vect
    
    readInts : (n : Nat) -> IO (Vect n Int)
    readInts 0 = pure []
    readInts (S n) = do
      x <- getLine
      rest <- readInts n
      pure $ (cast x) :: rest
    
    main : IO ()
    main = do
      length <- getLine
      ints <- readInts (fromInteger . cast $ length)
      pure ()
The above program doesn't know what the type of `ints` is in the do-block precisely. It has a length specified by the user! But we can reason about its length - and the type system would ensure that its length is propagated and handled properly everywhere we try to use it.


It's just standard dependent types. Types don't (necessarily) get erased, but all the type checking itself is done at compile time.

I don't think it makes sense to think of it as specifically "statically kinded", since kinds (which are just types higher on the universe hierarchy) can also be present at runtime.


Indeed - the Type/Kind hierarchy is unified when you write a dependently typed system. In simply typed lambda calculus, you have a system of values, types, and kinds, which are required to make sense of the structure. In dependent types, you have values, and types, which are also values.


> Type/Kind hierarchy is unified when you write a dependently typed system

i'm being pedantic, but i don't think that's always the case. afaik

  Type : Type
leads to "inconsistency" (sounds like Russell's paradox, but my knowledge of the theory gets flaky here). so e.g. in Coq there's actually a hierarchy of "universes":

> "[The type of Int is Set, ] the type of Set is Type(0), the type of Type(0) is Type(1), the type of Type(1) is Type(2), and so on."

http://adam.chlipala.net/cpdt/html/Universes.html

(though a dependently-typed system may hide universes from the user by default and offer "universe polymorphism" to allow them to gloss over it)


The terminology is nearing its limit when talking about these kinds of things.

I believe that static typing here means that there is no need for runtime dynamic type checks (outside of things like dynamic dispatch, as with Haskell typeclasses).

That is, you can have a body of code where a variable can assume multiple incompatible types, but you always know that it will be the right one.

I would say it is a change in paradigm, similar to how Rust's borrow checker and lifetimes change the paradigm of manual vs GC memory management.


It is more that you can have values like `payload : if byte[TYPE_OFFSET] == TYPE_INT then Int else String`.

You are required to disambiguate the type before anything useful can be done. What has happened is that you have types being passed around by functions and evaluations which reflect dynamic behaviour at runtime - but there is a type statically known at all times for each part of the program.
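A hedged sketch of that (names hypothetical): the type of `payload` mentions a run-time value, and matching on that value is what disambiguates it.

    handle : (isInt : Bool) -> (payload : if isInt then Int else String) -> String
    handle True  n = "int: "    ++ show n
    handle False s = "string: " ++ s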


Chez Scheme has an interesting compiler [1] and has also been adopted by Racket [2].

It's really cool to see a little-used Lisp have such a great impact.

[1] https://news.ycombinator.com/item?id=15156027

[2] https://blog.racket-lang.org/2020/02/racket-on-chez-status.h...


Andy Keep has given many talks on the nanopass compiler, and searching for him on YouTube will give you lots of interesting talks!

Chez, together with the commercial Lisps, is among the best dynamic language systems there are. Apart from the unboxed floats issue mentioned by a sibling post, it produces really good code.

In my tests it has been ever so slightly faster than SBCL when the code does not depend excessively on spending time in the standard library. SBCL's standard library has had a lot more optimization work done at the individual function level (I'm looking at you, isqrt). Most of the difference in my case can be erased by the new SBCL block compilation, though, which Chez does at the module level.


Yes, Chez is incredible!

Please, please Mr. Keep! Let's get Chez unboxed floats!


How practical is Idris (or Idris 2) for real-world production programming right now? It seems to be a pretty natural evolution of Haskell, with dependent types built in from the ground up instead of being sort of achievable with a patchwork of language extensions. Moreover, making laziness optional is a big deal for real-world industrial applications. (Note that even the author of this post indicates at the end that he had significant difficulty with space leaks in the original Haskell compiler for Idris.)

Or, said differently, what's missing in the language or ecosystem to make it more immediately practical? Could Idris be the successor to Haskell with appropriate support?


My experience is from 2015, but from a language perspective the only things I personally saw lacking for prime-time use were performance and the compiler/type-checker not always giving accurate error messages - but overall, nothing considerable compared to Haskell. I don't know how much those areas have improved since. Overall I was very excited about Idris and found the syntax a lot more approachable than Haskell's.

Whether the meager selection of third-party and community packages is an important factor depends a lot on the domain.

I really wish some powerful corp would pick up Idris and put contributions and/or funding into it.


Precisely my experience as well! I also wish Edwin would get some sort of corporate backing so that Idris could be funded to get into production-ready mode. Idris2 is much faster (both in compilation and in execution) than Idris1, but the standard library is not done (nor are parts of the language itself).

I would love to be able to write production code in Idris rather than patching together a million Haskell extensions.


I find it slightly depressing that using global mutable state had such a significant performance impact, given that functional programming (including Idris) is very much in favor of immutable data structures and purity. I know it's best to switch to mutability explicitly only when a bottleneck is found (like how `mut` is explicit and not the default in Rust), but I want to believe that Idris can live in purity. This makes me think my opinions are not far off from religion.


Immutability is an implementation detail with no inherent value. What we actually care about is referential transparency, encapsulation & tracking of effects, performance, abstraction, thread/type/memory safety, etc. The State monad in Haskell is implemented without mutation, but presents an API with mutation. The ST monad is implemented with mutation but presents a pure and referentially transparent external API.

It is fair to say that functional programming is primarily in favor of immutable data structures because it is simpler to implement them efficiently in a setting where referential transparency is the default. Mutation (meaning referential intransparency) is just more complicated, and we need heavier machinery to structure it nicely. We may use linear/substructural types, uniqueness types, regions, etc. for this purpose, but there are a number of choices, and the picture is not as clear-cut as with plain pure lambda calculi. Just remember that mutation is not bad by itself; many functional programmers are often happy to speed up production code with mutation. It's just that it is usually more complicated to structure nicely.


> Immutability is an implementation detail with no inherent value. What we actually care about is referential transparency, encapsulation & tracking of effects, performance, abstraction, thread/type/memory safety, etc. The State monad in Haskell is implemented without mutation, but presents an API with mutation. The ST monad is implemented with mutation but presents a pure and referentially transparent external API.

Both State- and ST-using code is harder to understand than code that doesn't use them. Immutable semantics are easier to reason about. Of course the implementation of those semantics may involve mutation at some lower level. But ideally we would have a runtime system that could implement immutable semantics with no loss of efficiency compared to implementing those lower-level mutations explicitly in our code.


Computer hardware has no concept of immutability or local state on the hardware level. You only have memory regions. If you want to get a feeling for this, I suggest to start learning assembly language.

Even if computers had immutable memory, then you still need to initialize memory regions with constant values and that would be a mutation.

It is possible to protect memory regions from writing, but these regions are still writable by the operating system kernel. And this is purely for safety and not for performance.

(And don't get me started on CPU caches and registers)


I think you need to see it something like this: immutable data structures and purity are considered an excellent base layer for defining code, just as in math, where integrals and differential equations are "purely functional" even though they implement or describe "phenomena of mutation." Mutable arrays are present in Haskell; they are just described at the linguistic level with immutable purity.


What language features are needed in order to implement dependent types? Can you implement it on top of existing features of say Rust or Swift?


A first feature is the ability to define “interesting” types. By interesting I mean “generic” types which do things depending on the types of their arguments. In other words, a way to have functions at the type level. In Haskell you can do this with type families. I don’t know if you can do something similar in Rust (perhaps by putting an associated type in a trait and then implementing that trait for certain input types, so that each implementation maps the input type to the associated type).

A second feature that I think is needed is a way to go between types and values, which I don’t think Rust or Swift can do.

Two fundamental dependent types are the dependent sum and dependent product. The dependent product is the type of a function whose output type depends on its input value. This is a generalisation of the product type: the product type (a,b) is like the dependent product type (x:{1,2}) -> T x where T 1 = a and T 2 = b.

The dependent sum type consists of a tag value of some type and a value whose type depends on the tag. For example, a vector of arbitrary length of floats could be written with a type something like:

  Sum(n:Nat)(Vec n Float)
And a value like

  (2,[1,2])
Sometimes you see online that every pair can be represented as a dependent sum. This is true, but it completely misses the point, because it implies that you should think of it like a generalised product, which you should not do.
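A minimal sketch of both in Idris syntax (names are just for illustration): a dependent function whose result type mentions its argument, and a dependent pair packaging a length with a vector of exactly that length.

    import Data.Vect
    
    -- Dependent product: the result type depends on the argument value.
    zeros : (n : Nat) -> Vect n Double
    zeros Z     = []
    zeros (S k) = 0.0 :: zeros k
    
    -- Dependent sum: a tag (the length) paired with a value whose type depends on it.
    anyLengthVec : (n : Nat ** Vect n Double)
    anyLengthVec = (2 ** [1.0, 2.0])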


You can, in fact, use traits to do type-level programming in Rust [1], but this is type-level programming; it isn't /dependent/ types. The biggest "blocker" for using dependent types is that programs must be /total/, because a program has to be evaluated for it to typecheck! If a program is not total, the typechecker has the potential of getting stuck. The two dependently typed languages I've programmed in, mostly Agda and a tiny bit of Idris, have totality checkers to aid in this. Generally this means that recursion is only allowed if it's structural, and there can be no run-forever loops, i.e., no servers (see the small sketch after the links below).

I think instead, for one of these languages, the ideal would be to opt in should you need it. Dependent types are, by their nature, proof carrying [2], and there are times when you want this, but for a general-purpose programming language, also times when you do not. You can't be total over IO (or any effect, for that matter), so, yeah, skip it when arg-parsing, but then opt in when you're processing your financial transactions or actuating your robot.

[1] https://dpitt.me/talks/tt-for-rust/

[2] http://www.cs.nott.ac.uk/~psztxa/publ/ydtm.pdf
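A tiny illustration of the structural-recursion point above: `len` recurses on a structurally smaller argument, so the totality checker accepts it; `spin` recurses on the same argument, so it must be marked partial.

    total
    len : List a -> Nat
    len []        = Z
    len (_ :: xs) = S (len xs)
    
    partial
    spin : Nat -> Nat
    spin n = spin n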


> The biggest "blocker" for using dependent types is that programs must be /total/, because that program has to be evaluated for it to typecheck!

Totality is orthogonal to dependent types. You can absolutely have non-total programs at the type level: Rust has such programs today in fact!

> Generally this means that recursion is only allowed if its structural, and there can be no run-forever loops, i.e., no servers.

You can have an infinite loop in a total language like Agda or Idris: coinduction is the mechanism.

> Dependent types are, by their nature, proof carrying[2], and there are times when you want this, but for a general-purpose programming language, also times that you do not.

This doesn't make a whole lot of sense to me. Dependent types are "proof carrying" in the sense that any program of the type:

    Int -> Int
Is a proof that there exists a function of type `Int -> Int`, nothing more. I don't know what "opting out" of the "carrying" there would mean.

> You can't be total over IO (or any effect for that matter), so, yeah, skip it when arg-parsing, but then opt-in when you're processing your financial transactions or actuating your robot.

You can be total over IO, and effects do not imply non-totality either.


> Totality is orthogonal to dependent types. You can absolutely have non-total programs at the type level: Rust has such programs today in fact!

Absolutely, the talk I linked to gets at this to some extent. In Rust, partiality causes type checking to fail. You could use it on purpose!

> You can have an infinite loop in a total language like Agda or Idris: coinduction is the mechanism.

For sure, but the totality checker is appeased by sized-types, or at least that's how I know to do it in Agda. I've heard it can be done without them, but I'm not familiar with the approach. This is what I intended w/r/t structural recursion.

> This doesn't make a whole lot of sense to me.

This is because, as the authors state, to type check the program is to evaluate it, so before it can run you have proof that it is correct.

By opting out I'm more talking about the converse, to opt in when you need it. Kind of the opposite of Idris's %partial directive.

> You can be total over IO, and effects do not imply non-totality either.

That's fair, you're right. This is poorly stated.


Idris does support forever loops or servers without losing totality. See https://idris2.readthedocs.io/en/latest/tutorial/typesfuns.h...


Hm, that's not how I'm reading this. An infinite loop is, by definition, partial. What I'm gathering from this is that the stuckness the typechecker can encounter in the face of partiality is okay in Idris, that its typechecker just says, "welp, this is as far as I go."


I’m specifically talking about the “Produces a non-empty, finite, prefix of a possibly infinite result” part on that page. As long as your server generates finite responses in finite times, Idris recognizes your code as total.


When dealing with corecursion (infinite streams) you don't care about whether you terminate, but rather, whether the infinite stream is _productive_ meaning it doesn't stop producing values. The prime example of a function that's impossible(?) to prove productive (for all cases) is filter – since it takes a predicate function that decides whether values are returned or not. If you give filter a predicate that always returns false you will never produce any values, hence that wouldn't be productive.
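A minimal sketch of productivity in Idris, using the prelude's Stream type: `ones` never terminates as a whole, but each element is produced in finite time (the stream's tail is lazy), so the totality checker accepts it, and consuming a finite prefix is ordinary structural recursion.

    ones : Stream Nat
    ones = 1 :: ones
    
    firstN : Nat -> Stream a -> List a
    firstN Z     _         = []
    firstN (S k) (x :: xs) = x :: firstN k xs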


If you have codata you can have infinite loops in total functional programming.

https://en.wikipedia.org/wiki/Corecursion


> A first feature is the ability to define “interesting” types. By interesting I mean “generic” types which do things depending on the types of their arguments

LF does not have higher kinded types but it does have dependent types.


Dependent types are a language feature, simply adding two types --- the dependent product / exponential / function type, and the dependent sum / product / tuple type.

You can definitely add dependent types to Rust, but I see at least the following problems.

1. Rust compile times and dependent typing compile times are both really long. Combining the two sounds like it'd be unacceptably long.

2. Rust promises full type erasure when you're not using "dyn". Predictably erasing dependent types seems to be a hard problem. Heck, even ergonomically giving low-level control over the layout of a dependent sum sounds hard.

3. Dependent types give you a whole bunch of new ways to do things that you can already do in Rust, which is probably a bad thing.

I think "bolt-on" approaches, as Hoare logic is to imperative pseudocode, or SPARK is to Ada, are something that Rust could easily employ, which are more well-tested, and would solve all of the use cases of dependent types in industry.


> Rust promises full type erasure when you're not using "dyn"

I don't know if "type erasure" is the accurate term to use with the generic-specialization scheme that Rust uses; one can trivially write a generic Rust function that demonstrates that types, while inaccessible at runtime, still semantically affect the generated code:

    use std::mem::size_of;
    
    fn foo<T>(t: T) {
        dbg!(size_of::<T>());
    }
    
    fn main() {
        foo(1);  // prints 4
        foo("bar");  // prints 16
    }


Isn't this just compile-time dispatching? I think types 'inaccessible at runtime but still affecting generated code' are table stakes, and basically half of the point of doing these kinds of generics.

Equivalent C++ code (admittedly, C++ templates aren't really generics) would have enough information to shove literal 4 and 16 into the binary there.


This is kind of my point, I've never heard the term "type erasure" used with static dispatch (and Rust code, for better or worse, overwhelmingly prefers static dispatch to dynamic dispatch).

Rust does guarantee type erasure on lifetimes, but lifetimes are only used for semantic analysis, and are completely separate from and unrelated to code generation/type layout.


What do you mean by C++ templates not really being generics? Aren't Rust's generics also implemented as templates (as opposed to parametric polymorphism)?


Rust Generics are Parametric Polymorphism - you must pre-declare the allowed operations on the Type Parameters via Trait Bounds (essentially Haskell Typeclass constraints), and the Generic will fail to compile if you try to perform any other operations in the body (even if the Generic is never instantiated).

In C++ you can do any syntactically valid thing with the Type Parameter in a Template body, and it isn't type-checked until you attempt to instantiate it.


Would it be correct to describe Rust's generics as parametrically polymorphic templates then? I'm still not seeing how C++ templates aren't proper generics though - parametric polymorphism isn't a requirement for that, is it?

I guess I had mistakenly thought that run time dispatch and a single binary implementation was somehow required for parametric polymorphism since it requires treating all types identically. Upon reflection I can see that it's not actually exclusive of compile time dispatch though.

From a pragmatic perspective it seems unfortunate that such generics can't be used in an API without corresponding access to the source code. I saw the ability to work with arbitrary future types at runtime as largely being the point of type erasure.

From a theoretical perspective I'm having some trouble accepting the idea that Rust generics are an example of true parametric polymorphism. In the earlier example a function could behave differently based on the size of the type. That's not uniformly dispatching to a behavior implemented by the type though, which seems like the core idea behind parametric polymorphism to me. (I suppose my concerns are purely academic though. AFAIK runtime reflection in Java allows utterly breaking type erasure but in practice Java generics still seem to be referred to as an example of parametric polymorphism.)


... which does also happen at compile time. Or before you instantiate it, if the template is declared using concepts, in C++20.

The effect without concepts is that you can (in principle) get things that "accidentally" match, like applying "+" to strings; and that the error messages when they don't match are unpleasant. Concepts resolve both.


correct me if i'm wrong - i only spent a few hours with rust: doesn't dbg! run at compile time? and therefore your example is still compatible with type erasure?


`dbg!(x)`, being a macro, expands to something at compile time; at runtime, that expanded code will do something like `write(x.fmt(), stderr)`. it's not relevant here, only `size_of` is.

what's happening here is that rust does "monomorphization" – it generates two versions of `foo`, one for ints and one for strings; then, in each specialized version, `size_of` is specialized to that particular type's implementation of `size_of` (compiler-generated), which returns 4 for i32 and 16 for <whatever that string type is>.

i'm sure it's actually more nuanced than that, but i think that's the general principle.


I'm asking what we're referring to by "type erasure" here; one can use trait objects to erase a type in Rust, and size_of (which, yes, runs at compile-time (the debug macro is just for printing, not for const eval)) does indeed work on such type-erased objects, but it will always give you the size of the fat pointer to the trait object.


IMO, it would be easy to create a new dependently typed language on a subset of the Rust or Swift syntax. Here's a tutorial written by an Idris contributor: http://davidchristiansen.dk/tutorials/nbe/ Following the guide, you can build a dependently typed language with Racket syntax in two hours.

But it is not easy to plug dependent types into the existing compilers of Rust or Swift. In general, a dependently typed language is not Turing-complete, so that the type checking (which now requires running programs) is decidable. That means you have to rule out features like general recursion and arbitrary loops. That could mean reconnecting the wires in the existing compiler. From an engineering perspective, it should be easier to plug dependent types onto a functional programming language, e.g., dependent Haskell, than onto imperative languages.


I'm not sure that this question even makes sense. It's like asking: "What language features are needed in order to implement a borrow checker?"

You don't bolt this functionality on to an existing language; it requires changes to your whole compiler pipeline to be useful.


Don't be so certain; some languages allow a lot of flexibility within libraries. For instance, Clojure has an excellent library implementing Go-style channels with green threads (core.async), all built without any modifications to the core system.


Sure. Linear types and borrow checking are possible in Lisp too, but I wouldn't take that as evidence that it's super easy to make Clojure behave like Rust.

What is possible and what is practical are two different things.


Good point, Lispy languages with sufficiently powerful macro systems can do this. This is part of the reason why no language committee deciding on backwards-compatible changes to the language is needed to develop such a library.

I don't know enough to say whether this would be possible in Rust.


He does it on top of Scheme. Have a look at

https://mitpress.mit.edu/books/little-typer


You might like to read https://serokell.io/blog/why-dependent-haskell , which describes a roadmap for making dependent types practical in Haskell.

Useful steps on the road: singleton types (needed to make it easy to represent basic values as types), higher-kinded types (needed to implement functions at the type level), kind polymorphism (needed to be able to reuse the same functions at the type and the value level). Rust is I think getting the first, is gradually implementing little subsets of the second, and is nowhere near the third.

The other thing you need to worry about is how dependent types interact with existing language features. In particular, if not implemented carefully then they may break type inference, and I imagine Rust's lifetime inference might have similar issues.


With no restriction on what kinds of dependent types are allowed, the type system should at least be Turing-complete.


https://dl.acm.org/doi/10.1145/3371071

Dependent Types via macros, in Racket

Somewhere near the end of the paper is a table of the macro features used, which gives the impression it was only possible in Racket. But the lack of a common terminology for these things makes it hard to compare other languages' macro systems unless you are deeply familiar with them, so maybe it could be adapted.


You can implement any language in any language that is Turing complete. Or practically anything with conditionals, looping and recursion.

So yes to all of your questions.


You mean you could write a compiler for Idris (or any other dependently typed language) in a Turing-complete language. That’s fine, but it’s not a very useful statement.

If you want dependent types IN Rust or Swift then Turing-completeness won’t help you, because neither of them have Turing-complete type checkers. You’d have to modify the compiler for them (and thus change the language itself).


> because neither of them have Turing-complete type checkers

https://sdleffler.github.io/RustTypeSystemTuringComplete/


Sure, and there exists a dependent type system in Rust's trait system. It is wildly impractical to actually use, interacts poorly with the actual language, and is a great example of why Turing-completeness of a type system means very little in practice.


That's almost certainly a bug. At any rate, a proper language with dependent types does not have a Turing-complete type checker and that's what makes it magical. You get incredible power without having to worry about non-termination in your type checker.


It's not a bug, it's due to the trait (aka typeclass) system.

Apparently Haskell has some restrictions that make typeclasses decidable (if -XUndecidableInstances is not set), but Rust has no such mechanism other than numeric iteration limits.


Guaranteed termination of a sound dependent type system is only useful as a mathematical property, not a practical one (yes, I know the mathematical consequences have practical benefit but I mean directly it doesn’t say anything about what a typical user thinks of as “termination”). You could still trivially write an algorithm at the type level which is Turing-complete in a practical sense.

For example, write a type that implements a Turing machine parameterized by a program and a natural number N of evaluation steps to take. Apply that type to Graham’s number or some suitably large natural number. You now have a type that is, for all practical purposes, Turing-complete, even though it is theoretically guaranteed to terminate.
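A hedged sketch of that idea at the value level (which in Idris is also the type level, since types are values; `Result` and the step function are hypothetical placeholders): the interpreter is structurally recursive on its fuel argument, so it passes the totality checker, even though the fuel can be made astronomically large.

    data Result s = Halted s | Running s
    
    run : (step : s -> Result s) -> (fuel : Nat) -> s -> Maybe s
    run step Z     st = Nothing
    run step (S k) st = case step st of
                          Halted st'  => Just st'
                          Running st' => run step k st'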


> For example, write a type that implements a Turing machine parameterized by a program and a natural number N of evaluation steps to take. Apply that type to Graham’s number or some suitably large natural number.

To do that you’d first need to be able to fit Graham’s number into your source code, which unfortunately is never going to happen.

Idris has a termination checker which in practice rejects non-primitive recursion. If you can’t prove to the type checker that your recursive function decrements toward a base case at every step, then your program won’t type check. So there’s no back-door method to get Graham’s number or other silliness like the Ackermann function into your types.


I’ve done it in Agda - it’s not even hard. The naive definitions are accepted without issue.

https://github.com/mokus0/junkbox/blob/master/Agda/ack.agda


The usual answer for such a question about X2 and X1 is that X1 was really slow. It is usually correct, with the proviso that X2 might still be really slow, only less so.

With most computer artifacts, it is hard to tell, prima facie, that they are slow, until somebody makes a faster one. Then the original comes to be recognized as slow (although somebody will still insist it's not that slow). The faster one might still be slow, but that fact remains unknown.

The fact is that almost everything is slow, in the sense that somebody smarter, more experienced, and more diligent could make it faster, often by doing things that the person who wrote the slow version would have found distasteful.


Compared to Haskell, it was slow indeed.


The author first goes with:

> Idris 1 is implemented in Haskell, but that has little (if anything) to do with the difference.

But later they also go on to say:

> Idris 2 benefits from a robust, well-engineered and optimised run time system, by compiling to Chez Scheme.

I must say I'm slightly confused here. Yes, a rewrite makes it possible to avoid all the legacy parts that might slow down the code, but it is also possible that a new language and a new runtime enable optimizations that were not possible before. The author did mention that Chez's profiling tools helped a lot in the rewrite. So I was curious: is it really true that we cannot attribute some part of the speedup to language differences?

Also, I was interested in the rationale behind using Scheme to replace Haskell, but I failed to find the reasoning behind this. Can anyone shed some light on it?


That's not relevant at all. I think you have misunderstood the situation here. In Idris 1 the compiler and code generator are in Haskell, but the runtime is in C. So Scheme isn't exactly replacing Haskell but replacing C, for the runtime. (What replaced Haskell was Idris itself, by virtue of being self-hosting.)

The author even says that it's difficult to write C that deals well with the functional style of lots of little functions, and this is a problem Scheme also has and has solved. That's enough rationale to switch to Scheme:

> Generating good C code corresponding to a functional program which might have lots of small (and higher order) definitions is extremely challenging, so it is better to take advantage of someone else's work here.


Scheme isn't used to replace Haskell. Idris 1 is written in Haskell and compiles by default to C (it comes with a JavaScript backend too, and it has an API for making others). Idris 2 is written in Idris 2 and compiles to Scheme so far (Chez by default; there is also support for Racket and Gambit); it's not tied to Scheme, but there is no pluggable backend functionality yet.

So Scheme is replacing C, and Idris is replacing Haskell. The goal was self-hosting by writing the compiler in Idris. Chez Scheme is a nice compilation target because it's performant and Scheme is a sugared lambda calculus that goes well together with FP languages.


Thanks, guys, for the explanation! I had truly misunderstood this post.


What is Idris?

Critical info missing.


Your question is fair, and it's unfair you're being downvoted. Idris is a dependently typed programming language.

If you imagine a "cube" of fundamental language features, on one corner is "the simply typed lambda calculus" which you can think of as old-school C. Along each of three edges we add a fundamental language feature: parametric polymorphism (values that depend on their type; think 'C# generics'), type constructors (types that depend on types; something akin to C++ template types like `vector` but not like `std::sort`), and dependent types (types that depend on values; but definitely not restricted like `std::array`; something more like MacroC or `enable_if`).

There's not a lot of good analogies for this stuff in commonly used programming languages because the big languages are really basic (not simple!) compared to the advanced stuff the programming language theorists have gotten up to.
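A rough sketch of those three axes, all expressible directly in Idris (names are just for illustration):

    polyId : (a : Type) -> a -> a        -- terms depending on types (polymorphism)
    polyId _ x = x
    
    Pair' : Type -> Type -> Type         -- types depending on types (type constructors)
    Pair' a b = (a, b)
    
    Tuple : Nat -> Type -> Type          -- types depending on terms (dependent types)
    Tuple Z     _ = Unit
    Tuple (S k) a = (a, Tuple k a)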


> Your question is fair, and it's unfair you're being downvoted.

I agree partly; I didn't downvote, but I think in this case the downvotes are not that strange. It's quite well known in HN circles what Idris is, but that's not the point: the article has links that tell you (pointing to idris-lang), and googling "idris edwin brady" (which is at the top of the article) will most likely get you more than you want to know, but at least enough to know what it is. Opening an article and popping in with 'what is this?' when it's not in the first line of the article is the absolute bare minimum, which I think is not great. I do expect a little bit more from this place: clicking on the article, immediately recognising it as a language (that's clear within seconds of scrolling, I would say), and then maybe coming back with 'what is dependently typed?' or something. But going from opening the article to hitting the Wikipedia page explaining dependently typed languages is... well, not much more work than doing nothing and just popping 'huh?' in here, imho.

Edit: actually, just google 'idris2' and you have in the first line exactly what, who, why etc.



