
Specifically because Lisp/Scheme, as simple as they are, still add a lot of complexity on top of the λ-calculus that isn't needed at all. In fact, I think the λ-calculus has a shield against this very issue that Lisp/Scheme lack, which is the fact that it is much easier to quantify what the "best" abstraction for a given problem is.


In your initial post, you said:

> The λ-calculus - a 70 year old system - already does everything that all those do, simpler and faster...

If you're saying that you can write programs faster in the λ-calculus than you can in Lisp/Scheme, I think you're kidding yourself. If you think they'll execute faster, I still seriously question your view. And if you think a medium-to-large program will be faster to understand if written in the λ-calculus, I think you're completely mistaken.


Of course I can, much faster. Maybe you do not see how high-level the λ-calculus is... take a look at the Caramel syntax. I can translate any Haskell program line by line to Caramel. The λ-calculus, when given a decent syntax, is just Haskell without the types (it still has algebraic data structures, just without the annotations). And I can be much more productive in Haskell than in Scheme; I think that goes without saying.

About the performance of the program itself: yes, the λ-calculus is faster than Scheme, sometimes by an order of magnitude. For one, algorithms on Church-encoded lists fuse by reduction, which means that the code below, in Caramel:

    map f (filter cond (zipWith (+) (map f a) (map f b)))
fuses to a single tight loop - i.e., it iterates through the lists `a` and `b` only once and returns the result, creating no intermediate structure. Scheme would create 4 intermediate structures in this case. Since Scheme is already 2~3x slower than the λ-calculus (running on GHC), that would make it an order of magnitude slower in that case.
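The fusion being claimed can be sketched with Church-encoded lists. This is a minimal illustration in Python (the helper names `church`, `cmap`, `cfilter`, and `to_list` are my own, not Caramel's): a list is represented by its right fold, so `map` and `filter` merely compose functions, and no intermediate list is materialized until the result is finally decoded.

    import functools  # not strictly needed; stdlib only

    def church(xs):
        """Encode a Python list as its right fold: lst(c, n) == foldr c n xs."""
        def lst(c, n):
            acc = n
            for x in reversed(xs):
                acc = c(x, acc)
            return acc
        return lst

    def cmap(f, lst):
        # Composes f into the fold; builds no list of its own.
        return lambda c, n: lst(lambda h, t: c(f(h), t), n)

    def cfilter(p, lst):
        # Skips elements by threading the tail through unchanged.
        return lambda c, n: lst(lambda h, t: c(h, t) if p(h) else t, n)

    def to_list(lst):
        # Only here is a concrete list built - one pass over the encoding.
        return lst(lambda h, t: [h] + t, [])

    pipeline = cmap(lambda x: x * 10,
                    cfilter(lambda x: x % 2 == 0,
                            church([1, 2, 3, 4])))
    print(to_list(pipeline))  # [20, 40]

Note that `pipeline` is just a composed function; the single traversal happens only when `to_list` forces it.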


If you like Caramel, nobody here is going to tell you to use something else. Each to their own.

Agreed, λ-calculus is simple and clean. But what about the machine code being generated? Eventually, each concept in the abstraction has to be translated, often through several layers, into memory locations, register contents and the available instructions, while accounting for cache behavior and out-of-order instruction issue. It is for these and many other reasons that compilers such as gcc have gotten so big and complex.


Which complexities in particular do you find onerous?

Bear in mind that almost nothing I write is simple, self-contained, written in isolation, etc.


Macros (not necessary at all - just use regular functions and enable laziness), ints/strings/bools/lists/etc. and all their hardcoded features (you can encode all of that neatly in the λ-calculus). Every single addition that Scheme makes on top of the λ-calculus is harmful in some sense - algorithms on lists, for example, don't fuse because of that, making Scheme less performant than the λ-calculus; multi-argument functions make the whole language segmented and are much less convenient than curried functions... etc., etc.
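The "you can encode all that" claim refers to the standard Church encodings. A quick sketch in Python (names like `TRUE`, `SUCC`, `ADD` are the conventional ones; the `to_int`/`to_bool` decoders are added here just for inspection): booleans and naturals become plain functions, with no built-in data types required.

    TRUE  = lambda t: lambda f: t        # selects its first argument
    FALSE = lambda t: lambda f: f        # selects its second argument

    ZERO = lambda s: lambda z: z                          # apply s zero times
    SUCC = lambda n: lambda s: lambda z: s(n(s)(z))       # one more application
    ADD  = lambda m: lambda n: lambda s: lambda z: m(s)(n(s)(z))

    def to_int(n):
        return n(lambda x: x + 1)(0)     # decode by counting applications

    def to_bool(b):
        return b(True)(False)

    two   = SUCC(SUCC(ZERO))
    three = SUCC(two)
    print(to_int(ADD(two)(three)))  # 5
    print(to_bool(TRUE))            # True

Lists, pairs, and strings (as lists of numerals) are encoded the same way, which is exactly what makes the fusion argument above possible.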

R5RS, the simplest widely used spec of Scheme, takes dozens of pages merely to describe the language. The λ-calculus can be described and implemented in a paragraph. That makes it much easier to create new backends, for example.
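To back the "implemented in a paragraph" point, here is one possible paragraph-sized evaluator in Python. The term syntax is my own choice - tuples `('var', x)`, `('lam', x, body)`, `('app', f, arg)` - and substitution is delegated to Python closures, so this is a call-by-value sketch rather than a normal-order reducer.

    def evaluate(term, env):
        kind = term[0]
        if kind == 'var':
            return env[term[1]]                     # look the variable up
        if kind == 'lam':
            _, x, body = term
            return lambda arg: evaluate(body, {**env, x: arg})
        _, f, a = term                              # 'app': apply f to a
        return evaluate(f, env)(evaluate(a, env))

    # The Church numeral 2, λf. λx. f (f x), decoded with a successor function:
    two = ('lam', 'f', ('lam', 'x',
           ('app', ('var', 'f'), ('app', ('var', 'f'), ('var', 'x')))))
    print(evaluate(two, {})(lambda n: n + 1)(0))  # 2

Each new backend only has to handle these three cases, which is the contrast being drawn with the R5RS page count.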


* IMO macros are occasionally useful, but YMMV.

* "Hardcoded features" like?

I'm all for purity, but so far, I haven't seen anything IRL that backs up your claims/arguments.



