Towards λ-calculus (free.fr)
178 points by martyalain 7 months ago | 45 comments



For those who want to grok lambda calculus, I would highly recommend the book "An Introduction to Functional Programming Through Lambda Calculus" by Greg Michaelson[0]. After going through the book, I can say that your brain will adapt to the functional programming paradigm even if you are used to imperative languages. Functional programming will change the way you think and make you a better programmer.

[0] - https://pdfs.semanticscholar.org/d986/546bc3780db3a3c0f8d88b...


I was lucky enough to be taught lambda calculus and related topics at Heriot-Watt by Greg - I'll never forget the moment when he introduced the S and K combinators - been fascinated with those ever since.

[NB Related topics included strongly recommending GEB].
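For anyone meeting S and K for the first time, they can be sketched as curried Python lambdas (Python here is just a stand-in for raw lambda-calculus; the names are the conventional ones):

```python
# K discards its second argument; S applies two functions to the same
# argument and combines the results.
K = lambda x: lambda y: x
S = lambda f: lambda g: lambda x: f(x)(g(x))

# The identity combinator I is definable from S and K alone:
#   S K K x  =  K x (K x)  =  x
I = S(K)(K)
print(I(42))  # -> 42
```

Every closed lambda term can be translated into some combination of just these two, which is what makes them such a rabbit hole.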


> I'll never forget the moment when he introduced the S and K combinators - been fascinated with those ever since.

Rabbit hole warning. If you go down this particular hole you will lose at least a month and maybe more. It will be the most fun you've had in a long time but your productivity will take an extreme hit.


Indeed, my final year project was to build a functional programming language that macro-expanded into lambda calculus, then transformed that into different sets of combinators, so I could look at the performance improvements.

To this day (almost exactly 30 years later) I am staggered that you can express Y in terms of S & K. I know this follows trivially, but actually having coded it and watched it run (slowly) made it very real to me.

NB This was running on an Orion minicomputer with rather limited memory - I had to implement a mark and sweep garbage collector to get expressions of any size to run!
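As a rough illustration of the same idea: under an eager evaluator like Python's, Y itself loops forever, but its eta-expanded cousin, the Z combinator, runs fine. A minimal sketch (names are mine, for illustration):

```python
# Z combinator: a strict-evaluation-friendly fixed-point combinator.
# The inner lambda v: x(x)(v) delays the self-application until needed.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion without naming the function being defined:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # -> 120
```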


Do you still have this? That would be a very nice project to share.


I still have a printout of it - I did find myself considering the idea of re-implementing it in something a bit nicer than 1980s K&R C.


I'd be happy to help if you want.


It's somewhere on my long list of projects to work on - thanks for the offer though :-)


Seconded--I got this book on a whim and didn't expect to finish it very quickly, but I was having so much fun with it that it displaced all the other books I was working on at the time. This was step two on my journey towards embracing FP (step one was seeing OCaml in a class).


It's intended more for kids, but Alligator Eggs is a fun way to play with lambda calculus: http://worrydream.com/AlligatorEggs/


Sounds very promising!

To make sure(er) the PDF sticks around, here it is stored on IPFS: https://ipfs.io/ipfs/QmRKHcRYBpYJBWbiRkPeGA9kWfsjUtZPq45oMi6...


Hello HN folks,

Again and again, as a personal project, I try to find the best minimal foundations for a language. Besides lambdatalk and its "non-standard" regexp-based implementation, I am exploring a standard AST-based implementation with lambdacode.

Am I on the right lambda-calculus path?

   - http://lambdaway.free.fr/workshop/?view=lambdacode_inside_min
   - http://lambdaway.free.fr/workshop/?view=lambdacode 
   - http://lambdaway.free.fr/workshop/?view=helloworld
   - http://lambdaway.free.fr/
If you are willing to read it as pure speculation, I would appreciate that. If not, I am sure you have a waste basket handy.

Thanks


>>> If you are willing to read it as pure speculation, I would appreciate that. If not, I am sure you have a waste basket handy.

Langlands :-)


:) :) :)


Factorials in pure lambda calculus that actually run in non-geologic time:

http://www.flownet.com/ron/lambda-calculus.html



That is actually very cool. You should submit it to HN.


Thanks. I submitted it 3 months ago and it got no comments. https://news.ycombinator.com/item?id=16202754


A lot of good stuff is falling through the cracks nowadays. Even YC content is going unnoticed, e.g.

https://news.ycombinator.com/item?id=16946261

But if a submission gets no comments you can re-submit it after a few hours without setting off the de-duper.


Sometimes (quasi) re-submissions are flagged https://news.ycombinator.com/item?id=16066019


And sometimes they stay alone forever: https://news.ycombinator.com/item?id=16983540


For people who want to learn functional programming, I recommend these resources:

https://github.com/allenleein/brains/projects/9


Learnt functional programming and lambda calculus under Raúl Rojas; happy to see he's mentioned here. He taught us the evaluation step as a vacuum cleaner sucking in the variables.


I began to understand lambda-calculus with this introduction from Raúl Rojas. But I was lost when he explained recursion via the Y combinator. In fact, I think that recursion is best understood when lambdas are introduced to bring a bit of laziness into an otherwise eager evaluation. The (if bool one two) control structure is not so easy to understand under the hood.
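That point about laziness can be made concrete: with Church booleans in an eager language, both branches of an if get evaluated unless you wrap them in thunks. A small sketch in Python (standing in for lambda-calculus):

```python
# Church booleans: a boolean is a function that selects one of two values.
TRUE  = lambda x: lambda y: x   # select the first branch
FALSE = lambda x: lambda y: y   # select the second branch

# In an eager language both branches would be evaluated before selection,
# so we pass zero-argument lambdas (thunks) and only call the chosen one.
IF = lambda b: lambda then: lambda els: b(then)(els)()

print(IF(TRUE)(lambda: "one")(lambda: "two"))  # -> one
```

Without the thunks, a recursive call in either branch would run unconditionally and never terminate, which is exactly why recursion is hard to see until this bit of laziness is introduced.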


I don't get one thing from this article. Am I supposed to be impressed by this code that prints some fruits?

> In order to understand what can be done with so little

    ((lambda (f n) (f f n)) (lambda (f list) ((((lambda (n) (n (lambda (x) (lambda (z) (z (lambda (x y) y)))) (lambda (z) (z (lambda (x y) x))))) list) ((lambda (x y) (lambda (z) (z x y))) (lambda (list) '.) (lambda (list) (join ((lambda (z) (z (lambda (x y) x))) list) (f f ((lambda (z) (z (lambda (x y) y))) list)) )))) list)) ((lambda (x y) (lambda (z) (z x y))) 'apple ((lambda (x y) (lambda (z) (z x y))) 'banana ((lambda (x y) (lambda (z) (z x y))) 'lemon ((lambda (x y) (lambda (z) (z x y))) 'grapes ((lambda (x y) (lambda (z) (z x y))) 'orange (lambda (s z) z)))))))

    apple banana lemon grapes orange

Not too impressive, if you ask me. Maybe if you want to obfuscate your code so nobody will understand it - maybe then.


Lambda calculus is an attempt to build the foundations of computation using a couple of primitives and some re-write rules. It’s the equivalent of ZFC for math.


That's just what I try to do. I think that the lambda-calculus is the Maxwell's equations of the computer field, not LISP as Alan Kay said. The implementation is also an important point. In the wiki page "towards λ-calculus" I use a standard AST-based evaluator, a minimal version of ( http://lambdaway.free.fr/workshop/?view=lambdacode ). In this other wiki page ( http://lambdaway.free.fr/?view=NIL ) I use a completely different implementation, {lambda talk}, where the rewrite rules are the heart of the machine: 1) finding S-expressions from the inside out in the whole code and replacing them with words, and 2) inside lambdas, replacing arguments in the body with the given values. Not only does this implementation work like (what I understand to be) a Turing machine, it also works fine in a wiki context where quoting words and bracketing sentences would be unacceptable. I find this "inverted" implementation worthwhile, from both theoretical and practical points of view.


Lambda Calculus is perhaps the Maxwell's equations of computability, not of the computer field. What Kay might have meant is that Lisp is the Maxwell's equations of programming a computer to do something useful.

Lambda Calculus doesn't even have numbers; you either have to build them out of functions, or else extend Lambda Calculus with numeric terms. Lambda Calculus doesn't have eval or quote, and if it did, they wouldn't work because Lambda Calculus doesn't have a data structure representing Lambda Calculus source code. Lambda Calculus also lacks practicalities like assignable variables and mutable data structures in general, which are part of computer science.
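To make the "build numbers out of functions" point concrete, here is a Church-numeral sketch in Python lambdas (the names are mine, purely for illustration):

```python
# A Church numeral n means "apply f to x, n times".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition: apply f m times on top of n applications of f.
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert back to a native int by applying "add one" to 0.
to_int = lambda n: n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(PLUS(TWO)(THREE)))  # -> 5
```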

McCarthy had another analogy: Lisp for programming is kind of like the advantage of using binary over decimal for digital computers:

> This internal representation of symbolic information gives up the familiar infix notations in favor of a notation that simplifies the task of programming the substantive computations, e.g. logical deduction or algebraic simplification, differentiation or integration. If customary notations are to be used externally, translation programs must be written. Thus most LISP programs use a prefix notation for algebraic expressions, because they usually must determine the main connective before deciding what to do next. In this LISP differs from almost every other symbolic computation system. COMIT, FORMAC, and Formula Algol programs all express the computations as operations on some approximation to the customary printed forms of symbolic expressions. SNOBOL operates on character strings but is neutral on how character strings are used to represent symbolic information. This feature probably accounts for LISP's success in competition with these languages, especially when large programs have to be written. The advantage is like that of binary computers over decimal - but larger. [History of Lisp -> LISP prehistory - Summer 1956 through Summer 1958.]


"Lambda Calculus is perhaps the Maxwell's equations of computability, not of the computer field." I agree with you.

"What Kay might have meant is that Lisp is the Maxwell's equations of programming a computer to do something useful." But what I discovered is that lambda calculus + S-expressions define a consistent infrastructure on which useful superstructures can be built: data structures (pairs, trees, lists, ...) and control structures (recursion). And they make implementation a pleasure, either via an AST (lambda code) or via regexps {lambda talk}.


It's not intended to impress anybody. I write the result first, an unreadable lambda-calculus expression, then I explain the making-of in a few lines that are easier to understand, IMHO, than what I could see elsewhere. Did I miss something in the explanations?
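For instance, once the pieces are extracted and named, the building blocks behind that expression are tiny; a rough sketch of the Church pairs, in Python rather than Scheme:

```python
# A Church pair is a function waiting for a selector.
CONS = lambda x: lambda y: lambda z: z(x)(y)
CAR  = lambda p: p(lambda x: lambda y: x)   # select the first element
CDR  = lambda p: p(lambda x: lambda y: y)   # select the rest
NIL  = lambda s: lambda z: z                # empty-list marker

# A list is nested pairs, exactly as in the fruit expression.
fruits = CONS('apple')(CONS('banana')(CONS('lemon')(NIL)))
print(CAR(fruits))        # -> apple
print(CAR(CDR(fruits)))   # -> banana
```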


[flagged]


It's easier to formally reason about a program that is specified in a small formal language than one that is sloppy, big, and leaks state, yes. When you can formally reason about a program you can prove correctness. And that's the ultimate goal.

Minified JavaScript is not a fair comparison either. You can write scheme and lisp programs that are very readable. It annoys me when people conflate a terse and/or illegible programming style with functional programming.

We have abstractions and large languages so that humans can better understand the intent of a program. Isn't that exactly what this post is about, making a more readable lambda calculus interpreter?


"Isn't that exactly what this post is about, making a more readable lambda calculus interpreter?"

Here is a less terse introduction: http://lambdaway.free.fr/workshop/?view=helloworld, using a regexp based implementation of a "dialect" of lambda-calculus.

Thanks for your interest.


> It annoys me when people conflate a terse and/or illegible programming style with functional programming.

The problem is, functional programming languages are almost always harder to read than other languages. Haskell is the obvious example, but Lisp is pretty bad too. It takes a bit of experience to be able to refactor in the middle of a Lisp block without messing up the balancing of the parentheses. Yes, it can be learned, and it can be learned faster than people expect. But the barrier to entry is much higher than for most languages.

By the way, I love both Haskell and Lisp, and I think more people should use them. But I'm not going to pretend it's easy to get started working with those languages.

F# and other languages with ML-style syntax are probably the easiest to read.


Depends on what is "easy" for you. Haskell is syntactically dense and can be obtuse in the hands of many authors, but on the other hand I find it easier to read well-written Haskell because it's easier to break down into chunks. Similarly for Lisp: keeping track of parentheses can be annoying, but the regularity of S-expressions makes it pretty easy to follow syntactically. I do agree it can be hard to get started in Haskell (though not so much Lisp, especially Scheme), but I think that's for other reasons that aren't really dependent on the syntax or readability.

I think we can all agree APL is incomprehensible moon language, though /s.


APL fascinates me, but it's very hard to understand. I haven't found any intellectual footing as it's so dissimilar to what I've learned already.

Would you have any good book recommendations? Or any recommendations, really? I'm just asking on the chance that you do.


APL really is no harder to understand than any other programming language; in a way it is easier than most, because you can look at the whole program at once without being bothered by all the cruft of loops and such. You're probably mostly thrown off by the symbols.

There was a gorgeous article submitted a couple of days ago:

http://www.jsoftware.com/papers/50/

That should serve as a good intro.


Having a whole program in your field of view isn't the same thing as comprehending it all at once. That would be a horrible fallacy to assert.

Software engineering can be regarded as a discipline in which we strive for the understandability of programs which cannot possibly fit into a person's field of view all at once.


No, it isn't the same thing. But it does make comprehension easier. You can get a feel for this by making a text editing window really small, say 10 x 20 characters and then trying to comprehend an otherwise perfectly legible program through that window. It's going to be much harder than when you have more overview. This is also why we make diagrams to show a simplified version of a program to aid understanding.

APL went all the way with this and assigned many reasonably complex operations on complex data structures to single-character symbols. Once you know those symbols, the code is about as hard to understand as any other language that you've mastered. It's just that because the symbols are not used in any other language family, you are going to struggle quite a bit in the beginning. It's like trying to read Cyrillic.


Fitting an entire function on the screen is useful compared to just a fraction of a function. (However, if you can only see a fraction of a function, the problem is with the function, not with your screen.) Having a hundred functions in view at once (full definitions) is way, way past the point of diminishing returns. And we have folding editors so you can see function headers: names and arguments.

By the way, a 10x20 window: isn't that like a wasteland of vast proportions to an APL programmer?


I was joking, I have actually used APL a bit and quite enjoy it. Array based programming is quite intuitive once it clicks in your head and I think at least a conceptual understanding of APL is very valuable even when using other languages (e.g. NumPy in Python).


> It takes a bit of experience to be able to refactor in the middle of a Lisp block without messing up the balancing of the parentheses

I double-click on one parenthesis and the whole expression gets selected - I can then operate on it. How would I mess up the balancing?

The only way to mess up the balancing is editing without s-expression support. No Lisp programmer would do that.


> The only way to mess up the balancing is editing without s-expression support. No Lisp programmer would do that.

Well, in the past they did.


Long ago.

BBN Lisp (later known as Xerox Interlisp) already had a structure editor in the '60s.

http://www.softwarepreservation.org/projects/LISP/bbnlisp/BB... see page 40ff.

A lot of editing support was then developed in the 70s with the various Emacs editors (TECO Emacs, Zmacs, Multics Emacs, ...) or the Interlisp editors. In the 80s various Lisp IDEs had support for editing Lisp on PCs, Macs, Unix-Workstations, Lisp Machines, ...


Most people that know both find functional languages far easier to read than imperative languages. They are declarative and permit a wider range of abstractions. Operator overloading is a similar example that comes up for debate over and over again. There are people that simply do not want to learn any new abstractions and are happy to read and write the same patterns/code over and over again. The problem for the rest of us is that we may find ourselves in a position where we have to maintain code written by these heathens.


> The problem is, functional programming languages are almost always harder to read than other languages. Haskell is the obvious example

There are many legitimate reasons to dislike Haskell, such as being hard to parse mechanically, but being hard to read is not one of them.

> F# and other languages ML-style syntax are probably the easiest to read.

The syntax of ML's module language is pretty complicated. You cannot look at those “where type” (SML) and “with type” (OCaml) clauses and tell me with a straight face that they were meant to be easy to read. This syntax makes translucent ascription harder to read than it ought to be. It is so bad that many[0] people work around it in various ways, like using the combination of generative datatypes and transparent ascription as a poor man's translucent ascription.

As for F#, I would not call it ML-style, precisely due to the inability to express modular abstraction.

[0] Relative to the size of the ML community, of course.



