
HN: How do you get an intuitive understanding of computation itself? While Turing machines kind of make sense in the context of algorithms, can I really intuitively understand how lambda calculus is equivalent to Turing machines, or how lambda calculus can express algorithms? What resources helped you understand these concepts?

I'm currently following http://index-of.co.uk/Theory-of-Computation/Charles_Petzold-... and a bunch of other resources in the hope I'll "get" them eventually.




I can share my experience, because I was asking myself the same question 6 years ago...

My approach was to try and build a Lisp -> Brainfuck compiler. My reasoning was: Brainfuck is pretty close to a Turing machine, so if I can see how code that I understand gets translated to movement on a tape, I'll understand the fundamentals of computation.

It became an obsession of mine for 2 years, and I managed to develop a stack-based virtual machine whose instructions were executed on a Brainfuck interpreter, all implemented in Python. You could do basic calculations with positive numbers, define variables and arrays, work with pointers...
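The bottom layer of such a stack is small enough to sketch completely. Here is a minimal Brainfuck interpreter in Python (a generic sketch, not the commenter's actual code) — note how it is essentially a Turing machine: a tape, a head, and a tiny fixed instruction set:

```python
def run_bf(code, input_bytes=b""):
    """Interpret Brainfuck: a tape of cells plus a moving head,
    which is about as close to a bare Turing machine as languages get."""
    tape = [0] * 30000          # the "tape"
    ptr = 0                     # the head position
    out = []
    inp = iter(input_bytes)
    # Pre-match brackets so loops can jump directly.
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == '>': ptr += 1
        elif c == '<': ptr -= 1
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(tape[ptr])
        elif c == ',': tape[ptr] = next(inp, 0)
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return bytes(out)

# 6 * 8 = 48, the ASCII code for '0':
run_bf("++++++[>++++++++<-]>.")   # b"0"
```

Everything above this layer — stack frames, variables, arithmetic — has to be compiled down to these eight operations.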

On one hand, it was very satisfying to see familiar code get translated to a large string of pluses and minuses; on the other, even though I built that contraption, I still didn't feel like I "got" computation in the fundamental sense. But it was a very fun project, a deep dive in computing!

My conclusion was that even though you can understand each individual layer (eventually), for a sufficiently large program, it's impossible to intuitively understand everything about it, even if you built the machine that executes that program. Your mind gets stuck in the abstractions. :)

So... good luck! I'm very interested to hear more about your past and future experiences of exploring this topic.


(Binary) Lambda Calculus can interpret brainfuck in under 104 bytes:

http://tromp.github.io/cl/Binary_lambda_calculus.html#Brainf...
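One way to build intuition for how a language with nothing but functions can compute anything is Church encoding, which lambda-calculus interpreters like the one above rely on. A sketch using Python lambdas (Python here just as notation for pure lambda terms):

```python
# Church numerals: the number n is "apply f to x, n times".
# Pure lambda calculus has no built-in data, so numbers are functions too.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MUL  = lambda m: lambda n: lambda f: n(m(f))

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

TWO   = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)

to_int(ADD(TWO)(THREE))   # 5
to_int(MUL(TWO)(THREE))   # 6
```

Once numbers, pairs, and lists are encoded this way, interpreting another language's programs is "just" more of the same.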


Are you aware that GNU Guile, which is self-hosted (written mainly in Scheme), can interpret Brainfuck?

https://www.gnu.org/software/guile/manual/guile.html#Support...


I wasn't aware, no. However, interpreting Brainfuck code is the easy part, as I've learned. The hard part was creating a "runtime" that understands memory locations, i.e. variables.

See this question that I asked (and later answered myself) around that time: https://softwareengineering.stackexchange.com/questions/2847...

Most of the project was figuring out things similar to that. You get to appreciate how high-level the machine code on our processors really is, when something like "set the stack pointer to 1234" is one instruction instead of 10k.
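To make the point concrete, here is a hypothetical code-emitter in the spirit of such a compiler (the function name and layout are my own, not from the project): even the simplest "assignment" expands into dozens of tape operations.

```python
def emit_set(addr, value, cur=0):
    """Emit Brainfuck that sets the cell at `addr` to `value`,
    assuming the head currently sits at cell `cur`.
    One 'mov'-style assignment becomes many primitive tape steps."""
    move = '>' * (addr - cur) if addr >= cur else '<' * (cur - addr)
    return move + '[-]' + '+' * value   # walk there, zero the cell, count up

code = emit_set(addr=4, value=48)   # '>>>>[-]' followed by 48 pluses
print(len(code))                    # 55 primitive operations for one assignment
```

And this still assumes the compiler statically knows where the head is; tracking that across loops and branches is where most of the difficulty lives.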


Quite intrigued with your approach; will look into trying something similar for visualising an embedded system.


I wrote a book to help answer these questions: https://computationbook.com/. It won’t necessarily be right for you (e.g. the code examples are in Ruby) but the sample chapter may help you to decide that for yourself.


I did a small homework assignment on the equivalence of lambda calculus and Turing machines when I was an undergrad (during my third year of uni, IIRC). It's in French, but you may be able to follow by looking at the "code" parts. In it I use pure lambda calculus to simulate a universal Turing machine. It's probably not done in the best or most canonical way, but it's what I came up with while discovering lambda calculus at the same time, so it might be a good starting point for where you're at now. You can find the paper here: https://pablo.rauzy.name/files/lambdacalcul.pdf. Hope it helps!



