HN: How do you get an intuitive understanding of computation itself? While Turing machines kind of make sense in the context of algorithms, can I really intuitively understand how lambda calculus is equivalent to Turing machines? Or how lambda calculus can express algorithms? What resources helped you understand these concepts?
I can share my experience, because I was asking myself the same question 6 years ago...
My approach was to try to build a Lisp -> Brainfuck compiler. My reasoning was: Brainfuck is pretty close to a Turing machine, so if I can see how code that I understand gets translated into movement on a tape, I'll understand the fundamentals of computation.
It became an obsession of mine for 2 years, and I managed to develop a stack-based virtual machine, implemented in Python, whose instructions were executed on a Brainfuck interpreter. You could do basic calculations with positive numbers, define variables and arrays, and work with pointers...
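To give a flavor of that middle layer, here's a toy sketch of the idea (far simpler than the real thing, and the names are just mine for illustration). In the actual project each stack instruction was then expanded into Brainfuck rather than executed in Python:

    # Toy sketch of the stack-machine layer (illustration only, not the
    # real project's code). A Lisp-ish expression is lowered to stack
    # instructions; the real project then expanded each instruction
    # into Brainfuck instead of executing it in Python.

    def compile_expr(expr):
        # Compile a nested (op, a, b) tuple into stack instructions.
        if isinstance(expr, int):
            return [("PUSH", expr)]
        op, a, b = expr
        return compile_expr(a) + compile_expr(b) + [(op, None)]

    def run(instructions):
        # Execute the instructions directly on a Python list-as-stack.
        stack = []
        for op, arg in instructions:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
        return stack.pop()

    # (+ 2 (* 3 4)) in Lisp becomes:
    program = compile_expr(("ADD", 2, ("MUL", 3, 4)))
    print(program)       # [('PUSH', 2), ('PUSH', 3), ('PUSH', 4), ('MUL', None), ('ADD', None)]
    print(run(program))  # 14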
On one hand, it was very satisfying to see familiar code get translated into a long string of pluses and minuses; on the other, even though I had built that contraption, I still didn't feel like I "got" computation in the fundamental sense. But it was a very fun project, and a deep dive into computing!
My conclusion was that even though you can understand each individual layer (eventually), for a sufficiently large program, it's impossible to intuitively understand everything about it, even if you built the machine that executes that program. Your mind gets stuck in the abstractions. :)
So... good luck! I'm very interested to hear more about your past and future experiences of exploring this topic.
I wasn't aware of it, no. However, interpreting Brainfuck code is the easy part, as I've learned. The hard part was creating a "runtime" that understands memory locations, i.e. variables.
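To back up the "easy part" claim: a complete interpreter fits in a few dozen lines of Python. A minimal sketch (it skips the ',' input command and uses a fixed-size tape):

    # Minimal Brainfuck interpreter (sketch: no ',' input, fixed tape).
    def brainfuck(code, tape_size=30000):
        tape, ptr, pc, out = [0] * tape_size, 0, 0, []
        # Pre-match brackets so '[' and ']' jumps are O(1).
        jumps, stack = {}, []
        for i, c in enumerate(code):
            if c == "[":
                stack.append(i)
            elif c == "]":
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        while pc < len(code):
            c = code[pc]
            if c == ">":
                ptr += 1
            elif c == "<":
                ptr -= 1
            elif c == "+":
                tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-":
                tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".":
                out.append(chr(tape[ptr]))
            elif c == "[" and tape[ptr] == 0:
                pc = jumps[pc]
            elif c == "]" and tape[ptr] != 0:
                pc = jumps[pc]
            pc += 1
        return "".join(out)

    # 2 + 3: put 2 and 3 in adjacent cells, fold the second into the first.
    print(ord(brainfuck("++>+++[-<+>]<.")))  # 5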
Most of the project was figuring out things like that memory layer. You get to appreciate how high-level the machine code on our processors really is, when things like "set the stack pointer to 1234" are one instruction instead of 10k.
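To make the gap concrete, here's the naive encoding (my illustration; real Brainfuck compilers compress the constants with multiplication loops, but with a flat memory layout the pointer movement cost stays):

    # Naive Brainfuck for "move the pointer to cell 1234, store 42":
    # one '>' per cell moved, '[-]' to zero the cell, one '+' per count.

    def goto(current, target):
        d = target - current
        return (">" if d > 0 else "<") * abs(d)

    def set_cell(value):
        return "[-]" + "+" * value

    code = goto(0, 1234) + set_cell(42)
    print(len(code))  # 1279 instructions for one conceptual step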
I wrote a book to help answer these questions: https://computationbook.com/. It won’t necessarily be right for you (e.g. the code examples are in Ruby) but the sample chapter may help you to decide that for yourself.
I had a small homework assignment on the equivalence of lambda-calculus and Turing machines when I was an undergrad (during my third year of uni, IIRC). It's in French, but you may be able to follow it by looking at the "code" parts. In it I use pure lambda-calculus to simulate a universal Turing machine. It's probably not done in the best or most canonical way, but it's what I came up with while discovering lambda-calculus at the same time, so it might be good as a starting point for where you're at now. You can find the paper here: https://pablo.rauzy.name/files/lambdacalcul.pdf. Hope it helps!
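If you want a quick taste of the encoding trick before opening the PDF: Church numerals, the standard first step, even fit in Python lambdas (a sketch for intuition only; the paper itself works in pure lambda-calculus notation):

    # Church numerals in Python: the numeral n means "apply f, n times".
    ZERO = lambda f: lambda x: x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
    ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    MUL  = lambda m: lambda n: lambda f: m(n(f))

    def to_int(n):
        # Decode by counting how many times f was applied.
        return n(lambda k: k + 1)(0)

    TWO, THREE = SUCC(SUCC(ZERO)), SUCC(SUCC(SUCC(ZERO)))
    print(to_int(ADD(TWO)(THREE)))  # 5
    print(to_int(MUL(TWO)(THREE)))  # 6

Once you believe numbers, pairs, and booleans can be encoded this way, simulating a Turing machine's tape and transition table is "just" more of the same.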
I'm currently following http://index-of.co.uk/Theory-of-Computation/Charles_Petzold-... and a bunch of other resources in the hope I'll "get" them eventually.