
Whether a call is a tail-call is trivially decidable; it's an entirely syntactic condition (a call to f is a tail-call just in case it's of the form "return f(...);").
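
For instance (a minimal sketch in C; the function names are made up):

     int sum_to(int n, int acc)
     {
         if (n == 0)
             return acc;
         return sum_to(n - 1, acc + n);     /* tail-call: exactly "return f(...);" */
     }

     int sum_nontail(int n)
     {
         if (n == 0)
             return 0;
         return n + sum_nontail(n - 1);     /* not a tail-call: the addition
                                               runs after the call returns */
     }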

It's true that for a Turing-complete language, it's not possible to statically determine the amount of memory arbitrary programs will use on arbitrary inputs. But typically a language has other means of supporting dynamic memory allocation than frame allocation for function calls; e.g., built-in operations to manipulate lists of arbitrary length, as in the sketch below.
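
For example, a plain loop can already consume memory proportional to its input with no recursion at all (a sketch in C; build_list is an illustrative name):

     #include <stdlib.h>

     struct node { int value; struct node *next; };

     /* Iteratively builds a list whose length is known only at run
        time: unbounded memory use without any recursion. */
     struct node *build_list(int n)
     {
         struct node *head = NULL;
         while (n-- > 0) {
             struct node *p = malloc(sizeof *p);
             if (p == NULL)
                 return head;
             p->value = n;
             p->next = head;
             head = p;
         }
         return head;
     }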

So, if you wanted to only allow recursion via tail-calls, you easily could, and if you wanted to furthermore go ahead and prevent all dynamic memory allocation, you could as well, but you would necessarily be giving up Turing-completeness.



> So, if you wanted to only allow recursion via tail-calls, you easily could

To be fair, we're talking about a language designed in 1959, where a compiler had to run in 4KB of memory or so. At that point you're barely optimizing anything, just doing the simplest transformation you can.

Doing tail-call elimination isn't quite as simple as doing a "JMP" instead of a "CALL", since you also need to overwrite the currently in-use activation record. Consider the C fragment:

     int f(int a, int b)
     {
         if (a > b)
            return f(b, a);
         // ...
     }
Now imagine you're trying to compile this for an ancient register-less machine. We have to replace "a,b" in the activation record with "b,a", but if we just naively copy the values one by one we'll step on "a" before we read it and end up with "b,b".
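
Spelled out, the transformed fragment needs a temporary and a jump (a sketch; the goto stands in for the "JMP" that replaces the "CALL"):

     int f(int a, int b)
     {
     entry:
         if (a > b) {
             /* naive "a = b; b = a;" clobbers a and yields (b, b) */
             int t = a;    /* save a before overwriting it */
             a = b;
             b = t;
             goto entry;
         }
         // ...
     }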

These problems certainly aren't insurmountable, but if it takes a few hundred instructions you've already blown a lot of your budget on one little feature. You could see why a language designer in 1959 would want to leave it out.


I don't see the relationship between Turing-completeness and dynamic memory allocation.

Even in a classic Turing machine I interpret the tape as a single pre-allocated array (of infinite size).

When talking about Turing-complete programming languages, a relaxed definition with finite-size memory is assumed, and to me it seems like hair-splitting whether the program hits that limit via a malloc() call or by advancing a pointer in an internal fixed-size array. Dynamic memory allocation is just a name for a bunch of bookkeeping done on a fixed-size memory.
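
A bump allocator makes that concrete: "allocation" is just a cursor advancing through a fixed array (a minimal sketch; the names and the 1 MB size are arbitrary):

     #include <stddef.h>

     static char heap[1 << 20];    /* the fixed-size backing array */
     static size_t heap_top = 0;

     /* "Dynamic" allocation as pure bookkeeping: advance a cursor
        through pre-allocated memory until the fixed limit is hit. */
     void *bump_alloc(size_t n)
     {
         if (n > sizeof heap - heap_top)
             return NULL;          /* out of (fixed) memory */
         void *p = &heap[heap_top];
         heap_top += n;
         return p;
     }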

In practice, asm.js programs are not allowed to dynamically allocate any memory from the browser: a fixed-size array is created before the asm.js program starts. From the point of view of the VM, programs are just shuffling bytes within a fixed-size array, and yet we can run "Turing-complete" programs in asm.js.


A finite-memory machine is a finite state machine. These are not Turing complete. The notion of Turing completeness becomes uninteresting if we can't even draw this distinction.

"dynamic memory allocation" may not have been the right term to use to denote moving beyond finite state machines, for an audience which wants to consider a Turing machine as having statically allocated infinite memory. But it makes sense if you think of a Turing machine's memory as only storing enough data for the portion of the tape which has already been accessed, and allocating new memory as new tape regions are needed.

It's true that the actual machine sitting on your desk only has a fixed finite memory. In that sense, without a peripheral unbounded store, it's not properly Turing complete.

It's also true that we often find it convenient to analyze it as Turing complete anyway, but this abstraction basically goes hand-in-hand with abstracting away its memory limitations. I don't see any point in pretending you can be Turing complete using only programs with a statically given memory bound (a bound of N bits of memory corresponds to a finite state machine with at most 2^N states, so the memory bound is the logarithm of the state bound); if we're going to pretend, let's pretend the memory is effectively infinite as well.


The problem with analyzing programs in the context of a fixed memory limit is that you would have to revisit all of the proofs whenever the fixed limit was increased. If you can prove something true for unlimited memory, it's also true for any limited amount of memory.

> we often find it convenient to analyze it as Turing complete anyway

It's convenient because the proofs will still be true even if memory sizes grow by 500x. Turing machines don't have a real, material existence, much as real numbers don't (assuming that all actually existing quantities are finite): real numbers can be considered as modeling integers that are arbitrarily large, and the unbounded tape plays the same role for memory.


I'm not sure how to untangle it, but I think you've got your abstraction layers mixed up. Anyway, that infinite tape is critical to Turing completeness. So, strictly speaking, a limited-memory computer isn't Turing complete, just close enough as makes no difference, because in practice it can use as much memory as it wants, via dynamic memory allocation. When you tell the program it can't use as much memory as it wants, by taking away dynamic allocation, then you're not only not Turing complete, you're not even close to the Turing machine model.


I thought languages were Turing-complete, not machines. Languages don't have the problem, because they don't have to have their own memory.


If a language only allows you to specify programs with bounded memory usage, then it will fail to be Turing-complete.



