
He's saying though that those two can't be separated -- the functions the machines are computing are themselves changing with time.


the functions the machines are computing are changing with time.

Sure. But that's not what Turing was talking about. Give a UTM a computable function, and the UTM will compute it. Specify an infinite class of functions F(t), and of course a UTM will have trouble.
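
To illustrate the distinction with a toy example of my own (the family x + t is just an illustration, nothing from the article): each F(t) is itself an ordinary, fixed computable function, and a UTM handles any single one of them; the trouble the article points at only starts if the function is supposed to change mid-computation.

    # Illustrative time-indexed family F(t); every member is a fixed,
    # perfectly computable function of x.
    def F(t):
        def f_t(x):
            return x + t      # assumed example family, nothing deeper
        return f_t

    print(F(3)(10))   # F(3) is the fixed function x -> x + 3; prints 13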

There's no contradiction here, just a really weird way of defining "function".


It sounds like he's stating that a UTM can't simulate an arbitrary lambda function then, since that returns a variable function (a second-order UTM: a UTM can't simulate a UTM, which makes it not a UTM?).

In order to evaluate a lambda you need to be able to compute an arbitrary function of arbitrary variables; with infinite resources, shouldn't you be able to process infinitely many variables? Surely there's work on UTMs and lambdas.


It sounds like he's stating that a UTM can't simulate an arbitrary lambda function then

The concept doesn't make sense. Lambda functions don't do anything by themselves; they're just notation. The process of beta reduction is what makes computation happen. Beta reduction rewrites the lambda terms, hence they're no longer the same functions. What you're saying is something like: a computer can't compute "computation".
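
To make that concrete, here's a throwaway sketch of a single beta-reduction step in Python, with my own toy term encoding (it ignores variable capture for brevity):

    # Toy lambda terms: a variable is a string, ("lam", x, body) is an
    # abstraction, ("app", f, a) is an application.
    def substitute(term, name, value):
        # Replace free occurrences of `name` in `term` with `value`.
        if isinstance(term, str):
            return value if term == name else term
        if term[0] == "lam":
            _, param, body = term
            if param == name:       # `name` is shadowed below this binder
                return term
            return ("lam", param, substitute(body, name, value))
        _, f, a = term
        return ("app", substitute(f, name, value), substitute(a, name, value))

    def beta_reduce(term):
        # One step: ((lam x. body) arg)  ->  body[x := arg]
        if (isinstance(term, tuple) and term[0] == "app"
                and isinstance(term[1], tuple) and term[1][0] == "lam"):
            _, (_, param, body), arg = term
            return substitute(body, param, arg)
        return term

    # ((lam x. x) y) reduces to y
    print(beta_reduce(("app", ("lam", "x", "x"), "y")))   # -> 'y'

The point being: the reduction step rewrites the term itself, which is where the "computation" lives.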


Example: a function accepts as input the source of another program (assume it's computable and halts). Thus, the original function is variable; can it be simulated by a UTM for any arbitrary valid input? I'm probably using the wrong terminology here, but the overall logic is what I'm getting at.
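
Roughly what I mean, as a Python sketch (assuming, purely for illustration, that the supplied source defines a function named f and that it halts on the given argument): one fixed interpreter function whose behavior varies only because the program text it's handed varies.

    # One fixed function that takes another program's source as input.
    # Assumes the source defines `f` and that f halts on the argument.
    def run(source, argument):
        namespace = {}
        exec(source, namespace)          # load the supplied program
        return namespace["f"](argument)  # apply its f to the argument

    square_src = "def f(n):\n    return n * n"
    print(run(square_src, 7))   # prints 49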


The input to a function can't change (its domain is a fixed set). If the input set changes, then you've defined a new function; that's the mathematical definition of a function. You're talking about a set/class of functions, so you would need one UTM simulation per function.


Turing was simply describing what people do: in a math test, I give you a problem, you solve it using an algorithm (without cheating, that is, without interacting with your environment), and you hand it back to me. He idealized what a human does (infinite time and memory), and then, as a final step, he said: "A machine, also idealized, can do that." The important thing is that in the definition of what was later to be called "Turing computability", interaction during the computation is verboten.



