Thank you for that reply. I started reading his page and it was getting stupid. He seems almost deliberately unwilling to understand the definition of a Universal Computer (by which he must mean a Universal Turing Machine).
It would surprise me if his example functions weren't actually computable. In his paper on the subject he comes up with a scenario in which a machine needs to compute a function of n variables that vary with time. He then places the restriction that reading each variable takes one unit of time, so that once you've read the first one the value of the second has already changed, and so you can't do the computation.
Then he goes on to say that if he supposes a computer capable of doing n computations at once, he can get around the restriction he's imposed, because he's able to read all the variables at the same time.
How is this supposed to make me think that Turing was wrong?
Having worked on software/hardware interaction quite a bit, I've actually seen this sort of thing happen. Lots of data comes in on different ports and you need to read it all at the same time. This is solved by latching the data into a buffer which the CPU can read at its leisure.
But the fact remains that the function you are computing is actually computable (he gives the example of summing the data). He's just produced an artificial restriction on the computer being incapable of reading the data fast enough. His n processors are a complicated way of solving what we solve with buffers.
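To make the buffer point concrete, here's a minimal sketch of the latch-then-process pattern in Python, with a made-up read_port() standing in for the hardware (my toy example, nothing from Akl's paper):

    import random

    NUM_PORTS = 8

    def read_port(i):
        # Stand-in for a hardware port whose value keeps changing under us.
        return random.random()

    def latch_inputs(num_ports):
        # Snapshot every port into a buffer; in real hardware this is a single
        # latch strobe, here it's just a quick copy of the current values.
        return [read_port(i) for i in range(num_ports)]

    def compute(buffered):
        # The actual computation (Akl's running example is a sum), done at
        # leisure on the frozen snapshot.
        return sum(buffered)

    snapshot = latch_inputs(NUM_PORTS)   # the inputs are fixed from here on
    print(compute(snapshot))             # the ports may keep changing; we don't care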
And besides, who said there's a lower bound on the time a Turing Machine takes per instruction? Sounds like this guy just needs a faster machine. After all, a Turing Machine is an abstract idea, so let's just redefine its operating speed by a factor of n and his function becomes computable.
How is this supposed to make me think that Turing was wrong?
Akl would certainly not claim that "Turing was wrong" -- the submitted title is obvious flamebait.
He's just produced an artificial restriction on the computer being incapable of reading the data fast enough.
Well, he is assuming a different model of computation. Whether it is "artificial" or not is debatable: in an actual physical system, you can't pause the world, fix all the inputs, and then compute for an arbitrary length of time.
In any case, this is theoretical work: investigating the nature of computation under a different set of assumptions is perfectly valid, and I think it's very interesting.
let's just redefine its operating speed by a factor of n and his function becomes computable.
For a given function, sure. But unless your machine can perform an infinite number of computations per time step, no machine will be fast enough to compute all possible functions, which is the point.
(BTW, I took some classes with Akl as an undergrad. He's a very sharp guy, and a great professor -- and definitely not a crackpot.)
>>> For a given function, sure. But unless your machine can perform an infinite number of computations per time step, no machine will be fast enough to compute all possible functions, which is the point.
I mentioned this somewhere further down in the comments, but a machine that could perform an infinite number of computations per time step wouldn't be subject to Turing's proof of the undecidability of the halting problem over Turing machines. That probably furthers Akl's claim.
I most definitely think this is interesting in that it is a new set of assumptions about what a computer is and what it should be capable of doing. All we have to do now is define a new (probably recursively defined) automaton capable of infinite growth in its possible inputs.
Then again, if a computer had an infinite number of inputs, that raises even more interesting questions about what problems might then become decidable.
neilc: "Akl would certainly not claim that "Turing was wrong" -- the submitted title is obvious flamebait."
Akl: "The consequences to theoretical and practical computing are significant. Thus the conjectured "Church-Turing Thesis" is false. It is no longer true that, given enough time and space, any single general-purpose computer, defined a priori, can perform all computations that are possible on all other computers. Not the Turing Machine, not your laptop, not the most powerful of supercomputers. In view of the computational problems mentioned above (and detailed in the papers below), the only possible universal computer would be one capable of an infinite number of operations per time unit."
Me: It sure sounds like he's claiming that the Church-Turing Thesis (and therefore Turing) is wrong.
The point he's making is salient enough if you've worked with sensor networks, specifically if you're trying to track physical objects and predict what they will do next. You run up against a sort of computational corollary to Nyquist-Shannon (http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_samplin...).
In real life, we just sample data as fast as the computation will allow and accept that a large fraction of the world's information will not be part of our computation and that the computation is imperfect. For example: I'm sensing some time-varying function, perhaps the track of an unpredictable aircraft, and it takes 10 sensor sampling cycles to compute my approximation of speed/velocity/turn rate/etc.; I'm not going to compute every sample and watch my picture of the world lag behind by 10n (for n samples) -- I'm only going to run the computation on every 10th sample. It doesn't make Turing wrong, it just makes [sequential] sampling and computation of multiple unpredictable inputs slower than the rate at which the unpredictable inputs appear.
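A rough sketch of that "every 10th sample" loop, with a made-up sensor signal (my toy example, not anyone's real tracking code):

    import math

    DECIMATION = 10   # run the expensive estimate only every 10th sample

    def read_sensor(t):
        # Stand-in for an unpredictable time-varying input.
        return math.sin(0.3 * t) + 0.1 * math.cos(7.0 * t)

    def estimate_state(window):
        # Stand-in for the expensive computation (speed/turn rate/etc.);
        # here just an average over the recent window.
        return sum(window) / len(window)

    window = []
    for t in range(1000):
        window.append(read_sensor(t))
        window = window[-DECIMATION:]        # keep only the recent samples
        if t % DECIMATION == 0:
            state = estimate_state(window)   # the skipped samples simply never
                                             # enter the computation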
But if you had a computer that had enough I/O bandwidth to be able to read all the sensors in one clock cycle, and enough CPU power to be able to run the computation on those readings in one clock cycle, you'd have a computer that was capable of performing the computation optimally. When you add more sensors to the configuration or you have a new computation that is too complex for this computer to solve in one clock cycle, then the computer can no longer be capable of solving that computational problem as stated. Obviously, you can change the problem to make it acceptable to sample instead as you mentioned.
If I'm understanding the principle in question and Mr. Akl's result, he states that the number of computations required per time slice to solve such a problem is unbounded. If the definition of a universal computer is that the memory and run time are infinite, but the number of computations performed per time slice must be bounded, then there will always be a problem that a realized universal computer cannot solve.
A comment up this thread states that it is silly to have a restriction that all the data must be processable in one time slice because you can just buffer it. I'm not aware of any buffering solution that does not require additional computation to read the value and store it in the buffer.
In his short list of misconceptions and responses, I think that #3 is closest to this argument. I don't know if it would be an absolute requirement that the initial collection of the data remain an intrinsic part of the universal computer, as opposed to the simpler definition that the observed data required for solving the problem must simply be supplied to the computer.
I think I'd lean toward the latter definition, because it doesn't seem right to me to expect the universal computer to be responsible for both observing and recording every event in the universe, as opposed to just expecting it to be able to perform a calculation on any given set of observed events.
How is this not complete gibberish? A function is a map from a value (which may be a set) to a value (which may be a set). The identity/definition of a function is the values it relates. Values don't change, by definition. All he appears to be saying is that a single computer can't compute multiple functions simultaneously, which would be the non-statement of the century.
EDIT: let's say the function computes the average age of all living people. So at time t=1 the domain is people alive at time 1 and at time t=2 the domain is people alive at time 2. He's saying that the input to the function is a variable called "people" which keeps changing, but that's not how it works. The input to a function is always a fixed value. The functions defined at times 1 and 2 are different functions because they're defined on different domains.
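One way to write the distinction down precisely (my notation, not Akl's): each time t has its own function with a fixed domain,

    f_t : D_t \to \mathbb{R}, \qquad D_t = \{\text{people alive at time } t\},

and f_1 and f_2 are simply different functions because D_1 and D_2 differ; no single function has a "changing" input.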
He's restricting the input functions so that they're parameterised by the variable that marks the current state of the computer. In reality this occurs often (the variable represents time) but mathematically there's no need for this. It seems to me that he's refining the model down to a computer which we can realistically construct and showing that there are even fewer things it can compute than its theoretical brethren.
Ok, I think I get it. It's really about entropy. When you create a UTM at time t=1 it's only universal at time t=1, because at time t=2 the information content of the universe has increased, and now you have the TM formerly known as universal.
At first it struck me as absurd: "you can't build a computer for functions that don't exist." Um, right. But that's exactly it. Tomorrow there will exist functions that don't exist today.
Or, from the classification/sensor perspective: "choose any number, I can then add 1 to it and my number will be larger than yours." Um, yeah. But again the UTM is part of the universe and therefore can't contain more information than the universe. Tomorrow the quantity of information in the universe will have increased and your formerly universal TM will fail.
The speed issue is kind of a red herring. If you created the ultimate CISC computer that could compute the entire universe in a single clock cycle, it would be constrained by size rather than speed. And tomorrow it would fail.
Yup, that's a good way of putting it. So his claim that the CTT is false is misleading. Really he's showing that it doesn't hold for real machines. Perhaps that's not a particularly novel statement but I certainly found it enlightening :)
But even more to the point: the objection is that no particular 'computer' (like some physical device or at least a performance spec) could be a universal computer in his terms, because past a point it can't keep up.
The same idea rules out the possibility of a universal computer (barring magic), too, except in the heads of theoreticians: any particular real-world computer is just a finite state machine, and a Turing machine with e.g. 32 GB of space on the tape isn't actually universal, either (or even a Turing machine).
the functions the machines are computing are changing with time.
Sure. But that's not what Turing was talking about. Give a UTM a computable function, and the UTM will compute it. Specify an infinite class of functions F(t), and of course a UTM will have trouble.
There's no contradiction here, just a really weird way of defining "function".
It sounds like he's stating that a UTM can't simulate an arbitrary lambda function then, since that returns a variable function (a second-order UTM: a UTM can't simulate a UTM, which makes it not a UTM?).
In order to evaluate a lambda you need to be able to compute an arbitrary function of arbitrary variables; with infinite resources, you should be able to process infinitely many variables? Surely there's been work done on UTMs and the lambda calculus.
It sounds like he's stating that a UTM can't simulate an arbitrary lambda function then
The concept doesn't make sense. Lambda functions don't do anything, they're just notations. The process of beta reduction is what causes things/computation to happen. Beta reduction rewrites all the lambda functions, hence they're no longer the same functions. What you're saying is something like: a computer can't compute "computation".
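For what it's worth, here's a one-step beta reduction in the usual notation (a generic example, nothing specific to Akl):

    (\lambda x.\; x + 1)\ 2 \;\to_\beta\; 2 + 1 = 3

The lambda term on the left is inert notation; the reduction step is where the "computing" happens.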
Example: a function accepts as input the source of another program (assume it's computable and halts). Thus, the original function is variable; can it be simulated by a UTM for any arbitrary valid input? I'm probably using the wrong terminology here, but the overall logic is what I'm getting at.
The input to a function can't change (it's a fixed set). If the input changes then you've defined a new function. That's the mathematical definition of a function. You're talking about a set/class of functions, so you would need 1 UTM per function.
Turing was simply describing what people do: in a math test, I give you a problem, you solve it - without cheating, that is, without interacting with your environment - using an algorithm, and you hand it back to me. He idealized what a human does (infinite time and memory), and then, as a final step, he said: "A machine - also idealized - can do that." The important thing is that in the definition of what was later to be called "Turing computability", interaction during the computation is verboten.
This appears to me to be a little (or in fact a lot) crackpotish.
A universal machine capable of n operations per time unit can simulate a universal machine capable of n+m operations per time unit. It's just a bit slower.
His argument seems to be that this will make it so slow that it won't be able to keep up. Which is just stupid.
The Church-Turing thesis says that a universal machine can simulate any other computing machine given infinite memory. It doesn't say anything about how fast it will be :p
Maybe I'm misunderstanding this guy and he's not saying something so completely and utterly stupid. Anyone who wants to correct me please do...
He seems to be misunderstood. Which is no surprise since he's into unconventional computation - whatever that is[1]. He's well published[2] and has chaired at least one conference on unconventional computing[3]. So that's more than I can say about my career in theoretical computer science. I suspect having unconventional thoughts alone will create some dissonance in related communities. Similar to what Doron Zeilberger gets for his belief in ultrafinitism.
I agree that Akl is talking about a very specific circumstance of UTMs. How many thousands of mathematicians and computer scientists have read that paper in the last 60 years and verified it? If there were ever going to be any true challenge to UTMs it had to be obscure and weird like this. That doesn't mean that I believe he's right, but I don't think he's a crackpot, just unconventional.
You're right, I probably shouldn't be so quick to call someone a crackpot.
But still, unless I'm very much mistaken, all he's done is expand the definition of "able to compute a function" to include "able to run fast enough to gather the input data for the function". That might be a useful definition for some things, but to use that to claim that "Turing was wrong"?
His argument is deeper than that; he's basically saying that there's no such thing as a universal machine if the function being simulated meets any of the 5 criteria he lists, since the function can't be simulated (that is, you can't compute it in any fewer steps than running the whole thing through, and it could require up to infinitely many variables). I don't know if this makes it non-crackpot-ish, though; I don't have the background in formal theory to tell; but it doesn't seem like an "it'll take arbitrarily long" argument.
It doesn't seem terribly crackpotish to me. He's absolutely correct in pointing out that it's very sloppy to define "computability" as "What the Universal Turing Machine Can Do." It's also not unreasonable at all to talk about variables that change with time, and he's absolutely correct that this is a gap in complexity theory. One which should excite new grad students, because eventually it will net someone a lot of academic fame and one hell of a PhD thesis.
Personally, I'm very interested in this and I'm going to keep an eye on it.
It is quite unreasonable to talk about variables that change with time in the context of a Turing machine. If a "Universal Computer" is one that deals with variables which change during the computation due to interaction, then it is not what Turing was talking about. And if he wasn't talking about it, he can't be wrong about it.
It's naming, plus some more: Turing never named his idealized entity a "Turing machine" - that was done by others. He claimed universality for the algorithmic "Universal Computing Machine" he defined, and proved it - for algorithms. Now Akl comes along and says the "Turing machine" is not universal, but he uses interaction, not algorithm, as the basis of his definition of computation. So it seems to me that he's not only renaming "the Universal Computing Machine" to "the Universal Computer" by way of "the Turing Machine", but that he is also moving the goalposts in the process.
Turing machines are constructed in a closed, static universe. This guy says that if you need to deal with a source of changing data, Turing's results don't apply. This is both obvious and important to state, although not with as much verbiage.
Expanding on that, it should also be pointed out that this is well-known as being among the list of assumptions that prevent Turing Machines from working in the real world. We speak of "steps", but they do not have any actual relationship to "time". The tape has no obvious relationship to "space", either.
So, right off in the first sentence "I have recently shown that the concept of a Universal Computer cannot be realized.", I thought "well, it's hardly like that's a challenge..."
There's actually some interesting theory here, I think, but it's only obscured by being wrapped in rhetoric about Church-Turing being wrong and such. All mathematical theorems include a set of assumptions, and it's not news that if you change the truth of the assumptions the theorems may stop holding.
Look at it this way: imagine you have a server that's totally maxed out (CPU-bound), 24 hours a day. If you add just one more connection to its load, it won't be able to handle it. No matter how long you give it, it'll just fall further and further behind. The requested pages are uncomputable! A faster computer, on the other hand, will keep up just fine, and finish all the computations.
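In back-of-the-envelope queueing terms (my notation, not from the article): if requests arrive at rate \lambda and the server finishes them at rate \mu, then

    \mathrm{backlog}(t) \approx (\lambda - \mu)\, t \to \infty \quad \text{as } t \to \infty, \text{ whenever } \lambda > \mu,

so a machine that is even slightly too slow falls behind without bound, while any machine with \mu > \lambda keeps up.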
This isn't how computations have been traditionally defined, so that's why it seems so unintuitive. But, he's right nonetheless.
Turing's computer was an idealized human - to him, computers were humans who did computations for a living - who were granted infinite time and tape - that's the idealization - to solve exactly one problem. Anything that talks about "x problems per hour" has nothing to do with what he said.
Suppose that my "computing device" were a machine designed to build and program more copies of itself. This device would "compute" a function across n variables in time log n by building n-1 copies of itself, and then parallelizing the computation across them. With sufficient resources, this machine could compute the function for any n in finite time.
All the author has done is posit an ever-increasing volume of data, such that the rate at which the volume increases causes it to outpace the speed of the computation. It seems that positing an ever-increasing computational capacity is a reasonable solution to this problem.
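A toy illustration of the tree-reduction idea for an associative function like a sum -- each round's pairwise combines are what the n-1 copies would do in parallel, so only the number of rounds (about log2 n) counts as elapsed time. This assumes the function decomposes that way, which is exactly what the reply below questions:

    import math

    def tree_reduce(values):
        # Pairwise-combine until one value remains; each pass is one "round"
        # that a fleet of machine copies could perform simultaneously.
        rounds = 0
        while len(values) > 1:
            values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                      for i in range(0, len(values), 2)]
            rounds += 1
        return values[0], rounds

    total, rounds = tree_reduce(list(range(1024)))
    print(total, rounds)                          # 523776, 10
    assert rounds == math.ceil(math.log2(1024))   # log2(1024) rounds for 1024 inputs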
Your time complexity of log n appears from nowhere. It is entirely possible that the complexity of calculating the original function F was worse than polynomial in the number of variables, in which case a polynomial/linear increase in the number of machines could not reduce the complexity by more than a polynomial/linear factor, thus leaving the computational complexity worse than polynomial (e.g. exponential/super-exponential). I've made a comment further down that posits an idea similar to yours, though with a bit more precision.
Since when was there a minimum amount of time for a theoretical computer to do any number of computations?
I suppose that any computer must take some finite, non-zero amount of time to perform an "instruction", because if the amount of time per instruction is zero, that would violate Turing's proof of the undecidability of the halting problem over Turing machines. His idea does have some merit, but I still don't think that this means there cannot be a Universal Computer -- just that it may be something that operates on a level above a Turing machine. I'd imagine it would involve some sort of recursive structure.
The way I see it, you could program a Turing Machine MF which has some finite "per-time-step" computation speed _n_, such that MF "knows" the following:
1: The general form/program for solving function F for any arbitrary number, _m_, of variables.
2: How to construct a Turing Machine MF' capable of performing _m_ "operations per-time-step", and programmed with 1.
Even in his crazy-town world of redefined computation, the Turing Machine MF is capable of computing any computable function F, of any number of time-varying variables _m_. I.e. given such a function F, MF simply creates a suitable machine MF' and copies the output of MF' to its own output. Thus, MF computes F, albeit by proxy. QED, etc.
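The same construction in toy Python, with closures standing in for Turing machines -- obviously nothing here is a real TM, it's just the shape of the argument, and the names MF/MF' follow the comment above:

    def build_MF_prime(program, m):
        # Point 2: build a machine that reads all m variables "in one time step"
        # and then runs the general program from point 1 on that snapshot.
        def MF_prime(read_all_m_variables):
            snapshot = read_all_m_variables()   # all m reads happen here at once
            return program(snapshot)
        return MF_prime

    def MF(program, m, read_all_m_variables):
        # MF itself stays fixed and finite: it constructs a suitable MF' for the
        # given m and copies MF''s output to its own output.
        MF_prime = build_MF_prime(program, m)
        return MF_prime(read_all_m_variables)

    # Example: F = sum of m = 5 time-varying variables, frozen at read time.
    print(MF(sum, 5, lambda: [1, 2, 3, 4, 5]))   # 15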
It's possible that I'm now in some TM dual to his time-varying primal, but whatevs, I can do what I want.
ADD: As far as the computability results that I am familiar with are concerned, without loss of generality one could also just assume that all Turing Machines have infinite speed. The necessity of introducing the concept of a "time-step" into the definition of Turing Machines only occurs when computation time is actually of interest. Though, for complexity results one could still assume infinite speed, albeit the number of operations would still need to be counted.
Isn't this more or less the same issue as multiple infinities? Whole numbers are infinite but countable, but rationals are infinite and not countable. UTMs are able to compute the infinite but there are infinities that are not computable.
Rationals are countable but dense. You can biject Z * Z <-> N trivially (start from origin, spiral out) and Q can be constructed as a subset of Z * Z. Reals are uncountable.
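A quick sketch of that enumeration in Python, if it helps -- rings around the origin rather than a literal spiral, but the point is the same: every pair of integers gets a finite index.

    import itertools

    def enumerate_Z_times_Z():
        # Yield every (a, b) in Z x Z exactly once: the origin, then the square
        # "ring" at radius 1, then radius 2, and so on.
        yield (0, 0)
        r = 1
        while True:
            for a in range(-r, r + 1):
                for b in range(-r, r + 1):
                    if max(abs(a), abs(b)) == r:   # only the new outer ring
                        yield (a, b)
            r += 1

    print(list(itertools.islice(enumerate_Z_times_Z(), 10)))
    # (0, 0), the eight pairs at radius 1, and the first pair at radius 2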
How does one go about showing that there is no way to solve such a problem?
With great difficulty. Usually the approach is to suppose that you have an algorithm and show that either (a) this algorithm allows you to do something impossible (e.g., solve a different impossible problem, or reach a contradiction); or (b) there are two different problems with different answers which the specified algorithm cannot distinguish between (this is generally only possible where you're proving that it's impossible to compute something in less than some number of steps).
The first approach is used for things like showing that the halting problem is impossible; the second approach is used for things like showing that it's impossible to have a comparison sort which runs in less than O(N log N) time.
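For the record, the counting argument behind that second example: a comparison sort on n items is a binary decision tree with at least n! leaves, so its worst-case number of comparisons is at least the depth of that tree,

    \log_2(n!) \;\ge\; \log_2\big((n/2)^{n/2}\big) = \tfrac{n}{2}\log_2\tfrac{n}{2} = \Omega(n \log n).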
Yes, interactive computation is proven to be - statistically - more powerful than algorithmic computation - sometimes. It's also "less powerful" some other times, and since it's not computable at which times interaction is "better", it's not a general advantage. But Turing never considered the case of interaction in his definition of computation, so Wegner is talking about something else.
Turing did in fact consider this possibility, and he instinctively knew that a machine that allowed interactions would be more powerful, but he couldn't prove it.
Just to make sure, I searched for "interaction" in "On computable numbers...", and it's not in there. And, like I said, in the general case, interaction is "more powerful" than algorithm as often as it is "less powerful".
Edit - Example: You're on "Who Wants to Be a Millionaire?" You're in "algorithm mode" for most of the questions, but you're granted one "interaction", via a phone call. On the show, you're allowed to call anybody you want, and if you know to call somebody who's really clever, chances are that interaction will be more powerful than computation. But that's not the general case: the general case would be for you to call a random phone number, and chances are that won't help you - so, in that case, not more powerful.
The purpose of a machine with interactions is not to be more intelligent, it is just to be able to execute a larger body of computations than a machine without interactions would. You can't model operating systems on Turing machines, for example, precisely because they need to interact.
This is from the Wikipedia entry on Alan Turing:
"In June 1938 he obtained his Ph.D. from Princeton; his dissertation introduced the notion of relative computing, where Turing machines are augmented with so-called oracles, allowing a study of problems that cannot be solved by a Turing machine."
Right, oracle machines are - or can be interpreted as - interactive. However, whatever it is that oracles do - it's often called "hypercomputation" - they're not doing computation, in the sense that Turing used the term in "On computable numbers...", which is why Turing couldn't prove that they are more powerful. The apparent - to me - problem here is that people use a definition of "computation" which is different from Turing's (which is alright), and then say that Turing was "wrong" in what he said about "computation" (which is not alright).
I actually think that operating systems are a good example of how interaction can be less powerful than algorithm - the "oracle" in this case might be a user, and the user might be new to computers, so that in this case the result of the interaction might be worse than if the "oracle" were another (non-human) computer, the interaction with which might in turn give a worse result than if you were the "oracle".
The object of this study is not to prove that all forms of interaction are better than algorithmic computing, it is to prove that machines that interact during computation can execute a larger set of computations.
You also insist on factoring in intelligence, which is not at all relevant to this discussion. This is about which sets of functions can be executed with which machines.
Oracle machines are capable of executing all functions that a Turing machine can (if you simply shut off interaction) and also all those that involve interaction. Since it has been proved that the latter set of functions is larger than the former, we can conclude that oracle machines are more powerful.
Correction: Alan Turing was right, but if we redefine "universal computer" in sufficiently weird ways, Turing's results no longer apply.
In particular, while Turing was concerned with machines computing functions, Akl is looking at machines interacting with a changing environment.