I feel like Buridan's Principle is either poorly worded, or most likely I just completely fail to understand it. The principle says:
"A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time."
But certainly I can make a discrete decision in a constant amount of time: just always choose to go left. I must therefore be missing something here.
The next possibility is that a discrete decision procedure that guarantees success within a bounded amount of time is not possible. This is more sensible, and I think the idea is that there will be some kind of infinite regress. For example, a donkey will starve to death if it doesn't eat within 100 seconds. The donkey can choose to walk X meters to the left, or Y meters to the right, to get to some food.
There are certain values of X and Y where the donkey dies no matter what: if both X and Y are so far away that the donkey can never reach either in 100 seconds, then the donkey dies.
There are also certain values of X and Y where the donkey can pick either one and survive, since it can get to X or Y in well under 100 seconds, so the donkey lives.
The above two scenarios are boundary conditions of sorts, where it's fairly trivial to come to a decision. The principle is about what happens when there is a value of X that takes almost exactly 100 seconds to reach (and assume Y takes well over 100 seconds). For the donkey to come to that realization, it needs to spend some time thinking about it, and by the time it has made a decision it won't have enough time left to carry it out. So the donkey has to take into account not only the time to get to X, but also the time it takes to decide to go to X; but that decision also takes some finite amount of time, so the donkey has to take time to decide how long the decision itself will take, and so on. You end up with an unbounded amount of time.
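To make that boundary case concrete, here's a minimal sketch (my own illustration, nothing from Lamport's paper or this thread): a donkey that can only narrow down its estimate of the travel time to X by halving its uncertainty each step, and that only commits once the estimate falls clearly on one side of the deadline. The function name and the halving model are made up for illustration.

```python
def steps_to_decide(travel_time, deadline=100.0, initial_error=100.0, max_steps=200):
    """Halve the measurement error until the estimate no longer straddles the
    deadline, i.e. until the error is smaller than |travel_time - deadline|."""
    error = initial_error
    for step in range(1, max_steps + 1):
        error /= 2
        if abs(travel_time - deadline) > error:
            return step          # safe to commit to "go to X" or "give up on X"
    return None                  # still undecided after max_steps halvings

# The closer the travel time is to the 100 s deadline, the more halvings
# (and hence the more deliberation time) the donkey needs before committing.
for t in (90.0, 99.0, 99.9, 99.999, 100.0):
    print(t, steps_to_decide(t))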
I could be way off here but I feel there is some subtlety in the way this principle is described that I'm missing.
> A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time.
“Always go left” satisfies the first (a discrete decision) but not the second ([decision] is based upon an input having a continuous range). If the decision is preordained, it may be interesting but not relevant to the problem at hand.
The missing piece of "just choose to always go left" is that this is a degenerate and uninteresting case. No decision is being made.
The decision's range must be discrete, with at least 2 possible values.
There is nothing about optimality at all here. Even if both possible outputs are equally "optimal", there is no procedure guaranteed to pick one within a bounded amount of time.
I doubt the issue has anything to do with being interesting or not. Do you know this for a fact, or can you give me anything to follow up on about what it means for a decision to be "interesting"? For example, if a decision procedure required choosing chocolate or vanilla depending on the current time of day (a continuous input resulting in a discrete output), and I presented an algorithm that categorically chooses chocolate regardless of what time it is, I doubt you'd turn around and claim that my algorithm is not making a decision just because it always chooses chocolate.
If an algorithm takes an input, continuous or discrete, discards that input and returns a constant value, that algorithm is just as "legitimate" as any other algorithm in so far as being an algorithm is concerned. It may not produce the optimal output for some given cost, but it is a formal and rigorous decision procedure that manages to output a discrete value given a continuous input in a bounded amount of time.
At any rate, reading the comments from when this was last posted, it looks like the issue has to do with producing the optimal output within a given time frame. If an optimal discrete decision must be made in the next T seconds on the basis of a continuous input X, then there always exists some value of X for which the decision requires more than T seconds to compute. This will be true for any value of T, no matter how large.
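Here's a quick sketch of that claim under a made-up halving model (again, just my own illustration): cap the deliberation at any fixed number of halvings of the measurement error, and you can always construct an input the capped procedure cannot resolve.

```python
def unresolved_input(budget, deadline=100.0, initial_error=100.0):
    """Return a travel time that a decider limited to `budget` halvings of an
    initial measurement error cannot tell apart from the deadline."""
    # After `budget` halvings the error is initial_error / 2**budget; pick an
    # input even closer to the deadline than that remaining error.
    return deadline - initial_error / 2 ** (budget + 1)

for budget in (10, 20, 50):
    print(budget, unresolved_input(budget))
```

So a procedure with a fixed time budget either gives up or guesses on such inputs; it can't be both bounded and always right.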
Yes, "always go left (except if you start at 1)" is a solution. It is dismissed by Lamport, though, because the function is not continuous at 1. Somehow he believes that not being continuous violates some principles of physics.
That continuity assumption is well justified in the paper, even for quantum physics, where probability densities are indeed continuous.
Looking at it macroscopically, highly complex decision-making such as what the brain produces has continuity. In chaos theory, a double pendulum may, after a minute, end up on either the left or the right depending on tiny changes in its initial state; but changing the initial condition continuously will change the state at the minute mark continuously, and with it the decision.
Looking at it microscopically, even CPUs are continuous, as there is a slim chance of a transistor producing only half its current because of the quantum behavior of electrons.
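As a crude numerical stand-in for the double pendulum (a toy model of my own, not anything from the paper): take a system with two attracting outcomes, 0 and 1, run it for a fixed time T, and look at the final state as a function of the initial state. Because the map is continuous, the output sweeps through every value in between near the boundary, rather than being a clean two-valued decision for every start.

```python
def state_at(x0, T=5.0, dt=1e-3):
    """Evolve the toy flow dx/dt = x - 0.5 (clamped to [0, 1]) from x0 for time T."""
    x = x0
    for _ in range(int(T / dt)):
        x = min(max(x + (x - 0.5) * dt, 0.0), 1.0)
    return x

# Final state vs. initial state on a fine grid around the unstable point 0.5:
# the dependence is continuous, so intermediate "undecided" values appear.
for i in range(11):
    x0 = 0.497 + 0.0006 * i          # initial states from 0.497 to 0.503
    print(f"x0 = {x0:.4f} -> x(T) = {state_at(x0):.4f}")
```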
It is conceptually weird that there must be a chance of permanently indecisive animals. To me, the true resolution of the paradox is superdeterminism: the choice taken had to happen.
So you are saying an algorithm which says "always go left unless you start at 1" is physically impossible (apart from using superdeterminism), because the A function would be non-continuous?
> So you are saying an algorithm which says always go left unless you start at 1, is physically impossible
The algorithm itself is possible. If implemented in a continuous machine, though, it is not guaranteed to reach either point 0 or point 1 in any given timeframe.
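Here's a tiny toy illustration of that (my own model, not the paper's): the state drifts away from an unstable equilibrium at 0.5 toward the stable outcomes 0 and 1, and the time needed to commit to either outcome grows without bound as the starting state approaches 0.5. So no fixed timeframe T is long enough for every possible start.

```python
def settle_time(x0, dt=1e-3, limit=100.0):
    """Integrate dx/dt = x - 0.5 until x is within 0.01 of outcome 0 or 1;
    return the elapsed time, or None if it has not settled within `limit`."""
    x, t = x0, 0.0
    while t < limit:
        if x <= 0.01 or x >= 0.99:
            return t
        x += (x - 0.5) * dt
        t += dt
    return None

# Starts closer and closer to the unstable point take longer and longer to settle.
for offset in (1e-1, 1e-3, 1e-6, 1e-9, 1e-12):
    print(f"start 0.5 + {offset:g}: settled after {settle_time(0.5 + offset)} time units")
```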
In practice, it usually does not matter. One application I can think of, though, is the surprisingly powerful class of fault-injection vulnerabilities that can break the physical implementation of a cryptosystem: http://euler.ecs.umass.edu/ece597/pdf/Fault-Injection-Attack...
"A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time."
But certainly I can make a discrete decision in a constant amount of time, just choose to always go left. I must therefore be missing something here.
The next possibility is that a discrete decision procedure that guarantees success is not possible in an unbounded amount of time. This is more sensible and I think the idea is that there will be some kind of infinite regress. For example a donkey will starve to death if it doesn't eat in exactly 100 seconds. The donkey can choose to walk left X meters, or walk right Y meters to some food.
There are certain values of X and Y where the donkey dies no matter what, if X and Y are sufficiently far away that the donkey can never get to them in 100 seconds then the donkey dies.
There are also certain values of X and Y where the donkey can pick either of them and survive since the donkey can get to X or Y in well less than 100 seconds, so the donkey lives.
The above two scenarios are boundary conditions of sorts, where it's fairly trivial to come to a decision, the principle is about what happens when there is a value of X where it takes almost exactly 100 seconds to get to X, (and assume way more than 100 seconds to get to Y), but in order for the donkey to come to that realization the donkey needs to spend some time thinking about it and by the time the donkey has made a decision the donkey won't have enough time to carry it out. So the donkey has to take into account not only the amount of time to get to X, but also the amount of time it takes to come to a decision to get to X, but that too takes some finite amount of time... so the donkey has to take time to come to a decision about how long it takes to come to a decision to get to X, so on so forth... and you end up with an unbounded amount of time.
I could be way off here but I feel there is some subtlety in the way this principle is described that I'm missing.