The TSP optimization problem is polynomial time reducible to the TSP decision problem by binary search.
Suppose we have an algorithm D(TSP,C) which returns 1 if the TSP has a path of cost <= C.
Then the basic idea for an algorithm for the optimization version is:
C = sum(costs of all edges in graph)
while D(TSP, C) == 1:
    C = C / 2
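The loop above can be sketched in Python; `tsp_decide` here is a hypothetical stand-in for D, and as the loop is written it only narrows down a lower bound by halving:

```python
def halve_down(tsp_decide, total_cost):
    """Repeatedly halve C while the decider still reports a tour of cost <= C.

    Note: this stops at a C for which no tour of cost <= C exists, while a
    tour of cost <= 2*C does exist -- so it brackets the optimum within a
    factor of two rather than pinning it down exactly.
    """
    C = total_cost
    while tsp_decide(C):
        C = C / 2
    return C  # the optimum lies in the interval (C, 2*C]
```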
Probably the greatest thing about having a polynomial-time algorithm for an NP-complete problem is that it would allow finding the global optimum of any finite optimization problem in polynomial time.
More precisely, binary search (with a correct termination condition) takes "is there a circuit with cost less than X?" and answers "what is the minimum circuit cost?".
Going from there to "what is a minimum-cost circuit?" is a bit more work, but still polynomial:
G' = G
X = TSP-minimum-circuit-cost(G)
for E in edges(G):
    if exists-circuit-with-cost(G' minus E, X):
        G' = G' minus E
return G'
As I mentioned somewhere else in this thread, the binary search is not a reduction, because you need to use TSP-decide a polynomial number of times (and for a reduction you reduce to one problem instance).
What you show is that the problem of finding the lowest cost is in FP^NP (the class of function problems solvable in polynomial time with the help of an NP oracle). By taking your idea a bit further, one can show that the full optimization problem (finding an actual minimum-cost tour) is also in FP^NP. In fact, it is known that TSP is FP^NP-complete, i.e. it is one of the hardest problems in that class (and thus likely not in FNP, the analogue of NP for function problems).
What you are describing isn't even a proper binary search; it just gives you a lower bound that is guaranteed to be no more than 50% smaller than the optimum.
Let's assume we do an actual binary search. What you get in every step, using your oracle, is an upper bound U (D(TSP, U) == 1), for which we know there is a solution smaller than or equal to U, and a lower bound L (D(TSP, L) == 0), for which we know there isn't.

This is where your confusion between finite and integer-valued comes into play: if the weights were integer-valued, you would have the optimal solution S as soon as U_i = L_i + 1 (implying U_i is S), which is guaranteed after a polynomial number of steps. With real-valued weights, all you get is a sequence of intervals (L_1, U_1], (L_2, U_2], ... whose size is halved in every step. What you actually want is the exact value of the solution. For that, you would need to show that there is no Hamiltonian path with cost strictly between L_i and U_i, which isn't something your oracle can determine.
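For integer weights, the L/U bracketing does terminate with the exact optimum. A minimal sketch, assuming a decision oracle `d(C)` answering "is there a tour of cost <= C?":

```python
def min_tour_cost(d, max_cost):
    """Binary search for the optimal integer tour cost via decision oracle d.

    Invariant: no tour of cost <= lo exists, but a tour of cost <= hi does
    (so this assumes d(max_cost) is true).
    """
    lo, hi = -1, max_cost
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if d(mid):
            hi = mid   # a tour of cost <= mid exists
        else:
            lo = mid   # no tour of cost <= mid
    return hi          # the smallest C with d(C) true: the optimum
```

With integer costs the loop runs O(log(max_cost)) times, i.e. polynomially many in the number of bits of the weights.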
Note that this probably doesn't show that a polynomial oracle for the TSP decision problem will not yield a polynomial solution to the optimization problem in general, it just deals with your proposition.
Edit: Not -> Note
The set of all possible costs is the set of sums of all N-sized subsets of the edges.
You only need to binary search that set, not the whole real axis. The set's size is exponential in N, but binary search is logarithmic, so it becomes polynomial in N.
But how do you do your binary search on a set without enumerating/sorting it? For example, if at one step you know that 28.3 is too small and 42 is too big, how do you pick the number between them that will split your set in two?
B) Have a total order between N-sized subsets by treating each edge as a binary digit (1 if it is in, or 0 if it is not). I believe this guarantees that the total order between the binary numbers that correspond to edge selections is the same total order as the one between the sums of those selections.
C) Start with some first guess (e.g: 111...111000000...)
D) Binary search on the number, by adjusting each next guess to be the nearest number with N digits enabled.
I am not entirely sure, I just made this up, but I think it might work?
Self-balancing B-trees will keep your set split roughly in half. So basically, you just insert elements and let the tree worry about remaining balanced.
This isn't a question of data structures: What bmichel is saying is that enumerating a nonpolynomial number of items isn't something you can do as a step in a polynomial reduction.
A priori, it would be a computer science revolution if this had been missed by everyone until now. So forgive me for being particularly cautious about your proposal. What I think is missing in your analysis is that we lack a polynomial method to build a Hamiltonian path with cost <= C. If we had one, then we could apply your algorithm to optimize in polynomial time.
The above construction is not a breakthrough; it is how I was taught in my approximation algorithms class. Also, the way the optimization problem is usually defined is not "give a circuit with minimal cost" but "what is the cost of the minimal-cost circuit?", which is easier to reason about in an approximate way.
Same here. Intuitively, a decider that can answer "Is there an answer less than X?" will give you one bit of the lowest cost each time you query it. Do it repeatedly until you have all of the bits, and you're done.
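One way to read "one bit per query" literally, assuming integer costs below 2**k and a decider `d(X)` for "is there an answer <= X?" (a sketch of the idea, not necessarily the exact construction meant above):

```python
def extract_cost_bits(d, k):
    """Recover the minimum cost bit by bit, most significant bit first.

    Invariant at the top of iteration i: prefix <= optimum < prefix + 2**(i+1).
    One oracle query per bit, so k queries total.
    """
    prefix = 0
    for i in range(k - 1, -1, -1):
        if not d(prefix + (1 << i) - 1):  # is the optimum > prefix + 2**i - 1?
            prefix += 1 << i              # then bit i of the optimum is 1
    return prefix
```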
There is no guarantee that the number of bits in 'all the bits' is finite. Because of that, there is no guarantee that such an algorithm finishes. The simplest counterexample is N=2, with a distance of 1/3 between the nodes, and shortest (and only) tour with length 2/3.
You don't need to represent the actual length of the solution in a finite number of bits, you just need to represent the solution itself. The solution can be represented as the set of edges contained within it. Each edge is either present or not, so a solution can be represented as a bit set the size of the number of edges in the graph, or at most n^2 bits where n is the number of vertices.
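As a toy illustration of that encoding (the indexing scheme here is just one arbitrary choice):

```python
def encode_tour(n, tour):
    """Encode a tour of an n-vertex graph as a bit set over its edges.

    Each undirected edge (u, v) with u < v gets bit index u*n + v,
    so the whole solution fits in at most n*n bits.
    """
    bits = 0
    for i in range(len(tour)):
        u, v = sorted((tour[i], tour[(i + 1) % len(tour)]))
        bits |= 1 << (u * n + v)
    return bits
```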
For a practical idea of how you'd do this, I think you'd just modify your binary search based on the possible circuit values. Basically, figure out what number excludes half of the remaining solutions, rather than just half of the range, and use that in your binary search.
Yes, any algorithm that halves the set of potential solutions in each iteration will work, but I do not see how one can get such an iteration step from a decider that can answer "is there an answer less than X?". I suspect (but cannot prove, or even offer reasonably solid arguments) that the "figure out what number excludes half of the remaining solutions" step will be at least as hard as the problem itself.
Now, if I had such an oracle, I would let it loose on all graphs equal to the target graph with one edge removed. I think (but cannot prove (yet?)) that that is a way to prove, for each edge, whether it is by necessity part of the shortest tour. I also suspect that, for most graphs, the set of remaining "might be in the shortest tour" edges will be so small that an exhaustive search will be easy.
That's exactly what floating point numbers are: representations of real number intervals as finite-length bit strings. The number of bits simply determines the granularity of the representation (i.e. the width of the intervals).
Floating point numbers are approximations of real numbers, which isn't sufficient in this context. Solving a lossy representation of the problem isn't a reduction in the complexity theoretic sense.
In the case of this problem the set of possible paths is finite, so it can, in principle, be sorted in order of increasing cost. The smallest cost delta between any two paths in the sorted list is the lowest-cost edge in the graph (this probably assumes that all edge costs are positive), so you only need to continue the binary search until intervals of this size are reached. At that point the exact solution has been found.
EDIT: There might be multiple paths with the same minimal cost but binary search will still find this value.
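The finiteness argument is easy to check directly on a toy instance; a brute-force enumeration of all circuit costs (illustrative only, factorial time):

```python
from itertools import permutations

def all_tour_costs(dist):
    """Enumerate the distinct costs of all Hamiltonian circuits.

    dist is a symmetric distance matrix of a complete graph; returns the
    sorted (finite) set of circuit costs, which is what the binary search
    above implicitly ranges over.
    """
    n = len(dist)
    costs = set()
    for perm in permutations(range(1, n)):  # fix vertex 0 to cut symmetry
        tour = (0,) + perm
        costs.add(sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n)))
    return sorted(costs)
```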
That depends on how you define your representation. It is useful for some problems to represent an interval with its first bits, e.g. to take 0.101 as representing all numbers in the [0.101, 0.110) interval.
I choose the nonempty interval [0, 1]. I have just represented it here in 48 bits (many of which are unnecessary, but bits are cheap). I don't think your statement is correct.
I didn't immediately see what guaranteed that the binary search would run in polynomial time. In case anyone else doesn't see it:
There are O(V!) possible paths. Assuming binary search actually halves the search space each iteration, it will run in O(log(V!)) = O(log V + log(V-1) + ... + log 1) = O(V log V) steps, which is clearly polynomial.
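This bound is easy to sanity-check numerically, since log(n!) grows like n log n (Stirling):

```python
import math

def binary_search_steps(n):
    """Halving steps needed to binary search a space of n! candidate tours."""
    return math.ceil(math.log2(math.factorial(n)))

# log2(n!) = log2(n) + log2(n-1) + ... + log2(1) <= n * log2(n),
# so the step count is polynomial in n even though n! is not.
```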
You assume that in order to verify that a circuit exists, the algorithm for TSP-DECIDE must provide one. However, in a number of cases in mathematics -- notably the proof of Lagrange's four-square theorem -- this is not the case. It is possible that an algorithm could verify that a circuit exists without being able to find it.
It's true that my formulation isn't completely specified, but the basic idea still works, because if you have a decision function for an NP problem you can find a solution (assuming at least one exists) with one call to D per bit of the solution.
So to actually solve the optimization version you would need two nested iterations/recursions around the call to D. The inner iteration finds a solution with the desired property, i.e. cost <= C, and the outer one does the binary search over C. Overall it still reduces to a number of calls to D that is polynomial in the number of bits used to represent the problem, including those used to represent the weights.
The problem with the original blog post is that while the author correctly states that the TSP optimization problem is NP-hard rather than NP-complete (because it doesn't have the form of an NP problem), he doesn't mention that it is also NP-easy and therefore NP-equivalent (see the last line in this Wikipedia article):
This is a major point of confusion for people who are new to the concepts, and I'm glad someone has taken the time to explain it in depth. And it's not just TSP that it happens with. Almost every popular example of an NP-complete problem I've seen is actually an NP-hard optimization problem corresponding to an NP-complete decision problem.
The bad part is that if you don't know better, you'll think "Wait, how the hell would you verify a correct answer to this without searching for an even shorter path?" And you'll be right, but you'll feel like you're missing something. Giving examples of CS that make people feel like they fundamentally don't understand CS is probably a Bad Thing.
I think the author actually adds to the confusion about TSP by being unnecessarily vague.
Both P and NP are classes of decision problems. That means they only contain problems that ask for a yes/no answer. Because TSP asks for a route, it is not a decision problem, so it can't be in NP. However, as the author mentions, NP does in fact contain the TSP-DECIDE problem as he defines it.
Moreover, the problem of finding a lowest-cost route can be solved by a polynomial-time Turing machine that can use a TSP-DECIDE oracle (i.e. call a constant-time function that solves TSP-DECIDE). It is in fact complete for that class, FP^NP (function problems solvable in polynomial time with an oracle for NP problems), which means it is one of the hardest problems there. The class FP^NP contains all of FNP (the analogue of NP for function problems) and is believed to contain more problems, so there is a good chance that TSP is not in FNP.
Perhaps I simply know different people, but I'm accustomed to Travelling Salesman, without any explanation of which version is intended, being given as the canonical NP-hard example (implicitly, the optimization problem).
I normally see SAT given as the NP-complete example.
Yes, I've always seen SAT as the canonical example given. Though, when I did some reading into phase transitions in NP-complete problems, 3-SAT was usually the example given (and used).
Well, this is true, but it's also hyperbole. It's not really that everybody misunderstands the problem so much as that they simply don't care. People are sloppy with the terms "NP-hard" and "NP-complete" and use them interchangeably because it's usually not important. Usually they mean "NP-hard", and that is the significant thing.
Yes, it's wrong to use "NP-complete", but plenty of people who thoroughly understand the problem do so. Welcome to Computer Science, sloppy is par for the course.
Ironically enough, I think this guy misunderstands TSP. I wrote this comment on his blog:
Let's define a new problem, TSP-cost-optimize, which returns the cost of the best Hamiltonian circuit. It's NP-Complete, because it reduces to TSP-decide: just do a binary search on the space of possible costs; even if that space is exponential, we're fine, since binary search runs in the log of the size of the space.
TSP-cost-optimize's certificate can be the steps of the binary search; that will be polynomial in size, since it's a polynomial number of TSP-decide certificates, and you can verify it in polynomial time.
Given TSP-cost-optimize, I can in fact provide a polynomial-time-verifiable certificate for the result of TSP-optimize: The result of TSP-cost-optimize on the same graph! So under your definition of NP, TSP-optimize is in fact in NP. (This also makes sense if you use the non-deterministic Turing machine definition of NP.)
What's not clear to me is whether given the result of TSP-cost-optimize, you can actually find a shortest path in polynomial time. But I expect you probably can.
I guess you could find the shortest path by iteratively deleting an edge and checking whether the best cost has got worse. If it does get worse, then put that edge back in and try deleting another edge. If it does not get worse, then permanently delete that edge.
I think this should terminate (after trying all edges once each) with an example of the shortest path (that consists of all remaining edges).
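That edge-deletion procedure can be tested end-to-end on a small instance, with a brute-force solver standing in for the hypothetical polynomial-time cost oracle:

```python
from itertools import permutations

def best_cost(n, edges):
    """Brute-force minimum Hamiltonian-circuit cost using only the given edges.

    edges maps frozenset({u, v}) -> weight; returns None if no circuit exists.
    Stand-in for the hypothetical polynomial cost oracle (factorial time!).
    """
    best = None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        cost = 0
        for i in range(n):
            e = frozenset({tour[i], tour[(i + 1) % n]})
            if e not in edges:
                break
            cost += edges[e]
        else:  # every edge of the circuit was present
            best = cost if best is None else min(best, cost)
    return best

def extract_optimal_tour_edges(n, edges):
    """Try deleting each edge once; keep a deletion only if the optimum survives."""
    target = best_cost(n, edges)
    kept = dict(edges)
    for e in list(kept):
        trial = {k: v for k, v in kept.items() if k != e}
        if best_cost(n, trial) == target:
            kept = trial  # edge not needed for some optimal circuit
    return kept           # exactly the edges of one optimal circuit remain
```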
The binary search is not a reduction, because you need to use TSP-decide a polynomial number of times (and for a reduction you reduce to one problem instance). What you show is that your problem TSP-cost-optimize is in FP^NP (the class of function problems solvable in polynomial time with the help of an NP oracle).
Since optimization and decision problems are equivalent, what someone means by saying that an optimization problem is NP-complete is that its equivalent decision problem is NP-complete.
TL;DR: For now, TSP-OPTIMIZE is NP-hard but not (necessarily) in NP, so it's not NP-complete. TSP-DECIDE is both NP-hard and in NP, and is therefore NP-complete.
I'm so thankful. After 4.5 years of CS, the only class that taught me anything was my advanced algorithms class, and I learned enough to retain an immense amount of the information a year later. This is fun stuff, and I hated it two years ago.