That article appears to be partly wrong and partly misleading. The game is described in a way that makes no sense and so the significance of the new proof is not made clear. I think this is more accurate:
In topswops we start with a shuffled deck of N cards numbered 1..N. In each move we reverse the order of the top K cards, where K is the number on the card currently on top. (After that reversal, the original top card, K, is now the Kth card down and the formerly Kth card down is now the top card. Any cards in between have reversed order.)
The game terminates when the card number 1 comes to the top. (For reversing the order of a single card has no effect.)
Here is what the article leaves out: If, when 1 comes to the top, the deck is sorted 1..N, you win. Otherwise, you lose. This is a very tedious form of solitaire.
Note that if your shuffle produces an already sorted deck, you win trivially with 0 moves: the deck is already sorted and 1 is on top! If the shuffle produces a reverse sorted deck, N..1, you win trivially after 1 move!
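The rules above fit in a few lines of code. Here's a minimal sketch in Python; the function name and the win/lose check follow this comment's solitaire framing, not anything in the article:

```python
def play_topswops(deck):
    """Play until card 1 reaches the top; return (moves, won).

    Each move reverses the top K cards, where K is the current top card.
    Under the solitaire rules described above, you win only if the deck
    ends up fully sorted 1..N when 1 reaches the top.
    """
    deck = list(deck)
    moves = 0
    while deck[0] != 1:
        k = deck[0]
        deck[:k] = deck[:k][::-1]  # reverse the top k cards
        moves += 1
    won = deck == sorted(deck)
    return moves, won
```

For example, a sorted deck wins in 0 moves and a reversed deck wins in 1, matching the trivial cases above.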
If someone hands you a different shuffle, you might be playing for a very long time, uncertain whether you will win or lose.
For every deck-size N, there is some number of moves - call it ENOUGH(N) - such that if you haven't won in ENOUGH(N) moves, then you will not win with that deck. For example, if the deck has 12 cards, and you've been playing for 64 moves ... why you might as well quit. We know for sure that any shuffle that wins ends after at most 63 moves for a deck of size 12. ENOUGH(12) is 63. [supposedly, I haven't double checked the brute-force proof]
What is the formula for ENOUGH(N)? or at least an approximate formula?
For values of N up to (the article says) size 18, ENOUGH(N) has been computed by brute force.
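For small N the brute-force computation is straightforward, if slow: try every shuffle and take the longest game. A sketch (the function name `enough` is mine; it computes the longest possible game of any shuffle, winning or not, which is a safe stopping point in the sense above since every game terminates by then):

```python
from itertools import permutations

def enough(n):
    """Longest topswops game over all shuffles of n cards (brute force).

    Feasible only for small n: there are n! shuffles to try.
    """
    best = 0
    for shuffle in permutations(range(1, n + 1)):
        deck = list(shuffle)
        moves = 0
        while deck[0] != 1:
            k = deck[0]
            deck[:k] = deck[:k][::-1]  # reverse the top k cards
            moves += 1
        best = max(best, moves)
    return best
```

The n! factor is why the article says this has only been carried out up to around size 18, and only with far more cleverness than this sketch.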
Knuth gave a formula KNUTHTOPS(N) and proved that ENOUGH(N) <= KNUTHTOPS(N). His proof was interesting because, asymptotically, KNUTHTOPS(N) is the lowest upper bound known. You can confidently stop a losing game after KNUTHTOPS(N) moves -- but it is possible you could have stopped earlier if you had a better formula.
These new guys gave a formula, HALnLINDAWOPS(N), and proved that HALnLINDAWOPS(N) <= ENOUGH(N). HALnLINDAWOPS(N) is interesting for being the highest lower bound known. Don't even think about stopping until you've made at least HALnLINDAWOPS(N) moves or you are guaranteed, in some cases, to quit part way through a winning shuffle.
The number at which you can quit safely, no fear of stopping a winning game, is between HALnLINDAWOPS(N) and KNUTHTOPS(N) (inclusive). The main advance is squeezing that range from below with a new proof.
Solving that puzzle is far beyond my intelligence, but I did have Dr. Sudborough as a professor. He was one of the best professors I had at UTD, really taught Automata Theory very well.
Hah, same here. I graduated Summer of 2007, so I probably had him in the Spring 2007 class. As a tenured professor, he could've easily made me see a TA when I had questions. Instead, he let me come to his office and worked with me for a while on problems that were tough for me but rote for him. I really appreciated it.
There seems to be a lot of disagreement about lower bounds here. The best case lower bound is trivially 0 since you could start with a terminating permutation. The article is talking about a worst case lower bound, which is what one would use to assign the problem to a complexity class.
It's nice to see some noteworthy CS/Mathematics accomplishments coming from outside Stanford and MIT. But perhaps I'm biased, being from the Dallas area.
I must be misunderstanding the problem. Why is the lower bound on the number of operations not 1, since you have a 1 in n chance of turning over the "1" card as your first operation?
The lower bound is indeed 1; however, higher lower bounds can be found.
Here "lower bound" means that there exists a permutation that takes at least this many operations. The goal is to find the maximum number of operations needed for the worst permutation. That question is settled when the lower bound equals the upper bound.
Proving lower bounds is easy: you just need to give an example. Proving upper bounds is hard; somehow they have a way besides running all possible cases. (The article doesn't explain how, and I haven't delved deeper.)
I believe it is a lower (and upper) bound on the worst case. The case of "1" being on top is a specific case, and obviously not the worst possible.
The article doesn't make sense. It implies that the lower bound proved is exponential but much better than the conjectured lower bound (also exponential):
"Knuth had previously proved an exponential upper bound on the number of Topswops steps, and conjectured that one might also prove a matching lower bound. What Dr. Hal Sudborough and Dr. Linda Morales did, however, was to prove a lower bound that is much better than that proposed in Knuth’s conjecture..."
However, the paper cited says “A Quadratic Lower Bound for Topswops”, so the lower bound proved is much worse than the one conjectured by Knuth.
Well, a quadratic lower bound is better than an exponential lower bound, in the sense that the problem could still turn out to not be in EXPTIME but in PTIME.
It is worse in the sense that it tells us less about the actual complexity class the problem is in, so I guess it depends on what you interpret "better" to mean in this context.
Yeah I screwed that up, what I meant to refer to is this:
>It is not contradictory however, to say that the worst-case
>running time of insertion sort is [Big Omega](n^2),
>since there exists an input that causes the algorithm
>to take [Big Omega](n^2) time.
>Introduction to Algorithms, 2nd Edition, p 46
Yes, higher is better because it gives you a tighter bound when searching for Big-Theta: i.e., if your algorithm is in O(2^n) and the previous lower bound was Omega(n^2), and someone proves a lower bound of Omega(2^n), then you know the problem is in Theta(2^n) and you can stop looking for a faster algorithm.
It's worse in the sense that the problem takes more time to solve than if someone had found an algorithm in O(n^2).
Note that that is the ACM link to the citation; the paper itself is published in an Elsevier journal, and is not available to ACM Digital Library subscribers (like myself.)
You can see the article there if you have a subscription. If you don't have one, you can buy a copy for $40. Otherwise, you can just go to the library.
It's also not too difficult to find a 2005 review copy online. Preprints for the paper apparently date back to 1995.