1. Hadamard transform into a uniform superposition of all variable assignments (a very standard step).
2. Add an OFF ancilla qubit and rotate it by 1/(2m)'th of a turn for each satisfied clause, where m is the number of clauses, using triply-controlled m'th-root-of-NOT gates.
3. Measure that ancilla qubit.
4. If the ancilla stayed OFF, restart the whole algorithm.
5. Repeat steps 2 through 4 a total of R times without triggering a restart (the appropriate value of R is worked out later).
6. Measure all the variable qubits.
7. Return SATISFIABLE iff all clauses are satisfied by the measured variable values.
For starters, I'll say up front that there's no way this works. It's too simple; other researchers would have tried it. Repeatedly rotating in proportion to the number of satisfied clauses, then measuring? Too obvious. I would not be surprised at all if a majority of researchers have come up with this idea, tried it, and understood why it didn't work. Grover's algorithm is more complicated than this.
Anyways, why does it actually fail? It becomes clear if we simplify the algorithm a bit more by deferring the conditional-restart measurements.
Instead of conditionally continuing the 2-4 loop R times, simply run step 2 a total of R times up front. Then measure the R ancilla qubits you introduced, and restart if any of them is OFF. This is equivalent to conditionally restarting because a) there's no information dependency between the loop iterations beyond "don't bother continuing", and b) measurements can always be deferred until later (the principle of deferred measurement).
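Once the measurements are deferred, the whole process is easy to simulate classically: post-selecting on all R ancillas being ON reweights each assignment x by p(x)^R, where p(x) = sin²(π·s(x)/(2m)) and s(x) is the number of clauses x satisfies. Here's a minimal sketch (the toy instance and helper names are mine, not from the paper):

```python
from itertools import product
from math import sin, pi

# Toy 3-SAT instance over n = 4 variables (made up for illustration).
# A clause is a triple of literals: k > 0 means x_k, k < 0 means NOT x_k.
clauses = [(1, 2, 3), (-1, 2, 4), (1, -3, -4), (-2, 3, 4)]
n, m = 4, len(clauses)
R = 50  # number of rotate-and-measure rounds

def satisfied_count(x):
    """Number of clauses the 0/1 assignment tuple x satisfies."""
    return sum(
        any(x[abs(k) - 1] == (1 if k > 0 else 0) for k in clause)
        for clause in clauses
    )

def p_on(s):
    # Rotating the ancilla by s/(2m) of a turn leaves it ON with
    # probability sin^2(pi*s/(2m)); s = m gives exactly 1.
    return sin(pi * s / (2 * m)) ** 2

total_survival = 0.0  # P(all R ancillas measure ON, i.e. no restart)
sat_weight = 0.0      # portion of that contributed by satisfying x
for x in product((0, 1), repeat=n):
    w = p_on(satisfied_count(x)) ** R / 2 ** n
    total_survival += w
    if satisfied_count(x) == m:
        sat_weight += w

print("P(no restart):", total_survival)
print("P(x satisfies | no restart):", sat_weight / total_survival)
```

In this easy instance half the assignments satisfy, so surviving is likely; with exponentially few satisfiers the satisfying term is ~2^-n and the survival probability is dominated by p_nonsat^R, which is exactly the restart overhead analyzed below.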
Now make the simplifying assumption that every variable assignment, except for a single satisfying one, satisfies 7/8'ths of the clauses. Note that applying 7/8'ths of a NOT takes an OFF qubit to a state with a roughly 96% chance of being measured ON.
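The 96% figure is just sin²((7/8)·π/2): a fraction f of a NOT is a rotation by f·π, which leaves an OFF qubit ON with probability sin²(f·π/2). A quick check:

```python
from math import sin, pi

# A fraction f of a NOT rotates the qubit by f*pi, so an OFF qubit
# is measured ON with probability sin^2(f*pi/2).
f = 7 / 8
print(sin(f * pi / 2) ** 2)  # roughly 0.962
```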
Since most possible assignments are not satisfying, and each non-satisfying assignment has a roughly 4% chance of causing a restart (the satisfying assignment has a 0% chance, but it gets diluted by the exponentially many non-satisfiers), the measured ancilla values are dominated by the non-satisfying assignments. Each ancilla qubit has a roughly 4% chance of coming out OFF.
That means the chance of never having to restart is roughly 0.96^R. But the paper says that R grows faster than linearly with the number of qubits N, so the odds of not restarting are upper bounded by 0.96^N. That places an exponential multiplier on the running time, because success becomes exponentially unlikely as N gets large.
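To see how fast that bound bites, here is 0.96^N for a few sizes (taking R = N as the most charitable case, since R grows faster than linearly):

```python
# Chance of completing all rounds without a restart, if each round
# survives independently with probability ~0.96 and R >= N.
for n_qubits in (10, 100, 1000):
    print(n_qubits, 0.96 ** n_qubits)
```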
Therefore this is not a polynomial-time algorithm.
I am the co-author of the textbook mentioned as "Lipton" in the comment by user dmmackay above.
When a solution exists, you want the check to be extremely strict: 0% let through, not 99.999999%^(n^6).
When there's no solution, you want relaxed checks that allow non-satisfying assignments through after not too many retries (so the algorithm can terminate).