Great post. Here are a few pointers if you want more information about your points:
B) The situation is actually worse than exhausted. We know (have proven) that the regular methods simply won't work (look up "the relativization barrier", "natural proofs", and "algebrization").
C) We actually can construct problems which take at least a certain amount of time to solve (though admittedly many of them are kind of unnatural). Look up the "time hierarchy theorem". This is probably one of the theorems you mention. However, you're right: as far as natural problems go, it appears to be extremely difficult to prove that a natural problem requires more than linear time (https://mathoverflow.net/questions/4953/super-linear-time-co...). Note the log*... factor grows extremely slowly.
In fact, it is not known whether SAT requires more than linear time. Since SAT is NP-complete, we expect it to have no polynomial time algorithm... and yet here we are wondering whether it requires even more than linear time. If you are interested in this sort of thing, maybe look up fine-grained complexity. Also note that to prove P != NP, we would at least have to show P != PSPACE, since PSPACE contains NP. However, even this seemingly easier problem has seen no progress and seems like it would require a huge breakthrough.
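For concreteness, the trivial exponential-time upper bound for SAT is just exhaustive search over all 2^n assignments; the open question is how far below that one can go. A minimal Python sketch (the DIMACS-style encoding and the example formula are my own illustration, not from the thread):

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Check satisfiability of a CNF formula by trying all 2^n assignments.

    clauses: list of clauses; each clause is a list of nonzero ints,
    where literal k means "variable k is true" and -k means "variable k
    is false" (DIMACS-style). Returns a satisfying assignment or None.
    """
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        # A CNF formula is satisfied iff every clause has a true literal.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3))
```

This runs in O(2^n * m) time for m clauses; proving that no algorithm can do substantially better than linear time is exactly the kind of lower bound we don't have.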
From this it seems the question itself might have been hopelessly naive.
Is there any evidence that P might be equal to NP? This seems different from other famous conjectures like Fermat's Last Theorem or the Riemann Hypothesis, where what's hard is finding a counterexample that disproves the conjecture.
I think RJ Lipton (the blog author) and Knuth believe it could be true. However, most complexity theorists don't (I think Fortnow runs a survey of theorists on this every few years). Knuth says something like "well, all it would take is a single algorithm for any of these hundreds of problems". A fine take for the godfather of algorithms. At the same time, many people have in some sense tried to find such an algorithm, even non-explicitly (e.g. factoring would be in P if P = NP). Moreover, there is a lot of complexity-theoretic work that makes P = NP hard to believe (I think Scott Aaronson's blog has some accessible material on this).
>>B) The situation is actually worse than exhausted. We know (have proven) that the regular methods simply won't work (look up "the relativization barrier", "natural proofs", and "algebrization")
I've heard of those, even tried to read the proofs. Still, my understanding is those claims are ultimately "informal proofs" despite having formal steps, so the situation isn't entirely closed up, but still very bad.
Maybe this will help with understanding the relativization barrier. Most proofs of separations of complexity classes relativize, in that not only do they show
A strict subset B
they also show
A^O strict subset B^O
for ANY oracle O. Most of these proofs don't explicitly state this, but they readily extend this way (e.g. the diagonalizing proofs of the time hierarchy theorem).
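To see the diagonalization trick in isolation, here is a hypothetical toy (my own illustration; the real time hierarchy proof diagonalizes against an enumeration of clocked Turing machines, not plain functions): given any enumeration of total functions, build one that disagrees with the i-th function at input i, so it cannot appear anywhere in the enumeration.

```python
def diagonalize(enumeration):
    """Return g with g(i) != enumeration[i](i) for every index i.

    Since g differs from the i-th function at input i, g is not equal
    to any function in the enumeration -- the core of diagonalization.
    """
    return lambda i: enumeration[i](i) + 1

fs = [lambda n: 0, lambda n: n, lambda n: n * n]  # a finite "enumeration"
g = diagonalize(fs)
assert all(g(i) != fs[i](i) for i in range(len(fs)))
```

In the time hierarchy theorem, "flip the i-th answer" is done by simulating the i-th machine on its own description within a time budget, which is why the construction relativizes: the simulation works just as well when both machines carry the same oracle.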
However, we know (Baker, Gill, Solovay) that there exists an oracle O1 where
P^O1 = NP^O1
as well as an oracle O2 such that
P^O2 != NP^O2
Therefore, a proof of P != NP must not relativize, or else it would contradict the first of these results.
No, the distribution is thought to be hard to sample for any classical algorithm (this is still being worked on - there is ongoing theoretical work strengthening the hardness results), and by extension, any classical process.
Yes, the problem is very contrived - the distribution being sampled is the output distribution of a randomly sampled quantum circuit. Of course this is easy on a quantum computer, because you can just apply the randomly chosen gates and measure. I don't think it will be very useful in an application sense. This problem was designed for the very purpose of demonstrating "quantum supremacy" as soon as possible.
Also, from what I know, there is still a problem with the whole verification aspect of this task. That is, even if the device can sample from the distribution, it is difficult to verify that the correct distribution was in fact sampled. This is in contrast to factoring, where to verify the computation we just multiply the numbers together and check that they are indeed the factors. A remedy I have heard for this (I think from Scott Aaronson) is that we can tune the parameters so that the output is just within the limits of verifiability with a big supercomputer, and that should be good enough to show supremacy was achieved (EDIT: this is precisely what OP's article says it will do). I think this might be bending what is meant by the term "quantum supremacy" too much, but it's a pretty arbitrary point in computational power anyway, so I'm not too concerned.
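The verification idea can be made concrete: score the device's samples by the ideal output probabilities, which a classical simulator can compute for small enough circuits. A toy numpy sketch (the circuit family here - random single-qubit unitaries plus a CZ layer - is my own stand-in for illustration, not the actual supremacy circuits):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_circuit_distribution(n_qubits, depth):
    """Ideal output distribution of a toy random circuit: alternating
    layers of random single-qubit unitaries (via QR of a Gaussian
    matrix) and a fixed layer of CZ gates on neighboring qubits."""
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0
    # CZ on qubits (q, q+1): flip sign when both bits are set.
    cz_diag = np.ones(dim)
    for q in range(n_qubits - 1):
        for b in range(dim):
            if (b >> q) & 1 and (b >> (q + 1)) & 1:
                cz_diag[b] *= -1
    for _ in range(depth):
        for q in range(n_qubits):
            m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, _ = np.linalg.qr(m)  # a random 2x2 unitary
            full = np.array([[1.0]])
            for k in range(n_qubits):  # qubit 0 = least significant bit
                full = np.kron(u if k == q else np.eye(2), full)
            state = full @ state
        state = cz_diag * state
    return np.abs(state) ** 2

probs = random_circuit_distribution(n_qubits=4, depth=5)
samples = rng.choice(len(probs), size=2000, p=probs)
# Linear cross-entropy score: mean ideal probability of the observed
# samples, rescaled. A uniform (useless) sampler scores near 0; a
# sampler matching the ideal distribution scores higher.
xeb = len(probs) * probs[samples].mean() - 1
```

The supremacy-scale version of this check requires computing `probs` for ~50 qubits, which is exactly the supercomputer-limited step the comment describes.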
If you're interested in more about quantum supremacy, these two articles have a lot of information:
> A remedy I have heard for this (I think from Scott Aaronson) is that we can tune the parameters so that the output is just within the limits of verifiability with a big supercomputer, and that should be good enough to show supremacy was achieved.
It is a bit harder to see because you are describing a continuous-time process; however, the answer is yes: given a snapshot of the whole system, you would be able to figure out the state at time t1 through the time evolution equation.
Perhaps your confusion stems from the fact that the time t2 is observed, and it seems like you can't go back after that. An easier-to-grasp manifestation of this problem: if you find Schrodinger's cat dead, you can't tell whether it was always dead or only died during your experiment. But that's only with the narrow knowledge of the cat being dead (maybe they would call this the cat-state subsystem). It might be unsatisfactory to you, but in the theory of quantum mechanics, if you knew the state of everything (say the room, the measurement apparatus, and so on), then you would be able to trace back whether the cat was alive or dead. The moral is that measurement looks irreversible if you look only at the system you are observing, but overall, people accept that there is an overarching greater space (the "environment" or "universe", where everything is just unitary evolution according to the time evolution equation [1]). To be clear, the time evolution equation only admits reversible solutions. So with full knowledge of, say, the room and measurement apparatus, it would be possible to deduce the original state of the cat.
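The reversibility point can be seen concretely in a toy simulation: evolve a state with a unitary, then undo it with the conjugate transpose. A minimal numpy sketch (the 8-dimensional state and random unitary are my own stand-ins for "the whole room"):

```python
import numpy as np

rng = np.random.default_rng(1)

# Unitary time evolution is reversible: given the full final state and
# the evolution operator U, the initial state is recovered exactly by
# applying U's conjugate transpose (which is its inverse).
m = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
U, _ = np.linalg.qr(m)          # a random unitary "time evolution"
psi0 = np.zeros(8, dtype=complex)
psi0[0] = 1.0                   # initial state ("cat alive")
psi1 = U @ psi0                 # evolve forward to time t2
recovered = U.conj().T @ psi1   # run the evolution backwards
assert np.allclose(recovered, psi0)
```

Discarding part of the state (looking only at the cat subsystem) is what destroys this invertibility; with the full vector, nothing is lost.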