
Yes. Here's an online drag-and-drop simulator: https://algassert.com/quirk

Until recently [1], classical simulation was faster/cheaper/more accurate than any existing quantum hardware. But the hardware has been improving, and all known ways of classically simulating quantum computation take exponential time with respect to some important property such as the number of qubits, the depth of the computation, or the number of non-trivial gates.
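To get a feel for why: the state of n qubits is a vector of 2^n complex amplitudes, and every gate is a small matrix applied to that vector. Here's a minimal numpy sketch (a toy illustration, not how Quirk or any production simulator is actually implemented) of applying one single-qubit gate to a full state vector:

    import numpy as np

    def apply_single_qubit_gate(state, gate, target, n_qubits):
        # View the 2**n vector as an n-dimensional tensor of shape (2, 2, ..., 2),
        # contract the 2x2 gate with the target axis, then restore the axis order.
        psi = state.reshape([2] * n_qubits)
        psi = np.tensordot(gate, psi, axes=([1], [target]))
        psi = np.moveaxis(psi, 0, target)
        return psi.reshape(-1)

    n = 20                                        # already 2**20 ~ a million amplitudes
    state = np.zeros(2 ** n, dtype=np.complex64)
    state[0] = 1.0                                # |00...0>
    H = np.array([[1, 1], [1, -1]], dtype=np.complex64) / np.sqrt(2)
    state = apply_single_qubit_gate(state, H, target=0, n_qubits=n)

Both the memory and the work per gate double with every qubit you add.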

[1]: https://ai.googleblog.com/2019/10/quantum-supremacy-using-pr...

I wonder how optimized the existing simulators are. I suppose any program with constant input could in principle be reduced to no work at all, but with a potentially exponentially long compile time. Still, there must be simpler optimizations...

Before we declare quantum supremacy, we should make sure the simulator we are comparing with is a good one, not a straw-man one.

I've seen this problem time and again with hardware accelerators. The accelerator is faster than some crappy software, but after a little work with a profiler, the software beats the hardware. Of course optimization won't make a fundamentally exponential problem polynomial, but it can help a lot.


> Before we declare quantum supremacy, we should make sure the simulator we are comparing with is a good one, not a straw-man one.

A big part of writing the supremacy paper was optimizing the simulators. We did three different styles of simulation:

1) A state vector simulator with hand-rolled SIMD assembly (called qSim; not yet publicly released). This required too much space at 53 qubits. (Well, unless you're going to use the majority of all disk space on Summit, which brings its own obstacles. IBM says they can run it that way in a few days, but we'll see.)
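(Back of the envelope: 2^53 complex amplitudes is about 9x10^15 numbers, so even at 8 bytes per amplitude in single precision that's roughly 72 petabytes for one copy of the state, which is why the full state vector at 53 qubits spills out of RAM and onto Summit's disks.)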

2) Treating the quantum circuit as a tensor network and doing optimized contraction to avoid the space blowup (called qFlex https://github.com/ngnrsaa/qflex). This required too much time at 53 qubits.
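For intuition, here's a toy version of the tensor-network idea (just an illustrative einsum, not qFlex's actual contraction strategy): every gate is kept as a small tensor and you contract them directly into output amplitudes, rather than pushing a full 2^n state vector through the circuit.

    import numpy as np

    # Toy 2-qubit circuit, H on qubit 0 then CNOT(0 -> 1), written as a tensor network.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.eye(4).reshape(2, 2, 2, 2)          # indices: out0, out1, in0, in1
    CNOT[1, 0, 1, 0] = CNOT[1, 1, 1, 1] = 0       # clear the identity action when control=1,
    CNOT[1, 1, 1, 0] = CNOT[1, 0, 1, 1] = 1       # then flip the target in that block
    zero = np.array([1.0, 0.0])                   # each wire starts in |0>

    # Contract everything straight into the output amplitudes A[a, b]:
    amps = np.einsum('abij,ik,k,j->ab', CNOT, H, zero, zero)
    print(amps)                                   # Bell state: 1/sqrt(2) on |00> and |11>

The hard part, and where the real engineering in qFlex goes, is choosing a contraction order that keeps the intermediate tensors small; done naively, the time blows up instead of the space.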

3) Custom code written to run on a supercomputer instead of distributed computers.

There's only so much effort you can put in before you have to call it. At this point I think it's more likely for an algorithmic break to save a factor of 10 than for optimization to save a factor of 10... although someone should probably try using GPUs or FPGAs.

I also take the view that if it takes a month to produce a new optimized implementation of a classical simulator that beats the quantum hardware, then the quantum hardware is still outperforming classical hardware for that month. The theoretical bounds are important, but in the day-to-day context of a race they aren't directly relevant.


I’m not really an expert (I’ve just been playing with quantum computer simulations since the supremacy paper came out), but yours is basically IBM’s criticism of that paper.

Part of the problem is that comparing a quantum computer simulation to the state of an actual quantum computer is almost inherently an apples-to-oranges thing.

We do not know of any way to compactly represent the state of a quantum computer on a classical computer; it appears to require an exponentially large state vector, so it takes an exponential number of operations just to evolve that vector. On the other hand, the simulation produces the exact, full probability distribution describing the quantum computer’s state. Running a physical QC produces one _sample_ from that distribution, and you have to run the whole thing again to get another sample.

If we pretend that the simplest quantum computer state is equivalent to a coin toss, then classically simulating a fair coin would produce a distribution like {H: 0.5, T: 0.5}, while running the quantum computer would produce “H” half the time and “T” half the time. You could run the QC a bunch of times to estimate the actual probability, but there’s a fundamental difference between what the QC does and what the simulation does.
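In code, the coin-toss version of that difference looks something like this (a toy sketch, nothing to do with the actual experiment):

    import numpy as np

    # Classical simulation: one pass gives you the exact output distribution.
    state = np.array([1.0, 1.0]) / np.sqrt(2)     # the "fair coin" superposition
    distribution = np.abs(state) ** 2             # exactly {H: 0.5, T: 0.5}

    # Physical QC: every run yields a single sample from that distribution,
    # and you only ever see the probabilities through repeated sampling.
    rng = np.random.default_rng()
    samples = rng.choice(["H", "T"], size=1000, p=distribution)
    print((samples == "H").mean())                # approaches 0.5 only with many runs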

There are some ways to approximately solve the classical simulation problem, but we haven’t found a way to compactly represent and manipulate the quantum state vector that is as fast as physics/nature seems to be in hardware. The Extended Church-Turing thesis (ECT) basically says that such an encoding and simulator should exist, and the basic premise of QC is that such an encoding and evolution process is impossible.

We don’t expect to ever conclusively prove that the ECT is false (someone could always find an efficient simulator for QCs later), but that doesn’t mean QCs can’t be useful in the interim, or even potentially after such a simulator is found (if it runs in polynomial time with a very large constant factor, for example).

The supremacy paper claimed that classically simulating their largest circuit would take 10,000 years. IBM said it would take them 2 days. The Sycamore processor did it in 5 minutes. That’s still a big improvement, even if the classical simulator can be further optimized, and even IBM admits their approach wouldn’t work if the quantum processor had even 1 more qubit.
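For scale: 2 days is about 2,900 minutes, so even taking IBM’s estimate at face value the 5-minute run is a speedup of roughly 600x; measured against the paper’s 10,000-year estimate (around 5x10^9 minutes), it’s a factor of about a billion.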



