
You can simulate quantum mechanics with classical computers pretty well, as long as you stick to the Copenhagen interpretation.



No, even the simulation of pure quantum states scales exponentially with the number of degrees of freedom; that's irrespective of any interpretation or invocation of non-unitary evolution like measurements, just pure simulation of the Schrödinger equation. If you simulate an environment to e.g. incorporate wave-function collapse or measurement operations, you'll work with a master equation whose cost also grows linearly with the complexity of the density-matrix simulation.
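For a rough sense of that scaling, here is a minimal sketch (Python, purely illustrative numbers): a pure state of n two-level degrees of freedom needs 2^n complex amplitudes, and the density matrix used in a master-equation treatment squares that.

    # Storage alone for the quantum state of n two-level systems (spins/qubits),
    # assuming 16 bytes per complex amplitude. Time evolution costs far more.
    BYTES_PER_AMPLITUDE = 16

    for n in (10, 20, 30, 40, 50):
        pure_state = 2**n * BYTES_PER_AMPLITUDE             # Schrödinger picture: 2^n amplitudes
        density_matrix = (2**n) ** 2 * BYTES_PER_AMPLITUDE  # master equation: (2^n)^2 entries
        print(f"n={n:2d}: state vector {pure_state / 2**30:.3e} GiB, "
              f"density matrix {density_matrix / 2**30:.3e} GiB")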


Feynman's lecture explains why classical computers are terrible at simulating quantum systems.

The basic problem is that the number of states grows exponentially with the size of the system. You very quickly have to start making approximations, and it takes an enormous amount of classical computing power and memory to handle even relatively small systems.
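To make that concrete, here is a hedged toy example (plain NumPy; a transverse-field Ising chain chosen only as an illustration, not anything from Feynman's lecture): the exact Hamiltonian is a 2^n x 2^n matrix, so even a few dozen spins are already out of reach without approximations.

    import numpy as np

    # Toy example: exact Hamiltonian of an n-site transverse-field Ising chain.
    # The matrix is 2^n x 2^n, which is why exact treatment stops at small n.
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    I2 = np.eye(2)

    def op_on_site(op, site, n):
        """Embed the single-site operator `op` at position `site` in the 2^n-dimensional space."""
        full = np.array([[1.0]])
        for k in range(n):
            full = np.kron(full, op if k == site else I2)
        return full

    def ising_hamiltonian(n, J=1.0, h=0.5):
        dim = 2**n
        H = np.zeros((dim, dim))
        for i in range(n - 1):
            H -= J * op_on_site(sz, i, n) @ op_on_site(sz, i + 1, n)
        for i in range(n):
            H -= h * op_on_site(sx, i, n)
        return H

    n = 10                       # 1024 x 1024: easy
    H = ising_hamiltonian(n)
    print(H.shape, "ground-state energy:", np.linalg.eigvalsh(H)[0])
    # n = 30 would already need a matrix with roughly (1e9)^2 entries.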


Yes, you have to make approximations and deal with (estimates of) errors.

However, quantum computers also have to deal with noise and errors. So far, that's not very different.

(If we manage to build error-correcting quantum computers, that might change.)


The approximations don't just introduce small errors. To simulate quantum systems classically, you need to make drastic assumptions that fundamentally change the nature of the system.

This is very different from, say, an approximation that adds in a small amount of noise that you can estimate. The approximations in simulating quantum systems classically can radically change the behavior of the system, in ways that you might not understand or be able to easily estimate.


Huh? Your simulation doesn't care about your interpretation. All interpretations of quantum mechanics make the same predictions.


We currently can't even simulate a hydrogen atom.


We can absolutely simulate the hydrogen atom. This paper lists the equations and fundamental constants that allow calculating the hydrogen energy levels with around 13 digits of accuracy: https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.93....
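As a hedged sketch of just the leading-order piece (hand-typed CODATA values; the paper then stacks relativistic, recoil and QED corrections on top of this to reach its quoted accuracy):

    # Leading-order (Bohr) hydrogen levels with the reduced-mass correction.
    # Constants are 2018 CODATA values typed in by hand, for illustration only.
    h = 6.62607015e-34             # Planck constant, J s (exact in the SI)
    c = 299792458.0                # speed of light, m/s (exact in the SI)
    R_inf = 10973731.568160        # Rydberg constant, 1/m
    me_over_mp = 5.44617021487e-4  # electron-to-proton mass ratio

    reduced_mass_factor = 1.0 / (1.0 + me_over_mp)  # mu / m_e

    def E(n):
        """Energy of level n in joules, leading (non-relativistic) order only."""
        return -h * c * R_inf * reduced_mass_factor / n**2

    f_1s_2s = (E(2) - E(1)) / h
    print(f"1S-2S at leading order: {f_1s_2s:.6e} Hz")
    # Measured: about 2.466061e15 Hz. The remaining ~1e-5 relative difference
    # is what the fine-structure, recoil and QED terms in the paper account for.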


That is not a simulation and those are not fundamental constants.

Because a simulation is too difficult, there are approximate formulae for computing the quantities of interest, like the energy levels of the spectrum of the hydrogen atom.

These approximate formulae include a large number of constants which are given in the paper linked by you and which are adjusted to match the experimental results.

A simulation of the hydrogen atom would start from a much smaller set of constants: the masses of the proton and of the electron, the magnetic moments of the proton and of the electron, and the so-called fine-structure constant (actually the strength of the electromagnetic interaction). To these you add a few fundamental constants that depend on the system of units used, which in SI would be the elementary electric charge (which fixes the size of the coulomb in natural units), the Planck constant (which fixes the kilogram in natural units) and the speed of light in vacuum (which fixes the meter in natural units).
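As a hedged illustration of how far that smaller set gets you (hand-typed CODATA values, and only the leading-order relation, not an actual simulation): the Rydberg constant, which sets hydrogen's energy scale, follows directly from the electron mass, the fine-structure constant, the Planck constant and the speed of light.

    # R_infinity = m_e * c * alpha^2 / (2 h), from the "smaller set" of constants
    # (2018 CODATA values typed in by hand for illustration).
    m_e = 9.1093837015e-31    # electron mass, kg
    alpha = 7.2973525693e-3   # fine-structure constant
    h = 6.62607015e-34        # Planck constant, J s (exact in the SI)
    c = 299792458.0           # speed of light, m/s (exact in the SI)

    R_inf = m_e * c * alpha**2 / (2 * h)
    print(f"R_infinity derived: {R_inf:.6f} 1/m")   # ~10973731.57 1/m
    print("CODATA value:       10973731.568160 1/m")
    # The proton mass then enters via the reduced-mass correction; the remaining
    # constants in the paper parameterize higher-order corrections.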


The inputs of the formulas for the hydrogen energy levels in the paper are: The Rydberg constant, the fine structure constant, the electron-to-proton mass ratio, the electron-to-muon mass ratio, the Compton wavelength of the electron, and some nuclear properties (charge radius, Friar radius, and nuclear polarizability). All inputs except the nuclear properties are as fundamental as it gets according to our current understanding of physics (note that the Rydberg constant and Compton wavelength are simple combinations of other physical constants). Nuclear physics is dominated by quantum chromodynamics which is not nearly as well developed as QED.

The constants are determined by fitting the theory to the best available measurements (not only in hydrogen). This is exactly what fundamental constants do: They convert unit-less theory expressions into measurable quantities.


We know how to simulate it, but we can't do it. Those equations, though, require too much computation to solve with any known classical algorithm.


This is completely wrong. My laptop can solve the equations in fractions of a second. I believe that with some optimizations it should be trivial to do the calculations on a 1960s mainframe.


That is not true.

You can solve such equations in fractions of a second only for very low precisions, much lower than the precision that can be reached in measurements.

For higher precision in quantum electrodynamics computations, you need to include an exponentially increasing number of terms in the equations, which come from higher order loops that are neglected when doing low precision computations.

When computing the energy levels of the spectrum of a hydrogen atom with the same precision as the experimental results (a precision that far exceeds that of FP64 numbers, so you need an extended-precision arithmetic library, not simple hardware instructions), you need either a very long time or a supercomputer.

I am not sure how much faster that can be done today, e.g. by using a GPU cluster, but some years ago it was not unusual for the comparisons between experiments and quantum electrodynamics to take months (though I presume that the physicists doing the computations were not experts in optimization, so perhaps the computations could have been accelerated by some factor).
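Whatever the cost of evaluating the QED series itself, the number-format side of the argument is easy to check; a small sketch (Python, a round toy frequency, nothing taken from the actual computations):

    import math

    # FP64 spacing at the scale of an optical transition frequency (~2.5e15 Hz):
    f = 2.5e15
    print("spacing between adjacent FP64 values near", f, "Hz:", math.ulp(f), "Hz")  # 0.5 Hz
    print("relative resolution:", math.ulp(f) / f)                                   # ~2e-16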


I believe you might be confusing the QED calculations of hydrogen with those of the electron g-factor. Just have a look at the paper I linked (section VII). Most of the QED corrections are given analytically, no computers involved at all. You could in principle calculate this with pen and paper (and a good enough table of transcendental functions).

The most accurate hydrogen spectroscopy (of the 1S-2S transition) has reached a relative accuracy of a few parts in 1E15, which is around an order of magnitude coarser than the relative precision of FP64 numbers.


The "few parts in 1E15" claim is applicable only to the absolute value of the frequency of the 1S-2S transition, which is 1 233 030 706 593 514 Hz.

That absolute frequency is computed from the ratio between an optical frequency and the 9 GHz frequency of a cesium clock, a comparison that is affected by large uncertainties due to the need to bridge the gap between optical frequencies and microwave frequencies.

The frequency ratios between distinct lines of the hydrogen atom spectrum, or between lines of the hydrogen atom spectrum and lines in the optical spectra of other atoms or ions, can be known with uncertainties of parts in 1E18, one thousand times better.

When comparing a simulation with the experiment, the simulation must be able to match those quantities that can be measured with the lowest uncertainty, so the simulated values must also have uncertainties of at most parts in 1E18, or better, in 1E19.

This requires more bits than FP64 provides. The extended precision of the Intel 8087 would barely be enough to express the final results, but it would not be enough for the intermediate computations, so one really needs quadruple-precision computations or double-double computations, the latter being faster where only FP64 hardware exists.
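For reference, a minimal sketch of the double-double idea mentioned here (the classic error-free two-sum, in a simplified form; a demonstration of the technique, not code from any actual QED evaluation):

    def two_sum(a: float, b: float):
        """Error-free transformation: s = fl(a + b) and a + b = s + e exactly."""
        s = a + b
        bb = s - a
        e = (a - (s - bb)) + (b - bb)
        return s, e

    def dd_add(x, y):
        """Simplified ("sloppy") addition of double-doubles x = (hi, lo), y = (hi, lo)."""
        s, e = two_sum(x[0], y[0])
        e += x[1] + y[1]
        return two_sum(s, e)

    big = (2.5e15, 0.0)    # an optical-frequency-scale number
    tiny = (2.5e-3, 0.0)   # a milli-hertz correction, below FP64 resolution at that scale
    print(dd_add(big, tiny))   # -> (2500000000000000.0, 0.0025): kept in the low word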

I have not attempted to compute the QED corrections myself, so I cannot be certain how difficult that really is.

Nevertheless, section VII of this CODATA paper, and also the previous editions of the CODATA publications (some of which were more detailed), are not consistent with what you say, i.e. with these corrections being easy to compute.

For each correction there is a long history of cited research papers that would need to be found and read to determine how exactly they were computed. For many of them there is a history of refinements and of discrepancies between the values computed by different teams; discrepancies that have sometimes been resolved by later, more accurate computations, though for some the right value was not yet known at the date of this publication.

If the computations were so easy that anyone could do them with pen and paper, there would have been no need, in some cases, for several years to pass before the correct computation was validated, nor, in other cases, for the very slow improvement in the accuracy of the computed values.


The accuracy of the hydrogen 1S-2S measurement was mainly limited by the second-order Doppler shift of the moving atoms (and to a lesser degree the AC Stark shift of the excitation laser and the 2S-4P quench light). The comparison between the laser frequency and the Cesium fountain clock was done with an optical frequency comb which introduces a negligible uncertainty (< 1E-19).

Isn't it fun to get your own field of expertise (wrongly) explained to you on the internet?

Edit: I never said that it is easy to derive the corrections listed in the CODATA paper. However, it is relatively easy to calculate them.



