What makes quantum computing so hard to explain? (quantamagazine.org)
97 points by frumpish 11 days ago | 72 comments

Quantum computing is difficult to explain because you have to learn a mathematical model to understand it. You cannot make use of metaphors and analogies to familiar references. Why? Because our language developed in a classical world. It is not equipped to deal with quantum mechanics. Quantum mechanical objects are an entirely new ontological category. Light is not a particle, nor is it a wave; it's a different third kind of thing with which you've had no prior experience. The last time most people encountered a new ontological category was in early childhood, and since then they've been building up knowledge by relating it to what they already know. The only way to surmount this obstacle is to learn a new language - the language of mathematics. All metaphors and analogies will lead you astray; they will only confuse. Thankfully the math is quite easy: it is only basic linear algebra. Learn the math and you will learn quantum computing. Don't learn the math and you will not learn quantum computing.

Let's take the simplest phenomenon in quantum computing: superposition. People get very confused about superposition. Something being both 0 and 1 at the same time? Like fuzzy logic? Is it 0 and 1? Is it 0 or 1? No, superposition is a linear combination of 0 and 1 which can be collapsed according to certain measurement rules. It can also be manipulated with quantum logic gates. There is no simpler way of explaining it.
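For what it's worth, the "linear combination plus measurement rule" really is just a few lines of linear algebra. A sketch in plain numpy (my own naming, not any standard library):

```python
import numpy as np

# A qubit state as a length-2 complex vector of amplitudes over the
# basis states |0> and |1>.
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# A superposition is a linear combination whose squared magnitudes sum to 1.
state = np.sqrt(0.25) * zero + np.sqrt(0.75) * one

# Measurement "collapses" the state: the outcome probabilities are the
# squared magnitudes of the amplitudes.
probs = np.abs(state) ** 2
print(probs)  # ~[0.25 0.75], up to float rounding
```

It is neither "0 and 1" nor "0 or 1"; it is exactly this vector, until a measurement turns it into a sample from `probs`.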

What I am still left with is: yes, we can write down a classical description of superposition, but SOMETHING RANDOM still happens.

When you imagine a fair coin being flipped and describe it with a probabilistic mathematical model, you can always imagine a classical story of what actually occurred: how you went from a spread of possibilities to witnessing a single definite outcome. So I can understand the ontology, because there is a classical ontology.

What QM is forcing us to do is write down as much classical story as possible, THEN SOMETHING TRULY RANDOM OCCURS. I can't imagine in my head any possible classical story. Both the coin and quantum "measurement" use probability theory in a similar way, but this is an important distinction.

When you say develop a new ontology, I really think you mean develop a recipe. I see no way to even comprehend anything but classical ontologies. I'm not sure if it's a limit of our biology, imagination, or intuition. There is no classical ontology, nor a settled ontology for that matter, to describe the process of "measuring" superpositions.

This problem is solved by learning the language of mathematics. The limits of our language are the limits of our world. When we learn the language of vectors obeying the 2-norm, manipulated by unitary matrices and collapsing probabilistically according to the Born rule, we have learned how a single qubit works.
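That "language" fits in a few lines of numpy (a sketch of my own, nothing official): the state is a unit 2-vector, a gate is a unitary matrix, and the Born rule squares the amplitudes to get probabilities.

```python
import numpy as np

# The Hadamard gate: a unitary 2x2 matrix.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = np.array([1, 0], dtype=complex)  # the qubit starts as |0>
state = H @ state                        # unitary evolution

born = np.abs(state) ** 2                # Born rule: P(k) = |amplitude_k|^2
print(born)                              # ~[0.5 0.5]

# Unitaries preserve the 2-norm, so the probabilities always sum to 1.
assert np.isclose(np.linalg.norm(state), 1.0)
```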

But the Born rule required empirical content to come up with. Empirical content which is (possibly) fundamentally random.

You can't simulate that (fundamental randomness) purely classically. The classical world can be simulated with Turing machines, however. And we know infinite pen-and-paper maths can simulate any Turing machine.

How do I get from pen-and-paper maths (doing simple Turing-machine-like operations/computations) to a truly random event, a necessary component of a quantum measurement? When I'm doing my pen-and-paper maths, I have to go grab a radioactive atom and use the time at which it decays to somehow seed a random event into my simulation.

I do not know of a way to think beyond classically, or beyond the above notion of pen-and-paper maths.

Or are you saying some abstract high-dimensional vector really does ontologically exist in some even higher-dimensional space? What space? Not spacetime, that is 4D. And even then, how do we reduce that spread-out wave function to definite values in your ontology?

Math is not an ontology, unless you want to go the Mathematical Universe route.

The Many-worlds Interpretation explains the subjective randomness. https://en.wikipedia.org/wiki/Many-worlds_interpretation


The refusal of the majority of QM theoreticians to accept MWI is the root cause of the "mystery".

I'm sure decades after Copernicus, there were still astronomers going on and on about the complex interplay of the wanderers, and how the retrograde motion could only be explained with circles upon circles, and spheres upon spheres.

This is no different.

And Copernicus was still wrong about the ontology, and there are many critiques of MWI, not just by people scared to accept its metaphysical weight.

I think it's unfair to criticize too harshly those who aren't ready to dive in with MWI. It's not that incomprehensible or too crazy, but it is far beyond what current physics says exists. We'd go from spacetime to some infinite or extremely high-dimensional space. Why should we need to go that far when other interpretations maybe ask less of us?


pp. 21-35, 307-355, 355-368 (and more), if you or anyone wants some critiques of MWI

Why bother with a 3D model of the planets when a 2D model of circles upon circles asks less of us?

It's so much simpler to model the wanderers as moving along the surface of a celestial sphere! Adding depth adds nothing to our understanding when the current mathematical models can already predict their motion to high accuracy. It's all isomorphic anyway.

Look, just go away with your heretical notions, I have calculations to perform!

The way I think of it, which makes sense to me and maybe nobody else:

Quantum mechanics is completely deterministic. But the point from which we are observing it is random. (Not our origin in space, but our origin in possibilities.) The spread of a waveform is deterministic, but it seems random because we can't predict from which outcome our perspective will originate.

That's why the multiverse interpretation is (imo) the simplest way to reason about QM.

The math behind quantum mechanics, even though it works nicely in practice, isn't fully connected to the real world, because the measurement problem has yet to be solved. I would say this is the main source of confusion.

It's possible some new paradigm will come along which makes the mathematical model of quantum computation obsolete and renders irrelevant the question of interpretation (QC uses naive Copenhagen by convention) but I wouldn't hold my breath. And even then, like Newtonian mechanics, the mathematical model would probably still be worth learning.

I don't think QC uses naive Copenhagen by convention? I remember reading something by a quantum computation scientist (maybe it was Scott Aaronson?) who said he prefers many-worlds.

What a researcher prefers is different from how quantum programming languages are structured and how the mathematical model is taught: you have a state vector in superposition that is collapsed probabilistically to classical values on measurement.

It is interesting, though, that behind the scenes of this description David Deutsch really does think the worlds of MWI are interfering with each other to allow quantum computations.

Consider reading Carlo Rovelli's "Helgoland", which is a good introduction both to Rovelli (a brilliant, poetic, and humble writer) and to Relational Quantum Mechanics, of which he is a major proponent and likely founder.

Why? Because he asserts that there is no measurement problem, and that the EPR paradox and violations of Bell's Inequality don't exist: They are based on misunderstanding what QM is actually predicting and what QM observers are actually observing. Likewise, there is no wave function collapse, no need for MWI, no hidden variables, etc.

To sum up the physics, badly....

Background: Relativity took as its principles that physics is universal (the universe is isotropic), that there are no fixed reference frames (all measures of speed, and therefore of a few other things like energy, are relative to the observer making the measurement), and that the speed of light is a fixed constant. From that we get space-time curvature, etc.

Relational QM takes this same perspective: measurement is relative to the measurer, which can mean an observer or anything else affected by whatever is manifesting itself, such as another particle.

Manifesting is a deliberate word: Absent manifestation, nothing has properties. Properties only make sense in the context of interactions.

What is the charge or spin of an isolated electron? 無 (mu): the question makes no sense, because charge and spin and everything else only arise, are only measurable, in interactions.

So take EPR: There is no paradox, because when Alice measures the spin of one of the entangled particles, she may "know" that Bob will measure or will have measured opposite spin, but she doesn't "know" this yet unless and until she has interacted with Bob.

It is only after this slower-than-light communication that the two opposite spins - and the predictions of QM - are confirmed.

Until A and B interacted, entangling their states, the states A-observes-S1 and B-observes-S2 were independent. The interaction between A and B, state "A+B share information re S1+S2", took place without anything ever being FTL, without violations of locality, etc.

What I do not yet know, and have not had enough time to look into, is whether and how Relational QM might or does change what we expect of quantum computing. At some point, I'll seek out some Aaronson on the subject.

Read the book, well worth it.

Deutsch explained quantum theory in his book The Fabric of Reality. Not every aspect, but the basic concepts are presented really well.

For me the following works best: superposition is like transparency-blending two images together. At 0.5 each you can still see both images, but of course information can be lost. And a superposition of 1% and 99% gives essentially only one picture, even though it is a slightly mixed state. A measurement is then the amount of a single picture in the superposition. And quantum gates are just linear operators on the mixed (superposed) image.

Do you know a good source for someone to learn about it at a high level, if they took linear algebra in college but haven’t touched it in 10+ years?

I made a lecture aimed at such people (I was such a person!), which has proven popular: https://youtu.be/F_Riqjdh2oM

With a follow-up blog post focusing on entanglement: https://ahelwer.ca/post/2018-12-07-chsh/

Andrew, that lecture you gave was fantastic. The Deutsch oracle example in the lecture was especially helpful -- I'm surprised it doesn't get used more frequently in other explanations of QC. I think one of the most interesting moments in the video was when the audience (me included) jumped to the conclusion that the QC was more powerful than it actually was and you helpfully brought everyone back to reality.

Glad you enjoyed! I do think the Deutsch Oracle problem is the closest thing to a "hello world" problem in quantum computing, or maybe more analogous to the "sum every value in a list of integers" problem in GPU programming. Of course you can shoot holes in it, and a lot of people have trouble with the "rewire the black box to use two wires" thing and think it's cheating. But such an objection would disappear if you just look at the N-bit Deutsch Oracle problem, which is unfortunately too complicated to be the first problem one encounters.
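For readers curious what the one-bit Deutsch problem looks like concretely, here is a hedged sketch in plain numpy (names and structure are my own, not from the lecture): given a black-box f from one bit to one bit, decide whether f is constant or balanced with a single oracle query.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def oracle(f):
    # The standard oracle U_f |x, y> = |x, y XOR f(x)>, as a 4x4
    # permutation matrix over basis states |00>, |01>, |10>, |11>.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    # Start in |0>|1>, apply H to both qubits, query the oracle once,
    # apply H to the first qubit, then read the first qubit:
    # 0 means f is constant, 1 means f is balanced.
    state = np.zeros(4); state[1] = 1            # |01>
    state = np.kron(H, H) @ state
    state = oracle(f) @ state
    state = np.kron(H, np.eye(2)) @ state
    p_first_is_1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p_first_is_1 > 0.5 else "constant"

print(deutsch(lambda x: 0))      # constant
print(deutsch(lambda x: x))      # balanced
```

One query settles it, where any classical algorithm must call f twice; this is the interference choreography in miniature.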

Thanks! Commenting so I don’t lose this.

You can use the “favorite” button on comments to save them to your profile.

I didn't know you could do it on a comment. It's not on the normal UI. Thanks.

This helped me a lot and I think it made the rounds on hackernews a while back as well. https://quantum.country/

There's also this video which walks through how quantum computers solve a very specific factoring problem (Shor's Algorithm) in a way that classical computers cannot. https://www.youtube.com/watch?v=lvTqbM5Dq4Q

If you want to learn it from the basics and retain what you learn, the best source is probably https://quantum.country/

I have mixed feelings when people fight so fiercely against the "trying all possibilities at the same time" explanation. It makes it sound false, when in fact it is only incomplete. "Trying all possibilities at the same time, such that they all add up constructively to the right answer" would be a better way to fix it.

From the article: The goal in devising an algorithm for a quantum computer is to choreograph a pattern of constructive and destructive interference so that for each wrong answer the contributions to its amplitude cancel each other out, whereas for the right answer the contributions reinforce each other.

I think Scott Aaronson is right to rail against the explanation of "trying all possibilities at the same time". The emphasis should be on *choreographing* the right interference, not on the fact that all possibilities get accounted for by a quantum computer (since each outcome has a measurable probability).

In fact, "trying all possibilities at the same time" is a ubiquitous phenomenon in Nature. Light rays find their trajectory by trying out all possible paths, and only the one with the highest constructive interference survives (this is what Feynman discovered in Quantum Electrodynamics, which is referenced in the article).

The real trick in quantum computing is manipulating qubits through quantum gates such that the outcome that gets assigned the highest probability is the correct one. This is indeed hard as the environment usually interferes and spreads out the probabilities across all outcomes, leading to a junky, random outcome.

Yeah, quantum parallelism is a resource we exploit. The next thing to explain is interference.

I think that the next thing to explain is that what we can do has very limiting rules, and they aren't what we are used to. Something like this:

"In order to maintain a quantum superposition, we have to follow very strict rules imposed by quantum mechanics. We can't make operations depend on the state, so nothing like an if statement is available. Also all operations have to be reversible, so we don't have logical 'and' or 'or'. A relative handful of programs can follow these rules and get a huge speedup from quantum parallelism. The vast majority of useful computer programs can't be rewritten this way and get little or no speedup."
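The two constraints quoted above can be illustrated with plain Python truth tables (a sketch of my own, classical code only):

```python
# Classical AND is irreversible: three different inputs produce output 0,
# so no gate can recover the inputs from the output.
and_preimages_of_zero = [(a, b) for a in (0, 1) for b in (0, 1) if (a & b) == 0]
print(and_preimages_of_zero)  # [(0, 0), (0, 1), (1, 0)]

# CNOT keeps both wires (control, target XOR control) and is its own
# inverse, so it satisfies the reversibility rule.
def cnot(control, target):
    return control, target ^ control

for a in (0, 1):
    for b in (0, 1):
        assert cnot(*cnot(a, b)) == (a, b)  # applying it twice undoes it
```

Quantum circuits are built entirely from gates of the second kind, which is why ordinary 'and'/'or' logic has to be re-expressed reversibly (e.g. with Toffoli gates) before it can run on a quantum computer.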

I would be very happy if somebody showed me the equivalent to the 'Hello World' program in quantum computing. Or even something like '1 + 1 = 2' as a quantum program.

I consider simulating a coin flip (aka creating a so-called “even superposition”) to be the hello world of quantum computing. In a quantum programming language called Quil, this looks like

    DECLARE result BIT
    H 0
    MEASURE 0 result
This will put a 0 or a 1 in result with equal probability. ‘H 0’ is performing a Hadamard operation on qubit 0. The MEASURE allows us to extract a classical bit value out of a qubit.

Others might say the “Bell pair” is the hello world, and that looks like this:

    H 0
    CNOT 0 1
This is a bit harder to explain, because it requires understanding both superposition and entanglement.

I am just being annoyingly pedantic, but I do not think a Clifford circuit (both of your examples fall in this category) is a good quantum hello world, because they are trivial to simulate efficiently on a classical computer. But I do not have better suggestions.

I don't think a "hello world" type of program/circuitry(?) needs to be unique to a quantum computer. It just needs to be expressible with a quantum circuit.

Similar to how you can have hello world in both ruby and python, but arrive at them with different syntax and compilers.

And displaying "Hello World" can be done trivially with a pencil and paper. That doesn't make doing so in a classical programming language devoid of value.

Sure, but there is value in a "Hello World" being focused on what is special about the hardware/language. Look at Haskell and OCaml tutorials for instance: they all explicitly say stuff like "let us start with the 'Hello World' of functional languages, namely, implementing the Fibonacci (or factorial) function recursively".

Isn't that the case for literally anything you could run on the current state of the art quantum computers?

The troll answer of simulating a qubit using a qubit excluded.

I think part of the problem (although not all of it) is that quantum computers are at a phase that's analogous to writing a "Hello World" program in very rudimentary assembly. Even ignoring the quantum part, there's still a big gap.

I guess the closest thing to a quantum hello world would be a simple quantum circuit—each line represents a qubit register, and various types of gates operate on the qubits.

To see quantum circuits that do something useful, check out the "Example Circuits" links that show up in the Quirk simulator: https://algassert.com/quirk

via https://algassert.com/2016/05/22/quirk.html via https://news.ycombinator.com/item?id=11752421

Quantum teleportation is a somewhat simple but non-trivial procedure: https://qiskit.org/textbook/ch-algorithms/teleportation.html

A controlled-NOT gate (CNOT) can be thought of as addition mod 2: https://en.wikipedia.org/wiki/Controlled_NOT_gate

I think a simple simulator would go a long way, using random numbers in a language like Python. It doesn't have to be fast or efficient, just be able to proceed through some basic operations while we watch it evolve.
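Along those lines, here is a minimal sketch of such a simulator in Python with numpy (the names are invented for illustration, not a real library): states are complex vectors of length 2^n, gates are matrices, and measurement samples from the squared amplitudes.

```python
import numpy as np

rng = np.random.default_rng()

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def measure_all(state):
    # Born rule: sample a basis-state index with probability |amplitude|^2.
    probs = np.abs(state) ** 2
    return rng.choice(len(state), p=probs)

# Build a Bell pair: H on qubit 0, then CNOT.
state = np.zeros(4, dtype=complex); state[0] = 1  # |00>
state = np.kron(H, np.eye(2)) @ state             # H on the first qubit
state = CNOT @ state

# Sampling only ever yields 00 (index 0) or 11 (index 3), roughly 50/50.
samples = [measure_all(state) for _ in range(1000)]
assert set(samples) <= {0, 3}
```

It is neither fast nor efficient, but you can watch the state vector evolve after every gate, which is exactly what the comment asks for.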

Check @ahelwer comment on this page. The youtube video is excellent.

I get the feeling that quantum computing is hard to explain largely because it isn't really here yet, so it's not widely available and only rarely used for certain niche applications.

> The largest number reliably factored by Shor's algorithm is 21 which was factored in 2012.[0]

So for the time being quantum computing is not a real threat to cryptography, and there are sound technical reasons to assume that it never will be.

[0] https://en.wikipedia.org/wiki/Integer_factorization_records#...


There are a lot of arguments around that quantum computing will scale in a Moore's Law way (wrt qubits, so doubly exponential in compute power) and thus will be useful in the near future.

These are extremely optimistic.

Quantum computing hardware came out of the academic lab only in the last few years and was subjected to actual engineering, hence the massive growth in capabilities. However, these gains were mostly the result of picking all kinds of low-hanging fruit. To give an example of what I mean: I talked to a physicist doing superconducting qubit work at a conference in 2015, and at the time the cavities used to hold the qubit wavefunction were machined by the department machine shop, using a mill. He told me they would just keep making them until they had enough of sufficient quality for computation. If I remember properly, something like 1 was kept for every 20 thrown away. Using proper manufacturing can get you a lot farther, but...

It's unclear how far, and breaking modern (non-quantum-resistant) encryption on a QC would require thousands of times more qubits than currently exist.


Generally when you're talking about cryptography you don't want to make assumptions that rely on reality working out in the way that's most convenient for you. Presuming that quantum computing can't be any kind of threat is a pretty risky gamble.

The NSA actually seems quite scared of it and wants everyone to move to post-quantum encryption already. Ulterior motives here would imply they have cracks for everything except quantum encryption, which probably isn't the case.


There is some opinion that the 21 factorization, and the 15 factorization for that matter, depended on knowledge of the correct result[1].

[1] https://crypto.stackexchange.com/questions/59795/largest-int...

From Aaronson's (the author's) blog: https://www.scottaaronson.com/blog/?p=5539

> Accompanying the Quanta piece is a 10-minute YouTube explainer [1] on quantum computing, which (besides snazzy graphics) features interviews with me, John Preskill, and Dorit Aharonov.

[1] https://www.youtube.com/watch?v=jHoEjvuPoB8&t=6s

I get all my quantum computing info from Aaronson. I'm grateful for his efforts, but I sure hope he knows what he's talking about!

> So a qubit is a bit that has a complex number called an amplitude attached to the possibility that it's 0, and a different amplitude attached to the possibility that it's 1. These amplitudes are closely related to probabilities, in that the further some outcome's amplitude is from zero, the larger the chance of seeing that outcome; more precisely, the probability equals the distance squared.

Two questions, for anyone who knows these things:

* Perhaps it's just not perfectly edited, but the above says the amplitudes are related to the probabilities that the qubit is 0 or 1. Shouldn't that say will be 0 or 1, because the qubit is currently in a superposition?

That may seem like nitpicking over typos, but it's necessary to the next question:

* Why have two amplitudes rather than one? The probabilities of either of two exclusive results, such as the two states of a resolved qubit, are P and 1-P. Given that amplitudes are directly proportional to those probabilities (in the last sentence of the above quote), how could the amplitudes change independently of each other? If the probability of one outcome increases, the other must proportionally decrease. And why not use one amplitude, because (it seems) if you know one amplitude you can easily determine the other? I suspect I'm misunderstanding something, perhaps that last sentence.

Caveat: I am not an expert, happy to be corrected on any of this.

The thing to understand is that amplitudes are not actually just probabilities. For a single qubit, they are a pair of complex numbers a and b such that |a|^2+|b|^2=1. If you force a and b to be positive real numbers, then they really can be thought of as (square roots of) probabilities, and knowing one determines the other. However, since they can be complex, there are actually three real degrees of freedom, not one.

On the face of it, it’s not really clear what this buys you since measurement does force these complex numbers into their plain old, squared-norm probability. The interesting part is that there are quantum computational operations which only make sense in the complex world; there’s no way to realize them in the vanilla “probabilistic computing” world (that is, where all amplitudes are positive real).

Where things really start to get interesting is with “entanglement” of qubits, where now you have complex linear combinations of all possible n-bit states (where again the squared-norms must sum to 1). If you think of these linear combinations as elements of C^{2^n}, then quantum gates can be thought of as matrices acting on this space. Again, the “secret sauce” here is when you have gates that don’t come from classical computation (permutation matrices) or even probabilistic computation (real matrices with positive coefficients).
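To make the "what this buys you" concrete: even negative real amplitudes already do something no stochastic matrix can. A small numpy sketch (my own, not from the comment above):

```python
import numpy as np

# The Hadamard gate is a fair "coin flip" at the amplitude level.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = np.array([1, 0], dtype=complex)   # |0>
after_one = H @ state                     # measuring now gives 50/50
after_two = H @ after_one                 # the |1> contributions cancel

print(np.abs(after_one) ** 2)  # ~[0.5 0.5]
print(np.abs(after_two) ** 2)  # ~[1. 0.] -- back to |0> with certainty

# A classical stochastic "fair coin" matrix can only mix further,
# never unmix: two flips are still 50/50.
C = np.array([[0.5, 0.5], [0.5, 0.5]])
print(C @ C @ np.array([1.0, 0.0]))  # [0.5 0.5]
```

That "flip twice and get a deterministic answer" behavior is destructive interference, the resource quantum algorithms choreograph.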

I think the article basically makes this point implicitly: Thus far, quantum computing has been mis-explained, so to truly explain it properly requires disabusing the reader of certain notions first.

That being said, QM and QC are not particularly hard to explain, but without some underlying knowledge the explanation sounds as believable as something completely made up, and it is intrinsically seen as harder to understand than something that follows logically from familiar experiences.

> QM and QC is not particularly hard to explain, but the explanation sounds as believable as something else that could be completely made up

So lots of people will believe it and retweet it and talk knowingly about it on Reddit?

A good portion of the educated public's knowledge of how computers work ends at the idea that math and electricity are involved. There's just too little basis for explaining a research project of any sort.

The approach I've started taking is to ask people if they know about mechanical or water powered computers. That at least implants the idea that there's a variety of ways to make a computer work.

> But how can we know that there's no classical shortcut - a conventional algorithm that would have similar scaling behavior to the quantum algorithm's? Though typically ignored in popular accounts, this question is central to quantum algorithms research, where often the difficulty is not so much proving that a quantum computer can do something quickly, but convincingly arguing that a classical computer can't.

Why is this question so important? I can see how it's interesting, but if the quantum computer does the job faster than the known classical solution, why not use it? You wouldn't delay shipping your classical code until you could prove that nobody will ever find a better solution.

Shipping your quantum algorithm requires access to a limited number of very expensive quantum computers that, for all intents and purposes, are not actually available.

A classical algorithm can be run immediately on commodity hardware tomorrow. And, most of the time, novel quantum algorithms actually can be matched or beaten classically.

Given that, it makes sense to spend a small fraction of the cost of the needed quantum computer looking for the classical alternative. Preferably before you start investing in building the quantum computer.

See https://www.scottaaronson.com/blog/?p=3192 for an example of this back and forth.

> The goal in devising an algorithm for a quantum computer is to choreograph a pattern of constructive and destructive interference so that for each wrong answer the contributions to its amplitude cancel each other out, whereas for the right answer the contributions reinforce each other. If, and only if, you can arrange that, you'll see the right answer with a large probability when you look.

Knowing little about QC, and based on reading that, it seems a qubit would be better called a "qgate". It's not the store of data so much as the physical component that manipulates it, based on inputs. Is that even approximately on target?

To the limits of my understanding of QC, it is not approximately on target.

A q-bit really is a store of data that is equivalent to a bit. For example it can be a single trapped particle whose spin is in some superposition of up and down. What makes it different than a regular bit is that you can have 2 q-bits that are correlated, for example they are both in a superposition of up and down, but if one is up then the other is down and vice versa. Which means that an n q-bit system winds up being in some superposition of possible states, each of which is a sequence of n bits.
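The correlated pair described above can be written down directly (a hedged numpy sketch; the basis ordering and names are my own):

```python
import numpy as np

# The two-qubit state (|01> + |10>)/sqrt(2): a superposition of up/down
# and down/up, with basis states ordered |00>, |01>, |10>, |11>.
state = np.zeros(4, dtype=complex)
state[1] = state[2] = 1 / np.sqrt(2)

probs = np.abs(state) ** 2
print(probs)  # ~[0.  0.5 0.5 0. ]

# Sampling per the Born rule only ever yields 01 or 10: if one qubit
# comes out up, the other is always down.
rng = np.random.default_rng(0)
for _ in range(100):
    assert rng.choice(4, p=probs) in (1, 2)
```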

Well, there are "qgates" in the form of the transformations we can apply to qubits. And I would say that the operands of those transformations should probably be called qubits, because that's where information is stored in the quantum computer for doing operations. It may not resemble classical data too closely, but it's still an information store.

Something that a lot of the comments do not address, and neither does the article, is that QC relies on concepts that we don't really "understand" even as experts. The article says that superposition and parallelism have indeed been poorly explained, but to me that is insufficient to get "it". What about entanglement? What do we mean by measurement of the outcome? I also think the difficulty of explaining it in layman's terms lies in coding theory as well: what is an algorithm? Why can we encode things in quantum bits, and why is that different from the usual binary?

The problem with explaining QC, to me, lies in the quantum mechanics: it is a theory that is stochastic by nature but governed by deterministic principles. In a way we can't really explain QM properly or satisfactorily even to ourselves, so how can we explain any of the quantum technologies naturally?

Quantum bullshit hype doesn't help either.

Superposition is easy to understand: when you are looking at a rotating wheel, it looks like it is in a superposition of Up and Down, because it is rotating so fast. In the quantum world, the situation changes so fast and is so fragile that you cannot predict the state at the next moment in time, so it's much easier to use superposition in calculations.

For example, when somebody needs to predict the trajectory of a car, he will not try to predict the position of the car's tires at every moment of the trajectory, because it's too complex. He will just assume that the tires are "rotating" (i.e. in a superposition of all tire states), unless the tire state is important for the calculation, e.g. when braking.


Sorry for my English.

Hey! No worries, your English is totally fine. I disagree with your vision, though, and I don't think the rotating-wheel allegory captures the evolution. The complexity to me comes from the fact that in QM you describe evolution through the Schrödinger equation (or, for mixed systems where you indeed lack knowledge of the possible outcomes, the Lindblad or von Neumann equations); those are deterministic equations describing random processes, and mostly the source of many unresolved problems in the interpretation of QM.

You can take two approaches: either the Schrödinger picture, where states are moving and operators (hence, measurable quantities) are static, or the Heisenberg picture, where operators are moving but states are described as static. Defining trajectories themselves in QM is a daunting task (but it can be done, through quantum trajectories, which can be interpreted as the ensemble of realizations of all possible outcomes of the system).

When you are dealing with superposition, in pure states, you might in actuality be in a stationary case and can only rely on measurements, which by themselves pose ontological problems (see the measurement paradox for more info). The determining factor in the selection of the next outcome is philosophically unresolved, and at this time I don't know if it can be explained through any kind of quantum fluctuations.

At the moment, for example, I am working on arbitrary qubit selection with Ramsey interference: by interfering one photon with another, the experimental setup can produce any superposition of two frequency states (thus manipulating the probability of being in one frequency or the other, over an average of realizations of the experiment), and hence an arbitrary qubit.

In our case, we could freeze time and the outcome would be the same; in fact, time plays absolutely no role in any part of our description. I know some top physicists who lose sleep over this question of being able to describe the past, present, and future of quantum states in a dynamical context all at once, without even considering time.

To put it simply, there are two sources of probabilities in QM: those intrinsic to the random nature of QM (pure states) and those due to our lack of knowledge (mixed states). When dealing with a highly controlled system we can assume we are in pure-state conditions: selection is automatic, and not due to any lack of description, at least to our current understanding.

Unironically one of the best explanations for quantum computing I've seen is from a webcomic (no coincidence that Scott Aaronson played a role in writing this): https://www.smbc-comics.com/comic/the-talk-3

Yeah, it's very good. I will bookmark it, thanks.

I've been trying to wrap my head around this for about a year now. The two things that tripped me up the most were that there is not 1 calculation, and that it's not parallel.

The way I understand it now is based on the double slit experiment. If you want to know the outcome of the "double slit operator", you need to run it many times. The outcome of the calculation is the combination of all the runs. The way to visualize it is to think of the waves passing through.

QC itself is nothing more than chaining different operators, that influence how waves interact.

QC is a very fast random generator attached to a physical quantum network, which performs the calculation. The network is designed in such a way that an end state of the network is a solution to the problem. When you sample the network, you can sample it at the right moment and see the solution.

For small problems, it works flawlessly: e.g. when you need to find an odd number, the network will be in the solution state 99.99999999999999% of the time.

However, large networks lose their quantum properties very fast. Large objects have none of the properties of quantum objects; this is the cornerstone of QM. The network can lose the solution faster than it can find it, so when you sample, most samples contain no solution to the problem, and you need to be lucky.

Currently, QC math doesn't account for this, so we have a "measurement problem": physical reality doesn't match the math abstraction, but mathematicians don't want to adjust their formulas.

If you want to develop intuition, just look at reproduction of double slit experiment in macro[0]. It's just self-interference.

BTW: if you have resources, can I ask you to help reproduce the Stern–Gerlach experiment in macro? We need fresh blood (and money for experiments).

[0]: https://www.youtube.com/watch?v=nsaUX48t0w8

You can either explain it, or understand it, but not both at the same time.

You can find Aaronson's 35 secs explanation of QCs here: https://www.macleans.ca/society/science/trudeau-versus-the-e...

Definitely not something a non-technical person would understand in 35 seconds!

https://youtu.be/ZoT82NDpcvQ This video starts a little rough but it's my go-to explanation video.

Probably the same reason it's hard to teach and understand probability and statistics.

It is so hard to explain because nobody understands normal computing.
