Chinese Researchers Achieve Quantum Entanglement Record (scientificamerican.com)
305 points by 0xbxd 4 months ago | 140 comments



I visited USTC several years ago (2014), and in that time they have basically doubled the number of qubits they can entangle, so this is a great step forward. At around ~100 qubits quantum computing becomes very useful, so maybe we won't have to wait too long.


What's the size needed for cracking encryption based on large primes? If not at 100 qubits, what are some useful calculations that we can do at that size?


So the number of raw qubits is on the order of a million for large-prime factorization, i.e. to make Shor's algorithm a reality. This is because the physical qubits are highly unstable and prone to errors. Error correction methodologies combine many physical qubits into one fault-tolerant logical qubit. Depending on the error correction algorithm, the ratio of logical to physical/data qubits is 1:1000 at the minimum. You need at least 4000 logical qubits for large-prime factorization, which at that ratio works out to roughly 4 million raw qubits.


This is probably a dumb question, but could you spread out the error correction in time? I know it's hard to create very large entangled systems, so would it be possible/easier to do the calculation with 4000 real qubits 1000 times and use the repetition for error correction? (Checking for correct factorization is easy, after all.)
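To make the "checking is easy" part concrete, here is a minimal Python sketch; the semiprime and the candidate outputs below are toy numbers of my own, not anything from a real quantum run:

    # Whatever a noisy quantum run spits out, verifying a candidate factor
    # is one cheap modulo operation, so you can keep rerunning until a
    # candidate checks out.
    def is_nontrivial_factor(n: int, p: int) -> bool:
        return 1 < p < n and n % p == 0

    n = 3233                      # toy semiprime, 61 * 53
    candidates = [10, 48, 61]     # pretend outputs of several noisy runs
    good = [p for p in candidates if is_nontrivial_factor(n, p)]
    print(good)                   # [61]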


That's been one strategy I've seen mentioned in regards to Shor's algorithm many times. Since there's the probability of an error, you still have to check it and rerun it a few times until you confirm the right answer.

That's still many times faster than you'd get conventionally, so it's a reasonable trade off.

But it doesn't address the possibility that you can't entangle enough qubits to begin with, or keep them stable long enough to actually perform the calculations, which is where the error correction stuff comes in. I don't know enough about the challenges there to comment on how many you'd need (though 1000:1 seems high to my naive intuition; I'd guess maybe 10-100 for the magnitude myself).


For more info on that 1000:1 ratio, see [1].

[1] https://en.wikipedia.org/wiki/Quantum_threshold_theorem


Calculation on a quantum state is already spread out in time. Each time you calculate something you basically take a snapshot of the state. Then you do it again. After many trials, patterns emerge - that's when you start reading off your answer, which is always probabilistic.


How fast do you think the number of qubits will grow over the coming years? Will it be exponential, like Moore's law?


There was an AMA not long ago with someone in the quantum computing industry, and IIRC he stated that it isn't certain whether we can even go that high. Guessing how fast that will happen is not reasonable at this point.


>>> qubits are highly unstable

Newbie question: are they naturally unstable (so we won't fix that), or is it just because we don't master them well enough right now (and we'll fix them "soon")?


It's the former. They can be made more stable at superconducting temperatures, but not completely. There will always be errors. From what I can understand you can't fully eliminate the errors. I dunno, maybe some new superconducting temperature might make them stable, but that's a wild guess.


A lot of things regarding chemistry and biology - finding catalysts for nitrate fertiliser production (which uses 5% of the world's power), perhaps some catalysts in cement production (my idea), invention of new medicines, either due to better cell simulations or (my idea) faster protein folding.


Is the bottleneck preventing better cell simulations computational or lack of data / insufficient understanding of biology to inform a predictive model?


My understanding as a physics undergrad is that we can write down the 'equations of motion' of molecular systems with Hamiltonians describing the energies and correlations of the electrons in a system, but they very quickly become intractably difficult to solve. You're trying to solve a very, very big eigenvalue problem.

However, with the right kind of quantum computer or quantum simulator, you could construct a system of qubits that is described by the exact same Hamiltonian. That way, the quantum state of your qubits and your original system would behave in exactly the same way. Then, you just let the qubits evolve in time and read out the system's state at the end. Do that a bunch of times and you'll see an average picture of what the original system you're modelling (protein or something) would do.

So to recap - we can get around the difficult compute bottleneck caused by the desire to perform high-fidelity physics simulations by creating a system that follows the same rules and which we can probe much more easily.
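To make the scaling problem concrete, here's a minimal classical sketch in Python/NumPy; the transverse-field Ising chain, the couplings and the evolution time are my own illustrative choices, not anything specific to the article:

    import numpy as np
    from scipy.linalg import expm

    # Brute-force time evolution of a small chain of spin-1/2 particles.
    # The state vector needs 2**n complex amplitudes, which is why this
    # classical approach falls over quickly as n grows.
    n = 8                                   # spins; 2**8 = 256 amplitudes
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    I2 = np.eye(2)

    def op_on(site_ops):
        """Tensor a dict {site: 2x2 operator} up to the full 2**n space."""
        out = np.array([[1.0]])
        for i in range(n):
            out = np.kron(out, site_ops.get(i, I2))
        return out

    # H = -J * sum_i sz_i sz_{i+1}  -  h * sum_i sx_i
    J, h = 1.0, 0.5
    H = sum(-J * op_on({i: sz, i + 1: sz}) for i in range(n - 1))
    H = H + sum(-h * op_on({i: sx}) for i in range(n))

    psi0 = np.zeros(2 ** n); psi0[0] = 1.0  # all spins up
    psi_t = expm(-1j * H) @ psi0            # evolve for time t = 1
    print(np.vdot(psi_t, op_on({0: sz}) @ psi_t).real)  # <sz> on spin 0

A quantum simulator sidesteps this by physically realizing the same Hamiltonian instead of storing those 2**n amplitudes.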


I'm a cofounder of a start-up working on near-term applications of quantum computing in biology, specifically on the protein structure side of things (https://www.proteinqure.com). There are many self-contained subproblems in this space which are not limited by data because models are accurate enough to inform experiments, but there's probably not a scientist in the world who would say we have a sufficient understanding of biology to make predictive models with generality.


That's interesting, what are some examples of those subproblems?


Not an expert, but my understanding is that while better knowledge/models of the biology might be useful, ultimately it's just a massive computational project to test out and optimize protein folding and such complex chemical behavior. The physics/chemistry and biology are understood well enough; there are just nearly countless paths to explore, making it a huge problem.


Nature manages to fold proteins quickly enough. What's nature doing that we can't?


Parallelization on a single particle level at every point in space.


Nature manages to cause weather patterns easily enough, why can't we?

Your question doesn't make sense. Many (most) things in nature are incredibly hard to model correctly. Just because they happen doesn't make it easy to quantify usefully.


They know the recipe; we are still searching for one.


I wouldn't expect a clear bottleneck, because increased computational capacity leads to increased understanding.


2048 qubits would break current crypto; >20 would help with solving most otherwise-unsolvable NP-hard problems.


2048 LOGICAL qubits would break current crypto. But qubits decohere extremely fast, so you need error-correcting schemes. Current estimates require 1000-10000 physical qubits per logical, error-corrected qubit.

[1] https://en.wikipedia.org/wiki/Quantum_threshold_theorem


No, no one cares about those HW problems. To break RSA keys you need 0 error correction. You only need >2000 entangled qbits, and they either produce a correct result or not.

For other calculations you might need error correction, but this leads to nowhere with current technology. What the Chinese did was groundbreaking.


That's great. Now the next thing that needs to be solved is making this tech available for general public use, just like computers are these days. The problems waiting to be solved by it are very large and of huge impact.


We all have these things in our pockets that are massive supercomputers by the standards of my college days. And we all use them just to rant at each other about political things that we in reality are totally misinformed about ... and otherwise just waste our lives. I am not sure why quantum computing is supposed to change this.

Even software that is supposed to be useful is so terribly slow. Computers are between 2 and 4 orders of magnitude faster than programmers today experientially believe they are, because today’s culture of programming has rotted so thoroughly. Do you really need a quantum computer when 3 orders of magnitude are just sitting there on the table waiting to be picked up?


We would not have Amazon, Netflix, Youtube, Facebook, Google Docs, Deep Learning, name any other modern software technology or product, if we were still writing and optimizing everything by hand in assembly or C. Those "2 to 4 orders of magnitude" that are "sitting on the table" are actually being used to make it possible for developers to work at a much higher level and be much more productive.

I totally agree with your first paragraph, though.


I disagree. The slowness of these languages is mostly uncorrelated with increases in productivity. People only think there’s cause-and-effect here because they haven’t seen counterexamples, because the trend in language design for 25 years has been to make slow languages.


They haven't seen counterexamples because there are virtually none. If what you said were correct, wouldn't there be many examples of big bodies of code written in a language like C? (And you can't say "linux kernel" - low-level code needs to be written in a low-level language in order to communicate with the lower levels like hardware and controllers and such. There is a practical reason to write that in C.)

> because the trend in language design for 25 years has been to make slow languages

No one sets out to make a language to make them slow. The trend is to make higher level languages. Do you really think that there is no reason for it besides novelty and coolness factor?


I think the point that the comment you're replying to is trying to make is not that big complicated software engineering can be done in say, C with similar levels of productivity to say, Java.

He's instead saying that it's very much possible to build a language with a similar level of abstraction/ergonomics to say, Java, or Python, or C#, or whatever but with similar performance characteristics to a lower level language like C. And we are starting to see this - there are languages like Rust or D which are (at least to my eyes) much less arduous and foot-gun prone than languages like C or C++ while having similar (or better) performance.

Of course there's also Jai, but I think we should remain unbiased here :P

As an aside though - I think some of those orders of magnitude of performance gains could be had by just writing better code in your existing high level languages. (At least in my experience with enterprise software dev).


The trend is to make solving problems easier for the untrained, giving the heavy lifting to the machines and making the machines slower at the actual task at hand. Nobody is actually targeting the experts, who would love to have more power over safety.


This won't be popular here, but maybe the economics work against the experts. If most problems out there can be solved with mediocre interchangeable cogs much faster than with the available pool of experts, the cogs will win long term.

So far, reality seems to confirm my intuition.


I do not disagree with you here. In the economic (real) world, "better" is the enemy of "good enough". In the hacker world, the craft is appreciated more.


> you can't say "linux

Can I say "pretty much all of Linux userspace"? Or Java VM? Or Gnome?


We would absolutely have all those things if we were still writing in C; the biggest difference between the C of 30 years ago and modern Java is _libraries and IDEs_, not higher-level programming. The things you listed are pretty much all Java, and Java doesn't really provide any "higher level" thinking other than forgetting about memory management.

A lot of the optimizations left on the table have nothing to do with manual memory management, and everything to do with "eh, let's just query the database again, that'll shave a day off the schedule".


Why exactly do you think we have all these things for modern languages and not C despite its incredible head start?

Answer that and I get the feeling you’ll understand why your current thinking is so misguided.


The introduction of automobiles permitted the development of F1 racecar drivers, but just ended up with most people dragging themselves around to get food and money. The introduction of the printing press permitted the publication of pamphlets and books that overturned world governments and dethroned kings... but most people used it to trade porn and recipe books.

Technology doesn't make people better. It lets people do better things if they have the desire to do so. It also must enable them to do worse things if they have the desire to do so. You can not have one without the other. And on the balance things have historically gotten better so there's not too much reason to be worried about the fact most people just use global communication to bicker. They won't be remembered. Those who are, however, wouldn't have been possible otherwise.

As for where quantum computing factors into this, I don't have the slightest clue.


"The introduction of the printing press permitted the publication of pamphlets and books that overturned world governments and dethroned kings... but most people used it to trade porn and recipe books."

As Timothy Snyder notes when discussing the internet, the introduction of the printing press divided Western Christianity thereby causing a century and a half of religious wars in which a third of the population was killed. And later it gave us the Enlightenment and educated society.


Excellent context to frame this discussion.

"They won't be remembered."

-otakucode


Quantum computers won't change anything about people posting misinformed political opinions.

Three orders of magnitude speedup is a constant factor. Quantum computers provide between sqrt(n) and log(n) speedup depending on the problem - for example, biochemistry simulation for improved drug designs and such. That seems worthwhile.


Quantum computers are not faster computers that we can use to write lazier code. If they were you might have a point.


Honestly, I'd love to see examples of rotted code that could be made exponentially faster. I'm likely guilty of this.

And I agree that it's disheartening to watch how horribly we use these incredible tools. But thankfully as programmers we're in a position to design these tools to offset the worst parts of humanity.

And one field in which more powerful computers will be needed soon is deep learning. It appears that progress is beginning to stall, as larger networks are necessary. Better tools for distributed computing will make up the difference in the short term, but the current infrastructure appears to be insufficient for general intelligence.


>And one field in which more powerful computers will be needed soon is deep learning. It appears that progress is beginning to stall, as larger networks are necessary. Better tools for distributed computing will make up the difference in the short term, but the current infrastructure appears to be insufficient for general intelligence.

"In 50 years, every street in London will be buried under nine feet of manure."


Moore's law says that programming is allowed to get half as efficient every year!


> And we all use them just to rant at each other about political things that we in reality are totally misinformed about ... and otherwise just waste our lives.

So if I decide to use the device that I have bought and invested in, to "rant" about something (to connect to other people through it, essentially), or even just connect to my loved ones sending "meaningless" information (as many I'm sure would call it), or if I choose to use it for anything else, who is to say that it's "wasting my life"? You? Why do you decide what is wasteful and what is not? Maybe all those people want to do those things?

You can't say that this technology is wasted or is not used properly without also implicitly assuming moral and philosophical authority over what people should and shouldn't choose to do with their time and other resources, including the money used to buy such devices and the resources spent on developing them. Why do you assume that you can be such an authority?

Of course, looking at this in another way, you definitely have the ultimate authority in this area in this one regard: where it applies to your own life. Which I guess is another way of saying that what you wrote says more about your outlook on life than on the underlying technology and its social ramifications.


I think many people would find your position of absolute personal freedom with no objective measure of excellence, along with your aggressive defense of it, intellectually boring. But there is also objective information which would support OP's position for someone who considers objectivity in the personal sphere of little value: heavy computer and mobile use has been linked many times with psychological difficulties, including anxiety, depression, stress, etc. Please don't pontificate on others' allusion to human well-being by presenting your own moral code to be imposed.


> And we all use them just to rant at each other about political things that we in reality are totally misinformed about ... and otherwise just waste our lives.

You're certainly free to find the relevant response to that intellectually boring. But it's quite a leap from OP's totalizing statement of despair to implicitly fill in that ellipsis with "allusion to human well being."

I'd like to be generous in my reading, but I don't see any room in there for a discussion of "heavy" vs healthy device use.


Many deep profound things are intellectually boring. For many people that's a big obstacle to overcome in order to learn them. I can't see how "being boring" is any substantive argument for or against any question discussed here.

The original author used his "objective measure of excellence" as an argument for not developing quantum computing technologies to be used in personal devices. In this specific instance, in this practical regard (even though it's probably 50 years too early for this question) - I argue that yes, the position of personal freedom (to use quantum computing in cell phones, exaggeratedly) does indeed trump the other position which is to actively exclude quantum computing from phones for the vague fear that people might waste their lives on it, by failing to fit into some objective measure of excellence.


What a long-winded way to say: "That's just your opinion"


> Problems that are waiting to be solved by it are very large and of huge impact

I know virtually nothing about quantum computing, so can you give some examples? Whenever I've asked anyone who seemed to know anything about the field, all they can come up with is weather forecasting and simulating nuclear explosions. Not exactly "general use," as you put it.

In the back of my mind, I know that quantum computing is a big deal, and we're in ENIAC days with it. But I don't have a good understanding of where it goes or why.


Quantum computers are extremely useful for simulating chemistry. They can do it exponentially faster than classical computers. Cheap useful quantum computers can revolutionize biology and materials science.


There are already lots of 'practical' applications in the literature. A famous one is Shor's factorisation algorithm. But there are others - everything from doing pagerank to simulating other quantum systems.

Many classical algorithms, which run in ~O(poly(n)), can have an 'equivalent' quantum algorithm in ~O(log(n)) - an exponential speedup. There is still debate as to how the complexity class of problems which are efficient on a quantum computer (BQP) relates to other complexity classes. It's suspected P lies entirely within BQP.

However, at least initially, I think quantum computers will be a specialized piece of equipment. Classical computing is pretty good for your average person. Quantum computers will be used for more heavy compute tasks.


> Its suspected P lies entirely within BQP.

It's actually known that BQP contains P[1]. It also contains BPP. What's not known is the relationship between BQP and NP (most experts suspect there's no containment in either direction).

[1] See this for an easy proof: https://people.eecs.berkeley.edu/~vazirani/f04quantum/notes/...


Yes, sorry, you are right. I found this nice example of Grover's algorithm [0].

[0] http://davidbkemp.github.io/animated-qubits/grover.html
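If anyone wants to poke at the idea numerically rather than visually, here's a minimal dense-matrix sketch of Grover's search in Python/NumPy (3 qubits and the marked index 5 are arbitrary choices of mine):

    import numpy as np

    n = 3                       # qubits
    N = 2 ** n                  # size of the search space
    target = 5                  # the marked item

    state = np.full(N, 1 / np.sqrt(N))          # uniform superposition

    oracle = np.eye(N)                          # flips the sign of the marked item
    oracle[target, target] = -1

    diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)   # inversion about the mean

    iterations = int(round(np.pi / 4 * np.sqrt(N)))      # ~sqrt(N) iterations
    for _ in range(iterations):
        state = diffusion @ (oracle @ state)

    print(np.abs(state) ** 2)   # probability mass is now concentrated on index 5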



That's still only relative to an oracle, though (i.e., BQP^O vs NP^O). We also have oracle separations of P and NP, and that proves nothing about P vs NP without an oracle.


I'd also add that the need for localized personal computing services goes down as network bandwidth increases. If there's anything that people really need quantum computers for, it's not unreasonable to suggest that it will be handled on the backend with aggregated results sent to clients, as is the case for most computation today.


Is Shor's an application of entanglement though?


Yes, all non-trivial quantum computing is based on entanglement.


Protein folding and gene research. Cure malaria, cancer, feed the world etc.


Doesn't basically all encryption become meaningless with quantum computers? Remember how they always said 'it would take a billion years to crack this encryption with a computer'? Well soon we are going to have computers that don't play by the rules.


ENIAC?


One of the first computers.

https://en.wikipedia.org/wiki/ENIAC


Because of the expense it will have to start off as QCaaS. You send the really hard part of your problem to whichever person actually has the hardware and they'll return the answer. If you are concerned with them not knowing your data, there are workarounds to that as well. So available to general public in a weaker sense.


Congrats! The US House science committee is also set to pass a national initiative in quantum tech with on the order of $1B in funding:

National Quantum Initiative - Action Plan

https://www.lightourfuture.org/getattachment/85484dca-465a-4...


Crash course in entanglement:

The state of each qbit is represented by a state vector of two complex numbers [a, b] where |a|^2 + |b|^2 = 1. There are two special qbit values called the classical basis: [1, 0] which is the classical bit 0, and [0, 1] which is the classical bit 1. If a qbit is not in one of the two classical states, we say it is in superposition. When a qbit is in superposition, we can measure it[0] and it will collapse probabilistically to 0 or 1; for a qbit [a, b], the probability that it collapses to 0 is |a|^2 and the probability that it collapses to 1 is |b|^2.

Things get more interesting when we have multiple qbits. If we have two qbits [a, b] and [c, d], we define their product state as their tensor product [ac, ad, bc, bd]. For example, if we have two qbits both in state [1/sqrt(2), 1/sqrt(2)], their product state would be [1/2, 1/2, 1/2, 1/2]. We use the product state to calculate the action of a quantum logic gate that operates on multiple qbits - for a gate which operates on two qbits, we can always represent its action as a 4x4 matrix.

Usually we can move back and forth between the product state representation and writing out the individual qbits states. However, in certain scenarios something very special happens: we cannot factor the product state back into the individual state representation! Consider the product state [1/sqrt(2), 0, 0, 1/sqrt(2)]. If you try to write this as a tensor product of two states [a, b] and [c, d], you cannot! It cannot be factored; the qbits have no individual value, and we say they are entangled.
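A quick NumPy sketch of the product-state and entanglement claims above (the equal-superposition qbit is just an example of mine); the factorization check uses the fact that [ac, ad, bc, bd] always satisfies (ac)(bd) = (ad)(bc):

    import numpy as np

    q = np.array([1, 1]) / np.sqrt(2)           # a qbit in equal superposition
    print(np.kron(q, q))                        # product state [1/2, 1/2, 1/2, 1/2]

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # the entangled state from above

    def is_product_state(s, tol=1e-12):
        # [w, x, y, z] factors as [a, b] (x) [c, d] = [ac, ad, bc, bd]
        # exactly when w*z == x*y (the 2x2 matrix [[w, x], [y, z]] has rank 1).
        w, x, y, z = s
        return abs(w * z - x * y) < tol

    print(is_product_state(np.kron(q, q)))      # True  - factors back into two qbits
    print(is_product_state(bell))               # False - entangled, no factorization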

Well, what does this mean? It means when you measure one qbit, even if the qbits are very far apart, you instantly know the value of the other qbit. So if I entangled two qbits in the state [1/sqrt(2), 0, 0, 1/sqrt(2)], give you one, then we go to opposite ends of the universe, if I measure my qbit and see a 0 I'll know your qbit instantly also collapsed to 0 (or collapsed to 1 if I measured 1). This phenomenon has been experimentally-verified to occur faster than light. It is instantaneous, as far as we can tell. So, local realism is wrong! Spooky action at a distance is real!

There is an important caveat: while the qbits seem to coordinate in some faster-than-light way, you cannot use this to communicate in a faster-than-light way. All we have is a shared random number generator. I can't send some chosen bit from my reference frame to yours. This is called the no-communication theorem.

If you found this interesting, I have a full video on quantum computing for computer scientists here: https://youtu.be/F_Riqjdh2oM

[0] IDGAF about your chosen quantum mechanics interpretation, don't @ me


The part I don't understand is the "spooky action at a distance." Isn't this just the same as if the qbits were already in whatever their final state was as soon as they were entangled? Or in other words, is there any experimental basis for determining that the "cat" stayed alive at all? It seems a lot less magical when we just determine that in fact the value is the same no matter when or where you measure it, although it may not be predictable. And since the "entangled" qbit always had the same value, no matter when you measured it, then there is no "spooky action at a distance". The action occurred when d=0, and what you actually have is two copies of the same ROM.


The idea that the qbits choose which value they'll collapse to at time of entanglement is called local hidden variable theory, which John Bell disproved in 1964: https://en.wikipedia.org/wiki/Local_hidden_variable_theory

In more practical terms, the theory also falls apart when you start doing more complicated things with entangled qbits than just measuring them, like quantum teleportation or error correction.


Something else worth noting: an entangled state is itself a superposition of two non-entangled states. This is obvious when you look at the math: the entangled state |10> + |01> is the superposition of the factorizable (non-entangled) states |10> and |01>. But physically what this means is that entanglement is intimately related to interference. Interference happens when a photon "goes both ways". Entanglement happens when a pair of photons go in such a way that you can't tell which photon went which way. (There's more to it than that, but the point is that entanglement and interference are closely related phenomena. This fact is often overlooked, particularly when it comes to explanations of entanglement directed at lay audiences.)

Also, the idea that collapse is a physical phenomenon that propagates faster than light is not universally accepted. There are other ways to interpret these results. None of them take a toll on your intuition, but personally I like this one:

http://www.flownet.com/ron/QM.pdf

(There's a video too: https://www.youtube.com/watch?v=dEaecUuEqfc)


It’s also worth noting that as of 2015[0], this has been experimentally verified.

We’ve made real-world measurements here that correlate with measurements made at the exact same time over there in a way that mathematically cannot be accounted for without some species of intuition-violating spookiness. It’s all very real.

0: https://en.wikipedia.org/wiki/Bell_test_experiments#Hensen_e...


So, it's kind of like closures in JavaScript?

"A Bell test experiment or Bell's inequality experiment, also simply a Bell test, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to two other concepts: the principle of locality and Einstein's concept of "local realism". The experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables (called "hidden" because they are not a feature of quantum theory) to explain the behavior of particles like photons and electrons. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist."

Spooky private method scope is spooky.


Can someone tell me if I understand this correctly. So Bell famously proved that it is not the case that the correlation we see between 2 entangled particles A and B is because of some common cause C. Therefore we concluded that it must be that A => B or B => A, and we called it spooky action at a distance. But aren't we forgetting that there's one more way to get a correlation between A and B without resorting to spooky action at a distance - conditioning on a common effect, aka a collider: http://www.the100.ci/2017/03/14/that-one-weird-third-variabl... Has anyone proven that this is not the case?


The Bell inequality proves (or is used to show) that the shared/correlated state of "entangled" particles cannot be fixed before the act of measurement. It has nothing to say about "A => B" or the like. If there exists a third "C", it would still have to set the state of A and B at the moment of measurement, regardless of distance.


What you're proposing is essentially what Bell's inequalities contradict. That's why they're surprising.

I think the many-worlds picture is a clearer way to think about this than "spooky action at a distance".


Yes, this is more or less what Bell's theorem disproves, although it's kind of a poor metaphor, because entanglement deals with indeterminate states, not just unknown ones.


Big deal. There's nothing shocking or spooky about two billiard balls being made to spin in arbitrarily opposing directions, selecting only one (omg! without discovering which way it spins, you guys), then separating them, and then noticing the spin of one, in order to reliably grasp that the other is the reversal.

I have two guitars. I place the guitars facing one another, such that plucking the HIGH E string on one also plucks the LOW E string on the other. We put ear plugs in our ears, such that I can separate the two guitars without us ever hearing them. I pluck the guitars, give you one, and take the other one and travel far away. I then listen to my guitar. It is the HIGH E guitar. Now I know you have the LOW E guitar. Wow. Incredibly unspooky. Not teleportation.



These links are intended to suggest something contrary to what I've said, but they do not suggest any contradiction.


I'm not an expert, so this is from my outside viewpoint:

I think by taking the guitar far away you are implying the possibility of faster-than-light communication, which is thought to be impossible, so his links address that? Hard for me to understand, so I'm just saying my thoughts out loud to get clarified, thanks.


The quantum state of interest is induced when plucking the conjoined "guitars". (analog for particles)

That state is induced at the moment the guitars share "locality" because entanglement requires locality for initialization of polarization.

So then, we say we are as yet unaware of the qualities of the polarization we, ourselves, induced. Very mysterious.

So spooky, yes? We do not measure, because we choose not to, so we do not yet know.

Even if we prevent ourselves from having the capacity to measure, the results hold true, but so what? And so what if we ask others to do the same. Imagine that we ask two waiters to tape two coins together in the kitchen, flip the linked coins, peel the coins apart while preserving the outcome of the coin flip, then take one coin to your table, and one to mine. Now I know which side of the coin you are looking at, without walking over to your table. So what. Nothing about this claims to transmit information superluminally.

In reality, with instrumentation, carrier signals relay an electromagnetic transmission in such a way that one cannot peek or tamper (the waiters can't change the coin flip, we cannot hear the ringing guitar), but this does not invalidate the premise of the analog. For the purposes of the analogous guitar example, we say that our couriers (electromagnetism itself) are prevented from touching or listening to the ringing guitars, or disclosing what they might sense.

With the guitars, we say the guitars move away from the place where they were entangled. We'll say that our instrumentation rang the guitars at the Grand Canyon. Our couriers then transported the guitars to you, at the top of the Empire State Building in New York, and me on the Golden Gate Bridge in San Francisco. I receive the guitar, and discover that the LOW E string is ringing; it can only mean that your guitar's HIGH E string is ringing in New York.

There are no local hidden variables in this example. The premise of polarity as a corollary for guitar strings is modeled in the exact same manner. Six strings on a guitar maps to the same essential parameters of each of two directions for all three axes of spin.


You are missing the point of Bell test experiments. Such experiments demonstrate that which guitar is the high E one and which is the low E one is not decided when they are still together. It is not that you and everyone else just don't know which way it is until someone listens to one of them, it is actually not yet decided until someone listens to one of them.


It's just an expression of the conservation of energy. Much in the same way the double slit experiment conflates a particle transporting itself through two windows at once, so too, do these experiments conflate the polarizers as causing the effect.

Ask yourself: if you construct a gun, with two diametrically opposed barrels, with exactly opposed rifling twists, and you aim the gun at two opposing (but identical) abrasive knurled metal rasp targets, such that if the bullet spins one way, the ricochet off rasp target will send it to a blue target, but if the bullet spins the other way, the grain of the rasp target is such that the bullet is sent to an orange target, will you be surprised to find that the behavior of the projectiles remains consistent?

Fire those bullets out of that gun, and as the bullets leave the opposing twists of the barrel, and the spin of the bullets encounters the friction of the knurled surface, they will consistently be sent in whichever direction the spin of the barrel's rifling puts them. When one barrel's twist spins the bullet onto the blue target, the other barrel's twist always puts the corresponding bullet onto the orange target, by bouncing it off the polarizer rasp.

So, now, to shrink downward to the realm of particle physics, what we find is that the ballistic particle guns are such that the emitter source is an array of many guns with varying rifling twists, but like pulling a lever on a slot machine, we cannot know which of the guns embedded in the radiation source will fire next.

We won't know the turn of the rifling of the gun's barrel prior to whichever one happens to go off. We stick out our rasp target to have it send the bullet to a colored target, and we declare that the polarizer rasp directed the bullet particle, but not really. The emitting source's gun barrel imparted the spin. The polarizers induced behavior on particles that would have behaved as reciprocals anyway.


No, no, no. There is almost half a century of experimental evidence against you. The experimental results of Bell test experiments are incompatible with the assumption that the states of both particles are fixed when the pair is generated and are only classically correlated because of the way their states are fixed. All your analogy attempts are flawed and bound to fail exactly because entangled pairs of quantum particles do not behave like pairs of classical particles. You are making up classical experiment and assume that quantum particles will behave in the same way, but they don't. And that's the entire point.

EDIT: I just came across an illustration which might be helpful. I will place three coins on a table and cover them so that you can not see whether they are heads or tails. You get to pick two of them and I will reveal them for you but you can never look at the third one. Your task is to figure out by which rule I am placing the coins on the table.

In the first round you pick coins one and two, I reveal them to be heads and tails. In the second round you pick coins one and two again, now they are tails and heads. You continue picking coins one and two for a few thousand rounds and always see heads and tails or tails and heads, they are never the same.

Then you switch to picking coins two and three for a few thousand rounds and again they are always heads and tails or tails and heads, they are also never the same. Now you have figured out what I am doing, I am randomly choosing between heads, tails, heads and tails, heads, tails for coins one, two, and three.

So in the next round you pick coins one and three and I reveal them to you. Heads and tails. WtF?!? They should have been the same if I always choose between heads, tails, heads and tails, heads, tails. You try again. Heads and tails. Again. Tails and heads.

No matter what you try, you never get to see two coins with the same side up. That's ridiculous, you think. There are only two sides to a coin but three coins on the table. At least two of the coins have to have the same side up in each round and if you select the two coins to reveal at random, then you should at least sometimes get to see two coins with the same side up no matter which rule I use to place them. But you don't.

Assuming that I choose heads and tails for each of the coins when I placed them on the table and before you make your choice is incompatible with your observation that you never see two coins with the same side up. But if you assume that I can magically turn the coins around at the moment you tell me which two coins to reveal, then you can explain your observation. It may however trouble you because your explanation now involves magic.

And that is roughly how entangled pairs in Bell test experiments behave. Or more formally, classically P(1=2) + P(2=3) + P(3=1) >= 1, at least two coins always have the same side up no matter what the underlying distribution is. Entangled pairs in Bell test experiments violate this inequality, the probability of two coins having the same side up is less than 1. Not 0 as I portrayed it but 0.75.
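For anyone who wants to see where a number like 0.75 comes from, here's a minimal NumPy sketch of the standard photon-polarization version of this argument; the three measurement settings at 0°, 120° and 240° are my own choice of the usual textbook angles:

    import numpy as np

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|HH> + |VV>) / sqrt(2)

    def projector(theta):
        """Projector onto linear polarization at angle theta for one photon."""
        v = np.array([np.cos(theta), np.sin(theta)])
        return np.outer(v, v)

    def p_same(ta, tb):
        """Probability both photons give the same outcome (both pass or both fail)."""
        Pa, Pb = projector(ta), projector(tb)
        both_pass = np.kron(Pa, Pb)
        both_fail = np.kron(np.eye(2) - Pa, np.eye(2) - Pb)
        return bell @ (both_pass + both_fail) @ bell

    settings = np.deg2rad([0, 120, 240])
    total = sum(p_same(settings[i], settings[j]) for i, j in [(0, 1), (1, 2), (2, 0)])
    print(total)   # ~0.75, below the classical bound of 1 described above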


Except you can't tell me what the spin is without polarizers. This is like telling me I have to measure the orientation of a fidget spinner with a fitness club's treadmill set to the brisk pace of an uphill jog.

None of the experiments avoid using polarizing lenses to make a determination of results on both sides. This is where the experiments are fundamentally flawed and offer weak evidence.

To simply read about the fundamentals of light polarization is to understand that quantum wave function collapse is much ado about nothing, and it becomes obvious that all this contention is total bullshit, and none of it is magic.

https://en.wikipedia.org/wiki/Polarizer

https://en.wikipedia.org/wiki/Polarizing_filter_(photography...

https://en.wikipedia.org/wiki/Photon_polarization


If you read through the list of Bell test experiments [1], you will discover that not all of them are done with photons, for example »Violation of Bell’s inequality in Josephson phase qubits« [2].

[1] https://en.wikipedia.org/wiki/Bell_test_experiments#Notable_...

[2] https://web.physics.ucsb.edu/~martinisgroup/papers/Ansmann20...


https://en.wikipedia.org/wiki/Inconsistent_comparison

Josephson phase qubits aren't even utilizing the same fundamental concepts to examine the qualities of the mediums that quantum uncertainty affects.

https://en.wikipedia.org/wiki/Phase_qubit

https://en.wikipedia.org/wiki/Josephson_effect

https://en.wikipedia.org/wiki/Josephson_voltage_standard


You complained - without providing any substantive arguments why this might be an issue - that all Bell test experiments use polarizers, I pointed out that you are wrong. I have no idea what you are complaining about now, what is »[...] qualities of the mediums that quantum uncertainty affects [...]« even supposed to mean? You are obviously far outside of your area of competence. If not, just do the experiment, write the paper, and collect your Nobel Prize, no need to argue with clueless people on the web.


A Josephson junction doesn't even trap any single actual physical particle. It's just a standing wave of electrical current trapped in a bounded array of geometrically crafted slabs of superconductors and insulators. It's a phenomenon that arises from the construction of the device.

Unlike fundamental particles such as photons and electrons, there is nothing substantial about the state represented by the standing wave qubit trapped in a circuit operated by a Josephson junction device. Destroy the device (or nevermind that, just never place it in a dewar flask chilled to 4 degrees kelvin, to activate it) and the phenomenon doesn't even exist. So much for whether or not matter or energy can never be created nor destroyed.

To sit there and state that, on paper, this is the same thing as an individual electron emitted as beta decay is, well... fundamentally flawed.


Just to be clear, these are experimental results that are statistically impossible assuming a classical physics. Are you actually worried that they are somehow studying the wrong self-evidently non-classical thing, or what?


Quote from the "Phase_qubit" Wikipedia article:

"The zero voltage state describes one of the two distinct dynamic behaviors displayed by the phase particle, and corresponds to when the particle is trapped in one of the local minima in the washboard potential. [...] With the phase particle trapped in a minimum, it has zero average velocity and therefore zero average voltage. [...] The voltage state is the other dynamic behavior displayed by a Josephson junction, and corresponds to the phase particle free-running down the slope of the potential, with a non-zero average velocity and therefore non-zero voltage."

So, we're not even talking about actual fundamental subatomic particles anymore. We're talking about phase oscillations, and renaming that as if it were a "particle" because, hey, particle/wave duality, so why not?

Hand-wavey math permits us to equivocate that a current induced on a wire, by way of the transfer of many actual electrons across substrates, can serve to prove the premise of a "teleportation device" also.

See? If we play our game of three-card monte, change phase oscillations, wiggle our noses, and tilt our heads a little, it's all very obvious that faster-than-light information transfer can be generalized to fit in the same picture, because this tuning fork makes that tuning fork ring in harmony, but only when we choose to notice.


I’m not sure I follow your argument, but if I understand you right, I don’t believe what you’re quoting is relevant. From the article you are (I think?) criticizing:

We measure a Bell signal S of 2.0732 ± 0.0003, exceeding the maximum value |S| = 2 for a classical system by 244 standard deviations. In the experiment, we deterministically generate the entangled state, and measure both qubits in a single-shot manner, closing the “detection loophole”[11]. Since the Bell inequality was designed to test for non-classical behavior without assuming the applicability of quantum mechanics to the system in question, this experiment provides further strong evidence that a macroscopic electrical circuit is really a quantum system [7]. https://web.physics.ucsb.edu/~martinisgroup/papers/Ansmann20...

That says in plain English that they have not assumed that this system behaves according to quantum principles. In fact, it is precisely the opposite: the quantum nature of this system is a conclusion of their results. It would be statistically impossible for any system following classical rules to produce the same data.

(It bears repeating that the math underlying that conclusion is truly not very complex, and it is very, very well studied. If you can show that it’s flawed somehow, don’t bother publishing— just post your proof here and I’ll, uh... pick up the Nobel for you.)

The only caveat is that this experiment closes the detection loophole, but not the locality loophole; it is theoretically possible that a classical signal could be sent from one qubit to the other quickly enough to fabricate this data. There’s no particular reason to suspect a secret signal is in play, but it isn’t theoretically prohibited.

Assuming you haven’t found a flaw in their mathematics, and that you aren’t alleging that the researchers deliberately fabricated their data, the locality loophole is your best (and likely only) avenue to dispute their conclusions. However, if you wish to pursue that, you should keep in mind that there are many other experiments which close the locality loophole but not the detection loophole, and, since 2015, several that close both. Three-card monte may be a better investment of your time.


Uh, wow, at no point have I made the claim that an electrical circuit is not a quantum system. Nor have I claimed that they are incapable of simulating quantum phenomena. Quite the very opposite.

What I did clearly state, and insist as quite relevant, is that entanglement and double slit experiments are hocus pocus and irrelevant distractions. In fact, I stated that this experiment says basically nothing because it merely simulates quantum phenomena within a circuit.

Hello? Yes. Electrons are quantum entities, and assuredly interact with photons which are also quantum entities. This is demonstrated by the photo-electric effect, which we can all notice by placing tin foil in a microwave. Therefore a circuit is indeed a quantum system, since it assuredly deals in electrons.

Wow! Didn't even need to publish a paper about qubits to draw that conclusion! Amazing!

The implication here is that Bell is a waste of time, and so is his theorem: such that emission doesn't determine state, especially when you don't look at it.

Great, thanks Bell. I'll be sure to not look at anything until I want to know what it is. True genius at work.


The experiment demonstrates quantum entanglement. I gather you don’t believe this. So how about this: I don’t believe you.

I don’t believe that you could, even theoretically, produce the data from a loophole-free Bell test without invoking superdeterminism, superluminality, or quantum entanglement.

Can you describe how this would be theoretically possible?


The experiment certainly demonstrates "something" in terms of how not to "measure" relativistic effects with macroscopic tools...

And yet, with relativistic particles, the wild claims are made that splitting photons through a substrate, and then passing them through the wall of a polarizing lens, means we can declare ourselves capable of rewriting and erasing history. Eh, not quite.

But hey, where there's smoke, there's fire, so something must be true, right? Let's just make up whatever.


1. Do you think all Bell test experiments ever done were flawed, i.e. none of the observed Bell inequality violations were real?

2. If so, do you think a non-flawed Bell test experiment could be done?

3. If so, do you have a definite opinion about whether or not it would yield a Bell inequality violation?

4. If so, would it yield a Bell inequality violation or not?

5. If you think all Bell test experiments ever done were flawed, can you pick one, preferably one commonly considered a good one, and point out what exactly you think the flaw in the experiment is?

6. If you think Bell inequality violations are or could be real, how do you want to explain them?

Note that those are all yes no questions, well, at least all but the last two. I don't need and want more than a yes or no for the first four because from your comments alone it is not clear, at least to me, what your position actually is.


It's not about getting correlated outcomes when the measurements are made, that would be unsurprising. It's that the correlations (or more precisely, the distributions) of the outcomes conditioned on seemingly random measurement choices suggest that the outcomes are not from independent measurements.

Now there are several ways you can interpret that: 1) the measurement choice at one site is superluminally conveyed to the other site and causes an effect; 2) the measurement choices are not independent and random despite the best attempts of the experimenters; 3) the wavefunction over the two sites is physically real and both measurement choices are necessary to sample it.


> There's nothing shocking or spooky about two billiard balls being made to spin in arbitrarily opposing directions, selecting only one, then separating them, and then noticing the spin of one, in order to reliably grasp that the other is the reversal.

Okay. So now, make two billiard balls so that they're spinning opposite directions, and separate them to opposite sides of the table. Then, go to ball A and do something to it that reverses its spin. Then observe the spins of the two balls. Are they the same?

For billiard balls, yes. If ball A is spinning clockwise and ball B is spinning counter-clockwise, and you reverse the spin of ball A, then you will observe both balls to be spinning counter-clockwise.

For quantum particles, no. If we manufacture two entangled electrons such that observing them will reveal them to have opposite spin, you can reverse the spin of one of them, and then observe them, and they will both still have opposite spin.


Unfortunately I don't think this is true. The moment you observe them, they are no longer entangled. So they have opposite spin at that point, but if you reverse one, it behaves classically and both electrons then have the same spin.

I'm still trying to wrap my head around why there isn't some hidden variable, though, which determines which spin they'll have. Like with Bayes' theorem, I keep learning it, but for whatever reason my brain won't remember it and I have to look it up again hahah.


Correct, this only works if you can reverse the particle's spin without observing it. This is a descriptive example - the real experiment is slightly more complicated - but it was experimentally verified by Alain Aspect in 1982. If you want a more detailed explanation I recommend http://scienceblogs.com/principles/2007/02/22/spooky-action-...


The explanation you're supposing for the "spooky action at a distance" is called a hidden variable theory, and as others have said, this explanation has been disproven. You might be wondering how, and I'll tell you the scenario (adapted from The Elegant Universe) that helped me understand.

Imagine you and I each have a device: a small box with one openable window on the top, one on the front, and one on the back. When one of these windows is opened, it shows either white or black. Further, I tell you that these boxes are entangled: if we both open the same window, the same color will be displayed on both boxes. This only works for the first window opened and only if we both open the same window: if a second window is opened or if we open two different windows, the colors shown will no longer be correlated in any way.

I actually have many of these pairs of boxes, each pair numbered uniquely to identify its matching twin, and we run the experiment for an arbitrarily large number of them until you're satisfied that my claim holds true. You see nothing special about these boxes and argue that they're simply preprogrammed from the factory to display a set of colors the first time a window is opened (for example, BWB or WWB). I counter that entanglement is special and the boxes are not preprogrammed from the factory. In fact, they decide at random which color to show right at the moment one of the boxes' windows is opened!

My claim seems crazy and at first glance possibly even unprovable! But it turns out with a bit of cleverness we can test this.

We take an arbitrarily large number of boxes and begin independently opening their windows at random (without consulting or coordinating with each other when deciding which window to open), recording box numbers, window choices, and the resulting colors. When finished, we compare notes. Sure enough, when we both open the same window on entangled boxes, the colors match as expected. Upon a closer look at the data, we discover something surprising. Ignoring which window we opened, each pair of entangled boxes showed the same color exactly 50% of the time. How is this surprising? A little bit of probabilistic analysis conclusively demonstrates that such a result is fundamentally incompatible with any hidden variable theory.

Assume for the sake of argument that the boxes are preprogrammed. The only possible programming for each box is some form of XXX or XXY. Consider: if both boxes are programmed WWW (or BBB) and we both open a window at random, it will show the same color 100% of the time. If both are programmed WWB (or BWB, or BWW, or any equivalent variant), 2/3 of the time there's a 2/3 chance of displaying the same color when choosing windows at random and 1/3rd of the time there's a 1/3 chance. Combined, whenever the colors are in an XXY configuration, there's a 5/9 chance of showing the same color when windows are chosen at random. Note that in both scenarios the odds of seeing the same color from randomly-chosen windows are greater than 50%! Assuming the preprogrammed colors are chosen at random, some form of XXX will be chosen 2/8 of the time and XXY will be chosen 6/8 of the time, we see that the total odds of seeing the same color if the boxes were preprogrammed should be 2/8 * 100% + 6/8 * 5/9 = 2/3. In theory, our data should have shown a 66% agreement in color, but our experiment consistently shows colors matching 50% of the time.

A hidden variable theory—any hidden variable theory—runs into this probabilistic hurdle.
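A quick Monte Carlo sketch of the preprogrammed-box scenario (assuming, as an illustration, that the factory picks each of the 8 possible color triples uniformly at random and we each open a window uniformly at random):

    import random

    trials = 1_000_000
    same = 0
    for _ in range(trials):
        program = [random.choice("WB") for _ in range(3)]   # shared by both boxes
        mine, yours = random.randrange(3), random.randrange(3)
        if program[mine] == program[yours]:
            same += 1

    print(same / trials)   # ~0.667, and no programming can push it below 5/9;
                           # the entangled boxes' 50% is simply out of reach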


What I draw from this is that polarized light doesn't quite work the way we think, and the investigation should be into the modes of polarity.


Are you seriously dismissing offhand theoretically sound and experimentally-verified core tenets of quantum mechanics?

It's not like they're making this stuff up. It falls very straightforwardly out of the mathematics underpinning quantum mechanics, we have tested these things, and we have conclusively disproven the existence of any type of hidden variable theory.

That's not to say quantum mechanics is faultless and complete, but any new theory is going to have to incorporate these results rather than throw them out entirely (much like how Newtonian physics "falls out" of general relativity at low speeds and small masses).


I think this video deserves much more attention from HN readers; it's a truly great introduction to quantum computing.


Since speed warps spacetime, does the relative speed of the qubits change this? Can they go out of sync if one is moved much faster than the other for a prolonged period of time? If not, could this property be used for a universal clock?


"Much faster than the other" relative to what?

There is no absolute motion or speed. There is only relative motion or speed.


Here's a pre-print of the USTC team's paper:

18-qubit entanglement with photon's three degrees of freedom

https://arxiv.org/abs/1801.04043


Is entanglement an actual physical phenomenon? What exactly needs to happen in order for a photon to "become" entangled?


Not sure what "actual physical phenomenon" means, but probably no. If you mean can you look at a photon and observe something about it that tells you it's entangled to some other photon, no. If you mean do we know what physical process underlies entanglement, also no. If you mean does entanglement really happen as opposed to being theoretical, yes. The link from the first paragraph of the article may help: https://www.scientificamerican.com/article/chinese-researche...


Physical? As in everyday Newtonian physical? That doesn't work at quantum scales, just like Bohr's model of the atom doesn't work during the big bang, because the gravity we were successfully ignoring is suddenly on the same scale as, or bigger than, the electrostatic force. We will never be able to intuitively (physically) grasp the quantum world, because our intuition was trained on trees, rocks and cats.


Even neater is that they are using quantum entanglement in radar to detect stealth planes: https://www.popsci.com/china-quantum-radar-detects-stealth-p...


From the article: they are planning to do this, and hopeful that it will work. Not actually using it yet.


Also, if they got it to work they would surely keep it secret.


Also, if they didn't get it to work they would surely keep it secret.


Yes, and if the military was involved at all, they wouldn’t even talk about the possibility.


Yea, you have to explain this. It made no sense when I first read it, and it still makes no sense.

Here is how radar works: You shine a flashlight somewhere. If there is an object within range, some of the light is reflected back at you and you know there is an object. This is literally what radar does.

So, what part of this, and how, does this "quantum" magic affect?


Think of the quantum state as another measurement to make on the incoming signals. The receiver already measures many characteristics of incoming waves, like frequency and amplitude, and now it has yet another measurement that can be helpful. It's helpful specifically because they kept a reference wave with the same quantum state. With more data you can get higher fidelity which means you can perhaps see further or see the same distance with less power.


It works like normal radar, except that they send one entangled photon towards the plane and keep the other.

When the reflection returns, they can use the entanglement to sort out signal from noise much better and radar is safe from jamming and spoofing.


Thanks, this helps a lot, but just to clarify:

Originally "We 'shine' a bunch of photons at an object, and see if they come back. Except there could be dragged decoy spamming photons back at us, and we don't know if there is. Or it could be sending out so much light, our photons coming back are lost in the mix."

With quantum: "We can verify that any photons that get reflected are the ones that we sent out and check only our photons when flooded with photons from enemy flashlights."

Right?


Right. Oversimplifying a lot, it's putting a name tag on the photons. Then you can sort out the tagged ones from all the others, drastically reducing the practical noise floor.


Except that photons don't have tags. Each time a story talks about entanglement, I'm stuck: what does it mean experimentally to verify that two particles are entangled? I don't know.


Interesting. Is generating entangled photons so trivial that this is feasible now?

It seems like it would take millions (more?) of entangled photons per second to achieve this. And then these can be sensed, verified, and measured at such speed that it could be used as radar?

It sounds far fetched, but I would love to know if this process is already that quick and easy.


I understand the core concept of quantum radar but does it actually work in practice as a radar? i.e. you can resolve targets with a small range and angular resolution.


Theoretically you would combine the new tech with the old tech. Quantum state would be another measurement to make on the incoming wave on top of measurements like frequency and amplitude.


> “It’s as though you took six bits in your computer, but each bit tripled in how much information it could hold,” Schreppler said

"Each bit is tripled in how much information it could hold"? Normally bits have two states, but Shreppler is claiming these qubits can have six states? What am I missing?


It's a confusing quote. They have achieved 18 qubits from 6 photons. 18 qubits can store more information than 18 regular bits; however, when the 6-photon system is measured, the quantum state collapses to 18 bits of data.

How do you get 18 qubits from 6 photons? You use multiple degrees of freedom. Whereas a bit in traditional memory has a single degree of freedom (electric charge), photons have many degrees of freedom, such as polarization, direction of travel and angular momentum.
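Rough numbers, just to spell out the counting above:

    photons, dofs_per_photon = 6, 3        # polarization, path, angular momentum
    qubits = photons * dofs_per_photon     # 18
    amplitudes = 2 ** qubits               # 262144 complex amplitudes describe the joint state
    bits_after_measurement = qubits        # but a measurement yields only 18 classical bits
    print(qubits, amplitudes, bits_after_measurement)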


Even in traditional memory, you can use a single degree of freedom to store multiple bits, by quantizing the charge. Like in TLC flash memory, where you can store 3 bits per cell.

This is because in traditional memory, a single "cell" stores many elementary charges (electrons).


There are 18 qubits stored in 6 photons, where each photon effectively holds 3 qubits from its 3 different degrees of freedom.

Reading the arxiv document prepared by Microsoft Word is so painful. Can't Microsoft learn something from TeX?


I thought qubits had an infinite range of states?


Oh yeah, that's right. Isn't it based on orientation in a spherical sense? I guess you could have an infinite number of those orientations.


A pure quantum state is a basis vector, written like |n>. It's called a basis vector because it shares a list of mathematical properties with the x̂,ŷ,ẑ coordinate basis vectors you might be familiar with.

They can be combined (like coordinate vectors) by writing coefficients before them. For example, 1/√2 |a> + 1/√2 |b> would be a combination of state a and state b. Combinations of states obey the condition that the sum of the squares of the magnitudes of the coefficients equals one. This is the unitary constraint, and above you can see that (1/√2)^2 + (1/√2)^2 = 1/2 + 1/2 = 1.

Coefficients can be complex numbers, which consist of a real part and an imaginary part.

So, putting all of this together, a qbit that may be in the state |1> or the state |0> may also be in the state (a+bi)|1> + (c+di)|0>. With the unitary constraint, we can cut out a degree of freedom by introducing the equation |a+bi|^2 + |c+di|^2 = 1. This leaves three degrees of freedom, which can be mapped in to a sphere by a change of coordinates. Three spherical coordinates plus the unitary constraint specifies the four-number state combination.
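A small NumPy sketch of the unitary constraint in action; for the sphere mapping I use the conventional Bloch angles, which also drop the unobservable global phase:

    import numpy as np

    rng = np.random.default_rng(0)
    state = rng.normal(size=2) + 1j * rng.normal(size=2)   # coefficients of |0> and |1>
    state /= np.linalg.norm(state)                         # enforce the unitary constraint

    print(np.sum(np.abs(state) ** 2))                      # 1.0

    # Drop the global phase so the |0> coefficient is real and non-negative,
    # then read off the spherical (Bloch) angles.
    state = state * np.exp(-1j * np.angle(state[0]))
    theta = 2 * np.arccos(np.clip(abs(state[0]), 0, 1))    # polar angle
    phi = np.angle(state[1])                               # azimuthal angle
    print(theta, phi)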


Very insightful, thank you!


Question:

Why is superfluid 4He not quantum-entangled? What differentiates macroscopic many-body quantum states from many-body quantum entanglement?


Superfluid 4He is quantum-entangled and non-local effects are observable. https://www.sciencedaily.com/releases/2017/03/170321110344.h...


So why isn't it taken into account when quantum-entanglement records are broken?

You can have flasks of superfluid 4He with thousands of moles of entangled atoms clearly visible.


Quite correct but usually quantum-entanglement records are for qubits rather than generic atoms. And qubits must be able to be measured in isolation e.g. the polarisation of a single photon can be measured. With superfluid 4He you can't isolate the qubits.


Ah, that makes sense. Thank you.




Take that more as a challenge and a caution to avoid overhype.



