That's still many times faster than you'd get conventionally, so it's a reasonable trade-off.
But it doesn't address the possibility that you can't entangle enough qubits to begin with, or keep them stable long enough to actually perform the calculations, which is where the error correction stuff comes in. I don't know enough about the challenges there to comment on how many physical qubits you'd need per logical one (though 1000:1 seems high to my naive intuition; I'd guess maybe 10-100 for the order of magnitude myself).
Newbie question: are they naturally unstable (so we won't fix that), or is it just that we don't master them well enough right now (and we'll fix them "soon")?
However, with the right kind of quantum computer or quantum simulator, you could construct a system of qubits that is described by the exact same Hamiltonian. That way, the quantum state of your qubits and your original system would behave in exactly the same way. Then, you just let the qubits evolve in time and read out the system's state at the end. Do that a bunch of times and you'll see an average picture of what the original system you're modelling (protein or something) would do.
So to recap - we can get around the difficult compute bottleneck caused by the desire to perform high-fidelity physics simulations by creating a system that follows the same rules and which we can probe much more easily.
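To make that concrete, here's a toy classical sketch of the idea (everything here - the Hamiltonian, the numbers - is made up for illustration, and a real quantum simulator would evolve under the system's Hamiltonian natively instead of exponentiating matrices):

    # Toy version of "let the system evolve, then sample it" (units: hbar = 1).
    import numpy as np
    from scipy.linalg import expm

    H = np.array([[1.0, 0.5],
                  [0.5, -1.0]])                 # hypothetical 2-level Hamiltonian
    psi0 = np.array([1.0, 0.0], dtype=complex)  # start in state |0>

    t = 2.0
    U = expm(-1j * H * t)                       # time-evolution operator e^{-iHt}
    psi = U @ psi0                              # evolved state

    # "Read out the system's state at the end. Do that a bunch of times."
    probs = np.abs(psi) ** 2
    probs /= probs.sum()                        # guard against float round-off
    samples = np.random.choice([0, 1], size=10_000, p=probs)
    print(probs, samples.mean())                # average picture of the outcomes

The catch classically is that H and psi grow exponentially with system size; the quantum simulator sidesteps that by literally being the system.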
Your question doesn't make sense. Many (most) things in nature are incredibly hard to model correctly. Just because they happen doesn't make it easy to quantify usefully.
For other calculations you might need error correction, but that goes nowhere with current technology.
What the Chinese did was groundbreaking.
Even software that is supposed to be useful is so terribly slow. Computers are between 2 and 4 orders of magnitude faster than programmers today experientially believe they are, because today’s culture of programming has rotted so thoroughly. Do you really need a quantum computer when 3 orders of magnitude are just sitting there on the table waiting to be picked up?
I totally agree with your first paragraph, though.
> because the trend in language design for 25 years has been to make slow languages
No one sets out to make a language slow. The trend is to make higher-level languages. Do you really think there is no reason for it besides novelty and coolness factor?
He's instead saying that it's very much possible to build a language with a similar level of abstraction/ergonomics to, say, Java, or Python, or C#, or whatever, but with performance characteristics similar to a lower-level language like C. And we are starting to see this - there are languages like Rust or D which are (at least to my eyes) much less arduous and foot-gun prone than languages like C or C++ while having similar (or better) performance.
Of course there's also Jai, but I think we should remain unbiased here :P
As an aside though - I think some of those orders of magnitude of performance gains could be had by just writing better code in your existing high level languages. (At least in my experience with enterprise software dev).
So far, reality seems to confirm my intuition.
Can I say "pretty much all of Linux userspace"? Or Java VM? Or Gnome?
A lot of optimizations left on the table have nothing to do with manual memory management, and have everything to do with "eh, let's just query the database again, that'll shave a day off the schedule".
Answer that and I get the feeling you’ll understand why your current thinking is so misguided.
Technology doesn't make people better. It lets people do better things if they have the desire to do so. It also must enable them to do worse things if they have the desire to do so. You cannot have one without the other. And on balance things have historically gotten better, so there's not too much reason to be worried about the fact most people just use global communication to bicker. They won't be remembered. Those who are, however, wouldn't have been possible otherwise.
As for where quantum computing factors into this, I don't have the slightest clue.
As Timothy Snyder notes when discussing the internet, the introduction of the printing press divided Western Christianity thereby causing a century and a half of religious wars in which a third of the population was killed. And later it gave us the Enlightenment and educated society.
"They won't be remembered."
Three orders of magnitude speedup is a constant factor. Quantum computers provide asymptotic speedups - from quadratic (n down to sqrt(n), as in Grover-style search) up to exponential (n down to ~log(n)), depending on the problem - for example, biochemistry simulation for improved drug designs and such. That seems worthwhile.
And I agree that it's disheartening to watch how horribly we use these incredible tools. But thankfully as programmers we're in a position to design these tools to offset the worst parts of humanity.
And one field in which more powerful computers will be needed soon is deep learning. It appears that progress is beginning to stall, as larger networks are necessary. Better tools for distributed computing will make up the difference in the short term, but the current infrastructure appears to be insufficient for general intelligence.
"In 50 years, every street in London will be buried under nine feet of manure."
So if I decide to use the device that I have bought and invested in, to "rant" about something (to connect to other people through it, essentially), or even just connect to my loved ones sending "meaningless" information (as many I'm sure would call it), or if I choose to use it for anything else, who is to say that it's "wasting my life"? You? Why do you decide what is wasteful and what is not? Maybe all those people want to do those things?
You can't say that this technology is wasted or is not used properly without also implicitly assuming moral and philosophical authority about what people should and shouldn't choose to do with their time and other resources, including the money used to buy such devices and the resources spent on developing them. Why do you assume that you can be such an authority?
Of course, looking at this in another way, you definitely have the ultimate authority in this area in this one regard: where it applies to your own life. Which I guess is another way of saying that what you wrote says more about your outlook on life than on the underlying technology and its social ramifications.
You're certainly free to find the relevant response to that intellectually boring. But it's quite a leap from OP's totalizing statement of despair to implicitly fill in that ellipsis with "allusion to human well being."
I'd like to be generous in my reading, but I don't see any room in there for a discussion of "heavy" vs healthy device use.
The original author used his "objective measure of excellence" as an argument for not developing quantum computing technologies to be used in personal devices. In this specific instance, in this practical regard (even though it's probably 50 years too early for this question) - I argue that yes, the position of personal freedom (to use quantum computing in cell phones, exaggeratedly) does indeed trump the other position which is to actively exclude quantum computing from phones for the vague fear that people might waste their lives on it, by failing to fit into some objective measure of excellence.
I know virtually nothing about quantum computing, so can you give some examples? Whenever I've asked anyone who seemed to know anything about the field, all they can come up with is weather forecasting and simulating nuclear explosions. Not exactly "general use," as you put it.
In the back of my mind, I know that quantum computing is a big deal, and we're in ENIAC days with it. But I don't have a good understanding of where it goes or why.
Many classical algorithms that run in ~O(poly(n)) can have an 'equivalent' quantum algorithm in ~O(log(n)) - an exponential speedup. There is still debate as to how the complexity class of problems which are efficient on a quantum computer (BQP) relates to other complexity classes. It's suspected P lies entirely within BQP.
However, at least initially, I think quantum computers will be a specialized piece of equipment. Classical computing is pretty good for your average person. Quantum computers will be used for more heavy compute tasks.
It's actually known that BQP contains P. It also contains BPP. What's not known is the relationship between BQP and NP (most experts suspect there's no containment in either direction).
See this for an easy proof: https://people.eecs.berkeley.edu/~vazirani/f04quantum/notes/...
National Quantum Initiative - Action Plan
The state of each qbit is represented by a state vector of two complex numbers [a, b] where |a|^2 + |b|^2 = 1. There are two special qbit values called the classical basis: [1, 0] which is the classical bit 0, and [0, 1] which is the classical bit 1. If a qbit is not in one of the two classical states, we say it is in superposition. When a qbit is in superposition, we can measure it and it will collapse probabilistically to 0 or 1; for a qbit [a, b], the probability that it collapses to 0 is |a|^2 and the probability that it collapses to 1 is |b|^2.
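(A quick numpy sketch of that measurement rule, with amplitudes picked arbitrarily for the example:)

    import numpy as np

    a, b = 0.6, 0.8j                  # |a|^2 + |b|^2 = 0.36 + 0.64 = 1
    qbit = np.array([a, b])           # a qbit in superposition

    # Measurement collapses to 0 with probability |a|^2, to 1 with |b|^2.
    p0, p1 = np.abs(qbit) ** 2
    outcomes = np.random.choice([0, 1], size=100_000, p=[p0, p1])
    print(np.bincount(outcomes) / len(outcomes))  # ~[0.36, 0.64]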
Things get more interesting when we have multiple qbits. If we have two qbits [a, b] and [c, d], we define their product state as their tensor product [ac, ad, bc, bd]. For example, if we have two qbits both in state [1/sqrt(2), 1/sqrt(2)], their product state would be [1/2, 1/2, 1/2, 1/2]. We use the product state to calculate the action of a quantum logic gate that operates on multiple qbits - for a gate which operates on two qbits, we can always represent its action as a 4x4 matrix.
Usually we can move back and forth between the product state representation and writing out the individual qbit states. However, in certain scenarios something very special happens: we cannot factor the product state back into the individual state representation! Consider the product state [1/sqrt(2), 0, 0, 1/sqrt(2)]. If you try to write this as a tensor product of two states [a, b] and [c, d], you cannot: you'd need ad = 0, which forces a = 0 or d = 0, but a = 0 contradicts ac = 1/sqrt(2) and d = 0 contradicts bd = 1/sqrt(2). It cannot be factored; the qbits have no individual value, and we say they are entangled.
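(In code, the tensor product is numpy's kron, and there's a neat shortcut for the factorability check: any product state [ac, ad, bc, bd] satisfies (ac)(bd) - (ad)(bc) = abcd - abcd = 0, so a nonzero value of that determinant means the state is entangled. A sketch:)

    import numpy as np

    q1 = np.array([1, 1]) / np.sqrt(2)
    q2 = np.array([1, 1]) / np.sqrt(2)
    product = np.kron(q1, q2)              # [1/2, 1/2, 1/2, 1/2]

    def is_product(state):
        w, x, y, z = state                 # [ac, ad, bc, bd] if factorable
        return np.isclose(w * z - x * y, 0)

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    print(is_product(product))             # True  -> factors back into q1, q2
    print(is_product(bell))                # False -> entangled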
Well, what does this mean? It means that when you measure one qbit, even if the qbits are very far apart, you instantly know the value of the other qbit. So if I entangle two qbits in the state [1/sqrt(2), 0, 0, 1/sqrt(2)], give you one, and we go to opposite ends of the universe, then if I measure my qbit and see a 0 I'll know your qbit instantly also collapsed to 0 (or collapsed to 1 if I measured 1). These correlations have been experimentally verified even when the measurements are too close in time for light to travel between them. It is instantaneous, as far as we can tell. So, local realism is wrong! Spooky action at a distance is real!
There is an important caveat: while the qbits seem to coordinate in some faster-than-light way, you cannot use this to communicate in a faster-than-light way. All we have is a shared random number generator. I can't send some chosen bit from my reference frame to yours. This is called the no-communication theorem.
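(A small sketch of why no message gets through, using the same toy representation as above: however correlated the outcomes are, the distribution on your side alone is 50/50 whether or not I measure:)

    import numpy as np

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # amplitudes for |00>,|01>,|10>,|11>
    probs = np.abs(bell) ** 2

    # Your qbit's marginal distribution if I never measure:
    print(probs[0] + probs[2], probs[1] + probs[3])   # 0.5 0.5

    # If I measure first, I see 0 or 1 with probability 1/2 each and your
    # qbit collapses to match -- but averaged over my outcomes your
    # distribution is still 50/50. No bit of my choosing gets through.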
If you found this interesting, I have a full video on quantum computing for computer scientists here: https://youtu.be/F_Riqjdh2oM
IDGAF about your chosen quantum mechanics interpretation, don't @ me
In more practical terms, the theory also falls apart when you start doing more complicated things with entangled qbits than just measuring them, like quantum teleportation or error correction.
Also, the idea that collapse is a physical phenomenon that propagates faster than light is not universally accepted. There are other ways to interpret these results. None of them are easy on your intuition, but personally I like this one:
(There's a video too: https://www.youtube.com/watch?v=dEaecUuEqfc)
We’ve made real-world measurements here that correlate with measurements made at the exact same time over there in a way that mathematically cannot be accounted for without some species of intuition-violating spookiness. It’s all very real.
"A Bell test experiment or Bell's inequality experiment, also simply a Bell test, is a real-world physics experiment designed to test the theory of quantum mechanics in relation to two other concepts: the principle of locality and Einstein's concept of "local realism". The experiments test whether or not the real world satisfies local realism, which requires the presence of some additional local variables (called "hidden" because they are not a feature of quantum theory) to explain the behavior of particles like photons and electrons. According to Bell's theorem, if nature actually operates in accord with any theory of local hidden variables, then the results of a Bell test will be constrained in a particular, quantifiable way. If a Bell test is performed in a laboratory and the results are not thus constrained, then they are inconsistent with the hypothesis that local hidden variables exist."
Spooky private method scope is spooky.
I think the many-worlds picture is a clearer way to think about this than "spooky action at a distance".
I have two guitars. I place the guitars facing one another, such that plucking the HIGH E string on one also plucks the LOW E string on the other. We put ear plugs in our ears, such that I can separate the two guitars without us ever hearing them. I pluck the guitars, give you one, and take the other one and travel far away. I then listen to my guitar. It is the HIGH E guitar. Now I know you have the LOW E guitar. Wow. Incredibly unspooky. Not teleportation.
I think by taking the guitar far away you are implying the possibility of faster-than-light communication, which is thought to be impossible, so his links address that? Hard for me to understand, so I'm just saying my thoughts out loud to get clarified, thanks
That state is induced at the moment the guitars share "locality" because entanglement requires locality for initialization of polarization.
So then, we say we are as yet unaware of the qualities of the polarization we, ourselves, induced. Very mysterious.
So spooky, yes? We do not measure, because we choose not to, so we do not yet know.
Even if we prevent ourselves from having the capacity to measure, the results hold true, but so what? And so what, if we ask others to do the same. Imagine that we ask two waiters to tape two coins together in the kitchen, flip the linked coins, peel the coins apart while preserving the outcome of the coin flip, then take one coin to your table, and one to mine. Now I know which side of the coin you are looking at, without walking over to your table. So what. Nothing about this transmits information superluminally.
In reality, with instrumentation, carrier signals relay an electromagnetic transmission in such a way that one cannot peek or tamper (the waiters can't change the coin flip, we cannot hear the ringing guitar), but this does not invalidate the premise of the analogy. For the purposes of the analogous guitar example, we say that our couriers (electromagnetism itself) are prevented from touching or listening to the ringing guitars, or disclosing what they might sense.
With the guitars, we say the guitars move away from the place where they were entangled. We'll say that our instrumentation rang the guitars at the Grand Canyon. Our couriers then transported the guitars to you, at the top of the Empire State Building in New York, and me on the Golden Gate Bridge in San Francisco. When I receive the guitar and discover that the LOW E string is ringing, it can only mean that your guitar's HIGH E string is ringing in New York.
There are no local hidden variables in this example. The premise of polarity as an analogue for guitar strings is modeled in the exact same manner. Six strings on a guitar map to the same essential parameters of each of two directions for all three axes of spin.
Ask yourself: if you construct a gun, with two diametrically opposed barrels, with exactly opposed rifling twists, and you aim the gun at two opposing (but identical) abrasive knurled metal rasp targets, such that if the bullet spins one way, the ricochet off rasp target will send it to a blue target, but if the bullet spins the other way, the grain of the rasp target is such that the bullet is sent to an orange target, will you be surprised to find that the behavior of the projectiles remains consistent?
Fire those bullets out of that gun, and as the bullets leave the opposing twists of the barrel and the spin of the bullets encounters the friction of the knurled surface, they will consistently be sent in whichever direction the spin of the barrel's rifling puts them. When one barrel's twist spins the bullet onto the blue target, the other barrel's twist always puts the corresponding bullet onto the orange target, by bouncing it off the polarizer rasp.
So, now, to shrink downward to the realm of particle physics, what we find is that the ballistic particle guns are such that the emitter source is an array of many guns with varying rifling twists, but like pulling a lever on a slot machine, we cannot know which of the guns embedded in the radiation source will fire next.
We won't know the turn of the rifling of the gun's barrel prior to whichever one happens to go off. We stick out our rasp target to have it send the bullet to a colored target, and we declare that the polarizer rasp directed the bullet particle, but not really. The emitting source's gun barrel imparted the spin. The polarizers induced behavior on particles that would have behaved as reciprocals anyway.
EDIT: I just came across an illustration which might be helpful. I will place three coins on a table and cover them so that you cannot see whether they are heads or tails. You get to pick two of them and I will reveal them for you, but you can never look at the third one. Your task is to figure out by which rule I am placing the coins on the table.
In the first round you pick coins one and two, I reveal them to be heads and tails. In the second round you pick coins one and two again, now they are tails and heads. You continue picking coins one and two for a few thousand rounds and always see heads and tails or tails and heads, they are never the same.
Then you switch to picking coins two and three for a few thousand rounds and again they are always heads and tails or tails and heads, they are also never the same. Now you have figured out what I am doing: I am randomly choosing between heads-tails-heads and tails-heads-tails for coins one, two, and three.
So in the next round you pick coins one and three and I reveal them to you. Heads and tails. WtF?!? They should have been the same if I always choose between heads-tails-heads and tails-heads-tails. You try again. Heads and tails. Again. Tails and heads.
No matter what you try, you never get to see two coins with the same side up. That's ridiculous, you think. There are only two sides to a coin but three coins on the table. At least two of the coins have to have the same side up in each round and if you select the two coins to reveal at random, then you should at least sometimes get to see two coins with the same side up no matter which rule I use to place them. But you don't.
Assuming that I choose heads and tails for each of the coins when I placed them on the table and before you make your choice is incompatible with your observation that you never see two coins with the same side up. But if you assume that I can magically turn the coins around at the moment you tell me which two coins to reveal, then you can explain your observation. It may however trouble you because your explanation now involves magic.
And that is roughly how entangled pairs in Bell test experiments behave. Or more formally: classically P(1=2) + P(2=3) + P(3=1) >= 1, i.e. at least two coins always have the same side up no matter what the underlying distribution is. Entangled pairs in Bell test experiments violate this inequality; the sum of those probabilities comes out less than 1 - not 0 as I portrayed it, but 0.75.
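(The classical side of that inequality is just the pigeonhole principle, easy to brute-force - a quick sketch, not tied to any particular experiment:)

    from itertools import product

    # Every way to pre-assign heads/tails to three coins:
    for coins in product("HT", repeat=3):
        same_pairs = sum(coins[i] == coins[j] for i, j in [(0, 1), (1, 2), (2, 0)])
        assert same_pairs >= 1        # two sides, three coins: some pair matches
        print(coins, same_pairs)

    # So any mixture of pre-assignments gives P(1=2) + P(2=3) + P(3=1) >= 1,
    # while the entangled pairs give 0.25 + 0.25 + 0.25 = 0.75.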
All of the experiments use polarizing lenses to make a determination of results on both sides. This is where the experiments are fundamentally flawed and offer weak evidence.
To simply read about the fundamentals of light polarization is to understand that quantum wave function collapse is much ado about nothing, and it becomes obvious that all this contention is total bullshit, and none of it is magic.
Josephson phase qubits aren't even utilizing the same fundamental concepts to examine the qualities of the mediums that quantum uncertainty affects.
Unlike fundamental particles such as photons and electrons, there is nothing substantial about the state represented by the standing wave qubit trapped in a circuit operated by a Josephson junction device. Destroy the device (or never mind that, just never place it in a dewar flask chilled to 4 kelvin, to activate it) and the phenomenon doesn't even exist. So much for matter and energy never being created nor destroyed.
To sit there and state that, on paper, this is the same thing as an individual electron emitted as beta decay is, well... fundamentally flawed.
"The zero voltage state describes one of the two distinct dynamic behaviors displayed by the phase particle, and corresponds to when the particle is trapped in one of the local minima in the washboard potential. [...] With the phase particle trapped in a minimum, it has zero average velocity and therefore zero average voltage. [...] The voltage state is the other dynamic behavior displayed by a Josephson junction, and corresponds to the phase particle free-running down the slope of the potential, with a non-zero average velocity and therefore non-zero voltage."
So, we're not even talking about actual fundamental subatomic particles anymore. We're talking about phase oscillations, and renaming that as if it were a "particle" because, hey, particle/wave duality, so why not?
Hand-wavey math permits us to equivocate that a current induced on a wire, by way of the transfer of many actual electrons across substrates, can serve to prove the premise of a "teleportation device" also.
See? If we play our game of three-card monte, change phase oscillations, wiggle our noses, and tilt our heads a little, it's all very obvious that faster-than-light information transfer can be generalized to fit in the same picture, because this tuning fork makes that tuning fork ring in harmony, but only when we choose to notice.
"We measure a Bell signal S of 2.0732 ± 0.0003, exceeding the maximum value |S| = 2 for a classical system by 244 standard deviations. In the experiment, we deterministically generate the entangled state, and measure both qubits in a single-shot manner, closing the “detection loophole”. Since the Bell inequality was designed to test for non-classical behavior without assuming the applicability of quantum mechanics to the system in question, this experiment provides further strong evidence that a macroscopic electrical circuit is really a quantum system."
That says in plain English that they have not assumed that this system behaves according to quantum principles. In fact, it is precisely the opposite: the quantum nature of this system is a conclusion of their results. It would be statistically impossible for any system following classical rules to produce the same data.
(It bears repeating that the math underlying that conclusion is truly not very complex, and it is very, very well studied. If you can show that it’s flawed somehow, don’t bother publishing— just post your proof here and I’ll, uh... pick up the Nobel for you.)
The only caveat is that this experiment closes the detection loophole, but not the locality loophole; it is theoretically possible that a classical signal could be sent from one qubit to the other quickly enough to fabricate this data. There’s no particular reason to suspect a secret signal is in play, but it isn’t theoretically prohibited.
Assuming you haven’t found a flaw in their mathematics, and that you aren’t alleging that the researchers deliberately fabricated their data, the locality loophole is your best (and likely only) avenue to dispute their conclusions. However, if you wish to pursue that, you should keep in mind that there are many other experiments which close the locality loophole but not the detection loophole, and, since 2015, several that close both. Three-card monte may be a better investment of your time.
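(For reference, the quantum-mechanical prediction they're comparing against is easy to reproduce. Here's a numpy sketch of the ideal CHSH value for a singlet pair with the standard measurement angles - the theoretical ceiling is 2*sqrt(2) ≈ 2.83, and real hardware like theirs lands lower because of noise:)

    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    def A(theta):
        return np.cos(theta) * Z + np.sin(theta) * X   # spin along an angle in X-Z

    singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)     # (|01> - |10>)/sqrt(2)

    def E(ta, tb):
        # correlation <A(ta) (x) A(tb)> in the singlet state
        return np.real(singlet @ np.kron(A(ta), A(tb)) @ singlet)

    a, ap, b, bp = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
    print(abs(S))   # 2.828... > 2, the classical (local hidden variable) bound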
What I did clearly state, and insist as quite relevant, is that entanglement and double slit experiments are hocus pocus and irrelevant distractions. In fact, I stated that this experiment says basically nothing because it merely simulates quantum phenomena within a circuit.
Hello? Yes. Electrons are quantum entities, and assuredly interact with photons which are also quantum entities. This is demonstrated by the photo-electric effect, which we can all notice by placing tin foil in a microwave. Therefore a circuit is indeed a quantum system, since it assuredly deals in electrons.
Wow! Didn't even need to publish a paper about qubits to draw that conclusion! Amazing!
The implication here is that Bell is a waste of time, and so is his theorem: such that emission doesn't determine state, especially when you don't look at it.
Great, thanks Bell. I'll be sure to not look at anything until I want to know what it is. True genius at work.
I don’t believe that you could, even theoretically, produce the data from a loophole-free Bell test without invoking superdeterminism, superluminality, or quantum entanglement.
Can you describe how this would be theoretically possible?
And yet, with relativistic particles, the wild claims are made that splitting photons through a substrate, and then passing them through the wall of a polarizing lens, means we can declare ourselves capable of rewriting and erasing history. Eh, not quite.
But hey, where there's smoke, there's fire, so something must be true, right? Let's just make up whatever.
1. Do you think all Bell test experiments ever done were flawed?
2. If so, do you think a non-flawed Bell test experiment could be done?
3. If so, do you have a definite opinion about whether or not it would yield a Bell inequality violation?
4. If so, would it yield a Bell inequality violation or not?
5. If you think all Bell test experiments ever done were flawed, can you pick one, preferably one commonly considered a good one, and point out what exactly you think the flaw in the experiment is?
6. If you think Bell inequality violations are or could be real, how do you want to explain them?
Note that those are all yes-or-no questions, well, at least all but the last two. I don't need or want more than a yes or no for the first four because from your comments alone it is not clear, at least to me, what your position actually is.
Now there are several ways you can interpret that: 1) the measurement choice at one site is superluminally conveyed to the other site and causes an effect; 2) the measurement choices are not independent and random despite the best attempts of the experimenters; 3) the wavefunction over the two sites is physically real and both measurement choices are necessary to sample it.
Okay. So now, make two billiard balls so that they're spinning opposite directions, and separate them to opposite sides of the table. Then, go to ball A and do something to it that reverses its spin. Then observe the spins of the two balls. Are they the same?
For billiard balls, yes. If ball A is spinning clockwise and ball B is spinning counter-clockwise, and you reverse the spin of ball A, then you will observe both balls to be spinning counter-clockwise.
For quantum particles, the flip itself actually behaves much like the billiard balls - flipping one member of an entangled pair changes the joint correlation. The genuinely weird part is this: if we manufacture two entangled electrons such that observing them along the same axis always reveals opposite spins, no pre-assigned spin values carried by the electrons can reproduce the statistics you get when the two sides measure along different axes.
I'm still trying to wrap my head around why there isn't some hidden variable though which determines which spin they'll have. Like with Bayes' theorem, I keep learning it but for whatever reason my brain won't remember it and I have to look it up again hahah.
Imagine you and I each have a device: a small box with one openable window on the top, one on the front, and one on the back. When one of these windows is opened, it shows either white or black. Further, I tell you that these boxes are entangled: if we both open the same window, the same color will be displayed on both boxes. This only works for the first window opened and only if we both open the same window: if a second window is opened or if we open two different windows, the colors shown will no longer be correlated in any way.
I actually have many of these pairs of boxes, each pair numbered uniquely to identify its matching twin, and we run the experiment for an arbitrarily large number of them until you're satisfied that my claim holds true. You see nothing special about these boxes and argue that they're simply preprogrammed from the factory to display a set of colors the first time a window is opened (for example, BWB or WWB). I counter that entanglement is special and the boxes are not preprogrammed from the factory. In fact, they decide at random which color to show right at the moment one of the boxes' windows is opened!
My claim seems crazy and at first glance possibly even unprovable! But it turns out with a bit of cleverness we can test this.
We take an arbitrarily large number of boxes and begin independently opening their windows at random (without consulting or coordinating with each other when deciding which window to open), recording box numbers, window choices, and the resulting colors. When finished, we compare notes. Sure enough, when we both open the same window on entangled boxes, the colors match as expected. Upon a closer look at the data, we discover something surprising. Ignoring which window we opened, each pair of entangled boxes showed the same color exactly 50% of the time. How is this surprising? A little bit of probabilistic analysis conclusively demonstrates that such a result is fundamentally incompatible with any hidden variable theory.
Assume for the sake of argument that the boxes are preprogrammed. The only possible programming for each box is some form of XXX or XXY. Consider: if both boxes are programmed WWW (or BBB) and we both open a window at random, they will show the same color 100% of the time. If both are programmed WWB (or BWB, or BWW, or any equivalent variant), 2/3 of the time there's a 2/3 chance of displaying the same color when choosing windows at random, and 1/3 of the time there's a 1/3 chance. Combined, whenever the colors are in an XXY configuration, there's a (2/3)(2/3) + (1/3)(1/3) = 5/9 chance of showing the same color when windows are chosen at random. Note that in both scenarios the odds of seeing the same color from randomly-chosen windows are greater than 50%! Assuming the preprogrammed colors are chosen at random, some form of XXX will be chosen 2/8 of the time and XXY will be chosen 6/8 of the time, so the total odds of seeing the same color if the boxes were preprogrammed should be 2/8 * 100% + 6/8 * 5/9 = 2/3. In theory, our data should have shown about 67% agreement in color, but our experiment consistently shows colors matching 50% of the time.
A hidden variable theory—any hidden variable theory—runs into this probabilistic hurdle.
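(If the 5/9 floor seems suspicious, it's easy to brute-force. A sketch, assuming as above that windows are picked uniformly and independently:)

    from itertools import product
    from fractions import Fraction

    third = Fraction(1, 3)
    floor = Fraction(1)
    for prog in product("WB", repeat=3):     # all 8 possible preprogrammings
        # both boxes share prog; each of us opens one of 3 windows at random
        p_match = sum(third * third
                      for i in range(3) for j in range(3)
                      if prog[i] == prog[j])
        floor = min(floor, p_match)
        print("".join(prog), p_match)        # 1 for XXX, 5/9 for XXY

    print(floor)  # 5/9 ~ 55.6%: no preprogramming gets down to the observed 50%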
It's not like they're making this stuff up. It falls very straightforwardly out of the mathematics underpinning quantum mechanics; we have tested these things, and we have conclusively disproven the existence of any type of local hidden variable theory.
That's not to say quantum mechanics is faultless and complete, but any new theory is going to have to incorporate these results rather than throw them out entirely (much like how Newtonian physics "falls out" of general relativity at low speeds and small masses).
There is no absolute motion or speed. There is only relative motion or speed.
18-qubit entanglement with six photons' three degrees of freedom
Here is how radar works: You shine a flashlight somewhere. If there is an object within range, some of the light is reflected back at you and you know there is an object. This is literally what radar does.
So, what part of this, and how, does this "quantum" magic affect?
When the reflection returns, they can use the entanglement to sort out signal from noise much better and radar is safe from jamming and spoofing.
Originally "We 'shine' a bunch of photons at an object, and see if they come back. Except there could be dragged decoy spamming photons back at us, and we don't know if there is. Or it could be sending out so much light, our photons coming back are lost in the mix."
With quantum: "We can verify that any photons that get reflected are the ones that we sent out and check only our photons when flooded with photons from enemy flashlights."
It seems like it would take millions (more?) of entangled photons per second to achieve this. And then these can be sensed, verified, and measured at such speed that it could be used as radar?
It sounds far fetched, but I would love to know if this process is already that quick and easy.
"Each bit is tripled in how much information it could hold"? Normally bits have two states, but Shreppler is claiming these qubits can have six states? What am I missing?
How do you get 18 qubits from 6 photons? You use multiple degrees of freedom. Whereas a bit in traditional memory has a single degree of freedom (electric charge), photons have many degrees of freedom, such as polarization, direction of travel, and angular momentum. Three degrees of freedom per photon times six photons gives 18 qubits.
This is because in traditional memory, a single "cell" stores many elementary charges (electrons).
Reading the arxiv document prepared by Microsoft Word is so painful. Can't Microsoft learn something from TeX?
They can be combined (like coordinate vectors) by writing coefficients before them. For example, 1/√2 |a> + 1/√2 |b> would be a combination of state a and state b. Combinations of states obey the condition that the sum of the squared magnitudes of the coefficients equals one. This is the unitary constraint, and above you can see that (1/√2)^2 + (1/√2)^2 = 1/2 + 1/2 = 1.
Coefficients can be complex numbers, which consist of a real part and an imaginary part.
So, putting all of this together, a qbit that may be in the state |1> or the state |0> may also be in the state (a+bi)|1> + (c+di)|0>. With the unitary constraint, we can cut out a degree of freedom by introducing the equation |a+bi|^2 + |c+di|^2 = 1. This leaves three degrees of freedom, which can be mapped into a sphere by a change of coordinates. Three spherical coordinates plus the unitary constraint specify the four-number state combination.
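(A tiny numpy check of the unitary constraint, with amplitudes chosen arbitrarily for the example:)

    import numpy as np

    alpha = 0.5 + 0.5j               # coefficient (a+bi) on |1>
    beta = 0.5 - 0.5j                # coefficient (c+di) on |0>

    norm = abs(alpha) ** 2 + abs(beta) ** 2
    print(norm)                      # 1.0, so |a+bi|^2 + |c+di|^2 = 1 holds

    # If it weren't 1, dividing by sqrt(norm) would renormalize the state:
    state = np.array([alpha, beta]) / np.sqrt(norm)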
Why is superfluid 4He not quantum-entangled? What differentiates macroscopic many-body quantum states from many-body quantum entanglement?
You can have flasks of superfluid 4He with thousands of moles of entangled atoms clearly visible.