IQM achieves 99.9% 2-qubit gate fidelity and 1 millisecond coherence time (meetiqm.com)
88 points by doener 9 months ago | 30 comments



This press release is IMO missing a critical metric: how long an operation takes. If you can run the thing at 1GHz (i.e. 1 operation on a given qubit or qubit pair per nanosecond), then this is awesome. If it’s one operation every millisecond, it’s rather less awesome. This is important for far more than the time a computation will take: it’s mandatory context for their coherence time numbers. For an actively operating computer, I don’t really care about the coherence time per se — I care how many operations I can do before the system decoheres, which affects how much error correction I need. Compare this to DRAM, which also decays (not in milliseconds unless it’s being Rowhammered, but still): the refresh process needs to be much, much faster than the decay.

You can maybe squint at their “sequence fidelity” and extract some information about this.


In principle you’re right, but like most of the leading labs, IQM is using superconducting qubits, and the operation time for these is in the 10-100 ns range. That means roughly 10,000 serial operations within the coherence time, so the two-qubit gate fidelity is probably the most important figure.
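To put rough numbers on it (the gate times here are typical superconducting values I'm assuming, not something IQM quotes):

    # Rough serial-operation budget: coherence time divided by gate time.
    # Assumed: 1 ms coherence (the headline number) and 10-100 ns two-qubit
    # gates, which is typical for superconducting hardware.
    coherence_s = 1e-3
    for gate_s in (10e-9, 100e-9):
        ops = coherence_s / gate_s
        print(f"gate = {gate_s * 1e9:.0f} ns -> ~{ops:,.0f} serial gates per coherence time")
    # gate = 10 ns  -> ~100,000 serial gates per coherence time
    # gate = 100 ns -> ~10,000 serial gates per coherence time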


I am proficient with many things, but quantum computing is not one of them.

Could anyone provide some context on why this is significant and currently #3 on the front page?


In QC, a gate is a logical building block, akin to the logic gates in classical computing such as AND, OR, XOR, NOT.

Multi-qubit gates like CZ (present in the article) and CNOT are crucial for manipulating quantum states and achieving the wavefunction interference patterns that yield interesting results (quantum teleportation, Shor's algorithm, and basically any nontrivial quantum result you can think of).

One of the biggest issues keeping quantum computation from taking over the world is maintaining a given quantum state for sufficiently long (coherence time), so you can do meaningful computations on it. Of course, that's meaningless if your actual operations are noisy, hence the emphasis on high fidelity as well.

Classical computation would not be very useful if, say, one-fifth of the time you multiplied two numbers together you got back the all-ones bit pattern. The same can be said about quantum computing. Of course, with classical computation we are greedy and will raise a stink about a CPU that gives us errors in, say, one in 9 billion divisions. But QC isn't quite there yet.
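To put numbers on the "noisy operations" point, here's a toy calculation of how per-gate fidelity compounds over a circuit (assuming independent errors and no error correction, which is a big simplification):

    # Probability that an entire circuit runs without a single gate error,
    # assuming independent errors and no error correction (a simplification).
    def circuit_success(gate_fidelity, n_gates):
        return gate_fidelity ** n_gates

    for f in (0.99, 0.999):
        for n in (100, 1000, 10000):
            print(f"fidelity {f}: {n:>5} gates -> {circuit_success(f, n):.1%} chance of no error")
    # fidelity 0.99:    100 gates -> 36.6% chance of no error
    # fidelity 0.99:   1000 gates ->  0.0% chance of no error
    # fidelity 0.99:  10000 gates ->  0.0% chance of no error
    # fidelity 0.999:   100 gates -> 90.5% chance of no error
    # fidelity 0.999:  1000 gates -> 36.8% chance of no error
    # fidelity 0.999: 10000 gates ->  0.0% chance of no error

So going from 99% to 99.9% is roughly the difference between "useless after a few hundred gates" and "can string together a thousand or so".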

What I would be interested in (and didn't see from a preliminary skim) is how big of an incremental improvement this is from the previous record.


To elaborate on why CZ / CNOT are so important:

CNOT is controlled-NOT (CZ is controlled-Z, but NOT has a clearer classical analogue, and phases are harder to explain).

Consider two inputs A and B, where A is a mixture of the zero-state (|0>; ground) and the one-state (|1>; excited) a|0> + b|1> (such that |a|^2 + |b|^2 = 1), and B is the pure zero-state |0>.

In this case, A behaves somewhat like a selector: when it is |1>, B's state will be flipped from |0> to |1> (and vice versa), and when it is |0>, B's state will remain the same. But crucially, this applies to the entire superposition of states! So in the portion of the wavefunction where A is in the zero-state nothing happens, while in the portion where A is in the one-state, B gets flipped out of its zero-state.

In other words, CNOT takes the state from a|00> + b|10> to a|00> + b|11>, i.e. it entangles a qubit that was previously in the ground state with an arbitrary other qubit. Now if you measure A, then B (or the other way around), you'll always get the same result for both (provided that the quantum state is still coherent).

(Normally, if you took two particles that had the same state but were not correlated, you'd have to consider all the possibilities of being in |00>, |01>, |10>, and |11>.)
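If it helps, here's the same thing as a tiny state-vector calculation (plain numpy, basis ordered |00>, |01>, |10>, |11>; the amplitudes 0.6 and 0.8 are just an example I picked):

    import numpy as np

    a, b = 0.6, 0.8            # example amplitudes with |a|^2 + |b|^2 = 1
    A = np.array([a, b])       # A = a|0> + b|1>
    B = np.array([1.0, 0.0])   # B = |0>

    state = np.kron(A, B)      # uncorrelated joint state a|00> + b|10>

    # CNOT with A as control and B as target: swaps |10> <-> |11>
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    entangled = CNOT @ state
    print(entangled)               # [0.6 0.  0.  0.8]   i.e. a|00> + b|11>
    print(np.abs(entangled) ** 2)  # [0.36 0.  0.  0.64] -> you only ever measure 00 or 11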


This was very helpful, but I just can’t decipher this central paragraph

> Consider two inputs A and B, where A is a mixture of the zero-state (|0>; ground) and the one-state (|1>; excited) a|0> + b|1> (such that |a|^2 + |b|^2 = 1), and B is the pure zero-state |0>.

I’m guessing the notation is tripping me up, but I’m having trouble parsing this.

A: mix of |0> and |1>

B: apparently |0>, but elsewhere you refer to it being unconstrained.

CNOT(A,B) = a|0> + b|1> (such that |a|^2 + |b|^2 = 1)???

The relationship between the parenthetical and the a + b bit is impenetrable, but I can’t fathom what absolute value or squaring would do in the context of single qubits.


This is standard quantum mechanics notation plus some of the basic principles.

|x> is pronounced “ket x” (the second half of “bra-ket”, an intentional misspelling of “bracket”), and refers to a quantum state x. The quantum states |0> and |1> refer to the classical states we’re used to.

But as written in the GP comment, a general qubit state is given by a|0> + b|1>, where a and b are complex numbers. This is a “superposition”: the qubit occupies both states simultaneously and will resolve to one of them when observed/measured.

For any quantum state written this way, we know from quantum mechanics that measuring it will yield the state |0> with probability |a|^2 (the squared magnitude of the complex number a), and the state |1> with probability |b|^2.

The measurement must yield exactly one of |0> or |1>, so the probabilities must add up to 1, hence |a|^2 + |b|^2 = 1.
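If seeing it numerically helps: a qubit state can be written as a length-2 complex vector with |0> and |1> as the basis vectors (a toy illustration, not how real hardware is programmed):

    import numpy as np

    ket0 = np.array([1, 0])      # |0>
    ket1 = np.array([0, 1])      # |1>

    a, b = 0.6, 0.8j             # any complex a, b with |a|^2 + |b|^2 = 1
    psi = a * ket0 + b * ket1    # the superposition a|0> + b|1>

    probs = np.abs(psi) ** 2     # measurement probabilities |a|^2 and |b|^2
    print(probs)                 # [0.36 0.64]
    print(probs.sum())           # ~1.0 -- the two outcomes cover all possibilities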


Very helpful, thank you


> B: apparently |0>, but elsewhere you refer to it being unconstrained.

This is probably due to my wording not really being a parallel construction. The unconstrained qubit was A; in my example, B is the qubit that's in the ground state.

Sorry for the confusion! A sibling commenter elaborated on my parenthetical, which is really just a constraint on what values can be used for the probability amplitudes a and b. (I kind of hate the term "probability amplitude", but that's the standard nomenclature.)

There are two main observations from a and b being complex numbers:

1. You can turn these amplitudes into classical probabilities for getting a particular outcome out of measuring A by squaring their magnitudes. So if you had a = 3/5 and b = -4/5 i, then you'd have (3/5)^2 = 36% chance of observing |0> and |-4/5 i|^2 = 64% chance of observing |1>.

2. Perhaps the most important bit about quantum computing is the relative phase between these eigenstates, and the ability to achieve just the right interference pattern such that everything cancels out except for certain peaks that can give you something interesting. This is what differentiates QM from classical probability theory, and why we can't just take the existing quantum algorithms and reimplement them on classical computers.
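A minimal sketch of point 2, using the Hadamard gate H (which maps |0> to an equal superposition of |0> and |1>): apply it once and the outcome looks like a fair coin; apply it twice and the two paths into |1> cancel each other out, so you get |0> back with certainty. A classical 50/50 coin flipped twice would still be 50/50.

    import numpy as np

    H = np.array([[1,  1],
                  [1, -1]]) / np.sqrt(2)   # Hadamard gate

    ket0 = np.array([1.0, 0.0])            # start in |0>

    once = H @ ket0
    print(np.abs(once) ** 2)               # [0.5 0.5] -- looks like a fair coin flip

    twice = H @ H @ ket0
    print(np.abs(twice) ** 2)              # [1. 0.] -- the |1> amplitudes interfered away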


In the state a|0> + b|1>, both a and b are probability amplitudes (complex numbers). You obtain the probability by multiplying the amplitude by its complex conjugate (aa*), which gives the same result as squaring its magnitude, so it can be written |a|^2. Since there are two possible states in our system, their probabilities have to add to unity, so |a|^2 + |b|^2 = 1.

https://en.wikipedia.org/wiki/Probability_amplitude


I have a newbie question about quantum computing. Are there any quantum computers (regardless of the name or tech involved) in the wild now where you can pick a problem with an unknown answer, compute the result on the quantum computer, and later verify that the result was correct? Like, I don't know, calculating the sum 123 + 987, or something else that can later be reproduced on a "normal" computer. Sorry for the layman terms.


> Sorry for the layman terms.

Not at all! What you're describing is whether there are problems with classical solutions that have also been solved via quantum computers in practice.

It depends how exactly you define a quantum computer, but for a long time D-Wave was making a number of splashy claims about how it had some large number of entangled qubits in its "adiabatic quantum computer" and was using it to perform quantum annealing to solve various optimization problems. (Confusingly, despite the name, the problems it targets are classical optimization problems, and whether the annealing is meaningfully quantum is contested.) As part of the showcase, this system was used to solve a Sudoku puzzle, among other things.

There is no clear evidence that D-Wave's devices qualify as quantum computers (and even if they do, certainly not general-purpose ones capable of implementing Shor's algorithm), or that they achieved a meaningful quantum speedup. But they certainly did sell a handful of these devices for millions of dollars.

The general challenge with a "real" (more precisely: universal) quantum computer is that in order to get any useful computation out of it, you need a fair number of entangled qubits with high coherence times, and a fair number of gates with high fidelity.

The wording of your question suggests that you're aware of the various implementations of Shor's algorithm and how the circuitry was designed with the answer in mind. So instead let me point you at https://en.wikipedia.org/wiki/HHL_algorithm which solves a certain family of systems of linear equations, which we can verify using classical algorithms like Gaussian elimination or LU factorization.


nice dig at intel there :D


In all seriousness, this was intended to illustrate the enormous gulf between even this result (one error in a thousand computations) and the infamous Pentium FDIV bug.

(I would describe it as lightly poking fun at Intel... I'd hope there aren't still hard feelings over a 30-year-old hardware defect.)


It's a record for a fundamental piece of their hardware.

Overall it's a bit meh, since the one really hard problem that hasn't been solved in QC is noise becoming the dominant effect as you grow the number of qubits involved (errors compound at an exponential rate).

So it's "nice, but does it scale?".

They claim a 150-qubit system is coming soon. That would be a significant breakthrough, but that statement is also the quantum world's equivalent of "full self-driving this year".


The last time I checked (several years ago), another hard problem was that many of the widgets you need per qubit were macroscopic and difficult to miniaturize, i.e. we were, if not in the difference-engine era, then at best in the vacuum-tube era of quantum circuit construction.


Yeah, I can vouch for that as well. The miniaturization challenges can be solved; after all, there are chips on a 2 nm process node now, which is crazy.

But some problems in that context run into hard physical limits. Some serious researchers have proposed that it may not be feasible at all, that the effect of errors cannot be mitigated, and that it's a dead end.

I wish I could provide concrete examples but I am not an expert and have only heard about this from friends who work on it.


IIRC, the most serious skeptic who believes that real quantum computers are never going to be realized because of the noise wall is Gil Kalai. Look into his writings, or Scott Aaronson's arguments with him, or whatever.

I am a lot less optimistic about QC than I was 15 years ago before I really started learning about this stuff, but I'm not willing to completely rule it out.

Do I think there's a plausible roadmap to a general-purpose implementation of Shor's algorithm at a scale that can break RSA-2048? Not any time in the next 30 years. But sometimes progress takes great leaps forward after many years of stagnation, so I'm still a bit hopeful.


I don’t think they’ve set any records (unless you mean a personal record). These are pretty standard numbers for the top labs, as they themselves discuss.


Well, they claim to have set a record.

No idea if that's true, though.

Please provide sources so we can all learn more.


I am nowhere near an expert, but from what I know:

Having many qubits is of course vital to most quantum algorithms, but there's a good reason you can't simply run a quantum algorithm one qubit at a time until you've run them all: you need all the qubits in the processor to be entangled with each other, so that the wave function collapses across all of them.

Here IBM has achieved this reliably across 2 qubits and with record processing speed. I don't think they were the first to achieve it at all, but this is a new record in terms of the important metrics beyond number of qubits.


This is not IBM.


What's the rate of change in all three measures:

* Fidelity

* Qubit count

* Coherence time

Without a roadmap to some goal this is just random PR. What's the path to getting high fidelity across a large volume of qubits, sustained long enough to achieve real calculations?


This is interesting progress but the key is actual computational benchmark performance such as quantum volume.

It is cool to achieve 99.9% 2q fidelity... this is vitally important. And high coherence time is also important.

But you have to achieve at least this performance across all connected qubits, with tens or hundreds of qubits on the chip, and also get low-error state preparation and readout, all while performing useful quantum computing operations. That is vastly different from what IQM announced.
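To give a rough feel for why "at least this, everywhere, at scale" matters: quantum-volume-style benchmarks use square circuits (n qubits, n layers), so the error budget grows quickly with size. A crude estimate, assuming ~n/2 two-qubit gates per layer, independent errors, and ignoring SPAM and connectivity (all simplifications on my part):

    # Crude success estimate for an n-qubit, n-layer "square" circuit with
    # ~n/2 two-qubit gates per layer.  Ignores SPAM error, connectivity and
    # correlated noise, so treat it as an order-of-magnitude sketch only.
    def square_circuit_success(n, gate_fidelity=0.999):
        two_qubit_gates = n * (n // 2)
        return gate_fidelity ** two_qubit_gates

    for n in (5, 10, 20, 50):
        print(f"{n:>2} qubits -> {square_circuit_success(n):.2f}")
    #  5 qubits -> 0.99
    # 10 qubits -> 0.95
    # 20 qubits -> 0.82
    # 50 qubits -> 0.29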

<-- was system engineer for a quantum computing company.


I want to see it play doom


Might get 60fps in Minecraft...


Maybe now they'll be able to factorize 3*5 with the general Shor's algorithm. AFAIK all previous quantum factorization records used precomputed knowledge of the factors.


> Maybe now they'll be able to factorize 3*5 with the general Shor's algorithm

Nope, that still needs a lot more work...


Wait, what did they actually compute with the quantum computer then?


If you follow Wikipedia's reference [1] for the quantum factorization of 21, for instance, you read:

> We implement a scalable iterative quantum algorithm with a compiled version of the quantum order finding routine where the circuit is constructed for a particular factoring case (here N = 21 and x = 4) admitting an experimentally tractable implementation, as shown in Fig. 2a.

> There are several steps to compilation, common to previous demonstrations [6-9, 21]. Firstly, for N = 21 and x = 4, unitaries and their decompositions can be calculated explicitly, as can the full state evolution; secondly, redundant elements of these unitaries are omitted.

> Finally, and specific to our own demonstration, since only 3 of the possible 2^5 levels of the conventional 5-qubit work register are ever accessed, a single qutrit is used for the work register, instead of 5 qubits, to realise a hybrid qubit-qutrit system.

So the whole factorization is done on 1 qubit + 1 qutrit.
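For contrast, here's roughly what the general route looks like: Shor's algorithm reduces factoring to order finding, and only the order-finding step needs a quantum computer. In the sketch below that step is brute-forced classically, just to show the classical reduction wrapped around it (my own illustration, not the compiled circuit the paper describes):

    from math import gcd

    def find_order(x, N):
        # The part a quantum computer is supposed to do efficiently;
        # brute force here, which only works for tiny N.
        r, y = 1, x % N
        while y != 1:
            y = (y * x) % N
            r += 1
        return r

    def shor_classical_part(N, x):
        g = gcd(x, N)
        if g != 1:
            return g, N // g                  # lucky guess, no order finding needed
        r = find_order(x, N)
        if r % 2 != 0 or pow(x, r // 2, N) == N - 1:
            return None                       # unlucky choice of x, pick another
        f = gcd(pow(x, r // 2) - 1, N)
        return f, N // f

    print(shor_classical_part(15, 7))   # (3, 5)
    print(shor_classical_part(21, 2))   # (7, 3)

The compiled demonstrations hard-code so much of this structure into the circuit that, as noted upthread, the quantum part ends up doing very little of the actual work.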

[1] https://en.wikipedia.org/wiki/Integer_factorization_records#...



