The article appears to be saying that they are offering an actual quantum computer as a service, but as I recall, their previous offering was a simulation of a quantum computing environment. Yet, this same article refers to that thing as if it were a real QC. This makes me skeptical that the thing being offered here is a real quantum computer...Anyone here have any insight into whether this is legitimate?
Their previous offerings have been 2 physical quantum computers that can be accessed over the cloud. One with 5 qubits, and another with 16 qubits[0]. These computers have had thousands of users who have run millions of quantum algorithms [1].
We've had quantum computers since the 90's. We just haven't had "useful" quantum computers yet [0].
One implementation of quantum computers that is currently flourishing is built out of superconducting circuits. This makes it easier to scale the quantum computer because you can leverage existing nanoelectronic fabrication techniques [1].
The term doesn't make sense because you're not going to use groups of 8 qubits to represent characters on a screen.
We may actually need a new word for 2 qubits (dual-qubits? dubits?) because it seems 2 qubits are enough to break 1-bit of encryption, and I think I've read it's enough to simulate 1 atom, too.
From the link in that sentence: "The IBM Quantum Experience (QX) enables anyone to ... ... explore tutorials and simulations around what might be possible with quantum computing."
That sentence doesn't say the small 5- and 16-qubit computers don't exist.
Edit: if you read the whole link, the key description is:
"IBM’s quantum processor is made up of superconducting transmon qubits, located in a dilution refrigerator at the IBM Research headquarters at the Thomas J. Watson Research Center.
Users interact with the quantum processor through the quantum circuit model of computation, applying quantum gates on the qubits using a GUI called the quantum composer, writing quantum assembly language code[1] or through a Python API.[2]"
Which is to say IBM's scheme includes both a quantum computer and a simulator (both in the cloud, because you can't have supercooled chips in your basement). That's as one would expect: running "real" quantum computations is going to be more expensive and uncertain now, despite the hope of vast speedups later.
The last news from IBM on HN about their quantum offering was that they had optimised the memory required for simulating quantum computers. 50 qubits seemed to be the upper limit before, but they managed to simulate a 56-qubit circuit. It was a good read: https://arxiv.org/abs/1710.05867
They've had real hardware for a while, though, and they have 5-qubit and 16-qubit machines that anyone can try via IBM Q experience.
Not an expert, but I do find some of the phrasing weird. Early on they talk about improvements to connectivity and packaging, which would seem to only be relevant to real, live, physical quantum computers. There's also the professor talking about having students run programs on real quantum computers, without any qualifiers (that were included here, anyway) about how it's actually a simulation. Later it talks about software tools, including a Jupyter notebook for heaven's sake, without mentioning the vast gulf between that and what they seem to claim at the top of the article. I'm tempted to say that the author of this press release simply doesn't understand the difference.
Nothing specific to whether this claim is legitimate, but lately IBM has a tendency to make bold claims about tech that can't be backed up (e.g. Watson).
Presenting the pictures of a quantum computer with Adobe Flash is an impressive combination of technologies. I hope they use a different technology stack to develop the OS of that machine ;-)
They're definitely referring to a real ("universal") quantum computer, not the quantum annealing devices of D-Wave. Their computer will be able to run Shor's algorithm, if it is as advertised. If they have 20 qubits in two months, as they say they will, that seems like great progress compared to where I thought the field was. And if they have 50 within a couple years after that, it would be huge.
One caveat is they seem to be making a distinction between a "universal" quantum computer, and a "universal fault-tolerant" one. Fault-tolerance requires a lot more qubits so it's not clear to me how valuable even 50 will be, if not error-corrected. I must say I will be kind of amazed if quantum computers prove to be scalable the way classical ones have been. I'm inclined to believe that the "Church-Turing" thesis will ultimately prevail once the cost of construction of the machines is factored in (~linear growth for the classical machines, greater than polynomial for the quantum). But I've been wrong before.
You say "If they have 20 qubits in two months" and "if they have 50 within a couple of years". Doesn't the article say that they have 50?
I'm unclear on this, because I have yet to see a published result talking about running Shor's algorithm on a quantum computer beyond the NMR-based physical simulation and the adiabatic one (which is just annealing). What exactly is IBM claiming to have done here?
What are they doing about error correction? Not that it makes this any less impressive (everything starts somewhere), but are they simply ignoring the problem?
It's a bit too early for quantum computers to do fault-tolerant (error corrected) computation. This is because you need more than one physical qubit to make a logical (error corrected) qubit. You need 7 physical qubits to encode a logical qubit if you use the Steane code [0]. So with 50 qubits, they could theoretically make 7 error corrected qubits.
In fact, the smallest quantum error-correcting code using qubits uses only five qubits. [1] So you could fit 10 of these codewords into 50 qubits.
But that's not right. At least two more qubits are needed for fault-tolerant error correction. So that means you could fit nine codewords into 50 qubits (since 9x5+2=47 < 50).
But that's not right. There exist more efficient codes that put multiple encoded qubits into a single code block. [2] For example, three qubits can be encoded into eight, still with distance three. [3] Six of these could fit into 50 qubits (since 6x8+2=50), giving 18 encoded qubits.
But that's not right. The IBM systems are superconducting qubits, with very constrained interactions. Not every qubit can talk directly to every other qubit. So you'd probably want every code block to have its own extra qubits dedicated to error correction. If you need 8+2 qubits per code block, then you could fit five code blocks, for 15 encoded qubits, into 50.
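To make the packing arithmetic above concrete, here's a throwaway script (my own illustration; the code parameters are the ones cited above, and the ancilla assumptions are the rough ones from this thread, not anything about IBM's actual layout):

    # Back-of-envelope packing of logical qubits into 50 physical qubits.
    # Each entry: (code name, physical qubits per block, logical qubits per block).
    codes = [
        ("Steane [[7,1,3]]",     7, 1),
        ("five-qubit [[5,1,3]]", 5, 1),
        ("[[8,3,3]]",            8, 3),
    ]
    PHYSICAL = 50

    for ancillas in (0, 2):   # 2 shared ancillas is the rough assumption from this thread
        print(f"--- {ancillas} shared ancilla qubits ---")
        for name, per_block, logical in codes:
            blocks = (PHYSICAL - ancillas) // per_block
            print(f"{name}: {blocks} blocks -> {blocks * logical} logical qubits")

    # Per-block ancillas (the last case above): 50 // (8 + 2) = 5 blocks of [[8,3,3]].
    print("per-block ancillas:", 50 // (8 + 2) * 3, "logical qubits")

Running it reproduces the 7, 10, 9, 18 and 15 figures from the comments above.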
I recently had a conversation with a Microsoft quantum researcher, and this a close approximation to his answer. I just wanted a number. It's complicated.
I just saw a talk by a Bell Labs engineer (I forget which one) who was on the transistor team. He said it took two or more years to get the errors down to a useful level. It's easy to think transistors are magic solid-state devices; they were not.
But as they are analog devices, there are no errors with transistors
There are manufacturing defects, inefficient designs, inefficient production methods, and poor circuit designs (like using them in common base instead of common emitter - not wrong depending on the situation).
They don't seem to be ignoring it. They are using this idea of "quantum volume", which incentivizes them to take into account both the number of qubits and the error rate when determining how useful their quantum computer is at solving problems.
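As I understand the quantum volume idea (this is my rough reading of the shape of the metric, not IBM's exact definition), it's roughly the size of the largest "square" circuit, width equal to depth, that you can run before errors dominate. A toy sketch:

    # Toy sketch of a quantum-volume-style metric (an assumption about its rough
    # shape, not IBM's exact definition): achievable depth at width n is taken to
    # be about 1 / (n * eps), where eps is an effective two-qubit error rate.
    def quantum_volume(num_qubits, eps):
        best = 0
        for n in range(1, num_qubits + 1):
            depth = 1.0 / (n * eps)        # rough achievable depth at width n
            best = max(best, min(n, depth) ** 2)
        return best

    print(quantum_volume(50, eps=1e-2))    # error-limited: more qubits don't help
    print(quantum_volume(50, eps=1e-4))    # qubit-limited: the 50 qubits are the bottleneck

The point of the metric is exactly what the second case shows: piling on qubits is only useful if the error rate improves along with them.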
Does the appearance of quantum computers mean that the effectiveness of programming languages becomes secondary and that ease-of-use/expressiveness will be the most valued trait?
No. Quantum computers are very specialized devices. They allow massive speedups for certain applications, and no benefit at all for general-purpose computing.
I expect to see a flourishing of quantum programming languages with which to write code for quantum computers, but that's a different subject.
Good job, IBM. You may actually be ahead of other tech companies for once, although Google seems to be breathing down your neck in this area. Others seem to be at least a generation or two behind.
One other thing to note is that until recently it was believed that a 50-qubit quantum computer would achieve "quantum supremacy". However, IBM itself has shown that we can simulate 56 qubits on a classical supercomputer.
Now let's see who gets to the 100-qubit quantum computer first and achieves quantum supremacy.
Going by recent developments, quantum computers seem to be following a "Moore's Law" of sorts, where the number of qubits pretty much doubles every two years. We need a few more generations to be sure of this, but it does look like this is the rate at which they are going to evolve.
D-Wave, which isn't a universal quantum computer, has in fact been evolving at 2-4x every 2 years (closer to 2x for last few generations).
This is exciting because unlike classical computers, quantum computers increase their performance by much more than 2x if their number of qubits double every 2 years.
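One rough way to see why (hand-wavy, since state-space size is not the same thing as useful performance): the state space a quantum computer works in has dimension 2^n, so doubling the qubit count squares it, whereas doubling transistor count in a classical machine roughly doubles throughput.

    # State-space dimension grows as 2**n, so doubling the qubit count squares it.
    for n in (5, 10, 20):
        print(f"{n} qubits -> dimension {2**n:,}; {2*n} qubits -> dimension {2**(2*n):,}")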
> You may actually be ahead of other tech companies for once
IBM Research is and always has been an industrial research powerhouse. I doubt you could name a company that has contributed so much to so many different fields over the past few decades (Bell Labs is the universal exception).
From [1]:
"IBM Research's numerous contributions to physical and computer sciences include the Scanning Tunneling Microscope and high temperature superconductivity, both of which were awarded the Nobel Prize. IBM Research was behind the inventions of the SABRE travel reservation system, the technology of laser eye surgery, magnetic storage, the relational database, UPC barcodes and Watson, the question-answering computing system that won a match against human champions on the Jeopardy! television quiz show."
> won a match against human champions on the Jeopardy! television quiz show.
I'm _much_ more impressed by Google's AI Go champion. And besides Watson, what has IBM Research done in the past 20 years? Personally I would have thought they'd be more involved in self-driving cars.
(Disclaimer: I interned for IBM Research 10 years ago)
IBM isn't a software-only company: they do research in advanced materials and semiconductor physics, low-level computer architecture, integrated circuit testing, and so on.
When you look at the bigger picture, IBM Research has been doing amazing things.
IBM has been awarded more patents per year than any other company for the last 24 years[1]. In 2016 alone, IBM was granted ~22 patents per day, and ended up being ~2500 patents ahead of Samsung (#2).
I was responsible for a couple of those patents and I can tell you that's all nonsense. IBM employees are encouraged to patent _anything_, regardless of how useless or silly it may be.
Isn't that kind of the best strategy? You -- as a corporation -- want to make investments for the future. Each patent is a possible income stream in the future if the technology behind the patent somehow becomes "big". So it makes sense to patent everything you can, just for the chance of one of them to go "big".
Companies can't just spend money on research expecting no return, at least not for a long time. That's what academia is built to do.
No I mean I saw some patents on downright silly stuff. Things completely unrelated to IBM's business or even tech. IBM just wants to wave that number around.
Does that necessarily mean it is impressive? (I know it is, I am just asking for someone who may not understand what having IP at that scale enables a company to do.)
If I remember correctly, it takes a quantum computer with roughly 2N qubits to break N-bit RSA. So we should be ok until thousand-qubit systems are being developed.
Increasing key lengths is a short-term workaround, but the real solution is post-quantum public key encryption, which is currently an area of active research.
Do you know what the implications are for symmetric ciphers or [elliptic curve] Diffie-Hellman key exchange? I.e. will forward secrecy still hold up against such future quantum computing?
The implication for symmetric ciphers is that key lengths will need to be doubled. 128-bit ciphers like standard AES have 64-bit security against a quantum attack. I expect to see 256-bit keys adopted widely in the not-too-distant future.
I don't know off the top of my head what the implication for key exchange would be, but I know that anything that depends on the discrete logarithm problem for security is vulnerable to a quantum attack. I believe that includes all forms of Diffie-Hellman.
With a quantum computer and Grover's algorithm, 128-bit AES is breakable in 2^64 steps. But the quantum computer still needs to have a 128-bit quantum memory.
I’m not sure if you mean to be disagreeing here or simply adding color, but what you’re saying is the same as the parent comment. Grover’s algorithm allows symmetric key recovery for n bits in 2^(n/2) steps; as the parent commenter said, symmetric algorithm key sizes need to be doubled. A break in 2^64 steps is the same as 64-bit security, so changing the key size to 256-bit will offer 128 bits of security.
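Spelled out (trivial arithmetic, but it makes the doubling rule obvious):

    # Grover's algorithm searches a space of size 2**n in about 2**(n/2) steps,
    # so an n-bit symmetric key offers roughly n/2 bits of security against a
    # quantum brute-force attack.
    for key_bits in (128, 192, 256):
        print(f"AES-{key_bits}: about 2^{key_bits // 2} Grover steps, "
              f"i.e. ~{key_bits // 2}-bit quantum security")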
One nice property of ECC pubkeys is that they easily fit into UDP packets, URIs and other very compact data structures. Currently all post-quantum schemes have fairly bulky pubkeys.
> It is estimated that 2048-bit RSA keys could be broken on a quantum computer comprising 4,000 qubits and 100 million gates. Experts speculate that quantum computers of this size may be available within the next 20-30 years.
The paper is from 2009, so ~2030 to break 2048-bit RSA seems about right. If they can double the number of qubits every two years, then we should have:
100-qubit by 2020
200-qubit by 2022
400-qubit by 2024
800-qubit by 2026
1600-qubit by 2028
3200-qubit by 2030
6400-qubit by 2032
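That projection is trivial to script, under the assumptions stated above (roughly 50 qubits now, doubling every two years; the 4,096-qubit threshold is just 2 x 2048 from the rule of thumb discussed upthread, close to the paper's ~4,000 figure):

    # Naive doubling projection: ~50 qubits around 2018, doubling every two years.
    qubits, year = 50, 2018
    while qubits < 2 * 2048:
        qubits *= 2
        year += 2
        print(f"{qubits}-qubit by {year}")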
It's also possible the rate of progress will be slightly higher than 2x every 2 years, so doing it a few years sooner than that is not out of the question.
Also, you have to consider that once you get a quantum computer that can break 2048-RSA, you'll be able to break all the encrypted communications you've stored in the past few years, too. So you can't "switch-on" the quantum-resistant crypto in 2031 and think you're all good. You have to do it as soon as possible, especially after practical quantum computers that are capable of scaling in a scheduled way start appearing (which seems to have happened).
Plus, even if Google is super-quick to adopt quantum-resistant crypto, that doesn't mean the rest of the internet will be. It could take a few more years for that to happen.
Not too long ago I summarized every datapoint on that page. It is almost perfectly linear (there's 1 qubit lower somewhere, 1 qubit higher elsewhere); the noise is way lower than anything I was expecting to see.
>Also, you have to consider that once you get a quantum computer that can break 2048-RSA, you'll be able to break all the encrypted communications you've stored in the past few years, too
PFS can slow it down a bit, but not much. Assuming before PFS everyone changed their keys every 3 years, and with PFS they change them every 2 weeks, then it should be about 80x harder (slower to break the encryption). 80x harder may seem like a lot but it's not that much in the context of quantum computers.
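The ~80x is just the ratio of key lifetimes under those assumptions:

    # Rotating keys every 2 weeks instead of every 3 years multiplies the number
    # of session keys an attacker has to break.
    three_years_in_weeks = 3 * 52
    rotation_weeks = 2
    print(three_years_in_weeks / rotation_weeks)   # -> 78.0, i.e. roughly 80x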
Also, PFS uses 256-bit ECC, which only requires a 512-qubit quantum computer to break it. So it's possible that a 4,000 qubit quantum computer, or even a smaller one, could break ECC with PFS even faster than it can break 2048-bit RSA.
ECC is vulnerable to Shor's algorithm, which gives exponential speedup. A rough calculation implies that 256-bit ECC would take on the order of 25k quantum operations to break.
For AES, the quantum advantage is merely grover's algorithm, which allows you to invert a function in O(sqrt(N)) time instead of O(N) time. Basically, the algorithm would look like "compute AES-192 on a uniform random quantum state, then do Grover's algorithm for 2^96 timesteps." This needs as many qubits as it takes to compute AES-192 (which, since the algorithm isn't unitary, is strictly greater than 192 qubits).
More important is the fact that it's a very long-running application: it requires to you keep over 192 qubits coherent for a very long time, which is probably an order of magnitude or more in error correction requirements.
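For a sense of scale (purely illustrative arithmetic; the 1 GHz logical gate rate is my assumption, not a claim about any real or planned device), 2^96 sequential Grover iterations is an absurd amount of time even before error correction:

    # 2**96 serial steps at an assumed 1 GHz logical gate rate.
    steps = 2 ** 96
    seconds = steps / 1e9
    years = seconds / (3600 * 24 * 365)
    print(f"{years:.3e} years")   # on the order of 2.5e12 years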
Most postulations I've seen follow the axiom that the number of qubits must be no less than twice the number of bits used to generate the key. I found a pretty good summary of that thinking on StackExchange: https://security.stackexchange.com/questions/87345/how-many-...
Quantum computers will probably be simply add-on hardware to classical computers for a very long time. If a classical computer can solve something "efficiently", a quantum computer will not provide any speedup (but there are problems that are practically solvable only on quantum computers).
I expect quantum computers will always run in batch mode. Storing quantum state in classical memory is impossible, so task switching would be very problematic.
We can try to partition the machine so your process can use some qubits and mine uses others. I suppose a classical computer would be running the OS, at least at first.
> There will always need to be a classical computer running the quantum computer.
Not sure about the need, but it sure is convenient. Quantum computers are not always better for all kinds of problems and being able to route different jobs to different parts of the system should be an advantage. All this looks a lot like a digitally controlled analog computer or something that can program FPGAs on-the-fly.
A digitally controlled analog computer is a good analogy.
One big limitation of quantum computers is that they can't erase information without losing their quantumness. So simple things like if-then statements become impossible (you'd have to execute both branches every time).
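A toy numpy sketch of what "execute both branches" means: a controlled gate doesn't inspect the control qubit and pick a branch, it carries both outcomes along in superposition (hypothetical illustration, not any vendor's API):

    import numpy as np

    # Two-qubit statevector, basis order |00>, |01>, |10>, |11>; the control is the left qubit.
    state = np.array([1, 0, 0, 0], dtype=complex)    # start in |00>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
    I2 = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.kron(H, I2) @ state   # put the "condition" qubit in superposition
    state = CNOT @ state             # "if control then flip target", applied to both branches at once

    print(state)   # amplitude on |00> AND |11>: neither branch was discarded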
If quantum computing improves as it had been, how does this affect Bitcoin and ethereum as we now know it? Could the blockchain be compromised by a malicious actor who has much less than the majority?
My prediction: at some point Bitcoin will go through an upgrade process to transfer user wallets to a key pair based on a quantum resistant public key algorithm. Users will be given a time frame to transmit their coins to the new wallets, and after that time is expired, EC-based wallets will not be considered valid transaction sources any longer. Classic cash conversion in the digital age.
I wrote an article about this pending publication so I can't link you just yet.
Bitcoin is mostly safe. Transactions in the pending pool can be compromised but quantum computers would need to perform logical operations at around 660 MHz. Confirmed transactions and unspent transactions are completely safe. The earliest quantum computers won't be fast enough to perform this attack. The problem is easily remedied by hard forking to replace ECDSA with a different public key system. There will be a lot of advanced warning.
I've been following quantum computing since D-Wave made its press release some years back. Now I'm a complete skeptic.
The huge red flag I can't get over: if it is so, why can no one validate it after all this time?
Why is there the proverbial "it works but not in the way you think it works" (i.e., quantum annealing) or "it works but we can use non-QM systems to simulate it faster, better, cheaper by a factor of a trillion"?
If QM computing was truly feasible (assuming that QM does have an underlying phenomenon that is physically real), why are the results after all this time so fuzzy?
Because you have to build the quantum computer in a world that is overwhelmingly classical.
There are no qualifiers along the lines of "underlying phenomena". It's simply difficult to get a stable enough interface between the classical and the quantum, so you can control it, while at the same time isolating it enough that it doesn't decohere to classicality.
Who knows, maybe reliable scalable quantum computation truly isn't feasible for some reason, but if you study the physics, the fact that this is so hard is not really a surprise.
But they have already solved the engineering problems (at least 10 years ago).
They already have "qbits".
The interface issues look to be 98% solved.
And the temperature cooling, the EM shielding, and everything else (that is outside the circuitry design and the physical chipset), a person with a budget of 80,000 USD can recreate in his garage.
It's the results I can't understand.
Why can't X qbits, in the time they stay coherent, produce results that agree with the mathematical analysis of the setup? Why is it always off by a factor so large that it's not even productive for any task.
My understanding of it is not complete, this is why I ask. Is the interface issue only 2% solved (and not 98%), etc.?
Quantum computers can be simulated classically, in a timeframe exponentially related to the number of qubits. So as long as the quantum computer is not too big you can compare the results to an exact classical simulation and see if it works. The current record for classical simulation is 56 qubits (https://arxiv.org/abs/1710.05867). This is an active area of research, since we want to be able to check the early-generation quantum computers carefully.
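The memory wall is easy to see with a back-of-envelope calculation: a full statevector of n qubits is 2^n complex amplitudes (and, as I understand it, the 56-qubit paper gets around this by not holding the whole vector at once):

    # Memory for a brute-force n-qubit statevector at 16 bytes per complex
    # amplitude (double-precision real + imaginary parts).
    for n in (30, 40, 50, 56):
        gib = (2 ** n) * 16 / 2 ** 30
        print(f"{n} qubits: {gib:,.0f} GiB for the full statevector")

At 50 qubits that is already about 16 PiB, which is why ~50 qubits was long considered the practical limit for direct simulation.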