IBM Q system in development with working 50 qubit processor (ibm.com)
186 points by babak_ap on Nov 10, 2017 | 109 comments



The article appears to be saying that they are offering an actual quantum computer as a service, but as I recall, their previous offering was a simulation of a quantum computing environment. Yet this same article refers to that thing as if it were a real QC. This makes me skeptical that the thing being offered here is a real quantum computer... Anyone here have any insight into whether this is legitimate?


Their previous offerings have been 2 physical quantum computers that can be accessed over the cloud. One with 5 qubits, and another with 16 qubits[0]. These computers have had thousands of users who have run millions of quantum algorithms [1].

[0]: https://www-03.ibm.com/press/us/en/pressrelease/52403.wss

[1]: https://www.engadget.com/2017/11/10/ibm-50-qubit-quantum-com...


Wait, I didn't think we had actual quantum computers yet. https://en.wikipedia.org/wiki/Quantum_computing


We've had quantum computers since the 90's. We just haven't had "useful" quantum computers yet [0].

An implementation of quantum computers that is currently flourishing is one that is built out of superconducting circuits. This makes it easier to scale the quantum computer because you can leverage existing nanoelectronic fabrication techniques [1].

[0]: https://en.wikipedia.org/wiki/Timeline_of_quantum_computing

[1]: https://en.wikipedia.org/wiki/Superconducting_quantum_comput...


> "We've had quantum computers since the 90's".

It wasn't until 2005 that we had the first (probable) qubyte, created at the University of Innsbruck in Austria.


You don't need bytes to do computation.


Wouldn't a qubyte be 8 qubits?


The term doesn't make sense because you're not going to use groups of 8 qubits to represent characters on a screen.

We may actually need a new word for 2 qubits (dual-qubits? dubits?) because it seems 2 qubits are enough to break 1 bit of encryption, and I think I've read it's enough to simulate 1 atom, too.


I thought x qubits were equivalent to 2^x classical bits? So wouldn’t 2 “dubits” actually be four times (not twice) as good?


From your link: "A small 16-qubit quantum computer exists and is available for hobbyists to experiment with via the IBM quantum experience project."


From the link in that sentence: "The IBM Quantum Experience (QX) enables anyone to ... explore tutorials and simulations around what might be possible with quantum computing."


That sentence doesn't say the small 5- and 16-qubit computers don't exist.

Edit: if you read the whole link, the key description is:

"IBM’s quantum processor is made up of superconducting transmon qubits, located in a dilution refrigerator at the IBM Research headquarters at the Thomas J. Watson Research Center.

Users interact with the quantum processor through the quantum circuit model of computation, applying quantum gates on the qubits using a GUI called the quantum composer, writing quantum assembly language code[1] or through a Python API.[2]"

Which is to say that in IBM's scheme you have both a quantum computer and a simulator (both in the cloud, because you can't have supercooled chips in your basement). As one would expect, running "real" quantum computations is more expensive and uncertain today, despite the hope of vast speedups later.
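
For a flavor of what that Python API looks like, here is a minimal sketch in the Qiskit style (names taken from a later Qiskit release; the exact 2017-era interface differed, so treat this as illustrative): build a 2-qubit Bell state and sample it on a simulator backend.

    from qiskit import (QuantumCircuit, QuantumRegister,
                        ClassicalRegister, execute, Aer)

    # Build a 2-qubit circuit preparing a Bell state.
    q = QuantumRegister(2)
    c = ClassicalRegister(2)
    circuit = QuantumCircuit(q, c)
    circuit.h(q[0])          # Hadamard: put qubit 0 in superposition
    circuit.cx(q[0], q[1])   # CNOT: entangle qubit 0 with qubit 1
    circuit.measure(q, c)

    # Run on a local simulator; a real-device backend would be
    # swapped in here to use the actual hardware.
    backend = Aer.get_backend('qasm_simulator')
    counts = execute(circuit, backend, shots=1024).result().get_counts()
    print(counts)  # roughly half '00' and half '11'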


We don't.


These types of replies aren't helpful. If you disagree with the claim please provide some specific reasons why.


The last news from IBM on HN about their quantum offering was that they had optimised the memory required for simulating quantum computers. 50 qubits seemed to be the upper limit before, but they managed to simulate a 56-qubit circuit. It was a good read: https://arxiv.org/abs/1710.05867

They've had real hardware for a while, though, and they have 5-qubit and 16-qubit machines that anyone can try via IBM Q experience.

So this is 100% legit as far as I'm concerned.


There is no such thing as a quantum computer in 2017. It is fantasy.


It's both a real quantum computer and a simulation - at the same time - but you don't know until you observe the results!


Not an expert, but I do find some of the phrasing weird. Early on they talk about improvements to connectivity and packaging, which would seem to only be relevant to real, live, physical quantum computers. There's also the professor talking about having students run programs on real quantum computers, without any qualifiers (that were included here, anyway) about how it's actually a simulation. Later it talks about software tools, including a Jupyter notebook for heaven's sake, without mentioning the vast gulf between that and what they seem to claim at the top of the article. I'm tempted to say that the author of this press release simply doesn't understand the difference.


Nothing specific to whether this claim is legitimate, but lately IBM has a tendency to make bold claims about tech that can't be backed up (e.g. Watson).


Let's be charitable: IBM marketing has that tendency.

IBM does still do some amazing research.


Presenting pictures of a quantum computer with Adobe Flash is an impressive combination of technologies. I hope they use a different technology stack to develop the OS of that machine ;-)


Talking about the website's stack instead of the content is the HN equivalent of bike-shedding. It sounds like a contribution but it isn't.

https://en.wiktionary.org/wiki/bikeshedding


Jeez man let him make a joke


Looks like their publishing platform is so old that it's still using Flickr's Flash photo embedding widget.

TIL Flickr had such a widget.


Flickr's first incarnation was as a Flex app, if I remember right. The whole thing was Flash.


Well, making Flash work on mobile is one potential use for quantum computing I guess? :D


Quantum computer news! Everyone is excited and confused.


Is this a true quantum computer? The controversy around D-Wave was confusing.


They're definitely referring to a real ("universal") quantum computer, not the quantum annealing devices of D-Wave. Their computer will be able to run Shor's algorithm, if it is as advertised. If they have 20 qubits in two months, as they say they will, that seems like great progress compared to where I thought the field was. And if they have 50 within a couple of years after that, it would be huge.

One caveat is that they seem to be making a distinction between a "universal" quantum computer and a "universal fault-tolerant" one. Fault tolerance requires a lot more qubits, so it's not clear to me how valuable even 50 will be if not error-corrected.

I must say I will be kind of amazed if quantum computers prove to be scalable the way classical ones have been. I'm inclined to believe that the "Church-Turing" thesis will ultimately prevail once the cost of construction of the machines is factored in (~linear growth for the classical machines, greater than polynomial for the quantum). But I've been wrong before.
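
For context on what running Shor's algorithm would mean: the quantum subroutine only finds the period r of f(x) = a^x mod N; turning that period into factors is plain classical arithmetic. A toy sketch of that classical step (hypothetical helper, with N = 15 for illustration):

    from math import gcd

    def factors_from_period(N, a, r):
        # Classical post-processing in Shor's algorithm: given the period r
        # of f(x) = a^x mod N (the part the quantum computer finds),
        # derive nontrivial factors of N.
        if r % 2 == 1:
            return None  # odd period: retry with a different random a
        y = pow(a, r // 2, N)
        if y == N - 1:
            return None  # trivial square root: retry with another a
        return gcd(y - 1, N), gcd(y + 1, N)

    # Toy example: the period of 7^x mod 15 is 4.
    print(factors_from_period(15, 7, 4))  # -> (3, 5)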


You say "If they have 20 qubits in two months" and "if they have 50 within a couple of years". Doesn't the article say that they have 50?

I'm unclear on this, because I have yet to see a published result talking about running Shor's algorithm on a quantum computer beyond the NMR-based physical simulation and the adiabatic one (which is just annealing). What exactly is IBM claiming to have done here?


They're simulating 50; they're implementing 20.


I read it as saying they've passed some development milestones on 50, which is certainly all one could expect before the release of the 20.


What are they doing about error correction? Not that it makes this any less impressive (everything starts somewhere), but are they simply ignoring the problem?


It's a bit too early for quantum computers to do fault-tolerant (error-corrected) computation. This is because you need more than one physical qubit to make a logical (error-corrected) qubit. You need 7 physical qubits to encode a logical qubit if you use the Steane code [0]. So with 50 qubits, they could theoretically make 7 error-corrected qubits.

[0]: https://en.wikipedia.org/wiki/Steane_code


In fact, the smallest quantum error-correcting code using qubits uses only five qubits. [1] So you could fit 10 of these codewords into 50 qubits.

But that's not right. At least two more qubits are needed for fault-tolerant error correction. So that means you could fit nine codewords into 50 qubits (since 9x5+2=47 < 50).

But that's not right. There exist more efficient codes that put multiple encoded qubits into a single code block. [2] For example, three qubits can be encoded into eight, still with distance three. [3] Six of these could fit into 50 qubits (since 6x8+2=50), giving 18 encoded qubits.

But that's not right. The IBM systems are superconducting qubits, with very constrained interactions. Not every qubit can talk directly to every other qubit. So you'd probably want every code block to have its own extra qubits dedicated to error correction. If you need 8+2 qubits per code block, then you could fit five code blocks, for 15 encoded qubits, into 50.

Obviously this is really complicated.

[1] https://en.wikipedia.org/wiki/Stabilizer_code#Example_of_a_s...

[2] http://www.codetables.de/TableIII.php

[3] http://www.codetables.de/QECC.php?q=4&n=8&k=3
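
To make the packing arithmetic above concrete, here is a quick back-of-the-envelope sketch (a hypothetical helper; block sizes and ancilla counts are the ones quoted above):

    def encoded_qubits(total, block_size, logical_per_block, shared_ancillas=2):
        # How many logical qubits fit into `total` physical qubits if code
        # blocks of `block_size` share `shared_ancillas` extra qubits?
        blocks = (total - shared_ancillas) // block_size
        return blocks * logical_per_block

    print(encoded_qubits(50, 7, 1, 0))   # Steane code, no ancillas: 7
    print(encoded_qubits(50, 5, 1))      # [[5,1,3]] code + 2 ancillas: 9
    print(encoded_qubits(50, 8, 3))      # [[8,3,3]] code + 2 ancillas: 18
    print(encoded_qubits(50, 10, 3, 0))  # 8+2 qubits per block: 15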


I recently had a conversation with a Microsoft quantum researcher, and this is a close approximation to his answer. I just wanted a number. It's complicated.


Sounds like 15 is that number


It's also not 15 at the same time.


I just saw a talk by a Bell Labs engineer (I forget which) who was on the transistor team. He said it took two or more years to get the errors down to a useful level. It's easy to think transistors are magic solid-state devices; they were not.


But as they are analog devices, there are no "errors" with transistors as such.

There are manufacturing defects, inefficient designs, inefficient production methods, and poor circuit choices (like using them in common base instead of common emitter, which isn't wrong depending on the situation).


Every measurement has an error tolerance, starting with the uncertainty principle and including transistor manufacturing.


Yes, tolerances and margins, but no device is sold as "perfect"; quite the contrary, and the smaller the tolerance, the more costly it is.

So it's an error in the metrological sense, but not in the defective sense.


They don't seem to be ignoring it. They are using this idea of "quantum volume", which incentivizes them to take into account both the number of qubits and the error rate to determine how useful their quantum computer is in solving problems:

https://www.research.ibm.com/ibm-q/resources/quantum-volume....


Does the appearance of quantum computers mean that the effectiveness of programming languages becomes secondary and that ease-of-use/expressiveness will be the most valued trait?


No. Quantum computers are very specialized devices. They allow massive speedups for certain applications, and no benefit at all for general-purpose computing.

I expect to see a flourishing of quantum programming languages with which to write code for quantum computers, but that's a different subject.


Good job, IBM. You may actually be ahead of other tech companies for once, although Google seems to be breathing down your neck in this area. Others seem to be at least a generation or two behind.

One other thing to note is that until recently it was believed that a 50-qubit quantum computer would achieve "quantum supremacy". However, IBM itself has shown that we can simulate 56 qubits on a classical supercomputer.

https://www.ibm.com/blogs/research/2017/10/quantum-computing...

Now let's see who gets to the 100-qubit quantum computer first and achieves quantum supremacy.

Going by recent developments, quantum computers seem to be following a "Moore's Law" of sorts, where their number of qubits pretty much doubles every two years. We need a few more generations to be sure of this, but it does look like this is the rate at which they are going to evolve.

D-Wave, which isn't a universal quantum computer, has in fact been evolving at 2-4x every 2 years (closer to 2x for the last few generations).

This is exciting because unlike classical computers, quantum computers increase their performance by much more than 2x if their number of qubits double every 2 years.


> You may actually be ahead of other tech companies for once

IBM Research is and always has been an industrial research powerhouse. I doubt you could name a company that has contributed so much to so many different fields over the past few decades (Bell Labs is the universal exception).

From [1]:

"IBM Research's numerous contributions to physical and computer sciences include the Scanning Tunneling Microscope and high temperature superconductivity, both of which were awarded the Nobel Prize. IBM Research was behind the inventions of the SABRE travel reservation system, the technology of laser eye surgery, magnetic storage, the relational database, UPC barcodes and Watson, the question-answering computing system that won a match against human champions on the Jeopardy! television quiz show."

[1] https://en.wikipedia.org/wiki/IBM_Research


> won a match against human champions on the Jeopardy! television quiz show.

I'm _much_ more impressed by Google's AI Go champion. And besides Watson, what has IBM Research done in the past 20 years? Personally I would have thought they'd be more involved in self-driving cars.

(Disclaimer: I interned for IBM Research 10 years ago)


IBM isn't a software-only company: they do research in advanced materials and semiconductor physics, low-level computer architecture, integrated circuit testing, and so on.

When you look at the bigger picture, IBM Research has been doing amazing things.


What stands out to you in the last 20 years?


IBM has been awarded more patents per year than any other company for the last 24 years[1]. In 2016 alone, IBM was awarded ~22 patents per day, and ended up ~2,500 patents ahead of Samsung (#2).

[1]: http://www-03.ibm.com/press/us/en/pressrelease/51353.wss


I was responsible for a couple of those patents and I can tell you that's all nonsense. IBM employees are encouraged to patent _anything_, regardless of how useless or silly it may be.


Isn't that kind of the best strategy? You -- as a corporation -- want to make investments for the future. Each patent is a possible income stream in the future if the technology behind the patent somehow becomes "big". So it makes sense to patent everything you can, just for the chance of one of them to go "big".

Companies can't just spend money on research expecting no return, at least not for a long time. That's what academia is built to do.


No, I mean I saw some patents on downright silly stuff. Things completely unrelated to IBM's business or even tech. IBM just wants to wave that number around.


Does that necessarily mean it is impressive? (I know it is; I am just asking for someone who may not understand what having that scale of IP enables a company to do.)


Protective war chest: don't sue me, I probably own a patent you're infringing upon. At least my lawyers will make it seem so.


Very exciting, agree - although important to mention:

> unlike classical computers, quantum computers increase their performance by much more than 2x

...for particular tasks.


Some of those tasks are transformative!


At what point do we no longer trust public key cryptography (RSA)? Where's the break point?


If I remember correctly, it takes a quantum computer with roughly 2N qubits to break N-bit RSA. So we should be OK until systems with thousands of qubits are being developed.

Increasing key lengths is a short-term workaround, but the real solution is post-quantum public key encryption, which is currently an area of active research.


Do you know what the implications are for symmetric ciphers or [elliptic curve] Diffie-Hellman key exchange? I.e. will forward secrecy still hold up against such future quantum computing?


The implication for symmetric ciphers is that key lengths will need to be doubled. 128-bit ciphers like standard AES have 64-bit security against a quantum attack. I expect to see 256-bit keys adopted widely in the not-too-distant future.

I don't know off the top of my head what the implication for key exchange would be, but I know that anything that depends on the discrete logarithm problem for security is vulnerable to a quantum attack. I believe that includes all forms of Diffie-Hellman.


With a quantum computer and Grover's algorithm, 128-bit AES is breakable in 2^64 steps. But the quantum computer still needs to have a 128-bit quantum memory.


I’m not sure if you mean to be disagreeing here or simply adding color, but what you’re saying is the same as the parent comment. Grover’s algorithm allows symmetric key recovery for n bits in 2^(n/2) steps; as the parent commenter said, symmetric algorithm key sizes need to be doubled. A break in 2^64 steps is the same as 64-bit security, so changing the key size to 256-bit will offer 128 bits of security.
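
A toy illustration of that halving (a hypothetical helper, nothing more):

    def quantum_security_bits(key_bits):
        # Grover's algorithm inverts an n-bit keyed function in ~2^(n/2)
        # steps, so a quantum attacker effectively sees half the bits.
        return key_bits // 2

    print(quantum_security_bits(128))  # 64 -- uncomfortably low
    print(quantum_security_bits(256))  # 128 -- still considered secure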


Just wanted to clarify that in this case, a 64-qubit quantum computer is not enough to break said 64-bit security; you still need 128 qubits of memory.


SIDH (https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exch...) is one of the few popular post-quantum variants of DH key exchange, and it supports forward secrecy as well.


One nice property of ECC pubkeys is that they easily fit into UDP packets, URIs and other very compact data structures. Currently all post-quantum schemes have fairly bulky pubkeys.


SIDH keys are 330 bytes long when compression is used, so they too will fit nicely into network packets.


> It is estimated that 2048-bit RSA keys could be broken on a quantum computer comprising 4,000 qubits and 100 million gates. Experts speculate that quantum computers of this size may be available within the next 20-30 years.

https://www.entrust.com/wp-content/uploads/2013/05/WP_Quantu...

The paper is from 2009, so ~2030 to break 2048-bit RSA seems about right. If they can double the number of qubits every two years, then we should have:

100 qubits by 2020

200 qubits by 2022

400 qubits by 2024

800 qubits by 2026

1,600 qubits by 2028

3,200 qubits by 2030

6,400 qubits by 2032

It's also possible the rate of progress will be slightly higher than 2x every 2 years, so doing it a few years sooner than that is not out of the question.

Also, you have to consider that once you get a quantum computer that can break 2048-bit RSA, you'll be able to break all the encrypted communications you've stored in the past few years, too. So you can't "switch on" the quantum-resistant crypto in 2031 and think you're all good. You have to do it as soon as possible, especially after practical quantum computers that are capable of scaling in a scheduled way start appearing (which seems to have happened).

Plus, even if Google is super quick to adopt quantum-resistant crypto, that doesn't mean the rest of the internet will be. It could take a few more years for that to happen.
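
A back-of-the-envelope version of that schedule (assuming the 2x-every-2-years rate and the 50-qubit 2017 starting point from this thread; `year_reaching` is just an illustrative helper):

    from math import ceil, log2

    def year_reaching(qubits_needed, start_qubits=50, start_year=2017,
                      doubling_years=2):
        # Number of doublings needed (rounded up) times the doubling period.
        doublings = ceil(log2(qubits_needed / start_qubits))
        return start_year + doublings * doubling_years

    print(year_reaching(4000))  # ~2031 for the 4,000 qubits quoted above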


From the Wikipedia timeline[1], it seems to be growing linearly.

1 - https://en.wikipedia.org/wiki/Timeline_of_quantum_computing


To quote my thesis advisor's terrible joke: "Everything's linear to first order"


Not too long ago I summarized every datapoint on that page. It is almost perfectly linear (there's 1 qubit lower somewhere, 1 qubit higher elsewhere); the noise is way lower than anything I was expecting to see.


>Also, you have to consider that once you get a quantum computer that can break 2048-RSA, you'll be able to break all the encrypted communications you've stored in the past few years, too

isn't that what PFS is supposed to prevent?


PFS can slow it down a bit, but not much. Assuming before PFS everyone changed their keys every 3 years, and with PFS they change them every 2 weeks, then it should be about 80x harder (slower to break the encryption). 80x harder may seem like a lot but it's not that much in the context of quantum computers.

Also, PFS uses 256-bit ECC, which only requires a 512-qubit quantum computer to break it. So it's possible that a 4,000 qubit quantum computer, or even a smaller one, could break ECC with PFS even faster than it can break 2048-bit RSA.


> Also, PFS uses 256-bit ECC, which only requires a 512-qubit quantum computer to break it.

Grover's algorithm is a quadratic, not exponential speedup. It may require 512 qubits, but it still requires 2^128 time.


ECC is vulnerable to Shor's algorithm, which gives exponential speedup. A rough calculation implies that 256-bit ECC would take on the order of 25k quantum operations to break.


You have to divide the qubit count by 100 to account for error correction.


It's interesting to imagine what a connected planet with no reliable encryption would be like.

Although I guess we can always fall back on one-time pads.


Quantum computers don't break all encryption. There's no risk of that, fortunately.


Please expand on that. What encryption is vulnerable and what isn't?


I don't know a lot about it, only just enough to know what search term to look for, and Wikipedia had an article that probably fits the bill: https://en.m.wikipedia.org/wiki/Post-quantum_cryptography



Symmetric encryption will need to double key lengths, such as using 256-bit AES keys instead of 128-bit.

All currently popular forms of asymmetric / public key encryption (including RSA and ECC) are vulnerable to quantum attack.


How many qubits does it take to crack an AES-192 key?


For AES, the quantum advantage is merely Grover's algorithm, which allows you to invert a function in O(sqrt(N)) time instead of O(N) time. Basically, the algorithm would look like "compute AES-192 on a uniform random quantum state, then run Grover's algorithm for 2^96 timesteps." This needs as many qubits as it takes to compute AES-192 (which, since the algorithm isn't unitary, is strictly greater than 192 qubits).

More important is the fact that it's a very long-running application: it requires you to keep over 192 qubits coherent for a very long time, which is probably an order of magnitude or more in error correction requirements.


Most estimates I've seen follow the rule of thumb that the number of qubits must be at least twice the number of bits used to generate the key. I found a pretty good summary of that thinking on StackExchange: https://security.stackexchange.com/questions/87345/how-many-...


A lot, considering you'd need 2^96 quantum operations.


I look forward to supercool (no pun intended) qSeries mainframes and whatever wild OSs they'll run.

Right now, it looks like a batch-processing kind of thing, with a single problem using the machine at any given time and long setup/teardown times.


Quantum computers will probably be simply add-on hardware for classical computers for a very long time. If a classical computer can solve something "efficiently", a quantum computer will not provide any speedup (but there are problems that are practically solvable only on quantum computers).


I expect quantum computers will always run in batch mode. Storing quantum state in classical memory is impossible, so task switching would be very problematic.


We can try to partition the machine so your process can use some qubits and mine uses others. I suppose a classical computer would be running the OS, at least at first.


Yeah, if you have a 5000-qubit computer and two people want to run 2000-qubit jobs, I can see them being able to run simultaneously.

There will always need to be a classical computer running the quantum computer.


> There will always need to be a classical computer running the quantum computer.

Not sure about the need, but it sure is convenient. Quantum computers are not always better for all kinds of problems and being able to route different jobs to different parts of the system should be an advantage. All this looks a lot like a digitally controlled analog computer or something that can program FPGAs on-the-fly.


A digitally controlled analog computer is a good analogy.

One big limitation of quantum computers is that they can't erase information without losing their quantumness. So simple things like if-then statements become impossible (you'd have to execute both branches every time).


If quantum computing improves as it has been, how does this affect Bitcoin and Ethereum as we now know them? Could the blockchain be compromised by a malicious actor who has much less than the majority?


My prediction: at some point Bitcoin will go through an upgrade process to transfer user wallets to a key pair based on a quantum resistant public key algorithm. Users will be given a time frame to transmit their coins to the new wallets, and after that time is expired, EC-based wallets will not be considered valid transaction sources any longer. Classic cash conversion in the digital age.


That would be a fork, which is interesting. The recent BTC fork shows that this can be politically successful so there is hope.


I wrote an article about this; it's pending publication, so I can't link you just yet.

Bitcoin is mostly safe. Transactions in the pending pool can be compromised but quantum computers would need to perform logical operations at around 660 MHz. Confirmed transactions and unspent transactions are completely safe. The earliest quantum computers won't be fast enough to perform this attack. The problem is easily remedied by hard forking to replace ECDSA with a different public key system. There will be a lot of advanced warning.

I'm not sure on Ether.


IBM also plans to offer a 20-qubit hosted service this year.

https://techcrunch.com/2017/11/10/ibm-passes-major-milestone...


Are there some practical use cases for a 50-qubit processor, or is it more of a research thing?


I've been following quantum computing since D-Wave made its press release some years back. Now I'm a complete skeptic.

The huge red flag I can't get over: if it is so, why can no one validate it after all this time?

Why is there the proverbial "it works but not in the way you think it works" (i.e., quantum annealing) or "it works but we can use non-QM systems to simulate it faster, better, cheaper by a factor of a trillion"?

If QM computing were truly feasible (assuming that QM does have an underlying phenomenon that is physically real), why are the results after all this time so fuzzy?


Because you have to build the quantum computer in a world that is overwhelmingly classical.

There are no qualifiers along the lines of "underlying phenomena". It's simply difficult to get a stable enough interface between the classical and the quantum, so you can control it, while at the same time isolating it enough that it doesn't decohere to classicality.

Who knows, maybe reliable scalable quantum computation truly isn't feasible for some reason, but if you study the physics, the fact that this is so hard is not really a surprise.


But they have already solved the engineering problems (at least 10 years ago).

They already have "qbits".

The interface issues look to be 98% solved.

And the temperature cooling, the EM shielding, and everything else (that is outside the circuitry design and the physical chipset), a person with a budget of 80,000 USD can recreate in his garage.

It's the results I can't understand.

Why can't X qbits, in the time they stay coherent, produce results that agree with the mathematical analysis of the setup? Why is it always off by a factor so large that it's not even productive for any task?

My understanding of it is not complete; this is why I ask. Is the interface issue only 2% solved (and not 98%), etc.?


Do you think this can fk up RSA and render our industry obsolete?


The paper in the thread below claims that in ten years it might be the case:

https://news.ycombinator.com/item?id=15674970


Quantum computing reminds me of time travel for all the same reasons. This is not a quick win.

How would you even validate you built a quantum computer?


Quantum computers can be simulated classically, in a timeframe exponentially related to the number of qubits. So as long as the quantum computer is not too big you can compare the results to an exact classical simulation and see if it works. The current record for classical simulation is 56 qubits (https://arxiv.org/abs/1710.05867). This is an active area of research, since we want to be able to check the early-generation quantum computers carefully.
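
To see why naive simulation tops out around 50 qubits, here's the memory arithmetic for storing a full state vector (a rough sketch; the 56-qubit paper above sidesteps this blowup with cleverer tensor-slicing techniques):

    def statevector_bytes(n_qubits):
        # A full n-qubit state vector holds 2^n complex amplitudes,
        # 16 bytes each at double precision.
        return (2 ** n_qubits) * 16

    print(statevector_bytes(30) / 2**30)  # 16.0 GiB: fits on a workstation
    print(statevector_bytes(50) / 2**50)  # 16.0 PiB: supercomputer territory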


If you run something like Shor's algorithm you can validate it using a stopwatch and pocket calculator.


How do you validate you built a classical computer?




