I really suspect that if QC ever gets to the point where it's talking to an OS of some kind, it will become something like what a GPU is to a CPU - a coprocessor that specialized computation is offloaded to. I imagine this because a) classical computing will most definitely be required, even if just for communicating with I/O, b) QC systems are vulnerable to heat and can't be too near anything that runs hot, and c) not every type of computation benefits from QC.

I imagine we will even call it the QCU, the quantum computing unit. It'll probably remain a highly specialized piece of hardware for quite some time too, much like a dedicated hardware random number generator is to a normal computer. It would really need some killer application to end up in hardware mere mortals have access to.

What I really think QC will end up with is some specialized language, like what FPGAs have, that tells it how to initialize the system and measure it. I imagine Linux would likely be among the first OSes to actually support it, given that Linux is heavily used on servers, and cloud providers will likely be the first (after research labs) able to afford to own and run QCU hardware.

So I believe Linux is here to stay, even if it doesn't change all that much.




I was involved with a superconducting qubit setup where we had a compiler written in Python that converted a DSL into binaries that were loaded onto an FPGA, which generated the control signals and read out results that streamed back over PCIe. The host system ran Windows, but that was basically inconsequential. Linux has basically nothing to do with quantum computing, other than that it may be the preferred environment for some researchers to work in.
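
For a flavor of what that kind of stack involves, here's a toy sketch in Python; every name, the DSL's in-memory form, and the binary format are invented for illustration and have nothing to do with the actual system:

    # Toy pulse "DSL" compiler: pack pulse descriptions into fixed-width
    # binary words that a hypothetical FPGA firmware would consume.
    from dataclasses import dataclass

    @dataclass
    class Pulse:
        qubit: int        # target qubit index
        freq_hz: int      # drive frequency
        duration_ns: int  # pulse length

    def compile_to_words(program):
        words = []
        for p in program:
            words.append(p.qubit.to_bytes(1, "little")
                         + p.freq_hz.to_bytes(6, "little")
                         + p.duration_ns.to_bytes(2, "little"))
        return b"".join(words)

    # Drive qubit 0 at ~5.1 GHz for 20 ns; the resulting binary would be
    # streamed to the FPGA (over PCIe in the real setup).
    binary = compile_to_words([Pulse(qubit=0, freq_hz=5_100_000_000, duration_ns=20)])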


At D-Wave, we have a software stack that (no surprise) is entirely Linux-based to talk to the QPU. After all, we expect users of classical computers to formulate and submit problems over a classical Internet using classical APIs, and we expect to hire classical developers to work on all of the parts of the system that aren't frozen down to a hair above absolute zero.

You're not far off on some of your assumptions for how it ends up working in reality. We do see it as an accelerator for specific operations, particularly optimization problems; however, much as people originally saw GPUs as being for gaming, there's a lot of room for creative individuals to explore the capabilities of the system and see what other uses it can be put to.

We have a cloud IDE now with excellent visualization tools, and it's free to get started. You should definitely go check it out, considering how interested you sound; it should be very educational. Any developer with some basic Python skill can learn how to fire off toy problems to it; the hard part, where you need more of a statistics and hard math background, is mapping real-world optimization problems to the Ising model we use.
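
For concreteness, the Ising model itself is just a quadratic energy function over ±1 spins; here's a minimal sketch in plain Python (no D-Wave specifics):

    # Ising objective: choose spins s_i in {-1, +1} to minimize
    #   E(s) = sum_i h_i*s_i + sum_(i<j) J_ij*s_i*s_j
    def ising_energy(s, h, J):
        energy = sum(h[i] * s[i] for i in h)
        energy += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        return energy

    # Toy instance: two spins coupled so they "want" to disagree (J > 0).
    h = {0: 0.0, 1: 0.0}
    J = {(0, 1): 1.0}
    print(ising_energy({0: +1, 1: -1}, h, J))  # -1.0, a ground state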


> At D-Wave

Great to hear from somebody at D-Wave - how is that project coming along? As far as I'm aware it got some real hype some years back, with names like Google and IBM looking to invest in the company/product, but I've not heard so much in the news since. I hope things are running smoothly over there; it was for sure one of the more promising approaches I saw.

> [..] much as people originally saw GPUs as being for gaming, there's a lot of room for creative individuals to explore the capabilities of the system and see what other uses it can be put to.

I suspect the killer application could actually end up being neural networks. You now have billions of parameters, and you are trying to find, as quickly as possible, a set of parameters that produces the desired output through several hidden layers. You have some robustness to noise, and you don't really care whether the answer is the final answer, just that some progress towards it is made. My guess is that we're probably at least 10 years away from having neural networks accelerated by QC, more likely 20-30.

> We have a cloud IDE now with excellent visualization tools, and it's free to get started. You should definitely go check it out, considering how interested you sound; it should be very educational. Any developer with some basic Python skill can learn how to fire off toy problems to it; the hard part, where you need more of a statistics and hard math background, is mapping real-world optimization problems to the Ising model we use.

I took a very quick crash course in quantum computing some 5 years ago (maybe more) and worked on some toy problems. I should check it out again. I guess you're now at the stage where you have some re-programmable setup? Or is this a simulation QC machine?

I would love to hear from you about what some people have been able to do in terms of applications.


> how is that project coming along?

It's actively in production now with our new Advantage chip, which has 5000+ qubits (exact yields vary, as fabrication is truly a difficult thing to get perfect, but each chip that's been made available to customers has at least 5000 to my knowledge).

> I hope things are running smoothly over there; it was for sure one of the more promising approaches I saw.

We have a decent number of investors, with NEC being our latest, and contributions from the Canadian government's various innovation funds are also helping. It's an expensive business to run, and we've been in it longer than anyone, which makes it clear that this is a long-term play. I personally liken it to investing in a Babbage or Turing sort of project: the seeds of some amazing future technology are clearly here, but the machine is not yet at the scale where it can unlock all the things we're dreaming about.

That's where the hybrid solvers come in: we can present a much larger working graph for Ising problems than fits on the QPU, chop the problem up, and get great-quality answers that really make use of the underlying quantum hardware while keeping up with the best classical software.
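
From the client side, a hybrid submission looks roughly like this with our Ocean SDK (a toy problem; assumes a Leap account and a configured API token):

    # pip install dwave-ocean-sdk; token configured via `dwave config create`
    import dimod
    from dwave.system import LeapHybridSampler

    # Toy binary quadratic model: two variables, each rewarded for being 1
    # but penalized for both being 1 at once.
    bqm = dimod.BinaryQuadraticModel({'a': -1.0, 'b': -1.0},
                                     {('a', 'b'): 2.0},
                                     0.0, dimod.BINARY)
    sampleset = LeapHybridSampler().sample(bqm)
    print(sampleset.first.sample, sampleset.first.energy)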

> I suspect the killer application could actually end up being neural networks.

Yes, that's long been one of our expected markets, and something we hope to see increase over the next few years. One issue is that a lot of ML developers approach things from the other end of the stack - like most hackers, picking up a new toolkit and exploring it from the outside in, rather than academically learning every aspect of the model and how it works inside. Unfortunately, it's that latter skillset that is still required to do the work of mapping an ML model onto the Ising model - way above my own personal ability to even approach.

> You have some robustness to noise, and you don't really care whether the answer is the final answer, just that some progress towards it is made.

This is exactly the strength of quantum annealing: getting lower-energy solutions in a shorter time than classical methods by using quantum-mechanical properties of the universe, like tunnelling.

> I took a very quick crash course in quantum computing some 5 years ago (maybe more) and worked on some toy problems. I should check it out again. I guess you're now at the stage where you have some re-programmable setup? Or is this a simulation QC machine?

No simulation at all! You fire up the IDE from your account at https://cloud.dwavesys.com/leap/ and you'll have an API token that works with our Ocean Tools SDK to submit problems to the live QPU. We also have Jupyter notebooks in our online training that submit problems live. And it's totally "reprogrammable" in the sense that you can submit whatever QUBO/Ising problem you want, within limits; it will be sampled and a solution returned to you within seconds.
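
For example, firing a toy QUBO at the live QPU takes only a few lines with Ocean (assuming the SDK is installed and your API token is configured):

    # Minimize x0 + x1 - 2*x0*x1 over x in {0, 1}: ground states are x0 == x1.
    from dwave.system import DWaveSampler, EmbeddingComposite

    Q = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): -2.0}
    sampler = EmbeddingComposite(DWaveSampler())  # minor-embeds onto the QPU graph
    sampleset = sampler.sample_qubo(Q, num_reads=100)
    print(sampleset.first.sample, sampleset.first.energy)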

> I would love to hear from you about what some people have been able to do in terms of applications.

Our big win recently was Volkswagen using the system to route buses in Lisbon live with the QPU as a back end. https://www.dwavesys.com/media-coverage/volkswagen-optimizes...

We also just got a paper in Nature Communications simulating magnets: https://www.nature.com/articles/s41467-021-20901-5.epdf and have done other work with spin glasses in the past.

Current customer efforts range from physics simulations to logistics work and anything else where optimization comes into play. There's a really wide range of optimization work people are already doing that can be tied into our system; for now, it's a matter of creating the right customer middleware to bridge the gap.


Or more likely it will be called a QPU then; that goes better with GPUs and CPUs.


They are already called QPUs.



