
This is actually a very cool demo! I ran some simulations similar to this one, but at some point I stopped trying so many different models for the same problem and moved on to harder tasks (multi-class classification, for example). In case it is of interest to anyone: the difficulty in scaling this kind of model lies in what would be the analogue of the "activation functions" of classical neural networks. The nice thing about the sigmoid is that it is monotonic, whereas everyday quantum gates are deeply rooted in periodic functions, so the chances of getting stuck in a bad local minimum rise absurdly fast as circuits get deeper or wider.
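For intuition, here is a toy sketch (mine, not from the demo) of the contrast: the expectation value of Z after a single RY(theta) rotation on |0> is cos(theta), so even a one-parameter "quantum neuron" already responds periodically, unlike the monotonic sigmoid.

    import numpy as np

    # Sigmoid: monotonic, so along this axis the gradient always points
    # the same way toward the optimum.
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Quantum analogue: RY(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>,
    # so <Z> = cos^2(theta/2) - sin^2(theta/2) = cos(theta): periodic.
    def expval_z_after_ry(theta):
        return np.cos(theta)

    thetas = np.linspace(-2 * np.pi, 2 * np.pi, 9)
    print(sigmoid(thetas))            # strictly increasing
    print(expval_z_after_ry(thetas))  # oscillates between -1 and 1

Stack a few of these periodic responses and the loss landscape starts to ripple, which is exactly where the bad local minima come from.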

Actually, the problem shown in the demo is pretty much identical to the initial project of my bachelor's thesis, and some PhD students took it and raised it to a whole new level (https://arxiv.org/abs/1907.02085). Some notions of quantum mechanics come in handy to understand the purpose of the cost functions and so on, but even if you just take on faith that the construction makes underlying sense, the end result is striking: an implementation that circumvents the local-minima problem and yields meaningful results at attainably small scales.
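For readers who don't want to open the paper: the core trick is "data re-uploading", i.e. feeding the input into the circuit again at every layer, interleaved with trainable parameters. A minimal single-qubit sketch in plain numpy (my own simplification; the exact layer structure and cost in the paper differ):

    import numpy as np

    def ry(theta):
        """Rotation about Y as a 2x2 unitary."""
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    def classify(x, params):
        """Single-qubit classifier: re-upload the 2D point x in every layer."""
        state = np.array([1.0, 0.0])          # start in |0>
        for w, b in params:                   # one (weight vector, bias) per layer
            state = ry(np.dot(w, x) + b) @ state
        return abs(state[0]) ** 2             # overlap with |0> = class-0 score

    def cost(data, labels, params):
        """Mean squared distance to the label state (|0> for 0, |1> for 1)."""
        preds = np.array([classify(x, params) for x in data])
        return np.mean((preds - (1.0 - labels)) ** 2)

Each layer consumes the raw features again, which is what lets a single qubit draw non-trivial decision boundaries.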




Oh yes, this is an amazing paper. I actually decided to include single-qubit circuits after reading it.

Moving to higher dimensions and more complex problems is indeed challenging. I think one key to the problem is how you design the circuits. Some elementary points are explored here: https://qml.entropicalabs.io/native_gates.pdf

But there are also things that are hard to grasp on paper, with theoretical reasoning alone. This "demo" is actually a derivative of the "quantum machine learning lab" we use internally to explore circuits and visualize loss landscapes. You can see it in the current demo: when you change circuits and use data re-uploading (as in the article), the convexity breaks apart and local minima appear everywhere. As a consequence, the learning converges poorly, if at all.
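One cheap way to see this (an illustrative sketch, reusing the toy classifier from upthread, not our actual lab code) is to scan a 2D slice of the cost over two parameters while the rest stay fixed; with one layer the slice is smooth, and adding re-uploading layers makes the ripples and local minima appear:

    import numpy as np

    # Toy dataset: points inside a circle are class 1 (classify/cost as above).
    rng = np.random.default_rng(0)
    data = rng.uniform(-1, 1, size=(50, 2))
    labels = (np.linalg.norm(data, axis=1) < 0.7).astype(float)

    grid = np.linspace(-np.pi, np.pi, 60)
    landscape = np.empty((60, 60))
    for i, a in enumerate(grid):
        for j, b in enumerate(grid):
            params = [(np.array([a, b]), 0.0)]   # one layer, two weights, zero bias
            landscape[i, j] = cost(data, labels, params)
    # landscape is now a 60x60 slice of the cost; plot it as a
    # heatmap and the (non-)convexity is immediately visible.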

We are currently working on the MNIST dataset (handwritten digits) with 5-qubit circuits and have some first encouraging results. And you are right: a well-chosen cost function is an important part of the equation :-)

We are still in the infancy of the field, but I think we will be able to scale these techniques to really hard, big problems. The classical perceptron dates back to the late 1950s, and it took several more decades to evolve the idea into something practical. The good news is that quantum machine learning can follow in the footsteps of classical machine learning and progress a lot faster.

btw: we are hiring at Entropica Labs (both internships and long-term positions). If you want to keep working on these problems, you should definitely apply.


I'm truly convinced that exploring landscapes is the way to build higher-level intuition in the circuit-design domain. Still, when I discussed this point with someone in the field, they told me we are still missing a major breakthrough: right now we only have a handful of (more or less interesting and successful, but ultimately) arbitrary approaches to the problem, each of them better suited to one type of problem.

The hope is that someone in academia develops the mathematical apparatus the rest of us need to keep missteps to a minimum. Until then, I don't believe we will get much closer to a general solution, although of course the more knowledge we accumulate about particular cases, the better!

I'll contact you regarding those positions :D

Cheers!

PS: out of curiosity, which version of MNIST do you work with? Binary (B&W, 0 or 1) or grayscale? And at what image resolution?


Yes, I agree with you, we need more theory, more mathematical understanding.

MNIST in grayscale, downsampled to 7x7 pixels = 49 continuous variables, which means we can already do something with 5 qubits.
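For a sense of scale (my own back-of-the-envelope arithmetic, not necessarily their encoding): a layer of general single-qubit rotations on 5 qubits takes 3 angles per qubit, i.e. 15 feature slots per layer, so 49 pixels fit into 4 re-uploading layers with a bit of padding:

    import math

    n_qubits = 5
    features = 49                                   # 7x7 grayscale pixels
    angles_per_qubit = 3                            # a general SU(2) rotation
    slots_per_layer = n_qubits * angles_per_qubit   # 15 slots per layer
    layers = math.ceil(features / slots_per_layer)  # 4 layers needed
    padding = layers * slots_per_layer - features   # 11 slots left to pad
    print(layers, padding)                          # 4 11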

Yes, hope to see you around!



