Oh yes, this is an amazing paper. I actually decided to include single qubit circuits after reading it.
Moving to higher dimensions and more complex problems is indeed challenging. I think one key to the problem is how you design the circuits. Some fundamental points are explored here: https://qml.entropicalabs.io/native_gates.pdf
But there are also things that are hard to grasp on paper, with only theoretical reasoning. This "demo" is actually a derivative of our "quantum machine learning lab", which we use to explore circuits and visualize loss landscapes. As you can see in the current demo, when you change circuits and use data re-uploading --as in the article-- the convexity breaks apart and local minima appear everywhere. As a consequence, the learning does not converge well, if at all.
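If you want to poke at this effect outside the demo, here is a minimal sketch with PennyLane: a single-qubit data re-uploading circuit and a 2D scan of its loss surface. The toy dataset, layer count and parameter choices are purely illustrative, not taken from our lab code.

```python
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=1)
n_layers = 3  # try 1 vs 3 to see how re-uploading reshapes the surface

@qml.qnode(dev)
def circuit(weights, x):
    # Each layer re-uploads the scalar input x between trainable rotations.
    for w in weights:
        qml.RY(x, wires=0)       # data encoding, repeated every layer
        qml.RY(w[0], wires=0)    # trainable rotation
        qml.RZ(w[1], wires=0)
    return qml.expval(qml.PauliZ(0))

def loss(weights, xs, ys):
    # Mean-squared error between <Z> and the +/-1 labels.
    preds = np.array([circuit(weights, x) for x in xs])
    return np.mean((preds - ys) ** 2)

# Toy 1D dataset: label +1 inside a band around zero, -1 outside.
xs = np.linspace(-np.pi, np.pi, 40)
ys = np.where(np.abs(xs) < 1.0, 1.0, -1.0)

# Scan the loss over the first layer's two angles, other layers held at zero,
# as a small 2D window into the full landscape discussed above.
rest = np.zeros((n_layers - 1, 2))
grid = np.linspace(-np.pi, np.pi, 15)
landscape = np.array([[loss(np.vstack([[a, b], rest]), xs, ys)
                       for b in grid] for a in grid])
print(landscape.shape)  # 15 x 15 grid, ready to plot as a heat map
```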
We are currently working on the MNIST dataset (handwritten digits) with 5-qubit circuits and have some first encouraging results. And you are right: well-chosen cost functions are an important part of the equation :-)
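For a concrete flavor of how a digit can even fit on 5 qubits, here is a toy sketch using amplitude encoding (2**5 = 32 amplitudes, so the image is downsampled to 32 pixel averages). The downsampling, ansatz and readout here are just one common choice for illustration, not a description of our actual pipeline.

```python
import numpy as np
import pennylane as qml

n_qubits = 5
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(weights, pixels):
    # Load 32 normalized pixel values as the amplitudes of the 5-qubit state.
    qml.AmplitudeEmbedding(pixels, wires=range(n_qubits), normalize=True)
    # A shallow trainable ansatz; StronglyEntanglingLayers is just a stand-in.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))  # sign of <Z_0> read out as a binary label

# Stand-in for a 28x28 MNIST digit, downsampled to 8x4 = 32 block averages.
image = np.random.rand(28, 28)
crop = image[2:26, 2:26]                                     # 24x24 central crop
pixels = crop.reshape(8, 3, 4, 6).mean(axis=(1, 3)).ravel()  # 32 values

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape)
print(classifier(weights, pixels))
```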
We are still in the infancy of the domain, but I think we will be able to scale these techniques to really hard, big problems. Classical perceptrons date back to the sixties, and a few more decades were needed to evolve the idea into something practical. The good thing is that quantum machine learning can follow in the footsteps of classical machine learning and progress a lot faster.
btw: we are hiring at Entropica Labs (both interns and long-term positions). If you want to continue working on these problems, you can definitely apply.
I'm truly convinced that exploring landscapes is the way to build higher-level abstraction skills in the design domain. Yet, discussing this point with someone in the field, they told me we're still lacking a major breakthrough: right now we only have several (more or less interesting/successful, but still) arbitrary approaches to the problem, each of them better suited to one type of problem.
The hope is that someone in academia develops the mathematical apparatus the rest of us need to keep wrong steps to a minimum. I believe that until then we will not get much closer to a general solution, although of course the more knowledge we accumulate about particular cases, the better!
I'll contact you regarding those positions :D
Cheers!
PS: out of curiosity, which version of MNIST do you work with? Binary (black and white, 0 or 1) or grayscale? And at what image resolution?