> For a long time scientific computing has kept machine learning at arm's length because its lack of interpretability and structure means that, despite its tremendous predictive success, it is almost useless for answering these kinds of central scientific questions.
> However, the recent trend has been to merge the two disciplines, allowing explainable models that are data-driven, require less data than traditional machine learning, and utilize the knowledge encapsulated in centuries of scientific literature.
Recent trend?? This is the norm in every scientific field I've even slightly worked in... This quote just described what we call "inverse theory" in the geosciences. It's the root of pretty much everything we do and has been for many decades. Those of us in the scientific computing world have hardly been keeping machine learning at arm's length... A lot of machine learning methods come from scientific fields (e.g. Gaussian processes).
There's more to machine learning than CNNs, and many fields have been using machine learning for much longer than terms like "data science" and "AI" have been around. We just trend towards parametric approaches or approaches that can be reasoned about more clearly. (E.g. lots of convex optimization problems and interpolation/"super-resolution" problems.) Those are machine learning methods as well.
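To make that concrete, here's a tiny self-contained sketch (the data and parameter choices are made up for illustration): ridge regression is exactly the kind of thing I mean -- a convex optimization problem that is also a textbook parametric machine learning method, and one you can reason about in closed form.

```python
# Ridge regression: minimize ||Xw - y||^2 + lam * ||w||^2.
# Convex, parametric, and solvable in closed form via the
# normal equations (X^T X + lam I) w = X^T y.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear data: y = X @ w_true + noise
X = rng.normal(size=(100, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)

lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w)  # recovers something close to w_true, and every entry is interpretable
```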
All that said, most of what they're talking about in this article is about approximating solutions to differential equations with purely non-parametric approaches. That is somewhat new and is a much narrower topic, which this article does a nice job of describing. (I don't really take objection to the article as a whole, but a few sentences in it strike me as presumptuous.)
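For anyone who hasn't seen that narrower topic in action, here's a rough sketch of a differential equation solve recast as a fitting problem. To stay dependency-free I use a plain polynomial basis with NumPy; in the article's setting a neural network plays the role the basis expansion plays here (the problem and basis choice are my own illustrative assumptions):

```python
# Approximate the solution of u'(t) = -u(t), u(0) = 1 on [0, 2]
# by a linear combination of basis functions, chosen so the residual
# of the differential equation vanishes at collocation points.
import numpy as np

deg = 8                                # polynomial basis t^0 .. t^deg
t = np.linspace(0.0, 2.0, 40)          # collocation points

# Basis values and derivatives: Phi[i, k] = t_i^k, dPhi[i, k] = k * t_i^(k-1)
Phi = np.vander(t, deg + 1, increasing=True)
dPhi = np.zeros_like(Phi)
dPhi[:, 1:] = Phi[:, :-1] * np.arange(1, deg + 1)

# One least-squares system: u'(t_i) + u(t_i) = 0 at every collocation
# point, plus a final row pinning the initial condition u(0) = 1.
A = np.vstack([dPhi + Phi, Phi[:1]])
b = np.concatenate([np.zeros_like(t), [1.0]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u = Phi @ c
print(np.max(np.abs(u - np.exp(-t))))  # tiny error vs the exact solution e^{-t}
```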
>There's more to machine learning than CNNs.
Most definitely agreed. I think there's a lot to be done in utilizing all of the methods of machine learning, neural networks and beyond, automatically within scientific simulation codes.
(edit: And I just realized you're the author -- excellent work by the way!)
But if there's one I missed, I'd love to hear about it and start tracking the project!
For toying with autodiff and basic CNNs, CPU works just fine by the way...
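E.g., a forward-mode autodiff with dual numbers is a couple dozen lines of plain Python and runs anywhere (everything below is just an illustrative toy):

```python
import math

class Dual:
    """A number carrying a value and its derivative w.r.t. one input."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

def sin(x):  # chain rule for a primitive; expects a Dual
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def f(x):
    return x * x + sin(x)   # f(x) = x^2 + sin(x)

x = Dual(1.5, 1.0)          # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.der)         # f(1.5) and f'(1.5) = 2*1.5 + cos(1.5)
```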
This appears to finally be starting to change. See:
I guess the more important question... Whyyyyyyyyyyyyy
Despite all the talk about autodiff this or that, the stuff that matters is implemented by hand by Nvidia and Intel engineers, and then high-level libraries build on top. AMD is simply lagging in providing low-level C libraries and GPU kernels for that.
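You can see that layering from any high-level language. NumPy, for instance, doesn't implement the matmul hot loop itself; it dispatches to whichever hand-tuned BLAS it was built against (MKL, OpenBLAS, Accelerate, ...). A quick illustrative check:

```python
import time
import numpy as np

np.show_config()    # prints which BLAS/LAPACK implementation is linked in

n = 2000
rng = np.random.default_rng(0)
a = rng.normal(size=(n, n))
b = rng.normal(size=(n, n))

t0 = time.perf_counter()
c = a @ b           # the actual work happens in the vendor's dgemm kernel
print(f"{n}x{n} matmul: {time.perf_counter() - t0:.3f} s")
```

The gap between a naive triple loop and that call is typically orders of magnitude, which is exactly why those hand-written vendor kernels are the part that matters.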
For example, let me chip in with the libraries I develop, in Clojure, no less. They support BOTH Nvidia GPU AND AMD GPU backends. Most of the stuff is equally good on AMD GPU and Nvidia GPU. With less fuss than in Julia and Python, I'd argue.
Check out Neanderthal, for example: https://neanderthal.uncomplicate.org
Top performance on Intel CPU, Nvidia GPU, AND AMD GPU, from Clojure, with no overhead, faster than NumPy etc. You can even mix all three in the same thread with the same code.
Lots of tutorials are available at https://dragan.rocks
I'm writing two books about that:
Deep Learning for Programmers: An Interactive Tutorial with CUDA, OpenCL, MKL-DNN, Java, and Clojure 
Numerical Linear Algebra for Programmers: An Interactive Tutorial with GPU, CUDA, OpenCL, MKL, Java, and Clojure 
Drafts are available right now at https://aiprobook.com