There is a lot of research, going back to the 1930s, on this problem: how do you estimate a potential energy surface from available data? The machine learning approach is really just the variational approach, but using very expensive (lots of parameters) functions instead of cleverly choosing your fitting function so that its parameters are physically meaningful and significantly fewer in number. The first helps you build a meaningful story of the underlying physics, while the second means you need fewer data points.
One benefit of ML is being able to replace clever subject matter experts (who may be expensive and hard to identify) with ML generalists.
Your data needs to be very cheap.
And you don't have to be that clever. For periodic functions, sines and cosines; for potential energy, quadratics, polynomials, and exponentials do a great job.
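As a minimal sketch of that point (toy data, all numbers made up): a three-parameter harmonic fit recovers physically meaningful constants from a handful of noisy samples, because each parameter means something.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy example: fit a harmonic (quadratic) potential to sampled energies.
# All three parameters are physically meaningful: k is a stiffness,
# x0 the equilibrium position, V0 an energy offset.
def harmonic(x, k, x0, V0):
    return 0.5 * k * (x - x0) ** 2 + V0

rng = np.random.default_rng(42)
x = np.linspace(-2.0, 2.0, 30)                 # only 30 data points
V = harmonic(x, 3.0, 0.5, 1.0) + 0.05 * rng.normal(size=x.size)

params, _ = curve_fit(harmonic, x, V)
print("k, x0, V0 =", params)   # recovers roughly (3.0, 0.5, 1.0)
```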
One of the key things that makes neural networks generally inferior here is that you need enough data to learn both the governing equations (the always-true physics, e.g. Newton's laws) and the constitutive law (the part of the physics that applies only to your problem, e.g. a particular material's stress-strain relation), as opposed to just the constitutive law.
Of course there are notable exceptions. Neural networks have been doing a great job of processing microscopy data and building 3D models from a series of 2D images.
IME, imposing domain knowledge (usually directly in the model, sometimes also in the training data distribution) is crucial to getting things to work efficiently and robustly.
The primary downside is that they require more data to train.
It looks like this paper is getting at the same basic idea, though obviously actually fleshed out and with better results than I would have thought were possible. It's going not for anomaly detection but for actual forecasting, and it directly isolates the non-conservative force.
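As a toy version of that idea (my own sketch, not the paper's method): given trajectory data from a damped oscillator, you can regress the measured acceleration on position and velocity and split the result into a conservative part (the gradient of a potential) and a non-conservative residual.

```python
import numpy as np

# Toy decomposition (my sketch, not the paper's method): simulate a damped
# oscillator, then regress acceleration on x and v. The -k*x term is
# conservative (it comes from V(x) = k*x^2/2); the -c*v term is the
# non-conservative (dissipative) force we want to isolate.
k_true, c_true, dt = 4.0, 0.3, 0.01
x, v = 1.0, 0.0
xs, vs, accs = [], [], []
for _ in range(5000):
    a = -k_true * x - c_true * v        # ground-truth dynamics
    xs.append(x); vs.append(v); accs.append(a)
    v += a * dt                          # semi-implicit Euler step
    x += v * dt

# Linear least squares: a ~ -k*x - c*v, unknowns (k, c).
A = np.column_stack([np.array(xs), np.array(vs)])
coef, *_ = np.linalg.lstsq(A, np.array(accs), rcond=None)
k_fit, c_fit = -coef
print(f"conservative stiffness k = {k_fit:.3f}, non-conservative damping c = {c_fit:.3f}")
```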
I'll need to start reading the related papers. I wonder if people have tried doing this with nonlinear equations, or with systems where the dynamics aren't well defined? I imagine actually forecasting a nonlinear system wouldn't be possible beyond short time frames, but maybe you could still learn the underlying equation and dynamics. It would be interesting to see whether you could give a neural network a bunch of nonlinear data and have it find equilibrium points, identify regions where the dynamics are almost linear, etc.
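For reference, here's the classical (non-neural) version of that last idea, assuming the equations are already known: find equilibria with a root solver, then check how linear the dynamics are nearby via the Jacobian's eigenvalues.

```python
import numpy as np
from scipy.optimize import fsolve

# Classical baseline, assuming known dynamics: a damped pendulum,
# dx/dt = v, dv/dt = -(g/L)*sin(x) - c*v.
g, L, c = 9.81, 1.0, 0.5

def f(s):
    x, v = s
    return np.array([v, -(g / L) * np.sin(x) - c * v])

def jacobian(s, eps=1e-6):
    # Numerical Jacobian by central differences.
    J = np.zeros((2, 2))
    for i in range(2):
        d = np.zeros(2); d[i] = eps
        J[:, i] = (f(s + d) - f(s - d)) / (2 * eps)
    return J

# Find equilibria near two guesses: hanging down (x=0) and inverted (x=pi).
for guess in ([0.1, 0.0], [3.0, 0.0]):
    eq = fsolve(f, guess)
    eigs = np.linalg.eigvals(jacobian(eq))
    # Eigenvalues with negative real parts => locally stable, and the
    # dynamics are approximately linear in a neighborhood of eq.
    print(f"equilibrium at x = {eq[0]:.3f}: eigenvalues {np.round(eigs, 3)}")
```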
There is a special interest group at The Alan Turing Institute in the UK dedicated to working on problems of this sort.
There are a number of challenges that vary depending on both the system and the forcing.
Echo state networks do insanely well on the Mackey-Glass equation, even though ESNs are just randomly connected reservoirs that use linear regression on the output node.
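For the curious, a minimal ESN is only a few dozen lines of numpy. The reservoir weights are random and fixed; only the linear readout is ever trained. This is a loose sketch with untuned hyperparameters and a crude Euler discretization of Mackey-Glass, not a benchmark-grade setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mackey-Glass series via crude unit-step Euler integration of
# dx/dt = beta*x(t-tau) / (1 + x(t-tau)^10) - gamma*x(t).
def mackey_glass(n, tau=17, beta=0.2, gamma=0.1):
    x = np.full(n + tau, 1.2)
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + beta * x[t - tau] / (1 + x[t - tau] ** 10) - gamma * x[t]
    return x[tau:]

data = mackey_glass(3000)
train, test = data[:2000], data[2000:]

# Fixed random reservoir; spectral radius scaled below 1 for the echo
# state property. Only W_out below is trained.
n_res = 400
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run(u):
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, ut in enumerate(u):
        x = np.tanh(W_in * ut + W @ x)
        states[t] = x
    return states

washout = 200
S = run(train[:-1])[washout:]        # reservoir states after washout
y = train[1:][washout:]              # one-step-ahead targets

# "Linear regression on the output node": ridge-regularized least squares.
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)

# One-step-ahead prediction (reservoir restarted from zero, so the first
# few test steps are effectively pre-washout).
pred = run(test[:-1]) @ W_out
print("test RMSE:", np.sqrt(np.mean((pred - test[1:]) ** 2)))
```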
I agree with the content if not the style of your comment, though it might do better to say why the results are trivial.
I'm very interested in anomaly detection problems, so if you find anything good out there do comment back in!
>90% of all energy spent on compute will be on matrix multiplications.
Yo, your car's shocks need replacing. It is currently underdamped and going to bounce around like a Ford Fiesta in an LA earthquake.
Hey buddy, those pricey tuner shocks are problematically overdamped, which will lead to a rough ride, wear issues, and handling problems.
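For what it's worth, the underdamped/overdamped diagnosis in those jokes is itself a small parameter-fitting problem: estimate the damping ratio from ride data, e.g. with the logarithmic decrement. A toy sketch (the car and its numbers are made up):

```python
import numpy as np
from scipy.signal import find_peaks

# Toy damping diagnosis (made-up suspension data): estimate the damping
# ratio zeta from a decaying oscillation via the logarithmic decrement.
t = np.linspace(0, 5, 2000)
zeta_true, omega = 0.15, 8.0           # underdamped: zeta < 1
x = np.exp(-zeta_true * omega * t) * np.cos(omega * np.sqrt(1 - zeta_true**2) * t)

peaks, _ = find_peaks(x)
# Logarithmic decrement between successive peaks: delta = ln(x_i / x_{i+1}),
# then zeta = delta / sqrt(4*pi^2 + delta^2).
delta = np.mean(np.log(x[peaks][:-1] / x[peaks][1:]))
zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"estimated zeta = {zeta:.3f}")   # ~0.15 -> underdamped: fix your shocks
```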
I mean, some people take machine learning to include anything with parameter fitting, but it seems like classic engineering to me.