
Solving the chaotic three-body problem using deep neural networks (2019) - maeln
https://arxiv.org/abs/1910.07291
======
scottlocklin
These papers are such unutterable horse shit it makes my head spin. You know
what: Kriging on the data set (or any other kind of functional approximator)
would do the same thing a hell of a lot computationally faster on the 3-4
shitty solutions they demonstrated. As a bonus, it's more likely to conserve
angular momentum and energy.

The fact that HN weebs gobble this horse shit up as if it were _pâté de foie
gras_ is also depressing.

TLDR: physics nerds do an unimpressive thing with neural nets. You want to
look at something impressive and still mysterious involving physics and neural
doohickeys: why do echo state networks (reservoir computers that are
effectively projections onto a random hyperplane) reproduce chaotic time
series, most famously Mackey-Glass, so well?
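
For the curious, here's a minimal sketch of that setup: a fixed random reservoir driven by a Mackey-Glass series, where the only trained part is a ridge-regression readout. Reservoir size, spectral radius, scalings, and the one-step-ahead task are illustrative choices, not anything tuned or taken from a paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mackey-Glass delay equation, Euler-discretized with dt = 1 (a common benchmark setup)
def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10):
    hist = [1.2] * (tau + 1)
    for _ in range(n):
        x, x_tau = hist[-1], hist[-tau - 1]
        hist.append(x + beta * x_tau / (1.0 + x_tau ** p) - gamma * x)
    return np.array(hist[tau + 1:])

series = mackey_glass(3000)
series = (series - series.mean()) / series.std()

# Fixed random reservoir -- effectively a projection onto a random hyperplane
N = 300
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9

states = np.zeros((len(series) - 1, N))
x = np.zeros(N)
for t, u in enumerate(series[:-1]):
    x = np.tanh(W @ x + W_in * u)
    states[t] = x
targets = series[1:]                              # one-step-ahead targets

# Ridge-regression readout: the only weights that are ever trained
washout, n_train = 100, 2000
A, b = states[washout:n_train], targets[washout:n_train]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ b)

pred = states[n_train:] @ W_out
rmse = np.sqrt(np.mean((pred - targets[n_train:]) ** 2))
persistence = np.sqrt(np.mean((series[n_train:-1] - targets[n_train:]) ** 2))
print(rmse, persistence)
```

The readout here is only one-step-ahead; the genuinely mysterious demonstrations feed the prediction back in and free-run the closed loop for hundreds of steps.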

~~~
theincredulousk
Tell us how you really feel scott

~~~
soniman
As soon as I read that first sentence I said "Has to be Scott Locklin"

~~~
7thaccount
Same. But I figured it out in the second paragraph.

------
whinvik
I think a lot of machine learning applied to computational science problems is
basically function approximation. However, a lot of computational science
actually runs the other way: here's a good basis for approximating functions,
now use it to solve an ODE/PDE. Machine learning seems eternally stuck in the
former.
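
A toy version of the "good basis, now solve the ODE" workflow: approximate the solution of u' = -u, u(0) = 1 with a polynomial basis via least-squares collocation. The degree, node count, and weight on the initial condition are arbitrary illustrative choices:

```python
import numpy as np

deg = 8
t = np.linspace(0.0, 1.0, 40)          # collocation points on [0, 1]

# each column k: d/dt t^k + t^k, i.e. the residual u' + u for basis function t^k
K = np.arange(deg + 1)
A = K * t[:, None] ** np.clip(K - 1, 0, None) + t[:, None] ** K
rhs = np.zeros(len(t))

# append the initial condition u(0) = 1 as a heavily weighted extra equation
ic_row = np.zeros(deg + 1)
ic_row[0] = 1.0
A = np.vstack([A, 100.0 * ic_row])
rhs = np.append(rhs, 100.0)

c, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # basis coefficients
u = np.polyval(c[::-1], t)
err = np.max(np.abs(u - np.exp(-t)))          # exact solution is exp(-t)
print(err)
```

Same shape as the ML version (pick features, minimize a residual), but here the basis is chosen for the problem instead of learned from samples.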

~~~
godelski
> I think machine learning problems are function approximation.

FTFY. You would be entirely correct.

~~~
judofyr
Not sure if I agree. In physics you typically have the analytical solution,
but it's expensive to evaluate (an integral over many dimensions), so you use
machine learning as a cheap way to approximate it.

In most other cases of machine learning there is no "objective" solution and
hence no "target function" to approximate.
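
A toy sketch of that trade: a slow numerical integral stands in for the expensive "analytical" answer, and a cheap fitted polynomial stands in for the learned model. The integrand and degree are made up purely for illustration:

```python
import numpy as np

def expensive(a, n=20001):
    # the "objective" target: integral of exp(-a*t^2) over [0, 1], by fine trapezoid rule
    t = np.linspace(0.0, 1.0, n)
    v = np.exp(-a * t * t)
    dt = t[1] - t[0]
    return dt * (v[0] / 2 + v[1:-1].sum() + v[-1] / 2)

a_train = np.linspace(0.0, 3.0, 30)
y_train = np.array([expensive(a) for a in a_train])   # the slow part, done once

coef = np.polyfit(a_train, y_train, 6)                # cheap surrogate model

a_test = np.linspace(0.1, 2.9, 17)
err = max(abs(np.polyval(coef, a) - expensive(a)) for a in a_test)
print(err)
```

Evaluating the surrogate is a handful of multiplies instead of a 20,000-point quadrature, and because the target is objective, the approximation error is directly measurable.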

~~~
IvoDankolov
Well, all of supervised learning is basically approximating an unknown
function from a finite list of samples.

But it's still an approximation, with e.g. backpropagation 'simply' (in the
abstract mathematical sense) nudging weights along the negative gradient of
the error to get closer to the expected values.
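
Stripped down to one weight and one bias, that gradient-nudging loop looks like this (a hand-derived chain rule on a linear unit; the data and learning rate are purely illustrative):

```python
# fit y = 2x + 1 from samples by gradient descent on squared error
data = [(x, 2.0 * x + 1.0) for x in [-2, -1, 0, 1, 2]]
w, b, lr = 0.0, 0.0, 0.05

for _ in range(2000):
    gw = gb = 0.0
    for x, target in data:
        err = (w * x + b) - target      # d(loss)/d(prediction), up to a factor of 2
        gw += err * x                   # chain rule back through the multiply
        gb += err                       # chain rule back through the add
    w -= lr * gw / len(data)            # step along the negative gradient
    b -= lr * gb / len(data)

print(round(w, 3), round(b, 3))         # converges toward w = 2, b = 1
```

Deep networks just stack many such units and push the same chain rule through every layer.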

The vast majority of machine learning just builds on that by going deep (more
layers), automatically generating inputs (e.g. in game AIs playing against
themselves), etc.

One might argue that's even worse than function optimisation: you can only
vaguely guess at the target, so all your validation is suspect, and you have
to prove it using humans by, for instance, beating them at StarCraft.

------
ausbah
older discussion on why this paper isn't as cool as it appears

[https://www.reddit.com/r/MachineLearning/comments/dnic1x/n_n...](https://www.reddit.com/r/MachineLearning/comments/dnic1x/n_newton_vs_the_machine_solving_the_chaotic/)

------
acoye
Neural net trained on data obtained with … computation. Or else Mr Wolfram
wants to hear from you as you broke the `computational irreducibility`

------
theincredulousk
Isn't it, even in theory, impossible for any algorithm (be it an ANN or
otherwise) to provide a solution to an infinitely variable chaotic problem
based on solutions from a less-than-infinite set of other chaotic problems?

Isn't the key concept of chaotic problems essentially that they aren't
predictable, so there is no real "pattern" to train on, so to speak?

~~~
User23
If there weren't a real underlying system, how would the universe function
operationally? How do the bodies "know" how to move even though we can't
predict them beyond a certain accuracy?

~~~
jerf
Suppose we know the system's state and rules exactly. This can't be true in
the real universe, but we can construct classical, deterministic systems in
pure math that we can say that about, and even _those_ systems will exhibit
this characteristic we're talking about.

You can look at it from the point of view of, if we watch the system evolve,
can we tell whether the rules were violated at some point, by some arbitrarily
small amount? As the chaotic systems evolve, it becomes _harder and harder_ to
tell if that is the case. There isn't a discrete transition from knowing to
not knowing; our level of knowing goes down over time.

In information theory, we can see that as a loss of bits of precision on the
system, so more and more initial bits are required to make up for it. Since we
can't compute with real numbers, but only approximations given increasingly
more bits over time, even in the pure mathematical case where everything is
perfect and specified, we still lose this knowledge as the simulation
progresses. It's that much worse in the real world, where we don't even start
with all that many bits of precision.

It's not quite the question you asked, but... it's like the shadow of the
question you asked, and it's a bit easier to explain. (And reasonably
mathematically valid. You can characterize chaotic systems by how many bits
they lose per time unit.)
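
That bit loss can be watched directly in about the simplest chaotic system there is. Iterate the logistic map from two states that agree to roughly 40 bits and count how many bits of agreement remain; the map and tolerances here are just an illustration of the general point, not anything from the three-body problem:

```python
from math import log2

def f(x):
    return 4.0 * x * (1.0 - x)    # logistic map at r = 4, a textbook chaotic system

x, y = 0.3, 0.3 + 1e-12           # two states agreeing to ~40 bits
sep = {}
for step in range(1, 41):
    x, y = f(x), f(y)
    if step in (1, 10, 20, 30, 40):
        sep[step] = abs(x - y)
        print(step, round(-log2(sep[step]), 1))   # bits of agreement remaining
```

On average the separation doubles each step, so about one bit of predictive knowledge evaporates per iteration until, around step 40, the trajectories have nothing to do with each other.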

~~~
User23
This is such a great response that I’m not even going to voice my metaphysical
objections because I’m absolutely certain you already know them and perhaps
even explore them yourself. Have you got any recommendations for further
reading?

~~~
jerf
It depends on the direction you want to take, but there is a mathematical
basis to what I'm saying, not just a philosophical one. Unfortunately I've
never scared up the name of the concept, or at least not in any form I can
conveniently Google for. I'm not sure there is a non-textbook version of what
I'm talking about; I'd like to see it myself.

It's related to the ability to read a Lyapunov exponent as a measure of bit
loss. The Lyapunov exponent is easy to Google up, and if you understand that
and information theory, it's not a difficult leap to make, but I can't find
any nice explanation for people who don't already have those pieces.
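
For what it's worth, that reading can be checked numerically on a standard textbook example (the logistic map, chosen here purely for illustration): its Lyapunov exponent at r = 4 is exactly ln 2, and averaging log |f'(x)| along an orbit and dividing by ln 2 gives the bits of precision lost per iteration:

```python
from math import log

x, r = 0.2, 4.0
for _ in range(100):                  # discard the transient
    x = r * x * (1.0 - x)

n, total = 100_000, 0.0
for _ in range(n):
    total += log(abs(r * (1.0 - 2.0 * x)))   # log |f'(x)| along the orbit
    x = r * x * (1.0 - x)

bits_per_step = total / n / log(2.0)  # Lyapunov exponent in bits per iteration
print(bits_per_step)                  # close to 1, since lambda = ln 2 here
```

So "how many bits does the system eat per time unit" is literally the Lyapunov exponent measured in base 2.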

------
hipsquare
Or: you can approximate anything with enough linear regression
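
In that spirit: project onto enough fixed random features and a single linear regression does fit a nonlinear curve. The feature count, scales, and target are picked arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x)

# 100 fixed random tanh features; the only "learning" is one linear regression
Wf = rng.normal(size=100)
bf = rng.uniform(-3.0, 3.0, size=100)
Phi = np.tanh(np.outer(x, Wf) + bf)

w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
err = np.max(np.abs(Phi @ w - y))
print(err)
```

Which is essentially the same trick the echo state networks mentioned above play, minus the recurrence.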

------
naugtur
It's not magic, so is it safe to assume the ANN is quickly narrowing down the
search space by eliminating the possibilities that weren't showing up as
patterns in the training set, therefore limiting the search to a finite
subset of the infinite set of possible behaviors of the system?

If the chaos produces a new kind of behavior, the result of the ANN may be
totally wrong. In other words: it works well, often.

Is my simplistic thinking right?

------
andrewcooke
i need to go out, so this is just off the top of my head, but isn't this
really just some kind of efficient approximation? in which case (1) can it be
extended to other problems trivially (so that we have an optimization library
that we can 'plug in' to arbitrary numerical problems) and (2) how good is it
at extrapolating?

