
Machine Learning’s ‘Amazing’ Ability to Predict Chaos - adenadel
https://www.quantamagazine.org/machine-learnings-amazing-ability-to-predict-chaos-20180418/
======
skywhopper
Don't be too seduced by the enticing ideas at the end of the article. The
disconnect here is that success in learning how to predict the results of an
algorithmic simulation is not really indicative of how it would perform with
the decidedly non-algorithmic natural behavior of weather or earthquakes,
phenomena which don't operate in a closed system with predefined limits and
parameters. It _sounds_ like the next step, but even if weather were reducible
to machine-discoverable patterns, you first must face the need to collect an
immense amount of high-resolution condition data from around the globe on an
ongoing basis. I'm guessing our current mesh of satellites and weather
stations, as impressive as it is, comes nowhere close to enough raw data to
sufficiently feed the beast in this case. And even _that_ assumes that we
_know about_ and can measure all of the factors that go into weather patterns.
I sincerely doubt that's the case.

~~~
contravariant
It's actually kind of unclear what it means to predict results from an
algorithmic simulation. Surely that's just trying to use a different method to
simulate the process?

I also don't see them mentioning anything about noise, so what exactly are
they trying to achieve? Surely perfect prediction should already be possible,
otherwise what are they comparing the output to?

~~~
joe_the_user
Yeah, they're predicting the outcome of a chaotic but deterministic process
based on the output data given.

Surely they're discovering hidden (or obvious) regularities based specifically
on the process being deterministic.

As a counterexample, the Smale horseshoe [1] is an inherently
unpredictable map - it's approximately equivalent to choosing a real number
"at random" and attempting to "predict" each digit as you find it. Sure, if
some implementation has chosen a number with repeating digits, you can
"predict" those digits, but this would have nothing to do with being an oracle
for the algorithm itself.

More broadly, I think it's well established that any realistic model of
weather is an extension of this concept - it depends unstably on its
initial conditions (small changes in the initial conditions produce large
changes in the final result after some period).

[1]
[https://en.wikipedia.org/wiki/Horseshoe_map](https://en.wikipedia.org/wiki/Horseshoe_map)
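The shift-map picture can be sketched in a few lines (a toy illustration of the point above, not anything from the paper): the doubling map x -> 2x mod 1 emits the binary digits of its initial condition one per step, so predicting its orbit is exactly predicting the digits of an arbitrary real number.

```python
# Doubling map: x -> 2x mod 1. Each iterate's integer bit is the next
# binary digit of the initial condition, so the orbit is exactly as
# predictable as the digit sequence of x0 itself.
def doubling_orbit_bits(x0, n):
    bits = []
    x = x0
    for _ in range(n):
        x *= 2.0
        bit = int(x)        # 1 if x >= 1, else 0
        bits.append(bit)
        x -= bit            # keep the fractional part
    return bits

# x0 = 0.101100... in binary (0.6875 in decimal)
print(doubling_orbit_bits(0.6875, 6))  # -> [1, 0, 1, 1, 0, 0]
```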

~~~
evrydayhustling
This interpretation sounds right to me. I found it cool that the article
characterizes success in terms of prediction accuracy out to N "Lyapunov
times", a concept that is supposed to incorporate the unpredictability of the
system. So perhaps discovering these regularities proves that in this
implementation, the true Lyapunov time is longer.
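For a concrete sense of a "Lyapunov time", here is a toy estimate (my own sketch, not from the article) for the logistic map x -> rx(1-x): average log|f'(x)| along an orbit to get the largest Lyapunov exponent λ; then 1/λ is the time scale over which a prediction error grows by a factor of e.

```python
import math

def logistic_lyapunov(r=4.0, x0=0.3, n=100_000, burn=1_000):
    """Estimate the largest Lyapunov exponent of x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along an orbit."""
    x = x0
    for _ in range(burn):              # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam = logistic_lyapunov()
print(f"Lyapunov exponent ~ {lam:.3f}")      # theory: ln 2 ~ 0.693 at r = 4
print(f"Lyapunov time ~ {1 / lam:.2f} steps")
```

Predicting N Lyapunov times ahead therefore means staying accurate while errors would naively grow by a factor e^N.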

------
jimmytidey
Can we have more publishing like Quanta magazine? It's just about perfect in
terms of reporting from the frontier, being easy to understand, and not
talking down to you.

~~~
greeneggs
It's funded by Jim Simons, of the hedge fund Renaissance Technologies. A
mathematician with money, trying to promote mathematics. Quanta magazine is a
fantastic service.

But the other Renaissance magnate, Bob Mercer, as his own media hobby funded
Breitbart News.

~~~
internetman55
Yeah, it's kind of sad to me. Quanta is amazing, but I'm not sure it could
reasonably be expected to exist without Simons, or someone like him, privately
funding it.

~~~
mkempe
Why do you feel _sad_ about the fact that an individual who values science is
"privately funding" a great publication? Surely you don't object to private
wealth and/or activity?

~~~
intended
Being dependent on the whim of a wealthy individual is saddening.

~~~
mkempe
Could one be, instead, appreciative of and grateful to the considered,
generous decision of that individual? How do you think _he_ would respond to
someone expressing one kind of feeling or the other?

~~~
fjsolwmv
I assume that he became a multibillionaire in part by not troubling himself
with the opinions of nobodies.

------
downandout
It would be fascinating to apply this technique to card shuffling. Shuffling
techniques are not even close to random; casinos hope for chaos at best. It
would be interesting to see whether a machine learning algorithm could come up
with an approach, executable by humans, to predict whether the _next_ shoe of
a hand-shuffled blackjack game is going to have a positive or negative
expectation for the player.

~~~
curuinor
You can card count on apps already. The repeated linear operator +
nonlinearity of the ESN (the reservoir) could implement a card-counting memory
but, you know, so could a card counting memory in a normal program.
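The reservoir loop curuinor describes can be sketched as a toy echo state network (the sizes, spectral radius, leak rate, and ridge penalty below are illustrative guesses, not the paper's setup): a fixed random recurrent layer driven by the input, with only a linear readout trained by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy echo state network. All hyperparameters here are illustrative.
N_RES, LEAK, RIDGE = 200, 0.5, 1e-6

W_in = rng.uniform(-0.5, 0.5, size=(N_RES, 1))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # set spectral radius to 0.9

def run_reservoir(u):
    """Drive the fixed random reservoir with input series u; collect states."""
    x = np.zeros(N_RES)
    states = np.empty((len(u), N_RES))
    for t, ut in enumerate(u):
        x = (1 - LEAK) * x + LEAK * np.tanh(W @ x + W_in[:, 0] * ut)
        states[t] = x
    return states

# Train only the linear readout to predict one step ahead on a noisy sine.
ts = np.linspace(0, 60, 3000)
u = np.sin(ts) + 0.01 * rng.normal(size=ts.size)
X, y = run_reservoir(u[:-1]), u[1:]
W_out = np.linalg.solve(X.T @ X + RIDGE * np.eye(N_RES), X.T @ y)
rmse = np.sqrt(np.mean((X @ W_out - y) ** 2))
print("one-step RMSE:", rmse)
```

The point of the design is that the recurrent weights are never trained; all the learning happens in one linear solve for `W_out`.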

~~~
downandout
Oh, I wasn't talking about card counting. I was talking about being able to
determine, knowing the order of the cards going into the shuffle, whether the
next shoe (after the shuffle) will have an overall positive or negative
expectation. The theoretical expectation, averaged over every possible order
of the cards, is slightly negative. However, if a massive fraction of the
possible orderings is eliminated, the expectation may be much more positive
or negative than the overall theoretical return.
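One way to see how far from random hand shuffles are (a toy simulation, not tied to any real casino procedure) is the Gilbert-Shannon-Reeds riffle model together with the Bayer-Diaconis rising-sequence statistic: a uniformly random 52-card deck averages about 26.5 rising sequences, while k riffles of a sorted deck can produce at most 2^k.

```python
import random

def gsr_riffle(deck, rng):
    """One Gilbert-Shannon-Reeds riffle: binomial cut, then drop cards
    with probability proportional to the remaining packet sizes."""
    n = len(deck)
    cut = sum(rng.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2) cut
    left, right = deck[:cut], deck[cut:]
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        nl, nr = len(left) - i, len(right) - j
        if rng.random() < nl / (nl + nr):
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

def rising_sequences(deck):
    """Bayer-Diaconis statistic: maximal increasing runs of consecutive
    card values. A sorted deck has 1; a random 52-deck averages ~26.5."""
    pos = {card: k for k, card in enumerate(deck)}
    return 1 + sum(pos[c + 1] < pos[c] for c in range(len(deck) - 1))

rng = random.Random(1)
deck = list(range(52))
for k in range(1, 6):
    deck = gsr_riffle(deck, rng)
    print(f"after {k} riffles: {rising_sequences(deck)} rising sequences")
```

After five riffles the deck still carries at most 32 rising sequences, i.e. measurable structure a predictor could exploit; this is the basis of the classic "seven shuffles" result.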

------
shas3
Novel paper. But it seems like a lot of the excitement comes from the fusion
of two buzzwords, one from the 80s and 90s and one from the 2010s. The output
here comes from a fairly simple dynamical system - chaotic, certainly, but
still simple. Deep learning (and more generally recurrent neural nets, LSTMs,
and derivatives thereof) has been shown capable of learning much more complex
nonlinear systems, including human perception. Given this, I think the OP
paper is low-hanging fruit: it was only a matter of time before someone
figured out a way to learn specific chaotic nonlinear dynamical systems. Nice
work nonetheless.

~~~
IAmEveryone
I'm not sure if human perception is a chaotic system.

Chaos is defined as "small change in input -> large change in output".

Perception is actually the opposite, with small input changes (changes in
light, different angles,...) leading to fundamentally unchanged perception
("It's a tree").

~~~
jessriedel
First, sensitivity to initial conditions is a necessary but not sufficient
property for chaos. Some sort of folding/mixing is also necessary, which can
be guaranteed by a bounded state space.

Second, it's definitely true that information processing systems, if they are
to be reliable enough to be useful, are not going to be chaotic throughout
their state space. They need to return the same output given a certain input.
But I'd imagine there are also at least a few noisy regions.

------
d--b
Doesn't that mean that the neural net taught itself a numerical method for
computing the solution of the equation, one whose approximation stays close
enough for about 7 Lyapunov times, after which the approximation degrades and
the system can no longer predict?

It doesn't sound like too groundbreaking...

~~~
ppod
I think it's blurring the line between modelling and simulation. You can find
an efficient route down a hillside by pouring water down it, or a line of
least resistance through a system by passing a current through it. Is the
system learning a numerical solution, or performing one? I think this is like
building a model of a system that is much closer to the territory than the map
compared to a normal model, but still easier to work with than the territory.

~~~
aeorgnoieang
> Is the system learning a numerical solution, or performing one?

Is there a real difference? Any learning must be, fundamentally, an algorithm,
so learning _is_ performing.

~~~
ppod
This is a deep topic, but one good treatment of it is in the works of David
Deutsch:
[https://www.cs.indiana.edu/~dgerman/hector/deutsch.pdf](https://www.cs.indiana.edu/~dgerman/hector/deutsch.pdf)

------
sgillen
Very cool! This looks like a promising way to improve weather forecasts,
predict wildfire movements (maybe?), and perhaps even control chaotic systems
(among other uses, of course).

It feels like every day I'm seeing deep learning make more mathematical tools
obsolete. It's amazing how useful this tool has been.

~~~
freeone3000
When we started stacking layers and added nonlinear activations, neural nets
gained the ability to approximate any function, given enough time and data.
Those are big caveats.

RL can function with a lot less data. SVMs can run with a lot less time and
space. Partial function application and expert modelling reduce the data
needed, and some of the best results come from ensemble suites. It's not
making anything obsolete; it's another tool in the box.

------
dangerlibrary
I was really hoping this was about weather forecasting.

~~~
nolite
It can be!

~~~
fmihaila
From the article:

“This paper suggests that one day we might be able perhaps to predict weather
by machine-learning algorithms and not by sophisticated models of the
atmosphere,” Kantz said.

------
jonplackett
If there's an algorithm to create this 'randomness' then how can it be all
that random?

Isn't it just learning to make an approximation of that algorithm, rather than
actually predicting chaos?

~~~
empath75
It’s working with imperfect information about the current state, which makes
the algorithm that generates it useless.

------
leot
Can anyone speak to whether PRNGs might be affected?

~~~
folli
If you know the seed and the algorithm used, you can already "predict" the
pseudo-random sequence perfectly.
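A two-line sketch of that point: given the algorithm and the seed, the "random" stream is fully reproducible, so there is nothing left for a learner to discover.

```python
import random

# Two generators with the same seed emit identical streams:
# knowing (algorithm, seed) already constitutes perfect "prediction".
a = random.Random(42)
b = random.Random(42)
stream_a = [a.randint(0, 9) for _ in range(10)]
stream_b = [b.randint(0, 9) for _ in range(10)]
print(stream_a == stream_b)  # -> True
```

Without the seed, attacking a PRNG means recovering its internal state from outputs; cryptographic generators are designed to make exactly that infeasible.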

------
Swanlyk2
Why wouldn't it? A lot of misunderstanding out there in both machine learning
and nonlinear dynamics.

------
amelius
I'm curious, has anyone tried to apply ML to solving large _linear_ systems?

~~~
freeone3000
Yes. Vladimir Vapnik's work on SVMs (1995) is probably the best-known, but
linear regression is an ML technique as well.

------
lewis500
I want to know how this does with traffic. Pretty remarkable.

~~~
joshuaheard
I'd like to apply it to the stock market.

~~~
aatchb
Ergodicity becomes a problem here. The nonlinear systems typically studied by
complexity and chaos theorists have strong fixed rules that do not change in
time. Markets have systematic and structural changes which can make prior
observations completely irrelevant.

------
plg
even after cross-validation?

------
XnoiVeX
Isn't deterministic-chaos an oxymoron?

~~~
imh
The specific kind of chaos here implies (roughly) that a system starting in
state X and a system starting arbitrarily close to X will at some point in the
future behave totally differently. Not different as in "they'll drift further
apart," but different as in "this one rolls off the cliff and this one
doesn't."

It's a fascinating field of dynamical systems, which is one of the coolest
subfields of math (since the world is dynamic). Here's the wikipedia page
[https://en.wikipedia.org/wiki/Chaos_theory](https://en.wikipedia.org/wiki/Chaos_theory)
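That "arbitrarily close yet totally different" behavior shows up in a few lines with the logistic map (a standard textbook example, not from the article): two starting points 1e-12 apart disagree completely within a few dozen steps.

```python
# Sensitive dependence: iterate x -> 4x(1-x) from two starts 1e-12 apart.
def orbit(x, n):
    xs = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = orbit(0.2, 60)
b = orbit(0.2 + 1e-12, 60)
gaps = [abs(p - q) for p, q in zip(a, b)]
print(f"gap at step 10: {gaps[9]:.1e}")        # still tiny
print(f"max gap over 60 steps: {max(gaps):.2f}")  # order of the attractor
```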

------
joshuamorton
>when the powerful algorithm known as “deep learning”

Quanta is normally better than this (and the rest of the article is decent).
But still :(

