
Embarrassingly Parallel Time Series Analysis for Large Scale Weak Memory Systems - zeit_geist
http://arxiv.org/abs/1511.06493
======
pmontra
I skimmed through the paper. It's packed with formulas and technical details.
I can't judge if it's a relevant contribution to the subject but... the title.
Do they need that to be noticed? Just imagine "Embarrassingly Fast
Electromagnetic Waves but not any Faster" instead of "On the Electrodynamics
of Moving Bodies" (Zur Elektrodynamik bewegter Körper)
[http://.ca/wiki/Zur_Elektrodynamik_bewegter_Körper](http://.ca/wiki/Zur_Elektrodynamik_bewegter_Körper)

~~~
sethev
"Embarrassingly parallel" isn't a term that they invented:
[https://en.wikipedia.org/wiki/Embarrassingly_parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel)

~~~
pmontra
Thank you and my upvote. It smelled of clickbait but it wasn't. Apologies to
the authors.

------
SFjulie1
Isn't this lengthy paper just saying that a map-reduce over consecutive
data that is hot and localized in memory works all the better when the
operations are commutative/distributive? That's trivial.

Well, I would have preferred a first-year student demonstrating the trivial
facts that operations like sorting multidimensional vectors allow, by their
nature, fewer optimizations on CPUs and GPUs, and that non-linear operations
require reading the whole sample to get a non-random, quantifiable error.

Hence, map-reduce performance is good for combinations of distributive linear
operations (like ARMA, and I still wonder about Hilbertian geometry) and
horrible for non-linear mapped functions (median filter, nth percentile, sort,
top X).

